Original article: http://blog.csdn.net/feixia586/article/details/24950111?utm_source=tuicool&utm_medium=referral
The installation instructions on the official Hadoop website are too sketchy. In this post I will describe in detail how to install Hadoop on Ubuntu and how to handle some problems that may come up along the way. The method described here uses a single machine to act as multiple virtual nodes, and it has been tested in the following environment:
OS: Ubuntu 13.10
Hadoop: 2.2.0 (2.x.x)
I believe the procedure for installing Hadoop 2.x.x is basically the same on other versions as well, so if you follow the steps below exactly, you should not run into problems.
Prerequisites
Install the JDK and openssh
$ sudo apt-get install openjdk-7-jdk
$ java -version
java version "1.7.0_55"
OpenJDK Runtime Environment (IcedTea 2.4.7) (7u55-2.4.7-1ubuntu1~0.13.10.1)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
$ sudo apt-get install openssh-server
The default path of OpenJDK is /usr/lib/jvm/java-7-openjdk-amd64. If your default path is different from mine, substitute yours in the steps that follow.
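If you are not sure where your JDK lives, one quick way to check (this check is my addition) is:
$ readlink -f /usr/bin/java
This prints something like /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java; everything before /jre/bin/java is the path to use as JAVA_HOME.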
Add a Hadoop group and user
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
Then switch to the hduser account.
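One way to switch (this command is my addition; the leading dash gives hduser a clean login shell):
$ su - hduser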
Configure SSH
Add your public key to authorized_keys so that Hadoop does not need to enter a password when it runs ssh.
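A typical way to generate the key pair and authorize it (assuming the default key location ~/.ssh/id_rsa and an empty passphrase):
$ ssh-keygen -t rsa -P ""
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys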
Now let's test ssh:
$ ssh localhost
$ exit
Download Hadoop 2.2.0 (2.x.x)
$ wget http://www.trieuvan.com/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
$ sudo tar -xzvf hadoop-2.2.0.tar.gz -C /usr/local
$ cd /usr/local
$ sudo mv hadoop-2.2.0 hadoop
$ sudo chown -R hduser:hadoop hadoop
$ vim ~/.bashrc
Copy the following into .bashrc:
#Hadoop variables
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
### end of paste
Save and exit .bashrc.
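If you prefer not to reopen the terminal right away, you can load the new variables into the current shell:
$ source ~/.bashrc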
$ cd /usr/local/hadoop/etc/hadoop
$ vim hadoop-env.sh
# begin of paste
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
### end of paste
Close the terminal and reopen it so that the new environment variables take effect.
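As a quick sanity check that the PATH changes took effect (this check is my addition), run:
$ hadoop version
It should print a banner identifying Hadoop 2.2.0.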
$ cd /usr/local/hadoop/etc/hadoop
$ vim core-site.xml
Paste the following between the <configuration></configuration> tags:
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>
$ vim yarn-site.xml
Paste the following between the <configuration></configuration> tags:
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
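Hadoop 2.2.0 ships only a template for the MapReduce configuration, so you may need to create the file first (this cp step is my addition):
$ cp mapred-site.xml.template mapred-site.xml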
$ vim mapred-site.xml
Paste the following between the <configuration></configuration> tags:
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
$ mkdir -p ~/mydata/hdfs/namenode
$ mkdir -p ~/mydata/hdfs/datanode
$ vim hdfs-site.xml
Paste the following between the <configuration></configuration> tags:
<property>
   <name>dfs.replication</name>
   <value>1</value>
</property>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
Format the namenode
$ hdfs namenode -format
Start the Hadoop services
$ start-dfs.sh
$ start-yarn.sh
Check that everything is running with jps:
$ jps
You should see output similar to:
17785 SecondaryNameNode
17436 NameNode
17591 DataNode
18096 NodeManager
17952 ResourceManager
23635 Jps
When you run start-dfs.sh, you may see WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable. Don't worry: Hadoop still works normally. We will come back to this issue in the Trouble-shooting section.
Test and run the examples
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -write -nrFiles 20 -fileSize 10
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -clean
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
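As an extra sanity check (my addition; the file names here are arbitrary examples), you can also exercise HDFS directly:
$ hdfs dfs -mkdir -p /user/hduser
$ hdfs dfs -put /etc/hosts /user/hduser/
$ hdfs dfs -ls /user/hduser
$ hdfs dfs -rm /user/hduser/hosts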
Web interfaces
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
Trouble-shooting
1. Unable to load native-hadoop library for your platform.
This is a warning and basically does not affect using Hadoop, but below we will still give a way to get rid of it. Generally speaking, this warning appears because you are on a 64-bit system while the Hadoop package was compiled for 32-bit machines. In that case, make sure you did not forget to add these lines to hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
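To confirm whether the bundled native library really is 32-bit (this check is my addition; the path assumes the install location used above), inspect it with file:
$ file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
If it reports ELF 32-bit on your 64-bit system, that is the source of the warning.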
But what if we do not want to see the warning at all? The solution is to recompile the Hadoop source code yourself, which is actually quite simple:
Install maven
$ sudo apt-get install maven
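You can confirm the installation with:
$ mvn -version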
Install protobuf 2.5.0 (the Hadoop 2.2.0 build expects protoc 2.5.0; newer versions may fail its version check)
$ curl -# -O https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
$ tar -xzvf protobuf-2.5.0.tar.gz
$ cd protobuf-2.5.0
$ ./configure --prefix=/usr
$ make
$ sudo make install
$ cd ..
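Check that the freshly installed protoc is the one on your PATH:
$ protoc --version
It should report libprotoc 2.5.0.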
Now compile the Hadoop source code. Note that you need to apply a patch to the source before compiling:
$ wget http://www.eu.apache.org/dist/hadoop/common/stable/hadoop-2.2.0-src.tar.gz
$ tar -xzvf hadoop-2.2.0-src.tar.gz
$ cd hadoop-2.2.0-src
$ wget https://issues.apache.org/jira/secure/attachment/12614482/HADOOP-10110.patch
$ patch -p0 < HADOOP-10110.patch
$ mvn package -Pdist,native -DskipTests -Dtar
Now go to the hadoop-dist/target/ directory. You will find hadoop-2.2.0.tar.gz and hadoop-2.2.0 there; these are the compiled Hadoop packages. You can use your self-compiled package to install the 64-bit Hadoop by following the same steps as before. If you have already installed the 32-bit Hadoop, you only need to replace the /usr/local/hadoop/lib/native directory and then remove the following two lines from hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
2. The datanode fails to start
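In a single-node setup like this one, the most common cause (this diagnosis and fix are my addition) is a clusterID mismatch after the namenode has been reformatted while the datanode still holds data from the old namespace. Look for an Incompatible clusterIDs error in the datanode log under /usr/local/hadoop/logs. A typical fix, assuming the data directories configured above, is to wipe the datanode directory and reformat; note that this destroys all data stored in HDFS:
$ stop-dfs.sh
$ rm -r ~/mydata/hdfs/datanode/*
$ hdfs namenode -format
$ start-dfs.sh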
References
http://www.csrdu.org/nauman/2014/01/23/geting-started-with-hadoop-2-2-0-building/
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
http://www.ercoppa.org/Linux-Install-Hadoop-220-on-Ubuntu-Linux-1304-Single-Node-Cluster.htm
http://www.ercoppa.org/Linux-Compile-Hadoop-220-fix-Unable-to-load-native-hadoop-library.htm
