Installing the JDK
1. Download jdk1.6.0_20 (the downloaded file is jdk-6u20-linux-i586.bin).
2. Change into the directory containing the JDK file. Executing a .bin file requires the execute permission, so run chmod u+x jdk-6u20-linux-i586.bin to give the current user permission to execute it.
3. In the same directory, run sudo ./jdk-6u20-linux-i586.bin to install the JDK.
4. Run java -version to verify that the JDK installed successfully; on success it prints the Java version information.
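The verification step can be sketched as a small script: it checks that java is on the PATH and pulls the version number out of the first line of java -version output (which goes to stderr, not stdout). The fallback sample line is made up for illustration; real output varies by JVM vendor.

```shell
# A minimal sketch, assuming a POSIX shell. `java -version` prints to
# stderr, hence the 2>&1 redirect. The fallback line is a made-up sample.
if command -v java >/dev/null 2>&1; then
    line=$(java -version 2>&1 | head -n 1)
else
    line='java version "1.6.0_20"'    # illustrative sample only
fi
version=$(printf '%s\n' "$line" | sed 's/.*"\([^"]*\)".*/\1/')
echo "JDK version: $version"
```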
Installing Hadoop
5. Download Hadoop 1.0.4 (the downloaded file is hadoop-1.0.4.tar.gz) from the Apache distribution site: http://www.apache.org/dist/hadoop/core/;
6. In the directory containing the archive, run tar -zxvf hadoop-1.0.4.tar.gz to install Hadoop (here "installing" just means unpacking; the extracted tree contains the shell scripts and other files).
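The unpack-is-install idea can be demonstrated without the real archive; the sketch below builds a tiny throwaway tarball and extracts it the same way (demo-1.0 is a made-up name standing in for hadoop-1.0.4).

```shell
# Throwaway demo of the unpack step; demo-1.0 stands in for hadoop-1.0.4.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
mkdir -p demo-1.0/bin
printf '#!/bin/sh\n' > demo-1.0/bin/start-all.sh
tar -czf demo-1.0.tar.gz demo-1.0     # pack, like the downloaded archive
rm -r demo-1.0
tar -zxvf demo-1.0.tar.gz             # -z gunzip, -x extract, -v verbose, -f file
ls demo-1.0/bin                       # the unpacked tree holds the shell scripts
```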
Installing and Configuring SSH
7. Run apt-get install openssh-server to install SSH.
8. Configure SSH so that you can log in without a password. The commands are as follows:
ssh-keygen -t dsa -P ''
Accept the default key location when prompted. After the command finishes, two files appear under /root/.ssh: id_dsa and id_dsa.pub. Then, from inside /root/.ssh, append the public key to the authorized list:
cat ./id_dsa.pub >> authorized_keys
然后執行ssh localhost查看是否可以無密碼登錄,結果出現錯誤:The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 40:2b:3c:88:b0:34:f9:cd:6d:15:b8:7b:c4:f7:02:f9.
Are you sure you want to continue connecting (yes/no)?
This prompt is not actually an error: it is SSH's normal host-key confirmation on the first connection to a host. Answering yes records the host key in /root/.ssh/known_hosts, which is why that file appears afterwards. If ssh localhost still asks for a password, the usual cause is wrong permissions on the key files; fix them with:
chmod 700 /root/.ssh
chmod 644 /root/.ssh/authorized_keys
After that, running ssh localhost logs in without a password.
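The permission expectations can be sketched against a scratch directory, so nothing under the real /root/.ssh is touched; note that stat -c is GNU coreutils syntax.

```shell
# Apply the permissions sshd expects; demonstrated on a scratch directory
# (the real target is /root/.ssh). stat -c is GNU coreutils syntax.
sshdir="$(mktemp -d)/.ssh"
mkdir -p "$sshdir"
touch "$sshdir/authorized_keys"
chmod 700 "$sshdir"                   # only the owner may enter the directory
chmod 644 "$sshdir/authorized_keys"   # owner read/write, everyone else read-only
stat -c '%a %n' "$sshdir" "$sshdir/authorized_keys"
```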
Configuring Hadoop
9. Change into the Hadoop installation directory (hadoop-1.0.4/ in my case) and run ls: the conf folder holds the configuration files, and the bin folder holds the executable scripts.
10. Edit conf/hadoop-env.sh and add (or uncomment) the line export JAVA_HOME=/opt/jdk1.6.0_20, which is the JDK path on my machine. If you are not sure where the JDK is installed, run which java to locate the java binary.
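Since which java often points at a symlink rather than the real install directory, one way to recover a JAVA_HOME value is to resolve the link and strip the trailing bin/java. The sketch below demonstrates this on a fake JDK layout under a temp directory, so it does not assume a JDK is installed; with a real JDK you would start from "$(which java)" instead of the fake path.

```shell
# Fake JDK layout under a temp dir; with a real JDK, start from $(which java).
root=$(mktemp -d)
mkdir -p "$root/opt/jdk1.6.0_20/bin"
touch "$root/opt/jdk1.6.0_20/bin/java"
ln -s "$root/opt/jdk1.6.0_20/bin/java" "$root/java"   # stand-in for /usr/bin/java
real=$(readlink -f "$root/java")          # resolve the symlink chain (GNU readlink)
java_home=$(dirname "$(dirname "$real")") # strip trailing /bin/java
echo "JAVA_HOME=$java_home"
```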
11. Edit conf/core-site.xml so that its contents are:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
12. Edit conf/hdfs-site.xml so that its contents are:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
13. Edit conf/mapred-site.xml so that its contents are:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
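Config files like the three above can also be written from the shell with here-documents. A sketch for core-site.xml, using a scratch directory in place of the real hadoop-1.0.4/conf:

```shell
# Write core-site.xml via a here-document into a scratch conf directory;
# point confdir at the real hadoop-1.0.4/conf to use this for real.
confdir=$(mktemp -d)
cat > "$confdir/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
grep -c '<property>' "$confdir/core-site.xml"   # prints 1
```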
Once all of the above is done, change into hadoop-1.0.4/bin, where the startup shell scripts live. On the very first start, format HDFS with ./hadoop namenode -format; then start Hadoop with sh ./start-all.sh. Let's see if it runs ^o^
Pasting the run output here as a small keepsake o(∩∩)o
starting namenode, logging to /root/jz/hadoop-1.0.4/libexec/../logs/hadoop-root-namenode-lxh-ThinkPad-Edge.out
localhost: starting datanode, logging to /root/jz/hadoop-1.0.4/libexec/../logs/hadoop-root-datanode-lxh-ThinkPad-Edge.out
localhost: starting secondarynamenode, logging to /root/jz/hadoop-1.0.4/libexec/../logs/hadoop-root-secondarynamenode-lxh-ThinkPad-Edge.out
starting jobtracker, logging to /root/jz/hadoop-1.0.4/libexec/../logs/hadoop-root-jobtracker-lxh-ThinkPad-Edge.out
localhost: starting tasktracker, logging to /root/jz/hadoop-1.0.4/libexec/../logs/hadoop-root-tasktracker-lxh-ThinkPad-Edge.out
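Beyond reading the startup logs, a common check is that jps (shipped with the JDK) lists all five Hadoop 1.x daemons. The sketch below runs the check against a hard-coded sample of jps-style output, since a live cluster is not assumed; the PIDs are made up.

```shell
# Check for the five Hadoop 1.x daemons in jps-style output. The sample
# below is hard-coded and its PIDs are made up; with a running cluster,
# replace it with the real output of `jps`.
sample='12001 NameNode
12102 DataNode
12203 SecondaryNameNode
12304 JobTracker
12405 TaskTracker'
missing=0
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    printf '%s\n' "$sample" | grep -q " $d$" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons running"   # prints "all daemons running"
```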
