Step-by-step instructions for running Hadoop's built-in WordCount example


1. In the directory where Hadoop is installed, /usr/local, create a folder named input

root@ubuntu:/usr/local# mkdir input

2. In the input folder, create two text files, file1.txt and file2.txt. file1.txt contains "hello word"; file2.txt contains "hello hadoop" and "hello mapreduce" on two separate lines.

root@ubuntu:/usr/local# cd input
root@ubuntu:/usr/local/input# echo "hello word" > file1.txt
root@ubuntu:/usr/local/input# echo "hello hadoop" > file2.txt
root@ubuntu:/usr/local/input# echo "hello mapreduce" >> file2.txt   (note the >>: a single > would overwrite the "hello hadoop" line already in file2.txt; alternatively, edit the file with gedit)
root@ubuntu:/usr/local/input# ls
file1.txt file2.txt

To display the file contents:

root@ubuntu:/usr/local/input# more file1.txt
hello word
root@ubuntu:/usr/local/input# more file2.txt
hello hadoop
hello mapreduce

3. Create an input folder wc_input on HDFS, and upload the two text files from the local input folder to wc_input on the cluster

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -mkdir wc_input

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -put /usr/local/input/file* wc_input

View the files in wc_input (relative HDFS paths resolve under /user/root, as the listing shows):

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -ls wc_input
Found 2 items
-rw-r--r-- 1 root supergroup 11 2014-03-13 01:19 /user/root/wc_input/file1.txt
-rw-r--r-- 1 root supergroup 29 2014-03-13 01:19 /user/root/wc_input/file2.txt

4. Start all Hadoop daemons and verify they are running:

 

root@ubuntu:/# ssh localhost   (verifies that passwordless login to localhost works; on success you should see the message below. Otherwise, set it up first; see http://blog.csdn.net/joe_007/article/details/8298814 for the steps)

Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-24-generic-pae i686)

* Documentation: https://help.ubuntu.com/

Last login: Mon Mar 3 04:44:23 2014 from localhost

root@ubuntu:~# exit
logout
Connection to localhost closed.

 

root@ubuntu:/usr/local/hadoop-1.2.1/bin# ./start-all.sh

starting namenode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-ubuntu.out

root@ubuntu:/usr/local/hadoop-1.2.1/bin# jps
7847 SecondaryNameNode
4196
7634 DataNode
7423 NameNode
8319 Jps
7938 JobTracker
8157 TaskTracker
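All five daemons listed above (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) must appear in the jps output before running the job. As an illustrative sketch (not part of Hadoop itself), a few lines of Python can check a jps listing for missing daemons:

```python
# Sanity-check a `jps` listing for the five Hadoop 1.x daemons.
# The sample string below is the transcript shown above.
EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode",
            "JobTracker", "TaskTracker"}

def missing_daemons(jps_output: str) -> set:
    """Return the expected daemon names absent from `jps` output."""
    running = set()
    for line in jps_output.strip().splitlines():
        parts = line.split(None, 1)  # "PID Name"; some JVMs print only a PID
        if len(parts) == 2:
            running.add(parts[1])
    return EXPECTED - running

sample = """7847 SecondaryNameNode
4196
7634 DataNode
7423 NameNode
8319 Jps
7938 JobTracker
8157 TaskTracker"""

print(missing_daemons(sample))  # set() -> all five daemons are up
```

If the returned set is non-empty, check the corresponding log files under /usr/local/hadoop-1.2.1/logs before proceeding.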

Run Hadoop's built-in wordcount example jar (note: before re-running, you must first delete the previous run's output folder, e.g. with bin/hadoop fs -rmr wc_output)

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop jar ./hadoop-examples-1.2.1.jar wordcount wc_input wc_output
14/03/13 01:48:40 INFO input.FileInputFormat: Total input paths to process : 2
14/03/13 01:48:40 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/13 01:48:40 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/13 01:48:42 INFO mapred.JobClient: Running job: job_201403130031_0001
14/03/13 01:48:44 INFO mapred.JobClient: map 0% reduce 0%
14/03/13 01:52:47 INFO mapred.JobClient: map 50% reduce 0%
14/03/13 01:53:50 INFO mapred.JobClient: map 100% reduce 0%
14/03/13 01:54:14 INFO mapred.JobClient: map 100% reduce 100%

... ...

5. View the output folder

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -ls wc_output
Found 3 items
-rw-r--r-- 1 root supergroup 0 2014-03-13 01:54 /user/root/wc_output/_SUCCESS
drwxr-xr-x - root supergroup 0 2014-03-13 01:48 /user/root/wc_output/_logs
-rw-r--r-- 1 root supergroup 36 2014-03-13 01:54 /user/root/wc_output/part-r-00000   (the actual word counts are in part-r-00000)

6. View the contents of the output file part-r-00000

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -cat /user/root/wc_output/part-r-00000
hadoop 1
hello 3
mapreduce 1
word 1
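For reference, the counting that WordCount performs can be sketched in plain Python (an illustrative sketch only, not the actual Java source inside hadoop-examples-1.2.1.jar): the map phase splits each line into (word, 1) pairs, and the shuffle/reduce phase sums the counts per word.

```python
from collections import Counter

def word_count(files):
    """Count words across a list of file contents.

    Map phase: split each line into words; reduce phase: sum per-word
    counts. Counter handles both the grouping and the summing here.
    """
    counts = Counter()
    for text in files:
        for line in text.splitlines():
            counts.update(line.split())
    return dict(counts)

# The two input files from step 2:
file1 = "hello word"
file2 = "hello hadoop\nhello mapreduce"
print(sorted(word_count([file1, file2]).items()))
# [('hadoop', 1), ('hello', 3), ('mapreduce', 1), ('word', 1)]
```

The result matches the contents of part-r-00000 above; on the cluster, the same summation is distributed across map and reduce tasks.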

7. Stop all daemons

root@ubuntu:/usr/local/hadoop-1.2.1/bin# ./stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

