Hadoop 2.7.1 Installation and Deployment


Operating system: Red Hat Enterprise Linux Server release 6.2 (Santiago)

Hadoop version: hadoop-2.7.1

Three Red Hat Linux hosts with IPs 10.204.16.57-59; 59 is the master, 57 and 58 are the slaves.

JDK version: jdk-7u79-linux-x64.tar.gz

I. Environment Preparation

1. Configure hostnames

Set the hostname on each machine.
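On RHEL 6 the hostname lives in two places, the running system and a config file; a minimal sketch for the master (run as root, substituting slave7/slave8 on the other two hosts):

hostname master                 # takes effect immediately for the current session
vim /etc/sysconfig/network      # set HOSTNAME=master so it survives a reboot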

Edit the hosts file: vim /etc/hosts

Append the following at the end of the file:
10.204.16.59 master
10.204.16.58 slave8
10.204.16.57 slave7
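The same three entries should be present on all three hosts. A quick check that name resolution works (a sketch, run from any of the hosts):

ping -c 1 master
ping -c 1 slave7
ping -c 1 slave8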

2. Set up passwordless SSH login

1) Create a .ssh folder under /home/bob: mkdir .ssh

2) Restrict the .ssh permissions (remove group and other access, otherwise ssh will still prompt for a password): chmod 700 .ssh

3) Generate a passphrase-less public/private key pair: ssh-keygen -t rsa -P ''

  When prompted for the file in which to save the key, just press Enter to accept the default.

   Command and output:

[bob@localhost ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bob/.ssh/id_rsa):
Your identification has been saved in /home/bob/.ssh/id_rsa.
Your public key has been saved in /home/bob/.ssh/id_rsa.pub.
The key fingerprint is:
13:f1:5f:64:91:4c:75:fa:a7:56:4e:74:a5:c0:4f:84 bob@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|        .  ..=*++|
|         o  E++oo|
|        . .  o+ o|
|         . . ..o.|
|        S   .   =|
|         .     =.|
|              o .|
|             .   |
|                 |
+-----------------+

4) As root, edit the ssh configuration to enable RSA authentication: vim /etc/ssh/sshd_config. Remove the leading '#' from the following three lines so they read:

RSAAuthentication yes # enable RSA authentication

PubkeyAuthentication yes # enable public/private key pair authentication

AuthorizedKeysFile .ssh/authorized_keys # path to the authorized public key file

5) Append the public key to the authorized keys file (run inside ~/.ssh): cat id_rsa.pub >> authorized_keys

6) Restrict the authorized keys file permissions (remove group and other access, otherwise ssh will still prompt for a password): chmod 600 authorized_keys

7) Restart the sshd service: service sshd restart

8) Test that passwordless SSH login to the local machine works: ssh bob@master

  The first connection prompts for host key confirmation; type yes.

  Last login: Tue Aug 25 14:43:51 2015 from 10.204.105.165
  [bob@master ~]$ exit
  logout

9) Copy the /home/bob/.ssh folder from the master to slave7 and slave8, then complete the setup on each slave (generate a key pair and append its public key to the authorized_keys file); see the sketch after this step.

  Copy command: scp -r .ssh bob@slave7:~

  Test passwordless SSH from the master to slave7 and slave8 (as user bob). If it works, continue with the next steps; otherwise recheck the steps above.
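For reference, the per-slave setup boils down to the following (a sketch, run as bob on slave7 and slave8 after the scp above):

chmod 700 ~/.ssh                                  # ssh refuses to use lax permissions
ssh-keygen -t rsa -P ''                           # generate the slave's own key pair
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize it alongside the master's key
chmod 600 ~/.ssh/authorized_keys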

3. Install the JDK

Extract the package: tar -xzvf jdk-7u79-linux-x64.tar.gz; the extracted path is /usr/bob/jdk1.7.0_79.

Log in as root and set the environment variables: vim /etc/profile

Append the following at the end:

#set java and hadoop envs
export JAVA_HOME=/usr/bob/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH:.
export CLASSPATH=$JAVA_HOME/jre/lib:.
export HADOOP_HOME=/usr/bob/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin

Verify that the JDK installed successfully: run java or javac. If they work, continue; otherwise recheck the steps above.
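A quick check that the variables took effect (a sketch; /etc/profile is normally only read by login shells, hence the explicit source):

source /etc/profile
echo $JAVA_HOME     # should print /usr/bob/jdk1.7.0_79
java -version       # should report version 1.7.0_79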

II. Installing and Configuring Hadoop

1) Extract the hadoop-2.7.1.tar.gz archive: tar -xzvf hadoop-2.7.1.tar.gz

The extracted directory is hadoop-2.7.1; its contents:

[bob@master bob]$ ls -la hadoop-2.7.1
total 60
drwxr-x---  9 bob bob  4096 Jun 29 14:15 .
drwxr-x---. 5 bob bob  4096 Aug 25 15:15 ..
drwxr-x---  2 bob bob  4096 Jun 29 14:15 bin
drwxr-x---  3 bob bob  4096 Jun 29 14:15 etc
drwxr-x---  2 bob bob  4096 Jun 29 14:15 include
drwxr-x---  3 bob bob  4096 Jun 29 14:15 lib
drwxr-x---  2 bob bob  4096 Jun 29 14:15 libexec
-rw-r-----  1 bob bob 15429 Jun 29 14:15 LICENSE.txt
-rw-r-----  1 bob bob   101 Jun 29 14:15 NOTICE.txt
-rw-r-----  1 bob bob  1366 Jun 29 14:15 README.txt
drwxr-x---  2 bob bob  4096 Jun 29 14:15 sbin
drwxr-x---  4 bob bob  4096 Jun 29 14:15 share

2) Edit the configuration parameters. The following four files are involved (plus the slaves file, covered at the end of this section):

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/bob/hadoop-2.7.1/tmp</value>
  </property>

</configuration>

 

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/bob/hadoop_space/hdfs/name</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/bob/hadoop_space/hdfs/data</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>master:50091</value>
  </property>

</configuration>

 

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>

</configuration>
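Note that the two jobhistory addresses above are only served once the MapReduce job history server is running; start-dfs.sh and start-yarn.sh do not start it. A sketch using the script bundled in sbin:

[bob@master sbin]$ ./mr-jobhistory-daemon.sh start historyserver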

 

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

  <!-- Site specific YARN configuration properties -->

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>10.204.16.59</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>10.204.16.59:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>

</configuration>

 

 slaves (list the slave hostnames or IPs here; this file only needs to be set on the master). Contents:

  slave7

  slave8
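Before starting anything, the configured hadoop-2.7.1 directory also has to exist on the slaves. One way to get it there (a sketch, assuming the same /usr/bob layout on every host and that bob can write to it):

scp -r /usr/bob/hadoop-2.7.1 bob@slave7:/usr/bob/
scp -r /usr/bob/hadoop-2.7.1 bob@slave8:/usr/bob/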

III. Initialization and Startup

1. Log in as bob and format the HDFS filesystem: hdfs namenode -format

The format ran successfully; the last three lines of the output are:

  15/08/25 18:09:54 INFO util.ExitUtil: Exiting with status 0
  15/08/25 18:09:54 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at master/10.204.16.59
  ************************************************************/

2. Start HDFS:

Log in as bob and start the HDFS cluster: /usr/bob/hadoop-2.7.1/sbin/start-dfs.sh

Output:

15/08/25 19:00:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-namenode-master.out
slave8: starting datanode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-datanode-localhost.localdomain.out
slave7: starting datanode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-datanode-slave7.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-secondarynamenode-master.out
15/08/25 19:00:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

3. Check the HDFS processes on each host with jps.

On the master:
[bob@master sbin]$ jps
Output:

  25551 Jps
  25129 NameNode
  25418 SecondaryNameNode

On the slaves (slave7 and slave8 are the same):

[bob@slave7 .ssh]$ jps
Output:

  18468 DataNode
  18560 Jps

 

4. Start YARN:

 [bob@master sbin]$ ./start-yarn.sh
 Output:

  starting yarn daemons
  starting resourcemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-resourcemanager-master.out
  slave8: starting nodemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-nodemanager-localhost.localdomain.out
  slave7: starting nodemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-nodemanager-slave7.out

5. Check the cluster processes after YARN has started:

On the master:

[bob@master sbin]$ jps
Output:

  25129 NameNode
  25633 ResourceManager
  25418 SecondaryNameNode
  25904 Jps

On the slaves (slave7 and slave8 are the same):

[bob@slave7 .ssh]$ jps
Output:

  18468 DataNode
  18619 NodeManager
  18751 Jps

IV. Running an Example

1. Creating files in HDFS

Listing HDFS files produces a warning:

[bob@master sbin]$ hdfs dfs -ls /
15/08/25 19:23:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

According to the Apache documentation: NativeLibraryChecker is a tool to check whether native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:

$ hadoop checknative -a
   14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
   14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
   Native library checking:
   hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
   zlib:   true /lib/x86_64-linux-gnu/libz.so.1
   snappy: true /usr/lib/libsnappy.so.1
   lz4:    true revision:99
   bzip2:  false

But here every check comes back false:

[bob@master native]$ hadoop checknative -a
15/08/25 19:40:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop:  false
zlib:    false
snappy:  false
lz4:     false
bzip2:   false
openssl: false
15/08/25 19:40:04 INFO util.ExitUtil: Exiting with status 1

Still looking for the cause: does this really require recompiling the Hadoop source?

--- It turns out the warning does not affect normal operation, and I have not yet found how to silence it, so let's press on for now (a possible workaround is sketched below).
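For reference, two fixes are commonly suggested for this warning: pointing java.library.path at the bundled native directory, or rebuilding the native libraries on the target OS. A sketch of the first, added to /usr/bob/hadoop-2.7.1/etc/hadoop/hadoop-env.sh (it only helps if the bundled .so files are actually loadable against this system's glibc):

export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

Re-run hadoop checknative -a afterwards to see whether anything changed.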

2. Upload local files to HDFS

- Create input and output folders for the input and output data used later:

[bob@master hadoop]$ hdfs dfs -mkdir /input

[bob@master hadoop]$ hdfs dfs -mkdir /output

- List the files under the HDFS root directory:

[bob@master hadoop]$ hdfs dfs -ls /

Output:

Found 5 items
drwxr-xr-x   - bob supergroup          0 2015-08-31 20:23 /input
drwxr-xr-x   - bob supergroup          0 2015-09-01 21:29 /output
drwxr-xr-x   - bob supergroup          0 2015-08-31 18:03 /test1
drwx------   - bob supergroup          0 2015-08-31 19:23 /tmp
drwxr-xr-x   - bob supergroup          0 2015-09-01 22:00 /user

- Check the overall state of the HDFS filesystem:

[bob@master hadoop]$ hdfs dfsadmin -report

Output:
15/11/13 20:40:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 92229451776 (85.90 GB)
Present Capacity: 72146309120 (67.19 GB)
DFS Remaining: 71768203264 (66.84 GB)
DFS Used: 378105856 (360.59 MB)
DFS Used%: 0.52%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 10.204.16.58:50010 (slave8)
Hostname: slave8
Decommission Status : Normal
Configured Capacity: 46114725888 (42.95 GB)
DFS Used: 378073088 (360.56 MB)
Non DFS Used: 10757623808 (10.02 GB)
DFS Remaining: 34979028992 (32.58 GB)
DFS Used%: 0.82%
DFS Remaining%: 75.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 13 20:41:00 CST 2015


Name: 10.204.16.57:50010 (slave7)
Hostname: slave7
Decommission Status : Normal
Configured Capacity: 46114725888 (42.95 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 9325518848 (8.69 GB)
DFS Remaining: 36789174272 (34.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.78%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 13 20:41:01 CST 2015

 - Create a wordcount folder: hdfs dfs -mkdir /input/wordcount

 - Upload all .txt files under the local /home/bob/study/ directory to the HDFS /input/wordcount folder:

 [bob@master hadoop]$ hdfs dfs -put /home/bob/study/*.txt  /input/wordcount

 - List the uploaded files:

[bob@master hadoop]$ hadoop dfs -ls /input/wordcount
-rw-r--r--   3 bob supergroup        100 2015-11-13 21:02 /input/wordcount/file1.txt
-rw-r--r--   3 bob supergroup        383 2015-11-13 21:03 /input/wordcount/file2.txt
-rw-r--r--   2 bob supergroup         73 2015-08-31 19:18 /input/wordcount/runHadoop.txt

3. Run the bundled wordcount example.

[bob@master hadoop]$ hadoop jar /usr/bob/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input/wordcount/*.txt /output/wordcount
15/11/13 21:41:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/13 21:41:16 INFO client.RMProxy: Connecting to ResourceManager at /10.204.16.59:8032
15/11/13 21:41:17 INFO input.FileInputFormat: Total input paths to process : 3
15/11/13 21:41:17 INFO mapreduce.JobSubmitter: number of splits:3
15/11/13 21:41:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441114883272_0008
15/11/13 21:41:18 INFO impl.YarnClientImpl: Submitted application application_1441114883272_0008
15/11/13 21:41:18 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1441114883272_0008/
15/11/13 21:41:18 INFO mapreduce.Job: Running job: job_1441114883272_0008
15/11/13 21:50:57 INFO mapreduce.Job: Job job_1441114883272_0008 running in uber mode : false
15/11/13 21:50:57 INFO mapreduce.Job:  map 0% reduce 0%
15/11/13 21:51:10 INFO mapreduce.Job:  map 100% reduce 0%
15/11/13 21:58:31 INFO mapreduce.Job: Task Id : attempt_1441114883272_0008_r_000000_0, Status : FAILED
Container launch failed for container_1441114883272_0008_01_000005 : java.net.NoRouteToHostException: No Route to Host from  slave8/10.204.16.58 to slave7:45758 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost
        at sun.reflect.GeneratedConstructorAccessor22.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:758)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy36.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy37.startContainers(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:151)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
        at org.apache.hadoop.ipc.Client.call(Client.java:1446)
        ... 15 more

15/11/13 21:58:40 INFO mapreduce.Job:  map 100% reduce 100%
15/11/13 21:58:41 INFO mapreduce.Job: Job job_1441114883272_0008 completed successfully
15/11/13 21:58:41 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=680
                FILE: Number of bytes written=462325
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=887
                HDFS: Number of bytes written=327
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=3
                Launched reduce tasks=1
                Data-local map tasks=3
                Total time spent by all maps in occupied slots (ms)=30688
                Total time spent by all reduces in occupied slots (ms)=6346
                Total time spent by all map tasks (ms)=30688
                Total time spent by all reduce tasks (ms)=6346
                Total vcore-seconds taken by all map tasks=30688
                Total vcore-seconds taken by all reduce tasks=6346
                Total megabyte-seconds taken by all map tasks=31424512
                Total megabyte-seconds taken by all reduce tasks=6498304
        Map-Reduce Framework
                Map input records=13
                Map output records=52
                Map output bytes=752
                Map output materialized bytes=692
                Input split bytes=331
                Combine input records=52
                Combine output records=45
                Reduce input groups=25
                Reduce shuffle bytes=692
                Reduce input records=45
                Reduce output records=25
                Spilled Records=90
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=524
                CPU time spent (ms)=5900
                Physical memory (bytes) snapshot=1006231552
                Virtual memory (bytes) snapshot=4822319104
                Total committed heap usage (bytes)=718798848
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=556
        File Output Format Counters
                Bytes Written=327
An exception was thrown partway through the run:

15/11/13 21:58:31 INFO mapreduce.Job: Task Id : attempt_1441114883272_0008_r_000000_0, Status : FAILED
Container launch failed for container_1441114883272_0008_01_000005 : java.net.NoRouteToHostException: No Route to Host from  slave8/10.204.16.58 to slave7:45758 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost

After a fairly long wait the job eventually completed successfully. I will dig into the cause of this error later; a note on the likely culprit follows below.
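A note for later analysis: a NoRouteToHostException between two slaves that can otherwise reach each other usually points at a host firewall blocking the ephemeral port (45758 here). On RHEL 6 a quick check is (a sketch, run as root on each node; turning the firewall off outright is only appropriate on a trusted internal network):

service iptables status     # see whether the firewall is active
service iptables stop       # stop it for the current session
chkconfig iptables off      # keep it off across reboots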

- After a successful run, two files are generated automatically under /output/wordcount: _SUCCESS and part-r-00000. They can be listed with an hdfs command:

[bob@master hadoop]$ hdfs dfs -ls /output/wordcount
15/11/13 22:31:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   2 bob supergroup          0 2015-11-13 21:58 /output/wordcount/_SUCCESS
-rw-r--r--   2 bob supergroup        327 2015-11-13 21:58 /output/wordcount/part-r-00000

- Show the contents of part-r-00000; command and output:

[bob@master hadoop]$ hdfs dfs -cat /output/wordcount/part-r-00000
15/11/13 22:34:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/bob/study/hello.jar       1
/input/*.txt    2
/input/wordcount        1
/output/wordcount       3
/usr/bob/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar2
day     2
example 2
first   2
hadoop  5
hello   2
i       2
in      2
is      2
it      2
jar     3
my      2
myself,come     2
nice    2
on.     2
succeed 2
wordcount       2
中國人  1
中國夢  2
學習    2
學校    2
-------------------------------------------------------------------------

OK, that completes the walkthrough of my first full cluster setup; comments and corrections are welcome.

posted @ 2015-08-25 14:26 Bob.Guo

first updated @ 2015-11-13 20:29 Bob.Guo

