Tip: the fix for the error "Exit code: 127 Stack trace: ExitCodeException exitCode=127:" is at the very end of this article.
I. Set up the Java environment first
Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
On a Mac, jdk-8u144-macosx-x64.dmg installs the JDK in one click.
Then configure the JDK environment variables:
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME
export CLASSPATH
export PATH
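To make sure the JDK is wired up, a quick sanity check helps (this assumes you put the exports above in ~/.bash_profile; /usr/libexec/java_home is a stock macOS utility):
# Reload the profile, then verify the JDK
source ~/.bash_profile
java -version                   # should report java version "1.8.0_144"
/usr/libexec/java_home -v 1.8   # prints the JDK 8 home that macOS resolves
echo $JAVA_HOME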
II. Configure SSH
First make sure remote login is enabled:
System Preferences → Sharing → Remote Login
ssh-keygen -t rsa
Press Enter at each prompt.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
ssh localhost
If ssh localhost still asks for a password, check the permissions on your .ssh directory.
It can also be a permissions problem on .ssh's parent directory (this is exactly where my problem was): the parent, i.e. the owning user's home directory, should be 755.
Run:
chmod 755 ~/.ssh
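To see all the relevant permissions at a glance, a check along these lines helps (755 on the home directory, 700 or 755 on ~/.ssh, and 600 on authorized_keys are the combinations that usually work):
# Inspect the permissions sshd cares about
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
# Succeeds without a prompt once the keys and permissions are right
ssh localhost exit && echo "passwordless ssh OK"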
III. Install and configure Hadoop
Download: http://hadoop.apache.org/releases.html
http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
tar -zxvf hadoop-2.8.1.tar.gz
cd hadoop-2.8.1
Edit the configuration files; they live in etc/hadoop under the Hadoop directory.
1. Edit core-site.xml
<configuration>
    <!-- RPC address of the HDFS master (NameNode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Where Hadoop stores files generated at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/Users/chenxun/software/hadoop-2.8.1/temp</value>
    </property>
</configuration>
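As a sanity check you can later ask Hadoop which filesystem URI it actually resolved (this works once HADOOP_HOME/bin is on your PATH, which Part IV sets up):
hdfs getconf -confKey fs.defaultFS   # expected: hdfs://localhost:9000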
2. Edit hdfs-site.xml
The default replication factor is 3; change it to 1. dfs.namenode.name.dir names the directory where the fsimage is stored; dfs.datanode.data.dir names where block files are stored. Either may list multiple directories separated by commas.
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/Users/chenxun/software/hadoop-2.8.1/tmp/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/Users/chenxun/software/hadoop-2.8.1/tmp/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>localhost:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
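Depending on your setup the name and data directories may not be created automatically, so it does no harm to create them up front (paths copied from the XML above; adjust them to your own install):
mkdir -p /Users/chenxun/software/hadoop-2.8.1/tmp/hdfs/name
mkdir -p /Users/chenxun/software/hadoop-2.8.1/tmp/hdfs/data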
3. Configure YARN
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.admin.user.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /Users/chenxun/software/hadoop-2.8.1/etc/hadoop,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/common/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/common/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/hdfs/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/hdfs/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/mapreduce/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/mapreduce/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/yarn/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>
            /Users/chenxun/software/hadoop-2.8.1/etc/hadoop,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/common/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/common/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/hdfs/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/hdfs/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/mapreduce/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/mapreduce/lib/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/yarn/*,
            /Users/chenxun/software/hadoop-2.8.1/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>
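Rather than typing those long classpath lists by hand, you can have Hadoop print the classpath it computes itself and adapt that (hadoop classpath is part of the standard distribution; it needs the PATH setup from Part IV):
# Prints the classpath Hadoop itself would use; its entries can seed the values above
hadoop classpath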
IV. Configure the Hadoop environment variables (the native library may need rebuilding)
vim ~/.bash_profile
export HADOOP_HOME=/Users/chenxun/software/hadoop-2.8.1
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native:$HADOOP_COMMON_LIB_NATIVE_DIR"
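Reload the profile and make sure the Hadoop binaries resolve:
source ~/.bash_profile
hadoop version    # should print Hadoop 2.8.1
which hadoop      # should point into /Users/chenxun/software/hadoop-2.8.1/bin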
**The configuration above is basically complete, and usually nothing goes wrong at this point. You can start running Hadoop now: jump straight to Part V, and keep reading here only if you hit problems.**
The steps below may come up while you run Hadoop; come back to them if and when you do:
On a Mac you may also need to recompile the dynamic libraries Hadoop uses at runtime. That in turn may require a Maven environment; set one up according to your own Maven install path, then rebuild the libraries used under lib/native.
If you see the following while running Hadoop, you need to rebuild the native library:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Reference: http://blog.csdn.net/tterminator/article/details/51779689
Reference 2: Notes on building hadoop-2.7.0 on a Mac: http://blog.csdn.net/beijihukk/article/details/53782508?utm_medium=referral&utm_source=itdadao
First download the Hadoop source: hadoop-2.8.1-src.
Before rebuilding you need protobuf, Maven, and cmake installed (any install method you can find is fine).
Installing protobuf:
1. Installing protobuf 2.5 with brew:
$ brew search protobuf
protobuf
protobuf-c
protobuf-swift
homebrew/php/php53-protobuf
homebrew/php/php54-protobuf
homebrew/php/php55-protobuf
homebrew/php/php56-protobuf
homebrew/versions/protobuf250
homebrew/versions/protobuf260
$ brew install homebrew/versions/protobuf250
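Hadoop 2.x requires protobuf 2.5.0 exactly, so it is worth verifying the toolchain before kicking off the build:
protoc --version   # must print: libprotoc 2.5.0
mvn -version
cmake --version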
Configure the Maven environment:
export M2_HOME=/Users/chenxun/software/apache-maven-3.5.0
export PATH=$M2_HOME/bin:$PATH
If the native build fails with errors, zlib may not be installed; it can also be that OpenSSL is not set up. Set up OpenSSL like this (these are my Mac's paths; adjust to your own OpenSSL install):
export OPENSSL_ROOT_DIR=/usr/local/Cellar/openssl/1.0.2k
export OPENSSL_INCLUDE_DIR=/usr/local/Cellar/openssl/1.0.2k/include
**Now build the native library with:**
mvn package -Pdist,native -DskipTests -Dtar
Copy the resulting native libraries into the corresponding directory of the binary hadoop-2.8.1 release.
The compiled libraries are produced at:
hadoop-2.8.1-src/hadoop-dist/target/hadoop-2.8.1/lib/native
Copy them into the binary release's directory:
hadoop-2.8.1/lib/native
Then edit the etc/hadoop/hadoop-env.sh configuration:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_HOME/lib/native"
Restart Hadoop.
The warning quoted above should no longer appear.
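You can confirm that the native library is being picked up with checknative, which ships with Hadoop 2.x:
hadoop checknative -a   # 'hadoop: true' on the first line means libhadoop loaded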
V. Run Hadoop, with a quick tour of Hadoop commands
Format HDFS:
hdfs namenode -format
Start all Hadoop daemons at once:
start-all.sh
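start-all.sh is deprecated in Hadoop 2.x; the equivalent, more explicit form is:
start-dfs.sh    # NameNode, DataNode, SecondaryNameNode
start-yarn.sh   # ResourceManager, NodeManager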
Open http://localhost:50070 for the HDFS admin page.
Open http://localhost:8088 for the YARN application management page.
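jps (from the JDK) gives a quick list of the running daemons; after a clean start you should see NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager, plus Jps itself:
jps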
A quick look at some Hadoop commands:
Create a directory:
hdfs dfs -mkdir -p /user/chenxun/input
List it:
hadoop fs -ls /user/chenxun/
First create a file.txt and write a few words into it, like this:
vim file.txt
with the following contents:
hadoop
chenxun
hadoop
chen
Upload this file into the input directory you created earlier:
hdfs dfs -put ./file.txt input
Check whether it worked; on success you will see file.txt under input:
hadoop fs -ls /user/chenxun/input
Now test the wordcount example that ships with Hadoop by running the line below (if it errors out, don't worry; the fix is later in this article):
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar wordcount /user/chenxun/input/file.txt output
The run looks like this:
17/10/14 01:55:26 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/10/14 01:55:28 INFO input.FileInputFormat: Total input files to process : 1
17/10/14 01:55:28 INFO mapreduce.JobSubmitter: number of splits:1
17/10/14 01:55:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1507912953622_0017
17/10/14 01:55:28 INFO impl.YarnClientImpl: Submitted application application_1507912953622_0017
17/10/14 01:55:28 INFO mapreduce.Job: The url to track the job: http://chen.local:8088/proxy/application_1507912953622_0017/
17/10/14 01:55:28 INFO mapreduce.Job: Running job: job_1507912953622_0017
17/10/14 01:55:36 INFO mapreduce.Job: Job job_1507912953622_0017 running in uber mode : false
17/10/14 01:55:36 INFO mapreduce.Job:  map 0% reduce 0%
17/10/14 01:55:41 INFO mapreduce.Job:  map 100% reduce 0%
17/10/14 01:55:47 INFO mapreduce.Job:  map 100% reduce 100%
17/10/14 01:55:47 INFO mapreduce.Job: Job job_1507912953622_0017 completed successfully
17/10/14 01:55:48 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=44
		FILE: Number of bytes written=276523
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=141
		HDFS: Number of bytes written=26
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3032
		Total time spent by all reduces in occupied slots (ms)=3133
		Total time spent by all map tasks (ms)=3032
		Total time spent by all reduce tasks (ms)=3133
		Total vcore-milliseconds taken by all map tasks=3032
		Total vcore-milliseconds taken by all reduce tasks=3133
		Total megabyte-milliseconds taken by all map tasks=3104768
		Total megabyte-milliseconds taken by all reduce tasks=3208192
	Map-Reduce Framework
		Map input records=4
		Map output records=4
		Map output bytes=43
		Map output materialized bytes=44
		Input split bytes=114
		Combine input records=4
		Combine output records=3
		Reduce input groups=3
		Reduce shuffle bytes=44
		Reduce input records=3
		Reduce output records=3
		Spilled Records=6
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=134
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=306708480
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=27
	File Output Format Counters
		Bytes Written=26
Then check the result:
hadoop fs -ls /user/chenxun/output
You should now see:
Found 2 items
-rw-r--r--   1 chenxun supergroup          0 2017-10-14 01:55 /user/chenxun/output/_SUCCESS
-rw-r--r--   1 chenxun supergroup         26 2017-10-14 01:55 /user/chenxun/output/part-r-00000
Next, look at the actual word-count output:
hadoop fs -cat /user/chenxun/output/part-r-00000
Since my file.txt contains:
[chenxun@chen.local 11:13 ~/software/hadoop-2.8.1]$cat file.txt
hadoop
chenxun
hadoop
chen
the counts come out as:
chen 1
chenxun 1
hadoop 2
VI. If Part V fails with "Exit code: 127 Stack trace: ExitCodeException exitCode=127:", don't panic; here is the fix
I set up a pseudo-distributed environment on a Mac. After deployment, the MapReduce job started fine, but as soon as the Map Task launched it failed with exitCode: 127. The error log:
15/04/06 00:08:01 INFO mapreduce.Job: Job job_1428250045856_0002 failed with state FAILED due to: Application application_1428250045856_0002 failed 2 times due to AM Container for appattempt_1428250045856_0002_000002 exited with exitCode: 127 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
	at org.apache.hadoop.util.Shell.run(Shell.java:418)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
The fix:
1. The log suggests some shell command could not be found on the Mac.
2. Check the per-application logs: hadoop-2.3.0/logs/userlogs/application_1428247759749_0002/container_1428247759749_0002_02_000001 contains the error:
/bin/bash: /bin/java: No such file or directory
So the /bin/java command was not found. Creating a symlink pointing at the java binary fixes it.
(The log file names above don't match the application id in the exception; that is a slip in my write-up. In practice, locate the failing container's log by the file names in your own logs; the overall procedure is the same.)
This happens because Hadoop looks for java at /bin/java by default, but that is not where a Mac's Java lives; its path is /usr/bin/java. Don't believe it? Run $ /usr/bin/java -version and you should see something like:
Java version "1.8.0_45" Java(TM) SE Runtime Environment (build 1.8.0_45-b14) Java Hotspot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
If that checks out, the next step is to create a shortcut (symlink) at /bin/java so that Hadoop can reach the Java at /usr/bin/java:
$ sudo ln -s /usr/bin/java /bin/java
Then run $ /bin/java -version to verify; it should show:
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
At this point, running sudo ln -s /usr/bin/java /bin/java may itself fail with: ln: /bin/java: Operation not permitted
That is the Mac's doing, not your mistake.
The SIP feature Apple introduced in OS X 10.11 means that even with sudo (i.e. root privileges) you cannot modify system-level directories, /usr/bin among them. There are two ways around this: the less safe one is to disable SIP, the rootless feature; the other is to place links that would have gone under /usr/bin into /usr/local/bin instead.
Without further ado, the fix:
Reboot while holding Command+R to enter Recovery Mode, open Terminal, and run:
csrutil disable
Then reboot and you're done. If you want to restore the default afterwards (a good idea), do the same and run:
csrutil enable
Reference: http://www.jianshu.com/p/22b89f19afd6
---------------------
Author: 后打开撒打发了
Source: CSDN
Original: https://blog.csdn.net/chenxun_2010/article/details/78238251
Copyright notice: this is the author's original post; please include a link to it when reposting.