I originally thought that setting up a local environment for writing and testing Hadoop programs would be simple, but I still ended up in a mess. Here are the steps and the problems I ran into; I hope things go more smoothly for you.
I. To connect to a Hadoop cluster and write code against it, you need the following:
1. A remote Hadoop cluster (my master's address is 192.168.85.2)
2. MyEclipse locally, plus the plugin that connects MyEclipse to Hadoop
3. A local Hadoop installation (I used hadoop-2.7.2)
First download the hadoop-eclipse-plugin; I used hadoop-eclipse-plugin-2.6.0.jar. After downloading it, put it in the "MyEclipse Professional 2014\dropins" directory and restart MyEclipse; you will then find a Map/Reduce option among the perspectives and views.
Switch to the Hadoop (Map/Reduce) perspective, then open the MapReduce Tools view.
II. Next, add a Hadoop location in the plugin. To fill in the connection settings you need to look at the cluster's configuration files:
1. In hadoop/etc/hadoop/mapred-site.xml, check the IP and port in mapred.job.tracker; they go into the Map/Reduce Master fields.
2. In hadoop/etc/hadoop/core-site.xml, check the IP and port in fs.default.name; they go into the DFS Master fields.
3. For the user name, simply enter the user that operates Hadoop. (A sketch of the two config entries follows this list.)
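For orientation, this is roughly what those two entries can look like. Treat it as an illustrative sketch rather than a copy of any real cluster's files: the DFS address matches the master IP and port 9000 used later in this post, the Map/Reduce port (9001) is only an example, and on a 2.x cluster the property names in your files may differ (e.g. fs.defaultFS), so always read your own configuration.
<!-- core-site.xml: this ip:port goes into the DFS Master fields -->
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.85.2:9000</value>
</property>
<!-- mapred-site.xml: this ip:port goes into the Map/Reduce Master fields; 9001 is only an example -->
<property>
<name>mapred.job.tracker</name>
<value>192.168.85.2:9001</value>
</property>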
That completes the configuration. If everything went smoothly, you should be able to see the cluster's file system from the plugin.
Create a new Hadoop project:
【File】->【New】->【Project...】->【Map/Reduce】->【Map/Reduce Project】->【Project name: WordCount】->【Configure Hadoop install directory...】->【Hadoop installation directory: D:\nlsoftware\hadoop\hadoop-2.7.2】->【Apply】->【OK】->【Next】->【Allow output folders for source folders】->【Finish】
Create three classes in the project: the Mapper, the Reducer, and the main class.
TestMapper
package bb;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Splits each input line into tokens and emits (word, 1) for every token.
public class TestMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
TestReducer
package bb;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts for each word and emits (word, total).
public class TestReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
WordCount
package bb;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

// Driver class: wires the mapper, combiner and reducer together and submits the job.
public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TestMapper.class);
        job.setCombinerClass(TestReducer.class);
        job.setReducerClass(TestReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
I created two text files under /input in HDFS to test with; any other files would work too. So my run configuration is as follows, with the program arguments on the first line and the VM arguments on the second:
hdfs://192.168.85.2:9000/input/* hdfs://192.168.85.2:9000/output6
-Xms512m -Xmx1024m -XX:MaxPermSize=256m
A quick explanation: the two program arguments are the input files and the output location for the results; just point them at the correct paths. The folder name output6 is arbitrary and will be created automatically (the job fails if the output directory already exists, so pick a fresh name).
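As a side note (my own addition, not a step from the original write-up), the two test files can also be uploaded to HDFS programmatically with the FileSystem API. A minimal sketch, where the local file paths are made-up examples and the HDFS address is the one used in the arguments above:

package bb;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Small one-off helper: creates /input on HDFS and copies two local text files into it.
// The local paths below are placeholders; change them to real files on your machine.
public class UploadTestInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("hdfs://192.168.85.2:9000"), conf);
        fs.mkdirs(new Path("/input"));
        fs.copyFromLocalFile(new Path("D:/tmp/test1.txt"), new Path("/input/test1.txt"));
        fs.copyFromLocalFile(new Path("D:/tmp/test2.txt"), new Path("/input/test2.txt"));
        fs.close();
    }
}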
Now for the last and most critical step. When I ran the project with Run As > Run on Hadoop, I hit this error:
Server IPC version 9 cannot communicate with client version 4
This means the versions don't match: the remote Hadoop version differs from the version of the client jars. The remote cluster is 2.7.2, so I replaced the Hadoop jars in the project with that version (any 2.x version should work; if you don't have the exact one, a close version will also do).
Then the error changed to a different one:
Exception in thread "main" ExitCodeException exitCode=-1073741515:
After some research, this turns out to be because the local Hadoop on Windows is missing winutils.exe; under the hood, the local Hadoop client needs to call this program. So first download winutils.exe for Hadoop 2.7 and make sure it runs without errors.
After downloading it, it turned out hadoop.dll is also required. Sigh. Download that as well and put it in C:\Windows\System32.
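One extra hint from my side (not part of the original steps): the client looks for winutils.exe under the bin folder of the local Hadoop installation, which it locates via HADOOP_HOME or the hadoop.home.dir system property. A minimal sketch of setting the latter, assuming the install directory mentioned earlier, placed at the very top of WordCount.main():

public static void main(String[] args) throws Exception {
    // Point the Hadoop client at the local installation so it can find bin\winutils.exe.
    // The path below assumes the install directory from earlier in this post; adjust as needed.
    System.setProperty("hadoop.home.dir", "D:\\nlsoftware\\hadoop\\hadoop-2.7.2");
    // ... the rest of the WordCount driver code shown above ...
}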
However, my winutils.exe still would not start. This turned out to be a problem with my own machine, but I imagine some of you will run into it too, so briefly:
It complained that msvcr120.dll was missing. After downloading that and trying again, it reported "The application was unable to start correctly (0xc000007b)".
That error usually points at a corrupted or mismatched (32-bit vs 64-bit) runtime library. Downloading DirectX_Repair and repairing DirectX finally solved the problem, and the Hadoop program started successfully.
Some of you may be able to start winutils.exe and still not be able to run the application, with errors persisting; in that case, try relaxing the permission checks.
Edit hadoop/etc/hadoop/hdfs-site.xml
and add:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
This disables HDFS permission checking (restart HDFS for the change to take effect).