One Fix for the Mac Error 'WARN security.UserGroupInformation: PriviledgedActionException as:<username>...No such file or directory'


When using IDEA on a Mac to connect to Hadoop remotely and debug a MapReduce program, following various online blog posts, I kept hitting the error in the title. Below is one way I solved it on my Mac, for reference.

Preparation

If you want to debug remotely, some preparation is needed; here is a quick list.

(1) Prepare a copy of Hadoop locally (some bloggers copy one straight from the cluster) and set the environment variable.

# set HADOOP_HOME to your actual Hadoop path
export HADOOP_HOME=/Users/yangchaolin/hadoop2.6.0/hadoop-2.6.0-cdh5.14.0
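
Optionally, putting Hadoop's executables on the PATH makes it easy to sanity-check the local copy from a terminal; a minimal sketch, assuming the HADOOP_HOME above:

# optional: expose the hadoop command and verify the local copy works
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
hadoop version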

(2) In the IDEA project, add the resource jars under the local Hadoop's share directory to the project as dependencies (the usual locations are listed below).
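
For reference, in a Hadoop 2.6 distribution the dependency jars usually live under these share subdirectories, plus the lib folder inside each (the exact layout can vary by distribution):

$HADOOP_HOME/share/hadoop/common
$HADOOP_HOME/share/hadoop/hdfs
$HADOOP_HOME/share/hadoop/mapreduce
$HADOOP_HOME/share/hadoop/yarn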

(3) Prepare the MapReduce program and create an application Run/Debug configuration whose working directory is the local Hadoop. (An example of the program arguments is sketched below.)
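
The configuration also needs the input and output paths passed as program arguments, matching the two args checked in the driver below. A hypothetical example (the namenode host and paths are placeholders, not from the original setup):

hdfs://node01:8020/wordcount/input hdfs://node01:8020/wordcount/output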

Map-side program

package com.kaikeba.mapreduce;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * The map side of the word-count job.
 */
public class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    // most applications should override the map method
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // split the line into words
        String readLine = value.toString();
        String[] words = readLine.split(" ");
        // emit each word with a count of 1
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

Reduce-side program

package com.kaikeba.mapreduce;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * The reduce side of the word-count job.
 */
public class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    // override the reduce method
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // sum up the counts for this word
        int count = 0;
        for (IntWritable i : values) {
            count += i.get();
        }
        // write the word and its total count to the output path
        context.write(key, new IntWritable(count));
    }
}

Main program

package com.kaikeba.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import java.io.IOException;

/**
 * The driver for the word-count job; the class's simple name is used as the job name.
 */
public class WordCountMain {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // check args: an input path and an output path are required
        if (args == null || args.length != 2) {
            System.out.println("please specify an input path and an output path");
            System.exit(0); // exit the main method
        }
        // create the mapreduce job
        Configuration conf = new Configuration();
        // to run on a cluster, set these properties and configure yarn in mapred-site.xml
        //conf.set("mapreduce.job.jar","/home/hadoop/IdeaProject/hadoop/target/hadoop-1.0-SNAPSHOT.jar");
        //conf.set("mapreduce.app-submission.cross-platform","true");
        //conf.set("mapreduce.framework.name","yarn");

        /**
         * Creates a new Job with no particular Cluster and a given jobName.
         * A Cluster will be created from the conf parameter only when it's needed.
         */
        Job job = Job.getInstance(conf, WordCountMain.class.getSimpleName());
        // set the jar by finding where the given class came from
        job.setJarByClass(WordCountMain.class);
        // set the input/output formats
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // set the input/output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // set the map/reduce classes
        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);
        // use the reducer as a combiner (summing is commutative and associative)
        job.setCombinerClass(WordCountReduce.class);
        // map output key-value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // reduce output key-value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // use 4 reduce tasks
        job.setNumReduceTasks(4);
        // submit the job and wait for completion
        try {
            job.waitForCompletion(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The other resource/configuration files are omitted, because the error in this post has nothing to do with them.

The program is packaged into a jar with Maven; running it on Linux with a command of the form hadoop jar <jar> <full class name> <input path> <output path> works fine (an example is sketched below); the details are omitted.
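
For concreteness, the submission command would look roughly like this; the jar name is taken from the commented-out mapreduce.job.jar line in the driver, and the HDFS paths are placeholders, not the original ones:

hadoop jar hadoop-1.0-SNAPSHOT.jar com.kaikeba.mapreduce.WordCountMain /wordcount/input /wordcount/output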

Fixing the Error

Next, remote-debugging Hadoop from IDEA with the application configured as above, execution fails with the following error, saying a path or directory cannot be found.

19/10/14 00:53:02 WARN security.UserGroupInformation: PriviledgedActionException as:yangchaolin (auth:SIMPLE) cause:ExitCodeException exitCode=1: chmod: /kkb/install/hadoop-2.6.0-cdh5.14.2/hadoopDatas/tempDatas/mapred/staging/yangchaolin90307621/.staging/job_local90307621_0001: No such file or directory

Exception in thread "main" ExitCodeException exitCode=1: chmod: /kkb/install/hadoop-2.6.0-cdh5.14.2/hadoopDatas/tempDatas/mapred/staging/yangchaolin90307621/.staging/job_local90307621_0001: No such file or directory

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
    at org.apache.hadoop.util.Shell.run(Shell.java:507)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:882)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:865)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:720)
    at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:498)
    at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:479)
    at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:495)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:616)
    at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:814)
    at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:774)
    at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:178)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:991)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:976)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:976)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:582)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:612)
    at com.kaikeba.mapreduce.WordCountMain.main(WordCountMain.java:60)

Process finished with exit code 1

(1) I tried changing the permissions of the Hadoop installation directory on the cluster to 777; the error persisted.

(2) Since the error complains about a missing directory, I compared with people running on Windows: on their machines this directory is created automatically on the disk (it is the one specified in the configuration files). On my Mac the directory did not exist. So I created the top-level directory under the Mac's root by hand, granted it 777 permissions, and on the next run the job went through! I suspect this comes down to user permissions on the Mac: on Windows the user is usually an administrator, while on the Mac it is not.

# go to the root directory
youngchaolinMac:/ yangchaolin$ cd /
# create the top-level directory
youngchaolinMac:/ yangchaolin$ sudo mkdir kkb
Password:
youngchaolinMac:/ yangchaolin$ ls -l
total 45...
drwxr-xr-x  2 root wheel  68 10 23 22:52 kkb
# grant 777 permissions
youngchaolinMac:/ yangchaolin$ sudo chmod -R 777 kkb
youngchaolinMac:/ yangchaolin$ ls -l
total 45...
drwxrwxrwx  2 root wheel  68 10 23 22:52 kkb

With that, IDEA runs the job through and the MapReduce task completes.

Checking the computation results on the cluster: OK.
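
An alternative that may be worth trying instead of creating directories under the Mac's root: in local-mode runs the staging directory defaults to ${hadoop.tmp.dir}/mapred/staging, so the /kkb/install/... path in the error most likely comes from a hadoop.tmp.dir set in the core-site.xml copied from the cluster. An untested sketch that redirects it to a writable local path in the driver (the /Users/... path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// inside WordCountMain.main, when building the job:
Configuration conf = new Configuration();
// hypothetical workaround: point the local staging area at a writable path,
// since local mode stages jobs under ${hadoop.tmp.dir}/mapred/staging
conf.set("hadoop.tmp.dir", "/Users/yangchaolin/hadoop-tmp");
Job job = Job.getInstance(conf, WordCountMain.class.getSimpleName());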

I looked through many blog posts offering various fixes for this warning, but most are for Windows. If you are on a Mac, this post may be one workable solution to the error.

 

References:

(1)https://www.cnblogs.com/yjmyzz/p/how-to-remote-debug-hadoop-with-eclipse-and-intellij-idea.html

