I recently started following the tutorial 《子雨大數據之Spark入門教程(Python版)》 (Spark Tutorial for Beginners, Python edition) to learn about big data.
Here is the link to the online tutorial:
http://dblab.xmu.edu.cn/blog/1709-2/
I will summarize the problems I ran into while studying, together with my solutions, in this post.
1. Error when running the standalone Spark application example:
After setting up the environment as described in the tutorial, running the first Spark program failed with the following error:
python3 ~/test.py
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.1.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-09-11 19:54:12 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "/home/hadoop/test.py", line 6, in <module>
    numAs = logData.filter(lambda line: 'a' in line).count()
  File "/usr/local/spark/python/pyspark/rdd.py", line 1073, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/usr/local/spark/python/pyspark/rdd.py", line 1064, in sum
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
  File "/usr/local/spark/python/pyspark/rdd.py", line 935, in fold
    vals = self.mapPartitions(func).collect()
  File "/usr/local/spark/python/pyspark/rdd.py", line 834, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.IllegalArgumentException
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
    at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:162)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:844)
Some people said it was a Java version problem.
After googling for quite a while I found someone on Stack Overflow who had hit the same issue: Spark 2.x does not support Java 10, so the fix is to fall back to Java 8.
First, remove the existing Java installation (my machine came with Java 10 by default):
sudo apt-get purge openjdk-\* icedtea-\* icedtea6-\*
Then install OpenJDK 8 (java-1.8.0-openjdk-amd64):
sudo apt install openjdk-8-jre-headless
Next, update the current user's environment variables:
vim ~/.bashrc
and change the JAVA_HOME path to:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
Reload the configuration (source ~/.bashrc or open a new terminal) and run the Spark script again; this time it completes without errors. Problem solved.
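For reference, the tutorial's test.py looks roughly like the sketch below. It is reconstructed from the traceback above, so the SparkContext setup and the sample file path are assumptions:

from pyspark import SparkContext

# 'local' master and the app name are assumptions; adjust to your setup
sc = SparkContext('local', 'test')

# the sample file path is an assumption; the tutorial reads Spark's own README
logFile = "file:///usr/local/spark/README.md"
logData = sc.textFile(logFile).cache()

# count lines containing the letters 'a' and 'b'
numAs = logData.filter(lambda line: 'a' in line).count()
numBs = logData.filter(lambda line: 'b' in line).count()
print('Lines with a: %s, Lines with b: %s' % (numAs, numBs))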
For installing multiple Java versions side by side and switching between them, see this article:
https://ywnz.com/linuxjc/2948.html
2. Installing Hadoop as a cluster: Ubuntu 18.04 serves as the master node and two CentOS 7.5 machines serve as slave nodes. A hadoop user was created on all three machines.
All of the operations below are performed as the hadoop user.
It is best to make sure that every machine in the Hadoop cluster uses the same versions of Hadoop and Java.
To keep the Java version consistent, I downloaded
jdk-8u181-linux-x64.tar.gz
and put this copy of Java on all three machines, extracting it into the /usr/lib/jvm/ directory.
su hadoop
vim ~/.bashrc
Add the following environment variables to the ~/.bashrc file:
export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_181
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin
Change these to the file paths and Java version you actually use; after that the hadoop and java commands can be used directly.
Run:
source ~/.bashrc
to make the environment variables take effect immediately.
If Hadoop has been run on these machines before, delete the old files first:
sudo rm -r ./hadoop/tmp      # delete the Hadoop temporary files on the master and slave nodes (warning: this removes all existing data in HDFS; do not do this if that data matters)
sudo rm -r ./hadoop/logs/*
On the master node, run:
hdfs namenode -format      # initialization is only needed on the first run, not afterwards
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
The jps command shows which processes are running on each node. If everything started correctly, the master node shows the NameNode, ResourceManager, SecondaryNameNode, and JobHistoryServer processes,
and each slave node shows the DataNode and NodeManager processes.
If any of these processes is missing, something went wrong. In addition, run hdfs dfsadmin -report on the master node
to check whether the DataNodes started properly; if Live datanodes is not 0, the cluster has started successfully.
An error came up when performing HDFS file operations:
hadoop@ubuntu-1804:/usr/local/hadoop$ hdfs dfs -mkdir -p /user/hadoop
mkdir: Cannot create directory /user/hadoop. Name node is in safe mode.
This happens because HDFS is in safe mode.
Safe mode is a special state of HDFS in which the file system only accepts read requests and rejects deletions, modifications, and other write requests. When the NameNode starts, HDFS first enters safe mode; as the DataNodes start up they report their available blocks and other status to the NameNode, and once the system as a whole reaches the safety threshold, HDFS leaves safe mode automatically. While HDFS is in safe mode, no block replication takes place, so whether the minimum replica count is satisfied is judged solely from the state the DataNodes report at startup; no extra copies are made at that point just to reach the minimum.
https://blog.csdn.net/yh_zeng2/article/details/53144304
https://blog.csdn.net/bingduanlbd/article/details/51900512
Run the following command to leave safe mode:
hdfs dfsadmin -safemode leave
Then run again:
hdfs dfs -mkdir -p /user/hadoop
This time there is no error.
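As a quick check that Spark can now read from HDFS, a short PySpark job can count the lines of an uploaded file. This is only a sketch: the file name input.txt and the NameNode address hdfs://Master:9000 are assumptions and must match the file you actually uploaded and the fs.defaultFS value in core-site.xml:

from pyspark import SparkContext

sc = SparkContext(appName="HDFS check")

# both the NameNode address and the file name are assumptions;
# upload a file first, e.g.: hdfs dfs -put localfile.txt /user/hadoop/input.txt
lines = sc.textFile("hdfs://Master:9000/user/hadoop/input.txt")
print(lines.count())

sc.stop()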
Starting pyspark with a MySQL database connection: specify the JDBC connector jar when launching.
pyspark --jars /usr/local/spark/jars/mysql-connector-java-8.0.12/mysql-connector-java-8.0.12.jar --driver-class-path /usr/local/spark/jars/mysql-connector-java-8.0.12/mysql-connector-java-8.0.12.jar
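Once pyspark starts with the driver jar on the classpath, a MySQL table can be read through the DataFrame JDBC source roughly as sketched below. The database name spark, the table student, and the credentials are placeholders to be replaced with real values:

# inside the pyspark shell, `spark` (the SparkSession) already exists
jdbc_df = (spark.read.format("jdbc")
           .option("url", "jdbc:mysql://localhost:3306/spark?useSSL=false")  # placeholder database name
           .option("driver", "com.mysql.cj.jdbc.Driver")   # driver class shipped with Connector/J 8.x
           .option("dbtable", "student")                    # placeholder table name
           .option("user", "root")                          # placeholder credentials
           .option("password", "yourpassword")
           .load())
jdbc_df.show()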
Starting a Kafka consumer. Run:
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic wordsendertest --from-beginning
This fails with the error:
zookeeper is not a recognized option
This is because newer Kafka releases no longer accept the --zookeeper option for the console consumer; use --bootstrap-server with the broker address instead:
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic wordsendertest --from-beginning
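The same topic can also be consumed from PySpark. Below is a minimal sketch using the Spark 2.x direct Kafka stream; it assumes the matching spark-streaming-kafka-0-8 assembly jar has been passed to pyspark/spark-submit with --jars, as the tutorial's streaming chapter does:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # needs the spark-streaming-kafka-0-8 assembly jar

sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches

# the broker address matches the --bootstrap-server used above
stream = KafkaUtils.createDirectStream(
    ssc, ["wordsendertest"], {"metadata.broker.list": "localhost:9092"})

# each record is a (key, value) pair; count the words in the values
counts = (stream.map(lambda kv: kv[1])
                .flatMap(lambda line: line.split(" "))
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()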