Troubleshooting a spark.driver.host configuration error


Error log:
20/03/25 10:28:07 WARN UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.spark.SparkException: Exception thrown in awaitResult
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1930)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:202)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    ... 4 more
Caused by: java.io.IOException: Failed to connect to localhost/127.0.0.1:41640
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
    at org.apache.spark.rpc.netty.Outbox$anon$1.call(Outbox.scala:191)
    at org.apache.spark.rpc.netty.Outbox$anon$1.call(Outbox.scala:187)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:41640
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:640)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    ... 1 more
 
LogType:stderr
Log Upload Time:Wed Mar 25 10:31:22 +0800 2020
LogLength:63452
Log Contents:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/nm/usercache/root/filecache/4996/__spark_libs__5125379819107399169.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/03/25 10:29:18 INFO SignalUtils: Registered signal handler for TERM
20/03/25 10:29:18 INFO SignalUtils: Registered signal handler for HUP
20/03/25 10:29:18 INFO SignalUtils: Registered signal handler for INT
20/03/25 10:29:19 INFO ApplicationMaster: Preparing Local resources
20/03/25 10:29:19 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1585020115190_0150_000002
20/03/25 10:29:19 INFO SecurityManager: Changing view acls to: yarn,root
20/03/25 10:29:19 INFO SecurityManager: Changing modify acls to: yarn,root
20/03/25 10:29:19 INFO SecurityManager: Changing view acls groups to:
20/03/25 10:29:19 INFO SecurityManager: Changing modify acls groups to:
20/03/25 10:29:19 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, root); groups with view permissions: Set(); users  with modify permissions: Set(yarn, root); groups with modify permissions: Set()
20/03/25 10:29:19 INFO ApplicationMaster: Starting the user application in a separate Thread
20/03/25 10:29:19 INFO ApplicationMaster: Waiting for spark context initialization...
20/03/25 10:29:21 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
20/03/25 10:29:21 INFO RMProxy: Connecting to ResourceManager at df1/172.16.252.11:8030
20/03/25 10:29:28 WARN YarnAllocator: Container marked as failed: container_1585020115190_0150_02_000003 on host: df3. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1585020115190_0150_02_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
    at org.apache.hadoop.util.Shell.run(Shell.java:507)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
 
Container exited with a non-zero exit code 1

 

 
Problem recap:
    Wrote the program and tested it from local IDEA remotely against the test environment: everything worked.
    Submitted the program to the test environment and ran it in Spark local mode: everything worked.
    Ran it in cluster mode: it failed, again and again...
 
Approach:
    Since local mode ran fine in the test environment, the first suspect was the environment itself, but other programs ran there without problems,
so the environment was ruled out.
    The next suspect was the code, but staring at it turned up nothing, so the fallback was the brute-force method: rewrite the simplest
possible program, with no config parameters and no logic at all, just reading a file from HDFS and printing it to the console.
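The stripped-down job described above might look roughly like this (a sketch only; the app name and HDFS path are placeholders, not the original values):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: no spark.driver.host or any other tuning, defaults only.
object MinimalHdfsRead {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("minimal-hdfs-read") // placeholder name
    val sc = new SparkContext(conf)

    // Read a small test file from HDFS and print a few lines to the console
    sc.textFile("hdfs:///tmp/test-input.txt") // placeholder path
      .take(10)
      .foreach(println)

    sc.stop()
  }
}
```

Using `SparkConf`/`SparkContext` directly keeps the sketch compatible with both Spark 1.x and 2.x.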
 
It ran without a hitch.
That pointed to a Spark configuration parameter, and inspecting the program's parameter configuration turned up the following.
 
 
The spark.driver.host parameter: the hostname or IP address the driver listens on, used for communicating with the executors and with the standalone master.
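The parameter in this case was apparently set inside the program's configuration. A hypothetical reconstruction of the kind of setting that triggers this failure (the value is illustrative, though the log's `Failed to connect to localhost/127.0.0.1:41640` suggests something similar):

```scala
import org.apache.spark.SparkConf

// Hypothetical reconstruction: hard-coding spark.driver.host works in
// local mode but breaks yarn-cluster mode, because executors then try to
// reach the Driver at this fixed address instead of the ApplicationMaster's host.
val conf = new SparkConf()
  .setAppName("my-app")                   // placeholder name
  .set("spark.driver.host", "localhost")  // the problematic setting
```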
    After deploying Spark, the bundled pi-calculation example was run in both yarn-cluster and yarn-client mode.
    Apart from a few basic properties, the Spark config files were left unconfigured, and the two modes then behaved differently: yarn-cluster mode ran normally, while yarn-client mode always failed. The ResourceManager and NodeManager logs showed the program could never find the ApplicationMaster, which was odd. Moreover, connections from the NodeManager to the port opened by the client-side Driver were refused. Non-Spark MR jobs, meanwhile, ran normally.
Checking the client's config files revealed the cause: in the client's /etc/hosts, a single IP mapped to multiple hostnames, and by default the Driver picks the last entry, say hostB, while the NodeManager side was configured with a different one, hostA, so the program could not connect. To avoid disturbing other programs that rely on the client's host list, the spark.driver.host property was set in spark-defaults.conf to specify the Driver host used to communicate with YARN in yarn-client mode. With that, yarn-client mode ran normally.
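The situation described above can be illustrated like this (the IP and hostnames are made up):

```
# Client-side /etc/hosts: one IP mapped to multiple hostnames
172.16.252.100   hostA
172.16.252.100   hostB    # the Driver resolves to this last entry

# spark-defaults.conf: pin the host that the NodeManager side can reach,
# so yarn-client mode uses the right address
spark.driver.host   hostA
```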
Once that was configured, yarn-cluster mode stopped working! The obvious suspect was that very setting, and indeed, commenting it out let yarn-cluster mode run again. The reason: in yarn-cluster mode the Spark entry point runs on the client, but the rest of the Driver runs inside the ApplicationMaster, so the setting above effectively pins the ApplicationMaster's address; in reality, in yarn-cluster mode the ApplicationMaster is placed on a node chosen by the ResourceManager.
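One way to keep both modes working is to leave spark.driver.host out of the code and out of spark-defaults.conf, and pass it only when submitting in yarn-client mode. A sketch (hostname, class, and jar names are placeholders):

```shell
# yarn-client mode: pin the driver host explicitly
spark-submit --master yarn --deploy-mode client \
  --conf spark.driver.host=hostA \
  --class com.example.MyApp myapp.jar

# yarn-cluster mode: omit the flag, so the AM address is chosen by YARN
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.MyApp myapp.jar
```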
 
 
Commenting out the parameter fixed it. Be patient when hunting bugs...
 
 
 

