This post was originally published at http://www.cnblogs.com/hark0623/p/4204104.html — please credit the source when reposting.
I've found there are way too many pitfalls to wade through...
I submitted a Spark Streaming job to YARN in yarn-client mode, with the following submit script:
spark-submit --driver-memory 1g --executor-memory 1g --executor-cores 1 --num-executors 3 --class com.yhx.sensor.sparkstreaming.LogicHandle --master yarn-client /opt/spark/SparkStreaming.jar
The following error appeared:
15/01/05 17:12:30 ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /xxx.xx.xx.xx:xxxx
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:103)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
    at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
    at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:180)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
    at sun.nio.ch.Net.bind(Net.java:436)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
    at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
    ... 3 more
15/01/05 17:12:30 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 70, xxx.xxx.dn02): org.jboss.netty.channel.ChannelException: Failed to bind to: /121.41.49.51:2345
    org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
    org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:103)
    org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
    org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
From the exception we can more or less tell what happened: the receiver's listener could not be bound to its port.
In my case the bind failed because the port was still occupied: after I killed the first Spark Streaming job, I resubmitted it right away, and the listener bound by the previous run had not yet been released, which is what triggered this exception.
You can confirm this with something like netstat -anp | grep <port>.
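Besides eyeballing netstat, another way to check is simply to try binding the receiver's port yourself before resubmitting. Below is a minimal Python sketch of that idea; the helper names `port_free` and `wait_for_port` are my own, not part of Spark or Flume, and the port 2345 from the log above is just an example:

```python
import socket
import time

def port_free(port, host="0.0.0.0"):
    """Return True if host:port can be bound, i.e. the old listener is gone."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True   # bind succeeded: the port has been released
        except OSError:
            return False  # bind failed: something still holds the port

def wait_for_port(port, interval=2.0):
    """Block until the port is free; run this before calling spark-submit again."""
    while not port_free(port):
        print("port %d still in use, waiting..." % port)
        time.sleep(interval)
```

Calling `wait_for_port(2345)` before resubmitting avoids racing the dying receiver for its port; the 2-second polling interval is arbitrary.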
Ugh, pitfalls everywhere...
