Notes on a Spark classpath jar conflict


Submitting a job through spark-shell failed with the following error:

Caused by: java.lang.NoSuchMethodError: org.apache.spark.network.util.AbstractFileRegion.transferred()J
at org.apache.spark.network.util.AbstractFileRegion.transfered(AbstractFileRegion.java:28)
at io.netty.channel.nio.AbstractNioByteChannel.doWrite(AbstractNioByteChannel.java:228)
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:282)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:879)

The class AbstractFileRegion is present in spark-network-common.xxx.jar, which is on the classpath, so a jar conflict was suspected: a NoSuchMethodError at runtime typically means the class that was actually loaded came from a different (older) jar than the one the calling code was compiled against.
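
One quick way to verify this, before digging through classpath listings, is to ask the JVM directly which jar each suspect class was loaded from. A minimal sketch that can be pasted into spark-shell (the class names are taken from the stack trace above; getProtectionDomain/getCodeSource are standard JDK APIs):

// Paste into spark-shell: prints the jar each suspect class was actually loaded from.
for (name <- Seq(
       "org.apache.spark.network.util.AbstractFileRegion",
       "io.netty.channel.FileRegion")) {
  val src = Class.forName(name).getProtectionDomain.getCodeSource
  println(s"$name -> ${if (src == null) "bootstrap/unknown" else src.getLocation}")
}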

# classpath of the failing submission
[/data/xxx/dts-executor/executions/4998-0-1/resource
 /data/xxx/dts-executor/plugins/sparkShell/conf
 /data/xxx/dts-executor/plugins/sparkShell/di-dts-plugin-ic-1.0.0.jar
 /usr/hdp/current/hadoop-client/*
 /usr/hdp/current/hadoop-client/lib/*
 /usr/hdp/current/hadoop-hdfs-client/*
 /usr/hdp/current/hadoop-hdfs-client/lib/*
 /usr/hdp/current/hadoop-mapreduce-client/*
 /usr/hdp/current/hadoop-mapreduce-client/lib/*
 /usr/hdp/current/hadoop-yarn-client/*
 /usr/hdp/current/hadoop-yarn-client/lib/*
 /usr/hdp/current/hive/lib/*
 /usr/hdp/current/spark2-client/conf
 /usr/hdp/current/spark2-client/jars/*]

# classpath of the successful submission
 [/data/xxx/dts-executor/plugins/sparkShell/di-dts-plugin-ic-1.0.0.jar
 /data/xxx/dts-executor/plugins/sparkShell/conf
 /usr/hdp/current/spark2-client/jars/*
 /usr/hdp/current/spark2-client/conf
 /usr/hdp/current/hive/lib/*
 /usr/hdp/current/hadoop-client/*
 /usr/hdp/current/hadoop-client/lib/*
 /usr/hdp/current/hadoop-mapreduce-client/*
 /usr/hdp/current/hadoop-mapreduce-client/lib/*
 /usr/hdp/current/hadoop-yarn-client/*
 /usr/hdp/current/hadoop-yarn-client/lib/*
 /usr/hdp/current/hadoop-hdfs-client/*
 /usr/hdp/current/hadoop-hdfs-client/lib/*
 /data/xxx/dts-executor/executions/5008-0-1/resource]

The key difference is that /usr/hdp/current/spark2-client/jars/* was moved to the front. The root cause turned out to be a netty conflict: the hadoop entries such as /usr/hdp/current/hadoop-client/* carry their own netty jars, which clash with the following two jars under /usr/hdp/current/spark2-client/jars/*:
netty-3.9.9.Final.jar
netty-all-4.1.17.Final.jar
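
A classpath scan can confirm which entries carry duplicate copies of a class. The sketch below is illustrative (the resource name comes from the stack trace; the object and helper names are made up for the example):

import java.io.File
import java.util.jar.JarFile

object FindDuplicateJars {
  def main(args: Array[String]): Unit = {
    // Class resource taken from the stack trace; any class from the error works.
    val resource = "io/netty/channel/FileRegion.class"
    val entries  = sys.props("java.class.path").split(File.pathSeparator)
    // Expand "dir/*" wildcard entries into the jars they match
    // (a no-op if the JVM already expanded them).
    val jars = entries.toSeq.flatMap { e =>
      if (e.endsWith("*"))
        Option(new File(e.dropRight(1)).listFiles).toSeq.flatten
          .filter(_.getName.endsWith(".jar")).map(_.getPath)
      else if (e.endsWith(".jar")) Seq(e)
      else Seq.empty
    }
    val hits = jars.filter { path =>
      val jar = new JarFile(path)
      try jar.getEntry(resource) != null finally jar.close()
    }
    // More than one line printed => conflicting copies; the first entry wins.
    hits.foreach(println)
  }
}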
Solutions:
1. Move the spark classpath entries ahead of the hadoop ones, so the spark jars are loaded first (this is the approach we took); the sketch after this list shows why ordering decides the winner.
2. If nothing actually uses hadoop's two netty jars, they can simply be deleted, and the conflict disappears.
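
Solution 1 works because the JVM resolves a class from the first classpath entry that contains it. A toy illustration of this first-match rule, assuming two jars that both contain the same class (the jar paths and the providerOf helper are placeholders, not the actual HDP paths):

import java.io.File
import java.net.{URL, URLClassLoader}

// Hypothetical helper: loads className through a loader that searches the
// given jars in order, and returns the jar that actually provided it.
def providerOf(jarsInOrder: Seq[String], className: String): URL = {
  val urls   = jarsInOrder.map(p => new File(p).toURI.toURL).toArray
  val loader = new URLClassLoader(urls, null) // null parent: search only these jars
  loader.loadClass(className).getProtectionDomain.getCodeSource.getLocation
}

// Placeholder jar paths for illustration; swapping the order changes which copy wins.
println(providerOf(
  Seq("/tmp/netty-all-4.1.17.Final.jar", "/tmp/netty-all-old.jar"),
  "io.netty.channel.FileRegion"))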

