Implementing a secondary sort with Spark's sortByKey


I recently ran into a secondary-sort requirement in a project. As with any Spark application, the workflow was the usual one: check the API, write the code, debug, and verify the results. Having used the Spark API before, I knew that sortByKey() accepts a custom ordering, so a secondary sort can be implemented by supplying a custom comparison rule.
To illustrate the problem, here is a simple example. Each key consists of two parts; we sort by the first part of the key in descending order and by the second part in ascending order:


import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);

List<Integer> data = Arrays.asList(5, 1, 1, 4, 4, 2, 2);

JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data);

final Random random = new Random(100);

// Build a pair RDD whose String key has two space-separated parts.
JavaPairRDD<String, Integer> javaPairRDD = javaRDD.mapToPair(new PairFunction<Integer, String, Integer>() {
    @Override
    public Tuple2<String, Integer> call(Integer integer) throws Exception {
        return new Tuple2<String, Integer>(Integer.toString(integer) + " " + random.nextInt(10), random.nextInt(10));
    }
});

// Sort by the first key part descending, then by the second part ascending.
JavaPairRDD<String, Integer> sortByKeyRDD = javaPairRDD.sortByKey(new Comparator<String>() {
    @Override
    public int compare(String o1, String o2) {
        String[] o1s = o1.split(" ");
        String[] o2s = o2.split(" ");
        if (o1s[0].compareTo(o2s[0]) == 0)
            return o1s[1].compareTo(o2s[1]);
        else
            return -o1s[0].compareTo(o2s[0]);
    }
});

System.out.println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + sortByKeyRDD.collect());

The code above has no syntax problems, but running it produced the following error:

java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:248)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:166)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:166)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:66)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1891)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1764)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1779)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:335)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)

So I went back to the Spark Java API documentation, but found nothing that pointed to the cause of the error. Fine, time to dig into the source. In JavaPairRDD:

def sortByKey(comp: Comparator[K], ascending: Boolean): JavaPairRDD[K, V] = {
  implicit val ordering = comp // Allow implicit conversion of Comparator to Ordering.
  fromRDD(new OrderedRDDFunctions[K, V, (K, V)](rdd).sortByKey(ascending))
}

In fact, OrderedRDDFunctions has an implicit ordering field: private val ordering = implicitly[Ordering[K]]. That is the default sort rule, and the comp we pass in replaces it. So far nothing looked wrong, but then I noticed that OrderedRDDFunctions extends Logging with Serializable. Going back to the error above and scanning for "serializable", it clicked: the anonymous Comparator in the code above does not implement Serializable, yet Spark has to serialize it (as the implicit Ordering) in order to ship it to the executors. The fix is simply to use a comparator that is serializable, for example: public interface SerializableComparator<T> extends Comparator<T>, Serializable { }
Concretely:
// A Comparator that also implements Serializable, so Spark can serialize it
// along with the closure and ship it to the executors.
private static class Comp implements Comparator<String>, Serializable {
    @Override
    public int compare(String o1, String o2) {
        String[] o1s = o1.split(" ");
        String[] o2s = o2.split(" ");
        if (o1s[0].compareTo(o2s[0]) == 0)
            return o1s[1].compareTo(o2s[1]);
        else
            return -o1s[0].compareTo(o2s[0]);
    }
}

JavaPairRDD<String, Integer> sortByKeyRDD = javaPairRDD.sortByKey(new Comp());
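
As an alternative to a dedicated class, the SerializableComparator interface quoted above can also be used directly. This is only a minimal sketch, assuming Java 8 lambdas are available; the variable names are illustrative and not part of the original post:

import java.io.Serializable;
import java.util.Comparator;

// A Comparator that is also Serializable, so lambdas targeting it are serializable too.
public interface SerializableComparator<T> extends Comparator<T>, Serializable { }

// The same ordering as Comp, supplied as a lambda:
SerializableComparator<String> comp = (o1, o2) -> {
    String[] o1s = o1.split(" ");
    String[] o2s = o2.split(" ");
    if (o1s[0].compareTo(o2s[0]) == 0)
        return o1s[1].compareTo(o2s[1]);
    return -o1s[0].compareTo(o2s[0]);
};

JavaPairRDD<String, Integer> lambdaSortedRDD = javaPairRDD.sortByKey(comp);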

To sum up: whenever a Comparator is passed to Spark's Java API, check whether it needs to be serializable. Operations such as sortByKey() and repartitionAndSortWithinPartitions() both serialize the comparator, so it must implement Serializable.
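
As a rough illustration (not from the original post), the sketch below reuses the serializable Comp with repartitionAndSortWithinPartitions(); the HashPartitioner and the partition count of 2 are arbitrary choices made for the example:

import org.apache.spark.HashPartitioner;

// Assumes javaPairRDD and Comp are defined as in the examples above.
// repartitionAndSortWithinPartitions also ships the comparator to the executors,
// so a non-serializable anonymous Comparator would fail here in the same way.
JavaPairRDD<String, Integer> partitionSortedRDD =
        javaPairRDD.repartitionAndSortWithinPartitions(new HashPartitioner(2), new Comp());

System.out.println(partitionSortedRDD.collect());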

Original article:

https://www.jianshu.com/p/37231b87de81?utm_campaign=maleskine&utm_content=note&utm_medium=pc_all_hots&utm_source=recommendation

