The "Task not serializable" error usually appears because the function passed to map, filter, etc. uses an external variable that cannot be serialized. In particular, referencing a member function or field of some class (often the current class) forces every member of that class (the whole class) to be serializable. The most common ways to fix this are:
- If possible, define the dependent variables inside the function passed to map, filter, etc.; that way even a class that does not support serialization can be used (see the first sketch after this list);
- If possible, move the dependent variables into a small class of their own and make that class serializable; this also reduces the amount of data shipped over the network and improves efficiency (see the second sketch after this list);
- If possible, mark the non-serializable parts of the referenced class with the transient keyword, telling the serialization machinery that they do not need to be serialized (also shown in the second sketch).
- Make the referenced class serializable.
- The following two I have not tried myself (the first is sketched after the pasted excerpt below):
- Make the NotSerializable object static so that it is created once per machine.
- Call rdd.foreachPartition and create the NotSerializable object inside it, as shown in the pasted code below.
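As a concrete sketch of the first point (and of how referencing a member of the enclosing class drags the whole class into the closure), consider the following Java snippet; the class name CleaningJob and its prefix field are invented for this example:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CleaningJob {                      // note: does NOT implement Serializable

    private String prefix = ">> ";              // member field of the non-serializable class

    public JavaRDD<String> tag(JavaSparkContext sc) {
        JavaRDD<String> rdd = sc.textFile("/tmp/myfile");

        // BAD: using "prefix" directly captures "this", so Spark tries to
        // serialize the whole CleaningJob instance and fails:
        // return rdd.map(s -> prefix + s);

        // OK: copy the value into a local variable (or define it inside the
        // lambda); the closure then only captures a serializable String.
        final String localPrefix = prefix;
        return rdd.map(s -> localPrefix + s);
    }
}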
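The second and third points can be combined: put the needed state into a small class that implements Serializable, and mark anything that cannot (or need not) be shipped as transient so it is rebuilt on the executor. A minimal sketch with invented names (LookupConfig, ExpensiveClient):

import java.io.Serializable;

// Small carrier class: only what the closure actually needs is shipped to the executors.
public class LookupConfig implements Serializable {

    // Stand-in for some heavyweight, non-serializable resource (hypothetical).
    public static class ExpensiveClient {
        private final String table;
        public ExpensiveClient(String table) { this.table = table; }
        public String get(String key) { return table + ":" + key; }
    }

    private final String tableName;             // serialized and shipped with the task

    // transient: skipped during serialization; rebuilt lazily on each executor
    private transient ExpensiveClient client;

    public LookupConfig(String tableName) {
        this.tableName = tableName;
    }

    public String lookup(String key) {
        if (client == null) {                    // null again after deserialization
            client = new ExpensiveClient(tableName);
        }
        return client.get(key);
    }
}

Inside a transformation this would be used as rdd.map(s -> config.lookup(s)), where config is a LookupConfig created on the driver.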
==================
If you see this error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: ...
The above error can be triggered when you initialize a variable on the driver (master), but then try to use it on one of the workers. In that case, Spark will try to serialize the object to send it over to the worker, and fail if the object is not serializable. Consider the following code snippet:
NotSerializable notSerializable = new NotSerializable();
JavaRDD<String> rdd = sc.textFile("/tmp/myfile");
rdd.map(s -> notSerializable.doSomething(s)).collect();
This will trigger that error. Here are some ideas to fix this error:
- Make the class Serializable
- Declare the instance only within the lambda function passed in map.
- Make the NotSerializable object static so that it is created once per machine.
- Call rdd.foreachPartition and create the NotSerializable object in there like this:
rdd.foreachPartition(iter -> {
    // the object is created on the executor, once per partition, and is never serialized
    NotSerializable notSerializable = new NotSerializable();
    // ... now process iter
});
Pasted from: <http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/javaionotserializableexception.html>
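For the "make it static, once per machine" idea, here is a minimal sketch (NotSerializable is stubbed out so the snippet is self-contained):

public class NotSerializableHolder {

    // Stub standing in for the real non-serializable class (hypothetical).
    public static class NotSerializable {
        public String doSomething(String s) { return s.trim(); }
    }

    // Static fields are not part of the closure; the instance is created when
    // this class is first loaded on each executor JVM ("once per machine").
    private static final NotSerializable INSTANCE = new NotSerializable();

    public static NotSerializable get() {
        return INSTANCE;
    }
}

The lambda then calls NotSerializableHolder.get().doSomething(s); only the class reference, not the object itself, is captured.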
In addition, the answer at http://stackoverflow.com/questions/25914057/task-not-serializable-exception-while-running-apache-spark-job on Stack Overflow explains this very clearly and concisely.