Spark Programming in Python: An Example




1. Developing, testing, and submitting a PySpark application in Jupyter Notebook

1.1. Launching

IPYTHON_OPTS="notebook" /opt/spark/bin/pyspark
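Note that IPYTHON_OPTS was removed in Spark 2.0. On Spark 2.x and later the notebook shell is launched through the driver-Python environment variables instead; a minimal sketch, assuming Jupyter is installed and on the PATH:

PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" /opt/spark/bin/pyspark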

[Screenshot: Jupyter Notebook started by pyspark on Ubuntu]
1.2. Download the application as a .py file (the notebook's default extension is .ipynb)
[Screenshot: downloading the notebook as a .py file]
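The same export can be done from the command line with nbconvert; a sketch, assuming the notebook is saved as pysparkdemo.ipynb as in section 4:

jupyter nbconvert --to script pysparkdemo.ipynb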

2. Submitting the application from the shell

wxl@wxl-pc:/opt/spark/bin$ ./spark-submit /home/wxl/Downloads/pysparkdemo.py

[Screenshot: spark-submit output]
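spark-submit can also set the master and application name itself; a minimal sketch (note that a master hard-coded in the script, as in this demo, takes precedence over the flag):

spark-submit --master local[2] --name "First Spark App" /home/wxl/Downloads/pysparkdemo.py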

3. Errors encountered and how to fix them


3.1. The error

ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=PySparkShell, master=local[*]) created by <module> at /usr/local/lib/python2.7/dist-packages/IPython/utils/py3compat.py:288

The PySpark shell already creates a SparkContext named sc at startup, so constructing a second one in the notebook raises this ValueError.

[Screenshot: the multiple-SparkContexts error]

3.2. The fix (runs successfully)

Add the following right after the import:

try:
    # stop the SparkContext that the PySpark shell created automatically
    sc.stop()
except:
    # sc is not defined (e.g. when running via spark-submit); nothing to stop
    pass
sc = SparkContext('local[2]', 'First Spark App')


The fix comes from Stack Overflow.
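On recent Spark versions (1.4+) an alternative that avoids stopping and recreating the context is SparkContext.getOrCreate, which simply returns the shell's existing context when there is one. A minimal sketch; note that the conf only takes effect if a new context actually has to be created:

from pyspark import SparkConf, SparkContext

# Reuse the SparkContext created by the PySpark shell if one exists;
# otherwise create a local context with two worker threads.
conf = SparkConf().setMaster('local[2]').setAppName('First Spark App')
sc = SparkContext.getOrCreate(conf)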

4. Source code

pysparkdemo.ipynb

{
 "cells": [
  { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pyspark import SparkContext" ] },
  { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "try:\n", "    sc.stop()\n", "except:\n", "    pass\n", "sc=SparkContext('local[2]','First Spark App')" ] },
  { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "data = sc.textFile(\"data/UserPurchaseHistory.csv\").map(lambda line: line.split(\",\")).map(lambda record: (record[0], record[1], record[2]))" ] },
  { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total purchases: 5\n" ] } ], "source": [ "numPurchases = data.count()\n", "print \"Total purchases: %d\" % numPurchases" ] },
  { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }
 ],
 "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } },
 "nbformat": 4,
 "nbformat_minor": 0
}

pysparkdemo.py


# coding: utf-8

# In[1]:

from pyspark import SparkContext


# In[2]:

try:
    sc.stop()
except:
    pass
sc=SparkContext('local[2]','First Spark App')


# In[3]:

data = sc.textFile("data/UserPurchaseHistory.csv").map(lambda line: line.split(",")).map(lambda record: (record[0], record[1], record[2]))


# In[4]:

numPurchases = data.count()
print "Total purchases: %d" % numPurchases


# In[ ]:

