I. Environment:
1. JDK 7 or above
2. Python 2.7.11
3. IDE: PyCharm
4. Package: spark-1.6.0-bin-hadoop2.6.tar.gz
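Before going further, the prerequisites above can be checked from a plain Python session. A minimal sketch (the file name check_env.py is arbitrary, and it assumes java is already on PATH):

#!/usr/bin/env python
# check_env.py -- minimal check of the prerequisites listed above
import sys
import subprocess

print "Python version:", sys.version.split()[0]   # expect 2.7.x

# `java -version` prints its output to stderr; a non-zero exit code
# (or an OSError) means no JDK is reachable on PATH.
print "java exit code:", subprocess.call(["java", "-version"])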
II. Setup
1. Extract spark-1.6.0-bin-hadoop2.6.tar.gz to D:\spark-1.6.0-bin-hadoop2.6
2. Add D:\spark-1.6.0-bin-hadoop2.6\bin to the Path environment variable. After that, running pyspark in cmd should start the interactive PySpark shell and print the Spark welcome banner; if it does, this step is complete.
3. Copy the pyspark package from D:\spark-1.6.0-bin-hadoop2.6\python into C:\Python27\Lib\site-packages
4. Install py4j: pip install py4j -i https://pypi.douban.com/simple (a quick import check is sketched after this list)
5. Configure the PyCharm environment variables (details in section III, step 3 below).
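Once steps 3 and 4 are done, both packages should be importable from any plain Python session. A minimal sketch to confirm this (the file name verify_setup.py is arbitrary):

# verify_setup.py -- confirm py4j and pyspark can be imported
import py4j
import pyspark

print "py4j loaded from:   ", py4j.__file__
print "pyspark loaded from:", pyspark.__file__

If the second line points at C:\Python27\Lib\site-packages\pyspark, the copy in step 3 worked.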
III. Example
1. Create a new Python file: wordCount.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from operator import add
from pyspark import SparkContext


def main():
    sc = SparkContext(appName="wordsCount")
    lines = sc.textFile('words.txt')
    # Split each line into words, pair every word with 1, then sum the 1s per word.
    counts = lines.flatMap(lambda x: x.split(' ')) \
                  .map(lambda x: (x, 1)) \
                  .reduceByKey(add)
    output = counts.collect()
    print output
    for (word, count) in output:
        print "%s: %i" % (word, count)
    sc.stop()


if __name__ == "__main__":
    main()
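In this script, flatMap splits each line into words, map turns every word into a (word, 1) pair, and reduceByKey(add) sums the 1s for each distinct word, so collect() returns a list of (word, count) tuples on the driver.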
2. The words.txt file referenced in the code contains the following line:
The dynamic lifestyle people lead nowadays causes many reactions in our bodies and the one that is the most frequent of all is the headache
3. Configure the Spark environment variable for the current run configuration:
3.1 In the toolbar, go to Run --> Edit Configurations, then click the "..." button next to Environment variables
3.2 Click +, then enter key: SPARK_HOME, value: D:\spark-1.6.0-bin-hadoop2.6 (an in-code alternative is sketched after this section)
4. Run the script; the list of word/count pairs is printed to the console.
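If you prefer not to edit the PyCharm run configuration, the same effect can be obtained in code. A minimal sketch under the assumption that SPARK_HOME only needs to be visible before pyspark is used (the path is the one from this post; adjust it to your machine):

import os

# Point Spark at the extracted distribution before importing pyspark.
# This mirrors the SPARK_HOME value set in the run configuration above.
os.environ["SPARK_HOME"] = r"D:\spark-1.6.0-bin-hadoop2.6"

from pyspark import SparkContext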
IV. Further practice:
1. Documentation: http://spark.apache.org/docs/latest/api/python/pyspark.html
2. The extracted Spark distribution ships many example programs to practice with, under D:\spark-1.6.0-bin-hadoop2.6\examples\src\main\python (a sketch in the same spirit follows below).
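For instance, the bundled examples include a Monte Carlo estimate of pi. A minimal sketch along the same lines (the sample count n and the file name are arbitrary choices, not taken from the bundled script):

#!/usr/bin/env python
# pi_estimate.py -- a sketch in the spirit of examples/src/main/python/pi.py
import random
from operator import add
from pyspark import SparkContext


def inside(_):
    # Draw a random point in the unit square; 1 if it lands in the quarter circle.
    x, y = random.random(), random.random()
    return 1 if x * x + y * y <= 1.0 else 0


if __name__ == "__main__":
    sc = SparkContext(appName="piEstimate")
    n = 100000  # number of random samples (arbitrary)
    count = sc.parallelize(xrange(n)).map(inside).reduce(add)
    print "Pi is roughly %f" % (4.0 * count / n)
    sc.stop()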
Author: 小閃電
Source: http://www.cnblogs.com/yueyanyu/
The copyright of this article belongs to the author and cnblogs. Reposting and discussion are welcome, but without the author's consent this statement must be retained and a link to the original must be given in a prominent place on the article page. If you found this article helpful, feel free to like it or leave a comment. Resources on this blog come from the Internet; if any of your rights are infringed, please contact the author for removal.
Original post: https://www.cnblogs.com/yueyanyu/p/6497956.html