Part One: filter, map, and flatMap exercises:
1. Read the text file to create the RDD lines
lines = sc.textFile("file:///usr/local/spark/mycode/rdd/word.txt")
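A quick sanity check (an optional sketch; first() and count() are standard RDD actions) confirms the file loaded:
lines.first()   # show the first line of the file
lines.count()   # number of lines read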
2. Split the lines of text into the RDD of words
words = lines.flatMap(lambda line: line.split())
words.collect()   # collect() only views the result; keep words as an RDD for the later steps
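For contrast (a minimal sketch), map keeps one element per input line, producing an RDD of word lists, while flatMap flattens those lists into a single RDD of words:
lines.map(lambda line: line.split()).collect()       # nested: one list of words per line
lines.flatMap(lambda line: line.split()).collect()   # flat: one word per element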
3. Convert all words to lowercase
words.pipe("tr 'A-Z' 'a-z'").collect()   # words is still an RDD, so it can be piped directly
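pipe() forks an external tr process per partition; a pure-Python alternative (a sketch, equivalent for ASCII text) uses str.lower:
words.map(lambda word: word.lower()).collect()   # same result without an external process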
4. Remove words shorter than 3 characters
words.filter(lambda word: len(word) >= 3).collect()   # keep only words of length 3 or more
5.去掉停用詞
with open('/usr/local/spark/mycode/rdd/stopwords.txt')as f:
stops=f.read().split()
words.filter(lambda word:word not in stops).collect()
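Steps 2 through 5 can also be chained into a single pipeline (a sketch; lowercasing here uses map rather than pipe):
result = (lines.flatMap(lambda line: line.split())        # split lines into words
               .map(lambda word: word.lower())            # lowercase
               .filter(lambda word: len(word) >= 3)       # drop short words
               .filter(lambda word: word not in stops))   # drop stopwords
result.collect()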
Part Two: groupByKey exercises
6. Create the word key-value pairs from Exercise One
words = sc.parallelize([("Hadoop",1),("is",1),("good",1),("Spark",1),("is",1),("fast",1),("Spark",1),("is",1),("better",1)])
7. Group the pairs by word
words1 = words.groupByKey()
8. View the grouped result
words1.foreach(print)   # each value is a ResultIterable of the grouped counts
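A ResultIterable prints as an opaque object; mapValues(list) materializes each group for display (the order of the pairs in the output may vary):
words1.mapValues(list).collect()
# e.g. [('Hadoop', [1]), ('is', [1, 1, 1]), ('good', [1]), ('Spark', [1, 1]), ('fast', [1]), ('better', [1])]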
Part Three: exercises on the student course-score file:
0. Upload the data file and read the university CS department's score dataset into an RDD
lines = sc.textFile('file:///usr/local/spark/mycode/rdd/chapter4-data01.txt')
1. Inspect the first 5 records (each line has the form name,course,score)
lines.take(5)
2. Group all course scores by student
groupByName = lines.map(lambda line: line.split(',')).map(lambda line: (line[0], (line[1], line[2]))).groupByKey()
groupByName.take(5)
groupByName.first()
for i in groupByName.first()[1]:   # iterate over the first student's (course, score) pairs
    print(i)
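To aggregate within each group, for example each student's average score (a sketch; scores are cast to float because textFile yields strings, and len() works because groupByKey materializes each group):
avgByName = groupByName.mapValues(lambda pairs: sum(float(score) for _, score in pairs) / len(pairs))
avgByName.take(5)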
3. Group the students' scores by course
groupByCourse = lines.map(lambda line: line.split(',')).map(lambda line: (line[1], (line[0], line[2]))).groupByKey()
groupByCourse.first()
for i in groupByCourse.first()[1]:   # iterate over the first course's (name, score) pairs
    print(i)
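In the same way, the per-course grouping can answer how many students took each course (a sketch):
groupByCourse.mapValues(len).take(5)   # number of (name, score) pairs, i.e. students, per course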