[Spark][Python] RDD flatMap operation example


An example of the RDD flatMap operation:

flatMap applies a function to every element (here, every line) of the source RDD and then "flattens" the result, so each input element can produce zero or more output elements.
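
To make the semantics concrete (this snippet is not part of the original session; the nums name is just for illustration), the function passed to flatMap must return an iterable, and the iterables produced for all input elements are concatenated into one flat RDD, whereas map keeps exactly one output element per input element:

nums = sc.parallelize([1, 2, 3])

# map: one output element per input element (a list per element here)
nums.map(lambda x: range(x)).collect()
# [[0], [0, 1], [0, 1, 2]]   (Python 2, as in the session below)

# flatMap: the returned iterables are concatenated into one flat RDD
nums.flatMap(lambda x: range(x)).collect()
# [0, 0, 1, 0, 1, 2]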

[training@localhost ~]$ hdfs dfs -put cats.txt
[training@localhost ~]$ hdfs dfs -cat cats.txt
The cat on the mat
The aardvark sat on the sofa


mydata=sc.textFile("cats.txt")

mydata.count()
Out[14]: 2

mydata.take(2)
Out[15]: [u'The cat on the mat', u'The aardvark sat on the sofa']


myflatdata=mydata.flatMap(lambda line: line.split(' '))
myflatdata.count()
Out[19]: 11

myflatdata.take(2)
Out[20]: [u'The', u'cat']

myflatdata.take(11)
Out[21]:
[u'The',
u'cat',
u'on',
u'the',
u'mat',
u'The',
u'aardvark',
u'sat',
u'on',
u'the',
u'sofa']
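
For comparison (this step is not in the original session; the mapdata name is ours), using map with the same split function keeps one element per input line, so the word lists stay nested and the count stays at 2 instead of 11:

mapdata = mydata.map(lambda line: line.split(' '))

mapdata.count()
# 2 -- still one element per input line

mapdata.take(2)
# [[u'The', u'cat', u'on', u'the', u'mat'],
#  [u'The', u'aardvark', u'sat', u'on', u'the', u'sofa']]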

 

