Importing Search Data into Elasticsearch with Spark


I'm getting more and more forgetful, so I'd better write down what I did!

ES and Spark versions:

spark-1.6.0-bin-hadoop2.6

Elasticsearch for Apache Hadoop 2.1.2

With other versions, writing the index data may fail.

First, start ES, then load the es-hadoop jar into spark-shell:

cp elasticsearch-hadoop-2.1.2/dist/elasticsearch-spark* spark-1.6.0-bin-hadoop2.6/lib/
cd spark-1.6.0-bin-hadoop2.6/bin
./spark-shell --jars ../lib/elasticsearch-spark-1.2_2.10-2.1.2.jar
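Since spark-shell creates its SparkContext at startup, `es.*` settings can also be supplied on the command line; the es-hadoop docs note that Spark properties carrying the `spark.` prefix are passed through to the connector. A sketch of such a launch (the jar path follows the copy step above; the flag values are illustrative):

```shell
# Launch spark-shell with the es-hadoop connector jar and ES settings
# baked into the SparkConf from the start (spark.es.* is forwarded to
# es-hadoop as es.*).
./spark-shell --jars ../lib/elasticsearch-spark-1.2_2.10-2.1.2.jar \
  --conf spark.es.nodes=127.0.0.1 \
  --conf spark.es.index.auto.create=true
```

Settings applied this way are visible to every job run in that shell session, so nothing extra needs to be passed to `saveToEs`.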

The shell session goes like this:

import org.elasticsearch.spark._

// In spark-shell the SparkContext (sc) already exists, so a SparkConf
// built inside the shell is never picked up. Pass the es.* settings
// per write instead, via the config-map overload of saveToEs.
val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("OTP" -> "Otopeni", "SFO" -> "San Fran")
sc.makeRDD(Seq(numbers, airports)).saveToEs("spark/docs",
  Map("es.index.auto.create" -> "true", "es.nodes" -> "127.0.0.1"))
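The same `org.elasticsearch.spark._` import also adds an `esRDD` method to the SparkContext for reading documents back, per the es-hadoop docs. A minimal sketch, assuming ES is still running on localhost and the `spark/docs` index was written as above:

```scala
import org.elasticsearch.spark._

// Read every document in spark/docs back as (documentId, fieldMap) pairs.
val docs = sc.esRDD("spark/docs")

// Print each document's id and its source fields.
docs.collect().foreach { case (id, fields) => println(s"$id -> $fields") }
```

This is a quick way to verify the write from inside the same shell session, without leaving for the HTTP API.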

Then check the data in ES:

http://127.0.0.1:9200/spark/docs/_search?q=*

The result:

{
  "took": 71,
  "timed_out": false,
  "_shards": { "total": 5, "successful": 5, "failed": 0 },
  "hits": {
    "total": 2,
    "max_score": 1.0,
    "hits": [
      {
        "_index": "spark",
        "_type": "docs",
        "_id": "AVfhVqPBv9dlWdV2DcbH",
        "_score": 1.0,
        "_source": { "OTP": "Otopeni", "SFO": "San Fran" }
      },
      {
        "_index": "spark",
        "_type": "docs",
        "_id": "AVfhVqPOv9dlWdV2DcbI",
        "_score": 1.0,
        "_source": { "one": 1, "two": 2, "three": 3 }
      }
    ]
  }
}

References:

https://www.elastic.co/guide/en/elasticsearch/hadoop/2.1/spark.html#spark-installation

http://spark.apache.org/docs/latest/programming-guide.html

http://chenlinux.com/2014/09/04/spark-to-elasticsearch/

