Apache Spark in Practice, Part 1 -- KafkaWordCount


Feel free to repost; please credit the source. -- 徽滬一郎

Overview

Developing Spark applications is very much a hands-on exercise, and a lot of the time can easily go into setting up and running the environment; a good walkthrough shortens the development cycle considerably. Spark Streaming integrates with many third-party systems, and the documentation on how to actually get the bundled examples running is neither plentiful nor detailed.

This post describes how to run KafkaWordCount, which involves setting up a Kafka cluster; I will try to describe the steps in as much detail as possible.

Setting up a Kafka cluster

Step 1: Download and extract Kafka 0.8.1.1

wget https://archive.apache.org/dist/kafka/0.8.1.1/kafka_2.10-0.8.1.1.tgz
tar zvxf kafka_2.10-0.8.1.1.tgz
cd kafka_2.10-0.8.1.1

Step 2: Start ZooKeeper

bin/zookeeper-server-start.sh config/zookeeper.properties

Step 3: Edit config/server.properties and add the following

host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=localhost

Step 4: Start the Kafka server

bin/kafka-server-start.sh config/server.properties

Step 5: Create a topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1  --topic test

Check whether the topic was created successfully:

bin/kafka-topics.sh --list --zookeeper localhost:2181

If everything is fine, the command returns test.

Step 6: Start a producer and send some messages

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 
## Once the producer is up, type the following lines to test it
This is a message
This is another message

Step 7: Start a consumer and receive the messages

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
### Once the consumer is up, and if everything works, it prints what was entered on the producer side
This is a message
This is another message

Running KafkaWordCount

The KafkaWordCount source file is located at examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala

It does contain usage instructions (reproduced below), but without some prior knowledge of Kafka it is hard to tell what these parameters mean or what values to supply.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka.KafkaUtils

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 *
 * Example:
 *    `$ bin/run-example \
 *      org.apache.spark.examples.streaming.KafkaWordCount zoo01,zoo02,zoo03 \
 *      my-consumer-group topic1,topic2 1`
 */
object KafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(zkQuorum, group, topics, numThreads) = args
    val sparkConf = new SparkConf().setAppName("KafkaWordCount")
    val ssc =  new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")

    val topicpMap = topics.split(",").map((_,numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicpMap).map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Minutes(10), Seconds(2), 2)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
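
The core of the example is just a few calls: KafkaUtils.createStream subscribes to the given topics through ZooKeeper (zkQuorum), joins the consumer group (group), and starts numThreads receiving threads per topic, while reduceByKeyAndWindow keeps a word count over a 10-minute window that slides every 2 seconds (the ssc.checkpoint call is needed because the inverse-reduce form of reduceByKeyAndWindow requires checkpointing). To make the role of the four arguments explicit, here is a minimal sketch assuming the same Spark 1.x streaming-kafka receiver API; the object name SimpleKafkaWordCount is my own, and the sketch replaces the sliding window with a plain per-batch count.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka.KafkaUtils

// A minimal sketch (hypothetical, not the bundled example): per-batch word count
// from Kafka, assuming the Spark 1.x streaming-kafka receiver API.
object SimpleKafkaWordCount {
  def main(args: Array[String]): Unit = {
    // e.g. localhost:2181 my-consumer-group test 1
    val Array(zkQuorum, group, topics, numThreads) = args

    val conf = new SparkConf().setAppName("SimpleKafkaWordCount")
    val ssc = new StreamingContext(conf, Seconds(2))

    // topic name -> number of consumer threads for that topic
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap

    // createStream returns (key, message) pairs; only the message body is needed
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    val counts = lines.flatMap(_.split(" ")).map((_, 1L)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}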

With the motivation for this post out of the way, let us look at how to run KafkaWordCount.

Step 1: Stop the kafka-console-producer and kafka-console-consumer started above

Step 2: Run KafkaWordCountProducer

bin/run-example org.apache.spark.examples.streaming.KafkaWordCountProducer localhost:9092 test 3 5

A quick explanation of the parameters: localhost:9092 is the address and port of the Kafka broker the producer sends to, test is the topic, 3 is the number of messages sent per second, and 5 is the number of words in each message.
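
KafkaWordCountProducer is defined in the same source file as KafkaWordCount; it builds messages of random digit "words" and pushes them to the broker at the requested rate. The following is a rough, hypothetical sketch of such a producer using the old Kafka 0.8 Scala producer API (kafka.producer.Producer); the object name SimpleWordProducer is mine, and this is an illustration of what the four arguments drive, not a copy of the bundled code.

import java.util.Properties

import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

// A rough sketch of a random-word producer on the Kafka 0.8 Scala producer API.
// Illustration only; not the bundled KafkaWordCountProducer.
object SimpleWordProducer {
  def main(args: Array[String]): Unit = {
    // e.g. localhost:9092 test 3 5, matching the run-example invocation above
    val Array(brokers, topic, messagesPerSec, wordsPerMessage) = args

    val props = new Properties()
    props.put("metadata.broker.list", brokers)                      // broker address and port
    props.put("serializer.class", "kafka.serializer.StringEncoder") // plain string messages
    val producer = new Producer[String, String](new ProducerConfig(props))

    while (true) {
      // build messagesPerSec messages, each made of wordsPerMessage random "words"
      val messages = (1 to messagesPerSec.toInt).map { _ =>
        val words = (1 to wordsPerMessage.toInt)
          .map(_ => scala.util.Random.nextInt(10).toString)
          .mkString(" ")
        new KeyedMessage[String, String](topic, words)
      }
      producer.send(messages: _*)
      Thread.sleep(1000) // roughly one batch per second
    }
  }
}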

Step 3: Run KafkaWordCount

 bin/run-example org.apache.spark.examples.streaming.KafkaWordCount localhost:2181 test-consumer-group test 1

A quick explanation of the parameters: localhost:2181 is the address ZooKeeper listens on, test-consumer-group is the name of the consumer group (the same value as the default group.id in $KAFKA_HOME/config/consumer.properties), test is the topic, and 1 is the number of consumer threads.

