Druid Installation and Deployment

I. Environment Requirements

Java 8 (8u92+)

Linux, Mac OS X, or another Unix-like OS (Windows is not supported)

ZooKeeper (3.4+)
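The Java requirement can be checked quickly:

java -version   # should report 1.8.0_92 or later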

Downloading Druid:

Official site: https://druid.apache.org. Click "Download" on the home page to reach the download page; the latest version at the time of writing is 0.16. Here we download the binary (pre-built) release.

To download an older release, scroll down and click "Apache release archives" to reach the archive download page.
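The release can also be fetched from the command line. The exact archive URL below is an assumption based on the Apache archive layout, so verify it against the download page:

# Fetch the 0.16.0-incubating binary release (URL assumed; confirm on the download page)
wget https://archive.apache.org/dist/incubator/druid/0.16.0-incubating/apache-druid-0.16.0-incubating-bin.tar.gz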

 

II. Single-Node Installation and Deployment

1. Upload the package to the server and extract it:

tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz -C /data

2. Download the ZooKeeper package, upload it to the server, extract it into Druid's root directory, and rename the extracted directory to zk:

tar -xzf zookeeper-3.4.6.tar.gz -C /data/apache-druid-0.16.0-incubating

mv /data/apache-druid-0.16.0-incubating/zookeeper-3.4.6 /data/apache-druid-0.16.0-incubating/zk

3. Enter the Druid installation directory and run the single-node startup script:

  cd /data/apache-druid-0.16.0-incubating

  ./bin/start-micro-quickstart

4. Visit http://localhost:8888 to open the Druid web console.
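As a command-line sanity check, the Router's status endpoint should return a JSON blob with the version once startup completes:

curl http://localhost:8888/status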

III. Cluster Installation and Deployment

Deployment plan:

Role         | Machine                    | Cluster roles (processes)
Master node  | hadoop1 (192.168.252.111)  | Coordinator, Overlord
Data node    | hadoop2 (192.168.252.112)  | Historical, MiddleManager
Query node   | hadoop3 (192.168.252.113)  | Broker, Router

External dependencies:

Component  | Machine(s)                 | Purpose
ZooKeeper  | hadoop1, hadoop2, hadoop3  | Distributed coordination
Hadoop     | hadoop1, hadoop2, hadoop3  | Deep storage for data (segment) files
MySQL      | hadoop1                    | Metadata store

Note: for installing the external dependencies, consult the relevant documentation; from here on we assume they are already installed and running.
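For the metadata store specifically, Druid needs an empty database to write to. A minimal sketch of that one-time setup on hadoop1, assuming the root user and the database name druid that the configuration below expects:

# Create the metadata database (name must match the connectURI configured later)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS druid DEFAULT CHARACTER SET utf8mb4;"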

1. Upload the package to hadoop1, extract it, and enter the installation directory:

tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz -C /data

cd /data/apache-druid-0.16.0-incubating

2. Edit the core configuration file as follows:

vim conf/druid/cluster/_common/common.runtime.properties

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["mysql-metadata-storage","druid-hdfs-storage","druid-kafka-indexing-service"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies


#
# Hostname
#
druid.host=localhost

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=hadoop1:2181,hadoop2:2181,hadoop3:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=localhost
#druid.metadata.storage.connector.port=1527

# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://hadoop1:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://testcluster/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://testcluster/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=noop
druid.emitter.logging.logLevel=info

# Storage type of double columns
# ommiting this will lead to index double as float at the storage layer

druid.indexing.doubleStorage=double

#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]


#
# SQL
#
druid.sql.enable=true

#
# Lookups
#
druid.lookup.enableLookupSyncOnStartup=false

The settings to focus on (highlighted in red in the original post) are druid.extensions.loadList, the ZooKeeper hosts, the metadata-storage block, and the deep-storage and indexing-log blocks. druid.extensions.loadList declares the external extensions to load; this example depends on the external components zookeeper, hadoop, and mysql (the Kafka indexing-service extension is optional). Also note druid.host: in a cluster, each machine must announce its own hostname, so set it per machine or remove the line to let Druid auto-detect it.

ZooKeeper: the default build bundles client libraries for ZooKeeper 3.4.14. If your external ZooKeeper version is incompatible with these, copy the matching ZooKeeper jars into Druid's lib directory, replacing the bundled ones.

Hadoop: the default build bundles a Hadoop 2.8.3 client. Copy the Hadoop cluster's core configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) into Druid's conf/druid/cluster/_common/ directory. If the bundled Hadoop client is incompatible with your Hadoop cluster, also copy compatible Hadoop client jars into Druid's extensions/druid-hdfs-storage directory, replacing the originals.

MySQL: not supported out of the box; copy the MySQL JDBC driver jar matching your MySQL version into the extensions/mysql-metadata-storage directory, as sketched below.
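A sketch of those copy steps, assuming the Hadoop client configuration lives in /etc/hadoop/conf and using an illustrative MySQL driver jar name; adjust both to your environment:

# Hadoop client config -> Druid's common config directory
cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
   /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/yarn-site.xml \
   conf/druid/cluster/_common/

# MySQL JDBC driver -> metadata-storage extension directory (jar name assumed)
cp mysql-connector-java-5.1.48.jar extensions/mysql-metadata-storage/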

3. Copy the entire Druid directory to hadoop2 and hadoop3, for example as sketched below.
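A sketch assuming passwordless SSH between the hosts; note the per-machine druid.host fix-up mentioned above (do the same for hadoop1 locally):

rsync -a /data/apache-druid-0.16.0-incubating hadoop2:/data/
rsync -a /data/apache-druid-0.16.0-incubating hadoop3:/data/

# Point druid.host at each machine's own hostname
ssh hadoop2 "sed -i 's/^druid.host=.*/druid.host=hadoop2/' /data/apache-druid-0.16.0-incubating/conf/druid/cluster/_common/common.runtime.properties"
ssh hadoop3 "sed -i 's/^druid.host=.*/druid.host=hadoop3/' /data/apache-druid-0.16.0-incubating/conf/druid/cluster/_common/common.runtime.properties"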

4. Adjust the master-node configuration on hadoop1 as needed (machine specs, performance requirements). It lives in conf/druid/cluster/master/coordinator-overlord: runtime.properties holds the Druid properties for the process, and jvm.config holds its JVM options. The default cluster JVM settings are fairly demanding; if you are only learning or testing on an under-provisioned machine, you can copy over the single-node settings from conf/druid/single-server/micro-quickstart/coordinator-overlord.

5. Likewise, adjust the data-node configuration on hadoop2 as needed, under conf/druid/cluster/data. This node runs two processes, historical and middleManager; modify their configurations as required, and as in step 4 you can fall back to the single-node settings for testing.

6. Likewise, adjust the query-node configuration on hadoop3 as needed, under conf/druid/cluster/query. This node runs two processes, broker and router; modify their configurations as required, and as in step 4 you can fall back to the single-node settings for testing.

Note: the default cluster configuration demands far more resources than a typical learning/test VM provides; the author ran all tests with the single-node settings, as sketched below.
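For example, on the master node the micro-quickstart JVM settings (which carry the memory limits) can be dropped in like this; the same pattern applies to historical and middleManager under the data directory, and to broker and router under the query directory:

cp conf/druid/single-server/micro-quickstart/coordinator-overlord/jvm.config \
   conf/druid/cluster/master/coordinator-overlord/jvm.config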

7. Start the services (ZooKeeper runs externally, so the master uses the no-zk variant of the start script):

hadoop1: sh bin/start-cluster-master-no-zk-server > master.log &

hadoop2: sh bin/start-cluster-data-server > data.log &

hadoop3: sh bin/start-cluster-query-server > query.log &

8. Shutdown: on each node, run bin/service --down to stop all Druid processes on that node.
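A minimal invocation (run the script directly rather than via sh; see Common Problems below):

cd /data/apache-druid-0.16.0-incubating
bin/service --down   # stops all Druid processes on this node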

9. Druid web console addresses:

http://192.168.252.113:8888/ (the Router console on hadoop3)

http://192.168.252.111:8081/ (the Coordinator console on hadoop1)
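Both consoles can also be checked from the command line via their status endpoints:

curl http://192.168.252.113:8888/status   # Router on hadoop3
curl http://192.168.252.111:8081/status   # Coordinator on hadoop1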

 

Common Problems

1. Do not invoke the shutdown command as sh bin/service --down; the service script has a defect in how it reads its arguments when run that way. Execute it directly as bin/service --down.

2. bin/verify-default-ports is the script that checks for port conflicts at startup. If you have changed a component's port and startup still reports the port as occupied, inspect this script and update it to match (this typically comes up in the single-node setup).

3. If the startup log shows "Unrecognized VM option 'ExitOnOutOfMemoryError'", the JVM is older than 8u92 and does not recognize that option (this is why Java 8 (8u92+) is required). Upgrade the JDK, or remove the -XX:+ExitOnOutOfMemoryError line from the affected process's jvm.config.

Source: https://blog.csdn.net/cb2474600377/article/details/103577796

