DolphinScheduler 2.0.2 Installation and Deployment Pitfalls


1. Create the dolphinscheduler user; it must have sudo privileges.
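A minimal sketch of creating that user (assuming a CentOS-style system; the password and the sudoers drop-in file are just examples, adjust to your environment):

# create the deploy user and set a password
useradd dolphinscheduler
echo "dolphinscheduler" | passwd --stdin dolphinscheduler
# grant passwordless sudo via a drop-in file (assumes /etc/sudoers.d is enabled)
echo 'dolphinscheduler ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dolphinscheduler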

2. The machines where DolphinScheduler will be deployed need passwordless SSH to each other.
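A minimal sketch of setting up passwordless SSH (run as the deploy user on each machine; ds1/ds2/ds3 are this deployment's hostnames):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id ds1
ssh-copy-id ds2
ssh-copy-id ds3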

3. Extract the package, then cd dolphinscheduler-2.0.1/conf

  Since the metadata database for my DolphinScheduler deployment is MySQL, application-mysql.yaml and common.properties in the conf directory need to be modified.

 application-mysql.yaml
spring:
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    # change this to your own MySQL connection URL
    url: jdbc:mysql://ds1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
    username: dolphinscheduler
    password: dolphinscheduler
    hikari:
      connection-test-query: select 1
      minimum-idle: 5
      auto-commit: true
      validation-timeout: 3000
      pool-name: DolphinScheduler
      maximum-pool-size: 50
      connection-timeout: 30000
      idle-timeout: 600000
      leak-detection-threshold: 0
      initialization-fail-timeout: 1

3.5. Database initialization
Since MySQL is used as the metadata database, you need to add the mysql-connector-java driver jar to DolphinScheduler's lib directory.
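For example (the connector version below is only an illustration; use a mysql-connector-java jar that matches your MySQL server):

cp mysql-connector-java-8.0.16.jar dolphinscheduler-2.0.1/lib/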

# log in to MySQL
   mysql -uroot -p
   # create the dolphinscheduler database and a corresponding account/password
   mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
   mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
   mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
   mysql> flush privileges;
Replace {user} and {password} with the username and password you want to use.

4. Run the table-creation and base-data import script in the script directory
sh script/create-dolphinscheduler.sh
If the last line of the log shows "create DolphinScheduler success", the script ran successfully.
Check whether the corresponding tables have been created in the metadata database (a quick check follows).
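For example, using the database and account created above:

   mysql> use dolphinscheduler;
   mysql> show tables;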

Note: common.properties may not need to be modified on the first deployment. After running install.sh, you can change the configuration on the deployed machines and then restart all workers for it to take effect.

common.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# local directory for user data; make sure the directory exists and is readable/writable
data.basedir.path=/tmp/dolphinscheduler

# resource storage type: HDFS, S3, NONE
resource.storage.type=HDFS

# storage path for resources on HDFS
resource.upload.path=/dolphinscheduler

# whether kerberos is enabled
hadoop.security.authentication.startup.state=false

# java.security.krb5.conf path
#java.security.krb5.conf.path=/opt/krb5.conf

# login user from keytab username
login.user.keytab.username=hdfs-mycluster@ESZ.COM

# login user from keytab path
login.user.keytab.path=/u01/isi/application/bigsoft/ds-2.0.1/conf/hdfs.headless.keytab

# kerberos expire time, the unit is hour
kerberos.expire.time=2

# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js

# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=hdfs

# namenode address
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=hdfs://ds1:8020

# if resource.storage.type=S3, s3 endpoint
fs.s3a.endpoint=http://192.168.xx.xx:9010

# if resource.storage.type=S3, s3 access key
fs.s3a.access.key=xxxxxxxxxx

# if resource.storage.type=S3, s3 secret key
fs.s3a.secret.key=xxxxxxxxxx

# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s

# job history server address
# job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
yarn.job.history.status.address=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s

# datasource encryption, keep the default
# datasource encryption enable
datasource.encryption.enable=false

# datasource encryption salt
datasource.encryption.salt=!@#$%^&*

# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true

# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=

# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default

# system env path
#dolphinscheduler.env.path=env/dolphinscheduler_env.sh

# keep the default
# development state
development.state=false

# plugin dir, the default is fine
#datasource.plugin.dir config
datasource.plugin.dir=lib/plugin/datasource

cd dolphinscheduler-2.0.1/conf/env

5. Edit dolphinscheduler_env.sh and change the environment variable settings to match your own environment.

dolphinscheduler_env.sh
# In an Ambari/HDP environment, set the machine's environment variables manually first, then configure DolphinScheduler's env; components that are not deployed yet can simply be left commented out
# DolphinScheduler just concatenates commands and invokes each component according to this env file

export HADOOP_HOME=/usr/hdp/3.1.4.0-315/hadoop/
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/etc/hadoop
#export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/usr/hdp/3.1.4.0-315/spark2
#export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/usr/local/jdk1.8.0_112
export HIVE_HOME=/usr/hdp/3.1.4.0-315/hive
#export FLINK_HOME=
export DATAX_HOME=/u01/isi/application/bigsoft/datax

export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH

6. Edit the installation config file
vim /dolphinscheduler-2.0.0/conf/config/install_config.conf 

install_config.conf 
# here comes the main event: the file the deployment command reads


# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
# hostnames or IPs of the machines in the distributed cluster where DolphinScheduler will be installed
ips="ds1,ds2,ds3"

# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
# default SSH port
sshPort="22"

# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
# the machine(s) the master is deployed on; must be a subset of `ips`
masters="ds2"

# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>.All hostname or IP must be a
# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
# the worker group each worker belongs to; this can be changed later in each worker machine's conf
workers="ds1:default,ds2:default,ds3:default"

# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
# alert server host
alertServer="ds3"

# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
# API server host
apiServers="ds1"

# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
# Do not set this configuration same as the current path (pwd)
# installation directory
installPath="/u01/isi/application/bigsoft/ds-2.0.1"

# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
# to be created by this user
# deploy user; needs sudo privileges and permission to operate on HDFS; you can create a new one per the official docs or reuse an existing one
deployUser="isi"

# The directory to store local data for all machine we config above. Make sure user `deployUser` have permissions to read and write this directory.
dataBasedirPath="/tmp/dolphinscheduler"

# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# JAVA_HOME, we recommend use same JAVA_HOME in all machine you going to install DolphinScheduler
# and this configuration only support one parameter so far.
# JAVA_HOME path
javaHome="/usr/local/jdk1.8.0_112"

# DolphinScheduler API service port, also this is your DolphinScheduler UI component's URL port, default value is 12345
# API service port, which is also the port in the web UI URL; it can be changed later on the machine where the API server is deployed
apiServerPort="12345"

# ---------------------------------------------------------
# Database
# NOTICE: If database value has special characters, such as `.*[]^${}\+?|()@#&`, Please add prefix `\` for escaping.
# ---------------------------------------------------------
# The type for the metadata database
# Supported values: ``postgresql``, ``mysql`, `h2``.
# metadata database type
DATABASE_TYPE=${DATABASE_TYPE:-"mysql"}

# Spring datasource url, following <HOST>:<PORT>/<database>?<parameter> format, If you using mysql, you could use jdbc
# string jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8 as example
# JDBC URL of the metadata database
SPRING_DATASOURCE_URL=${SPRING_DATASOURCE_URL:-"jdbc:mysql://ds1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"}

# Spring datasource username
SPRING_DATASOURCE_USERNAME=${SPRING_DATASOURCE_USERNAME:-"dolphinscheduler"}

# Spring datasource password
SPRING_DATASOURCE_PASSWORD=${SPRING_DATASOURCE_PASSWORD:-"dolphinscheduler"}

# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registry Server plugin name, should be a substring of `registryPluginDir`, DolphinScheduler use this for verifying configuration consistency
registryPluginName="zookeeper"

# Registry Server address (the ZooKeeper addresses)
registryServers="ds1:2181,ds2:2181,ds3:2181"

# The root of zookeeper, for now DolphinScheduler default registry server is zookeeper.
# root node created in ZooKeeper
zkRoot="/dolphinscheduler"

# ---------------------------------------------------------
# Worker Task Server
# ---------------------------------------------------------
# Worker Task Server plugin dir. DolphinScheduler will find and load the worker task plugin jar package from this dir.
taskPluginDir="lib/plugin/task"

# resource storage type: HDFS, S3, NONE
resourceStorageType="HDFS"

# resource store on HDFS/S3 path, resource file will store to this hdfs path, self configuration, please make sure the directory exists on hdfs and has read write permissions. "/dolphinscheduler" is recommended
# storage path on HDFS
resourceUploadPath="/dolphinscheduler"

# if resourceStorageType is HDFS,defaultFS write namenode address,HA, you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
# Note,S3 be sure to create the root directory /dolphinscheduler
# namenode address
defaultFS="hdfs://ds1:8020"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# resourcemanager port, the default value is 8088 if not specified
resourceManagerHttpAddressPort="8088"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single node, keep this value empty
yarnHaIps=""

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single node, you only need to replace 'yarnIp1' to actual resourcemanager hostname
singleYarnIp="ds1"

# who has permission to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
# a user with permission to operate on HDFS
hdfsRootUser="hdfs"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username; note that the @ sign must be preceded by \\
keytabUserName="hdfs-mycluster\\@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"
# kerberos expire time, the unit is hour
kerberosExpireTime="2"

# use sudo or not
# enable sudo
sudoEnable="true"

# worker tenant auto create
# tenants need to be created manually; they are tied to the deploy user and the hdfs user
workerTenantAutoCreate="false"

7. Switch to the deploy user dolphinscheduler and then run the one-click deployment script. If you do not switch users, operations may fail due to insufficient permissions.

su dolphinscheduler
sh install.sh
After the deployment script finishes, the services started on each machine are as follows (you can verify with jps, as sketched after the list):
1: ApiApplicationServer WorkerServer LoggerServer 
2: WorkerServer 
3: LoggerServer WorkerServer AlertServer MasterServer 
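A quick verification sketch (run on every node; the processes shown should match the groups above):

jps | grep -E 'MasterServer|WorkerServer|ApiApplicationServer|AlertServer|LoggerServer'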

 

8. Other start/stop commands

# stop all cluster services with one command
sh ./bin/stop-all.sh

# start all cluster services with one command
sh ./bin/start-all.sh

# start/stop Master
sh ./bin/dolphinscheduler-daemon.sh stop master-server
sh ./bin/dolphinscheduler-daemon.sh start master-server

# start/stop Worker
sh ./bin/dolphinscheduler-daemon.sh start worker-server
sh ./bin/dolphinscheduler-daemon.sh stop worker-server

# start/stop Api
sh ./bin/dolphinscheduler-daemon.sh start api-server
sh ./bin/dolphinscheduler-daemon.sh stop api-server

# start/stop Logger
sh ./bin/dolphinscheduler-daemon.sh start logger-server
sh ./bin/dolphinscheduler-daemon.sh stop logger-server

# start/stop Alert
sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server

 

WebUI

Open the web UI at http://Ava01:12345/dolphinscheduler ; the login page should appear. The hostname is the machine where the ApiApplicationServer is deployed.

Default username/password: admin/dolphinscheduler123

 

 

Pitfalls

About worker.properties

This config file, installed on each machine, specifies which worker group that machine belongs to. When configuring version 1.3.8, I found that setting the corresponding worker group under the admin account sometimes did not take effect, and the master log showed:
[ERROR] 2021-12-08 15:37:00.405 org.apache.dolphinscheduler.server.master.consumer.TaskPriorityQueueConsumer:[154] - dispatch error: fail to execute : Command [type=TASK_EXECUTE_REQUEST, opaque=2988, bodyLen=1506] due to no suitable worker, current task needs worker group Ava03 to execute
To fix this error, manually stop the DS services, modify worker.properties on each machine, and then start the DS services again (a sketch follows the snippet below).
# worker.properties: this means the machine belongs to the two worker groups default and Ava01
worker.groups=default,Ava01
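A sketch of that fix on one worker machine (run from the install directory; the group names are just this example's):

# stop the worker, adjust its group membership, then start it again
sh ./bin/dolphinscheduler-daemon.sh stop worker-server
vim conf/worker.properties        # e.g. set worker.groups=default,Ava03
sh ./bin/dolphinscheduler-daemon.sh start worker-server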

Other notes:

1. Files in the Resource Center can only be uploaded through the web page; a jar copied directly into the Resource Center's directory will not be recognized.
2. The worker group mapped to a task takes priority over the worker group selected when starting the workflow, so make sure the environment the task needs is configured on that worker.
3. About parameters: inside a workflow node, use ${xx} to reference a parameter, but when passing the parameter below, drop the ${} and receive it as plain xx (see the figure).
4. About the 2.0 deployment: after manually adding the MySQL driver jar, SQL task nodes may still fail with a "driver class not found" error. The fix: manually copy the driver jar into the procedure plugin directory, or put it under lib/plugin/task/sql (see the sketch below).
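For example (the connector jar version is again only an illustration; the install path is the installPath from install_config.conf):

cp mysql-connector-java-8.0.16.jar /u01/isi/application/bigsoft/ds-2.0.1/lib/plugin/task/sql/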

 

 

 

 

Fixing the "driver class not found" problem in 2.0.0

The MySQL driver jar had already been added manually to the lib directory of the extracted dolphinscheduler package, and it is also present under the lib directory of the installed ds2 directory. In the web UI, a MySQL data source connects successfully, but as soon as a task reaches a SQL node it fails with "no JDBC driver".

Fix: download the MySQL driver jar from MySQL :: Download MySQL Connector/J (Archived Versions) and manually copy it into the lib directory of the extracted package.

 

 

2.0.1: creating a Hive data source fails with "Update Kerberos environment failed"

Fix: if you are not in a Kerberos environment, comment out the java.security.krb5.conf.path entry in conf/common.properties and restart the workers; creating the Hive data source will then succeed.
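That is, on every worker conf/common.properties should keep the line below commented out (as in the common.properties shown above), and then the workers are restarted with the daemon script:

#java.security.krb5.conf.path=/opt/krb5.conf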

 

2021.12.24: passwordless SSH just would not work no matter how it was configured. After digging through various resources, it turned out to be SELinux, which has to be disabled, either temporarily or permanently (a permanent-disable sketch follows the commands below).

#check the current SELinux status
getenforce
#temporarily disable SELinux (permissive mode)
setenforce 0
#temporarily re-enable SELinux (enforcing mode)
setenforce 1
#detailed steps:
https://www.cnblogs.com/liuzgg/p/11656532.html
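To disable SELinux permanently instead (a sketch; takes effect after a reboot):

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config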

 

