DolphinScheduler 2.0.0 Visual Workflow Task Scheduling Platform Deployment


Download DolphinScheduler:

wget --no-check-certificate  https://dlcdn.apache.org/dolphinscheduler/2.0.0-alpha/apache-dolphinscheduler-2.0.0-alpha-bin.tar.gz
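
After downloading, unpack the archive; a minimal sketch, assuming /usr/local/src as the working directory and that the directory inside the tarball matches the tarball name (adjust to your own layout):

# unpack and give the directory a shorter name
tar -xzf apache-dolphinscheduler-2.0.0-alpha-bin.tar.gz -C /usr/local/src/
mv /usr/local/src/apache-dolphinscheduler-2.0.0-alpha-bin /usr/local/src/apache-dolphinscheduler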

 

DolphinScheduler cluster deployment

Prerequisites && preparing the DolphinScheduler runtime environment

 

1.  JDK (1.8+)
2.  MySQL 5.7
3.  ZooKeeper
4.  yum install psmisc   # process tree analysis
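
A quick sanity check that the prerequisites are in place (a sketch; hostnames, ports, and whether ZooKeeper allows four-letter-word commands are assumptions about your environment):

java -version                   # expect 1.8 or newer
mysql --version                 # client only; the server should be MySQL 5.7
echo ruok | nc localhost 2181   # ZooKeeper liveness check, expects "imok"
pstree -V                       # confirms psmisc is installed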

 

 

Configure the user and permissions

# Create the deployment user (must be run as root)
useradd dolphinscheduler

# Set its password
echo "dolphinscheduler" | passwd --stdin dolphinscheduler

# Configure passwordless sudo
sed -i '$adolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' /etc/sudoers
sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers

# Change ownership so the deployment user can operate on the dolphinscheduler-bin directory
chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
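
To verify the sudoers change, list the privileges granted to the deployment user (run as root; a simple check, not part of the official procedure):

sudo -l -U dolphinscheduler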

 

Set up SSH keys

su dolphinscheduler

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
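
install.sh reaches the other machines over SSH, so the deployment user also needs passwordless login to every host listed later in install_config.conf. A minimal sketch, assuming the hostnames hdp0..hdp3 from that file:

# run as dolphinscheduler; answer the password prompt once per host
for host in hdp0 hdp1 hdp2 hdp3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub dolphinscheduler@$host
done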

 

Initialize the database

mysql -uroot -p

mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

mysql> CREATE USER 'dolphinscheduler'@'%'  IDENTIFIED BY 'dolphinscheduler';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' ;

mysql> source /usr/local/src/apache-dolphinscheduler/sql/dolphinscheduler_mysql.sql;
mysql> flush privileges;
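
Before moving on, it can be worth confirming that the new account can reach the schema; a hedged check using the database host configured later (172.31.115.17):

mysql -udolphinscheduler -pdolphinscheduler -h 172.31.115.17 \
      -e "USE dolphinscheduler; SHOW TABLES;"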

 

Copy the MySQL driver

Copy mysql-connector-java into the DolphinScheduler lib directory:

[dolphinscheduler@hdp0 ~]$ scp mysql-connector-java.jar dolphinscheduler@hdp1:/data/apache-dolphinscheduler/lib/
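
If the driver has to reach more than one node directly (rather than relying on install.sh to sync the whole directory), a hedged loop version of the scp above, reusing the example hostnames and path:

for host in hdp0 hdp1 hdp2 hdp3; do
  scp mysql-connector-java.jar dolphinscheduler@$host:/data/apache-dolphinscheduler/lib/
done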

 

Modify the configuration

Go to /data/tools/apache-dolphinscheduler/conf/config and edit install_config.conf:


#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IP: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
ips="hdp0,hdp1,hdp2,hdp3"

# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort="22"

# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: ips="ds1,ds2", Example for IP: ips="192.168.8.1,192.168.8.2"
masters="hdp0,hdp1"

# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>.All hostname or IP must be a
# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
# Example for hostnames: ips="ds1:default,ds2:default,ds3:default", Example for IP: ips="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
workers="hdp2:default,hdp3:default"

# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostnames: ips="ds3", Example for IP: ips="192.168.8.3"
alertServer="hdp0"

# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostnames: ips="ds1", Example for IP: ips="192.168.8.1"
apiServers="hdp1,hdp0"

# The directory to install DolphinScheduler for all machine we config above. It will automatically created by `install.sh` script if not exists.
# **DO NOT** set this configuration same as the current path (pwd)
#installPath="/data1_1T/dolphinscheduler"
installPath="/data/dolphinscheduler"

# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before run `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
# to be created by this user
deployUser="dolphinscheduler"

# The directory to store local data for all machine we config above. Make sure user `deployUser` have permissions to read and write this directory.
#dataBasedirPath="/tmp/dolphinscheduler"
dataBasedirPath="/data/dolphinscheduler/dolphinschedulerdata"

# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# JAVA_HOME, we recommend use same JAVA_HOME in all machine you going to install DolphinScheduler
# and this configuration only support one parameter so far.
javaHome="/data/jdk"

# DolphinScheduler API service port, also this your DolphinScheduler UI component's URL port, default values is 12345
apiServerPort="12345"

# ---------------------------------------------------------
# Database
# NOTICE: If database value has special characters, such as `.*[]^${}\+?|()@#&`, Please add prefix `\` for escaping.
# ---------------------------------------------------------
# The type for the metadata database
# Supported values: ``postgresql``, ``mysql``.
dbtype="mysql"

# The <HOST>:<PORT> connection pair DolphinScheduler connect to the metadata database
dbhost="172.31.115.17:3306"

# The username DolphinScheduler connect to the metadata database
username="dolphinscheduler"

# The password DolphinScheduler connect to the metadata database
password="dolphinscheduler"

# The database DolphinScheduler connect to the metadata database
dbname="dolphinscheduler"

# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registry Server plugin dir. DolphinScheduler will find and load the registry plugin jar package from this dir.
# For now default registry server is zookeeper, so the default value is `lib/plugin/registry/zookeeper`.
# If you want to implement your own registry server, please see https://dolphinscheduler.apache.org/en-us/docs/dev/user_doc/registry_spi.html
registryPluginDir="lib/plugin/registry/zookeeper"

# Registry Server plugin name, should be a substring of `registryPluginDir`, DolphinScheduler use this for verifying configuration consistency
registryPluginName="zookeeper"

# Registry Server address.
registryServers="172.31.115.17:2181,172.31.115.18:2181,172.31.115.19:2181"

# The root of zookeeper, for now DolphinScheduler default registry server is zookeeper.
zkRoot="/dolphinscheduler"

# ---------------------------------------------------------
# Alert Server
# ---------------------------------------------------------
# Alert Server plugin dir. DolphinScheduler will find and load the alert plugin jar package from this dir.
alertPluginDir="lib/plugin/alert"

# ---------------------------------------------------------
# Worker Task Server
# ---------------------------------------------------------
# Worker Task Server plugin dir. DolphinScheduler will find and load the worker task plugin jar package from this dir.
taskPluginDir="lib/plugin/task"

# resource storage type: HDFS, S3, NONE
resourceStorageType="HDFS"

# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resourceUploadPath="/dolphinscheduler"

# if resourceStorageType is HDFS,defaultFS write namenode address,HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
# Note,s3 be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://hdp0.fengjian.com:8020"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# resourcemanager port, the default value is 8088 if not specified
resourceManagerHttpAddressPort="8088"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarnHaIps="hdp0.fengjian.com,hdp1.fengjian.com"

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
singleYarnIp=""

# who have permissions to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
hdfsRootUser="hdfs"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username,watch out the @ sign should followd by \\
keytabUserName="hdfs-mycluster\\@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"
# kerberos expire time, the unit is hour
kerberosExpireTime="2"

# use sudo or not
sudoEnable="true"

# worker tenant auto create
workerTenantAutoCreate="false"

 

If HDFS is deployed as an HA cluster, point defaultFS at the nameservice:

# if resourceStorageType is HDFS,defaultFS write namenode address,HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
# Note,s3 be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://Abcd:8020"

and copy the Hadoop client configuration files into the DolphinScheduler conf directory:

cp /data/hadoop/etc/core-site.xml /usr/local/src/dolphscheduler/config/
cp /data/hadoop/etc/hdfs-site.xml /usr/local/src/dolphscheduler/config/
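
Since resourceStorageType is HDFS, the resourceUploadPath (/dolphinscheduler) must already exist on HDFS and be writable, as the config comments note. A hedged preparation step, run with HDFS superuser rights:

sudo -u hdfs hdfs dfs -mkdir -p /dolphinscheduler
sudo -u hdfs hdfs dfs -chown dolphinscheduler:dolphinscheduler /dolphinscheduler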

 

 

 

 

Modify the environment variables

Go to /data/tools/apache-dolphinscheduler/conf/env and edit dolphinscheduler_env.sh:


export HADOOP_HOME=/usr/hdp/3.0.0.0-1634/hadoop
export HADOOP_CONF_DIR=/usr/hdp/3.0.0.0-1634/hadoop/etc/hadoop
export SPARK_HOME1=/usr/hdp/3.0.0.0-1634/spark2
export SPARK_HOME2=/usr/hdp/3.0.0.0-1634/spark2
#export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/data/jdk
export HIVE_HOME=/usr/hdp/3.0.0.0-1634/hive
#export FLINK_HOME=/opt/soft/flink
#export DATAX_HOME=/opt/soft/datax

export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
#export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH 
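
A quick way to confirm that the paths in dolphinscheduler_env.sh resolve on this machine (a sketch; it only checks that the binaries are reachable):

source /data/tools/apache-dolphinscheduler/conf/env/dolphinscheduler_env.sh
java -version
hadoop version
hive --version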

 

Adjust the JVM heap parameters

/data/tools/apache-dolphinscheduler/bin/dolphinscheduler-daemon.sh
Adjust the heap settings in this script according to the memory available on each node.
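
The variable names holding the JVM options differ between DolphinScheduler versions, so locate the heap settings before editing (path from above; the grep pattern is just a heuristic):

grep -n "Xms\|Xmx" /data/tools/apache-dolphinscheduler/bin/dolphinscheduler-daemon.sh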

 

Start DolphinScheduler

# copies the distribution to the other servers and starts all services
sh install.sh
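
After install.sh finishes, a hedged way to confirm each role is running on its node (the exact process names can vary slightly between versions):

# run on every host; expect MasterServer on the masters, WorkerServer/LoggerServer
# on the workers, and ApiApplicationServer/AlertServer where configured
jps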

 

Log in to DolphinScheduler

Open http://hdp0.fengjian.com:12345/dolphinscheduler in a browser to reach the system UI. The default username and password are admin / dolphinscheduler123.
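
If the page does not load, a quick check from the shell that the API server is listening on apiServerPort (hostname and port follow the configuration above):

curl -I http://hdp0.fengjian.com:12345/dolphinscheduler/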

 

Other service start/stop commands

# Stop all services in the cluster with one command
sh ./bin/stop-all.sh

# Start all services in the cluster with one command
sh ./bin/start-all.sh

# Start/stop the Master
sh ./bin/dolphinscheduler-daemon.sh stop master-server
sh ./bin/dolphinscheduler-daemon.sh start master-server

# Start/stop a Worker
sh ./bin/dolphinscheduler-daemon.sh start worker-server
sh ./bin/dolphinscheduler-daemon.sh stop worker-server

# Start/stop the API server
sh ./bin/dolphinscheduler-daemon.sh start api-server
sh ./bin/dolphinscheduler-daemon.sh stop api-server

# Start/stop the Logger
sh ./bin/dolphinscheduler-daemon.sh start logger-server
sh ./bin/dolphinscheduler-daemon.sh stop logger-server

# Start/stop the Alert server
sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server
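
When a component refuses to start or stops right away, the per-service logs under installPath are the first place to look; a sketch, assuming installPath=/data/dolphinscheduler (exact log file names can differ by version):

tail -f /data/dolphinscheduler/logs/dolphinscheduler-master.log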

 

