Setting Up a YARN Cluster
Author: yinzhengjie (尹正傑)
Copyright notice: original work, reproduction is prohibited! Violations will be prosecuted.
I. Starting the HDFS Cluster
1>. Build an HDFS distributed cluster
Recommended reading from the author: https://www.cnblogs.com/yinzhengjie2020/p/12424192.html
2>. Start the HDFS cluster

[root@hadoop101.yinzhengjie.com ~]# manage-hdfs.sh start
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
starting namenode, logging to /yinzhengjie/softwares/hadoop-2.10.0-fully-mode/logs/hadoop-root-namenode-hadoop101.yinzhengjie.com.out
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
starting secondarynamenode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-secondarynamenode-hadoop105.yinzhengjie.com.out
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop104.yinzhengjie.com.out
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop103.yinzhengjie.com.out
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop102.yinzhengjie.com.out
Starting HDFS:                                             [  OK  ]
[root@hadoop101.yinzhengjie.com ~]# 

[root@hadoop101.yinzhengjie.com ~]# ansible all -m shell -a 'jps'
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
5565 Jps
5358 DataNode
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
5314 DataNode
5518 Jps
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
5291 SecondaryNameNode
5407 Jps
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
5280 NameNode
5584 Jps
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
5504 Jps
5298 DataNode
[root@hadoop101.yinzhengjie.com ~]# 
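Rather than eyeballing the jps listing above, the presence of each expected daemon can be asserted mechanically. A minimal grep-based sketch (the helper name `expect_daemon` is made up for illustration; the daemon names are the standard Hadoop ones):

```shell
# Check whether a given daemon appears in a captured 'jps' listing.
expect_daemon() {
    # $1 = jps output, $2 = daemon class name to look for
    echo "$1" | grep -qw "$2"
}

# Usage sketch on the NameNode host:
# jps_out=$(jps)
# expect_daemon "$jps_out" NameNode || echo "NameNode is missing!"
```

The same helper works against the per-host output of `ansible all -m shell -a 'jps'`.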
II. Editing the YARN Configuration File (yarn-site.xml)
1>. Edit the yarn-site.xml configuration file
[root@hadoop101.yinzhengjie.com ~]# vim ${HADOOP_HOME}/etc/hadoop/yarn-site.xml
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat ${HADOOP_HOME}/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

    <!-- Configure YARN to support the MapReduce framework -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>
            This property holds a list of auxiliary services, to support the different application frameworks that run under YARN. It is a comma-separated list of service names, where each name may contain only a-zA-Z0-9_ and must not start with a digit.
            Setting it tells the NodeManager that it must provide the auxiliary service named "mapreduce_shuffle", i.e. that MapReduce containers require a shuffle step between the map tasks and the reduce tasks.
            Because the shuffle is an auxiliary service rather than part of the NodeManager itself, its value must be set explicitly here; otherwise MR jobs cannot run.
            The default is empty. In this example mapreduce_shuffle is specified because only MapReduce-based jobs currently run on the cluster.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        <description>
            This parameter tells MapReduce how to perform the shuffle. The value given here, "org.apache.hadoop.mapred.ShuffleHandler" (which is in fact the default), tells YARN to use that class to carry out the shuffle.
            The class name supplied is the implementation of the service declared in the "yarn.nodemanager.aux-services" property.
        </description>
    </property>

    <!-- Configure the ResourceManager (RM) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop101.yinzhengjie.com</value>
        <description>The hostname of the RM. The default is "0.0.0.0".</description>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
        <description>
            The address of the scheduler interface, i.e. the address the RM exposes to ApplicationMasters, which use it to request and release resources. If unset, it defaults to "${yarn.resourcemanager.hostname}:8030".
        </description>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
        <description>The address the RM exposes to NodeManagers, which use it to send heartbeats and fetch tasks. If unset, it defaults to "${yarn.resourcemanager.hostname}:8031".</description>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
        <description>
            The address of the applications manager interface in the RM, i.e. the address the RM exposes to clients, which use it to submit and kill applications. If unset, it defaults to "${yarn.resourcemanager.hostname}:8032".
        </description>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
        <description>The address the RM exposes to administrators, who use it to send management commands. If unset, it defaults to "${yarn.resourcemanager.hostname}:8033".</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
        <description>The HTTP address of the RM web application. If only a host is given as the value, the webapp is served on a random port. If unset, it defaults to "${yarn.resourcemanager.hostname}:8088".</description>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        <description>The class to use as the resource scheduler. Currently FIFO, CapacityScheduler and FairScheduler are available.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
        <value>50</value>
        <description>The number of handler threads for RPC requests from NodeManagers. The default is 50.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.client.thread-count</name>
        <value>50</value>
        <description>The number of handler threads for RPC requests from ApplicationMasters. The default is 50.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.nodes.include-path</name>
        <value></value>
        <description>The path to a file listing the nodes to include, i.e. the whitelist. The default is empty. (This parameter is not required; it is shown here for familiarity so it can be configured later when needed.)</description>
    </property>
    <property>
        <name>yarn.resourcemanager.nodes.exclude-path</name>
        <value></value>
        <description>
            The path to a file listing the nodes to exclude, i.e. the blacklist. The default is empty. (Likewise not required; shown here for familiarity.)
            If some NodeManagers turn out to be problematic, e.g. high failure rates or many failed tasks, they can be added to the blacklist. Note that both of these parameters take effect dynamically (just run a refresh command).
        </description>
    </property>
    <property>
        <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>
        <value>3000</value>
        <description>The heartbeat interval, in milliseconds, for every NodeManager in the cluster. The default is 1000 ms (1 second).</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
        <description>
            The minimum allocation for every container request at the RM, in MB. Memory requests lower than this are raised to this value. In addition, if a NodeManager is configured with less memory than this, the RM shuts that NodeManager down.
            Here each container is allocated at least 2048 MB (2 GB); do not set this too high. The default is 1024 MB.
            Since yarn.nodemanager.resource.memory-mb is set to 81920 MB, memory limits each node to running at most 40 (81920/2048) containers at any given time.
        </description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>81920</value>
        <description>
            The maximum allocation for every container request at the RM, in MB. Memory requests higher than this throw an InvalidResourceRequestException.
            Here each container may use at most 81920 MB (80 GB); do not set this too low. The default is 8192 MB (8 GB).
        </description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>
            The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this are raised to this value. The default is 1.
            In addition, the RM shuts down NodeManagers configured with fewer virtual cores than this value.
        </description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>32</value>
        <description>
            The maximum allocation for every container request at the RM, in terms of virtual CPU cores. The default is 4.
            Requests higher than this throw an InvalidResourceRequestException.
        </description>
    </property>

    <!-- Configure the NodeManager (NM) -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>81920</value>
        <description>
            The total memory YARN may consume on each node (the amount of physical memory, in MB, that can be allocated to containers). In production, about 70% of the node's physical memory is a reasonable setting.
            If set to -1 (the default) and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is computed automatically (on Windows and Linux). Otherwise the default is 8192 MB (8 GB).
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>32</value>
        <description>
            The number of CPU cores that may be allocated to YARN containers. In theory this should be set lower than the number of physical cores on the node, but in practice one physical CPU can be counted as two, especially on CPU-intensive clusters.
            If set to -1 (the default) and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is computed automatically (on Windows and Linux). Otherwise the default is 8.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>3.0</value>
        <description>
            The upper bound on the ratio of virtual memory to physical memory for every map and reduce task in a YARN container (in other words, how much virtual memory may be used for every 1 MB of physical memory).
            The default is 2.1; a different value, such as 3.0, can be set. In production I usually disable the swap partition to avoid virtual memory altogether, so with swap disabled this parameter can, in my view, be ignored.
        </description>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
        <description>
            The NodeManager on each DataNode uses this property to aggregate application logs. The default is "false". With log aggregation enabled, Hadoop collects the logs of every container belonging to an application and moves these files to HDFS once the application finishes.
            The "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix" properties specify where in HDFS the aggregated logs are stored.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/yinzhengjie/logs/hdfs/</value>
        <description>
            The directory in HDFS where aggregated application log files are kept; the JobHistoryServer stores application logs here. The default is "/tmp/logs".
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>yinzhengjie-logs</value>
        <description>The remote log directory is created at "{yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}". The default is "logs".</description>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/yinzhengjie/logs/yarn/container1,/yinzhengjie/logs/yarn/container2,/yinzhengjie/logs/yarn/container3</value>
        <description>
            Where container logs are stored; the default is "${yarn.log.dir}/userlogs". This property specifies the paths on the Linux filesystem where YARN writes application log files. Several different mount points are usually configured to improve I/O performance.
            Since log aggregation is enabled above, YARN deletes the local files once an application finishes; they can then be accessed through the JobHistoryServer (from HDFS, where the logs were aggregated).
            In this example the directories are set under "/yinzhengjie/logs/yarn/"; only the NodeManager uses them.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/yinzhengjie/data/hdfs/nm1,/yinzhengjie/data/hdfs/nm2,/yinzhengjie/data/hdfs/nm3</value>
        <description>
            The list of directories used to store localized files (i.e. where intermediate results are kept; several different mount points are usually configured to improve I/O performance). The default is "${hadoop.tmp.dir}/nm-local-dir".
            YARN needs somewhere on the local filesystem to store its local files, such as MapReduce intermediate output. Several local directories can be given with this parameter; YARN's distributed cache also uses these local resource files.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
        <description>How long to retain user logs, in seconds. Applies only when log aggregation is disabled. The default is 10800 s (3 hours).</description>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value></value>
        <description>
            This property specifies where on the local filesystem the common Hadoop, YARN and HDFS JAR files needed to run applications on the cluster are located, as a comma-separated list of CLASSPATH entries.
            When the value is empty, the following default CLASSPATH for YARN applications is used.
            For Linux:
                $HADOOP_CONF_DIR,
                $HADOOP_COMMON_HOME/share/hadoop/common/*,
                $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
                $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
                $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
                $HADOOP_YARN_HOME/share/hadoop/yarn/*,
                $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
            For Windows:
                %HADOOP_CONF_DIR%,
                %HADOOP_COMMON_HOME%/share/hadoop/common/*,
                %HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,
                %HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,
                %HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,
                %HADOOP_YARN_HOME%/share/hadoop/yarn/*,
                %HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*
            An application's ApplicationMaster, and the application itself, both need to know where the HDFS, YARN and common Hadoop JAR files live on the local filesystem.
        </description>
    </property>
</configuration>
[root@hadoop101.yinzhengjie.com ~]# 
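A quick sanity check of the container capacity implied by the values above: memory alone allows 81920/2048 = 40 containers per node, but with 32 vcores and a 1-vcore minimum the tighter limit is 32. A back-of-the-envelope sketch in bash (the numbers mirror this yarn-site.xml; adjust them for your own hardware):

```shell
# Back-of-the-envelope check of per-node container capacity.
node_memory_mb=81920        # yarn.nodemanager.resource.memory-mb
min_container_mb=2048       # yarn.scheduler.minimum-allocation-mb
node_vcores=32              # yarn.nodemanager.resource.cpu-vcores
min_container_vcores=1      # yarn.scheduler.minimum-allocation-vcores

max_by_memory=$(( node_memory_mb / min_container_mb ))    # 81920/2048 = 40
max_by_vcores=$(( node_vcores / min_container_vcores ))   # 32/1      = 32

# The scheduler can never run more containers than the tighter limit allows.
if [ "$max_by_memory" -lt "$max_by_vcores" ]; then
    echo "max containers per node: ${max_by_memory} (memory-bound)"
else
    echo "max containers per node: ${max_by_vcores} (vcore-bound)"
fi
```

With these particular values the vcore limit (32) is tighter than the memory limit (40), so per-node concurrency tops out at 32 minimum-sized containers.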
2>. Distribute the configuration file with Ansible

[root@hadoop101.yinzhengjie.com ~]# tail -17 /etc/ansible/hosts
#Add by yinzhengjie for Hadoop.
[nn]
hadoop101.yinzhengjie.com

[snn]
hadoop105.yinzhengjie.com

[dn]
hadoop102.yinzhengjie.com
hadoop103.yinzhengjie.com
hadoop104.yinzhengjie.com

[other]
hadoop102.yinzhengjie.com
hadoop103.yinzhengjie.com
hadoop104.yinzhengjie.com
hadoop105.yinzhengjie.com
[root@hadoop101.yinzhengjie.com ~]# 

[root@hadoop101.yinzhengjie.com ~]# ansible other -m copy -a "src=${HADOOP_HOME}/etc/hadoop/yarn-site.xml dest=${HADOOP_HOME}/etc/hadoop/"
hadoop102.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "8910e3a8414606ebaeeda440b0fc32cac584e0ed",
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/yarn-site.xml",
    "gid": 0,
    "group": "root",
    "md5sum": "33900cbf0053e7754d9b6b4991c4faf5",
    "mode": "0644",
    "owner": "root",
    "size": 7569,
    "src": "/root/.ansible/tmp/ansible-tmp-1602332056.56-7256-154782649881983/source",
    "state": "file",
    "uid": 0
}
hadoop104.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "8910e3a8414606ebaeeda440b0fc32cac584e0ed",
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/yarn-site.xml",
    "gid": 0,
    "group": "root",
    "md5sum": "33900cbf0053e7754d9b6b4991c4faf5",
    "mode": "0644",
    "owner": "root",
    "size": 7569,
    "src": "/root/.ansible/tmp/ansible-tmp-1602332056.6-7259-266144991332982/source",
    "state": "file",
    "uid": 0
}
hadoop105.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "8910e3a8414606ebaeeda440b0fc32cac584e0ed",
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/yarn-site.xml",
    "gid": 0,
    "group": "root",
    "md5sum": "33900cbf0053e7754d9b6b4991c4faf5",
    "mode": "0644",
    "owner": "root",
    "size": 7569,
    "src": "/root/.ansible/tmp/ansible-tmp-1602332056.6-7260-5791703664800/source",
    "state": "file",
    "uid": 0
}
hadoop103.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "8910e3a8414606ebaeeda440b0fc32cac584e0ed",
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/yarn-site.xml",
    "gid": 0,
    "group": "root",
    "md5sum": "33900cbf0053e7754d9b6b4991c4faf5",
    "mode": "0644",
    "owner": "root",
    "size": 7569,
    "src": "/root/.ansible/tmp/ansible-tmp-1602332056.58-7258-195798408486819/source",
    "state": "file",
    "uid": 0
}
[root@hadoop101.yinzhengjie.com ~]# 
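The identical "checksum"/"md5sum" fields in the output above are the easiest way to confirm all four copies match the source. The same idea can be wrapped in a tiny local helper (an illustrative sketch; `same_checksum` is a made-up name, and on a real cluster you would compare the checksums reported across hosts, e.g. via Ansible's stat module):

```shell
# Compare two copies of a file by md5 checksum -- the same idea behind the
# "checksum"/"md5sum" fields in the ansible copy output.
same_checksum() {
    local a b
    a=$(md5sum "$1" | awk '{print $1}')
    b=$(md5sum "$2" | awk '{print $1}')
    [ "$a" = "$b" ]
}

# Usage sketch:
# same_checksum "${HADOOP_HOME}/etc/hadoop/yarn-site.xml" /tmp/yarn-site.xml.bak \
#     && echo "copies match" || echo "copies differ"
```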
III. Starting YARN and Writing Custom Management Scripts
1>. Start the ResourceManager daemon

[root@hadoop101.yinzhengjie.com ~]# jps
5280 NameNode
5634 Jps
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ss -ntl
State      Recv-Q Send-Q    Local Address:Port    Peer Address:Port
LISTEN     0      128       172.200.6.101:9000    *:*
LISTEN     0      128       *:50070               *:*
LISTEN     0      128       *:22                  *:*
LISTEN     0      128       :::22                 :::*
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /yinzhengjie/softwares/hadoop-2.10.0-fully-mode/logs/yarn-root-resourcemanager-hadoop101.yinzhengjie.com.out
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# jps
5280 NameNode
5908 Jps
5679 ResourceManager
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ss -ntl
State      Recv-Q Send-Q    Local Address:Port    Peer Address:Port
LISTEN     0      128       172.200.6.101:9000    *:*
LISTEN     0      128       *:50070               *:*
LISTEN     0      128       *:22                  *:*
LISTEN     0      128       *:8088                *:*
LISTEN     0      128       *:8030                *:*
LISTEN     0      128       *:8031                *:*
LISTEN     0      128       *:8032                *:*
LISTEN     0      128       *:8033                *:*
LISTEN     0      128       :::22                 :::*
[root@hadoop101.yinzhengjie.com ~]# 
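The ss -ntl output above confirms the ResourceManager ports (8030-8033 for RPC, 8088 for the web UI) are listening locally. From another host the same check can be made with bash's built-in /dev/tcp pseudo-device, no extra tools needed (a sketch; `port_open` is a hypothetical helper, and the hostname/ports are the ones used in this cluster):

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device.
port_open() {
    # $1 = host, $2 = port; succeeds if a TCP connection can be opened
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Usage sketch against the ResourceManager host:
# for port in 8030 8031 8032 8033 8088; do
#     port_open hadoop101.yinzhengjie.com "$port" \
#         && echo "port ${port} open" || echo "port ${port} CLOSED"
# done
```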
2>. Start the NodeManager nodes

[root@hadoop101.yinzhengjie.com ~]# ansible all -m shell -a 'jps'
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
5505 Jps
5291 SecondaryNameNode
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
5298 DataNode
5591 Jps
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
5280 NameNode
6061 Jps
5679 ResourceManager
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
5314 DataNode
5608 Jps
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
5650 Jps
5358 DataNode
[root@hadoop101.yinzhengjie.com ~]# 

[root@hadoop101.yinzhengjie.com ~]# ansible dn -m shell -a 'yarn-daemon.sh start nodemanager'
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
starting nodemanager, logging to /yinzhengjie/softwares/hadoop/logs/yarn-root-nodemanager-hadoop104.yinzhengjie.com.out
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
starting nodemanager, logging to /yinzhengjie/softwares/hadoop/logs/yarn-root-nodemanager-hadoop102.yinzhengjie.com.out
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
starting nodemanager, logging to /yinzhengjie/softwares/hadoop/logs/yarn-root-nodemanager-hadoop103.yinzhengjie.com.out
[root@hadoop101.yinzhengjie.com ~]# 

[root@hadoop101.yinzhengjie.com ~]# ansible all -m shell -a 'jps'
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
5291 SecondaryNameNode
6141 Jps
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
7152 Jps
5314 DataNode
6879 NodeManager
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
5280 NameNode
6792 Jps
5679 ResourceManager
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
5298 DataNode
6853 NodeManager
7126 Jps
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
6941 NodeManager
5358 DataNode
7215 Jps
[root@hadoop101.yinzhengjie.com ~]# 
3>. Write scripts to manage the Hadoop cluster

[root@hadoop101.yinzhengjie.com ~]# vim /usr/local/bin/manage-hdfs.sh
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat /usr/local/bin/manage-hdfs.sh
#!/bin/bash
#
#********************************************************************
#Author:        yinzhengjie
#QQ:            1053419035
#Date:          2019-11-27
#FileName:      manage-hdfs.sh
#URL:           http://www.cnblogs.com/yinzhengjie
#Description:   The test script
#Copyright notice: original works, no reprint! Otherwise, legal liability will be investigated.
#********************************************************************

# Make sure an argument was passed.
if [ $# -lt 1 ];then
    echo "Please pass an argument"
    exit 1
fi

# Source the OS's own function library (we need the "action" function; run
# "declare -f action" to see how it is defined).
. /etc/init.d/functions

function start_hdfs(){
    ansible nn  -m shell -a 'hadoop-daemon.sh start namenode'
    ansible snn -m shell -a 'hadoop-daemon.sh start secondarynamenode'
    ansible dn  -m shell -a 'hadoop-daemon.sh start datanode'
    # Tell the user the services were started.
    action "Starting HDFS:" true
}

function stop_hdfs(){
    ansible nn  -m shell -a 'hadoop-daemon.sh stop namenode'
    ansible snn -m shell -a 'hadoop-daemon.sh stop secondarynamenode'
    ansible dn  -m shell -a 'hadoop-daemon.sh stop datanode'
    # Tell the user the services were stopped.
    action "Stopping HDFS:" true
}

function status_hdfs(){
    ansible all -m shell -a 'jps'
}

case $1 in
    "start")
        start_hdfs
        ;;
    "stop")
        stop_hdfs
        ;;
    "restart")
        stop_hdfs
        start_hdfs
        ;;
    "status")
        status_hdfs
        ;;
    *)
        echo "Usage: manage-hdfs.sh start|stop|restart|status"
        ;;
esac
[root@hadoop101.yinzhengjie.com ~]# 

[root@hadoop101.yinzhengjie.com ~]# vim /usr/local/bin/manage-yarn.sh
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat /usr/local/bin/manage-yarn.sh
#!/bin/bash
#
#********************************************************************
#Author:        yinzhengjie
#QQ:            1053419035
#Date:          2019-11-27
#FileName:      manage-yarn.sh
#URL:           http://www.cnblogs.com/yinzhengjie
#Description:   The test script
#Copyright notice: original works, no reprint! Otherwise, legal liability will be investigated.
#********************************************************************

# Make sure an argument was passed.
if [ $# -lt 1 ];then
    echo "Please pass an argument"
    exit 1
fi

# Source the OS's own function library (we need the "action" function; run
# "declare -f action" to see how it is defined).
. /etc/init.d/functions

function start_yarn(){
    ansible nn -m shell -a 'yarn-daemon.sh start resourcemanager'
    ansible dn -m shell -a 'yarn-daemon.sh start nodemanager'
    # Tell the user the services were started.
    action "Starting YARN:" true
}

function stop_yarn(){
    ansible nn -m shell -a 'yarn-daemon.sh stop resourcemanager'
    ansible dn -m shell -a 'yarn-daemon.sh stop nodemanager'
    # Tell the user the services were stopped.
    action "Stopping YARN:" true
}

function status_yarn(){
    ansible all -m shell -a 'jps'
}

case $1 in
    "start")
        start_yarn
        ;;
    "stop")
        stop_yarn
        ;;
    "restart")
        stop_yarn
        start_yarn
        ;;
    "status")
        status_yarn
        ;;
    *)
        echo "Usage: manage-yarn.sh start|stop|restart|status"
        ;;
esac
[root@hadoop101.yinzhengjie.com ~]# 
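Since HDFS should be up before YARN starts and torn down only after YARN stops, the two scripts above are naturally driven by a thin wrapper. A hypothetical sketch (`manage_hadoop` / manage-hadoop.sh are made-up names; it assumes both scripts are installed in /usr/local/bin as shown above):

```shell
# A thin dispatcher over the two management scripts above.
# Order matters: start HDFS before YARN, stop YARN before HDFS.
manage_hadoop() {
    case "$1" in
        start)
            manage-hdfs.sh start
            manage-yarn.sh start
            ;;
        stop)
            manage-yarn.sh stop
            manage-hdfs.sh stop
            ;;
        status)
            # both scripts report via "ansible all -m shell -a 'jps'"; once is enough
            manage-hdfs.sh status
            ;;
        *)
            echo "Usage: manage-hadoop.sh start|stop|status"
            return 1
            ;;
    esac
}
```

Saved as /usr/local/bin/manage-hadoop.sh with a `manage_hadoop "$@"` call at the bottom, it gives one entry point for the whole stack.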