This article applies to RedHat and CentOS.
For a test cluster, if you have installed a Hadoop cluster through Ambari and want to start over, you need to clean up the cluster first. When many Hadoop components have been installed, this is tedious work. Below is the cleanup procedure I have put together.
1. Stop all components in the cluster through Ambari. If a component cannot be stopped that way, kill it directly with kill -9 XXX
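If you are not sure what is still running, one way to find candidates is a quick ps/grep sweep. A minimal sketch (the grep pattern here is only an example and can match unrelated processes, so check the output before killing anything):
- ps aux | grep -E 'hadoop|hdfs|yarn|hbase|zookeeper|ambari' | grep -v grep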
2. Stop ambari-server and ambari-agent
- ambari-server stop
- ambari-agent stop
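ambari-server runs only on the management host, while ambari-agent runs on every cluster node, so the agent stop has to be repeated on each host. A sketch, assuming passwordless SSH and a file named hostlist containing one hostname per line (both are assumptions, not part of the original setup):
- for h in $(cat hostlist); do ssh "$h" 'ambari-agent stop'; done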
3. Uninstall the installed packages
- yum remove hadoop_2* hdp-select* ranger_2* zookeeper_* bigtop* atlas-metadata* ambari* postgresql spark* slider* storm* snappy*
The package list above may be incomplete. After running the command, run
- yum list | grep @HDP
to check whether anything is still installed; if so, keep removing it with yum remove XXX.
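The list-then-remove cycle can also be collapsed into one pass. A sketch (review what grep actually matched before letting xargs remove it, since yum's wrapped output lines can confuse the pattern):
- yum list installed | grep @HDP | awk '{print $1}' | xargs yum remove -y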
4. Delete the PostgreSQL data
After the postgresql package is uninstalled, its data remains on disk and needs to be deleted. If it is left in place, a freshly installed ambari-server may pick up the old installation data, and that data is now invalid, so it must be removed.
- rm -rf /var/lib/pgsql
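The data directory should only be removed once the service itself is no longer running; if uninstalling left it running, stop it first (service on CentOS 6, systemctl on CentOS 7):
- service postgresql stop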
5. Delete the users
Installing a Hadoop cluster with Ambari creates a number of users. When cleaning up the cluster it is worth deleting these users and their corresponding home directories; this avoids file-permission errors when the cluster runs again. (A loop version is sketched after this list.)
- userdel oozie
- userdel hive
- userdel ambari-qa
- userdel flume
- userdel hdfs
- userdel knox
- userdel storm
- userdel mapred
- userdel hbase
- userdel tez
- userdel zookeeper
- userdel kafka
- userdel falcon
- userdel sqoop
- userdel yarn
- userdel hcat
- userdel atlas
- userdel spark
- userdel ams
- rm -rf /home/atlas
- rm -rf /home/accumulo
- rm -rf /home/hbase
- rm -rf /home/hive
- rm -rf /home/oozie
- rm -rf /home/storm
- rm -rf /home/yarn
- rm -rf /home/ambari-qa
- rm -rf /home/falcon
- rm -rf /home/hcat
- rm -rf /home/kafka
- rm -rf /home/mahout
- rm -rf /home/spark
- rm -rf /home/tez
- rm -rf /home/zookeeper
- rm -rf /home/flume
- rm -rf /home/hdfs
- rm -rf /home/knox
- rm -rf /home/mapred
- rm -rf /home/sqoop
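The per-user commands above can be collapsed into one loop; userdel -r deletes the user and its home directory together, which also covers the rm -rf /home/... lines. A sketch over the same user list (accumulo and mahout are added because their home directories appear above; errors about users or home directories that do not exist can be ignored):
- for u in oozie hive ambari-qa flume hdfs knox storm mapred hbase tez zookeeper kafka falcon sqoop yarn hcat atlas spark ams accumulo mahout; do userdel -r "$u"; done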
6. Delete leftover Ambari data
- rm -rf /var/lib/ambari*
- rm -rf /usr/lib/python2.6/site-packages/ambari_*
- rm -rf /usr/lib/python2.6/site-packages/resource_management
- rm -rf /usr/lib/ambari-*
7. Delete leftover data from other Hadoop components
- rm -rf /etc/falcon
- rm -rf /etc/knox
- rm -rf /etc/hive-webhcat
- rm -rf /etc/kafka
- rm -rf /etc/slider
- rm -rf /etc/storm-slider-client
- rm -rf /etc/spark
- rm -rf /var/run/spark
- rm -rf /var/run/hadoop
- rm -rf /var/run/hbase
- rm -rf /var/run/zookeeper
- rm -rf /var/run/flume
- rm -rf /var/run/storm
- rm -rf /var/run/webhcat
- rm -rf /var/run/hadoop-yarn
- rm -rf /var/run/hadoop-mapreduce
- rm -rf /var/run/kafka
- rm -rf /var/log/hadoop
- rm -rf /var/log/hbase
- rm -rf /var/log/flume
- rm -rf /var/log/storm
- rm -rf /var/log/hadoop-yarn
- rm -rf /var/log/hadoop-mapreduce
- rm -rf /var/log/knox
- rm -rf /usr/lib/flume
- rm -rf /usr/lib/storm
- rm -rf /var/lib/hive
- rm -rf /var/lib/oozie
- rm -rf /var/lib/flume
- rm -rf /var/lib/hadoop-hdfs
- rm -rf /var/lib/knox
- rm -rf /var/log/hive
- rm -rf /var/log/oozie
- rm -rf /var/log/zookeeper
- rm -rf /var/log/falcon
- rm -rf /var/log/webhcat
- rm -rf /var/log/spark
- rm -rf /var/tmp/oozie
- rm -rf /tmp/ambari-qa
- rm -rf /var/hadoop
- rm -rf /hadoop/falcon
- rm -rf /tmp/hadoop
- rm -rf /tmp/hadoop-hdfs
- rm -rf /usr/hdp
- rm -rf /usr/hadoop
- rm -rf /opt/hadoop
- rm -rf /opt/hadoop2
- rm -rf /hadoop
- rm -rf /etc/ambari-metrics-collector
- rm -rf /etc/ambari-metrics-monitor
- rm -rf /var/run/ambari-metrics-collector
- rm -rf /var/run/ambari-metrics-monitor
- rm -rf /var/log/ambari-metrics-collector
- rm -rf /var/log/ambari-metrics-monitor
- rm -rf /var/lib/hadoop-yarn
- rm -rf /var/lib/hadoop-mapreduce
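After removing these directories, it is worth scanning for anything left behind. A sketch (the name patterns are examples and will also match files unrelated to the cluster, so review the output rather than deleting it blindly):
- find /etc /var /usr /opt /tmp -maxdepth 3 \( -name '*hadoop*' -o -name '*hdp*' -o -name '*ambari*' \) 2>/dev/null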
8. Clean up the yum repositories
- yum clean all
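yum clean all only clears the cached metadata; the repository definitions themselves live under /etc/yum.repos.d. If you want the next install to start from fresh repo files, the HDP and Ambari entries can be removed as well (HDP.repo, HDP-UTILS.repo, and ambari.repo are the usual file names, but check your own /etc/yum.repos.d first):
- rm -f /etc/yum.repos.d/HDP*.repo /etc/yum.repos.d/ambari.repo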