Cleaning Up a Hadoop Cluster Installed with Ambari
This article applies to RedHat and CentOS.
For a test cluster, if you want to start over after installing a Hadoop cluster through Ambari, you need to clean up the cluster first. When many Hadoop components have been installed, this is tedious work. Below is the cleanup procedure I have put together.
1. Stop all components in the cluster through Ambari. If a component will not stop, kill its process directly with kill -9 XXX.
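If you are not sure what is still running, a quick process scan like the following can help locate stragglers (a minimal sketch; the grep pattern is only a starting point and should be adjusted to the components you actually installed):
- ps -ef | grep -E 'hadoop|hbase|hive|zookeeper|storm|kafka|ambari' | grep -v grep
- kill -9 XXX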
2. Stop ambari-server and ambari-agent.
- ambari-server stop
- ambari-agent stop
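Before moving on, you can confirm that both daemons are actually stopped; both tools provide a status subcommand:
- ambari-server status
- ambari-agent status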
3. Uninstall the installed packages.
- yum remove hadoop_2* hdp-select* ranger_2* zookeeper* bigtop* atlas-metadata* ambari* postgresql spark*
The list above may not be complete. After running it, run
- yum list | grep @HDP
to see whether any HDP packages are still installed; if any remain, keep removing them with yum remove XXX.
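If many packages remain, a pipeline along these lines can remove everything installed from the HDP repository in one pass (a sketch; inspect the package list yourself before piping it into yum):
- yum list installed | grep @HDP | awk '{print $1}' | xargs yum -y remove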
4. Delete the PostgreSQL data.
Uninstalling the postgresql package leaves its data on disk, and that data must be deleted. If it is not, a freshly installed ambari-server may pick up the old installation data, which is now stale and wrong, so delete it:
- rm -rf /var/lib/pgsql
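As an aside, if the goal is only to redo the Ambari setup while keeping PostgreSQL itself installed, running the following before step 3 wipes just the Ambari database rather than the whole data directory:
- ambari-server reset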
5. Delete the users.
Installing a Hadoop cluster with Ambari creates a number of service users. When cleaning up the cluster, it is worth deleting these users along with their home directories; doing so avoids file-permission errors when the cluster runs again. (The per-user commands can also be collapsed into a single loop, sketched after this list.)
- userdel oozie
- userdel hive
- userdel ambari-qa
- userdel flume
- userdel hdfs
- userdel knox
- userdel storm
- userdel mapred
- userdel hbase
- userdel tez
- userdel zookeeper
- userdel kafka
- userdel falcon
- userdel sqoop
- userdel yarn
- userdel hcat
- userdel atlas
- userdel spark
- rm -rf /home/atlas
- rm -rf /home/accumulo
- rm -rf /home/hbase
- rm -rf /home/hive
- rm -rf /home/oozie
- rm -rf /home/storm
- rm -rf /home/yarn
- rm -rf /home/ambari-qa
- rm -rf /home/falcon
- rm -rf /home/hcat
- rm -rf /home/kafka
- rm -rf /home/mahout
- rm -rf /home/spark
- rm -rf /home/tez
- rm -rf /home/zookeeper
- rm -rf /home/flume
- rm -rf /home/hdfs
- rm -rf /home/knox
- rm -rf /home/mapred
- rm -rf /home/sqoop
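Equivalently, the per-user commands above collapse into one loop (a sketch; userdel -r also removes the home directory, and the extra rm catches anything left behind):
- for u in oozie hive ambari-qa flume hdfs knox storm mapred hbase tez zookeeper kafka falcon sqoop yarn hcat atlas spark; do userdel -r "$u" 2>/dev/null; rm -rf "/home/$u"; done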
6. Delete Ambari's leftover data.
- rm -rf /var/lib/ambari*
- rm -rf /usr/lib/python2.6/site-packages/ambari_*
- rm -rf /usr/lib/python2.6/site-packages/resource_management
- rm -rf /usr/lib/ambari-*
- rm -rf /etc/ambari-*
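To double-check, a filesystem-wide search for anything Ambari-related that survived can be run (a sketch; it may take a while on a large disk):
- find / -name '*ambari*' 2>/dev/null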
7. Delete the leftover data of the other Hadoop components (a verification sketch follows at the end of this list).
- rm -rf /etc/hadoop
- rm -rf /etc/hbase
- rm -rf /etc/hive
- rm -rf /etc/oozie
- rm -rf /etc/sqoop
- rm -rf /etc/zookeeper
- rm -rf /etc/flume
- rm -rf /etc/storm
- rm -rf /etc/hive-hcatalog
- rm -rf /etc/tez
- rm -rf /etc/falcon
- rm -rf /etc/knox
- rm -rf /etc/hive-webhcat
- rm -rf /etc/kafka
- rm -rf /etc/slider
- rm -rf /etc/storm-slider-client
- rm -rf /etc/spark
- rm -rf /var/run/spark
- rm -rf /var/run/hadoop
- rm -rf /var/run/hbase
- rm -rf /var/run/zookeeper
- rm -rf /var/run/flume
- rm -rf /var/run/storm
- rm -rf /var/run/webhcat
- rm -rf /var/run/hadoop-yarn
- rm -rf /var/run/hadoop-mapreduce
- rm -rf /var/run/kafka
- rm -rf /var/log/hadoop
- rm -rf /var/log/hbase
- rm -rf /var/log/flume
- rm -rf /var/log/storm
- rm -rf /var/log/hadoop-yarn
- rm -rf /var/log/hadoop-mapreduce
- rm -rf /var/log/knox
- rm -rf /usr/lib/flume
- rm -rf /usr/lib/storm
- rm -rf /var/lib/hive
- rm -rf /var/lib/oozie
- rm -rf /var/lib/flume
- rm -rf /var/lib/hadoop-hdfs
- rm -rf /var/lib/knox
- rm -rf /var/log/hive
- rm -rf /var/log/oozie
- rm -rf /var/log/zookeeper
- rm -rf /var/log/falcon
- rm -rf /var/log/webhcat
- rm -rf /var/log/spark
- rm -rf /var/tmp/oozie
- rm -rf /tmp/ambari-qa
- rm -rf /var/hadoop
- rm -rf /hadoop/falcon
- rm -rf /tmp/hadoop
- rm -rf /tmp/hadoop-hdfs
- rm -rf /usr/hdp
- rm -rf /usr/hadoop
- rm -rf /opt/hadoop
- rm -rf /hadoop
- userdel nagios
- userdel hive
- userdel ambari-qa
- userdel hbase
- userdel oozie
- userdel hcat
- userdel mapred
- userdel hdfs
- userdel rrdcached
- userdel zookeeper
- userdel sqoop
- userdel puppet
- See also: http://www.cnblogs.com/cenyuhai/p/3287855.html

A more thorough pass uses userdel -r, which also removes each user's home directory:
- userdel -r accumulo
- userdel -r ambari-qa
- userdel -r ams
- userdel -r falcon
- userdel -r flume
- userdel -r hbase
- userdel -r hcat
- userdel -r hdfs
- userdel -r hive
- userdel -r kafka
- userdel -r knox
- userdel -r mapred
- userdel -r oozie
- userdel -r ranger
- userdel -r spark
- userdel -r sqoop
- userdel -r storm
- userdel -r tez
- userdel -r yarn
- userdel -r zeppelin
- userdel -r zookeeper
- userdel -r kms
- userdel -r sqoop2
- userdel -r hue
- rm -rf /var/lib/hbase
- rm -rf /var/lib/sqoop
- rm -rf /var/lib/zookeeper
- rm -rf /var/lib/hadoop-mapreduce
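As a final check for step 7, a search for leftover component directories can confirm the cleanup (a sketch; extend the pattern to whichever components were installed):
- find /etc /var/run /var/log /var/lib /usr/lib /tmp /var/tmp -maxdepth 1 2>/dev/null | grep -E 'hadoop|hbase|hive|oozie|zookeeper|flume|storm|kafka|falcon|knox|tez|spark'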
8. Clean the yum caches.
- yum clean all
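After cleaning, rebuilding the metadata cache confirms the configured repositories are still reachable:
- yum makecache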
After the cleanup above, reinstalling Ambari and the Hadoop cluster (HDFS, YARN + MapReduce2, ZooKeeper, Ambari Metrics, and Spark) succeeded. If you install other components and run into problems caused by an incomplete cleanup, please leave a comment pointing out what else needs to be cleaned, and I will update this document.
For the subsequent rebuild, refer to the Ambari 2.6.0 setup document.
