6.1 hadoop logs
Master node

Slave nodes

6.2 hadoop troubleshooting
(to be added)
6.3 spark

6.4 zookeeper

6.5 hive

6.6 kafka

7 Restart commands
7.1 hadoop
a. DFS start/stop
#start-dfs.sh
#stop-dfs.sh
or
# /usr/local/hadoop/sbin/start-dfs.sh
# /usr/local/hadoop/sbin/stop-dfs.sh
After this completes, jps should show the DataNode, NameNode, and SecondaryNameNode processes.

Port 9870 should be open.
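As a quick sanity check, a minimal sketch (assuming Hadoop 3.x, where 9870 is the NameNode web UI port; on 2.x it is 50070):
jps | grep -E 'NameNode|DataNode|SecondaryNameNode'
netstat -anltup | grep 9870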
b. YARN start/stop
#start-yarn.sh
#stop-yarn.sh
or
# /usr/local/hadoop/sbin/start-yarn.sh
# /usr/local/hadoop/sbin/stop-yarn.sh
After this completes, jps should show the ResourceManager and NodeManager processes.

Ports 8088, 8032, etc. should be open.

Note: ZooKeeper must be started beforehand.
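A similar sketch for the YARN side (8088 is the ResourceManager web UI and 8032 its client RPC port; both are defaults and may differ if reconfigured):
jps | grep -E 'ResourceManager|NodeManager'
netstat -anltup | grep -E '8088|8032'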
c. HDFS safe mode
Check the current mode:
hdfs dfsadmin -safemode get
Enter safe mode:
hdfs dfsadmin -safemode enter
Leave safe mode:
hdfs dfsadmin -safemode leave
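If HDFS gets stuck in safe mode after a restart, a minimal sketch built from the same dfsadmin commands, which leaves safe mode only when it is currently on:
hdfs dfsadmin -safemode get | grep -q 'ON' && hdfs dfsadmin -safemode leave
There is also hdfs dfsadmin -safemode wait, which simply blocks until HDFS exits safe mode on its own.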
7.2 zookeeper
#/usr/local/zookeeper/bin/zkServer.sh start
#/usr/local/zookeeper/bin/zkServer.sh stop
After this completes, jps should show the QuorumPeerMain process.

Port 2181 should be open.
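To confirm the node is actually serving, a sketch (the ruok four-letter command needs nc installed and, on ZooKeeper 3.5+, must be whitelisted via 4lw.commands.whitelist):
/usr/local/zookeeper/bin/zkServer.sh status
echo ruok | nc localhost 2181
status reports whether this node is standalone, a leader, or a follower; ruok should return imok.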

7.3 spark
/usr/local/spark/sbin/start-all.sh
After this completes, jps should show the Master and Worker processes.

/usr/local/spark/sbin/start-history-server.sh
After this completes, jps should show the HistoryServer process.

netstat -anltup |grep 18080

Note: start Hadoop's start-dfs.sh and then Spark's start-all.sh before running start-history-server.sh.
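A quick hedged check of the standalone daemons and their default web ports (8080 for the Master UI, 18080 for the history server; both are defaults and may have been changed in the Spark config):
jps | grep -E 'Master|Worker|HistoryServer'
netstat -anltup | grep -E '8080|18080'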
7.4 hive
Start it in the background:
#nohup hive --service hiveserver2 &

jps should show the RunJar process.

Port 10002 serves the HiveServer2 web UI; beeline clients connect on port 10000.
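A quick connection test, assuming the default HiveServer2 Thrift port 10000; localhost and the hadoop user name here are placeholders for your own host and user:
netstat -anltup | grep -E '10000|10002'
beeline -u jdbc:hive2://localhost:10000 -n hadoop
A successful connection drops into a jdbc:hive2://... prompt, where SHOW DATABASES; works as a smoke test.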

8 Auto-start on boot
vim /etc/rc.local
Add the following lines to the file:
source /etc/profile
start-all.sh
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
/usr/local/spark/sbin/start-all.sh
/usr/local/spark/sbin/start-history-server.sh
nohup hive --service hiveserver2 &
Then make the file executable:
chmod +x /etc/rc.local
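On systemd-based distributions, /etc/rc.local is run by the rc-local compatibility service, which generally requires the file to be executable (hence the chmod above). After a reboot, a single jps call should list every daemon started here; a minimal check (the Kafka broker appears in jps as Kafka):
jps | grep -E 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager|QuorumPeerMain|Kafka|Master|Worker|HistoryServer|RunJar'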
