Install a Hadoop cluster before installing Flink.
Download Flink:
https://flink.apache.org/downloads.html
https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.11.1/flink-1.11.1-bin-scala_2.11.tgz
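For example, the package can be fetched directly with wget (the Tsinghua mirror above is just one Apache mirror; any mirror works):
wget https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.11.1/flink-1.11.1-bin-scala_2.11.tgz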
Extract and install:
tar -xf flink-1.11.1-bin-scala_2.11.tgz -C /usr/local/
cd /usr/local/
ln -sv flink-1.11.1/ flink
Set the environment variables:
cat > /etc/profile.d/flink.sh <<EOF
export FLINK_HOME=/usr/local/flink
export PATH=\$PATH:\$FLINK_HOME/bin
EOF
. /etc/profile.d/flink.sh
Environment variables for a regular (non-root) user:
cat >> ~/.bashrc <<EOF
export FLINK_HOME=/usr/local/flink
export PATH=\$PATH:\$FLINK_HOME/bin
EOF
. ~/.bashrc
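As a quick sanity check, the variables can be verified after sourcing the profile (a minimal sketch; exact output depends on the shell):
echo $FLINK_HOME
which flink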
Edit the configuration:
cd /usr/local/flink/conf
vim flink-conf.yaml
Set the JobManager RPC address in flink-conf.yaml to point at the master node:
jobmanager.rpc.address: hadoop-master
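Other keys in the same file are commonly adjusted for a small cluster; the values below are illustrative assumptions, not required settings:
taskmanager.numberOfTaskSlots: 2
parallelism.default: 2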
Configure the master node:
cat > masters <<EOF
hadoop-master
EOF
Configure the worker nodes (the master node can also serve as a worker):
cat > workers <<EOF
hadoop-master
hadoop-node1
hadoop-node2
EOF
Copy the configuration files to the worker nodes:
scp ./* root@hadoop-node1:/usr/local/flink/conf
scp ./* root@hadoop-node2:/usr/local/flink/conf
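Note that a standalone cluster expects the Flink installation itself at the same path on every node. If the workers do not have it yet, the whole directory can be copied first (a sketch, assuming passwordless root SSH):
scp -r /usr/local/flink-1.11.1 root@hadoop-node1:/usr/local/
ssh root@hadoop-node1 'ln -sv /usr/local/flink-1.11.1 /usr/local/flink'
scp -r /usr/local/flink-1.11.1 root@hadoop-node2:/usr/local/
ssh root@hadoop-node2 'ln -sv /usr/local/flink-1.11.1 /usr/local/flink'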
Change the owner and group:
chown -R hadoop:hadoop /usr/local/flink/ /usr/local/flink
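The same ownership change should be applied on the worker nodes as well, for example (assuming root SSH access):
ssh root@hadoop-node1 'chown -R hadoop:hadoop /usr/local/flink/ /usr/local/flink'
ssh root@hadoop-node2 'chown -R hadoop:hadoop /usr/local/flink/ /usr/local/flink'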
Start the cluster:
start-cluster.sh
Check the running processes. On the master node:
~]$ jps
26801 TaskManagerRunner
26455 StandaloneSessionClusterEntrypoint
...
On the worker nodes:
~]$ jps
TaskManagerRunner
...
Stop the cluster:
stop-cluster.sh
Scale the cluster: add JobManagers or TaskManagers to a running cluster.
bin/jobmanager.sh ((start|start-foreground) [host] [webui-port])|stop|stop-all
bin/taskmanager.sh start|start-foreground|stop|stop-all
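For example, an extra TaskManager can be brought up on a new node and later removed again (run on that node as the hadoop user, with $FLINK_HOME/bin on the PATH as set up above):
taskmanager.sh start
taskmanager.sh stop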
Web UI:
http://192.168.0.54:8081/#/overview
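To verify the cluster end to end, one of the examples bundled with the binary distribution can be submitted from the master node:
flink run $FLINK_HOME/examples/streaming/WordCount.jar
The job should then show up in the web UI; when no --output path is given, its results go to the TaskManagers' *.out files under the log directory.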