CentOS 7 container
Pull the base image and start a basic container from it:
docker pull centos:7
docker cp elastic.tar.gz ab33344ef:/root
docker exec -ti ab33344ef /bin/bash
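For context, a minimal sketch of how the container referenced above might be created and the archive unpacked; the run command and the extraction step are assumptions, only the container ID, the docker cp, and the docker exec come from the steps above:
# start a long-running CentOS 7 container; its ID (ab33344ef) is the one used above
docker run -d centos:7 sleep infinity
# after docker cp and docker exec, unpack the Elastic archive inside the container
cd /root && tar -xzf elastic.tar.gz    # assumed to contain the Elasticsearch/Kibana/Logstash distributions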

instance.yml:

instances:
  - name: 'node1'
    dns: [ 'node1.elastic.test.com' ]
  - name: 'my-kibana'
    dns: [ 'kibana.local' ]
  - name: 'logstash'
    dns: [ 'logstash.local' ]
Generate the CA and server certificates (after Elasticsearch has been installed):
bin/elasticsearch-certutil cert --keep-ca-key --pem --in ~/tmp/cert_blog/instance.yml --out ~/tmp/cert_blog/certs.zip
unzip certs.zip -d ./certs
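The elasticsearch.yml below references the certificates via paths relative to the config directory, so they have to be copied there first. A minimal sketch, assuming an archive install whose config directory is /usr/share/elasticsearch/config and the usual certutil output layout (ca/ca.crt plus one directory per instance):
# create a certs directory under the Elasticsearch config directory
mkdir -p /usr/share/elasticsearch/config/certs
# copy the CA certificate plus the node1 certificate and key generated above
cp ./certs/ca/ca.crt ./certs/node1/node1.crt ./certs/node1/node1.key /usr/share/elasticsearch/config/certs/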
Configure elasticsearch.yml

node.name: node1
network.host: node1.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/node1.key
xpack.security.http.ssl.certificate: certs/node1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/node1.key
xpack.security.transport.ssl.certificate: certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com" ]
cluster.initial_master_nodes: [ "node1" ]
Set passwords for the built-in users:
bin/elasticsearch-setup-passwords auto -u "https://node1.elastic.test.com:9200"
Start Elasticsearch:
/usr/share/elasticsearch/bin/elasticsearch
[root@siem-cluster-node-09 /]# id elsearch
uid=1000(elsearch) gid=1000(elsearch) groups=1000(elsearch)
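Elasticsearch refuses to run as root, which is why the elsearch user shown above exists. A minimal sketch of creating that user and starting Elasticsearch under it; the ownership step is an assumption:
# create the non-root user (uid 1000, matching the id output above) and hand it the install directory
useradd -u 1000 elsearch
chown -R elsearch:elsearch /usr/share/elasticsearch
# start Elasticsearch as that user
su - elsearch -c '/usr/share/elasticsearch/bin/elasticsearch'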
docker commit 2c7add484489 192.168.30.13/aabb/elastic
docker push 192.168.30.13/aabb/elastic
This completes the Elastic container setup.
Kibana container setup
The certificates and key referenced in kibana.yml come from the certs generated inside the Elasticsearch container, together with the built-in ES user account created there.

server.name: "my-kibana"
server.host: "kibana.local"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/config/certs/my-kibana.crt
server.ssl.key: /etc/kibana/config/certs/my-kibana.key
elasticsearch.hosts: ["https://node1.elastic.test.com:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "uFrZj1AW82vwSMeW02Cc"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/config/certs/ca.crt" ]
monitoring.enabled: false
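As noted above, the my-kibana certificate, key, and ca.crt were generated inside the Elasticsearch container, so they have to be copied into the Kibana container at the path referenced by kibana.yml. A minimal sketch with docker cp; the source path inside the Elasticsearch container and the intermediate host directory are assumptions, while the container IDs are the ones used elsewhere in these notes:
# pull the generated certificates out of the Elasticsearch container onto the host
docker cp ab33344ef:/root/certs/my-kibana ./kibana-certs
docker cp ab33344ef:/root/certs/ca/ca.crt ./kibana-certs/
# push them into the Kibana container at the path used in kibana.yml
docker exec 1d008b3396f6 mkdir -p /etc/kibana/config/certs
docker cp ./kibana-certs/. 1d008b3396f6:/etc/kibana/config/certs/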
Some errors may appear but can be ignored; they do not affect Kibana usage.
Access Kibana
Reference example:
https://www.elastic.co/cn/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash#create-ssl
Problem
Once the Elasticsearch workload is recreated, the built-in user passwords it generated become invalid, and Kibana and Logstash can no longer connect to the newly started ES Pod.
Therefore, once Elasticsearch has been created, do not recreate it casually.
Substituting environment variables into configuration inside the container
sed -i 's/redis_ip="[0-9.]*"/redis_ip="'$redis_ip'"/' config.ini
docker run -d --restart=always --ulimit core=-1 --privileged=true -e redis_ip=$REDIS_IP -e redis_port=$REDIS_PORT
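Putting the two pieces together, the typical pattern is a small entrypoint script that rewrites the config from the environment variables passed with -e and then keeps the application in the foreground. A hypothetical sketch; start.sh, config.ini, and ./app are illustrative names, not from a specific image:
#!/bin/bash
# start.sh - substitute the injected environment variables into the config file
sed -i 's/redis_ip="[0-9.]*"/redis_ip="'$redis_ip'"/' config.ini
sed -i 's/redis_port="[0-9]*"/redis_port="'$redis_port'"/' config.ini
# run the application in the foreground so the container keeps running
exec ./app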
Passing dynamic parameters into the container via environment variables
1. Find the target container
2. Log in to the container
3. Create the startup script start.sh
sed -i 's/elasticsearch.password: ".*"/elasticsearch.password: "'$espasswd'"/' /root/kibana-7.8.1-linux-x86_64/config/kibana.yml
1. Dynamically modify the configuration file using environment variables
2. Start the relevant application
4. Save the image
docker commit 1d008b3396f6 192.168.30.11/9999/kibana:v2
docker push 192.168.30.11/9999/kibana:v2
5. Settings when starting the container: pass the regenerated ES password via the espasswd environment variable (see the sketch below)
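A minimal sketch of what the Kibana start.sh and the container start settings might look like, built from the sed command above; the start.sh location, the --allow-root flag, and reusing the password shown earlier are illustrative assumptions:
#!/bin/bash
# start.sh - inject the ES password from the espasswd environment variable, then run Kibana in the foreground
sed -i 's/elasticsearch.password: ".*"/elasticsearch.password: "'$espasswd'"/' /root/kibana-7.8.1-linux-x86_64/config/kibana.yml
exec /root/kibana-7.8.1-linux-x86_64/bin/kibana --allow-root    # --allow-root only if the container runs as root

# start the committed image, passing the current ES password as an environment variable
docker run -d -e espasswd='uFrZj1AW82vwSMeW02Cc' 192.168.30.11/9999/kibana:v2 /root/start.sh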
Logstash container setup

logstash.yml:

node.name: logstash.local
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: 'elastic'
xpack.monitoring.elasticsearch.password: 'AokJY29yc13rvdKuFzsM'
xpack.monitoring.elasticsearch.hosts: [ 'https://node1.elastic.test.com:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt

/etc/logstash/conf.d/example.conf:

input {
  beats {
    port => 5044
    ssl => true
    ssl_key => '/etc/logstash/config/certs/logstash.pkcs8.key'
    ssl_certificate => '/etc/logstash/config/certs/logstash.crt'
  }
}
output {
  elasticsearch {
    hosts => ["https://node1.elastic.test.com:9200"]
    cacert => '/etc/logstash/config/certs/ca.crt'
    user => 'elastic'
    password => 'AokJY29yc13rvdKuFzsM'
  }
}
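The beats input above expects a PKCS#8 key (logstash.pkcs8.key), while the key generated earlier is not in that format, so it has to be converted once. A minimal sketch with openssl, assuming the certificates were copied to /etc/logstash/config/certs as in the configuration above:
# convert the Logstash private key to PKCS#8 for the beats input
openssl pkcs8 -in /etc/logstash/config/certs/logstash.key -topk8 -nocrypt -out /etc/logstash/config/certs/logstash.pkcs8.key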

vi start.sh, with the following contents:

#!/bin/bash
# inject the ES password into logstash.yml and the pipeline (both use single-quoted values above)
sed -i "s/xpack.monitoring.elasticsearch.password: '.*'/xpack.monitoring.elasticsearch.password: '$espasswd'/" /root/logstash-7.8.1/config/logstash.yml
sed -i "s/password => '.*'/password => '$espasswd'/" /etc/logstash/conf.d/example.conf
# start Logstash in the foreground
/root/logstash-7.8.1/bin/logstash
The ELK stack is now successfully set up.
After the Elasticsearch Pod is recreated, you have to enter the ES container and regenerate the passwords, then reset the espasswd environment variable for the Kibana and Logstash Pods.
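If Kibana and Logstash run as Kubernetes workloads, resetting the variable can be done with kubectl set env. A minimal sketch, assuming they are Deployments named kibana and logstash; the names and the password value are placeholders:
# push the regenerated password into both workloads; this rolls the Pods with the new value
kubectl set env deployment/kibana espasswd='NEW_PASSWORD'
kubectl set env deployment/logstash espasswd='NEW_PASSWORD'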
Flink container installation
The Flink container failed to start. Analysis showed that the startup script launches Flink as a background process, so the container exits as soon as the script finishes.
To make Flink start as a foreground process, the start-cluster.sh file has to be modified by hand.
vi start-cluster.sh and edit the file contents
How the startup script is invoked also changes the result:
Correct way to run the script: ./start-cluster.sh
Incorrect way: sh start-cluster.sh, which reports syntax errors (the script relies on bash-specific syntax, so it has to be run with bash).
jobmanager.sh start launches the JobManager as a background process
jobmanager.sh start-foreground launches it as a foreground process
When the main process is started inside a Docker container, it must run in the foreground.
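A minimal sketch of starting the container so the JobManager is the foreground main process; the image name and the /opt/flink install path are assumptions:
# run the JobManager in the foreground so it stays the container's main process
docker run -d --name flink-jobmanager <flink-image> /opt/flink/bin/jobmanager.sh start-foreground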
Kafka deployment
docker pull kafkamanager/kafka-manager
docker tag 14a7c1e556f7 192.168.30.113/library/kafka
docker push 192.168.30.113/library/kafka
Kafka depends on ZooKeeper, so ZooKeeper must be deployed first.
Kafka container setup
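A minimal sketch of deploying ZooKeeper first and then starting the image pulled above against it; the official zookeeper image, the ZK_HOSTS variable, and port 9000 are assumptions about that image's interface:
# deploy ZooKeeper first
docker run -d --name zookeeper -p 2181:2181 zookeeper:3.5
# start the kafka-manager image pulled above, pointing it at ZooKeeper
docker run -d --name kafka-manager --link zookeeper -p 9000:9000 -e ZK_HOSTS=zookeeper:2181 192.168.30.113/library/kafka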
Kafka started successfully.