By default, docker logs shows a command's standard output (STDOUT) and standard error (STDERR). Below, echo.sh and a Dockerfile are used to build an image named echo:v1; echo.sh prints "hello" in an endless loop.
[root@ docker]# cat echo.sh
#!/bin/sh
while true;do echo hello;sleep 2;done
[root@ docker]# cat Dockerfile
FROM busybox:latest
WORKDIR /home
COPY echo.sh /home
CMD [ "sh", "-c", "/home/echo.sh" ]
# chmod 777 echo.sh
# docker build -t echo:v1 .
Run the image; in the corresponding process directory under /proc you can see that the process has 4 open file descriptors, of which fd 10 is the running shell script:
# ps -ef|grep echo
root     11198 11181  0 09:04 pts/0    00:00:01 /bin/sh /home/echo.sh
root     24346 21490  0 12:30 pts/5    00:00:00 grep --color=auto echo
[root@ docker]# cd /proc/11198/fd
[root@ fd]# ll
lrwx------. 1 root root 64 Jan 28 12:30 0 -> /dev/pts/0
lrwx------. 1 root root 64 Jan 28 12:30 1 -> /dev/pts/0
lr-x------. 1 root root 64 Jan 28 12:30 10 -> /home/echo.sh
lrwx------. 1 root root 64 Jan 28 12:30 2 -> /dev/pts/0
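The container's main PID can also be read directly from docker instead of grepping ps, for example:

# docker inspect --format '{{.State.Pid}}' CONTAINER_ID
11198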
Run docker logs -f CONTAINER_ID to follow the container's output. The file behind fd 1 is what docker logs records, so you can write a custom string straight into it, e.g. echo "你好" > 1, and the following then shows up in the docker logs output:
hello
hello
你好
hello
Docker supports multiple logging plugins. A log driver can be passed on the command line when starting a container, and the default log driver of dockerd can be set in docker's daemon.json file. Docker uses the json-file log driver by default; the current log driver can be queried with:
# docker info --format '{{.LoggingDriver}}'
json-file
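The default driver for all newly created containers can be changed through dockerd's daemon.json (dockerd must be restarted afterwards); a minimal example:

# cat /etc/docker/daemon.json
{
  "log-driver": "journald"
}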
The following uses journald as the log driver:
# docker run -itd --log-driver=journald echo:v1
8a8c828fa673c0bea8005d3f53e50b2112b4c8682d7e04100affeba25ebd588c
# docker ps
CONTAINER ID   IMAGE     COMMAND                 CREATED         STATUS         PORTS   NAMES
8a8c828fa673   echo:v1   "sh -c /home/echo.sh"   2 minutes ago   Up 2 minutes           vibrant_curie
# journalctl CONTAINER_NAME=vibrant_curie --all
journalctl then shows log entries like the ones below; 8a8c828fa673 is the ID of the container above:
-- Logs begin at Fri 2019-01-25 10:15:42 CST, end at Mon 2019-01-28 13:12:55 CST. --
Jan 28 13:08:47 . 8a8c828fa673[9709]: hello
Jan 28 13:08:49 . 8a8c828fa673[9709]: hello
...
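Besides CONTAINER_NAME, the journald driver attaches fields such as CONTAINER_ID and CONTAINER_ID_FULL, so the same log can also be queried or followed by container ID, e.g.:

# journalctl CONTAINER_ID=8a8c828fa673 -f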
Running docker inspect on the container confirms that the log driver has changed to journald:
"LogConfig": { "Type": "journald", "Config": {} },
In production, a log collector is normally used to gather and parse service logs. The following shows log collection with fluentd; fluentd supports many plugins covering a variety of log inputs and outputs, see the official site for plugin usage. Pull the official image:
docker pull fluent/fluentd
First create a fluentd configuration file that accepts logs from remote senders and prints them to standard output:
# cat fluentd.conf
<source>
  @type forward
</source>
<match *>
  @type stdout
</match>
Create two docker images, echo:v1 and echo:v2, with the following scripts:
# cat echo.sh    --- echo:v1
#!/bin/sh
while true;do
    echo "docker1 -> 11111"
    echo "docker1,this is docker1"
    echo "docker1,12132*)("
    sleep 2
done
# cat echo.sh    --- echo:v2
#!/bin/sh
while true;do
    echo "docker2 -> 11111"
    echo "docker2,this is docker2"
    echo "docker2,12132*)("
    sleep 2
done
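Both images can be built with the Dockerfile from the beginning of this article, swapping in the corresponding echo.sh before each build, e.g.:

# docker build -t echo:v1 .    # with the echo:v1 script as echo.sh
# docker build -t echo:v2 .    # after replacing echo.sh with the echo:v2 script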
Start fluentd first, then echo:v1. fluentd replaces its default configuration file with the local /home/fluentd/fluentd.conf; fluentd-address points the container at fluentd's address. See the fluentd logging driver documentation for more options:
# docker run -it --rm -p 24224:24224 -v /home/fluentd/fluentd.conf:/fluentd/etc/fluentd.conf -e FLUENTD_CONF=fluentd.conf fluent/fluentd:latest
# docker run --rm --name=docker1 --log-driver=fluentd --log-opt tag="{{.Name}}" --log-opt fluentd-address=192.168.80.189:24224 echo:v1
fluentd binds to 0.0.0.0 by default, i.e. it accepts data arriving on any local interface IP, on the specified port 24224. At startup fluentd prints:
[info]: #0 listening port port=24224 bind="0.0.0.0"
The fluentd console shows the output forwarded from echo:v1; the docker1 prefix in the records below is the tag value set when the container was started. Docker supports tag templates, see Customize log driver output:
2019-01-29 07:46:24.000000000 +0000 docker1: {"container_name":"/docker1","source":"stdout","log":"docker1 -> 11111","container_id":"74c0af9defd10d33db0e197f0dd3af382a5c06a858f06bdd2f0f49e43bf0a25e"}
2019-01-29 07:46:24.000000000 +0000 docker1: {"container_id":"74c0af9defd10d33db0e197f0dd3af382a5c06a858f06bdd2f0f49e43bf0a25e","container_name":"/docker1","source":"stdout","log":"docker1,this is docker1"}
2019-01-29 07:46:24.000000000 +0000 docker1: {"container_id":"74c0af9defd10d33db0e197f0dd3af382a5c06a858f06bdd2f0f49e43bf0a25e","container_name":"/docker1","source":"stdout","log":"docker1,12132*)("}
In the setup above, if fluentd is not running, echo:v1 fails to start as well. Passing fluentd-async-connect when starting the container prevents a fluentd exit or absence from breaking the container; with the command below, the container starts even while fluentd is down:
docker run --rm --name=docker1 --log-driver=fluentd --log-opt tag="docker1.{{.Name}}" --log-opt fluentd-async-connect=true --log-opt fluentd-address=192.168.80.189:24224 echo:v1
The output above went straight to stdout, but a plugin can redirect it to a file instead. With the following configuration fluentd writes the logs under the /home/fluent directory; the match pattern matches the output of echo:v1 (tag="docker1.{{.Name}}") and thereby filters out the output of echo:v2:
# cat fluentd.conf
<source>
  @type forward
</source>
<match docker1.*>
  @type file
  path /home/fluent/
</match>
Start fluentd as follows:
# docker run -it --rm -p 24224:24224 -v /home/fluentd/fluentd.conf:/fluentd/etc/fluentd.conf -v /home/fluent:/home/fluent -e FLUENTD_CONF=fluentd.conf fluent/fluentd:latest
The generated log files show up under /home/fluent:
# ll
total 8
-rw-r--r--. 1 charlie charlie 2404 Jan 29 17:14 buffer.b58095399160f67b3b56a8f76791e3f1a.log
-rw-r--r--. 1 charlie charlie   68 Jan 29 17:14 buffer.b58095399160f67b3b56a8f76791e3f1a.log.meta
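The buffer.b*.log files are fluentd's buffer chunks, which are written out to the final output files whenever a chunk is flushed. How often that happens can be tuned with a flush_interval in the buffer section; a sketch (the path and interval are illustrative):

<match docker1.*>
  @type file
  path /home/fluent/docker1
  <buffer>
    flush_interval 10s
  </buffer>
</match>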
The above demonstrated displaying docker logs on fluentd's stdout and persisting them with the file plugin. In production, elasticsearch is generally used as the log store and search engine, with kibana providing the UI. Images and usage documentation for every elasticsearch and kibana version are available from docker.elastic.co; this walkthrough uses version 6.5 of both. Note: starting elasticsearch requires sysctl -w vm.max_map_count=262144.
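To make that kernel setting survive a reboot, it can also be written to /etc/sysctl.conf:

# sysctl -w vm.max_map_count=262144
# echo 'vm.max_map_count=262144' >> /etc/sysctl.conf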
To let fluentd write to elasticsearch, the elasticsearch plugin must be installed in the fluentd image; alternatively, pull a docker image that already bundles the plugin. If k8s.gcr.io/fluentd-elasticsearch is not reachable, a mirrored image can be pulled instead.
Use docker-compose to start elasticsearch, kibana and fluentd. The file layout is as follows:
# ll
-rw-r--r--. 1 root root 1287 Jan 31 16:51 docker-compose.yml
-rw-r--r--. 1 root root  196 Jan 30 11:56 elasticsearch.yml
-rw-r--r--. 1 root root  332 Jan 31 16:48 fluentd.conf
-rw-r--r--. 1 root root 1408 Jan 30 12:07 kibana.yml
Installing docker-compose on CentOS is covered in the official docker documentation. docker-compose.yml and the component configurations follow. All three services are attached to the same bridge network esnet; note that kibana.yml and fluentd.conf use the elasticsearch service name as the host. All kibana settings are listed in kibana.yml:
# cat docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    depends_on:
      - elasticsearch
    container_name: kibana
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    networks:
      - esnet
  fluentd:
    image: fluentd-elasticsearch:v2.4.0
    depends_on:
      - elasticsearch
    container_name: fluentd
    environment:
      - FLUENTD_CONF=fluentd.conf
    volumes:
      - ./fluentd.conf:/etc/fluent/fluent.conf
    ports:
      - 24224:24224
    networks:
      - esnet
volumes:
  esdata:
    driver: local
networks:
  esnet:
# cat elasticsearch.yml
cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
# cat kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# The Kibana server's name. This is used for display purposes.
server.name: "charlie"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
elasticsearch.pingTimeout: 5000

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
elasticsearch.startupTimeout: 10000

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
ops.interval: 5000
# cat fluentd.conf
<source>
  @type forward
</source>
<match **>
  @type elasticsearch
  log_level info
  include_tag_key true
  host elasticsearch
  port 9200
  logstash_format true
  chunk_limit_size 10M
  flush_interval 5s
  max_retry_wait 30
  disable_retry_limit
  num_threads 8
</match>
Start everything with:
# docker-compose up
Now start a container that logs to fluentd. Note: during testing, fluentd-async-connect=true can be left out in order to tell whether the container can actually connect to fluentd:
docker run -it --rm --name=docker1 --log-driver=fluentd --log-opt tag="fluent.{{.Name}}" --log-opt fluentd-async-connect=true --log-opt fluentd-address=127.0.0.1:24224 echo:v1
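Since fluentd.conf enables logstash_format, the records end up in daily logstash-YYYY.MM.DD indices in elasticsearch; before turning to kibana, their arrival can be verified with a plain curl:

# curl 'http://localhost:9200/_cat/indices?v'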
Open a local browser at kibana's default URL, http://localhost:5601; after creating an index pattern, the log lines printed by the echo:v1 container appear.

With kubernetes, fluentd is typically deployed as a DaemonSet on every node to collect that node's logs; it can also run as a sidecar collecting the logs of service containers in the same pod. See Logging Architecture for more.
TIPS:
- fluentd pushes data to elasticsearch in chunks; if a chunk is too large, elasticsearch may reject it because the payload is too big, so chunk_limit_size must be set appropriately, see the Fluentd-ElasticSearch configuration
- when fluentd in production sends logs directly to elasticsearch, fluentd's buffers can fill up if es does not consume the logs fast enough; it is advisable to put kafka between fluentd and es as a log buffer, as in the sketch below
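A minimal sketch of such a kafka hop on the fluentd side, assuming the fluent-plugin-kafka plugin is installed and a broker is reachable at kafka:9092 (both the broker address and the topic name are placeholders):

<match **>
  @type kafka2
  brokers kafka:9092
  default_topic docker-logs
  <format>
    @type json
  </format>
  <buffer topic>
    flush_interval 3s
  </buffer>
</match>

A second fluentd instance (or another consumer) then reads the topic and feeds elasticsearch at its own pace.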
References:
https://stackoverflow.com/questions/44002643/how-to-use-the-official-docker-elasticsearch-container
