I. Introduction
1. Fluentd is an open-source event and log collection system, used here to collect and process the log data from each node. For details, see the official site: http://fluentd.org/
2. Elasticsearch is an open-source search server based on Lucene. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. For details, see the official site: http://www.elasticsearch.org/overview/
3. Kibana is an open-source web UI tool for data visualization; with it you can search, visualize, and analyze logs efficiently. For details, see the official site: http://www.elasticsearch.org/overview/kibana/
II. Workflow
The fluentd agent on each node monitors and collects that node's system and container logs, processes them, and sends them to Elasticsearch. Elasticsearch aggregates the log data from all nodes, and Kibana provides the web UI for browsing and analyzing it.
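To make the flow concrete: on a Docker-based node the kubelet keeps per-container log symlinks under /var/log/containers, pointing at the raw JSON logs under /var/lib/docker/containers; the fluentd DaemonSet in the next section mounts exactly these two paths and tails them. This layout is an assumption that holds for Docker-based clusters of this generation; a quick look on any node:

# symlinks maintained by the kubelet, one per running container
ls -l /var/log/containers/ | head
# each entry resolves to a Docker JSON log file under /var/lib/docker/containers/<container-id>/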
III. Installation
1. Make sure the k8s cluster is working properly (this is, of course, a prerequisite...).
2. Write fluentd.yaml. To get a fluentd instance running on every node, simply set kind to DaemonSet.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
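Once the DaemonSet has been created (step 5 below), you can confirm that a fluentd pod is running on every node; the label name=fluentd-elasticsearch comes from the pod template above. A quick check:

kubectl get daemonset fluentd-elasticsearch -n kube-system
# expect one pod per node
kubectl get pods -n kube-system -l name=fluentd-elasticsearch -o wide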
3. elasticsearch-rc.yaml & elasticsearch-svc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google-containers/elasticsearch:v2.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
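Inside the cluster, this Service makes Elasticsearch reachable at http://elasticsearch-logging:9200, which is also the address Kibana is pointed at in the next step. Once the pods are up, a rough health check from your workstation through the apiserver proxy (same path format as the KIBANA_BASE_URL used below; adjust for your cluster and kubectl version):

kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging
kubectl proxy &
curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty'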
4. kibana-rc.yaml & kibana-svc.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google-containers/kibana:v4.6.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
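With this Service in place, the Kibana UI is exposed under the apiserver proxy path configured in KIBANA_BASE_URL above. A sketch of how to reach it locally, assuming kubectl proxy's default port 8001:

kubectl get pods -n kube-system -l k8s-app=kibana-logging
kubectl proxy &
# then open in a browser:
# http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/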
5. Run kubectl create -f ****** for each of the files above; fill in the file names yourself (a reference sequence follows below).
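For reference, one possible sequence using the file names from the steps above (the actual names are whatever you saved the manifests as), plus a final check that fluentd is actually writing indices into Elasticsearch:

kubectl create -f fluentd.yaml
kubectl create -f elasticsearch-rc.yaml
kubectl create -f elasticsearch-svc.yaml
kubectl create -f kibana-rc.yaml
kubectl create -f kibana-svc.yaml
kubectl get pods -n kube-system
# the fluentd-elasticsearch image is assumed to write daily logstash-YYYY.MM.DD indices;
# list them through the apiserver proxy (kubectl proxy must be running)
curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'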
It is recommended to use the latest images; take a look at the github/kubernetes repository, which has detailed documentation.