K8s: Building an ELK Log Collection System on Kubernetes


  A previous note recorded how to use filebeat to collect host and container logs.

  Reference: https://www.cnblogs.com/minseo/p/12469176.html

  That note ran elasticsearch and kibana directly on hosts; this article records the process of building the entire ELK stack with containers on K8s.

  Prerequisites

  1. A K8s cluster; setup reference: https://www.cnblogs.com/minseo/p/12361731.html
  2. A glusterfs cluster managed by Heketi; setup reference: https://www.cnblogs.com/minseo/p/12575604.html

 

  Download the images. To speed up deployment, push them to a private registry (Harbor) after downloading.

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.6.2
docker pull docker.elastic.co/kibana/kibana:6.6.2
docker pull docker.elastic.co/logstash/logstash:6.6.2
docker pull docker.elastic.co/beats/filebeat:6.6.2
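
A minimal sketch of re-tagging and pushing one of these images to the private registry (the registry address 192.168.1.11/project is taken from the manifests later in this article; the tags used there vary, so adjust to match, and this assumes you are already logged in via docker login):

docker tag docker.elastic.co/logstash/logstash:6.6.2 192.168.1.11/project/logstash:6.6.2
docker push 192.168.1.11/project/logstash:6.6.2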

  1. Setting up elasticsearch + kibana

  The elasticsearch configuration file:

# cat elasticsearch.yml 
cluster.name: my-es
#node.name: dev-es-kibana
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
#IP addresses of the cluster nodes; names such as els or els.shuaiguoxia.com can also be used, as long as every node can resolve them
#discovery.zen.ping.unicast.hosts: ["172.16.30.11", "172.17.77.12"]
#minimum number of master-eligible nodes
#discovery.zen.minimum_master_nodes: 2
#extra parameters so the head plugin can access es
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

  The kibana configuration file:

  Note that the host kibana connects to is a DNS name, resolving to the Pod created by the StatefulSet.

# cat kibana.yml 
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://es-kibana-0.es-kibana.kube-system:9200"
kibana.index: ".kibana"
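
The name follows the StatefulSet Pod DNS convention <pod-name>.<service-name>.<namespace>, i.e. es-kibana-0.es-kibana.kube-system; it only resolves after the headless service created later in this article exists. As a hedged sketch, resolution can be tested from inside the cluster with a throwaway pod (assuming the busybox:1.28 image is pullable):

kubectl run -n kube-system dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup es-kibana-0.es-kibana.kube-system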

  Create ConfigMaps from the elasticsearch and kibana configuration files.

  The log collection system is placed in the kube-system namespace.

kubectl create configmap es-config -n kube-system --from-file=elasticsearch.yml 
kubectl create configmap kibana-config -n kube-system --from-file=kibana.yml 
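
A quick check that both ConfigMaps exist:

kubectl get configmap -n kube-system es-config kibana-config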

  The es storage PVC configuration file:

# cat es-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pv-claim
  namespace: kube-system
  labels:
    app: es
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "gluster-heketi-storageclass"
  resources:
    requests:
      storage: 200Gi

  Prerequisite: the StorageClass must be created in advance; this article uses glusterfs storage managed by heketi.

  Create the PVC:

 kubectl apply -f es-pvc.yaml 

  Check the created pvc:

# kubectl get pvc -n kube-system
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
es-pv-claim   Bound    pvc-5fa351ee-730f-4e2c-9b62-8091532ed408   200Gi      RWX            gluster-heketi-storageclass   23h

  Create the es-kibana yaml configuration file:

# cat es-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: es-kibana
  name: es-kibana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: es-kibana
  serviceName: "es-kibana"
  template:
    metadata:
      labels:
        app: es-kibana
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - image: 192.168.1.11/project/elasticsearch:kube-system
        imagePullPolicy: Always
        name: elasticsearch
        resources:
          requests:
            memory: "4Gi"
            cpu: "1000m"
          limits:        
            memory: "8Gi"
            cpu: "2000m"
        volumeMounts:
        - name: es-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      - image: 192.168.1.11/project/kibana:kube-system
        imagePullPolicy: Always
        name: kibana
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: es-config
        configMap:
          name: es-config
      - name: kibana-config
        configMap:
          name: kibana-config
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: es-pv-claim
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
       kubernetes.io/hostname: 172.16.30.1

  Create the es-kibana application:

 kubectl apply -f es-statefulset.yaml 

  Check:

# kubectl get pod -n kube-system es-kibana-0
NAME          READY   STATUS    RESTARTS   AGE
es-kibana-0   2/2     Running   0          22h

  If it runs normally without errors, elasticsearch and kibana were created successfully.

  Test whether elasticsearch and kibana are working.

  First, get the Pod's IP:
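
The standard way to look it up (the IP 172.17.77.7 used below comes from this output):

kubectl get pod -n kube-system es-kibana-0 -o wide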

 

  Use the curl command to test that elasticsearch is healthy:

# curl 172.17.77.7:9200
{
  "name" : "kqddSiw",
  "cluster_name" : "my-es",
  "cluster_uuid" : "1YTsqP6mTfKLtUrzEcx7zg",
  "version" : {
    "number" : "6.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "3bd3e59",
    "build_date" : "2019-03-06T15:16:26.864148Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

  At this point kibana will report an error connecting to elasticsearch, because the headless clusterIP svc for es-kibana, which backs the DNS name in kibana.yml, has not been created yet:

# cat es-cluster-none-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kibana
  name: es-kibana
  namespace: kube-system
spec:
  ports:
  - name: es9200
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: es9300
    port: 9300
    protocol: TCP
    targetPort: 9300
  clusterIP: None
  selector:
    app: es-kibana
  type: ClusterIP

  This yaml configuration file can be generated with the following command:

 kubectl create service clusterip es-kibana -n kube-system --clusterip=None --tcp=9200:9200 --tcp=9300:9300 --dry-run -o yaml

  Command breakdown:

kubectl create service clusterip  # create a ClusterIP service
es-kibana                         # name, matching the StatefulSet (es-kibana)
-n kube-system                    # namespace
--clusterip=None                  # None makes it headless; Pods are reached via internal DNS names
--tcp=9200:9200                   # mapped ports
--tcp=9300:9300
--dry-run                         # validate the command without creating anything
-o yaml                           # output yaml

  Create the svc:

kubectl apply -f es-cluster-none-svc.yaml 

  Once created, kibana can connect to elasticsearch normally.

  For convenient access, create a NodePort service:

# cat es-nodeport-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kibana
  name: es-kibana-nodeport-svc
  namespace: kube-system
spec:
  ports:
  - name: 9200-9200
    port: 9200
    protocol: TCP
    targetPort: 9200
    #nodePort: 9200
  - name: 5601-5601
    port: 5601
    protocol: TCP
    targetPort: 5601
    #nodePort: 5601
  selector:
    app: es-kibana
  type: NodePort

  Create the NodePort svc:

 kubectl apply -f es-nodeport-svc.yaml 

  Check the randomly assigned port:
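
For example (the NodePort values are assigned randomly):

kubectl get svc -n kube-system es-kibana-nodeport-svc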

 

  Access it via node IP + port; here the port is 51652.

  The page rendering normally is enough to confirm it works.

 

 

  2. Creating the logstash service

  The logstash.yml configuration file.

  Output to es is configured via the internal DNS name:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://es-kibana-0.es-kibana.kube-system:9200

  The logstash.conf configuration file:

# cat logstash.conf 
input {
  beats {
     port => 5044
  }
}
  
filter {
 #Required: otherwise host is a JSON object rather than plain text and cannot be written to elasticsearch
  mutate {
    rename => { "[host][name]" => "host" }
  }
}
  
output {
           elasticsearch {
              hosts => ["http://es-kibana-0.es-kibana.kube-system:9200"]
              index => "k8s-system-log-%{+YYYY.MM.dd}"
           }
          stdout{
              codec => rubydebug
           }
}

  Create the two ConfigMaps:

kubectl create configmap logstash-yml-config -n kube-system  --from-file=logstash.yml 
kubectl create configmap logstash-config -n kube-system  --from-file=logstash.conf 

  The logstash yaml configuration file:

# cat logstash-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: logstash
  name: logstash
  namespace: kube-system
spec:
  serviceName: "logstash"
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - image: 192.168.1.11/project/logstash:6.6.2
        name: logstash
        volumeMounts:
        - name: logstash-yml-config
          mountPath: /usr/share/logstash/config/logstash.yml
          subPath: logstash.yml
        - name: logstash-config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          subPath: logstash.conf
      volumes:
      - name: logstash-yml-config
        configMap:
          name: logstash-yml-config
      - name: logstash-config
        configMap:
          name: logstash-config
      nodeSelector:
       kubernetes.io/hostname: 172.16.30.1

  Create the logstash application:

 kubectl apply -f logstash-statefulset.yaml 

  Check:
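
For example:

kubectl get pod -n kube-system logstash-0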

 

  If the logs show no errors and the Pod's port 5044 is reachable, logstash started normally.
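
A hedged way to verify both (a sketch; replace the placeholder with the Pod IP from kubectl get pod -o wide, and run nc from a node that can reach the Pod network):

kubectl logs -n kube-system logstash-0 --tail=20
nc -zv <logstash-pod-ip> 5044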

 

  3. Creating the filebeat service

  The filebeat.yml configuration file:

# cat filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /messages
  fields:
    app: k8s
    type: module
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["logstash-0.logstash.kube-system:5044"]
processors:
  - add_host_metadata:
  - add_cloud_metadata:

  Explanation:

  The log path inside the container is /messages; the corresponding host path must be mounted there when the Pod starts.

  The output address uses K8s internal DNS (here resolving the logstash service).

  Create the filebeat configmap:

 kubectl create configmap filebeat-config -n kube-system --from-file=filebeat.yml

  The filebeat yaml file:

# cat filebeat-daemonset.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: filebeat
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - image: 192.168.1.11/project/filebeat:6.6.2
        name: filebeat
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-system-logs
          mountPath: /messages
        #start filebeat with the specified config file
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        #Set the user ID the container runs as; 0 means root
        #Without this, the container runs as the default filebeat user and cannot read the log file
        #It also makes kubectl exec log in to the container as root
        securityContext:
          runAsUser: 0
      volumes:
      - name: filebeat-config
        configMap:
          name: filebeat-config
      #Mount the host log file /var/log/messages into the container
      - name: k8s-system-logs
        hostPath:
          path: /var/log/messages
          type: File

  This deployment uses a DaemonSet so that exactly one Pod is scheduled on each node, collecting that node's /var/log/messages log.

  Start it:

kubectl apply -f filebeat-daemonset.yaml 

  Check the started Pods; one runs on each node, as the command below shows.
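
kubectl get pod -n kube-system -l app=filebeat -o wide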

  After adding the log index in kibana, the logs can be viewed.
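
The index pattern to add matches the index name defined in logstash.conf, i.e. k8s-system-log-* (in Kibana 6.x: Management -> Index Patterns -> Create index pattern).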

 

  Note: with this deployment, the host field in the logs is the Pod's hostname rather than the node's original hostname, so the original host name is lost from the logs; I have not yet found a solution to this problem.
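
One possible direction, untested here and an assumption rather than a verified fix: running the filebeat DaemonSet on the host network makes the container's hostname match the node's, so the reported host would be the node name. The sketch below shows the two fields that would be added to the DaemonSet Pod spec (the same pair appears commented out in the es-kibana StatefulSet above):

      #hypothetical: share the node's network namespace so the Pod hostname equals the node hostname
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet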

 

