Cilium Routing Modes (Part 1)


Encapsulation

Official documentation

This is the default mode: Cilium automatically runs in encapsulation mode because it places the fewest requirements on the underlying network infrastructure (cloud providers in particular). As long as the cluster nodes have IP connectivity with each other, the encapsulation is enough for Pod-to-Pod communication.

In this mode, all cluster nodes form a mesh of tunnels using the UDP-based encapsulation protocols VXLAN or Geneve. All traffic between Cilium nodes is encapsulated.

Two encapsulation protocols are supported:

  1. VXLAN
  2. Geneve
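
The protocol is selected at install time through the tunnel Helm value (the agent's --tunnel flag); vxlan is the default. As a minimal sketch, switching an existing install over to Geneve could look like this, assuming the cilium/cilium release installed later in this article:

    # Switch the tunnel protocol of an existing release; --reuse-values keeps
    # the other settings from the original install.
    helm upgrade cilium cilium/cilium --namespace kube-system \
        --reuse-values --set tunnel=geneve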

If there is a firewall between your nodes, pay attention to these ports:

Encapsulation Mode    Port Range / Protocol
VXLAN (Default)       8472/UDP
Geneve                6081/UDP
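
If the node firewall is managed with iptables, the allow rules below are a minimal sketch; the source CIDR 10.1.0.0/16 is an assumption for illustration, so substitute your actual node network (cloud security groups need the equivalent rules):

    # Allow the VXLAN overlay between cluster nodes (default port):
    iptables -A INPUT -p udp --dport 8472 -s 10.1.0.0/16 -j ACCEPT
    # For Geneve, allow 6081/UDP instead:
    # iptables -A INPUT -p udp --dport 6081 -s 10.1.0.0/16 -j ACCEPT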

Advantages of encapsulation

  • Simplicity

The network connecting the cluster nodes does not need to be aware of the PodCIDRs. Cluster nodes can span multiple routing or link-layer domains. The topology of the underlying network is irrelevant as long as the cluster nodes can reach each other over IP/UDP.

  • Addressing space

Since the mode does not depend on any limitations of the underlying network, the available addressing space is potentially much larger, and an arbitrary number of Pods can run on each node if the PodCIDR size is configured accordingly.

  • Auto-configuration

When run together with an orchestration system such as Kubernetes, the list of all nodes in the cluster, including their allocated prefixes, is automatically made available to every cilium-agent, and nodes joining the cluster are automatically merged into the mesh.

  • Identity context

Encapsulation protocols allow metadata to be carried along with the network packet. Cilium uses this ability to transport metadata such as the source security identity. Carrying the identity is an optimization that avoids one identity lookup on the remote node.
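
With VXLAN, Cilium encodes the source security identity in the VNI field of the tunnel header. As a hedged observation sketch, you can watch identities in action from inside the cilium-agent pod (<cilium-pod> is a placeholder for the agent pod name):

    # Trace events include the numeric source identity of each forwarded packet:
    kubectl -n kube-system exec <cilium-pod> -- cilium monitor --type trace
    # Map numeric identities back to their label sets:
    kubectl -n kube-system exec <cilium-pod> -- cilium identity list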

Disadvantages of encapsulation

  • MTU Overhead

Because of the added encapsulation header, the MTU available to payload is lower than with native routing (50 bytes per network packet for VXLAN). This results in a lower maximum throughput for a particular network connection. It can be largely mitigated by enabling jumbo frames (50 bytes of overhead per 9000 bytes instead of per 1500 bytes).
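
The 50-byte figure is the sum of the outer Ethernet (14), IPv4 (20), UDP (8) and VXLAN (8) headers. A quick sanity check on a node, as a sketch (<some-pod> is a placeholder):

    # An underlay MTU of 1500 leaves 1500 - 50 = 1450 bytes for Pod traffic
    # (8950 with 9000-byte jumbo frames).
    ip link show dev cilium_vxlan
    kubectl exec <some-pod> -- ip link show eth0   # typically reports mtu 1450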

Configuration

  1. Install Cilium in encapsulation (VXLAN) mode with Helm:
    helm install cilium cilium/cilium --version 1.9.9 \
        --namespace kube-system \
        --set tunnel=vxlan \
        --set kubeProxyReplacement=strict \
        --set ipam.mode=kubernetes \
        --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
        --set ipam.operator.clusterPoolIPv4MaskSize=26 \
        --set k8sServiceHost=apiserver.qiangyun.com \
        --set k8sServicePort=6443
  2. Inspect the routing table on a node:
    <root@PROD-FE-K8S-WN1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.16.253     0.0.0.0         UG        0 0          0 eth0
    10.1.16.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.0.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.1.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.2.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.3.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.4.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.5.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.6.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.7.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.8.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.9.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.9.225    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.10.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.11.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.12.0     172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
    172.21.12.64    172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
    172.21.12.128   172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
    172.21.12.192   172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
    172.21.13.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.14.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.15.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host

    # Notes
    Every node gets a CiliumInternalIP (172.21.9.225 on this node).
    Every node gets its own IPAM PodCIDR.
    Every node gets a health-check address.
    All of these addresses show up in the CiliumNode resource below.
  3. The CiliumNode CRD, where these addresses are reflected:
    spec:
      addresses:
        - ip: 10.1.16.221
          type: InternalIP
        - ip: 172.21.9.225
          type: CiliumInternalIP
      azure: {}
      encryption: {}
      eni: {}
      health:
        ipv4: 172.21.9.190
      ipam:
        podCIDRs:
          - 172.21.9.0/24
  4. A brief look at how the communication works:
    <root@PROD-FE-K8S-WN1 ~># ifconfig 
    cilium_host: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
            inet 172.21.9.225  netmask 255.255.255.255  broadcast 0.0.0.0
            ether 22:cb:9d:23:d8:48  txqueuelen 1000  (Ethernet)
            RX packets 4665  bytes 356292 (347.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 273  bytes 19841 (19.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    cilium_net: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
            ether 26:7f:1f:99:b5:db  txqueuelen 1000  (Ethernet)
            RX packets 273  bytes 19841 (19.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4665  bytes 356292 (347.9 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    cilium_vxlan: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            ether 02:16:be:c2:2f:2f  txqueuelen 1000  (Ethernet)
            RX packets 10023  bytes 634132 (619.2 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9979  bytes 629067 (614.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    # Notes
    cilium_host behaves like a router, i.e. a gateway device.
    cilium_net & cilium_host come as a pair, like a veth pair: one end faces the containers, the other the host.
    cilium_vxlan is the virtual Layer 2 overlay device that performs the VXLAN encapsulation for cross-node Pod traffic.
    (The verification sketch after this list shows commands that confirm this wiring.)
  5. Although Cilium defaults to encapsulation, host routing still uses the BPF mode, as shown below:
    root@PROD-FE-K8S-WN1:/home/cilium# cilium status --verbose
    KVStore:                Ok   Disabled
    Kubernetes:             Ok   1.18 (v1.18.5) [linux/amd64]
    Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
    Cilium:                 Ok   1.9.9 (v1.9.9-5bcf83c)
    NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
    Cilium health daemon:   Ok   
    IPAM:                   IPv4: 2/255 allocated from 172.21.9.0/24,
      Allocated addresses:  172.21.9.225 (router), 172.21.9.26 (health)
    BandwidthManager:       Disabled
    Host Routing:           BPF
    Masquerading:           BPF   [eth0]   172.21.9.0/24
    Clock Source for BPF:   ktime
    Controller Status:      18/18 healthy
      Name                                  Last success   Last error   Count   Message
      cilium-health-ep                      33s ago        never        0       no error   
      dns-garbage-collector-job             41s ago        never        0       no error   
      endpoint-2159-regeneration-recovery   never          never        0       no error   
      endpoint-3199-regeneration-recovery   never          never        0       no error   
      k8s-heartbeat                         11s ago        never        0       no error   
      mark-k8s-node-as-available            48m34s ago     never        0       no error   
      metricsmap-bpf-prom-sync              6s ago         never        0       no error   
      neighbor-table-refresh                3m34s ago      never        0       no error   
      resolve-identity-2159                 3m34s ago      never        0       no error   
      resolve-identity-3199                 3m33s ago      never        0       no error   
      sync-endpoints-and-host-ips           34s ago        never        0       no error   
      sync-lb-maps-with-k8s-services        48m34s ago     never        0       no error   
      sync-policymap-2159                   31s ago        never        0       no error   
      sync-policymap-3199                   31s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (2159)     4s ago         never        0       no error   
      sync-to-k8s-ciliumendpoint (3199)     13s ago        never        0       no error   
      template-dir-watcher                  never          never        0       no error   
      update-k8s-node-annotations           48m40s ago     never        0       no error   
    Proxy Status:   OK, ip 172.21.9.225, 0 redirects active on ports 10000-20000
    Hubble:         Ok   Current/Max Flows: 4096/4096 (100.00%), Flows/s: 5.39   Metrics: Disabled
    KubeProxyReplacement Details:
      Status:              Strict
      Protocols:           TCP, UDP
      Devices:             eth0 (Direct Routing)
      Mode:                SNAT (the kube-proxy replacement mode; SNAT is the default)
      Backend Selection:   Random
      Session Affinity:    Enabled
      XDP Acceleration:    Disabled
      Services:
      - ClusterIP:      Enabled
      - NodePort:       Enabled (Range: 30000-32767) 
      - LoadBalancer:   Enabled 
      - externalIPs:    Enabled 
      - HostPort:       Enabled
    BPF Maps:   dynamic sizing: on (ratio: 0.002500)
      Name                          Size
      Non-TCP connection tracking   65536
      TCP connection tracking       131072
      Endpoint policy               65535
      Events                        2
      IP cache                      512000
      IP masquerading agent         16384
      IPv4 fragmentation            8192
      IPv4 service                  65536
      IPv6 service                  65536
      IPv4 service backend          65536
      IPv6 service backend          65536
      IPv4 service reverse NAT      65536
      IPv6 service reverse NAT      65536
      Metrics                       1024
      NAT                           131072
      Neighbor table                131072
      Global policy                 16384
      Per endpoint policy           65536
      Session affinity              65536
      Signal                        2
      Sockmap                       65535
      Sock reverse NAT              65536
      Tunnel                        65536
    Cluster health:                 19/19 reachable   (2021-08-27T17:54:39Z)
      Name                          IP                Node        Endpoints
      prod-fe-k8s-wn1 (localhost)   10.1.16.221       reachable   reachable
      prod-be-k8s-wn1               10.1.17.231       reachable   reachable
      prod-be-k8s-wn2               10.1.17.232       reachable   reachable
      prod-be-k8s-wn6               10.1.17.236       reachable   reachable
      prod-be-k8s-wn7               10.1.17.237       reachable   reachable
      prod-be-k8s-wn8               10.1.17.238       reachable   reachable
      prod-data-k8s-wn1             10.1.18.50        reachable   reachable
      prod-data-k8s-wn2             10.1.18.49        reachable   reachable
      prod-data-k8s-wn3             10.1.18.51        reachable   reachable
      prod-fe-k8s-wn2               10.1.16.222       reachable   reachable
      prod-fe-k8s-wn3               10.1.16.223       reachable   reachable
      prod-k8s-cp1                  10.1.0.5          reachable   reachable
      prod-k8s-cp2                  10.1.0.7          reachable   reachable
      prod-k8s-cp3                  10.1.0.6          reachable   reachable
      prod-sys-k8s-wn1              10.1.0.8          reachable   reachable
      prod-sys-k8s-wn2              10.1.0.9          reachable   reachable
      prod-sys-k8s-wn3              10.1.0.11         reachable   reachable
      prod-sys-k8s-wn4              10.1.0.10         reachable   reachable
      prod-sys-k8s-wn5              10.1.0.12         reachable   reachable
  6. The detailed startup log of the cilium-agent:
    <root@PROD-FE-K8S-WN1 ~># dps
    6fdce5c6148b    Up 51 minutes   k8s_pord-ingress_prod-ingress-b76597794-tmrtc_ingress-nginx_a63d92fe-5c99-4948-89ca-fd70d2298f99_3
    43686a967be8    Up 52 minutes   k8s_cilium-agent_cilium-cgrdw_kube-system_14e0fb48-cc56-46d8-b929-64f66b36c6b7_2
    <root@PROD-FE-K8S-WN1 ~># docker logs -f 43686a967be8
    level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
    level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
    level=info msg="Memory available for map entries (0.003% of 3976814592B): 9942036B" subsys=config
    level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
    level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
    level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
    level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
    level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
    level=info msg="  --agent-health-port='9876'" subsys=daemon
    level=info msg="  --agent-labels=''" subsys=daemon
    level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
    level=info msg="  --allow-localhost='auto'" subsys=daemon
    level=info msg="  --annotate-k8s-node='true'" subsys=daemon
    level=info msg="  --api-rate-limit='map[]'" subsys=daemon
    level=info msg="  --arping-refresh-period='5m0s'" subsys=daemon
    level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
    level=info msg="  --auto-direct-node-routes='false'" subsys=daemon   # DSR mode is off here (it only works with native routing)
    level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
    level=info msg="  --bpf-compile-debug='false'" subsys=daemon
    level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
    level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
    level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
    level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
    level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
    level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
    level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
    level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
    level=info msg="  --bpf-root=''" subsys=daemon
    level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
    level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
    level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
    level=info msg="  --cluster-id=''" subsys=daemon
    level=info msg="  --cluster-name='default'" subsys=daemon
    level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
    level=info msg="  --cmdref=''" subsys=daemon
    level=info msg="  --config=''" subsys=daemon
    level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
    level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
    level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
    level=info msg="  --datapath-mode='veth'" subsys=daemon
    level=info msg="  --debug='false'" subsys=daemon
    level=info msg="  --debug-verbose=''" subsys=daemon
    level=info msg="  --device=''" subsys=daemon
    level=info msg="  --devices=''" subsys=daemon
    level=info msg="  --direct-routing-device=''" subsys=daemon
    level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
    level=info msg="  --disable-conntrack='false'" subsys=daemon
    level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
    level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
    level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
    level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
    level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon   # one of the two masquerading flavors (iptables-based); the other is eBPF-based, and when both are enabled the eBPF-based one takes priority
    level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
    level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
    level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
    level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
    level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
    level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
    level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
    level=info msg="  --enable-endpoint-routes='false'" subsys=daemon   # an alternative routing mode under native routing, covered later; it cannot be combined with the former, otherwise an error is reported
    level=info msg="  --enable-external-ips='true'" subsys=daemon
    level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
    level=info msg="  --enable-health-checking='true'" subsys=daemon
    level=info msg="  --enable-host-firewall='false'" subsys=daemon
    level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
    level=info msg="  --enable-host-port='true'" subsys=daemon
    level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
    level=info msg="  --enable-hubble='true'" subsys=daemon
    level=info msg="  --enable-identity-mark='true'" subsys=daemon
    level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
    level=info msg="  --enable-ipsec='false'" subsys=daemon
    level=info msg="  --enable-ipv4='true'" subsys=daemon
    level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
    level=info msg="  --enable-ipv6='false'" subsys=daemon
    level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
    level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
    level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
    level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
    level=info msg="  --enable-l7-proxy='true'" subsys=daemon
    level=info msg="  --enable-local-node-route='true'" subsys=daemon
    level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
    level=info msg="  --enable-monitor='true'" subsys=daemon
    level=info msg="  --enable-node-port='false'" subsys=daemon
    level=info msg="  --enable-policy='default'" subsys=daemon
    level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
    level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
    level=info msg="  --enable-session-affinity='true'" subsys=daemon
    level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
    level=info msg="  --enable-tracing='false'" subsys=daemon
    level=info msg="  --enable-well-known-identities='false'" subsys=daemon
    level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
    level=info msg="  --encrypt-interface=''" subsys=daemon
    level=info msg="  --encrypt-node='false'" subsys=daemon
    level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
    level=info msg="  --endpoint-queue-size='25'" subsys=daemon
    level=info msg="  --endpoint-status=''" subsys=daemon
    level=info msg="  --envoy-log=''" subsys=daemon
    level=info msg="  --exclude-local-address=''" subsys=daemon
    level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
    level=info msg="  --flannel-master-device=''" subsys=daemon
    level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
    level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
    level=info msg="  --gops-port='9890'" subsys=daemon
    level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
    level=info msg="  --http-403-msg=''" subsys=daemon
    level=info msg="  --http-idle-timeout='0'" subsys=daemon
    level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
    level=info msg="  --http-normalize-path='true'" subsys=daemon
    level=info msg="  --http-request-timeout='3600'" subsys=daemon
    level=info msg="  --http-retry-count='3'" subsys=daemon
    level=info msg="  --http-retry-timeout='0'" subsys=daemon
    level=info msg="  --hubble-disable-tls='false'" subsys=daemon
    level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
    level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
    level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
    level=info msg="  --hubble-metrics=''" subsys=daemon
    level=info msg="  --hubble-metrics-server=''" subsys=daemon
    level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
    level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
    level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
    level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
    level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
    level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
    level=info msg="  --install-iptables-rules='true'" subsys=daemon
    level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
    level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
    level=info msg="  --ipam='kubernetes'" subsys=daemon   # the IPAM mode; several exist and the upstream default is cluster-pool. The kubernetes mode takes Pod IPs from the Node's PodCIDR and relies on the controller-manager flag --allocate-node-cidrs
    level=info msg="  --ipsec-key-file=''" subsys=daemon
    level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
    level=info msg="  --iptables-random-fully='false'" subsys=daemon
    level=info msg="  --ipv4-node='auto'" subsys=daemon
    level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv4-range='auto'" subsys=daemon
    level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
    level=info msg="  --ipv4-service-range='auto'" subsys=daemon
    level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
    level=info msg="  --ipv6-mcast-device=''" subsys=daemon
    level=info msg="  --ipv6-node='auto'" subsys=daemon
    level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv6-range='auto'" subsys=daemon
    level=info msg="  --ipv6-service-range='auto'" subsys=daemon
    level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
    level=info msg="  --join-cluster='false'" subsys=daemon
    level=info msg="  --k8s-api-server=''" subsys=daemon
    level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
    level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
    level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
    level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
    level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
    level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
    level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
    level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
    level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
    level=info msg="  --keep-config='false'" subsys=daemon
    level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
    level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
    level=info msg="  --kvstore=''" subsys=daemon
    level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
    level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
    level=info msg="  --kvstore-opt='map[]'" subsys=daemon
    level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
    level=info msg="  --label-prefix-file=''" subsys=daemon
    level=info msg="  --labels=''" subsys=daemon
    level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
    level=info msg="  --log-driver=''" subsys=daemon
    level=info msg="  --log-opt='map[]'" subsys=daemon
    level=info msg="  --log-system-load='false'" subsys=daemon
    level=info msg="  --masquerade='true'" subsys=daemon
    level=info msg="  --max-controller-interval='0'" subsys=daemon
    level=info msg="  --metrics=''" subsys=daemon
    level=info msg="  --monitor-aggregation='medium'" subsys=daemon
    level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
    level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
    level=info msg="  --monitor-queue-size='0'" subsys=daemon
    level=info msg="  --mtu='0'" subsys=daemon
    level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
    level=info msg="  --native-routing-cidr=''" subsys=daemon
    level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
    level=info msg="  --node-port-algorithm='random'" subsys=daemon
    level=info msg="  --node-port-bind-protection='true'" subsys=daemon
    level=info msg="  --node-port-mode='snat'" subsys=daemon
    level=info msg="  --node-port-range='30000,32767'" subsys=daemon
    level=info msg="  --policy-audit-mode='false'" subsys=daemon
    level=info msg="  --policy-queue-size='100'" subsys=daemon
    level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
    level=info msg="  --pprof='false'" subsys=daemon
    level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
    level=info msg="  --prefilter-device='undefined'" subsys=daemon
    level=info msg="  --prefilter-mode='native'" subsys=daemon
    level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
    level=info msg="  --prometheus-serve-addr=''" subsys=daemon
    level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
    level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
    level=info msg="  --read-cni-conf=''" subsys=daemon
    level=info msg="  --restore='true'" subsys=daemon
    level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
    level=info msg="  --single-cluster-route='false'" subsys=daemon
    level=info msg="  --skip-crd-creation='false'" subsys=daemon
    level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
    level=info msg="  --sockops-enable='false'" subsys=daemon
    level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
    level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
    level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
    level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
    level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
    level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
    level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
    level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
    level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
    level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
    level=info msg="  --trace-payloadlen='128'" subsys=daemon
    level=info msg="  --tunnel='vxlan'" subsys=daemon
    level=info msg="  --version='false'" subsys=daemon
    level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
    level=info msg="     _ _ _" subsys=daemon
    level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
    level=info msg="|  _| | | | | |     |" subsys=daemon
    level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
    level=info msg="Cilium 1.9.9 5bcf83c 2021-07-19T16:45:00-07:00 go version go1.15.14 linux/amd64" subsys=daemon
    level=info msg="cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL" subsys=daemon
    level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
    level=info msg="linking environment: OK!" subsys=linux-datapath
    level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
    level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
    level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
    level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
    level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
    level=info msg=" - reserved:.*" subsys=labels-filter
    level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
    level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
    level=info msg=" - :app.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:io.kubernetes" subsys=labels-filter
    level=info msg=" - !:kubernetes.io" subsys=labels-filter
    level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:k8s.io" subsys=labels-filter
    level=info msg=" - !:pod-template-generation" subsys=labels-filter
    level=info msg=" - !:pod-template-hash" subsys=labels-filter
    level=info msg=" - !:controller-revision-hash" subsys=labels-filter
    level=info msg=" - !:annotation.*" subsys=labels-filter
    level=info msg=" - !:etcd_node" subsys=labels-filter
    level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
    level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.221.0.0/16
    level=info msg="Initializing daemon" subsys=daemon
    level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
    level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.16.221 mtu=1500 subsys=mtu
    level=info msg="Restored services from maps" failed=0 restored=11 subsys=service
    level=info msg="Reading old endpoints..." subsys=daemon
    level=info msg="No old endpoints found." subsys=daemon
    level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
    level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
    level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
    level=info msg="Retrieved node information from kubernetes node" nodeName=prod-fe-k8s-wn1 subsys=k8s
    level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.16.221 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.16.221 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/env:prod kubernetes.io/hostname:prod-fe-k8s-wn1 kubernetes.io/ingress:prod kubernetes.io/os:linux kubernetes.io/resource:prod-fe node-role.kubernetes.io/worker:worker topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h]" nodeName=prod-fe-k8s-wn1 subsys=k8s v4Prefix=172.21.9.0/24 v6Prefix="<nil>"
    level=info msg="Restored router IPs from node information" ipv4=172.21.9.225 ipv6="<nil>" subsys=k8s
    level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
    level=info msg="Using auto-derived devices to attach Loadbalancer, Host Firewall or Bandwidth Manager program" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
    level=info msg="Enabling k8s event listener" subsys=k8s-watcher
    level=info msg="Removing stale endpoint interfaces" subsys=daemon
    level=info msg="Skipping kvstore configuration" subsys=daemon
    level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.9.225 ipv6="<nil>" subsys=node
    level=info msg="Initializing node addressing" subsys=daemon
    level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=172.21.9.0/24 v6Prefix="<nil>"
    level=info msg="Restoring endpoints..." subsys=daemon
    level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
    level=info msg="Addressing information:" subsys=daemon
    level=info msg="  Cluster-Name: default" subsys=daemon
    level=info msg="  Cluster-ID: 0" subsys=daemon
    level=info msg="  Local node-name: prod-fe-k8s-wn1" subsys=daemon
    level=info msg="  Node-IPv6: <nil>" subsys=daemon
    level=info msg="  External-Node IPv4: 10.1.16.221" subsys=daemon
    level=info msg="  Internal-Node IPv4: 172.21.9.225" subsys=daemon
    level=info msg="  IPv4 allocation prefix: 172.21.9.0/24" subsys=daemon
    level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
    level=info msg="  Local IPv4 addresses:" subsys=daemon
    level=info msg="  - 10.1.16.221" subsys=daemon
    level=info msg="Creating or updating CiliumNode resource" node=prod-fe-k8s-wn1 subsys=nodediscovery
    level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
    level=info msg="Adding local node to cluster" node="{prod-fe-k8s-wn1 default [{InternalIP 10.1.16.221} {CiliumInternalIP 172.21.9.225}] 172.21.9.0/24 <nil> 172.21.9.26 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/env:prod kubernetes.io/hostname:prod-fe-k8s-wn1 kubernetes.io/ingress:prod kubernetes.io/os:linux kubernetes.io/resource:prod-fe node-role.kubernetes.io/worker:worker topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h] 6}" subsys=nodediscovery
    level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.9.225 v4Prefix=172.21.9.0/24 v4healthIP.IPv4=172.21.9.26 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
    level=info msg="Initializing identity allocator" subsys=identity-cache
    level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
    level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v3 subsys=datapath-loader
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
    level=info msg="Adding new proxy port rules for cilium-dns-egress:37581" proxy port name=cilium-dns-egress subsys=proxy
    level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
    level=info msg="Validating configured node address ranges" subsys=daemon
    level=info msg="Starting connection tracking garbage collector" subsys=daemon
    level=info msg="Datapath signal listener running" subsys=signal
    level=info msg="Starting IP identity watcher" subsys=ipcache
    level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
    level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
    level=info msg="Creating host endpoint" subsys=daemon
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 identityLabels="k8s:node-role.kubernetes.io/worker=worker,k8s:topology.diskplugin.csi.alibabacloud.com/zone=cn-hangzhou-h,reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 identity=1 identityLabels="k8s:node-role.kubernetes.io/worker=worker,k8s:topology.diskplugin.csi.alibabacloud.com/zone=cn-hangzhou-h,reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
    level=info msg="Launching Cilium health daemon" subsys=daemon
    level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
    level=info msg="Launching Cilium health endpoint" subsys=daemon
    level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
    level=info msg="Initializing Cilium API" subsys=daemon
    level=info msg="Daemon initialization completed" bootstrapTime=8.140788349s subsys=daemon
    level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
    level=info msg="Configuring Hubble server" eventQueueSize=2048 maxFlows=4095 subsys=hubble
    level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
    level=info msg="Beginning to read perf buffer" startTime="2021-08-27 17:06:36.376614944 +0000 UTC m=+8.225871854" subsys=monitor-agent
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.833919045s file-path=/var/run/cilium/state/templates/532a69347dd40c75334a195185011bc79bd07ca7/bpf_host.o subsys=datapath-loader
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2159 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.88288814s file-path=/var/run/cilium/state/templates/fb6dc13c1055d6e188939f7cb8ae5c7e8ed3fe25/bpf_lxc.o subsys=datapath-loader
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3199 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
    level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.008514404296875 newInterval=7m30s subsys=map-ct
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.013427734375 newInterval=11m15s subsys=map-ct
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0150604248046875 newInterval=16m53s subsys=map-ct
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0257110595703125 newInterval=25m20s subsys=map-ct
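
To tie steps 2 through 5 together, a few read-only commands confirm the overlay wiring described above. This is a hedged sketch: <cilium-pod> stands for the cilium-agent pod on the node, and the node name comes from the outputs above:

    # Addresses and PodCIDR of a node, as stored in the CiliumNode CRD (step 3):
    kubectl get ciliumnode prod-fe-k8s-wn1 -o yaml

    # The BPF tunnel map behind the routes in step 2: one entry per remote
    # PodCIDR, pointing at the owning node's IP:
    kubectl -n kube-system exec <cilium-pod> -- cilium bpf tunnel list

    # Mesh-wide reachability, the same data as "Cluster health" in step 5:
    kubectl -n kube-system exec <cilium-pod> -- cilium-health status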
