Cilium Routing Modes (Part 2)


Native-Routing


Native routing and host routing refer to the same mode; they are simply two names for it.

See the official documentation.

Official GitHub repository path: cilium/install/kubernetes/cilium/values.yaml

cilium/values.yaml at v1.9.9 · cilium/cilium · GitHub

  1. In native routing mode, Cilium delegates IP packets destined outside the local node's address range to the Linux kernel's routing subsystem to reach the other end of the network. This means packets are routed as if they had been emitted by a local process. Consequently, the network connecting the cluster nodes must be able to route the PodCIDR.
  2. When native routing is configured, Cilium automatically enables IP forwarding in the Linux kernel.
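
A quick way to verify the second point on a node (a minimal sketch; these are standard Linux sysctls, not Cilium-specific commands):

    # 1 means IP forwarding is on; cilium-agent enables it when native routing is configured
    sysctl net.ipv4.ip_forward
    # Cilium also relaxes reverse-path filtering; the agent log below shows it setting this to 0
    sysctl net.ipv4.conf.all.rp_filter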

Requirements

  • In order to run the native routing mode, the network connecting the hosts on which Cilium is running on must be capable of forwarding IP traffic using addresses given to pods or other workloads.  In short: to use native routing, the network connecting the Cilium nodes must be able to forward IP traffic addressed to pods or other workloads.

  • At installation time you must pass --set tunnel=disabled to turn off encapsulation and enable routing mode; native packet forwarding uses the routing capabilities of the network Cilium runs on instead of performing encapsulation.
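
To confirm the flag took effect after installation, one option (assuming the default name, cilium-config, of the helm-managed ConfigMap) is:

    # The helm value tunnel=disabled is rendered into the cilium-config ConfigMap
    kubectl -n kube-system get configmap cilium-config -o yaml | grep tunnel
    # expected: tunnel: disabled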

Implementation

According to the official documentation, with native routing the Linux kernel on each node must know how to forward packets to the pods or other workloads of all nodes running Cilium. This can be achieved in two ways:

  1. The node itself does not know how to route all pod IPs, but a router that knows how to reach all other pods exists on the network. In this case, the Linux nodes are configured with a default route pointing to such a router. This model is used for cloud provider network integrations; see Google Cloud, AWS ENI, and Azure IPAM for more details.
  2. Each individual node knows all pod IPs of all other nodes, and routes are inserted into the Linux kernel routing table to represent this. If all nodes share a single L2 network, pod-to-pod routing can be achieved by enabling the option auto-direct-node-routes: true (--set autoDirectNodeRoutes=true); this is the setup referred to below as DSR mode (see the route sketch after this list). Otherwise, an additional system component such as a BGP daemon must be run to distribute the routes; see the guide on running BGP with kube-router for how to implement this with the kube-router project.

Whichever of the above you choose, the default tunnel mode must be disabled first (--set tunnel=disabled).
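
For intuition, here is a hypothetical sketch of what option 2 produces on a node once routes are in place (the node IPs and PodCIDRs below are placeholders modeled on this post's cluster):

    # Excerpt of `ip route` after auto-direct-node-routes (or a BGP daemon)
    # has installed a route for every other node's PodCIDR
    172.21.12.64/26 via 10.1.17.236 dev eth0    # PodCIDR of one worker node
    172.21.13.0/26 via 10.1.17.237 dev eth0     # PodCIDR of another worker node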

Configuration

Using the routing provided by the Alibaba Cloud platform

  1. Installation procedure
    # no DSR
    helm install cilium cilium/cilium --version 1.9.9 \
        --namespace kube-system \
        --set tunnel=disabled \
        --set kubeProxyReplacement=strict \
        --set nativeRoutingCIDR=172.21.0.0/20 \
        --set ipam.mode=kubernetes \
        --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
        --set ipam.operator.clusterPoolIPv4MaskSize=26 \
        --set k8sServiceHost=apiserver.qiangyun.com \
        --set k8sServicePort=6443
    
    <root@PROD-K8S-CP1 ~># helm install cilium cilium/cilium --version 1.9.9 \
    >     --namespace kube-system \
    >     --set tunnel=disabled \
    >     --set kubeProxyReplacement=strict \
    >     --set nativeRoutingCIDR=172.21.0.0/20 \
    >     --set ipam.mode=kubernetes \
    >     --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
    >     --set ipam.operator.clusterPoolIPv4MaskSize=26 \
    >     --set k8sServiceHost=apiserver.qiangyun.com \
    >     --set k8sServicePort=6443
    NAME: cilium
    LAST DEPLOYED: Sat Aug 28 15:30:25 2021
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.9.9.
    
    For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
    <root@PROD-K8S-CP1 ~># dps
    1e8bef8a28ac    Up Less than a second    k8s_cilium-agent_cilium-mnddn_kube-system_aa96f316-d435-4cc4-8fc3-26fe2bee35e3_0
    8b87a2f6fce0    Up 18 hours    k8s_kube-controller-manager_kube-controller-manager-prod-k8s-cp1_kube-system_c5548fca3d6f1bb0c7cbee586dff7327_3
    e13f8dc37637    Up 18 hours    k8s_etcd_etcd-prod-k8s-cp1_kube-system_30e073f094203874eecc5317ed3ce2f6_10
    998ebbddead1    Up 18 hours    k8s_kube-scheduler_kube-scheduler-prod-k8s-cp1_kube-system_10803dd5434c54168be1114c7d99a067_10
    85e2890ed099    Up 18 hours    k8s_kube-apiserver_kube-apiserver-prod-k8s-cp1_kube-system_e14dd2db1d7c352e9552e3944ff3b802_16
    <root@PROD-K8S-CP1 ~># docker logs -f 1e8
    level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
    level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
    level=info msg="Memory available for map entries (0.003% of 16508948480B): 41272371B" subsys=config
    level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 72407" subsys=config
    level=info msg="option bpf-nat-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-neigh-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 72407" subsys=config
    level=info msg="  --agent-health-port='9876'" subsys=daemon
    level=info msg="  --agent-labels=''" subsys=daemon
    level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
    level=info msg="  --allow-localhost='auto'" subsys=daemon
    level=info msg="  --annotate-k8s-node='true'" subsys=daemon
    level=info msg="  --api-rate-limit='map[]'" subsys=daemon
    level=info msg="  --arping-refresh-period='5m0s'" subsys=daemon
    level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
    level=info msg=" --auto-direct-node-routes='false'" subsys=daemon 關閉DSR模式,使用雲平台的路由功能,阿里雲需要指定Cilium-node所分配的PodCIDR的網段地址
    level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
    level=info msg="  --bpf-compile-debug='false'" subsys=daemon
    level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
    level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
    level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
    level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
    level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
    level=info msg=" --bpf-lb-mode='snat'" subsys=daemon loadbalance負載均衡轉發模式SNAT,默認配置
    level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
    level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
    level=info msg="  --bpf-root=''" subsys=daemon
    level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
    level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
    level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
    level=info msg="  --cluster-id=''" subsys=daemon
    level=info msg="  --cluster-name='default'" subsys=daemon
    level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
    level=info msg="  --cmdref=''" subsys=daemon
    level=info msg="  --config=''" subsys=daemon
    level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
    level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
    level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
    level=info msg="  --datapath-mode='veth'" subsys=daemon
    level=info msg="  --debug='false'" subsys=daemon
    level=info msg="  --debug-verbose=''" subsys=daemon
    level=info msg="  --device=''" subsys=daemon
    level=info msg="  --devices=''" subsys=daemon
    level=info msg="  --direct-routing-device=''" subsys=daemon
    level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
    level=info msg="  --disable-conntrack='false'" subsys=daemon
    level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
    level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
    level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
    level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
    level=info msg=" --egress-masquerade-interfaces=''" subsys=daemon Cilium路由模式(一)提到過,Pod向外請求時,偽裝地址出口設備接口,此功能是依賴傳統的iptables-bases,默認是internal接口
    level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
    level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
    level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
    level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
    level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
    level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
    level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
    level=info msg=" --enable-endpoint-routes='false'" subsys=daemon 關閉以endpoint為單位的路由模式,就是獨立的lxc1e216780d18e(使用netstat -in 即可獲得,實際就是指Container的網絡設備)
    level=info msg="  --enable-external-ips='true'" subsys=daemon
    level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
    level=info msg="  --enable-health-checking='true'" subsys=daemon
    level=info msg="  --enable-host-firewall='false'" subsys=daemon
    level=info msg=" --enable-host-legacy-routing='false'" subsys=daemon 關閉主機傳統路由模式,個人理解在Pod向外發送請求時,使用eBPF處理數據包
    level=info msg="  --enable-host-port='true'" subsys=daemon
    level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
    level=info msg="  --enable-hubble='true'" subsys=daemon
    level=info msg="  --enable-identity-mark='true'" subsys=daemon
    level=info msg=" --enable-ip-masq-agent='false'" subsys=daemon 后面再作詳細補充
    level=info msg="  --enable-ipsec='false'" subsys=daemon
    level=info msg="  --enable-ipv4='true'" subsys=daemon
    level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
    level=info msg="  --enable-ipv6='false'" subsys=daemon
    level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
    level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
    level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
    level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
    level=info msg="  --enable-l7-proxy='true'" subsys=daemon
    level=info msg="  --enable-local-node-route='true'" subsys=daemon
    level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
    level=info msg="  --enable-monitor='true'" subsys=daemon
    level=info msg="  --enable-node-port='false'" subsys=daemon
    level=info msg="  --enable-policy='default'" subsys=daemon
    level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
    level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
    level=info msg="  --enable-session-affinity='true'" subsys=daemon
    level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
    level=info msg="  --enable-tracing='false'" subsys=daemon
    level=info msg="  --enable-well-known-identities='false'" subsys=daemon
    level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
    level=info msg="  --encrypt-interface=''" subsys=daemon
    level=info msg="  --encrypt-node='false'" subsys=daemon
    level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
    level=info msg="  --endpoint-queue-size='25'" subsys=daemon
    level=info msg="  --endpoint-status=''" subsys=daemon
    level=info msg="  --envoy-log=''" subsys=daemon
    level=info msg="  --exclude-local-address=''" subsys=daemon
    level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
    level=info msg="  --flannel-master-device=''" subsys=daemon
    level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
    level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
    level=info msg="  --gops-port='9890'" subsys=daemon
    level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
    level=info msg="  --http-403-msg=''" subsys=daemon
    level=info msg="  --http-idle-timeout='0'" subsys=daemon
    level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
    level=info msg="  --http-normalize-path='true'" subsys=daemon
    level=info msg="  --http-request-timeout='3600'" subsys=daemon
    level=info msg="  --http-retry-count='3'" subsys=daemon
    level=info msg="  --http-retry-timeout='0'" subsys=daemon
    level=info msg="  --hubble-disable-tls='false'" subsys=daemon
    level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
    level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
    level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
    level=info msg="  --hubble-metrics=''" subsys=daemon
    level=info msg="  --hubble-metrics-server=''" subsys=daemon
    level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
    level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
    level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
    level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
    level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
    level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
    level=info msg="  --install-iptables-rules='true'" subsys=daemon
    level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
    level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
    level=info msg="  --ipam='kubernetes'" subsys=daemon
    level=info msg="  --ipsec-key-file=''" subsys=daemon
    level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
    level=info msg="  --iptables-random-fully='false'" subsys=daemon
    level=info msg="  --ipv4-node='auto'" subsys=daemon
    level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv4-range='auto'" subsys=daemon
    level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
    level=info msg="  --ipv4-service-range='auto'" subsys=daemon
    level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
    level=info msg="  --ipv6-mcast-device=''" subsys=daemon
    level=info msg="  --ipv6-node='auto'" subsys=daemon
    level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv6-range='auto'" subsys=daemon
    level=info msg="  --ipv6-service-range='auto'" subsys=daemon
    level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
    level=info msg="  --join-cluster='false'" subsys=daemon
    level=info msg="  --k8s-api-server=''" subsys=daemon
    level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
    level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
    level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
    level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
    level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
    level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
    level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
    level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
    level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
    level=info msg="  --keep-config='false'" subsys=daemon
    level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
    level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
    level=info msg="  --kvstore=''" subsys=daemon
    level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
    level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
    level=info msg="  --kvstore-opt='map[]'" subsys=daemon
    level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
    level=info msg="  --label-prefix-file=''" subsys=daemon
    level=info msg="  --labels=''" subsys=daemon
    level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
    level=info msg="  --log-driver=''" subsys=daemon
    level=info msg="  --log-opt='map[]'" subsys=daemon
    level=info msg="  --log-system-load='false'" subsys=daemon
    level=info msg="  --masquerade='true'" subsys=daemon
    level=info msg="  --max-controller-interval='0'" subsys=daemon
    level=info msg="  --metrics=''" subsys=daemon
    level=info msg="  --monitor-aggregation='medium'" subsys=daemon
    level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
    level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
    level=info msg="  --monitor-queue-size='0'" subsys=daemon
    level=info msg="  --mtu='0'" subsys=daemon
    level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
    level=info msg="  --native-routing-cidr='172.21.0.0/20'" subsys=daemon
    level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
    level=info msg="  --node-port-algorithm='random'" subsys=daemon
    level=info msg="  --node-port-bind-protection='true'" subsys=daemon
    level=info msg=" --node-port-mode='snat'" subsys=daemon NodePort的模式
    level=info msg="  --node-port-range='30000,32767'" subsys=daemon
    level=info msg="  --policy-audit-mode='false'" subsys=daemon
    level=info msg="  --policy-queue-size='100'" subsys=daemon
    level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
    level=info msg="  --pprof='false'" subsys=daemon
    level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
    level=info msg="  --prefilter-device='undefined'" subsys=daemon
    level=info msg="  --prefilter-mode='native'" subsys=daemon
    level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
    level=info msg="  --prometheus-serve-addr=''" subsys=daemon
    level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
    level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
    level=info msg="  --read-cni-conf=''" subsys=daemon
    level=info msg="  --restore='true'" subsys=daemon
    level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
    level=info msg="  --single-cluster-route='false'" subsys=daemon
    level=info msg="  --skip-crd-creation='false'" subsys=daemon
    level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
    level=info msg="  --sockops-enable='false'" subsys=daemon
    level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
    level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
    level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
    level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
    level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
    level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
    level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
    level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
    level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
    level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
    level=info msg="  --trace-payloadlen='128'" subsys=daemon
    level=info msg=" --tunnel='disabled'" subsys=daemon 關閉默認tunnel功能,即走路由模式
    level=info msg="  --version='false'" subsys=daemon
    level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
    level=info msg="     _ _ _" subsys=daemon
    level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
    level=info msg="|  _| | | | | |     |" subsys=daemon
    level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
    level=info msg="Cilium 1.9.9 5bcf83c 2021-07-19T16:45:00-07:00 go version go1.15.14 linux/amd64" subsys=daemon
    level=info msg="cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL" subsys=daemon
    level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
    level=info msg="linking environment: OK!" subsys=linux-datapath
    level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
    level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
    level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
    level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
    level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
    level=info msg=" - reserved:.*" subsys=labels-filter
    level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
    level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
    level=info msg=" - :app.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:io.kubernetes" subsys=labels-filter
    level=info msg=" - !:kubernetes.io" subsys=labels-filter
    level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:k8s.io" subsys=labels-filter
    level=info msg=" - !:pod-template-generation" subsys=labels-filter
    level=info msg=" - !:pod-template-hash" subsys=labels-filter
    level=info msg=" - !:controller-revision-hash" subsys=labels-filter
    level=info msg=" - !:annotation.*" subsys=labels-filter
    level=info msg=" - !:etcd_node" subsys=labels-filter
    level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
    level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.5.0.0/16
    level=info msg="Initializing daemon" subsys=daemon
    level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
    level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.0.5 mtu=1500 subsys=mtu
    level=info msg="Restored services from maps" failed=0 restored=11 subsys=service
    level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
    level=info msg="Reading old endpoints..." subsys=daemon
    level=info msg="Reusing previous DNS proxy port: 39451" subsys=daemon
    level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
    level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
    level=info msg="Retrieved node information from kubernetes node" nodeName=prod-k8s-cp1 subsys=k8s
    level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.0.5 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.0.5 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h]" nodeName=prod-k8s-cp1 subsys=k8s v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restored router IPs from node information" ipv4=172.21.0.85 ipv6="<nil>" subsys=k8s
    level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
    level=info msg="Using auto-derived devices to attach Loadbalancer, Host Firewall or Bandwidth Manager program" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
    level=info msg="Enabling k8s event listener" subsys=k8s-watcher
    level=info msg="Removing stale endpoint interfaces" subsys=daemon
    level=info msg="Skipping kvstore configuration" subsys=daemon
    level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.0.85 ipv6="<nil>" subsys=node
    level=info msg="Initializing node addressing" subsys=daemon
    level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restoring endpoints..." subsys=daemon
    level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
    level=info msg="Addressing information:" subsys=daemon
    level=info msg="  Cluster-Name: default" subsys=daemon
    level=info msg="  Cluster-ID: 0" subsys=daemon
    level=info msg=" Local node-name: prod-k8s-cp1" subsys=daemon 本地節點名稱
    level=info msg="  Node-IPv6: <nil>" subsys=daemon
    level=info msg=" External-Node IPv4: 10.1.0.5" subsys=daemon 節點地址
    level=info msg=" Internal-Node IPv4: 172.21.0.85" subsys=daemon 這里就是cilium_host設備接口的地址,也可叫網關地址或者是路由器的地址
    level=info msg=" IPv4 allocation prefix: 172.21.0.0/24" subsys=daemon 本節點可以分配的PodCIDR地址范圍
    level=info msg=" IPv4 native routing prefix: 172.21.0.0/20" subsys=daemon 整個集群的PodCIDRs地址范圍
    level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
    level=info msg="  Local IPv4 addresses:" subsys=daemon
    level=info msg="  - 10.1.0.5" subsys=daemon
    level=info msg="  - 172.21.0.85" subsys=daemon
    level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
    level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
    level=info msg="Adding local node to cluster" node="{prod-k8s-cp1 default [{InternalIP 10.1.0.5} {CiliumInternalIP 172.21.0.85}] 172.21.0.0/24 <nil> 172.21.0.171 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h] 6}" subsys=nodediscovery
    level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
    level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.0.85 v4Prefix=172.21.0.0/24 v4healthIP.IPv4=172.21.0.171 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
    level=info msg="Initializing identity allocator" subsys=identity-cache
    level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
    level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v3 subsys=datapath-loader
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
    level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Adding new proxy port rules for cilium-dns-egress:39451" proxy port name=cilium-dns-egress subsys=proxy
    level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
    level=info msg="Validating configured node address ranges" subsys=daemon
    level=info msg="Starting connection tracking garbage collector" subsys=daemon
    level=info msg="Starting IP identity watcher" subsys=ipcache
    level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
    level=info msg="Regenerating restored endpoints" numRestored=1 subsys=daemon
    level=info msg="Datapath signal listener running" subsys=signal
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Successfully restored endpoint. Scheduling regeneration" endpointID=3912 subsys=daemon
    level=info msg="Removed endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2275 identity=4 ipv4=172.21.0.2 ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Launching Cilium health daemon" subsys=daemon
    level=info msg="Launching Cilium health endpoint" subsys=daemon
    level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
    level=info msg="Initializing Cilium API" subsys=daemon
    level=info msg="Daemon initialization completed" bootstrapTime=7.030950659s subsys=daemon
    level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
    level=info msg="Configuring Hubble server" eventQueueSize=4096 maxFlows=4095 subsys=hubble
    level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
    level=info msg="Beginning to read perf buffer" startTime="2021-08-28 07:30:34.868191244 +0000 UTC m=+7.098570357" subsys=monitor-agent
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=739 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=739 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=739 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.661777466s file-path=/var/run/cilium/state/templates/64d3584c04c9bb7a4a5bcb47425a2a11f84f3b3c/bpf_host.o subsys=datapath-loader
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.275228541s file-path=/var/run/cilium/state/templates/2ad9ace8cb85023fc28f2df51df10829d79ebbfa/bpf_lxc.o subsys=datapath-loader
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=739 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Restored endpoint" endpointID=3912 ipAddr="[ ]" subsys=endpoint
    level=info msg="Finished regenerating restored endpoints" regenerated=1 subsys=daemon total=1
  2. Check the cilium-agent status in non-DSR mode
    <root@PROD-K8S-CP1 ~># dps
    1e8bef8a28ac    Up 18 minutes    k8s_cilium-agent_cilium-mnddn_kube-system_aa96f316-d435-4cc4-8fc3-26fe2bee35e3_0
    8b87a2f6fce0    Up 18 hours    k8s_kube-controller-manager_kube-controller-manager-prod-k8s-cp1_kube-system_c5548fca3d6f1bb0c7cbee586dff7327_3
    e13f8dc37637    Up 18 hours    k8s_etcd_etcd-prod-k8s-cp1_kube-system_30e073f094203874eecc5317ed3ce2f6_10
    998ebbddead1    Up 18 hours    k8s_kube-scheduler_kube-scheduler-prod-k8s-cp1_kube-system_10803dd5434c54168be1114c7d99a067_10
    85e2890ed099    Up 18 hours    k8s_kube-apiserver_kube-apiserver-prod-k8s-cp1_kube-system_e14dd2db1d7c352e9552e3944ff3b802_16
    <root@PROD-K8S-CP1 ~># docker exec -it 1e8 bash
    root@PROD-K8S-CP1:/home/cilium# cilium status --verbose
    KVStore:                Ok   Disabled
    Kubernetes:             Ok   1.18 (v1.18.5) [linux/amd64]
    Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
    Cilium:                 Ok   1.9.9 (v1.9.9-5bcf83c)
    NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
    Cilium health daemon:   Ok   
    IPAM:                   IPv4: 2/255 allocated from 172.21.0.0/24, 
    Allocated addresses:
      172.21.0.171 (health)
      172.21.0.85 (router)
    BandwidthManager:       Disabled
    Host Routing:           BPF
    Masquerading:           BPF   [eth0]   172.21.0.0/20
    Clock Source for BPF:   ktime
    Controller Status:      18/18 healthy
      Name                                  Last success   Last error   Count   Message
      cilium-health-ep                      52s ago        never        0       no error   
      dns-garbage-collector-job             1m0s ago       never        0       no error   
      endpoint-3912-regeneration-recovery   never          never        0       no error   
      endpoint-739-regeneration-recovery    never          never        0       no error   
      k8s-heartbeat                         30s ago        never        0       no error   
      mark-k8s-node-as-available            18m53s ago     never        0       no error   
      metricsmap-bpf-prom-sync              5s ago         never        0       no error   
      neighbor-table-refresh                3m53s ago      never        0       no error   
      resolve-identity-739                  3m52s ago      never        0       no error   
      restoring-ep-identity (3912)          18m53s ago     never        0       no error   
      sync-endpoints-and-host-ips           53s ago        never        0       no error   
      sync-lb-maps-with-k8s-services        18m53s ago     never        0       no error   
      sync-policymap-3912                   50s ago        never        0       no error   
      sync-policymap-739                    51s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (3912)     3s ago         never        0       no error   
      sync-to-k8s-ciliumendpoint (739)      12s ago        never        0       no error   
      template-dir-watcher                  never          never        0       no error   
      update-k8s-node-annotations           18m59s ago     never        0       no error   
    Proxy Status:   OK, ip 172.21.0.85, 0 redirects active on ports 10000-20000
    Hubble:         Ok   Current/Max Flows: 170/4096 (4.15%), Flows/s: 0.15   Metrics: Disabled
    KubeProxyReplacement Details:
      Status:              Strict
      Protocols:           TCP, UDP
      Devices:             eth0 (Direct Routing)
      Mode: SNAT
      Backend Selection:   Random
      Session Affinity:    Enabled
      XDP Acceleration:    Disabled
      Services:
      - ClusterIP:      Enabled
      - NodePort:       Enabled (Range: 30000-32767) 
      - LoadBalancer:   Enabled 
      - externalIPs:    Enabled 
      - HostPort:       Enabled
    BPF Maps:   dynamic sizing: on (ratio: 0.002500)
      Name                          Size
      Non-TCP connection tracking   72407
      TCP connection tracking       144815
      Endpoint policy               65535
      Events                        4
      IP cache                      512000
      IP masquerading agent         16384
      IPv4 fragmentation            8192
      IPv4 service                  65536
      IPv6 service                  65536
      IPv4 service backend          65536
      IPv6 service backend          65536
      IPv4 service reverse NAT      65536
      IPv6 service reverse NAT      65536
      Metrics                       1024
      NAT                           144815
      Neighbor table                144815
      Global policy                 16384
      Per endpoint policy           65536
      Session affinity              65536
      Signal                        4
      Sockmap                       65535
      Sock reverse NAT              72407
      Tunnel                        65536
    Cluster health:              1/19 reachable   (2021-08-28T07:40:36Z)
      Name                       IP               Node      Endpoints
      prod-k8s-cp1 (localhost)   10.1.0.5         unknown   unknown
      prod-be-k8s-wn1            10.1.17.231      unknown   unreachable
      prod-be-k8s-wn2            10.1.17.232      unknown   unreachable
      prod-be-k8s-wn6            10.1.17.236      unknown   unreachable
      prod-be-k8s-wn7            10.1.17.237      unknown   unreachable
      prod-be-k8s-wn8            10.1.17.238      unknown   unreachable
      prod-data-k8s-wn1          10.1.18.50       unknown   unreachable
      prod-data-k8s-wn2          10.1.18.49       unknown   unreachable
      prod-data-k8s-wn3          10.1.18.51       unknown   unreachable
      prod-fe-k8s-wn1            10.1.16.221      unknown   unreachable
      prod-fe-k8s-wn2            10.1.16.222      unknown   unreachable
      prod-fe-k8s-wn3            10.1.16.223      unknown   unreachable
      prod-k8s-cp2               10.1.0.7         unknown   unreachable
      prod-k8s-cp3               10.1.0.6         unknown   unreachable
      prod-sys-k8s-wn1           10.1.0.8         unknown   unreachable
      prod-sys-k8s-wn2           10.1.0.9         unknown   unreachable
      prod-sys-k8s-wn3           10.1.0.11        unknown   unreachable
      prod-sys-k8s-wn4           10.1.0.10        unknown   unreachable
      prod-sys-k8s-wn5           10.1.0.12        unknown   unreachable
  3. Check the current node's routes
    <root@PROD-K8S-CP1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.0.253      0.0.0.0         UG        0 0          0 eth0
    10.1.0.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.0.0      172.21.0.85     255.255.255.0   UG        0 0          0 cilium_host 
    172.21.0.64     172.21.0.85     255.255.255.192 UG        0 0          0 cilium_host
    172.21.0.85     0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    # A quick explanation:
    # traffic to 172.21.0.0/24 and 172.21.0.64/26 is routed via gateway 172.21.0.85, which is in fact the address of the cilium_host interface;
    # the 172.21.0.85 entry itself (gateway 0.0.0.0, flag UH) is a host route, meaning the address sits directly on cilium_host with no next hop involved
    <root@PROD-K8S-CP1 ~># netstat -in
    Kernel Interface table
    Iface            MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
    cilium_host     1500    90686      0      0      0     1022      0      0      0 BMORU
    cilium_net      1500     1022      0      0      0    90686      0      0      0 BMORU
    docker0         1500        0      0      0      0        0      0      0      0 BMU
    eth0            1500  7686462      0      0      0  7443167      0      0      0 BMRU
    lo             65536  8147119      0      0      0  8147119      0      0      0 LRU
    lxc_health      1500      331      0      0      0      380      0      0      0 BMRU
  4. Configure Alibaba Cloud custom routes (the detailed configuration is omitted here; a sketch follows this walkthrough), then test the Pod's network connectivity
    # Switch to a worker node and pick any tomcat pod for testing
    <root@PROD-BE-K8S-WN6 ~># dps
    64cdb3a1adfc    Up About an hour    k8s_cilium-agent_cilium-l9cjf_kube-system_c436f659-486e-4979-8849-3afb464ab7a8_0
    b854d3384278    Up 15 hours    k8s_tomcat_tomcat-cc8d8d7d9-zw6dx_default_d8919c65-acba-4dbb-a5da-3dc3b37896f8_1
    344816fbdaaa    Up 15 hours    k8s_tomcat_tomcat-cc8d8d7d9-ln2qk_default_f53dab7b-b14b-4795-8fa7-24b5d90bfd70_1
    676e012ec482    Up 15 hours    k8s_tomcat_tomcat-cc8d8d7d9-fwqzg_default_0725de58-eb13-404d-aac8-75906cc0ca2f_1
    <root@PROD-BE-K8S-WN6 ~># docker exec -it 344 bash
    root@tomcat-cc8d8d7d9-ln2qk:/usr/local/tomcat# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether c2:22:eb:3a:6e:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.21.12.109/32 scope global eth0
           valid_lft forever preferred_lft forever

    # Pinging an external domain from inside the container fails to resolve, which is expected here: the pod cannot reach Kubernetes CoreDNS, so www.baidu.com cannot be resolved
    root@tomcat-cc8d8d7d9-ln2qk:/usr/local/tomcat# ping www.baidu.com
    # Ping the Shanghai public DNS address; it is reachable
    root@tomcat-cc8d8d7d9-ln2qk:/usr/local/tomcat# ping 202.96.209.5
    PING 202.96.209.5 (202.96.209.5) 56(84) bytes of data.
    64 bytes from 202.96.209.5: icmp_seq=1 ttl=53 time=12.8 ms
    64 bytes from 202.96.209.5: icmp_seq=2 ttl=53 time=12.7 ms
    --- 202.96.209.5 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 3ms
    rtt min/avg/max/mdev = 12.685/12.752/12.820/0.131 ms

    # Ping a machine in the production zone on the same network segment, but outside the Kubernetes platform
    root@tomcat-cc8d8d7d9-ln2qk:/usr/local/tomcat# ping 10.1.17.205
    PING 10.1.17.205 (10.1.17.205) 56(84) bytes of data.
    64 bytes from 10.1.17.205: icmp_seq=1 ttl=63 time=0.404 ms
    64 bytes from 10.1.17.205: icmp_seq=2 ttl=63 time=0.245 ms
    64 bytes from 10.1.17.205: icmp_seq=3 ttl=63 time=0.174 ms

    # Switch to the production-zone machine outside the Kubernetes platform and test network reachability to the Pod
    <root@PROD-BE-QN-LOANWEB01 ~># ping 172.21.12.109
    PING 172.21.12.109 (172.21.12.109) 56(84) bytes of data.
    64 bytes from 172.21.12.109: icmp_seq=1 ttl=63 time=0.263 ms
    64 bytes from 172.21.12.109: icmp_seq=2 ttl=63 time=0.167 ms
    64 bytes from 172.21.12.109: icmp_seq=3 ttl=63 time=0.152 ms
    # Check this node's routes: there is actually no route to the Pod at all, because traffic follows the routes provided by the Alibaba Cloud network
    <root@PROD-BE-QN-LOANWEB01 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.17.253     0.0.0.0         UG        0 0          0 eth0
    10.1.17.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
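
Step 4 above skipped the Alibaba Cloud route configuration itself. For reference, a hedged sketch of the idea: each node's "IPv4 allocation prefix" gets a custom route entry in the VPC route table whose next hop is that node's ECS instance (the aliyun CLI call below mirrors the VPC CreateRouteEntry API; the route table and instance IDs are placeholders):

    # Hypothetical example: route this node's PodCIDR to its ECS instance
    aliyun vpc CreateRouteEntry \
        --RouteTableId vtb-xxxxxxxx \
        --DestinationCidrBlock 172.21.0.0/24 \
        --NextHopType Instance \
        --NextHopId i-xxxxxxxx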

DSR Mode

In my understanding, a self-built Kubernetes network at a cloud vendor still depends on that platform's underlay network (and where the vendor's underlay network does not support this, open-source networking components such as kube-router are needed to provide cross-subnet communication).
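
Before wading through the log below, two quick checks distinguish this install from the previous one (a sketch; the exact status wording can vary between Cilium versions):

    # Inside the cilium-agent container (as in step 2 above), the LB mode
    # should now report Hybrid instead of SNAT
    cilium status --verbose | grep -i mode
    # On the host, each peer node's PodCIDR should appear as a direct route
    ip route | grep 172.21.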

  1. Initialization
    # DSR
    helm install cilium cilium/cilium --version 1.9.9 \
        --namespace kube-system \
        --set tunnel=disabled \
        --set autoDirectNodeRoutes=true \
        --set kubeProxyReplacement=strict \
        --set loadBalancer.mode=hybrid \
        --set nativeRoutingCIDR=172.21.0.0/20 \
        --set ipam.mode=kubernetes \
        --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
        --set ipam.operator.clusterPoolIPv4MaskSize=26 \
        --set k8sServiceHost=apiserver.qiangyun.com \
        --set k8sServicePort=6443
    
    <root@PROD-K8S-CP1 ~># helm install cilium cilium/cilium --version 1.9.9 \
    >     --namespace kube-system \
    >     --set tunnel=disabled \
    >     --set autoDirectNodeRoutes=true \
    >     --set kubeProxyReplacement=strict \
    >     --set loadBalancer.mode=hybrid \
    >     --set nativeRoutingCIDR=172.21.0.0/20 \
    >     --set ipam.mode=kubernetes \
    >     --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
    >     --set ipam.operator.clusterPoolIPv4MaskSize=26 \
    >     --set k8sServiceHost=apiserver.qiangyun.com \
    >     --set k8sServicePort=6443
    NAME: cilium
    LAST DEPLOYED: Sat Aug 28 16:59:25 2021
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.9.9.
    
    For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
    <root@PROD-K8S-CP1 ~># docker logs -f a16
    level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
    level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
    level=info msg="Memory available for map entries (0.003% of 16508948480B): 41272371B" subsys=config
    level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 72407" subsys=config
    level=info msg="option bpf-nat-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-neigh-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 72407" subsys=config
    level=info msg="  --agent-health-port='9876'" subsys=daemon
    level=info msg="  --agent-labels=''" subsys=daemon
    level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
    level=info msg="  --allow-localhost='auto'" subsys=daemon
    level=info msg="  --annotate-k8s-node='true'" subsys=daemon
    level=info msg="  --api-rate-limit='map[]'" subsys=daemon
    level=info msg="  --arping-refresh-period='5m0s'" subsys=daemon
    level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
    level=info msg=" --auto-direct-node-routes='true'" subsys=daemon 開啟DSR模式,路由直接返回真實的后端
    level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
    level=info msg="  --bpf-compile-debug='false'" subsys=daemon
    level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
    level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
    level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
    level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
    level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
    level=info msg=" --bpf-lb-mode='snat'" subsys=daemon loadbalance的模式SNAT
    level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
    level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
    level=info msg="  --bpf-root=''" subsys=daemon
    level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
    level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
    level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
    level=info msg="  --cluster-id=''" subsys=daemon
    level=info msg="  --cluster-name='default'" subsys=daemon
    level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
    level=info msg="  --cmdref=''" subsys=daemon
    level=info msg="  --config=''" subsys=daemon
    level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
    level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
    level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
    level=info msg="  --datapath-mode='veth'" subsys=daemon
    level=info msg="  --debug='false'" subsys=daemon
    level=info msg="  --debug-verbose=''" subsys=daemon
    level=info msg="  --device=''" subsys=daemon
    level=info msg="  --devices=''" subsys=daemon
    level=info msg="  --direct-routing-device=''" subsys=daemon
    level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
    level=info msg="  --disable-conntrack='false'" subsys=daemon
    level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
    level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
    level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
    level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
    level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
    level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
    level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
    level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
    level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
    level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
    level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
    level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
    level=info msg=" --enable-endpoint-routes='false'" subsys=daemon 關閉以endpoint為路由單位的模式
    level=info msg="  --enable-external-ips='true'" subsys=daemon
    level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
    level=info msg="  --enable-health-checking='true'" subsys=daemon
    level=info msg="  --enable-host-firewall='false'" subsys=daemon
    level=info msg=" --enable-host-legacy-routing='false'" subsys=daemon 關閉傳統路由模式,數據包接受eBPF處理
    level=info msg="  --enable-host-port='true'" subsys=daemon
    level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
    level=info msg="  --enable-hubble='true'" subsys=daemon
    level=info msg="  --enable-identity-mark='true'" subsys=daemon
    level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
    level=info msg="  --enable-ipsec='false'" subsys=daemon
    level=info msg="  --enable-ipv4='true'" subsys=daemon
    level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
    level=info msg="  --enable-ipv6='false'" subsys=daemon
    level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
    level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
    level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
    level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
    level=info msg="  --enable-l7-proxy='true'" subsys=daemon
    level=info msg="  --enable-local-node-route='true'" subsys=daemon
    level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
    level=info msg="  --enable-monitor='true'" subsys=daemon
    level=info msg="  --enable-node-port='false'" subsys=daemon
    level=info msg="  --enable-policy='default'" subsys=daemon
    level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
    level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
    level=info msg="  --enable-session-affinity='true'" subsys=daemon
    level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
    level=info msg="  --enable-tracing='false'" subsys=daemon
    level=info msg="  --enable-well-known-identities='false'" subsys=daemon
    level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
    level=info msg="  --encrypt-interface=''" subsys=daemon
    level=info msg="  --encrypt-node='false'" subsys=daemon
    level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
    level=info msg="  --endpoint-queue-size='25'" subsys=daemon
    level=info msg="  --endpoint-status=''" subsys=daemon
    level=info msg="  --envoy-log=''" subsys=daemon
    level=info msg="  --exclude-local-address=''" subsys=daemon
    level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
    level=info msg="  --flannel-master-device=''" subsys=daemon
    level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
    level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
    level=info msg="  --gops-port='9890'" subsys=daemon
    level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
    level=info msg="  --http-403-msg=''" subsys=daemon
    level=info msg="  --http-idle-timeout='0'" subsys=daemon
    level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
    level=info msg="  --http-normalize-path='true'" subsys=daemon
    level=info msg="  --http-request-timeout='3600'" subsys=daemon
    level=info msg="  --http-retry-count='3'" subsys=daemon
    level=info msg="  --http-retry-timeout='0'" subsys=daemon
    level=info msg="  --hubble-disable-tls='false'" subsys=daemon
    level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
    level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
    level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
    level=info msg="  --hubble-metrics=''" subsys=daemon
    level=info msg="  --hubble-metrics-server=''" subsys=daemon
    level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
    level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
    level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
    level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
    level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
    level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
    level=info msg="  --install-iptables-rules='true'" subsys=daemon
    level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
    level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
    level=info msg="  --ipam='kubernetes'" subsys=daemon
    level=info msg="  --ipsec-key-file=''" subsys=daemon
    level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
    level=info msg="  --iptables-random-fully='false'" subsys=daemon
    level=info msg="  --ipv4-node='auto'" subsys=daemon
    level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv4-range='auto'" subsys=daemon
    level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
    level=info msg="  --ipv4-service-range='auto'" subsys=daemon
    level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
    level=info msg="  --ipv6-mcast-device=''" subsys=daemon
    level=info msg="  --ipv6-node='auto'" subsys=daemon
    level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv6-range='auto'" subsys=daemon
    level=info msg="  --ipv6-service-range='auto'" subsys=daemon
    level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
    level=info msg="  --join-cluster='false'" subsys=daemon
    level=info msg="  --k8s-api-server=''" subsys=daemon
    level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
    level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
    level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
    level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
    level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
    level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
    level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
    level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
    level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
    level=info msg="  --keep-config='false'" subsys=daemon
    level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
    level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
    level=info msg="  --kvstore=''" subsys=daemon
    level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
    level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
    level=info msg="  --kvstore-opt='map[]'" subsys=daemon
    level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
    level=info msg="  --label-prefix-file=''" subsys=daemon
    level=info msg="  --labels=''" subsys=daemon
    level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
    level=info msg="  --log-driver=''" subsys=daemon
    level=info msg="  --log-opt='map[]'" subsys=daemon
    level=info msg="  --log-system-load='false'" subsys=daemon
    level=info msg=" --masquerade='true'" subsys=daemon 偽裝模式默認開啟
    level=info msg="  --max-controller-interval='0'" subsys=daemon
    level=info msg="  --metrics=''" subsys=daemon
    level=info msg="  --monitor-aggregation='medium'" subsys=daemon
    level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
    level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
    level=info msg="  --monitor-queue-size='0'" subsys=daemon
    level=info msg="  --mtu='0'" subsys=daemon
    level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
    level=info msg="  --native-routing-cidr='172.21.0.0/20'" subsys=daemon
    level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
    level=info msg="  --node-port-algorithm='random'" subsys=daemon
    level=info msg="  --node-port-bind-protection='true'" subsys=daemon
    level=info msg="  --node-port-mode='hybrid'" subsys=daemon
    level=info msg="  --node-port-range='30000,32767'" subsys=daemon
    level=info msg="  --policy-audit-mode='false'" subsys=daemon
    level=info msg="  --policy-queue-size='100'" subsys=daemon
    level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
    level=info msg="  --pprof='false'" subsys=daemon
    level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
    level=info msg="  --prefilter-device='undefined'" subsys=daemon
    level=info msg="  --prefilter-mode='native'" subsys=daemon
    level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
    level=info msg="  --prometheus-serve-addr=''" subsys=daemon
    level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
    level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
    level=info msg="  --read-cni-conf=''" subsys=daemon
    level=info msg="  --restore='true'" subsys=daemon
    level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
    level=info msg="  --single-cluster-route='false'" subsys=daemon
    level=info msg="  --skip-crd-creation='false'" subsys=daemon
    level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
    level=info msg="  --sockops-enable='false'" subsys=daemon
    level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
    level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
    level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
    level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
    level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
    level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
    level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
    level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
    level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
    level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
    level=info msg="  --trace-payloadlen='128'" subsys=daemon
    level=info msg="  --tunnel='disabled'" subsys=daemon
    level=info msg="  --version='false'" subsys=daemon
    level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
    level=info msg="     _ _ _" subsys=daemon
    level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
    level=info msg="|  _| | | | | |     |" subsys=daemon
    level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
    level=info msg="Cilium 1.9.9 5bcf83c 2021-07-19T16:45:00-07:00 go version go1.15.14 linux/amd64" subsys=daemon
    level=info msg="cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL" subsys=daemon
    level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
    level=info msg="linking environment: OK!" subsys=linux-datapath
    level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
    level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
    level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
    level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
    level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
    level=info msg=" - reserved:.*" subsys=labels-filter
    level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
    level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
    level=info msg=" - :app.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:io.kubernetes" subsys=labels-filter
    level=info msg=" - !:kubernetes.io" subsys=labels-filter
    level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:k8s.io" subsys=labels-filter
    level=info msg=" - !:pod-template-generation" subsys=labels-filter
    level=info msg=" - !:pod-template-hash" subsys=labels-filter
    level=info msg=" - !:controller-revision-hash" subsys=labels-filter
    level=info msg=" - !:annotation.*" subsys=labels-filter
    level=info msg=" - !:etcd_node" subsys=labels-filter
    level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
    level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.5.0.0/16
    level=info msg="Initializing daemon" subsys=daemon
    level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
    level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.0.5 mtu=1500 subsys=mtu
    level=info msg="Restored services from maps" failed=0 restored=11 subsys=service
    level=info msg="Reading old endpoints..." subsys=daemon
    level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
    level=info msg="Reusing previous DNS proxy port: 39451" subsys=daemon
    level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
    level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
    level=info msg="Retrieved node information from kubernetes node" nodeName=prod-k8s-cp1 subsys=k8s
    level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.0.5 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.0.5 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h]" nodeName=prod-k8s-cp1 subsys=k8s v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restored router IPs from node information" ipv4=172.21.0.85 ipv6="<nil>" subsys=k8s
    level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
    level=info msg="Using auto-derived devices to attach Loadbalancer, Host Firewall or Bandwidth Manager program" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
    level=info msg="Enabling k8s event listener" subsys=k8s-watcher
    level=info msg="Removing stale endpoint interfaces" subsys=daemon
    level=info msg="Skipping kvstore configuration" subsys=daemon
    level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.0.85 ipv6="<nil>" subsys=node
    level=info msg="Initializing node addressing" subsys=daemon
    level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restoring endpoints..." subsys=daemon
    level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
    level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
    level=info msg="Addressing information:" subsys=daemon
    level=info msg="  Cluster-Name: default" subsys=daemon
    level=info msg="  Cluster-ID: 0" subsys=daemon
    level=info msg="  Local node-name: prod-k8s-cp1" subsys=daemon
    level=info msg="  Node-IPv6: <nil>" subsys=daemon
    level=info msg="  External-Node IPv4: 10.1.0.5" subsys=daemon
    level=info msg="  Internal-Node IPv4: 172.21.0.85" subsys=daemon
    level=info msg="  IPv4 allocation prefix: 172.21.0.0/24" subsys=daemon
    level=info msg="  IPv4 native routing prefix: 172.21.0.0/20" subsys=daemon
    level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
    level=info msg="  Local IPv4 addresses:" subsys=daemon
    level=info msg="  - 10.1.0.5" subsys=daemon
    level=info msg="  - 172.21.0.85" subsys=daemon
    level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
    level=info msg="Adding local node to cluster" node="{prod-k8s-cp1 default [{InternalIP 10.1.0.5} {CiliumInternalIP 172.21.0.85}] 172.21.0.0/24 <nil> 172.21.0.71 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h] 6}" subsys=nodediscovery
    level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
    level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.0.85 v4Prefix=172.21.0.0/24 v4healthIP.IPv4=172.21.0.71 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
    level=info msg="Initializing identity allocator" subsys=identity-cache
    level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
    level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v3 subsys=datapath-loader
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
    level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
    # These warnings are expected: because of how our production network is segmented, DSR mode (auto-direct-node-routes) requires all nodes to share one L2 segment; pod communication is not affected (a quick reachability check is sketched after this log)
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.13.0/24 Src: <nil> Gw: 10.1.18.50 Flags: [] Table: 0}" error="route to destination 10.1.18.50 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.64/26 Src: <nil> Gw: 10.1.17.236 Flags: [] Table: 0}" error="route to destination 10.1.17.236 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.9.0/24 Src: <nil> Gw: 10.1.16.221 Flags: [] Table: 0}" error="route to destination 10.1.16.221 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.5.0/24 Src: <nil> Gw: 10.1.17.231 Flags: [] Table: 0}" error="route to destination 10.1.17.231 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.15.0/24 Src: <nil> Gw: 10.1.18.51 Flags: [] Table: 0}" error="route to destination 10.1.18.51 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.0/26 Src: <nil> Gw: 10.1.17.237 Flags: [] Table: 0}" error="route to destination 10.1.17.237 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.14.0/24 Src: <nil> Gw: 10.1.18.49 Flags: [] Table: 0}" error="route to destination 10.1.18.49 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.6.0/24 Src: <nil> Gw: 10.1.17.232 Flags: [] Table: 0}" error="route to destination 10.1.17.232 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.10.0/24 Src: <nil> Gw: 10.1.16.222 Flags: [] Table: 0}" error="route to destination 10.1.16.222 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.192/26 Src: <nil> Gw: 10.1.16.223 Flags: [] Table: 0}" error="route to destination 10.1.16.223 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.128/26 Src: <nil> Gw: 10.1.17.238 Flags: [] Table: 0}" error="route to destination 10.1.17.238 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Adding new proxy port rules for cilium-dns-egress:39451" proxy port name=cilium-dns-egress subsys=proxy
    level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
    level=info msg="Validating configured node address ranges" subsys=daemon
    level=info msg="Starting connection tracking garbage collector" subsys=daemon
    level=info msg="Starting IP identity watcher" subsys=ipcache
    level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
    level=info msg="Regenerating restored endpoints" numRestored=1 subsys=daemon
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.014266576435979946 newInterval=7m30s subsys=map-ct
    level=info msg="Datapath signal listener running" subsys=signal
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Successfully restored endpoint. Scheduling regeneration" endpointID=3912 subsys=daemon
    level=info msg="Removed endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=739 identity=4 ipv4=172.21.0.171 ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Launching Cilium health daemon" subsys=daemon
    level=info msg="Launching Cilium health endpoint" subsys=daemon
    level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
    level=info msg="Initializing Cilium API" subsys=daemon
    level=info msg="Daemon initialization completed" bootstrapTime=6.17475652s subsys=daemon
    level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
    level=info msg="Configuring Hubble server" eventQueueSize=4096 maxFlows=4095 subsys=hubble
    level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
    level=info msg="Beginning to read perf buffer" startTime="2021-08-28 08:59:34.474285821 +0000 UTC m=+6.245198613" subsys=monitor-agent
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3610 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3610 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3610 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.654455554s file-path=/var/run/cilium/state/templates/ebd8a5ff175221b719cd4ae752053c5787bcb5b2/bpf_host.o subsys=datapath-loader
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.340506836s file-path=/var/run/cilium/state/templates/1cfa9d9a215498b4089c630b564520f2b1b80c85/bpf_lxc.o subsys=datapath-loader
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3610 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Restored endpoint" endpointID=3912 ipAddr="[ ]" subsys=endpoint
    level=info msg="Finished regenerating restored endpoints" regenerated=1 subsys=daemon total=1
    level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.192/26 Src: <nil> Gw: 10.1.16.223 Flags: [] Table: 0}" error="route to destination 10.1.16.223 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.6.0/24 Src: <nil> Gw: 10.1.17.232 Flags: [] Table: 0}" error="route to destination 10.1.17.232 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.9.0/24 Src: <nil> Gw: 10.1.16.221 Flags: [] Table: 0}" error="route to destination 10.1.16.221 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.13.0/24 Src: <nil> Gw: 10.1.18.50 Flags: [] Table: 0}" error="route to destination 10.1.18.50 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.128/26 Src: <nil> Gw: 10.1.17.238 Flags: [] Table: 0}" error="route to destination 10.1.17.238 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.5.0/24 Src: <nil> Gw: 10.1.17.231 Flags: [] Table: 0}" error="route to destination 10.1.17.231 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.10.0/24 Src: <nil> Gw: 10.1.16.222 Flags: [] Table: 0}" error="route to destination 10.1.16.222 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.14.0/24 Src: <nil> Gw: 10.1.18.49 Flags: [] Table: 0}" error="route to destination 10.1.18.49 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.64/26 Src: <nil> Gw: 10.1.17.236 Flags: [] Table: 0}" error="route to destination 10.1.17.236 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.15.0/24 Src: <nil> Gw: 10.1.18.51 Flags: [] Table: 0}" error="route to destination 10.1.18.51 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.12.0/26 Src: <nil> Gw: 10.1.17.237 Flags: [] Table: 0}" error="route to destination 10.1.17.237 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
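    # Sketch (not part of the original session): a quick way to verify the
    # "must be directly reachable" condition yourself. 10.1.18.50 is one of
    # the peer node IPs from the warnings above; run this on prod-k8s-cp1.
    ip route get 10.1.18.50
    # On this topology the reply should contain "via 10.1.0.253", i.e. the
    # peer sits behind the VPC gateway rather than on the local L2 segment,
    # which is exactly why the direct node route is rejected.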
  2. Check the cilium-agent status in DSR mode
    <root@PROD-K8S-CP1 ~># dps
    a166d3d25ee3    Up 18 minutes    k8s_cilium-agent_cilium-zlhzc_kube-system_231baf2d-f32c-463b-88e8-faa73db507f4_0
    8b87a2f6fce0    Up 19 hours    k8s_kube-controller-manager_kube-controller-manager-prod-k8s-cp1_kube-system_c5548fca3d6f1bb0c7cbee586dff7327_3
    e13f8dc37637    Up 19 hours    k8s_etcd_etcd-prod-k8s-cp1_kube-system_30e073f094203874eecc5317ed3ce2f6_10
    998ebbddead1    Up 19 hours    k8s_kube-scheduler_kube-scheduler-prod-k8s-cp1_kube-system_10803dd5434c54168be1114c7d99a067_10
    85e2890ed099    Up 19 hours    k8s_kube-apiserver_kube-apiserver-prod-k8s-cp1_kube-system_e14dd2db1d7c352e9552e3944ff3b802_16
    <root@PROD-K8S-CP1 ~># docker exec -it a16 bash
    root@PROD-K8S-CP1:/home/cilium# cilium status --verbose
    KVStore:                Ok   Disabled
    Kubernetes:             Ok   1.18 (v1.18.5) [linux/amd64]
    Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
    Cilium:                 Ok   1.9.9 (v1.9.9-5bcf83c)
    NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
    Cilium health daemon:   Ok   
    IPAM:                   IPv4: 2/255 allocated from 172.21.0.0/24, 
    Allocated addresses:
      172.21.0.71 (health)
      172.21.0.85 (router)
    BandwidthManager:       Disabled
    Host Routing:           BPF
    Masquerading:           BPF   [eth0]   172.21.0.0/20
    Clock Source for BPF:   ktime
    Controller Status:      18/18 healthy
      Name                                  Last success   Last error   Count   Message
      cilium-health-ep                      52s ago        never        0       no error   
      dns-garbage-collector-job             59s ago        never        0       no error   
      endpoint-3610-regeneration-recovery   never          never        0       no error   
      endpoint-3912-regeneration-recovery   never          never        0       no error   
      k8s-heartbeat                         28s ago        never        0       no error   
      mark-k8s-node-as-available            18m53s ago     never        0       no error   
      metricsmap-bpf-prom-sync              3s ago         never        0       no error   
      neighbor-table-refresh                3m53s ago      never        0       no error   
      resolve-identity-3610                 3m52s ago      never        0       no error   
      restoring-ep-identity (3912)          18m53s ago     never        0       no error   
      sync-endpoints-and-host-ips           53s ago        never        0       no error   
      sync-lb-maps-with-k8s-services        18m53s ago     never        0       no error   
      sync-policymap-3610                   50s ago        never        0       no error   
      sync-policymap-3912                   50s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (3610)     12s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (3912)     3s ago         never        0       no error   
      template-dir-watcher                  never          never        0       no error   
      update-k8s-node-annotations           18m57s ago     never        0       no error   
    Proxy Status:   OK, ip 172.21.0.85, 0 redirects active on ports 10000-20000
    Hubble:         Ok   Current/Max Flows: 782/4096 (19.09%), Flows/s: 0.69   Metrics: Disabled
    KubeProxyReplacement Details:
      Status:              Strict
      Protocols:           TCP, UDP
      Devices:             eth0 (Direct Routing)
      Mode:                Hybrid
      Backend Selection:   Random
      Session Affinity:    Enabled
      XDP Acceleration:    Disabled
      Services:
      - ClusterIP:      Enabled
      - NodePort:       Enabled (Range: 30000-32767) 
      - LoadBalancer:   Enabled 
      - externalIPs:    Enabled 
      - HostPort:       Enabled
    BPF Maps:   dynamic sizing: on (ratio: 0.002500)
      Name                          Size
      Non-TCP connection tracking   72407
      TCP connection tracking       144815
      Endpoint policy               65535
      Events                        4
      IP cache                      512000
      IP masquerading agent         16384
      IPv4 fragmentation            8192
      IPv4 service                  65536
      IPv6 service                  65536
      IPv4 service backend          65536
      IPv6 service backend          65536
      IPv4 service reverse NAT      65536
      IPv6 service reverse NAT      65536
      Metrics                       1024
      NAT                           144815
      Neighbor table                144815
      Global policy                 16384
      Per endpoint policy           65536
      Session affinity              65536
      Signal                        4
      Sockmap                       65535
      Sock reverse NAT              72407
      Tunnel                        65536
    Cluster health:              2/19 reachable   (2021-08-28T09:17:36Z)
      Name                       IP               Node        Endpoints
      prod-k8s-cp1 (localhost)   10.1.0.5         reachable   reachable
      prod-be-k8s-wn1            10.1.17.231      reachable   unreachable
      prod-be-k8s-wn2            10.1.17.232      reachable   unreachable
      prod-be-k8s-wn6            10.1.17.236      reachable   unreachable
      prod-be-k8s-wn7            10.1.17.237      reachable   unreachable
      prod-be-k8s-wn8            10.1.17.238      reachable   unreachable
      prod-data-k8s-wn1          10.1.18.50       reachable   reachable
      prod-data-k8s-wn2          10.1.18.49       reachable   unreachable
      prod-data-k8s-wn3          10.1.18.51       reachable   unreachable
      prod-fe-k8s-wn1            10.1.16.221      reachable   unreachable
      prod-fe-k8s-wn2            10.1.16.222      reachable   unreachable
      prod-fe-k8s-wn3            10.1.16.223      reachable   unreachable
      prod-k8s-cp2               10.1.0.7         reachable   unreachable
      prod-k8s-cp3               10.1.0.6         reachable   unreachable
      prod-sys-k8s-wn1           10.1.0.8         reachable   unreachable
      prod-sys-k8s-wn2           10.1.0.9         reachable   unreachable
      prod-sys-k8s-wn3           10.1.0.11        reachable   unreachable
      prod-sys-k8s-wn4           10.1.0.10        reachable   unreachable
      prod-sys-k8s-wn5           10.1.0.12        reachable   unreachable
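    # Sketch (assumption: this cilium-health build supports --probe): force a
    # synchronous connectivity probe from inside the cilium-agent container;
    # the "unreachable" endpoint entries above are the per-node health
    # endpoints behind the rejected direct node routes
    cilium-health status --probe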
  3. Inspect the routing table in DSR mode
    # The difference is that in DSR mode a node only learns routes for pod CIDRs within its own network segment; it cannot install routes across segments. Traffic to a different segment therefore falls through to the node's default route, and the next hop is resolved by the custom route entries in the Alibaba Cloud VPC route table (see the route lookup sketch after the tables below)
    <root@PROD-K8S-CP1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.0.253      0.0.0.0         UG        0 0          0 eth0
    10.1.0.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.0.0      172.21.0.85     255.255.255.0   UG        0 0          0 cilium_host
    172.21.0.64     172.21.0.85     255.255.255.192 UG        0 0          0 cilium_host
    172.21.0.85     0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.1.0      10.1.0.7        255.255.255.0   UG        0 0          0 eth0
    172.21.2.0      10.1.0.6        255.255.255.0   UG        0 0          0 eth0
    172.21.3.0      10.1.0.8        255.255.255.0   UG        0 0          0 eth0
    172.21.4.0      10.1.0.9        255.255.255.0   UG        0 0          0 eth0
    172.21.7.0      10.1.0.11       255.255.255.0   UG        0 0          0 eth0
    172.21.8.0      10.1.0.10       255.255.255.0   UG        0 0          0 eth0
    172.21.11.0     10.1.0.12       255.255.255.0   UG        0 0          0 eth0
    <root@PROD-BE-K8S-WN6 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.17.253     0.0.0.0         UG        0 0          0 eth0
    10.1.17.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.5.0      10.1.17.231     255.255.255.0   UG        0 0          0 eth0
    172.21.6.0      10.1.17.232     255.255.255.0   UG        0 0          0 eth0
    172.21.12.0     10.1.17.237     255.255.255.192 UG        0 0          0 eth0
    172.21.12.64    172.21.12.86    255.255.255.192 UG        0 0          0 cilium_host
    172.21.12.86    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.12.128   10.1.17.238     255.255.255.192 UG        0 0          0 eth0
    <root@PROD-DATA-K8S-WN1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.18.253     0.0.0.0         UG        0 0          0 eth0
    10.1.18.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.13.0     172.21.13.25    255.255.255.0   UG        0 0          0 cilium_host
    172.21.13.25    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.14.0     10.1.18.49      255.255.255.0   UG        0 0          0 eth0
    172.21.15.0     10.1.18.51      255.255.255.0   UG        0 0          0 eth0
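    # Sketch: on prod-k8s-cp1 there is no kernel route for a cross-segment pod
    # CIDR such as 172.21.13.0/24 (hosted on prod-data-k8s-wn1), so the lookup
    # falls back to the default route and the Alibaba Cloud VPC route table
    # delivers the packet to 10.1.18.50. 172.21.13.10 is a hypothetical pod IP.
    ip route get 172.21.13.10
    # Expected here: "172.21.13.10 via 10.1.0.253 dev eth0 ..." (default gateway)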
  4. Pod-to-pod connectivity test skipped: where the routes above are present, connectivity follows (a sketch of such a test is shown below)
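    # Sketch of the skipped test (pod names and image are hypothetical):
    kubectl run nettest-a --image=busybox --restart=Never -- sleep 3600
    kubectl run nettest-b --image=busybox --restart=Never -- sleep 3600
    # once both pods are Running, ping across nodes (and network segments)
    kubectl exec nettest-a -- ping -c 3 "$(kubectl get pod nettest-b -o jsonpath='{.status.podIP}')"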

endpointRoutes mode

What the official documentation says

--set endpointRoutes.enabled=true

endpointRoutes:
  # -- Enable use of per endpoint routes instead of routing via
  # the cilium_host interface.
  enabled: false
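With per-endpoint routes enabled, the node installs one host route per pod, pointing at the pod's veth, instead of a single aggregate route via the cilium_host interface. A minimal sketch to observe this on any node after the install below; the grep pattern assumes the default lxc+ endpoint interface prefix:

    # one /32 route per endpoint instead of an aggregate via cilium_host
    ip route | grep lxc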

  1. Installation
    <root@PROD-K8S-CP1 ~># helm install cilium cilium/cilium --version 1.9.9 \
    >     --namespace kube-system \
    >     --set tunnel=disabled \
    >     --set endpointRoutes.enabled=true \
    >     --set kubeProxyReplacement=strict \
    >     --set loadBalancer.mode=hybrid \
    >     --set nativeRoutingCIDR=172.21.0.0/20 \
    >     --set ipam.mode=kubernetes \
    >     --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
    >     --set ipam.operator.clusterPoolIPv4MaskSize=26 \
    >     --set k8sServiceHost=apiserver.qiangyun.com \
    >     --set k8sServicePort=6443
    NAME: cilium
    LAST DEPLOYED: Sat Aug 28 18:04:09 2021
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.9.9.
    
    For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
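    # Sketch: confirm the values this release was actually deployed with
    helm get values cilium -n kube-system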
  2. Check the cilium-agent logs
    <root@PROD-K8S-CP1 ~># docker logs -f 716
    level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
    level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
    level=info msg="Memory available for map entries (0.003% of 16508948480B): 41272371B" subsys=config
    level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 72407" subsys=config
    level=info msg="option bpf-nat-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-neigh-global-max set by dynamic sizing to 144815" subsys=config
    level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 72407" subsys=config
    level=info msg="  --agent-health-port='9876'" subsys=daemon
    level=info msg="  --agent-labels=''" subsys=daemon
    level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
    level=info msg="  --allow-localhost='auto'" subsys=daemon
    level=info msg="  --annotate-k8s-node='true'" subsys=daemon
    level=info msg="  --api-rate-limit='map[]'" subsys=daemon
    level=info msg="  --arping-refresh-period='5m0s'" subsys=daemon
    level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
    level=info msg=" --auto-direct-node-routes='false'" subsys=daemon 關閉DSR模式
    level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
    level=info msg="  --bpf-compile-debug='false'" subsys=daemon
    level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
    level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
    level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
    level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
    level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
    level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
    level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
    level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
    level=info msg=" --bpf-lb-mode='snat'" subsys=daemon loadbalance模式SNAT
    level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
    level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
    level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
    level=info msg="  --bpf-root=''" subsys=daemon
    level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
    level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
    level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
    level=info msg="  --cluster-id=''" subsys=daemon
    level=info msg="  --cluster-name='default'" subsys=daemon
    level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
    level=info msg="  --cmdref=''" subsys=daemon
    level=info msg="  --config=''" subsys=daemon
    level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
    level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
    level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
    level=info msg="  --datapath-mode='veth'" subsys=daemon
    level=info msg="  --debug='false'" subsys=daemon
    level=info msg="  --debug-verbose=''" subsys=daemon
    level=info msg="  --device=''" subsys=daemon
    level=info msg="  --devices=''" subsys=daemon
    level=info msg="  --direct-routing-device=''" subsys=daemon
    level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
    level=info msg="  --disable-conntrack='false'" subsys=daemon
    level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
    level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
    level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
    level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
    level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
    level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
    level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
    level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
    level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
    level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
    level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
    level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
    level=info msg="  --enable-endpoint-routes='true'" subsys=daemon
    level=info msg="  --enable-external-ips='true'" subsys=daemon
    level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
    level=info msg="  --enable-health-checking='true'" subsys=daemon
    level=info msg="  --enable-host-firewall='false'" subsys=daemon
    level=info msg=" --enable-host-legacy-routing='false'" subsys=daemon 關閉傳統主機路由模式,但endpointRoutes模式與eBPF會沖突,i dont know
    level=info msg="  --enable-host-port='true'" subsys=daemon
    level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
    level=info msg="  --enable-hubble='true'" subsys=daemon
    level=info msg="  --enable-identity-mark='true'" subsys=daemon
    level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
    level=info msg="  --enable-ipsec='false'" subsys=daemon
    level=info msg="  --enable-ipv4='true'" subsys=daemon
    level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
    level=info msg="  --enable-ipv6='false'" subsys=daemon
    level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
    level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
    level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
    level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
    level=info msg="  --enable-l7-proxy='true'" subsys=daemon
    level=info msg="  --enable-local-node-route='true'" subsys=daemon
    level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
    level=info msg="  --enable-monitor='true'" subsys=daemon
    level=info msg="  --enable-node-port='false'" subsys=daemon
    level=info msg="  --enable-policy='default'" subsys=daemon
    level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
    level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
    level=info msg="  --enable-session-affinity='true'" subsys=daemon
    level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
    level=info msg="  --enable-tracing='false'" subsys=daemon
    level=info msg="  --enable-well-known-identities='false'" subsys=daemon
    level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
    level=info msg="  --encrypt-interface=''" subsys=daemon
    level=info msg="  --encrypt-node='false'" subsys=daemon
    level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
    level=info msg="  --endpoint-queue-size='25'" subsys=daemon
    level=info msg="  --endpoint-status=''" subsys=daemon
    level=info msg="  --envoy-log=''" subsys=daemon
    level=info msg="  --exclude-local-address=''" subsys=daemon
    level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
    level=info msg="  --flannel-master-device=''" subsys=daemon
    level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
    level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
    level=info msg="  --gops-port='9890'" subsys=daemon
    level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
    level=info msg="  --http-403-msg=''" subsys=daemon
    level=info msg="  --http-idle-timeout='0'" subsys=daemon
    level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
    level=info msg="  --http-normalize-path='true'" subsys=daemon
    level=info msg="  --http-request-timeout='3600'" subsys=daemon
    level=info msg="  --http-retry-count='3'" subsys=daemon
    level=info msg="  --http-retry-timeout='0'" subsys=daemon
    level=info msg="  --hubble-disable-tls='false'" subsys=daemon
    level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
    level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
    level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
    level=info msg="  --hubble-metrics=''" subsys=daemon
    level=info msg="  --hubble-metrics-server=''" subsys=daemon
    level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
    level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
    level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
    level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
    level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
    level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
    level=info msg="  --install-iptables-rules='true'" subsys=daemon
    level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
    level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
    level=info msg="  --ipam='kubernetes'" subsys=daemon
    level=info msg="  --ipsec-key-file=''" subsys=daemon
    level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
    level=info msg="  --iptables-random-fully='false'" subsys=daemon
    level=info msg="  --ipv4-node='auto'" subsys=daemon
    level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv4-range='auto'" subsys=daemon
    level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
    level=info msg="  --ipv4-service-range='auto'" subsys=daemon
    level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
    level=info msg="  --ipv6-mcast-device=''" subsys=daemon
    level=info msg="  --ipv6-node='auto'" subsys=daemon
    level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
    level=info msg="  --ipv6-range='auto'" subsys=daemon
    level=info msg="  --ipv6-service-range='auto'" subsys=daemon
    level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
    level=info msg="  --join-cluster='false'" subsys=daemon
    level=info msg="  --k8s-api-server=''" subsys=daemon
    level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
    level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
    level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
    level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
    level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
    level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
    level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
    level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
    level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
    level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
    level=info msg="  --keep-config='false'" subsys=daemon
    level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
    level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
    level=info msg="  --kvstore=''" subsys=daemon
    level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
    level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
    level=info msg="  --kvstore-opt='map[]'" subsys=daemon
    level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
    level=info msg="  --label-prefix-file=''" subsys=daemon
    level=info msg="  --labels=''" subsys=daemon
    level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
    level=info msg="  --log-driver=''" subsys=daemon
    level=info msg="  --log-opt='map[]'" subsys=daemon
    level=info msg="  --log-system-load='false'" subsys=daemon
    level=info msg="  --masquerade='true'" subsys=daemon
    level=info msg="  --max-controller-interval='0'" subsys=daemon
    level=info msg="  --metrics=''" subsys=daemon
    level=info msg="  --monitor-aggregation='medium'" subsys=daemon
    level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
    level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
    level=info msg="  --monitor-queue-size='0'" subsys=daemon
    level=info msg="  --mtu='0'" subsys=daemon
    level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
    level=info msg="  --native-routing-cidr='172.21.0.0/20'" subsys=daemon
    level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
    level=info msg="  --node-port-algorithm='random'" subsys=daemon
    level=info msg="  --node-port-bind-protection='true'" subsys=daemon
    level=info msg="  --node-port-mode='hybrid'" subsys=daemon
    level=info msg="  --node-port-range='30000,32767'" subsys=daemon
    level=info msg="  --policy-audit-mode='false'" subsys=daemon
    level=info msg="  --policy-queue-size='100'" subsys=daemon
    level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
    level=info msg="  --pprof='false'" subsys=daemon
    level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
    level=info msg="  --prefilter-device='undefined'" subsys=daemon
    level=info msg="  --prefilter-mode='native'" subsys=daemon
    level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
    level=info msg="  --prometheus-serve-addr=''" subsys=daemon
    level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
    level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
    level=info msg="  --read-cni-conf=''" subsys=daemon
    level=info msg="  --restore='true'" subsys=daemon
    level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
    level=info msg="  --single-cluster-route='false'" subsys=daemon
    level=info msg="  --skip-crd-creation='false'" subsys=daemon
    level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
    level=info msg="  --sockops-enable='false'" subsys=daemon
    level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
    level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
    level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
    level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
    level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
    level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
    level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
    level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
    level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
    level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
    level=info msg="  --trace-payloadlen='128'" subsys=daemon
    level=info msg="  --tunnel='disabled'" subsys=daemon
    level=info msg="  --version='false'" subsys=daemon
    level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
    level=info msg="     _ _ _" subsys=daemon
    level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
    level=info msg="|  _| | | | | |     |" subsys=daemon
    level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
    level=info msg="Cilium 1.9.9 5bcf83c 2021-07-19T16:45:00-07:00 go version go1.15.14 linux/amd64" subsys=daemon
    level=info msg="cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL" subsys=daemon
    level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
    level=info msg="linking environment: OK!" subsys=linux-datapath
    level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
    level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
    level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
    level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
    level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
    level=info msg=" - reserved:.*" subsys=labels-filter
    level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
    level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
    level=info msg=" - :app.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:io.kubernetes" subsys=labels-filter
    level=info msg=" - !:kubernetes.io" subsys=labels-filter
    level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
    level=info msg=" - !:k8s.io" subsys=labels-filter
    level=info msg=" - !:pod-template-generation" subsys=labels-filter
    level=info msg=" - !:pod-template-hash" subsys=labels-filter
    level=info msg=" - !:controller-revision-hash" subsys=labels-filter
    level=info msg=" - !:annotation.*" subsys=labels-filter
    level=info msg=" - !:etcd_node" subsys=labels-filter
    level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
    level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.5.0.0/16
    level=info msg="Initializing daemon" subsys=daemon
    level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
    level=info msg="BPF host routing is incompatible with enable-endpoint-routes. Falling back to legacy host routing (enable-host-legacy-routing=true)." subsys=daemon 與eBPF沖突,在初始化是指定 --set bpf.hostRouting=true
    level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.0.5 mtu=1500 subsys=mtu
    level=info msg="Restored services from maps" failed=0 restored=11 subsys=service
    level=info msg="Reading old endpoints..." subsys=daemon
    level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
    level=info msg="Reusing previous DNS proxy port: 39451" subsys=daemon
    level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
    level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
    level=info msg="Retrieved node information from kubernetes node" nodeName=prod-k8s-cp1 subsys=k8s
    level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.0.5 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.0.5 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h]" nodeName=prod-k8s-cp1 subsys=k8s v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restored router IPs from node information" ipv4=172.21.0.85 ipv6="<nil>" subsys=k8s
    level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
    level=info msg="Using auto-derived devices to attach Loadbalancer, Host Firewall or Bandwidth Manager program" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
    level=info msg="Enabling k8s event listener" subsys=k8s-watcher
    level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
    level=info msg="Removing stale endpoint interfaces" subsys=daemon
    level=info msg="Skipping kvstore configuration" subsys=daemon
    level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.0.85 ipv6="<nil>" subsys=node
    level=info msg="Initializing node addressing" subsys=daemon
    level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=172.21.0.0/24 v6Prefix="<nil>"
    level=info msg="Restoring endpoints..." subsys=daemon
    level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
    level=info msg="Addressing information:" subsys=daemon
    level=info msg="  Cluster-Name: default" subsys=daemon
    level=info msg="  Cluster-ID: 0" subsys=daemon
    level=info msg="  Local node-name: prod-k8s-cp1" subsys=daemon
    level=info msg="  Node-IPv6: <nil>" subsys=daemon
    level=info msg="  External-Node IPv4: 10.1.0.5" subsys=daemon
    level=info msg="  Internal-Node IPv4: 172.21.0.85" subsys=daemon
    level=info msg="  IPv4 allocation prefix: 172.21.0.0/24" subsys=daemon
    level=info msg="  IPv4 native routing prefix: 172.21.0.0/20" subsys=daemon
    level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
    level=info msg="  Local IPv4 addresses:" subsys=daemon
    level=info msg="  - 10.1.0.5" subsys=daemon
    level=info msg="  - 172.21.0.85" subsys=daemon
    level=info msg="Adding local node to cluster" node="{prod-k8s-cp1 default [{InternalIP 10.1.0.5} {CiliumInternalIP 172.21.0.85}] 172.21.0.0/24 <nil> 172.21.0.197 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master: topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h] 6}" subsys=nodediscovery
    level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
    level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
    level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.0.85 v4Prefix=172.21.0.0/24 v4healthIP.IPv4=172.21.0.197 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
    level=info msg="Initializing identity allocator" subsys=identity-cache
    level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
    level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v3 subsys=datapath-loader
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
    level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
    level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Adding new proxy port rules for cilium-dns-egress:39451" proxy port name=cilium-dns-egress subsys=proxy
    level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
    level=info msg="Validating configured node address ranges" subsys=daemon
    level=info msg="Starting connection tracking garbage collector" subsys=daemon
    level=info msg="Starting IP identity watcher" subsys=ipcache
    level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
    level=info msg="Regenerating restored endpoints" numRestored=1 subsys=daemon
    level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.025936718825527918 newInterval=7m30s subsys=map-ct
    level=info msg="Datapath signal listener running" subsys=signal
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Successfully restored endpoint. Scheduling regeneration" endpointID=3912 subsys=daemon
    level=info msg="Removed endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3610 identity=4 ipv4=172.21.0.71 ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Launching Cilium health daemon" subsys=daemon
    level=info msg="Launching Cilium health endpoint" subsys=daemon
    level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
    level=info msg="Initializing Cilium API" subsys=daemon
    level=info msg="Daemon initialization completed" bootstrapTime=5.687347691s subsys=daemon
    level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
    level=info msg="Configuring Hubble server" eventQueueSize=4096 maxFlows=4095 subsys=hubble
    level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
    level=info msg="Beginning to read perf buffer" startTime="2021-08-28 10:04:17.337903259 +0000 UTC m=+5.762296463" subsys=monitor-agent
    level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2454 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2454 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2454 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.676219511s file-path=/var/run/cilium/state/templates/07d958f5310f668aa25992c4b03f0ab71d723a11/bpf_host.o subsys=datapath-loader
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Compiled new BPF template" BPFCompilationTime=1.348419572s file-path=/var/run/cilium/state/templates/f7d40533d0d45d623a9ad0f1855c105aed55472e/bpf_lxc.o subsys=datapath-loader
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3912 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2454 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
    level=info msg="Restored endpoint" endpointID=3912 ipAddr="[ ]" subsys=endpoint
    level=info msg="Finished regenerating restored endpoints" regenerated=1 subsys=daemon total=1
    level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
    level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
  3. Check the cilium-agent status in endpointRoutes mode
    root@PROD-K8S-CP1:/home/cilium# cilium status --verbose
    KVStore:                Ok   Disabled
    Kubernetes:             Ok   1.18 (v1.18.5) [linux/amd64]
    Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
    Cilium:                 Ok   1.9.9 (v1.9.9-5bcf83c)
    NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
    Cilium health daemon:   Ok   
    IPAM:                   IPv4: 2/255 allocated from 172.21.0.0/24, 
    Allocated addresses:
      172.21.0.197 (health)
      172.21.0.85 (router)
    BandwidthManager:       Disabled
    Host Routing:           Legacy   # note: host routing is in legacy mode here, not BPF
    Masquerading:           BPF   [eth0]   172.21.0.0/20
    Clock Source for BPF:   ktime
    Controller Status:      18/18 healthy
      Name                                  Last success   Last error   Count   Message
      cilium-health-ep                      11s ago        never        0       no error   
      dns-garbage-collector-job             17s ago        never        0       no error   
      endpoint-2454-regeneration-recovery   never          never        0       no error   
      endpoint-3912-regeneration-recovery   never          never        0       no error   
      k8s-heartbeat                         17s ago        never        0       no error   
      mark-k8s-node-as-available            24m12s ago     never        0       no error   
      metricsmap-bpf-prom-sync              2s ago         never        0       no error   
      neighbor-table-refresh                4m12s ago      never        0       no error   
      resolve-identity-2454                 4m11s ago      never        0       no error   
      restoring-ep-identity (3912)          24m12s ago     never        0       no error   
      sync-endpoints-and-host-ips           12s ago        never        0       no error   
      sync-lb-maps-with-k8s-services        24m12s ago     never        0       no error   
      sync-policymap-2454                   58s ago        never        0       no error   
      sync-policymap-3912                   58s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (2454)     11s ago        never        0       no error   
      sync-to-k8s-ciliumendpoint (3912)     2s ago         never        0       no error   
      template-dir-watcher                  never          never        0       no error   
      update-k8s-node-annotations           24m16s ago     never        0       no error   
    Proxy Status:   OK, ip 172.21.0.85, 0 redirects active on ports 10000-20000
    Hubble:         Ok   Current/Max Flows: 224/4096 (5.47%), Flows/s: 0.15   Metrics: Disabled
    KubeProxyReplacement Details:
      Status:              Strict
      Protocols:           TCP, UDP
      Devices:             eth0 (Direct Routing)
      Mode:                Hybrid   # my take: on its own this has little effect unless DSR is enabled; see the sketch after this output
      Backend Selection:   Random
      Session Affinity:    Enabled
      XDP Acceleration:    Disabled
      Services:
      - ClusterIP:      Enabled
      - NodePort:       Enabled (Range: 30000-32767) 
      - LoadBalancer:   Enabled 
      - externalIPs:    Enabled 
      - HostPort:       Enabled
    BPF Maps:   dynamic sizing: on (ratio: 0.002500)
      Name                          Size
      Non-TCP connection tracking   72407
      TCP connection tracking       144815
      Endpoint policy               65535
      Events                        4
      IP cache                      512000
      IP masquerading agent         16384
      IPv4 fragmentation            8192
      IPv4 service                  65536
      IPv6 service                  65536
      IPv4 service backend          65536
      IPv6 service backend          65536
      IPv4 service reverse NAT      65536
      IPv6 service reverse NAT      65536
      Metrics                       1024
      NAT                           144815
      Neighbor table                144815
      Global policy                 16384
      Per endpoint policy           65536
      Session affinity              65536
      Signal                        4
      Sockmap                       65535
      Sock reverse NAT              72407
      Tunnel                        65536
    Cluster health:              3/19 reachable   (2021-08-28T10:20:49Z)
      Name                       IP               Node        Endpoints
      prod-k8s-cp1 (localhost)   10.1.0.5         reachable   reachable
      prod-be-k8s-wn1            10.1.17.231      reachable   unreachable
      prod-be-k8s-wn2            10.1.17.232      reachable   unreachable
      prod-be-k8s-wn6            10.1.17.236      reachable   reachable
      prod-be-k8s-wn7            10.1.17.237      reachable   unreachable
      prod-be-k8s-wn8            10.1.17.238      reachable   unreachable
      prod-data-k8s-wn1          10.1.18.50       reachable   reachable
      prod-data-k8s-wn2          10.1.18.49       reachable   unreachable
      prod-data-k8s-wn3          10.1.18.51       reachable   unreachable
      prod-fe-k8s-wn1            10.1.16.221      reachable   unreachable
      prod-fe-k8s-wn2            10.1.16.222      reachable   unreachable
      prod-fe-k8s-wn3            10.1.16.223      reachable   unreachable
      prod-k8s-cp2               10.1.0.7         reachable   unreachable
      prod-k8s-cp3               10.1.0.6         reachable   unreachable
      prod-sys-k8s-wn1           10.1.0.8         reachable   unreachable
      prod-sys-k8s-wn2           10.1.0.9         reachable   unreachable
      prod-sys-k8s-wn3           10.1.0.11        reachable   unreachable
      prod-sys-k8s-wn4           10.1.0.10        reachable   unreachable
      prod-sys-k8s-wn5           10.1.0.12        reachable   unreachable
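    Regarding the "Mode: Hybrid" line above: my reading is that this value only matters once DSR is actually switched on. A hedged sketch of moving the release to DSR (loadBalancer.mode is the flag in the 1.9 Helm chart; verify against values.yaml before use):

    helm upgrade cilium cilium/cilium --version 1.9.9 \
        --namespace kube-system \
        --reuse-values \
        --set loadBalancer.mode=dsr    # alternatives: snat (default), hybrid

    Note that DSR only works in native routing mode, i.e. with tunnel=disabled, which is already the case here.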
  4. View the node routing tables
    <root@PROD-K8S-CP1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.0.253      0.0.0.0         UG        0 0          0 eth0
    10.1.0.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.0.0      172.21.0.85     255.255.255.0   UG        0 0          0 cilium_host
    172.21.0.85     0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.0.117    0.0.0.0         255.255.255.255 UH        0 0          0 lxc_health
    
    <root@PROD-DATA-K8S-WN1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.18.253     0.0.0.0         UG        0 0          0 eth0
    10.1.18.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.13.0     172.21.13.25    255.255.255.0   UG        0 0          0 cilium_host
    172.21.13.25    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.13.73    0.0.0.0         255.255.255.255 UH        0 0          0 lxc_health

    <root@PROD-FE-K8S-WN1 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.16.253     0.0.0.0         UG        0 0          0 eth0
    10.1.16.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.9.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
    172.21.9.173    0.0.0.0         255.255.255.255 UH        0 0          0 lxc_health
    172.21.9.225    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host

    <root@PROD-BE-K8S-WN6 ~># netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.1.17.253     0.0.0.0         UG        0 0          0 eth0
    10.1.17.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
    172.21.12.64    172.21.12.86    255.255.255.192 UG        0 0          0 cilium_host
    172.21.12.74    0.0.0.0         255.255.255.255 UH        0 0          0 lxc_health
    172.21.12.80    0.0.0.0         255.255.255.255 UH        0 0          0 lxc8de3adfa749f
    172.21.12.86    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
    172.21.12.88    0.0.0.0         255.255.255.255 UH        0 0          0 lxcc1a4ab58fd8d
    172.21.12.125   0.0.0.0         255.255.255.255 UH        0 0          0 lxcc8ea1535db0e

    # As shown above, each endpoint gets its own independent route (one /32 per lxc device)
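    To confirm how the kernel will actually forward a given destination, ip route get can be queried directly. A sketch using addresses from the tables above (output is approximate):

    # a pod IP local to this node resolves via cilium_host
    <root@PROD-K8S-CP1 ~># ip route get 172.21.0.197
    172.21.0.197 via 172.21.0.85 dev cilium_host src 172.21.0.85

    # a pod IP on another node matches no local route, so it follows the
    # default route to the VPC router, which must carry the PodCIDR routes
    <root@PROD-K8S-CP1 ~># ip route get 172.21.13.73
    172.21.13.73 via 10.1.0.253 dev eth0 src 10.1.0.5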
  5. Test Pod network connectivity (with the routes in place, the network is reachable)
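    A minimal check, assuming a throwaway busybox pod (names and target IP are examples; pick a pod IP that lives on a different node):

    kubectl run nettest --image=busybox --restart=Never -- sleep 3600
    kubectl get pods -o wide                          # note each pod's IP and node
    kubectl exec nettest -- ping -c 3 172.21.12.88    # ping a pod on another node
    kubectl delete pod nettest                        # clean up

    From inside a cilium-agent pod, cilium-health status gives the same cross-node view as the Cluster health section above.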

