1. Calico Overview
Calico is another popular networking choice in the Kubernetes ecosystem. While Flannel is widely regarded as the simplest option, Calico is known for its performance and flexibility. Calico's feature set is broader: besides providing network connectivity between hosts and pods, it also covers network security and policy management. The Calico CNI plugin wraps Calico's functionality within the CNI framework.
Calico is a pure layer-3 networking solution based on BGP, and it integrates well with cloud platforms such as OpenStack, Kubernetes, AWS, and GCE. On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) responsible for data forwarding. Each vRouter advertises the routes of the containers running on its node to the entire Calico network over BGP, and automatically installs forwarding rules for reaching the other nodes. Calico guarantees that all traffic between containers is interconnected purely via IP routing. When building the network, Calico nodes can use the data center's existing network fabric (L2 or L3) directly, with no additional NAT, tunnels, or overlay network; since there is no extra encapsulation and decapsulation, CPU cycles are saved and network efficiency improves.

In small clusters, Calico nodes can peer with each other directly; in large clusters, peering is handled through additional BGP route reflectors.

In addition, Calico provides rich network policies on top of iptables, implementing Kubernetes' NetworkPolicy to restrict network reachability between containers.
2. Calico Architecture and the BGP Implementation
BGP is a core decentralized autonomous routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing tables, or "prefix" tables, and belongs to the family of path-vector routing protocols. However, not every network supports BGP, and Calico's control-plane design requires the physical network to be a layer-2 network so that all vRouters are directly reachable from one another (routes cannot use a physical device as the next hop). To support layer-3 networks, Calico therefore also offers an IP-in-IP encapsulation model, which transmits data in an overlay fashion. The IPIP header is very small and the driver is built into the kernel, so in theory it is slightly faster than VxLAN, though weaker on security. The default configuration of Calico 3.x uses IPIP transport rather than plain BGP.
Calico's system architecture is shown in the figure below.

Calico consists mainly of the Felix, Orchestrator Plugin, etcd, BIRD, and BGP Route Reflector components.
- Felix: the Calico agent, running on every node.
- Orchestrator Plugin: the plugin that integrates Calico into an orchestration system (such as Kubernetes or OpenStack), e.g. the CNI plugin for Kubernetes.
- etcd: the storage system that persistently stores Calico's data.
- BIRD: the BGP client used to distribute routing information.
- BGP Route Reflector: an optional component, used in larger-scale network scenarios.
3. Deploying Calico
The main steps to deploy Calico in Kubernetes are as follows:
- Modify the startup parameters of the Kubernetes services and restart them:
  - On the Master, set the kube-apiserver startup flag --allow-privileged=true (because calico-node needs to run in privileged mode on each Node).
  - On each Node, set the kubelet startup flag --network-plugin=cni (use the CNI network plugin).
- Create the Calico services, mainly calico-node and the Calico policy controller. The resource objects to create are:
  - a ConfigMap, calico-config, containing the configuration parameters Calico needs;
  - a Secret, calico-etcd-secrets, for connecting to etcd over TLS;
  - the calico/node container, deployed as a DaemonSet so it runs on every Node;
  - the Calico CNI binaries installed on every Node (done by the install-cni container);
  - a Deployment named calico/kube-policy-controller, which acts on the NetworkPolicy objects defined for Pods in the Kubernetes cluster.
The concrete deployment steps are as follows.
Download the YAML manifest:
curl https://docs.projectcalico.org/v3.11/manifests/calico-etcd.yaml -o calico-etcd.yaml
After downloading, modify the configuration items.
- Configure the etcd connection address; if etcd uses HTTPS, the certificates must be configured as well (in the ConfigMap and Secret):
# cat /opt/etcd/ssl/ca.pem | base64 -w 0
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURlakNDQW1LZ0F3SUJBZ0lVRHQrZ21iYnhzWmoxRGNrbGl3K240MkI5YW5Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1F6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEVEQU9CZ05WQkFNVEIyVjBZMlFnUTBFd0hoY05NVGt4TWpBeE1UQXdNREF3V2hjTk1qUXhNVEk1Ck1UQXdNREF3V2pCRE1Rc3dDUVlEVlFRR0V3SkRUakVRTUE0R0ExVUVDQk1IUW1WcGFtbHVaekVRTUE0R0ExVUUKQnhNSFFtVnBhbWx1WnpFUU1BNEdBMVVFQXhNSFpYUmpaQ0JEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUtEaGFsNHFaVG5DUE0ra3hvN3pYT2ZRZEFheGo2R3JVSWFwOGd4MTR4dFhRcnhrCmR0ZmVvUXh0UG5EbDdVdG1ZUkUza2xlYXdDOVhxM0hPZ3J1YkRuQ2ZMRnJZV05DUjFkeG1KZkNFdXU0YmZKeE4KVHNETVF1aUlxcnZ2aVN3QnQ3ZHUzczVTbEJUc2NOV0Y4TWNBMkNLTkVRbzR2Snp5RFZXRTlGTm1kdC8wOEV3UwpmZVNPRmpRV3BWWnprQW1Fc0VRaldtYUVHZjcyUXZvbmRNM2Raejl5M2x0UTgrWnJxOGdaZHRBeWpXQmdrZHB1ClVXZ2NaUTBZWmQ2Q2p4YWUwVzBqVkt5RER4bGlSQ3pLcUFiaUNucW9XYW1DVDR3RUdNU2o0Q0JiYTkwVXc3cTgKajVyekFIVVdMK0dnM2dzdndQcXFnL2JmMTR2TzQ2clRkR1g0Q2hzQ0F3RUFBYU5tTUdRd0RnWURWUjBQQVFILwpCQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUVGRFJTakhxMm0wVWVFM0JmCks2bDZJUUpPU2Vzck1COEdBMVVkSXdRWU1CYUFGRFJTakhxMm0wVWVFM0JmSzZsNklRSk9TZXNyTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQUsyZXhBY2VhUndIRU9rQXkxbUsyWlhad1Q1ZC9jRXFFMmZCTmROTXpFeFJSbApnZDV0aGwvYlBKWHRSeWt0aEFUdVB2dzBjWVFPM1gwK09QUGJkOHl6dzRsZk5Ka1FBaUlvRUJUZEQvZWdmODFPCmxZOCtrRFhxZ1FZdFZLQm9HSGt5K2xRNEw4UUdOVEdaeWIvU3J5N0g3VXVDcTN0UmhzR2E4WGQ2YTNIeHJKYUsKTWZna1ZsNDA0bW83QXlWUHl0eHMrNmpLWCtJSmd3a2dHcG9DOXA2cDMyZDI1Q0NJelEweDRiZCtqejQzNXY1VApvRldBUmcySGdiTTR0aHdhRm1VRDcrbHdqVHpMczMreFN3Tys0S3Bmc2tScTR5dEEydUdNRDRqUTd0bnpoNi8wCkhQRkx6N0FGazRHRXoxaTNmMEtVTThEUlhwS0JKUXZNYzk4a3IrK24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
# cat /opt/etcd/ssl/server-key.pem | base64 -w 0
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdUZJdlNWNndHTTROZURqZktSWDgzWEVTWHhDVERvSVdpcFVJQmVTL3JnTHBFek10Cmk3enp6SjRyUGplbElJZ2ZRdVJHMHdJVXNzN3FDOVhpa3JGcEdnNXp5d2dMNmREZE9KNkUxcUIrUWprbk85ZzgKalRaenc3cWwxSitVOExNZ0k3TWNCU2VtWVo0TUFSTmd6Z09xd2x2WkVBMnUvNFh5azdBdUJYWFgrUTI2SitLVApYVEJ4MnBHOXoxdnVmU0xzMG52YzdKY2gxT2lLZ2R2UHBqSktPQjNMTm83ZnJXMWlaenprTExWSjlEV3U1NFdMCk4rVE5GZWZCK1lBTTdlSHVHTjdHSTJKdW1YL3hKczc3dnQzRjF2VStYSitzVTZ3cmRGMStULzVyamNUN1dDdmgKbkZVWlBxTk9NWUZTWUdlOVBIY1l0Y1Y1MENnUVV2NUx2OTE1elFJREFRQUJBb0lCQUY1YkRBUG1LZ1Y0cmVLRwpVbzhJeDNwZ29NUHppeVJaS2NybGdjYnFrOGt6aWpjZThzamZBSHNWMlJNdmp5TjVLMitseGkvTWwrWDFFRkRnCnUreldUdlJjdzZBQ3pYNXpRbHZ5b2hQdzh0Rlp5cURURUNSRjVMc2t1REdCUTlCNEVoTFVaSnFxOG54MFdMYlEKUWJVVW9YeC9ZajNhazJRUklOM0R5YnRYMlNpUHBPN1hVMmFiVkNzYkZBWW1uN2lweW16M25WWFRseDJuVk1sZQpmYzhXbERsd09pL3FJUThwZjNpRnowRDVoUGl5ZDY5eXp2b2ZrVk5CbCtodGFPbGdwdVNqSEFrNnhIcFpBUExTCkIxclVJaDk1RWozTUk5U3BuSnNWcUFFVHFSSmpYOHl3bFZYa2dvd3I2TXJuTnVXelRZUnlSNDY5UFVmKzhaSzQKUjE1WTdvMENnWUVBM21HdjErSmRuSkVrL1R4M2ZaMkFmOWJIazJ1dE5FemxUakN5YlZtYkxKamx0M1pjSG96UgphZVR2azJSQ0Q4VDU0NU9EWmIzS3Zxdzg2TXkxQW9lWmpqV3pTR1VIVHZJYTRDQ3lMenBXaVNaQkRHSE9KbDBtCk9nbnRRclFPK0UwZjNXOHZtbkp0NGoySGxMWHByL1R6Zk12R05lTWVkSUlIMC8xZXV0WkJjNnNDZ1lFQTFDK0gKaDVtQ0pnbllNcm5zK3dZZ2lWVTJFdjZmRUc2VGl2QU5XUUlldVpHcDRoclZISXc0UTV3SHhZNWgrNE15bXFORAprMmVDYU15RjFxb1NCS1hOckFZS3RtWCtxR3ltaVBpWlRJWEltZlppcENocWl3dm1udjMxbWE1Njk2NkZ6SjdaCjJTLzZkTGtweWI2OTUxRTl5azRBOEYzNVdQRlY4M01DanJ1bjBHY0NnWUFoOFVFWXIybGdZMXNFOS95NUJKZy8KYXZYdFQyc1JaNGM4WnZ4azZsOWY4RHBueFQ0TVA2d2JBS0Y4bXJubWxFY2I4RUVHLzIvNXFHcG5rZzh5d3FXeQphZ25pUytGUXNHMWZ0ajNjTFloVnlLdjNDdHFmU21weVExK2VaY00vTE81bkt2aFdGNDhrRUFZb3NaZG9qdmUzCkhaYzBWR1VxblVvNmxocW1ZOXQ3bndLQmdGbFFVRm9Sa2FqMVI5M0NTVEE0bWdWMHFyaEFHVEJQZXlkbWVCZloKUHBtWjZNcFZ4UktwS3gyNlZjTWdkYm5xdGFoRnhMSU5SZVZiQVpNa0wwVnBqVE0xcjlpckFoQmUrNUo0SWY4Rgo2VFIxYzN2cHp6OE1HVjBmUlB3Vlo0bE9HdC9RbFo1SUJjS1FGampuWXdRMVBDOGx1bHR6RXZ3UFNjQ1p6cC9KCitZOU5Bb0dBVkpybjl4QmZhYWF5T3BlMHFTTjNGVzRlaEZyaTFa
RUYvVHZqS3lnMnJGUng3VDRhY1dNWWdSK20KL2RsYU9CRFN6bjNheVVXNlBwSnN1UTBFanpIajFNSVFtV3JOQXNGSVJiN0Z6YzdhaVQzMFZmNFFYUmMwQUloLwpXNHk0OW5wNWNDWUZ5SXRSWEhXMUk5bkZPSjViQjF2b1pYTWNMK1dyMVZVa2FuVlIvNEE9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
# cat /opt/etcd/ssl/server.pem | base64 -w 0
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURyekNDQXBlZ0F3SUJBZ0lVZUFZTHdLMkxVdnE0V2ZiSG92cTlzVS8rWlJ3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1F6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEVEQU9CZ05WQkFNVEIyVjBZMlFnUTBFd0hoY05NVGt4TWpBeE1UQXdNREF3V2hjTk1qa3hNVEk0Ck1UQXdNREF3V2pCQU1Rc3dDUVlEVlFRR0V3SkRUakVRTUE0R0ExVUVDQk1IUW1WcFNtbHVaekVRTUE0R0ExVUUKQnhNSFFtVnBTbWx1WnpFTk1Bc0dBMVVFQXhNRVpYUmpaRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApBRENDQVFvQ2dnRUJBTGhTTDBsZXNCak9EWGc0M3lrVi9OMXhFbDhRa3c2Q0ZvcVZDQVhrdjY0QzZSTXpMWXU4Cjg4eWVLejQzcFNDSUgwTGtSdE1DRkxMTzZndlY0cEt4YVJvT2M4c0lDK25RM1RpZWhOYWdma0k1Snp2WVBJMDIKYzhPNnBkU2ZsUEN6SUNPekhBVW5wbUdlREFFVFlNNERxc0piMlJBTnJ2K0Y4cE93TGdWMTEva051aWZpazEwdwpjZHFSdmM5YjduMGk3Tko3M095WElkVG9pb0hiejZZeVNqZ2R5emFPMzYxdFltYzg1Q3kxU2ZRMXJ1ZUZpemZrCnpSWG53Zm1BRE8zaDdoamV4aU5pYnBsLzhTYk8rNzdkeGRiMVBseWZyRk9zSzNSZGZrLythNDNFKzFncjRaeFYKR1Q2alRqR0JVbUJudlR4M0dMWEZlZEFvRUZMK1M3L2RlYzBDQXdFQUFhT0JuVENCbWpBT0JnTlZIUThCQWY4RQpCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDCk1BQXdIUVlEVlIwT0JCWUVGTHNKb2pPRUZGcGVEdEhFSTBZOEZIUjQvV0c4TUI4R0ExVWRJd1FZTUJhQUZEUlMKakhxMm0wVWVFM0JmSzZsNklRSk9TZXNyTUJzR0ExVWRFUVFVTUJLSEJNQ29BajJIQk1Db0FqNkhCTUNvQWo4dwpEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQzVOSlh6QTQvTStFRjFHNXBsc2luSC9sTjlWWDlqK1FHdU0wRWZrCjhmQnh3bmV1ZzNBM2l4OGxXQkhZTCtCZ0VySWNsc21ZVXpJWFJXd0h4ZklKV2x1Ukx5NEk3OHB4bDBaVTZWUTYKalFiQVI2YzhrK0FhbGxBTUJUTkphY3lTWkV4MVp2c3BVTUJUU0l3bmk5RFFDUDJIQStDNG5mdHEwMGRvckQwcgp5OXVDZ3dnSDFrOG42TkdSZ0lJbVl6dFlZZmZHbEQ3R3lybEM1N0plSkFFbElUaElEMks1Y090M2dUb2JiNk5oCk9pSWpNWVAwYzRKL1FTTHNMNjZZRTh5YnhvZ2M2L3JTYzBIblladkNZbXc5MFZxY05oU3hkT2liaWtPUy9SdDAKZHVRSnU3cmdkM3pldys3Y05CaTIwTFZrbzc3dDNRZWRZK0c3dUxVZ21qNWJudkU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
Put the base64-encoded strings above into the declarations in the manifest: ca.pem corresponds to etcd-ca, server-key.pem to etcd-key, and server.pem to etcd-cert. Also adjust the paths of the etcd certificates, and the etcd connection address (the same one configured for the API server in /opt/kubernetes/cfg/kube-apiserver.conf).
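As a sanity check, what `base64 -w 0` produces can be reproduced in a few lines; the PEM bytes below are a placeholder, not a real certificate:

```python
import base64

# Placeholder PEM bytes -- not a real certificate.
pem = b"-----BEGIN CERTIFICATE-----\nMIIDemo\n-----END CERTIFICATE-----\n"

# `base64 -w 0` emits one unwrapped line; Python's b64encode does the same.
encoded = base64.b64encode(pem).decode("ascii")
assert "\n" not in encoded                # no line wrapping, safe as a Secret value
assert base64.b64decode(encoded) == pem   # the apiserver decodes it back to the original bytes
print(encoded)
```

The key point is the absence of line wrapping: a wrapped base64 string pasted into the Secret's `data` field would be rejected.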
# vim calico-etcd.yaml
...
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  etcd-key: <base64-encoded string of server-key.pem from above>
  etcd-cert: <base64-encoded string of server.pem from above>
  etcd-ca: <base64-encoded string of ca.pem from above>
...
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://192.168.2.61:2379,https://192.168.2.62:2379,https://192.168.2.63:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
Adjust the Pod CIDR (CALICO_IPV4POOL_CIDR) to your actual network plan; it must match the cluster CIDR configured for kube-controller-manager in /opt/kubernetes/cfg/kube-controller-manager.conf:
# vim calico-etcd.yaml
...
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
...
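The consistency requirement can be sanity-checked in a few lines; the /26 block and pod IP below are illustrative values, not taken from a live cluster:

```python
import ipaddress

# CALICO_IPV4POOL_CIDR must cover every pod address, and must equal the
# cluster CIDR handed to kube-controller-manager.
pool = ipaddress.ip_network("10.244.0.0/16")          # CALICO_IPV4POOL_CIDR
node_block = ipaddress.ip_network("10.244.36.64/26")  # a per-node block Calico could allocate
pod_ip = ipaddress.ip_address("10.244.36.65")

assert node_block.subnet_of(pool)  # node blocks are carved out of the pool
assert pod_ip in node_block        # a pod IP lives inside its node's block
print(pool.num_addresses)          # 65536 addresses available in a /16
```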
Choose the working mode (CALICO_IPV4POOL_IPIP). Both BGP and IPIP are supported; for now, disable IPIP:
# vim calico-etcd.yaml
...
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
...
After making the changes, apply the manifest:
# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
# kubectl get pods -n kube-system
If the flannel network component was deployed beforehand, flannel must be uninstalled and cleaned up first; this is required on every node:
# kubectl delete -f kube-flannel.yaml
# ip link delete cni0
# ip link delete flannel.1
# ip route
default via 192.168.2.2 dev eth0
10.244.1.0/24 via 192.168.2.63 dev eth0
10.244.2.0/24 via 192.168.2.62 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
# ip route del 10.244.1.0/24 via 192.168.2.63 dev eth0
# ip route del 10.244.2.0/24 via 192.168.2.62 dev eth0
# ip route
default via 192.168.2.2 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
4. The Calico Management Tool
Download the tool from https://github.com/projectcalico/calicoctl/releases
# wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.11.1/calicoctl
# chmod +x /usr/local/bin/calicoctl
Use calicoctl to check the service status:
# ./calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 192.168.2.62 | node-to-node mesh | up | 02:58:05 | Established |
| 192.168.2.63 | node-to-node mesh | up | 03:08:46 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
In fact, calicoctl node status just queries the local system; netstat shows the same information:
# netstat -antp|grep bird
tcp 0 0 0.0.0.0:179 0.0.0.0:* LISTEN 62709/bird
tcp 0 0 192.168.2.61:179 192.168.2.63:58963 ESTABLISHED 62709/bird
tcp 0 0 192.168.2.61:179 192.168.2.62:37390 ESTABLISHED 62709/bird
To see more information, calicoctl needs a configuration telling it how to read the data in etcd.
Create the configuration file:
# mkdir /etc/calico
# vim /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://192.168.2.61:2379,https://192.168.2.62:2379,https://192.168.2.63:2379"
  etcdKeyFile: "/opt/etcd/ssl/server-key.pem"
  etcdCertFile: "/opt/etcd/ssl/server.pem"
  etcdCACertFile: "/opt/etcd/ssl/ca.pem"
Query the data:
# calicoctl get node
NAME
k8s-master-01
k8s-node-01
k8s-node-02
View the IPAM IP address pool:
# ./calicoctl get ippool
NAME CIDR SELECTOR
default-ipv4-ippool 10.244.0.0/16 all()
# ./calicoctl get ippool -o wide
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR
default-ipv4-ippool 10.244.0.0/16 true Never Never false all()
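Calico's IPAM carves this pool into per-node blocks (the default block size is /26, as the IPPool section later shows); the arithmetic behind that can be sketched as:

```python
import ipaddress

# The /16 pool is divided into per-node /26 blocks, each holding 64 addresses.
pool = ipaddress.ip_network("10.244.0.0/16")
blocks = list(pool.subnets(new_prefix=26))

print(len(blocks))              # 1024 blocks available in the pool
print(blocks[0].num_addresses)  # 64 addresses per block
```

So a /16 pool with /26 blocks supports up to 1024 block allocations across the cluster's nodes.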
5. Calico BGP Mode

The rough flow of Pod 1 accessing Pod 2 is as follows:
- The packet leaves container 1 and arrives at the other end of its veth pair (on the host, named with a cali prefix);
- The host forwards the packet to the next hop (the gateway) according to its routing rules;
- Arriving at Node 2, the routing rules forward the packet to the cali device, and it reaches container 2.
Routing tables:
# node1
10.244.36.65 dev cali4f18ce2c9a1 scope link
10.244.169.128/26 via 192.168.31.63 dev ens33 proto bird
10.244.235.192/26 via 192.168.31.61 dev ens33 proto bird
# node2
10.244.169.129 dev calia4d5b2258bb scope link
10.244.36.64/26 via 192.168.31.62 dev ens33 proto bird
10.244.235.192/26 via 192.168.31.61 dev ens33 proto bird
The core "next hop" routing rules here are maintained by Calico's Felix process, and the routing information itself is carried over BGP by the BGP client, i.e. the BIRD component.
It is easy to see that Calico effectively treats every node in the cluster as a border router: together they form a fully connected network and exchange routing rules with one another over BGP. These nodes are called BGP peers.
Calico-related files:
# ls /opt/cni/bin/calico-ipam
/opt/cni/bin/calico-ipam
# ls /etc/cni/net.d/
10-calico.conflist calico-kubeconfig calico-tls/
# cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "etcd_endpoints": "https://192.168.2.61:2379,https://192.168.2.62:2379,https://192.168.2.63:2379",
      "etcd_key_file": "/etc/cni/net.d/calico-tls/etcd-key",
      "etcd_cert_file": "/etc/cni/net.d/calico-tls/etcd-cert",
      "etcd_ca_cert_file": "/etc/cni/net.d/calico-tls/etcd-ca",
      "mtu": 1440,
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}
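The conflist is an ordered plugin chain: the CNI runtime invokes each plugin in sequence, so calico sets up the pod interface first and portmap then handles hostPort mappings. Parsing a stripped-down copy of it (only the fields relevant to ordering) makes the chain explicit:

```python
import json

# Minimal copy of the conflist above, keeping only the plugin chain.
conflist = json.loads("""
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "calico", "ipam": {"type": "calico-ipam"}},
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}
""")

chain = [p["type"] for p in conflist["plugins"]]
print(chain)  # ['calico', 'portmap'] -- invoked in this order
```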
6. Calico Route Reflector (RR) Mode
https://docs.projectcalico.org/master/networking/bgp
By default, Calico maintains the network in full-mesh (node-to-node mesh) mode: every pair of nodes in the Calico cluster establishes a connection for route exchange. As the cluster grows, however, the mesh becomes enormous and the number of connections grows rapidly.
At that point, Route Reflector mode is used to solve the problem.
One or more Calico nodes are designated as route reflectors (usually at least two), and the other nodes obtain their routing information from these RR nodes.
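The scaling pressure can be put in numbers with a quick sketch (BGP session counts only; an illustration, not Calico code):

```python
# Full-mesh BGP grows quadratically: every pair of nodes peers directly.
def mesh_sessions(n: int) -> int:
    return n * (n - 1) // 2

# With route reflectors, each ordinary node peers only with the reflectors,
# and the reflectors peer among themselves.
def rr_sessions(n: int, reflectors: int = 2) -> int:
    clients = n - reflectors
    return clients * reflectors + mesh_sessions(reflectors)

print(mesh_sessions(100))  # 4950 sessions in a 100-node full mesh
print(rr_sessions(100))    # 197 sessions with two reflectors
```

Going from thousands of sessions to a few hundred is why the docs suggest moving off the full mesh as clusters approach the 100-node range.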
The concrete steps are as follows.
1. Disable the node-to-node BGP mesh
The default node-to-node mode is best kept to clusters of fewer than about 100 nodes.
Add a default BGPConfiguration, adjusting nodeToNodeMeshEnabled and asNumber, and save it as bgp.yaml:
# cat << EOF > bgp.yaml
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: false
  asNumber: 63400
EOF
# calicoctl apply -f bgp.yaml # the moment this is applied, the cluster's pod network is cut off (until peering is reconfigured)
Successfully applied 1 'BGPConfiguration' resource(s)
# calicoctl get bgpconfig
NAME LOGSEVERITY MESHENABLED ASNUMBER
default Info false 63400
# calicoctl node status
Calico process is running.
IPv4 BGP status
No IPv4 peers found.
IPv6 BGP status
No IPv6 peers found.
The AS number can be obtained with:
# calicoctl get nodes --output=wide
NAME ASN IPV4 IPV6
k8s-master-01 (63400) 192.168.2.61/24
k8s-node-01 (63400) 192.168.2.62/24
k8s-node-02 (63400) 192.168.2.63/24
2. Configure the designated nodes as route reflectors
To let the BGPPeer select nodes easily, match them with a label selector.
Label the route reflector node:
(To add a second route reflector later, just label the new node and configure it as a reflector in the same way.)
# kubectl label node k8s-node-02 route-reflector=true
node/k8s-node-02 labeled
Then set routeReflectorClusterID on the route reflector node:
# calicoctl get nodes k8s-node-02 -o yaml > node.yaml
# vim node.yaml
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  annotations:
    projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k8s-node2","kubernetes.io/os":"linux"}'
  creationTimestamp: null
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: k8s-node2
    kubernetes.io/os: linux
  name: k8s-node2
spec:
  bgp:
    ipv4Address: 192.168.31.63/24
    routeReflectorClusterID: 244.0.0.1 # add the cluster ID
  orchRefs:
  - nodeName: k8s-node2
    orchestrator: k8s
# ./calicoctl apply -f node.yaml
Successfully applied 1 'Node' resource(s)
Now a label selector makes it easy to peer the route reflector node with all the non-reflector nodes.
The following means every node connects to the route reflector node:
# vim peer-with-route-reflectors.yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors
spec:
  nodeSelector: all()
  peerSelector: route-reflector == 'true'
# calicoctl apply -f peer-with-route-reflectors.yaml
Successfully applied 1 'BGPPeer' resource(s)
# calicoctl get bgppeer
NAME PEERIP NODE ASN
peer-with-route-reflectors all() 0
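How the two selectors resolve can be sketched as follows: nodeSelector all() matches every node, while peerSelector keeps only the nodes carrying the route-reflector label (node names and labels below mirror this walkthrough's cluster):

```python
# Node -> label map, as set by `kubectl label node k8s-node-02 route-reflector=true`.
nodes = {
    "k8s-master-01": {},
    "k8s-node-01": {},
    "k8s-node-02": {"route-reflector": "true"},
}

# peerSelector: route-reflector == 'true'
peers = [name for name, labels in nodes.items()
         if labels.get("route-reflector") == "true"]
print(peers)  # ['k8s-node-02'] -- every node (all()) peers with this set
```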
Check a node's BGP connection status; it now has only the connection to the route reflector node:
# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+---------------+-------+----------+-------------+
| 192.168.2.63 | node specific | up | 04:17:14 | Established |
+--------------+---------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
7. Calico IPIP Mode
The main limitation of Flannel's host-gw mode is that it requires layer-2 connectivity between the cluster hosts, and the same limitation applies to Calico's BGP mode.
Switch to IPIP mode:
(This can also be set directly when deploying Calico.)
# calicoctl get ipPool -o yaml > ipip.yaml
# vi ipip.yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  blockSize: 26
  cidr: 10.244.0.0/16
  ipipMode: Always # enable IPIP mode
  natOutgoing: true
# calicoctl apply -f ipip.yaml
# calicoctl get ippool -o wide
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR
default-ipv4-ippool 10.244.0.0/16 true Always Never false all()
# ip route # a tunl0 interface is added
default via 192.168.2.2 dev eth0
10.244.44.192/26 via 192.168.2.63 dev tunl0 proto bird onlink
blackhole 10.244.151.128/26 proto bird
10.244.154.192/26 via 192.168.2.62 dev tunl0 proto bird onlink
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
IPIP mode diagram:

Pod 1訪問Pod 2大致流程如下:
- 數據包從容器1出到達
Veth Pair另一端(宿主機上,以cali前綴開頭); - 進入IP隧道設備(tunl0),由
Linux內核IPIP驅動封裝在宿主機網絡的IP包中(新的IP包目的地之是原IP包的下一跳地址,即192.168.31.63),這樣,就成了Node1到Node2的數據包; - 數據包經過路由器三層轉發到
Node2; Node2收到數據包后,網絡協議棧會使用IPIP驅動進行解包,從中拿到原始IP包;- 然后根據路由規則,根據路由規則將數據包轉發給
cali設備,從而到達容器2。
Routing tables:
# node1
10.244.36.65 dev cali4f18ce2c9a1 scope link
10.244.169.128/26 via 192.168.31.63 dev tunl0 proto bird onlink
# node2
10.244.169.129 dev calia4d5b2258bb scope link
10.244.36.64/26 via 192.168.31.62 dev tunl0 proto bird onlink
It is easy to see that when Calico runs in IPIP mode, the cluster's network performance drops because of the extra encapsulation and decapsulation work. It is therefore recommended to place all host nodes in one subnet and avoid IPIP where possible.
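Besides the CPU cost, encapsulation also eats into the usable packet size. A rough sketch of the overhead (the 50-byte VxLAN figure is approximate; exact overhead depends on the outer headers used):

```python
# IPIP prepends one extra 20-byte IPv4 header; VxLAN adds roughly 50 bytes
# (outer IPv4 + UDP + VXLAN headers). Starting from a 1500-byte physical MTU:
PHYS_MTU = 1500

ipip_mtu = PHYS_MTU - 20   # one extra IPv4 header
vxlan_mtu = PHYS_MTU - 50  # outer IPv4 + UDP + VXLAN headers

print(ipip_mtu, vxlan_mtu)  # 1480 1450
```

The CNI config shown earlier sets mtu 1440 on the pod interface, which leaves headroom below both figures.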
8. Calico Network Policies
With Calico deployed, Kubernetes NetworkPolicy can be used. Network policies were described in detail in the earlier article on implementing NetworkPolicy with flannel + canal, so they are not repeated here. 😊
Reference: https://docs.projectcalico.org/
