Because flannel, which the official Kubernetes setup uses, cannot provide network isolation in a multi-tenant environment (the pods it creates can actually reach one another), while Calico can, I found some time over the weekend to try out the rough process.
The Kubernetes installation itself is skipped here.
Calico Installation
Download the YAML files
http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/calico.yaml
http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/rbac.yaml
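Fetching them from the command line is enough; a minimal sketch, assuming wget is available on the master:

wget http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/calico.yaml
wget http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/rbac.yaml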
Download the images
quay.io/calico/node:v1.3.0
quay.io/calico/cni:v1.9.1
quay.io/calico/kube-policy-controller:v0.6.0

# Mirrors hosted in China
jicki/node:v1.3.0
jicki/cni:v1.9.1
jicki/kube-policy-controller:v0.6.0
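If you pull through the mirrors in China, one way to make them match the quay.io names referenced in calico.yaml is to retag them on every node; this retagging step is my own suggestion (alternatively, point the image fields in calico.yaml at the jicki/ images directly):

# Pull via the mirror and retag to the quay.io names used in calico.yaml
docker pull jicki/node:v1.3.0
docker tag jicki/node:v1.3.0 quay.io/calico/node:v1.3.0
docker pull jicki/cni:v1.9.1
docker tag jicki/cni:v1.9.1 quay.io/calico/cni:v1.9.1
docker pull jicki/kube-policy-controller:v0.6.0
docker tag jicki/kube-policy-controller:v0.6.0 quay.io/calico/kube-policy-controller:v0.6.0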
Modify the following parts of calico.yaml
etcd_endpoints: "https://192.168.44.108:2379"
etcd_ca: "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key"
# The base64-encoded content goes here.
# Run the commands in parentheses and paste the output into etcd-key, etcd-cert and etcd-ca, without the parentheses.
data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')

- name: CALICO_IPV4POOL_CIDR
  value: "10.233.0.0/16"
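To fill in the Secret, I just ran the three commands from the parentheses above on the master and pasted each output line into the matching field; a minimal sketch (certificate paths as above, the actual output differs per cluster):

# Print the base64-encoded etcd TLS material; paste each line (without
# line breaks) into etcd-key, etcd-cert and etcd-ca in calico.yaml.
cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n'; echo
cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n'; echo
cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n'; echo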
Create the pods
[root@k8s-master-1 ~]# kubectl apply -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
serviceaccount "calico-policy-controller" created
serviceaccount "calico-node" created
[root@k8s-master-1 ~]# kubectl apply -f rbac.yaml
Verify. If you only have one node, the calico-node count should be 1, and the pod listing below will show one fewer calico-node pod accordingly.
[root@k8s-master-1 calico]# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   2         2         2         2            2           <none>          41s
[root@k8s-master-1 calico]# kubectl get pods -n kube-system
NAME                                        READY     STATUS    RESTARTS   AGE
calico-node-04kd8                           2/2       Running   0          1m
calico-node-pkbwq                           2/2       Running   0          1m
calico-policy-controller-4282960220-mcdm7   1/1       Running   0          1m
Kubelet and Kube-proxy
The kubelet and kube-proxy on the corresponding nodes are modified as follows:
[root@calico-node1 ~]# cat /etc/systemd/system/kubelet.service
[Unit]
Description=kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.44.109 \
  --hostname-override=calico-node1 \
  --pod-infra-container-image=docker.io/jicki/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --require-kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster_dns=10.254.0.2 \
  --cluster_domain=cluster.local. \
  --hairpin-mode promiscuous-bridge \
  --allow-privileged=true \
  --serialize-image-pulls=false \
  --logtostderr=true \
  --cgroup-driver=systemd \
  --network-plugin=cni \
  --v=2
ExecStopPost=/sbin/iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
kube-proxy
[root@calico-node1 ~]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --bind-address=192.168.44.109 \
  --hostname-override=calico-node1 \
  --cluster-cidr=10.254.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
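After writing both unit files, the services have to be reloaded and restarted on each node; a minimal sketch, assuming systemd as the unit files above imply:

systemctl daemon-reload
systemctl restart kubelet kube-proxy
systemctl status kubelet kube-proxy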
Install calicoctl
Download
https://github.com/projectcalico/calicoctl/releases/download/v1.3.0/calicoctl
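Downloading it on the master is a single fetch; a sketch assuming wget (curl -L -O works the same way):

wget https://github.com/projectcalico/calicoctl/releases/download/v1.3.0/calicoctl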
[root@k8s-master-1 ~]# mv calicoctl /usr/local/bin
[root@k8s-master-1 ~]# cd /usr/local/bin
[root@k8s-master-1 ~]# chmod +x calicoctl
[root@k8s-master-1 ~]# calicoctl version
Version:      v1.3.0
Build date:
Git commit:   d2babb6

## Create the calicoctl configuration file
# The config file goes on a machine where the Calico network is installed.
[root@k8s-master-1 ~]# mkdir /etc/calico
[root@k8s-master-1 ~]# vi /etc/calico/calicoctl.cfg

apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "https://192.168.44.108:2379"
  etcdKeyFile: "/etc/kubernetes/ssl/etcd-key.pem"
  etcdCertFile: "/etc/kubernetes/ssl/etcd.pem"
  etcdCACertFile: "/etc/kubernetes/ssl/ca.pem"

# Check the Calico status
[root@k8s-master-2 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.6.0.188   | node-to-node mesh | up    | 10:05:39 | Established |
+--------------+-------------------+-------+----------+-------------+
Note that checking the node status has to be run on a machine where the calico-node pod is installed. With only one node it reports that no IPv4 BGP peers were found; I fiddled with this for quite a while and the table never showed up, but after installing a second node it appeared, with each side peering to the other's address.
Network Policy
I verified this on a single node.
First create the namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
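I applied the namespaces with kubectl; a small sketch, assuming the manifest above is saved as ns-calico.yaml (the file name is my own choice):

kubectl apply -f ns-calico.yaml
kubectl get ns ns-calico1 ns-calico2 --show-labels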
Then create an nginx deployment, which uses a user: ericnie label.
[root@calico-master calico]# cat nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ca1-nginx
  namespace: ns-calico2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
        user: ericnie
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ca1-nginx-svc
  namespace: ns-calico2
  labels:
    user: ericnie
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx
Next create a Tomcat pod, which will be used to access nginx.
[root@calico-master calico]# cat tomcat.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat
  namespace: ns-calico2
  labels:
    user: ericnie
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9.0-jre8
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
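Both manifests are applied the usual way before testing connectivity; a minimal sketch using the file names from the cat commands above:

kubectl apply -f nginx.yaml
kubectl apply -f tomcat.yaml
kubectl get pods -n ns-calico2 -o wide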
Access nginx from the Tomcat pod: whether or not Tomcat is in the ns-calico2 namespace, the connection goes through.
[root@calico-master calico]# kubectl get pods -n ns-calico2 -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP              NODE
ca1-nginx-2981719527-9zxw6   1/1       Running   0          23m       10.233.63.139   calico-node1
tomcat-3717491931-b5tl5      1/1       Running   0          23m       10.233.63.140   calico-node1
[root@calico-master calico]# kubectl exec -it tomcat-3717491931-b5tl5 -n ns-calico2 bash
root@tomcat-3717491931-b5tl5:/usr/local/tomcat# curl http://10.233.63.139
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Now change the policy on the ns-calico2 namespace so that, by default, access from any pod is denied.
[root@calico-master calico]# cat ns-calico2.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
  labels:
    user: ericnie
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
Verify from the Tomcat pod: the access indeed fails now.
Next create a policy that allows access from pods carrying the label user: ericnie.
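Concretely, the check is just applying the modified namespace and re-running the curl from inside the Tomcat pod; a sketch with a timeout so the blocked request fails quickly (the pod name and nginx IP are the ones from the earlier listing and will differ in your cluster):

kubectl apply -f ns-calico2.yaml
kubectl exec -it tomcat-3717491931-b5tl5 -n ns-calico2 -- \
  curl --max-time 5 http://10.233.63.139
# With DefaultDeny in effect the request should fail (typically a timeout)
# instead of returning the nginx welcome page.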
[root@calico-master calico]# cat net-policy.yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico2
spec:
  podSelector:
    matchLabels:
      user: ericnie
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            user: ericnie
      - podSelector:
          matchLabels:
            user: ericnie
After creating it, verify that the Tomcat pod can access nginx again.
Thanks to the following articles for their guidance:
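A minimal sketch of that verification, reusing the same pod name and IP as before:

kubectl apply -f net-policy.yaml
kubectl exec -it tomcat-3717491931-b5tl5 -n ns-calico2 -- \
  curl --max-time 5 http://10.233.63.139
# The nginx welcome page should come back now that the policy allows it.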
https://jicki.me/2017/07/25/kubernetes-1.7.2/#calico-%E7%BD%91%E7%BB%9C
http://blog.csdn.net/qq_34463875/article/details/74288175