Integrating Kubernetes with Calico


Hardware environment:

Three virtual machines:

192.168.99.129 master (kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet, etcd, calico, docker)

192.168.99.130 slave1 (kube-proxy, kubelet, etcd proxy, calico, docker, dns)

192.168.99.131 slave2 (kube-proxy, kubelet, etcd proxy, calico, docker)

Software environment:

kubernetes 1.5.2

etcd 3.1.0

calico 0.23.1

【etcd】

Calico needs an etcd proxy running on every node, so a single etcd instance is deployed on the master and an etcd proxy is deployed on each of the other nodes.

The etcd startup command on the master is as follows. (Recent etcd releases essentially use only port 2379, but some older programs that integrated with etcd still use port 4001, so I listen on both 2379 and 4001.)

etcd --name infra1 \
--data-dir /var/lib/etcd \
--listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
--advertise-client-urls http://192.168.99.129:2379,http://192.168.99.129:4001 \
--listen-peer-urls http://0.0.0.0:2380 \
--initial-advertise-peer-urls http://192.168.99.129:2380 \
--initial-cluster-token etcd-cluster \
--initial-cluster 'infra1=http://192.168.99.129:2380' \
--initial-cluster-state new \
--enable-pprof \
>> /var/log/etcd.log 2>&1 &
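
A quick sanity check of the master etcd (assuming the etcdctl binary shipped with the same 3.1.0 release is on the PATH; its default v2 API mode provides cluster-health):

etcdctl --endpoints http://192.168.99.129:2379 cluster-health
etcdctl --endpoints http://192.168.99.129:2379 member list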

The etcd proxy startup commands on the nodes are as follows (the first command is for slave1, the second for slave2):

etcd --name infra-proxy1 \
--proxy=on \
--listen-client-urls http://0.0.0.0:2379 \
--initial-cluster 'infra1=http://192.168.99.129:2380' \
--enable-pprof \
>> /var/log/etcd.log 2>&1 &

etcd --name infra-proxy2 \
--proxy=on \
--listen-client-urls http://0.0.0.0:2379 \
--initial-cluster 'infra1=http://192.168.99.129:2380' \
--enable-pprof \
>> /var/log/etcd.log 2>&1 &
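
On each node, requests to the local proxy on port 2379 should be forwarded to the cluster member on the master; this can be verified the same way (again assuming etcdctl is installed on the node):

etcdctl --endpoints http://127.0.0.1:2379 member list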

【kubernetes】

1. Add --allow_privileged=true to the startup scripts of kube-apiserver and kubelet. Without it, deploying Calico later fails with the following error:

The DaemonSet "calico-node" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by policy

2. Add --network-plugin=cni and --network-plugin-dir=/etc/cni/net.d to the kubelet startup script.

The kube-apiserver startup script (on the master) and the kubelet startup script (on each node; the example below is for slave1) are as follows:

kube-apiserver \
--logtostderr=true --v=0 \
--etcd-servers=http://k8s-master:4001 \
--insecure-bind-address=0.0.0.0 --insecure-port=8080 \
--service-cluster-ip-range=10.254.0.0/16 \
--allow_privileged=true \
>> /var/log/kube-apiserver.log 2>&1 &

kubelet \
--logtostderr=true --v=0 \
--address=0.0.0.0 \
--api-servers=http://k8s-master:8080 \
--pod-infra-container-image=index.tenxcloud.com/google_containers/pause-amd64:3.0 \
--cluster-dns=10.254.159.10 \
--cluster-domain=cluster.local \
--hostname-override=192.168.99.130 \
--allow_privileged=true \
--network-plugin=cni \
--network-plugin-dir=/etc/cni/net.d \
>> /var/log/kubelet.log 2>&1 &

3. Download https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-v0.4.0.tgz, extract it, and copy the loopback binary into /opt/cni/bin on every node. Without this step, pod creation fails with an error saying the loopback plugin cannot be found.
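
For example (a minimal sketch of this step; since the layout inside the tarball may vary, the copy below locates loopback with find instead of assuming a fixed path):

wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-v0.4.0.tgz
mkdir -p /opt/cni/bin /tmp/cni-v0.4.0
tar -xzf cni-v0.4.0.tgz -C /tmp/cni-v0.4.0
find /tmp/cni-v0.4.0 -type f -name loopback -exec cp {} /opt/cni/bin/ \;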

4. Calico must be deployed on the master node as well as on all the other nodes; if it is not deployed on the master, containers cannot reach the master. Since Calico is deployed as a DaemonSet, simply running kubelet on the master node is enough for Calico to be scheduled there too.

【calico】

1. Download calico.yaml from http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/calico.yaml

2. Edit calico.yaml and set the etcd address:

etcd_endpoints: "http://192.168.99.129:2379"
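
For example, the change can be made with sed (assuming the downloaded manifest still contains the default placeholder http://127.0.0.1:2379; check the ConfigMap in your copy before editing):

sed -i 's#http://127.0.0.1:2379#http://192.168.99.129:2379#' calico.yaml
grep etcd_endpoints calico.yaml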

3. Deploy Calico with the following command:

kubectl apply -f calico.yaml
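
To confirm the rollout (the hosted manifest of that era creates the calico-node DaemonSet in the kube-system namespace; adjust the namespace if your copy differs), one calico-node pod should end up Running on the master and on each node:

kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -o wide | grep calico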

【Deploying centos and redis】

1. Deploy centos, pinned to node 192.168.99.130. centos-rcd.yaml is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: centos
  labels:
    name: centos
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: centos
    spec:
      containers:
      - name: centos
        image: index.tenxcloud.com/tenxcloud/docker-centos
        ports:
        - containerPort: 6379
      nodeSelector:
        kubernetes.io/hostname: "192.168.99.130"
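
Create it and check that the pod lands on 192.168.99.130:

kubectl create -f centos-rcd.yaml
kubectl get pods -o wide | grep centos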

2. Deploy redis, pinned to node 192.168.99.131. redis-rc.yaml is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
  labels:
    k8s-app: redis
spec:
  replicas: 1
  selector:
    k8s-app: redis
  template:
    metadata:
      labels:
        k8s-app: redis
    spec:
      containers:
      - name: redis
        image: 10.10.30.166/public/redis:v1
        ports:
        - containerPort: 6379
          name: redis-tcp
          protocol: TCP
      nodeSelector:
        kubernetes.io/hostname: "192.168.99.131"

redis-svc.yaml is as follows:

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    k8s-app: redis
  clusterIP: 10.254.159.20
  ports:
  - name: "1"
    port: 6379
    protocol: TCP

3. The deployment now looks like this:

[root@master redis]# kubectl get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP                NODE
centos-bpzkc   1/1       Running   0          23h       192.168.140.197   192.168.99.130
dns-99cqq      3/3       Running   0          1d        192.168.140.196   192.168.99.130
redis-c7wk3    1/1       Running   0          4m        192.168.140.82    192.168.99.131
[root@master redis]# kubectl get svc -o wide
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE       SELECTOR
dns          10.254.159.10   <none>        53/UDP,53/TCP   1d        k8s-app=dns
kubernetes   10.254.0.1      <none>        443/TCP         2d        <none>
redis        10.254.159.20   <none>        6379/TCP        4m        k8s-app=redis

Routes on the master host:

[root@master redis]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.99.2    0.0.0.0         UG    100    0        0 eno16777736
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.99.0    0.0.0.0         255.255.255.0   U     100    0        0 eno16777736
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.140.64  192.168.99.131  255.255.255.192 UG    0      0        0 eno16777736
192.168.140.192 192.168.99.130  255.255.255.192 UG    0      0        0 eno16777736

Routes on slave1:

[root@slave1 bin]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.99.2    0.0.0.0         UG    100    0        0 eno16777736
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.99.0    0.0.0.0         255.255.255.0   U     100    0        0 eno16777736
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.140.64  192.168.99.131  255.255.255.192 UG    0      0        0 eno16777736
192.168.140.192 0.0.0.0         255.255.255.192 U     0      0        0 *
192.168.140.196 0.0.0.0         255.255.255.255 UH    0      0        0 cali12b26626b64
192.168.140.197 0.0.0.0         255.255.255.255 UH    0      0        0 calic477824fb70

Routes on slave2:

[root@slave2 bin]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.99.2    0.0.0.0         UG    100    0        0 eno16777736
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.99.0    0.0.0.0         255.255.255.0   U     100    0        0 eno16777736
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.140.64  0.0.0.0         255.255.255.192 U     0      0        0 *
192.168.140.82  0.0.0.0         255.255.255.255 UH    0      0        0 calieb567fc0b5e
192.168.140.192 192.168.99.130  255.255.255.192 UG    0      0        0 eno16777736
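
Optionally, the BGP peerings behind these routes can be inspected on any host that has the calicoctl binary installed (the exact subcommand depends on the calicoctl version; on the older 0.x tool it is "calicoctl status"):

calicoctl node status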

The iptables rules for redis on master, slave1, and slave2 are shown below; they are identical on all three hosts:

iptables -S -t nat | grep redis
-A KUBE-SEP-XAJWX3SXEKZG2YR7 -s 192.168.140.82/32 -m comment --comment "default/redis:1" -j KUBE-MARK-MASQ
-A KUBE-SEP-XAJWX3SXEKZG2YR7 -p tcp -m comment --comment "default/redis:1" -m tcp -j DNAT --to-destination 192.168.140.82:6379
-A KUBE-SERVICES -d 10.254.159.20/32 -p tcp -m comment --comment "default/redis:1 cluster IP" -m tcp --dport 6379 -j KUBE-SVC-XXJ2TMJIYSJJDBZG
-A KUBE-SVC-XXJ2TMJIYSJJDBZG -m comment --comment "default/redis:1" -j KUBE-SEP-XAJWX3SXEKZG2YR7

These rules show that the redis cluster IP 10.254.159.20:6379 is DNATed to 192.168.140.82:6379. I ran into a strange issue here whose cause I have not figured out yet: when the labels in redis-rc.yaml are k8s-app: redis, the iptables rules look like the above and everything works, but when the labels are name: redis, only the single rule below is generated. That means the cluster IP is never translated to the pod IP, so connections to the cluster IP inevitably fail.

-A KUBE-SERVICES -d 10.254.159.20/32 -p tcp -m comment --comment "default/redis:1 cluster IP" -m tcp --dport 6379 -j KUBE-SVC-XXJ2TMJIYSJJDBZG
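
The DNAT rules are generated from the Service's endpoints, so a reasonable thing to check in this situation is whether the Service's selector actually matches any pods (a diagnostic sketch, not a confirmed root cause for the behaviour above):

kubectl describe svc redis | grep -i selector
kubectl get endpoints redis
kubectl get pods --show-labels | grep redis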

【Verifying network connectivity】

1. Ping the centos and redis pod IPs from the master host

[root@master redis]# ping 192.168.140.197
PING 192.168.140.197 (192.168.140.197) 56(84) bytes of data.
64 bytes from 192.168.140.197: icmp_seq=1 ttl=63 time=1.55 ms
64 bytes from 192.168.140.197: icmp_seq=2 ttl=63 time=0.487 ms
[root@master redis]# ping 192.168.140.82
PING 192.168.140.82 (192.168.140.82) 56(84) bytes of data.
64 bytes from 192.168.140.82: icmp_seq=1 ttl=63 time=0.317 ms
64 bytes from 192.168.140.82: icmp_seq=2 ttl=63 time=0.502 ms

2. Telnet to the redis cluster IP from the master host

[root@master redis]# telnet 10.254.159.20 6379
Trying 10.254.159.20...
Connected to 10.254.159.20.
Escape character is '^]'.

3. Ping the centos and redis pods from slave1 and connect to the redis cluster IP

[root@slave1 bin]# ping 192.168.140.197
PING 192.168.140.197 (192.168.140.197) 56(84) bytes of data.
64 bytes from 192.168.140.197: icmp_seq=1 ttl=64 time=0.329 ms
64 bytes from 192.168.140.197: icmp_seq=2 ttl=64 time=0.068 ms
[root@slave1 bin]# ping 192.168.140.82
PING 192.168.140.82 (192.168.140.82) 56(84) bytes of data.
64 bytes from 192.168.140.82: icmp_seq=1 ttl=63 time=0.291 ms
64 bytes from 192.168.140.82: icmp_seq=2 ttl=63 time=0.455 ms
[root@slave1 bin]# telnet 10.254.159.20 6379
Trying 10.254.159.20...
Connected to 10.254.159.20.
Escape character is '^]'.

4. Ping the redis pod from inside the centos container

[root@centos-bpzkc /]# ping 192.168.140.82
PING 192.168.140.82 (192.168.140.82) 56(84) bytes of data.
64 bytes from 192.168.140.82: icmp_seq=1 ttl=62 time=0.951 ms

5. Resolve the redis service name via DNS inside the centos container and connect to redis

[root@centos-bpzkc /]# nslookup redis
Server:        10.254.159.10
Address:    10.254.159.10#53

Name:    redis.default.svc.cluster.local
Address: 10.254.159.20
[root@centos-bpzkc /]# telnet redis 6379
Trying 10.254.159.20...
Connected to redis.
Escape character is '^]'.

6. Access a service on the master host (kube-apiserver) from inside the centos container

[root@centos-bpzkc /]# telnet 192.168.99.129 8080
Trying 192.168.99.129...
Connected to 192.168.99.129.
Escape character is '^]'.
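
The same reachability can also be confirmed with an actual API request, since kube-apiserver was started with --insecure-port=8080 (assuming curl is available in the container image):

curl http://192.168.99.129:8080/version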

 

