[k8s] Docker Calico networking & the Docker cluster-store


The Docker cluster-store option

Cross-host Docker communication with etcd + Calico (BGP)

Configuring the Calico network

- Start etcd
etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls=http://192.168.2.11:2379 --debug

- Start Docker with the cluster store pointed at etcd
iptables -P FORWARD ACCEPT
systemctl stop docker
dockerd --cluster-store=etcd://192.168.2.11:2379


- Write the calicoctl configuration
mkdir -p /etc/calico/
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.2.11:2379"
EOF

- Start Calico
calicoctl node run
calicoctl node status

- On n1, create a global-scope network using the calico driver
docker network rm cal_net1
docker network create --driver calico --ipam-driver calico-ipam cal_net1
docker network ls

- Networks created on n1 sync to n2 automatically

docker network create --driver calico --ipam-driver calico-ipam cal_net1

--driver calico           use Calico's libnetwork (CNM) network driver.
--ipam-driver calico-ipam use Calico's IPAM driver to manage IP addresses.

Testing the Calico network: container connectivity across two nodes

docker run --net cal_net1 --name b1 -itd busybox   # on node n1
docker exec -it b1 ip a

docker run --net cal_net1 --name b2 -itd busybox   # on node n2
docker exec -it b2 ip a


[root@n1 ~]# docker exec -it b1 ping 192.168.158.64
PING 192.168.158.64 (192.168.158.64): 56 data bytes
64 bytes from 192.168.158.64: seq=0 ttl=62 time=0.774 ms

Exploring the Calico network structure

Problem encountered: global-scope Docker networks were not syncing between nodes. Cause: etcd's advertise-client-urls was wrongly set to 0.0.0.0; it must advertise a concrete IP address.

- The right way
etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls=http://192.168.2.11:2379 --debug

- The wrong way
etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls=http://0.0.0.0:2379

What calicoctl node run actually does

1. Enables ip_forward
2. Pulls the calico-node image and starts it:

docker run --net=host --privileged \
    --name=calico-node -d \
    --restart=always -e NODENAME=n1.ma.com \
    -e CALICO_NETWORKING_BACKEND=bird \
    -e CALICO_LIBNETWORK_ENABLED=true \
    -e ETCD_ENDPOINTS=http://127.0.0.1:2379 \
    -v /var/log/calico:/var/log/calico \
    -v /var/run/calico:/var/run/calico \
    -v /lib/modules:/lib/modules \
    -v /run:/run -v /run/docker/plugins:/run/docker/plugins \
    -v /var/run/docker.sock:/var/run/docker.sock \
quay.io/calico/node:latest


3. Writes node information into etcd

Customizing Calico's IP pool

- Create a custom IP pool and a network that uses it
cat << EOF | calicoctl create -f -
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 17.2.0.0/16
EOF

docker network create --driver calico --ipam-driver calico-ipam --subnet=17.2.0.0/16 my_net

- Start a container with a fixed IP
docker run --net my_net --ip 17.2.3.11 -it busybox
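When assigning a fixed address with --ip, it helps to confirm the address actually falls inside the pool's CIDR first, since calico-ipam rejects addresses outside the pool. A minimal pure-bash check (the helpers ip_to_int and in_cidr are ours, not part of any Calico tool):

```shell
#!/usr/bin/env bash
# ip_to_int: dotted quad -> 32-bit integer
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }

# in_cidr IP NET/PREFIX: succeeds when IP lies inside the CIDR
in_cidr() {
  local prefix=${2#*/} net=${2%/*}
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 17.2.3.11 17.2.0.0/16 && echo "17.2.3.11 fits in 17.2.0.0/16"
```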

- Inspect nodes and pools
calicoctl node status
calicoctl get ipPool


Calico's default network policy

- The default policy
Calico's default policy rule: a container can only communicate with containers in the same Calico network.

- View the default profile
calicoctl get profile cal_net1 -o yaml

① The profile is named cal_net1 — this is the profile for the calico network cal_net1.
② The profile carries a tag, also called cal_net1. Although the tag happens to match the network name, it is an arbitrary label with no inherent relation to name: cal_net1 above; it is referenced later.
③ egress controls packets leaving the container; by default there are no restrictions.
④ ingress controls packets entering the container; by default it accepts traffic only from containers carrying the tag cal_net1 — that is, only from the same network, which explains the earlier test results.
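Put together, points ①-④ correspond to a profile that looks roughly like the following. This is a sketch of typical calicoctl v1 output, not captured from a live node; exact fields vary by version:

```yaml
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_net1        # ① profile named after the network
    tags:
    - cal_net1            # ② tag; the value is independent of the name above
  spec:
    egress:
    - action: allow       # ③ outbound: unrestricted
    ingress:
    - action: allow       # ④ inbound: only from containers tagged cal_net1
      source:
        tag: cal_net1
```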

Customizing the default network policy

Calico lets you define flexible policy rules to control traffic in and out of containers with fine granularity. Let's work through a scenario:
    Create a new calico network cal_web and run an httpd container web1 in it.
    Define a policy that allows containers in cal_net1 to reach web1 on port 80.


- First, create cal_web:
docker network create --driver calico --ipam-driver calico-ipam cal_web
calicoctl get profile cal_web -o yaml

- On host1, run container web1 attached to cal_web:
docker run --net cal_web --name web1 -d httpd
docker exec -it web1 ip a

- Create cal_net1 and a client container:
docker network create --driver calico --ipam-driver calico-ipam cal_net1
docker run --net cal_net1 --name b2 -itd busybox
docker exec -it b2 ip a
Try web1's port 80:
docker exec -it b2 wget x.x.x.x  # fails: blocked by the default policy

- Create the policy file web.yaml:
cat > web.yaml<<EOF
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_web
  spec:
    ingress:
    - action: allow
      protocol: tcp
      source:
        tag: cal_net1
      destination:
        ports:
          - 80
EOF
calicoctl apply -f web.yaml
calicoctl get profile cal_web -o yaml

- Try web1's port 80 again
docker exec -it b2 wget x.x.x.x  # succeeds now

- Check the policy on both nodes (it is identical)
calicoctl get profile cal_web -o yaml

Calico in practice

Calico's data-forwarding path

1. The container sends a packet.
2. It consults its routing table; the default route points at the gateway.
3. It ARPs for the gateway; the host side answers via proxy ARP.
4. The packet therefore arrives on the host.
5. The host forwards it according to its own routing table (routes distributed via BGP).
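The steps above can be spot-checked on a node. Note this is a sketch: caliXXXX is a placeholder interface name you would read from ip link on your own host, and only the ip_forward check runs on an arbitrary Linux box:

```shell
# Step 5 requires forwarding to be enabled on the host.
cat /proc/sys/net/ipv4/ip_forward   # expect 1 on a Calico node

# On a real Calico host (caliXXXX is a placeholder interface name):
#   ip route show                                    # per-container /32 routes via cali* devices
#   cat /proc/sys/net/ipv4/conf/caliXXXX/proxy_arp   # 1 = host answers the container's gateway ARP
```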

How the network is implemented

Reference: http://cizixs.com/2017/10/19/docker-calico-network

Exploring Calico's data in etcd

Browse etcd graphically with etcd-browser

docker run --name etcd-browser -p 0.0.0.0:8000:8000 --env ETCD_HOST=192.168.2.11 --env ETCD_PORT=2379 --env AUTH_PASS=doe -itd buddho/etcd-browser

Note the published port (8000) and point ETCD_HOST at the actual etcd IP.
[root@n1 ~]# etcdctl ls /calico
/calico/bgp
/calico/ipam
/calico/v1

[root@n1 ~]# etcdctl ls /calico/bgp/v1
/calico/bgp/v1/host
/calico/bgp/v1/global


[root@n1 ~]# etcdctl ls /calico/bgp/v1/global
/calico/bgp/v1/global/node_mesh
/calico/bgp/v1/global/as_num
/calico/bgp/v1/global/loglevel
/calico/bgp/v1/global/custom_filters


[root@n1 ~]# etcdctl get /calico/bgp/v1/global/as_num
64512

[root@n1 ~]# etcdctl ls /calico/bgp/v1/host
/calico/bgp/v1/host/n1.ma.com
/calico/bgp/v1/host/n2.ma.com


[root@n1 ~]# etcdctl ls /calico/bgp/v1/host/n1.ma.com
/calico/bgp/v1/host/n1.ma.com/ip_addr_v6
/calico/bgp/v1/host/n1.ma.com/network_v4
/calico/bgp/v1/host/n1.ma.com/ip_addr_v4

[root@n1 ~]# etcdctl get /calico/bgp/v1/host/n1.ma.com/ip_addr_v4
192.168.2.11

[root@n1 ~]# etcdctl get /calico/bgp/v1/host/n1.ma.com/network_v4
192.168.2.0/24

What each Calico component does

Reference: https://docs.projectcalico.org/v1.6/reference/without-docker-networking/docker-container-lifecycle

http://www.youruncloud.com/blog/131.html


bird   implements the BGP protocol
felix  the per-node agent that programs routes and filtering rules
confd  watches the datastore and regenerates bird's configuration

libnetwork plugin — manages IPs and interfaces


etcd key layout under /calico:
    ipam  address allocations per node
    bgp   BGP configuration
    v1    policy (profiles) and node data

Assorted Calico notes from earlier



docker stats

etcd --advertise-client-urls=http://0.0.0.0:2379 --listen-client-urls=http://0.0.0.0:2379 --enable-v2 --debug   # caution: advertising 0.0.0.0 breaks network sync (see above); advertise a concrete IP

vim /usr/lib/systemd/system/docker.service
# /etc/systemd/system/docker.service
--cluster-store=etcd://192.168.14.132:2379

systemctl daemon-reload
systemctl restart docker.service
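Editing the vendor unit file directly gets overwritten on package upgrades; a systemd drop-in is a safer way to add the same flag. This is a sketch using the etcd address from these notes; adjust the dockerd path for your distro:

```ini
# /etc/systemd/system/docker.service.d/cluster-store.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.14.132:2379
```

Then run systemctl daemon-reload and systemctl restart docker as above.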

[root@node1 ~]# ps -ef|grep docker
root       8122      1  0 Nov07 ?        00:01:01 /usr/bin/dockerd --cluster-store=etcd://192.168.14.132:2379

etcdctl ls
/docker


cd /usr/local/bin
wget https://github.com/projectcalico/calicoctl/releases/download/v1.6.1/calicoctl
chmod +x calicoctl

[root@node1 ~]# rpm -qa|grep etcd
etcd-3.2.5-1.el7.x86_64

mkdir /etc/calico
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.14.132:2379"
EOF

calicoctl node run
calicoctl node run --ip=192.168.14.132

1. Enables ip_forward
2. Pulls the calico-node image and starts it
3. Writes node information into etcd

iptables -P FORWARD ACCEPT
etcdctl rm --recursive /calico
etcdctl rm --recursive /docker

# BGP neighbors are established (14.132 and 14.133)
calicoctl node status

# Create a network on either node; it syncs to the other
docker network rm cal_net1
docker network create --driver calico --ipam-driver calico-ipam cal_net1

#+++++++++++++++++++++++++++
#  Test
#+++++++++++++++++++++++++++
# 14.132
docker container run --net cal_net1 --name bbox1 -tid busybox
docker exec bbox1 ip address
docker exec bbox1 route -n

# 14.133
docker container run --net cal_net1 --name bbox2 -tid busybox


docker exec bbox2 ip address
docker exec bbox2 ping  192.168.108.128


#+++++++++++++++++++++++++++
#  References
#+++++++++++++++++++++++++++
https://mp.weixin.qq.com/s/VL72aVjU4KB3c2UTihl-DA
http://blog.csdn.net/felix_yujing/article/details/55213239


#+++++++++++++++++++++++++++
#  Create an IP pool
#+++++++++++++++++++++++++++
calicoctl node status
calicoctl get ipPool
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 10.20.0.0/24
  spec:
    ipip:
      enabled: true
    nat-outgoing: true


Another test

docker network create --driver calico --ipam-driver calico-ipam  --subnet 10.30.0.0/24 net1
docker network create --driver calico --ipam-driver calico-ipam  --subnet 10.30.0.0/24 net2
docker network create --driver calico --ipam-driver calico-ipam  --subnet 10.30.0.0/24 net3

#node1
docker run --net net1 --name workload-A -tid busybox
docker run --net net2 --name workload-B -tid busybox
docker run --net net1 --name workload-C -tid busybox
#node2
docker run --net net3 --name workload-D -tid busybox
docker run --net net1 --name workload-E -tid busybox



# Containers in the same network can reach each other by container name, even across nodes
docker exec workload-A ping -c 4 workload-C.net1
docker exec workload-A ping -c 4 workload-E.net1
# Containers in different networks must be reached by IP (names resolve only within a network: "bad address")
docker exec workload-A ping -c 2  `docker inspect --format "{{ .NetworkSettings.Networks.net2.IPAddress }}" workload-B`


# Calico's default policy: containers in the same network can talk to each other; containers in different networks cannot. Containers on different nodes that share a network can also communicate — this is what provides cross-host connectivity.



#+++++++++++++++++++++++++++
#  Modify the default policy
#+++++++++++++++++++++++++++

cat << EOF | calicoctl apply -f -
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_net12icmp
    labels:
      role: database
  spec:
    ingress:
    - action: allow
      protocol: icmp
      source:
        tag: net1
      destination:
        tag: net2
EOF





https://docs.projectcalico.org/v2.2/reference/public-cloud/aws
$ calicoctl apply -f - << EOF
apiVersion: v1
kind: ipPool
metadata:
  cidr: 192.168.0.0/16
spec:
  ipip:
    enabled: true
    mode: cross-subnet
  nat-outgoing: true
EOF

References:
Docker networking with Calico: deployment notes / how Calico works
http://www.cnblogs.com/kevingrace/p/6864804.html
CentOS 7 NIC bridging (yum install bridge-utils)
https://allgo.cc/2015/04/16/centos7%E7%BD%91%E5%8D%A1%E6%A1%A5%E6%8E%A5/




apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-udp-ingress-controller
  labels:
    k8s-app: nginx-udp-ingress-lb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: nginx-udp-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-udp-ingress-lb
        name: nginx-udp-ingress-lb
    spec:
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      #- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.8
      - image: 192.168.1.103/k8s_public/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-udp-ingress-lb
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        ports:
        - containerPort: 81
          hostPort: 81
        - containerPort: 443
          hostPort: 443
        - containerPort: 53
          hostPort: 53
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --udp-services-configmap=$(POD_NAMESPACE)/nginx-udp-ingress-configmap

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  "53": "kube-system/kube-dns:53"


