service type
There are three main service types in k8s (a minimal spec sketch showing where the type is set follows this list):
- ClusterIP: use a cluster-internal IP only - this is the default and is discussed above. Choosing this value means that you want this service to be reachable only from inside of the cluster.
- NodePort: on top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You'll be able to contact the service on any <NodeIP>:NodePort address.
- LoadBalancer: on top of having a cluster-internal IP and exposing service on a NodePort also, ask the cloud provider for a load balancer which forwards to the Service exposed as a <NodeIP>:NodePort for each Node.
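As a rough sketch (the names here are hypothetical and not taken from the examples later in this post), the type is chosen via spec.type and defaults to ClusterIP when omitted:
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  type: LoadBalancer      # or ClusterIP / NodePort; ClusterIP is the default
  selector:
    app: my-app           # hypothetical pod label
  ports:
    - port: 80            # port exposed by the service
      targetPort: 8080    # port the container listens on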
clusterIP
The main purpose of a clusterIP is to make pod-to-pod calls inside the cluster convenient.
[minion@te-yuab6awchg-0-z5nlezoa435h-kube-master-udhqnaxpu5op ~]$ kubectl describe service redis-sentinel
Name: redis-sentinel
Namespace: default
Labels: name=sentinel,role=service
Selector: redis-sentinel=true
Type: ClusterIP
IP: 10.254.142.111
Port: <unnamed> 26379/TCP
Endpoints: <none>
Session Affinity: None
No events.
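For reference, a manifest along these lines would produce such a ClusterIP service. This is a sketch reconstructed from the describe output above, not the original file; in particular targetPort is an assumption, since it is not visible in that output:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sentinel
    role: service
  name: redis-sentinel
spec:
  # type omitted, so it defaults to ClusterIP
  ports:
    - port: 26379           # service port shown in the describe output
      targetPort: 26379     # assumed to match the service port
  selector:
    redis-sentinel: "true"  # matches the Selector shown above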
The clusterIP works mainly through iptables rules on each node: traffic sent to the clusterIP's port is redirected to kube-proxy.
[minion@te-yuab6awchg-0-z5nlezoa435h-kube-master-udhqnaxpu5op ~]$ sudo iptables -S -t nat
...
-A KUBE-PORTALS-CONTAINER -d 10.254.142.111/32 -p tcp -m comment --comment "default/redis-sentinel:" -m tcp --dport 26379 -j REDIRECT --to-ports 36547
-A KUBE-PORTALS-HOST -d 10.254.142.111/32 -p tcp -m comment --comment "default/redis-sentinel:" -m tcp --dport 26379 -j DNAT --to-destination 10.0.0.5:36547
kube-proxy then does load balancing internally: it can look up the addresses and ports of the pods backing this service and forward the traffic on to one of those pods.
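The pod addresses and ports that kube-proxy forwards to are the service's endpoints, which can be inspected directly (here the list would be empty, since the describe output above shows Endpoints: <none>):
kubectl get endpoints redis-sentinel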
nodePort/LoadBalancer
nodePort and LoadBalancer are really the same mechanism; see the description quoted above.
The difference is that LoadBalancer takes one extra step beyond nodePort: it can ask the cloud provider to create a load balancer that directs traffic to the nodes. The cloud provider integration appears to support OpenStack, GCE, and similar platforms.
nodePort works by opening a port on each node and directing traffic that arrives on that port to kube-proxy, which then forwards it on to the corresponding pod.
So when a service is exposed via nodePort, the right way to use it is to put a load balancer in front and register the corresponding port on every node as the LB's backends. That way, even if node1 goes down, the LB can still send traffic to the same port on the other nodes (a rough sketch of this follows).
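A rough sketch of that idea, using hypothetical node IPs and the nodePort 30239 from the manifest below (nc may need to be installed on the machine running the check):
# Each node exposes the same nodePort, so an external LB can list every <NodeIP>:<nodePort> as a backend.
for node in 10.0.0.5 10.0.0.6 10.0.0.7; do   # hypothetical node IPs
  nc -z -w 2 "$node" 30239 && echo "$node: nodePort reachable"
done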
We create the service with a manifest like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ssh
    role: service
  name: ssh-service1
spec:
  ports:
    - port: 2222
      targetPort: 22
      nodePort: 30239
  type: NodePort
  selector:
    ssh-service: "true"
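Assuming the manifest is saved as ssh-service.yaml (the filename is arbitrary), it can be created with:
kubectl create -f ssh-service.yaml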
get service shows that, even though the type is NodePort, the service is still assigned a clusterIP. As explained above, the clusterIP is there to make pod-to-service access convenient.
[minion@te-yuab6awchg-0-z5nlezoa435h-kube-master-udhqnaxpu5op ~]$ kubectl get service
NAME           LABELS                                    SELECTOR           IP(S)            PORT(S)
kubernetes     component=apiserver,provider=kubernetes   <none>             10.254.0.1       443/TCP
ssh-service1   name=ssh,role=service                     ssh-service=true   10.254.132.107   2222/TCP
describe shows the full details. The exposed NodePort is exactly the 30239 we specified:
[minion@te-yuab6awchg-0-z5nlezoa435h-kube-master-udhqnaxpu5op ~]$ kubectl describe service ssh-service1
Name: ssh-service1
Namespace: default
Labels: name=ssh,role=service
Selector: ssh-service=true
Type: LoadBalancer
IP: 10.254.132.107
Port: <unnamed> 2222/TCP
NodePort: <unnamed> 30239/TCP
Endpoints: <none>
Session Affinity: None
No events.
nodePort works in much the same way as clusterIP: traffic sent to the specified port on a node is redirected by iptables to the port kube-proxy is listening on, and kube-proxy then forwards the traffic on to one of the pods.
[minion@te-yuab6awchg-0-z5nlezoa435h-kube-master-udhqnaxpu5op ~]$ sudo iptables -S -t nat
...
-A KUBE-NODEPORT-CONTAINER -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 30239 -j REDIRECT --to-ports 36463
-A KUBE-NODEPORT-HOST -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 30239 -j DNAT --to-destination 10.0.0.5:36463
-A KUBE-PORTALS-CONTAINER -d 10.254.0.1/32 -p tcp -m comment --comment "default/kubernetes:" -m tcp --dport 443 -j REDIRECT --to-ports 53940
-A KUBE-PORTALS-CONTAINER -d 10.254.132.107/32 -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 2222 -j REDIRECT --to-ports 36463
-A KUBE-PORTALS-HOST -d 10.254.0.1/32 -p tcp -m comment --comment "default/kubernetes:" -m tcp --dport 443 -j DNAT --to-destination 10.0.0.5:53940
-A KUBE-PORTALS-HOST -d 10.254.132.107/32 -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 2222 -j DNAT --to-destination 10.0.0.5:36463
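To sanity-check from outside the cluster, one could connect to the nodePort on any node, for example the 10.0.0.5 address that appears in the DNAT rules above. Since targetPort is 22, ssh is the natural client; note that with Endpoints still <none>, no pod would actually answer yet:
ssh -p 30239 user@10.0.0.5    # "user" is a placeholder account inside the ssh pod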
