k8s StatefulSet + StorageClass: Deploying Stateful Applications in Practice (v2)
Copyright 2017-05-22 xiaogang(172826370@qq.com)
References
Most articles on this topic online simply copy the official documentation; many are of poor quality, mislead readers, and have little practical value, hence this write-up. NFS is relatively simple and is usually already installed.
The NFS-related material comes first; a Ceph RBD dynamic-volume document will follow later, together with several Redis and MySQL master-slave examples.
For stateful containers, storage is a key issue, and Kubernetes provides strong support for storage management. Kubernetes' dynamic volume provisioning feature creates storage volumes on demand. Before this feature existed, a cluster administrator first had to contact the cloud or storage provider to request a new volume and then create a PersistentVolume to make it visible in Kubernetes. Dynamic volume provisioning automates these two steps, so administrators no longer need to pre-allocate storage. Storage is provisioned according to what a StorageClass defines: a StorageClass is an abstraction over the underlying storage resource and carries storage-related parameters, such as the disk type (standard or SSD).
The various StorageClass provisioners give Kubernetes access to specific physical or cloud storage back ends. A number of provisioners are supported out of the box, and additional ones are available in the Kubernetes incubator.
In Kubernetes 1.6, dynamic volume provisioning was promoted to stable (it entered Beta in 1.4). This is an important step in Kubernetes storage automation: it lets administrators control how resources are provisioned while users focus on their applications. Beyond these benefits, you should also be aware of the user-facing changes involved before upgrading to Kubernetes 1.6.
Stateful applications
In general, nginx or another web server (not MySQL) does not need to store its own data; for a web server, data lives on nodes dedicated to persistence. Such pods can be scaled up or down freely simply by changing the replica count. Many stateful programs, however, need clustered deployment: the nodes form a group, and each member needs a unique ID (for example a Kafka broker.id or a ZooKeeper myid) that identifies it within the cluster and is used for internal communication between members. Traditionally, administrators deploy such programs onto stable, long-lived machines with persistent storage and static IP addresses, so an application instance becomes coupled to the underlying physical infrastructure, e.g. a particular machine or IP address. The goal of StatefulSet in Kubernetes is to decouple this dependency by assigning each application instance an identity that does not depend on the underlying infrastructure (consumers find a particular member through a DNS name instead of a static IP).
StatefulSet
Prerequisites
Prerequisites for using StatefulSet:
- Kubernetes cluster version >= 1.5
- The DNS cluster add-on installed, version >= 15
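A quick way to check both prerequisites (a sketch; the DNS add-on deployment name may differ in your cluster):
kubectl version --short
kubectl get deployment -n kube-system | grep -i dns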
Characteristics
Why is a StatefulSet (called PetSet before version 1.5) suitable for stateful programs? Compared with a Deployment it has the following characteristics:
- Stable, unique network identities that can be used to discover other members of the cluster. For example, if the StatefulSet is named kafka, the first Pod is kafka-0, the second kafka-1, and so on (likewise mysql-0, mysql-1, ...).
- Stable persistent storage, provided through Kubernetes PV/PVC or external (pre-provisioned) storage.
- Ordered, graceful deployment and scaling: when Pod n is operated on, Pods 0 .. n-1 are already running and ready.
- Ordered, graceful deletion and termination: Pods are removed in the order n, n-1, ..., 1, 0.
- "Stable" here means that across reschedules the Pod keeps its storage, DNS name, and hostname, regardless of which node it lands on.
So applications that need stable cluster membership, such as Zookeeper, Etcd, or Elasticsearch, can use a StatefulSet. By querying the A records of the headless service you obtain the domain names of the cluster members.
Limitations
A StatefulSet also has some limitations:
- Pod storage must be provided by a PersistentVolume provisioner based on a StorageClass, or be external storage pre-provisioned by an administrator.
- Deleting the StatefulSet or scaling it down does not delete the associated volumes; this is to keep the data safe.
- A StatefulSet currently requires a Headless Service to generate the unique network identities of its Pods; the developer must create this Service.
- Upgrading a StatefulSet is a manual process.
Headless Service
To make a Service headless, set spec.clusterIP to None in the Service definition. Compared with a normal Service, a Headless Service has no ClusterIP (and therefore no load balancing); instead it gives each member of the cluster a unique DNS name as its network identity, and members communicate with each other via these names. The domain managed by a headless service has the format $(service_name).$(k8s_namespace).svc.cluster.local, where "cluster.local" is the cluster domain (the default unless configured otherwise). Each Pod created under a StatefulSet gets a corresponding DNS subdomain of the form
$(podname).$(governing_service_domain), where governing_service_domain is determined by the serviceName defined in the StatefulSet. For example, if the headless service governing kafka has the domain kafka.test.svc.cluster.local, a Pod it manages gets the subdomain kafka-1.kafka.test.svc.cluster.local.
Note that all of these names are cluster-internal domains managed by the kube-dns component and can be queried with a command such as the one below.
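For example, a throwaway busybox pod can resolve these records (a sketch; the kafka/test names are the illustrative ones from the paragraph above):
kubectl run -i -t dns-test --image=busybox --restart=Never --rm -- nslookup kafka.test.svc.cluster.local
kubectl run -i -t dns-test --image=busybox --restart=Never --rm -- nslookup kafka-1.kafka.test.svc.cluster.local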
1. nfs-client StorageClass dynamic volumes
Configure export permissions on the NFS server machine: cat /etc/exports
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
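After editing /etc/exports, re-export and verify on the NFS server (standard nfs-utils commands):
exportfs -ra            # reload /etc/exports
showmount -e localhost  # confirm the export is listed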
Pull the nfs-client provisioner image and push it to the local registry
docker pull quay.io/kubernetes_incubator/nfs-client-provisioner:v1
docker tag quay.io/kubernetes_incubator/nfs-client-provisioner:v1 192.168.1.103/k8s_public/nfs-client-provisioner:v1
docker push 192.168.1.103/k8s_public/nfs-client-provisioner:v1
Deploy the provisioner; it mounts the NFS export and dynamically creates PVs on behalf of the StorageClass
cat deployment-nfs.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
      - name: nfs-client-provisioner
        image: 192.168.1.103/k8s_public/nfs-client-provisioner:v1
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.103
        - name: NFS_PATH
          value: /data/nfs-storage/k8s-storage/ssd
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.103
          path: /data/nfs-storage/k8s-storage/ssd  # the NFS export path; adjust to your environment
[root@master3 deploy]# kubectl create -f deployment-nfs.yaml
kubectl get pod
nfs-client-provisioner-4163627910-fn70d 1/1 Running 0 1m
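Optionally tail the provisioner's log to confirm it started cleanly; if RBAC is enabled you may see permission errors here, which are addressed in the RBAC section at the end (substitute your own generated pod name):
kubectl logs -f nfs-client-provisioner-4163627910-fn70d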
Deploy storageclass.yaml
[root@master3 deploy]# cat nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs  # must match the PROVISIONER_NAME env of the nfs-client-provisioner deployment; you may choose another name, but keep them identical
[root@master3 deploy]# kubectl create -f nfs-class.yaml
[root@master3 deploy]# kubectl get storageclass
NAME TYPE
ceph-web kubernetes.io/rbd
managed-nfs-storage fuseim.pri/ifs
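Optionally, mark managed-nfs-storage as the default StorageClass so PVCs without an explicit class use it; a sketch using the beta default-class annotation of this Kubernetes generation:
kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"}}}'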
Create a StatefulSet whose pods reference the StorageClass
[root@master3 stateful-set]# cat nginx.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"
  replicas: 2
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"  # reference the StorageClass name here
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: 192.168.1.103/k8s_public/nginx:latest
        volumeMounts:
        - mountPath: "/mnt"
          name: test
      imagePullSecrets:
      - name: "registrykey"  # note: this secret provides authenticated access to the private local registry
Verify that the PV and PVC were created automatically
[root@master3 stateful-set]# kubectl get pv |grep web
default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 2Gi RWO Delete Bound default/test-web-0 1m
default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59 2Gi RWO Delete Bound default/test-web-1 1m
[root@master3 stateful-set]# kubectl get pvc |grep web
test-web-0 Bound default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 2Gi RWO 1m
test-web-1 Bound default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59 2Gi RWO 1m
[root@master3 stateful-set]# kubectl get storageclass |grep web
ceph-web kubernetes.io/rbd
[root@master3 stateful-set]# kubectl get storageclass
NAME TYPE
ceph-web kubernetes.io/rbd
managed-nfs-storage fuseim.pri/ifs
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 2m
web-1 1/1 Running 0 2m
Scale the pods up to 3
[root@master3 stateful-set]# kubectl scale statefulset web --replicas=3
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 10m
web-1 1/1 Running 0 10m
web-2 1/1 Running 0 1m
Scale the pods down to 1
kubectl scale statefulset web --replicas=1
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 11m
OK, creation is complete and the pods are healthy.
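As the limitations section noted, scaling down does not delete the volumes; a quick check that the PVCs of the removed pods are still bound:
kubectl get pvc | grep web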
Exec into web-0 to verify the PVC mount directory
[root@master3 stateful-set]# kubectl exec -it web-0 /bin/bash
root@web-0:/#
root@web-0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-654996-18a8b448ce9ebf898e46c4468b33093ed9a5f81794d82a271124bcd1eb27a87c 10G 230M 9.8G 3% /
tmpfs 1.6G 0 1.6G 0% /dev
tmpfs 1.6G 0 1.6G 0% /sys/fs/cgroup
192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 189G 76G 104G 43% /mnt
/dev/mapper/centos-root 37G 9.1G 26G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 1.6G 12K 1.6G 1% /run/secrets/kubernetes.io/serviceaccount
root@web-0:/#
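To confirm the data really persists across rescheduling, a small sketch (the file name is arbitrary; wait until web-0 is Running again before the last command):
kubectl exec web-0 -- sh -c 'echo hello > /mnt/test.txt'
kubectl delete pod web-0                  # the StatefulSet controller recreates web-0
kubectl exec web-0 -- cat /mnt/test.txt   # should print: hello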
Check the PVC directories on the NFS server
root@pxt:/data/nfs-storage/k8s-storage/ssd# ll
total 40
drwxr-xr-x 10 root root 4096 May 22 17:53 ./
drwxr-xr-x 7 root root 4096 May 12 17:26 ../
drwxr-xr-x 3 root root 4096 May 16 16:19 default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 3 root root 4096 May 16 16:20 default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 3 root root 4096 May 16 16:21 default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:49 default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:56 default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:58 default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 22 17:53 default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 22 17:53 default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59/
root@pxt:/data/nfs-storage/k8s-storage/ssd# showmount -e
Export list for pxt.docker.agent103:
/data/nfs_ssd *
/data/nfs-storage/k8s-storage/standard *
/data/nfs-storage/k8s-storage/ssd *
/data/nfs-storage/k8s-storage/redis *
/data/nfs-storage/k8s-storage/nginx *
/data/nfs-storage/k8s-storage/mysql *
root@pxt:/data/nfs-storage/k8s-storage/ssd# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#/data/nfs-storage/k8s-storage *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/mysql *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/nginx *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/redis *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/standard *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs_ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
2. Deploy a scalable MySQL master-slave cluster based on MySQL 5.7 (one master, multiple slaves); prepare three YAML files
Reference
mysql-configmap.yaml mysql-services.yaml mysql-statefulset.yaml
[root@master3 setateful-set-mysql]# cat mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
[root@master3 setateful-set-mysql]# cat mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
[root@master3 setateful-set-mysql]# cat mysql-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        # The original upstream images are mysql:5.7 and gcr.io/google-samples/xtrabackup:1.0;
        # here they have been re-tagged and pushed to the private registry 192.168.1.103.
        # (Do not put comment lines inside the JSON below; the annotation value must remain valid JSON.)
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "init-mysql",
            "image": "192.168.1.103/k8s_public/mysql:5.7",
            "command": ["bash", "-c", "
              set -ex\n
              # Generate mysql server-id from pod ordinal index.\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              echo [mysqld] > /mnt/conf.d/server-id.cnf\n
              # Add an offset to avoid reserved server-id=0 value.\n
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
              # Copy appropriate conf.d files from config-map to emptyDir.\n
              if [[ $ordinal -eq 0 ]]; then\n
                cp /mnt/config-map/master.cnf /mnt/conf.d/\n
              else\n
                cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
              fi\n
            "],
            "volumeMounts": [
              {"name": "conf", "mountPath": "/mnt/conf.d"},
              {"name": "config-map", "mountPath": "/mnt/config-map"}
            ]
          },
          {
            "name": "clone-mysql",
            "image": "192.168.1.103/k8s_public/xtrabackup:1.0",
            "command": ["bash", "-c", "
              set -ex\n
              # Skip the clone if data already exists.\n
              [[ -d /var/lib/mysql/mysql ]] && exit 0\n
              # Skip the clone on master (ordinal index 0).\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              [[ $ordinal -eq 0 ]] && exit 0\n
              # Clone data from previous peer.\n
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
              # Prepare the backup.\n
              xtrabackup --prepare --target-dir=/var/lib/mysql\n
            "],
            "volumeMounts": [
              {"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
              {"name": "conf", "mountPath": "/etc/mysql/conf.d"}
            ]
          }
        ]'
    spec:
      containers:
      - name: mysql
        image: 192.168.1.103/k8s_public/mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 1
            memory: 1Gi
            #memory: 500Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          timeoutSeconds: 1
      - name: xtrabackup
        image: 192.168.1.103/k8s_public/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      nodeSelector:
        zone: mysql
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        # Depending on the Kubernetes version the annotation prefix is alpha or beta; use the one your version expects:
        #volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage"
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
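Note the nodeSelector zone: mysql above: the target nodes must carry that label or the Pods will stay Pending. A sketch (the node name is only an example from this environment):
kubectl label node 192.168.6.133 zone=mysql
kubectl get nodes -L zone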
[root@master3 setateful-set-mysql]# kubectl create -f mysql-configmap.yaml -f mysql-services.yaml -f mysql-statefulset.yaml
[root@master3 setateful-set-mysql]# kubectl get storageclass,pv,pvc,statefulset,pod,service |grep mysql
pv/default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-0 6d
pv/default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-1 6d
pv/default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-2 6d
pvc/data-mysql-0 Bound default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
pvc/data-mysql-1 Bound default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
pvc/data-mysql-2 Bound default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
statefulsets/mysql 3 3 5d
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
po/mysql-2 2/2 Running 0 5d
svc/mysql None <none> 3306/TCP 6d    # within the same namespace the members can be pinged directly: ping mysql-0.mysql ; ping mysql-1.mysql
svc/mysql-read 172.1.11.160 <none> 3306/TCP 6d
OK, all pods are created. Note that the mysql service has no ClusterIP; that is the headless service type.
Also note that after kubectl delete statefulset (or delete -f on the YAML), the PVs and PVCs still remain.
Scale up the MySQL slaves; after scaling up you can see that the corresponding PVs and PVCs are created automatically:
kubectl scale --replicas=5 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
po/mysql-2 2/2 Running 0 5d
po/mysql-3 2/2 Running 0 5m
po/mysql-4 2/2 Running 0 5m
Scale down:
kubectl scale --replicas=2 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
Testing
Test connecting to MySQL
Method 1: connect from a container
Start a mysql-client pod
# Start a client container. In my test the statements below executed successfully; if the session appears to hang, press Ctrl+C, and kubectl get pod will still show the mysql-client pod.
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF
kubectl exec -it mysql-client bash
#connect to the slaves via the read service
root@mysql-client:/# mysql -h mysql-read
#connect to the master
mysql -h mysql-0.mysql
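To confirm replication, read the row written through mysql-0.mysql back through the read service:
root@mysql-client:/# mysql -h mysql-read -e "SELECT * FROM test.messages"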
Method 2: install a mysql client on a physical host
#install
yum install mysql -y
#look up the pod IPs
[root@node131 images]# kubectl get po -o wide|grep mysql
mysql-0 2/2 Running 0 25m 172.30.2.4 192.168.6.133
mysql-1 2/2 Running 1 24m 172.30.28.4 192.168.6.132
mysql-2 2/2 Running 1 24m 172.30.2.5 192.168.6.133
mysql-client 1/1 Running 0 22m 172.30.28.5 192.168.6.132
#log in to MySQL from the local mysql client, using a pod IP from above
mysql -h 172.30.2.5
Check the mysql-read service
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
[root@node131 images]# kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
> bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
If you don't see a command prompt, try pressing enter.
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 100 | 2017-05-23 08:58:31 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 101 | 2017-05-23 08:58:32 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 102 | 2017-05-23 08:58:33 |
+-------------+---------------------+
^C
Keep the window above open.
Simulate a MySQL node failure:
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
In that window only server ids 100 and 101 remain: renaming the mysql binary makes the readiness probe (mysql -h 127.0.0.1 -e "SELECT 1") fail, so mysql-2 is removed from the mysql-read endpoints.
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 100 | 2017-05-23 09:03:05 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 100 | 2017-05-23 09:03:06 |
+-------------+---------------------+
Restore server 102; it is then automatically added back behind the read service:
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
Delete a pod:
kubectl delete pod mysql-2
After deletion, the StatefulSet controller automatically recreates mysql-2.
Node maintenance: when a node needs maintenance, all pods on that node must be evicted; they are automatically rescheduled onto other nodes:
kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
kubectl get pod mysql-2 -o wide --watch
After the node has been maintained, bring it back into the cluster:
kubectl uncordon <node-name>
kubectl get pods -l app=mysql --watch
Scale up again
kubectl scale --replicas=5 statefulset mysql
kubectl get pods -l app=mysql --watch
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
kubectl scale --replicas=3 statefulset mysql
kubectl get pvc -l app=mysql
Scaling down: the PVC listing shows that all 5 PVCs still exist, despite having scaled the StatefulSet down to 3:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
data-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m
data-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m
data-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m
data-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
data-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
If you don’t intend to reuse the extra PVCs, you can delete them:
kubectl delete pvc data-mysql-3
kubectl delete pvc data-mysql-4
Clean up the environment:
kubectl delete pod mysql-client-loop --now
kubectl delete statefulset mysql
kubectl get pods -l app=mysql
kubectl delete configmap,service,pvc -l app=mysql
Fixing RBAC authorization
Kubernetes 1.6 enables RBAC authorization, so after creating the StatefulSet, check the provisioner pod's log:
kubectl logs -f nfs-client-provisioner-2387627438-hs250
...
E0523 02:47:32.695718 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:32.696305 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:32.697326 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
E0523 02:47:33.697467 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:33.697967 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:33.699042 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
...
^C
Fix:
[root@node131 rbac]# cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
[root@node131 rbac]# cat clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
[root@node131 rbac]# cat clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@node131 rbac]#
Note
[root@node131 nfs]# cat nfs-stateful.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner  # the Deployment must reference the ServiceAccount created above
Create these objects in order, then re-apply the provisioner Deployment above so it can provision PVs:
kubectl create -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
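After recreating the provisioner with the new ServiceAccount, check its log again; the "cannot list ... at the cluster scope" errors should no longer appear (the generated pod name suffix will differ):
kubectl delete -f deployment-nfs.yaml      # remove the old provisioner if it is still running
kubectl create -f nfs-stateful.yaml        # the version that references serviceAccount nfs-provisioner
kubectl logs -f $(kubectl get pod | awk '/nfs-client-provisioner/{print $1}')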