1. Install cfssl (the tools used to create the aggregator certificates)
Method 1: install the prebuilt binaries directly
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ chmod +x cfssl_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssljson_linux-amd64
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl-certinfo_linux-amd64
$ mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
$ export PATH=/usr/local/bin:$PATH
Method 2: install with the go command
$ go get -u github.com/cloudflare/cfssl/cmd/...
$ ls $GOPATH/bin/cfssl*
cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan
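Either installation method should leave the binaries on the PATH; a quick sanity check (the exact version output depends on the release you installed):

# confirm the tools are on the PATH and report the installed version
$ which cfssl cfssljson cfssl-certinfo
$ cfssl version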
2. Create the CA (Certificate Authority)
Create the CA configuration file
$ mkdir /root/ssl
$ cd /root/ssl
$ cfssl print-defaults config > config.json
$ cfssl print-defaults csr > csr.json
# Based on the format of config.json, create the following aggregator-ca-config.json file
# The expiry is set to 87600h (10 years)
$ cat > aggregator-ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "aggregator": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
Field descriptions:
- profiles: multiple profiles can be defined, each with its own expiry, usages and other parameters; a specific profile is then selected when signing a certificate later on (a quick way to inspect it is shown after this list).
- signing: the certificate can be used to sign other certificates; the generated aggregator-ca.pem certificate has CA=TRUE.
- server auth: a client can use this CA to verify the certificate presented by a server.
- client auth: a server can use this CA to verify the certificate presented by a client.
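The aggregator profile defined above is exactly what the -profile=aggregator flag selects in the cfssl gencert step later on. If jq happens to be installed (an assumption, it is not part of this guide), you can pull the profile back out of the config to double-check its usages and expiry:

# assumes jq is available; prints the profile that -profile=aggregator will select
$ jq '.signing.profiles.aggregator' aggregator-ca-config.json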
Create the CA certificate signing request
Create the aggregator-ca-csr.json file with the following content:
{ "CN": "aggregator", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Shanghai", "L": "Shanghai", "O": "k8s", "OU": "System" } ], "ca": { "expiry": "87600h" } }
Field descriptions:
- "CN": Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use it to check whether a website is legitimate.
- "O": Organization. kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to.
Generate the CA certificate and private key
$ cfssl gencert -initca aggregator-ca-csr.json | cfssljson -bare aggregator-ca
$ ls aggregator-ca*
aggregator-ca-config.json aggregator-ca.csr aggregator-ca-csr.json aggregator-ca-key.pem aggregator-ca.pem
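Before moving on, it is worth confirming that the generated CA looks right (CN=aggregator, CA=TRUE, 10-year validity). Either the cfssl-certinfo tool installed in step 1 or plain openssl can be used:

# dump the CA certificate as JSON
$ cfssl-certinfo -cert aggregator-ca.pem
# or inspect the subject, validity and CA flag with openssl
$ openssl x509 -noout -subject -dates -in aggregator-ca.pem
$ openssl x509 -noout -text -in aggregator-ca.pem | grep -A1 'Basic Constraints'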
3. Create the aggregator certificate
Create the aggregator certificate signing request file aggregator-csr.json:
{ "CN": "aggregator", "hosts": [ "127.0.0.1", "192.168.123.250", "192.168.123.248", "192.168.123.249", "10.254.0.1", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Shanghai", "L": "Shanghai", "O": "k8s", "OU": "System" } ] }
- If the hosts field is non-empty, it must list the IPs or domain names that are authorized to use the certificate. Because this certificate is later used by the etcd cluster and the kubernetes master cluster, the list above includes the etcd cluster and kubernetes master host IPs as well as the kubernetes service IP (normally the first IP of the service-cluster-ip-range configured on kube-apiserver, e.g. 10.254.0.1).
- The physical node IPs above can also be replaced with host names.
Generate the aggregator certificate and private key
$ cfssl gencert -ca=aggregator-ca.pem -ca-key=aggregator-ca-key.pem -config=aggregator-ca-config.json -profile=aggregator aggregator-csr.json | cfssljson -bare aggregator
$ ls aggregator*
aggregator.csr aggregator-csr.json aggregator-key.pem aggregator.pem
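Before distributing the files, a quick check that the CN is aggregator (it must match --requestheader-allowed-names in step 5) and that the SANs cover the hosts listed in aggregator-csr.json:

# the subject CN must be "aggregator"
$ openssl x509 -noout -subject -in aggregator.pem
# the SANs should match the hosts field of aggregator-csr.json
$ openssl x509 -noout -text -in aggregator.pem | grep -A1 'Subject Alternative Name'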
4. Distribute the certificates
Copy the generated certificates and private keys (the files ending in .pem) to the /etc/kubernetes/ssl directory on the Master node for later use.
cp *.pem /etc/kubernetes/ssl
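With more than one Master, the same files have to land on every node; a minimal sketch using scp, assuming the hypothetical host names master-1 and master-2 and that /etc/kubernetes/ssl already exists on each target:

# hypothetical host names; replace with your own masters
$ for host in master-1 master-2; do scp *.pem root@${host}:/etc/kubernetes/ssl/; done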
5. Enable the aggregation layer API
Add the following flags to kube-apiserver:
--requestheader-client-ca-file=/etc/kubernetes/ssl/aggregator-ca.pem
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/ssl/aggregator.pem
--proxy-client-key-file=/etc/kubernetes/ssl/aggregator-key.pem
Note: the CN field of the certificate created earlier must be identical to the value aggregator passed to --requestheader-allowed-names.
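How these flags reach the kube-apiserver command line depends on the deployment. A minimal sketch for a binary install managed by systemd, assuming a hypothetical EnvironmentFile such as /etc/kubernetes/apiserver whose KUBE_API_ARGS variable is appended to ExecStart in the unit file; adjust to whatever layout your cluster actually uses:

# hypothetical /etc/kubernetes/apiserver EnvironmentFile; the path and variable name are assumptions
KUBE_API_ARGS="--requestheader-client-ca-file=/etc/kubernetes/ssl/aggregator-ca.pem \
  --requestheader-allowed-names=aggregator \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/aggregator.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/aggregator-key.pem"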
Restart kube-apiserver:
$ systemctl daemon-reload
$ systemctl restart kube-apiserver
If kube-proxy is not running on the Master, kube-apiserver additionally needs the following flag:
--enable-aggregator-routing=true
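Once kube-apiserver is back up, one way to confirm the request-header settings were picked up is the extension-apiserver-authentication ConfigMap that kube-apiserver publishes in kube-system; its requestheader-* keys should mirror the flags configured above:

# requestheader-allowed-names, requestheader-client-ca-file, etc. should reflect the flags above
$ kubectl -n kube-system get configmap extension-apiserver-authentication -o yaml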
6. Deploy metrics-server
$ git clone https://github.com/kubernetes-incubator/metrics-server
$ cd metrics-server
$ cat deploy/1.8+/metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.2
        command:
        - /metrics-server
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
$ kubectl create -f deploy/1.8+/
Note: I modified the metrics-server startup command here and added the --kubelet-preferred-address-types=InternalIP and --kubelet-insecure-tls flags; without them metrics-server may fail to pull metrics from the kubelets. The exact error can be inspected with kubectl logs metrics-server-5687578d67-tx8m4 -n kube-system.
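Before running the checks in the next step, make sure the pod is running and that the v1beta1.metrics.k8s.io APIService created by the deploy/1.8+ manifests has registered with the aggregation layer:

# the metrics-server pod should reach Running
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
# the APIService should eventually report Available=True
$ kubectl get apiservice v1beta1.metrics.k8s.io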
7. Verify metrics-server
[root@k8s-10-21-17-56 1.8+]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
[root@k8s-10-21-17-56 1.8+]# kubectl top node
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-10-21-17-41   136m         13%    2131Mi          93%
k8s-10-21-17-42   167m         2%     8904Mi          28%
k8s-10-21-17-43   978m         13%    17733Mi         57%
k8s-10-21-17-56   707m         17%    16621Mi         51%
k8s-10-21-17-57   320m         8%     12478Mi         38%
k8s-10-21-17-58   442m         11%    13087Mi         40%
k8s-10-21-17-59   242m         8%     13838Mi         45%
[root@k8s-10-21-17-56 1.8+]# kubectl top pod
NAME                                                      CPU(cores)   MEMORY(bytes)
eager-alpaca-zookeeper-0                                  6m           780Mi
eager-alpaca-zookeeper-1                                  5m           755Mi
eager-alpaca-zookeeper-2                                  7m           793Mi
filled-scorpion-minio-96595c48-bfwrd                      1m           10Mi
filled-scorpion-redis-master-0                            5m           28Mi
filled-scorpion-spinnake-halyard-0                        1m           1365Mi
idolized-wallaby-nfs-client-provisioner-5dbcfc8c9-8kpwk   2m           11Mi
jaundiced-possum-gitlab-runner-64dcdccc4c-k5927           4m           7Mi
nginx-deployment-586f5f95f7-dvmw7                         0m           1Mi
nginx-deployment-586f5f95f7-hpw5n                         0m           2Mi
prometheus-operator-6c8d8456cd-ccfwx                      2m           24Mi
prometheus-sample-metrics-prom-0                          1m           30Mi
sample-metrics-app-5f67fcbc57-9ghxt                       1m           9Mi
sample-metrics-app-5f67fcbc57-t9pzn                       1m           9Mi