Installing Kubernetes on Ubuntu 18.04.4


Kubernetes deployment

1. Disable swap on all hosts in the cluster

sudo swapoff -a

sudo sed -i 's/.*swap.*/#&/' /etc/fstab

If swap is still mounted automatically after a reboot, run systemctl mask dev-sda3.swap (where dev-sdaX is the swap disk partition).
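To confirm that swap is really off (a quick check, not part of the original steps):

free -h             # the Swap line should read 0B
sudo swapon --show  # no output means no swap device is active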

2. Install Docker on all servers in the cluster

sudo apt-get update

sudo apt-get -y install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

sudo apt-get update

sudo apt-get install -y docker-ce docker-ce-cli containerd.io
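Optionally, verify the Docker installation and make sure the service starts on boot (a common sanity check, not in the original steps):

sudo systemctl enable docker
sudo docker run hello-world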

3. Install kubectl, kubelet and kubeadm on all cluster nodes

Add the Alibaba Cloud Kubernetes apt source:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vim /etc/apt/sources.list

Add the following line to the file:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
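Alternatively, the repository line can be added without an interactive editor, for example into a separate list file (equivalent result, assuming the standard sources.list.d layout):

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list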

sudo apt-get update

sudo apt-get -y install kubectl kubelet kubeadm

sudo apt-mark hold kubelet kubeadm kubectl

Change the Docker cgroup driver to systemd:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

sudo systemctl restart docker.service
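To confirm that the new cgroup driver is in effect (a quick check, not part of the original steps):

sudo docker info | grep -i "cgroup driver"    # should print: Cgroup Driver: systemd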

4. Initialize the master node

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The end of the kubeadm init output includes instructions like the following:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.130:6443 --token 2tun7p.y9ftxhjvwt69djyx \
--discovery-token-ca-cert-hash sha256:63159dd2b07b995894790ec0e7293977b698eccfb9c6b21f1b3a8cb007682e5a

After configuring kubectl as above, check that the master node is registered:

kubectl get nodes

For easier use, enable auto-completion for the kubectl command:

echo "source <(kubectl completion bash)" >> ~/.bashrc

source ~/.bashrc

5. Install the Flannel network on the master node

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
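Before joining worker nodes, you can watch the Flannel pods come up and wait for the master to become Ready (a verification step, not in the original article):

kubectl get pods -A -o wide | grep flannel
kubectl get nodes    # the master should report Ready once the network is running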

6. Join worker nodes

kubeadm join 192.168.0.130:6443 --token 2tun7p.y9ftxhjvwt69djyx \
--discovery-token-ca-cert-hash sha256:63159dd2b07b995894790ec0e7293977b698eccfb9c6b21f1b3a8cb007682e5a

Use journalctl -xeu kubelet to view the kubelet logs.

By default the token is valid for 24 hours. If the token has expired, a new one can be generated with:

# kubeadm token create
If the --discovery-token-ca-cert-hash value is also unavailable, it can be generated with:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
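Alternatively, a single command can create a fresh token and print the complete join command, including the CA cert hash (supported by recent kubeadm releases):

kubeadm token create --print-join-command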

kubeadm reset reverts the changes made by 'kubeadm init' or 'kubeadm join'. The .kube directory in the home directory also needs to be deleted to return to the pre-initialization state.

View the nodes from the master:

kubectl get nodes

Use a label to assign a role to a node:

kubectl label node k8s-node01 node-role.kubernetes.io/node=node
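To review the labels, or remove one that was added by mistake (a small illustration, not from the original article):

kubectl get nodes --show-labels
kubectl label node k8s-node01 node-role.kubernetes.io/node-    # a trailing "-" removes the label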

Installing and deploying the Dashboard

1. Download and modify the Dashboard manifest on the master

cd ~

mkdir Dashboard

cd Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml

vim recommended.yaml

-----

# Add a directly accessible port

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added
  selector:
    k8s-app: kubernetes-dashboard

---

Because many browsers cannot use the automatically generated certificate, we create our own. Comment out the kubernetes-dashboard-certs Secret declaration:

#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

2. Create the certificates

mkdir dashboard-certs

cd dashboard-certs

Create the namespace:

kubectl create namespace kubernetes-dashboard

Create the key file:

openssl genrsa -out dashboard.key 2048

Create the certificate signing request:

openssl rand -writerand /home/tiny/.rnd

openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

Create the self-signed certificate:

openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Create the kubernetes-dashboard-certs Secret:

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

3. Install the Dashboard

cd ~/Dashboard

kubectl create -f  recommended.yaml

Check the result:

kubectl get pods -A  -o wide

kubectl get service -n kubernetes-dashboard  -o wide

4. Create a Dashboard administrator

Create the service account:

vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

 kubectl create -f dashboard-admin.yaml

Grant permissions to the user:

vim dashboard-admin-bind-cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard

 kubectl create -f dashboard-admin-bind-cluster-role.yaml

View and copy the user's token:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

Open https://192.168.0.130:30000 in a browser.

Choose Token login and enter the token obtained above to log in.

5. Install metrics-server on the master

cd ~

mkdir metrics-server
cd metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

vim components.yaml

Change the image:

image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6

Add the startup arguments:

- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls

 kubectl create -f components.yaml

After installation completes, CPU and memory usage are shown in the Dashboard UI.
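The same metrics can also be checked from the command line once metrics-server is up (a quick verification, not in the original steps):

kubectl top nodes
kubectl top pods --all-namespaces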

6. Change the Dashboard login token expiration time (the default is 15 minutes)

Log in to the Dashboard and modify the kubernetes-dashboard configuration.
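One way to do this (a sketch, assuming the standard Dashboard deployment) is to add a --token-ttl argument to the kubernetes-dashboard container, for example 12 hours:

kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard

Then, under the container's args section, add a line such as:

- --token-ttl=43200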

 

Deploying Helm

1. Download the Helm binary package on the master

wget https://get.helm.sh/helm-v2.16.7-linux-amd64.tar.gz

tar -zxvf helm-v2.16.7-linux-amd64.tar.gz 

sudo mv linux-amd64/helm /usr/local/bin/helm

2. Check the Helm version

helm version

 

3. Enable Helm command completion

helm completion bash > .helmrc && echo "source .helmrc" >> .bashrc

4. Install the Tiller server

helm init

Checking the Tiller pod shows that the image failed to download.

Use kubectl describe pod/tiller-deploy-74548b7c5f-br2x7 -n kube-system to see the details.

On the node, search for and pull a mirror of the image (here maryyang/kubernetes-helm-tiller), then retag it:

docker tag maryyang/kubernetes-helm-tiller gcr.io/kubernetes-helm/tiller:v2.16.7

The Tiller pod is now running normally.

Import this image onto node01 as well.

Save the image on node02:

docker save -o tiller.v2.16.7.tar.gz gcr.io/kubernetes-helm/tiller

Transfer the archive to node01, then load it there:
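For example, copying the archive with scp (the user name and destination path are placeholders):

scp tiller.v2.16.7.tar.gz user@node01:~/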

docker load < tiller.v2.16.7.tar.gz

Install the MySQL chart with Helm.
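For example, with Helm v2 and the default stable repository configured by helm init (chart name assumed from the stable repo):

helm install stable/mysql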

 

The error occurs because the Tiller server has insufficient permissions.

Run the following commands to grant the permissions:

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

 

Install again.

 

 

helm list shows the deployed releases; helm delete removes a release.
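For example (the release name below is a placeholder; in Helm v2, --purge removes the release completely):

helm list
helm delete --purge my-release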

 

 

 

Kubernetes practice

1. Deploy an application

kubectl create deployment httpd-app --image=httpd

2. Scale the application

kubectl scale deployment/httpd-app --replicas=2
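To verify the scale-out and expose the application for a quick test (an extra exercise, not in the original; kubectl create deployment labels the pods app=httpd-app):

kubectl get deployment httpd-app
kubectl get pods -l app=httpd-app -o wide
kubectl expose deployment httpd-app --port=80 --type=NodePort
kubectl get service httpd-app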

 

