
Preface
Environment used here: Ubuntu 18.04. All of the following operations are performed as the root user.
The core steps of a kubeadm-based Kubernetes install are:
- Install Docker
- Install kubeadm
- Create the master node with kubeadm init
- Install a network plugin on the master
- Add worker nodes with kubeadm join
1. Install Docker on all nodes
```
export LC_ALL=C
# Remove any old Docker packages
apt-get -y autoremove docker docker-engine docker.io docker-ce
apt-get update -y
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update -y
# Configure a registry mirror to speed up image pulls
mkdir -p /etc/docker
echo '{"registry-mirrors":["https://pee6w651.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
# Install Docker
apt-get install docker.io -y
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
```
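Optionally, confirm that Docker is up and that the mirror from daemon.json was picked up (the exact `docker info` layout varies by Docker version):

```shell
# Should print "active" if the daemon started cleanly
systemctl is-active docker
# The configured mirror should be listed under "Registry Mirrors"
docker info | grep -A1 "Registry Mirrors"
```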
2. Install kubeadm on all nodes
```
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Configure the Aliyun Kubernetes apt source
tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# Pin the packages so they are not auto-upgraded
apt-mark hold kubelet kubeadm kubectl
# Start kubelet and enable it at boot
systemctl enable kubelet && systemctl start kubelet
```
Installing kubeadm also pulls in kubelet, kubectl, and kubernetes-cni, so all of them end up installed together.
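As a quick sanity check that everything landed, the versions can be printed (the exact version strings depend on the release installed):

```shell
# All three binaries should be on PATH after the install
kubeadm version -o short
kubectl version --client --short
kubelet --version
```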
3. Install the master node
The install command is:
```
kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address 172.18.0.1 --pod-network-cidr=10.244.0.0/16
```
Notes:
- --kubernetes-version: pins the Kubernetes version to install.
- --apiserver-advertise-address: which of the master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface with the default gateway.
- --pod-network-cidr: the Pod network range. The value depends on the network add-on in use; this article uses the classic flannel scheme.
Running this here failed a preflight check:
```
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
```
Following the hint, we add --ignore-preflight-errors=all and run again:
```
kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address 172.18.0.1 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
```
Readers without access to k8s.gcr.io may find that the images cannot be pulled. This can be worked around by pulling them from a mirror:
```
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
```
After pulling, retag the images to the names kubeadm expects:
```
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
```
On success, the output includes a `kubeadm join --token ...` command. This is the command we will use later to add worker nodes, so record it:
```
kubeadm join 172.18.0.1:6443 --token 9fvv8z.7vmiehcohwlll9tj --discovery-token-ca-cert-hash sha256:dc583faabe8bc130137507cca893fbead73a8879e5540545f2122f9a0839d6dc
```
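If the join command was not saved, or the token has expired (bootstrap tokens are valid for 24 hours by default), a fresh one can be printed on the master at any time:

```shell
# Create a new bootstrap token and print the complete join command
kubeadm token create --print-join-command
```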
The output also shows the first-use configuration commands. They copy the cluster's admin credentials into the user's .kube directory, which kubectl uses by default to authenticate against the cluster:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
After running these three commands, kubectl works normally. Use kubectl get nodes to check the node:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
vm-0-8-ubuntu   NotReady   master   6m    v1.11.3
```
The master shows NotReady because we have not installed a network plugin yet, which kubectl describe node confirms:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl describe node vm-0-8-ubuntu
...
ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
```
The network plugin is not ready.
Next, look at kube-system, the namespace Kubernetes reserves for its own components:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-vb9pv                0/1     Pending   0          7m
coredns-78fcdf6894-zlkfp                0/1     Pending   0          7m
etcd-vm-0-8-ubuntu                      1/1     Running   0          6m
kube-apiserver-vm-0-8-ubuntu            1/1     Running   1          7m
kube-controller-manager-vm-0-8-ubuntu   1/1     Running   2          8m
kube-proxy-jcfpt                        1/1     Running   0          7m
kube-scheduler-vm-0-8-ubuntu            1/1     Running   0
```
The coredns Pods are stuck in Pending.
4. Deploy the network plugin
Kubernetes supports multiple network plugins, such as flannel, weave, and calico.
We use flannel here, which is why --pod-network-cidr had to be set: 10.244.0.0/16 is the default network configured inside kube-flannel.yml. To use a different range, simply set kubeadm init's --pod-network-cidr and the network in kube-flannel.yml to the same value.
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
```
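For reference, the Pod network is defined in the `net-conf.json` block of the flannel ConfigMap in kube-flannel.yml; trimmed down, it looks roughly like this (values assumed from the manifest's defaults):

```
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```

This `Network` value is what must match kubeadm init's --pod-network-cidr.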
Once it is installed, check the node status again:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
vm-0-8-ubuntu   Ready    master   9m    v1.11.3
```
The master node has become Ready. Check the kube-system Pods again:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-fndmg                1/1     Running   0          2m
coredns-78fcdf6894-nm282                1/1     Running   0          2m
etcd-vm-0-8-ubuntu                      1/1     Running   0          2m
kube-apiserver-vm-0-8-ubuntu            1/1     Running   0          1m
kube-controller-manager-vm-0-8-ubuntu   1/1     Running   0          1m
kube-flannel-ds-85wxw                   1/1     Running   0          22s
kube-proxy-95v4f                        1/1     Running   0          2m
kube-scheduler-vm-0-8-ubuntu            1/1     Running   0
```
coredns is healthy now as well.
By default the master node carries a "taint", via Kubernetes' Taint/Toleration mechanism:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl describe node vm-0-8-ubuntu
...
Taints: node-role.kubernetes.io/master:NoSchedule
```
If all we need is a single-node cluster, this taint can be removed:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl taint nodes --all node-role.kubernetes.io/master-
node/vm-0-8-ubuntu untainted
```
5. Add the worker nodes
Master and worker nodes both run the kubelet; the only difference is that the master goes through kubeadm init, which starts the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
With Docker and kubeadm already installed, all a worker needs is the kubeadm join command:
```
kubeadm join 172.18.0.1:6443 --token 9fvv8z.7vmiehcohwlll9tj --discovery-token-ca-cert-hash sha256:dc583faabe8bc130137507cca893fbead73a8879e5540545f2122f9a0839d6dc
```
6. Run a demo
Create a demo.yaml using the nginx:alpine image:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
```
Apply it:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl apply -f demo.yaml
deployment.apps/demo-deployment created
```
Check the Pods:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get po
NAME                               READY   STATUS    RESTARTS   AGE
demo-deployment-555958bc44-drwvh   1/1     Running   0          26s
demo-deployment-555958bc44-llhr9   1/1     Running   0          26s
```
The Pods are running normally; there are two of them because the Deployment sets replicas: 2. We can use kubectl exec -it to open a shell inside one of them:
```
root@VM-0-8-ubuntu:/home/ubuntu# kubectl exec -it demo-deployment-555958bc44-drwvh sh
/ # ls /
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr
```
OK, a simple demo is up and running.
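As a final check, we can send traffic to the Deployment from the host, e.g. via a port-forward (8080 is an arbitrary local port chosen for this sketch):

```shell
# Forward a local port to the deployment in the background
kubectl port-forward deployment/demo-deployment 8080:80 >/dev/null 2>&1 &
PF_PID=$!
sleep 2
# Fetch the nginx welcome page through the forwarded port
curl -s http://127.0.0.1:8080 | head -n 4
kill "$PF_PID"
```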