Kubernetes Study Notes (1): Installation


I. Offline Installation

https://cloud.tencent.com/developer/article/1445946

https://github.com/liul85/sealos    (this repo provides the offline kube1.16.0.tar.gz package)

https://sealyun.oss-cn-beijing.aliyuncs.com/37374d999dbadb788ef0461844a70151-1.16.0/kube1.16.0.tar.gz 

https://sealyun.oss-cn-beijing.aliyuncs.com/7b6af025d4884fdd5cd51a674994359c-1.18.0/kube1.18.0.tar.gz
https://sealyun.oss-cn-beijing.aliyuncs.com/a4f6fa2b1721bc2bf6fe3172b72497f2-1.17.12/kube1.17.12.tar.gz
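
With one of the offline packages above, a typical sealos v3 invocation looks roughly like the sketch below. The IPs, password, and package path are placeholders, not values from this deployment; check the sealos README for the exact flags of your version.

sealos init --passwd 'YOUR_SSH_PASSWORD' \
    --master 192.168.0.2 \
    --node 192.168.0.3 \
    --pkg-url /root/kube1.16.0.tar.gz \
    --version v1.16.0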

Installing with sealos produced the following error:

[root@host-10-14-69-125 kubernetes]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
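
The root cause was not identified at the time. A few diagnostics worth running in this situation (a sketch, not a verified fix):

# Is kubelet itself healthy?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50
# Did the kube-apiserver container come up?
docker ps -a | grep kube-apiserver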

 To be completed...

II. Online Installation

1. Prerequisites

   1) Operating system: CentOS 7.6; see "Linux Study Notes (1): CentOS 7.6 Installation".

   2) The servers on which Kubernetes will be installed must have Internet access.

   3) Prepare two servers: one as the k8s master node, the other as the k8s worker node.

2. Preparation

(1) Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

(2) Set the hostname and map it in /etc/hosts

hostnamectl set-hostname k8s-2   # k8s-2 is the hostname of the master server
echo "127.0.0.1 k8s-2">>/etc/hosts

(3) Synchronize the system clock (using Aliyun's NTP server)

yum install -y ntp
ntpdate ntp1.aliyun.com
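
ntpdate only performs a one-shot synchronization. Optionally, the ntpd service installed by the ntp package above can be enabled to keep the clock in sync from then on:

systemctl enable ntpd
systemctl start ntpd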

(4) Install common utilities

yum install -y wget

3. Install Docker

# Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce
# Start Docker
systemctl start docker
# Enable Docker on boot
systemctl enable docker
# Check the Docker service status
systemctl status docker

4. Configure Docker

mkdir -p /etc/docker
# Write daemon.json: use the Aliyun registry mirror, and switch Docker's
# cgroup driver from cgroupfs to systemd, which is the driver kubelet uses.
# (JSON does not allow comments, so the notes must stay outside the file.)
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reload systemd and restart Docker so daemon.json takes effect
systemctl daemon-reload
systemctl restart docker
# Inspect the Docker installation
docker info
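
To confirm the change took effect without reading the full docker info dump, the cgroup driver can be queried directly (the --format template assumes the field is named CgroupDriver, as in current Docker versions):

docker info --format '{{.CgroupDriver}}'   # should print: systemd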

5. Install Kubernetes

# Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# List the available versions (this guide installs 1.17.12-0)
yum --showduplicates list kubelet | expand
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.17.12-0 kubeadm-1.17.12-0 kubectl-1.17.12-0
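
A quick sanity check that all three tools landed at the pinned version (these flags exist in releases of this era; kubectl dropped --short much later):

kubeadm version -o short
kubectl version --client --short
kubelet --version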

6. Configure the system for Kubernetes

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Kernel settings: make bridged traffic visible to iptables and enable IP forwarding
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the br_netfilter module and apply k8s.conf
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

# Disable swap: edit /etc/fstab and comment out the swap partition
vi /etc/fstab
# /dev/mapper/centos-swap swap     swap    defaults        0 0
# After saving and quitting vi, turn swap off for the running system
swapoff -a
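
Equivalently, the fstab edit can be scripted instead of using vi; the sed below is a sketch that comments out any line containing " swap " (inspect /etc/fstab afterwards to be sure it hit only the swap entry):

sed -i '/ swap / s/^#*/#/' /etc/fstab
swapoff -a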

# Start kubelet on boot
systemctl enable kubelet

7. Initialize the k8s master node

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers

 --pod-network-cidr: a prerequisite for installing flannel later; the value must be 10.244.0.0/16.

 --image-repository: the registry to pull control-plane images from (Aliyun's mirror here).

Running the command above produces the following log:

[root@k8s-2 opt]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
W1013 10:38:56.543641   19539 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1013 10:38:56.543871   19539 version.go:102] falling back to the local client version: v1.17.12
W1013 10:38:56.544488   19539 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1013 10:38:56.544515   19539 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.12
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.149.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-2 localhost] and IPs [192.168.149.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-2 localhost] and IPs [192.168.149.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1013 10:42:48.939526   19539 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1013 10:42:48.941281   19539 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.010651 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9od4xd.15l09jrrxa7qo3ny
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.149.133:6443 --token 9od4xd.15l09jrrxa7qo3ny \
    --discovery-token-ca-cert-hash sha256:fb23ab81f7b95b36595dfb44ee7aab865aac7671a416b57f9cb2461f45823ea1

Run the commands from the highlighted section of the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
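
Alternatively, when working as root, kubeadm also documents pointing KUBECONFIG at the admin config directly instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf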

8. Deploy a pod network to the cluster (flannel is chosen here)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 If the apply fails because raw.githubusercontent.com cannot be reached, write its IP directly into /etc/hosts:

echo "199.232.28.133 raw.githubusercontent.com" >> /etc/hosts

PS: The real IP of raw.githubusercontent.com can be looked up at https://site.ip138.com/raw.githubusercontent.com/; a US address is recommended (the Japan and Hong Kong addresses did not work in testing).

Run the kubectl apply command above again; this time it succeeds.

Check the cluster status:

kubectl cluster-info

9. Verify the installation

# List all nodes
kubectl get nodes
# List pods in all namespaces
kubectl get pods --all-namespaces
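
The node only reports Ready once the flannel and CoreDNS pods are Running, which can take a minute or two. A convenient way to watch for that (plain kubectl, nothing specific to this setup):

# Watch node state until it flips to Ready (Ctrl-C to stop)
kubectl get nodes -w
# Check that kube-system pods, including flannel and CoreDNS, are Running
kubectl get pods -n kube-system -o wide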

 

If anything goes wrong during initialization, reset with the following commands:

kubeadm reset
rm -rf /var/lib/cni/
rm -f $HOME/.kube/config
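
kubeadm reset itself warns that it does not clean up iptables rules; if a subsequent re-init misbehaves, they can be flushed manually (careful: this clears all rules, including any unrelated firewall configuration):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X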

10. Initialize the k8s worker node

 This guide mainly covers deploying a single master node; to add more workers, see the articles in the references below.

(1) Install Kubernetes on the worker node following steps 5 and 6;

(2) Copy /etc/kubernetes/admin.conf from the master node into the same directory on the worker node;

(3) On the worker node, run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
(4) Perform step 8;

(5) Verify as in step 9.
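
Note: the more common way to attach a worker is the kubeadm join command that kubeadm init printed at the end of step 7 (run as root on the worker; the token and hash come from your own init output):

kubeadm join 192.168.149.133:6443 --token 9od4xd.15l09jrrxa7qo3ny \
    --discovery-token-ca-cert-hash sha256:fb23ab81f7b95b36595dfb44ee7aab865aac7671a416b57f9cb2461f45823ea1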

11. Problems encountered during deployment

(1) After Docker was installed, its cgroup driver was not changed, which caused kubelet to fail to start while deploying Kubernetes (check the service state with systemctl status kubelet).

 docker info showed that the default driver was cgroupfs.

 So be sure to carry out the "Configure Docker" step above; otherwise you will have to come back and change Docker's driver:

Add the following configuration to /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker

After restarting Docker, check docker info again; the driver should now be systemd.

(2) If kubectl get nodes shows a worker node in NotReady state, look for the cause with tail -f /var/log/messages.
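
Two more places worth checking in that situation (standard commands, nothing specific to this setup):

# kubelet's own log often names the failing component
journalctl -u kubelet -f
# the node's Conditions and Events usually state why it is NotReady
kubectl describe node <worker-node-name>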

 

References:

 https://www.cnblogs.com/bluersw/p/11713468.html

 https://www.jianshu.com/p/832bcd89bc07

III. Uninstalling and Cleaning Up Kubernetes

# Tear down the node, then remove the ipip tunnel module and verify
kubeadm reset -f
modprobe -r ipip
lsmod
# Remove kubeconfig, cluster config, systemd units, binaries, CNI config, and etcd data
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
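
If Docker itself is staying installed, leftover containers and images can optionally be cleaned up too (destructive: this removes all containers and all unused images):

docker rm -f $(docker ps -aq) 2>/dev/null
docker system prune -af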

IV. Installing Kubernetes on CentOS 8

The installation steps are basically the same as above; the differences are:

1. Manually edit the docker-ce repo

 vi /etc/yum.repos.d/docker-ce.repo

Change the OS version in the baseurl of each section from CentOS 7 to 8.
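
The same edit can be scripted. This sed sketch assumes the downloaded repo file uses either a literal /centos/7/ path or the $releasever variable in its baseurl lines:

sed -i -e 's|/centos/7/|/centos/8/|g' -e 's|\$releasever|8|g' /etc/yum.repos.d/docker-ce.repo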

The modified file looks like this:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

2. After adding the yum repo and before installing Docker, run yum makecache instead of yum makecache fast (the fast argument is not supported by dnf on CentOS 8).

 

