k8s Installation


https://www.kubernetes.org.cn/doc-16

https://blog.csdn.net/zjysource/article/details/52086835

https://blog.csdn.net/wucong60/article/details/81911859

https://www.kubernetes.org.cn/tags/kubeadm

https://www.katacoda.com/courses/kubernetes

The Kubernetes package provides several services:

  kube-apiserver,

  kube-scheduler,

  kube-controller-manager,

  kubelet,

  kube-proxy.

  These services are managed by systemd, and their configuration is kept in one place: /etc/kubernetes.

  We will run these services on different hosts. The first host, centosmaster, will be the master of the Kubernetes cluster. It will run the kube-apiserver, kube-controller-manager, and kube-scheduler services, and will also run etcd. The remaining host, fed-minion, will be a worker node running kubelet, kube-proxy, and docker.

systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd

Configure the master

yum -y install etcd docker kubernetes

Configure etcd by editing /etc/etcd/etcd.conf:

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  # etcd data directory
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"  # changed: etcd listens on port 2379 on all network interfaces
ETCD_NAME="default"  # etcd member name
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"  # address advertised to clients
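Once etcd has been restarted (the service start loop appears below), it is worth checking that it actually answers on 2379. A minimal sketch, assuming etcdctl and curl are installed and etcd is already running:

```shell
# Check cluster health through the client port etcd now listens on.
etcdctl --endpoints=http://127.0.0.1:2379 cluster-health

# The HTTP API answers on the same port; /version is a cheap liveness probe.
curl -s http://127.0.0.1:2379/version
```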

  

Configure Kubernetes on the master node by editing /etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.0.201:8080" # change to the master's address; this tells the controller-manager, scheduler, and proxy processes where the apiserver is

  

Edit /etc/kubernetes/apiserver.

These settings make the apiserver process listen on port 8080 on all network interfaces, and tell it the address of the etcd service.

KUBE_API_ADDRESS="--address=0.0.0.0" # changed
KUBE_API_PORT="--port=8080" # added
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
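After the services are started (see the loop below), the apiserver settings above can be verified from any machine that can reach the master. A hedged check, assuming the master address 192.168.0.201 used in this document:

```shell
# With these settings the apiserver answers unauthenticated HTTP on 8080.
curl -s http://192.168.0.201:8080/version

# Listing the API root confirms the server is serving, not just listening.
curl -s http://192.168.0.201:8080/api
```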

The SELinux support in this machine's kernel does not work with the overlay2 graph driver.
There are two fixes: boot a kernel that supports it, or disable SELinux in the docker configuration with --selinux-enabled=false.

Change "--selinux-enabled" to "--selinux-enabled=false":

[root@registry lib]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
#OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false --registry-mirror=https://fzhifedh.mirror.aliyuncs.com --insecure-registry=registry.sese.com'
# changed "--selinux-enabled" here to "--selinux-enabled=false"

if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Start the services:

for SERVICES  in etcd docker kube-apiserver kube-controller-manager kube-scheduler;  do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Now we can check with the kubectl get nodes command. No node has joined the cluster yet, so the output is empty:

# kubectl get nodes
NAME              STATUS    AGE

 

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
# flannel on each node reads /atomic.io/network/config from the master's etcd to build its network table
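To confirm the key was written where flannel expects it, read it back:

```shell
# flannel on each node will read exactly this key from etcd.
etcdctl get /atomic.io/network/config
```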

  

Configure the nodes

yum -y install flannel docker kubernetes

/etc/sysconfig/flanneld   # configure flannel

FLANNEL_ETCD_ENDPOINTS="http://192.168.0.201:2379" # change to the master's address; tells flannel where the etcd service is and where the network config lives in etcd
FLANNEL_ETCD_PREFIX="/atomic.io/network"
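After flanneld starts on the node (service loop below), it leases a subnet from the 172.17.0.0/16 range configured in etcd and records it locally. A quick way to see what it got, assuming the stock flannel packaging paths:

```shell
# The subnet this node leased; docker's systemd unit sources this file.
cat /run/flannel/subnet.env

# The flannel tunnel device created for the overlay network.
ip addr show flannel0
```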

/etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.0.201:8080"  # change to the master's address; tells the controller-manager, scheduler, and proxy processes where the apiserver is

/etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.0.200" # this node's IP
KUBELET_API_SERVER="--api-servers=http://192.168.0.201:8080" # the master's IP
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

On the node, start the kube-proxy, kubelet, docker, and flanneld services and check their status:

for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

On the master node, kubectl get nodes now shows the node that joined:

[root@localhost ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.0.200   Ready     1h

 

Setting up a private registry:

vim /etc/pki/tls/openssl.cnf

[ v3_ca ]
subjectAltName = IP:192.168.169.125  # add this line

cd /etc/pki/tls

[root@localhost tls]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt

Generating a 4096 bit RSA private key
..................................................................................................................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:86
State or Province Name (full name) []:jiangsu
Locality Name (eg, city) [Default City]:xuzhou
Organization Name (eg, company) [Default Company Ltd]:nihao
Organizational Unit Name (eg, section) []:jishu
Common Name (eg, your name or your server's hostname) []:hostname
Email Address []:123@qq.com

After the certificate is created, two files appear in the certs directory: the certificate domain.crt and the private key domain.key.
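The same certificate can also be produced non-interactively, which is handy for scripting and makes it easy to confirm the subjectAltName from openssl.cnf actually landed in the certificate. A sketch with hypothetical subject values (requires OpenSSL 1.1.1+ for -addext; older releases need the openssl.cnf edit shown above instead):

```shell
# Generate key + self-signed cert in one shot, no prompts.
mkdir -p /tmp/regcerts
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout /tmp/regcerts/domain.key -x509 -days 365 \
  -out /tmp/regcerts/domain.crt \
  -subj "/C=86/ST=jiangsu/L=xuzhou/O=nihao/OU=jishu/CN=registry.sese.com" \
  -addext "subjectAltName=IP:192.168.169.125"

# Verify the SAN made it into the certificate.
openssl x509 -in /tmp/regcerts/domain.crt -noout -text | grep "IP Address"
```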

Install docker on 192.168.169.125:

yum -y install docker

Copy the domain.crt file generated above into the /etc/docker/certs.d/192.168.0.205:5000 directory (the registry address), then restart the docker daemon:

mkdir -p /etc/docker/certs.d/192.168.0.205:5000
cp certs/domain.crt /etc/docker/certs.d/192.168.0.205:5000/ca.crt
systemctl restart docker

  

[root@localhost tls]# docker run -d -p 5000:5000 --restart=always --name registry   -v `pwd`/certs:/certs   -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt   -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key   registry:2

 

Unable to find image 'registry:2' locally
Trying to pull repository docker.io/library/registry ... 
2: Pulling from docker.io/library/registry
c87736221ed0: Pull complete 
1cc8e0bb44df: Pull complete 
54d33bcb37f5: Pull complete 
e8afc091c171: Pull complete 
b4541f6d3db6: Pull complete 
Digest: sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5
Status: Downloaded newer image for docker.io/registry:2
/usr/bin/docker-current: Error response from daemon: Invalid container name (registry  ), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed.
See '/usr/bin/docker-current run --help'.

This error is caused by stray whitespace after the container name (note the "registry  " in the message); remove the extra spaces in the docker run command and retry.

Finally, copy the domain.crt file into the /etc/docker/certs.d/192.168.169.125:5000 directory on every node in the Kubernetes cluster and restart docker on each of them, for example on node 192.168.169.121:

mkdir -p /etc/docker/certs.d/192.168.169.125:5000
scp root@192.168.0.205:~/certs/domain.crt /etc/docker/certs.d/192.168.0.205:5000/ca.crt
systemctl restart docker
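With the certificate distributed, the registry can be smoke-tested end to end. A hedged sketch, assuming some small image (busybox here) is already present locally and the registry runs at 192.168.0.205:5000:

```shell
# Tag a local image into the private registry's namespace and push it.
docker tag busybox 192.168.0.205:5000/busybox-test
docker push 192.168.0.205:5000/busybox-test

# Ask the registry v2 API what repositories it now holds.
curl -s --cacert /etc/docker/certs.d/192.168.0.205:5000/ca.crt \
  https://192.168.0.205:5000/v2/_catalog
```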

 

Basic container commands:

1. First, list all containers
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                        PORTS               NAMES
e3274a72e8d6        tomcat              "catalina.sh run"   2 weeks ago         Exited (130) 19 minutes ago                       tomcat8080
We can see the container named "tomcat8080", and it is in a stopped (Exited) state.

Note: "docker ps" lists running containers; "docker ps -a" lists all containers, including stopped ones.

2. Remove the "tomcat8080" container
# docker rm e3274a72e8d6
e3274a72e8d6

3. Then create a new container
# docker run --name tomcat8080 -d -p 8080:8080 tomcat
The new container is created successfully and is running:
# docker ps -a
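The remove-then-recreate cycle above can be shortened: containers can be addressed by name as well as by ID, and docker rm -f stops and removes in one step:

```shell
# Force-remove by name (stops it first if running), then recreate.
docker rm -f tomcat8080
docker run --name tomcat8080 -d -p 8080:8080 tomcat
```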

  

Setting up the Kubernetes Web UI

This section sets up the Kubernetes Web UI (kubernetes-dashboard) to briefly demonstrate how to use the private Docker registry.

Since my Kubernetes cluster cannot pull the kubernetes-dashboard image directly from gcr.io, I downloaded the image file beforehand and loaded it with the docker load command:

# docker load < kubernetes-dashboard-amd64_v1.1.0.tar.gz
# docker images
REPOSITORY                                        TAG                 IMAGE ID            CREATED             SIZE
registry                                          2                   c6c14b3960bd        3 days ago          33.28 MB
ubuntu                                            latest              42118e3df429        9 days ago          124.8 MB
hello-world                                       latest              c54a2cc56cbb        4 weeks ago         1.848 kB
172.28.80.11:5000/kubernetes-dashboard-amd64      v1.1.0              20b7531358be        5 weeks ago         58.52 MB
registry                                          2                   8ff6a4aae657        7 weeks ago         171.5 MB


Tag the loaded kubernetes-dashboard image for the private registry and push it there:

# docker tag 20b7531358be 192.168.169.125:5000/kubernetes-dashboard-amd64
# docker push 192.168.169.125:5000/kubernetes-dashboard-amd64
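The registry's v2 API can confirm the push succeeded before the cluster tries to pull. A quick check, assuming the ca.crt placed under /etc/docker/certs.d earlier:

```shell
# List the tags the registry now holds for the dashboard image.
curl -s --cacert /etc/docker/certs.d/192.168.169.125:5000/ca.crt \
  https://192.168.169.125:5000/v2/kubernetes-dashboard-amd64/tags/list
```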


I fetched the kubernetes-dashboard configuration file from the Kubernetes project at https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml and edited it as follows:

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.169.125:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.169.120:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Pay particular attention to two things: 1) the image the Pods will pull is 192.168.169.125:5000/kubernetes-dashboard-amd64 from the private Docker registry; 2) the apiserver-host argument is 192.168.169.120:8080, the apiserver address on the Kubernetes master node.
After editing kubernetes-dashboard.yaml, save it on the Kubernetes master node (192.168.169.120) and create kubernetes-dashboard there with the kubectl create command:

# kubectl create -f kubernetes-dashboard.yaml


Once created, inspect the details of the Pods and the Service:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       nginx                                   1/1       Running   0          3h
kube-system   kubernetes-dashboard-4164430742-lqhcg   1/1       Running   0          2h


# kubectl describe pods/kubernetes-dashboard-4164430742-lqhcg --namespace="kube-system"
Name:        kubernetes-dashboard-4164430742-lqhcg
Namespace:    kube-system
Node:        192.168.169.124/192.168.169.124
Start Time:    Mon, 01 Aug 2016 16:12:02 +0800
Labels:        app=kubernetes-dashboard,pod-template-hash=4164430742
Status:        Running
IP:        172.17.17.3
Controllers:    ReplicaSet/kubernetes-dashboard-4164430742
Containers:
  kubernetes-dashboard:
    Container ID:    docker://40ab377c5b8a333487f251547e5de51af63570c31f9ba05fe3030a02cbb3660c
    Image:        192.168.169.125:5000/kubernetes-dashboard-amd64
    Image ID:        docker://sha256:20b7531358be693a34eafdedee2954f381a95db469457667afd4ceeb7146cd1f
    Port:        9090/TCP
    Args:
      --apiserver-host=192.168.169.120:8080
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Running
      Started:        Mon, 01 Aug 2016 16:12:03 +0800
    Ready:        True
    Restart Count:    0
    Liveness:        http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type        Status
  Ready     True
No volumes.
No events.


# kubectl describe service/kubernetes-dashboard --namespace="kube-system"
Name:            kubernetes-dashboard
Namespace:        kube-system
Labels:            app=kubernetes-dashboard
Selector:        app=kubernetes-dashboard
Type:            NodePort
IP:            10.254.213.209
Port:            <unset>    80/TCP
NodePort:        <unset>    31482/TCP
Endpoints:        172.17.17.3:9090
Session Affinity:    None
No events.

7. Deploying an nginx service

7.1 Create the pod: nginx-pod.yaml

kubectl create -f nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

  

7.2 Check the pod's status

[root@localhost ~]# kubectl get pods
NAME        READY     STATUS              RESTARTS   AGE
nginx-pod   0/1       ContainerCreating   0          2h

Try again after about ten minutes:

NAME        READY     STATUS    RESTARTS   AGE
nginx-pod   1/1       Running   0          13m

Note: this step often fails due to network problems. Try pulling the image manually with docker before creating the pod with kubectl; if that still fails, delete the pod and create it again; failing that, try rebooting the machine. If it still does not work, the configuration is wrong.

7.3 Create the ReplicationController: nginx-rc.yaml

kubectl create -f nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 1
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80

7.4 Check the ReplicationController's status

[root@localhost ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   1         1         0         1h
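A ReplicationController's replica count can be changed after creation, which is the main reason to wrap the pod in one. A sketch:

```shell
# Scale up; the controller creates the missing pods to reach the target.
kubectl scale rc nginx-rc --replicas=3

# Watch the DESIRED/CURRENT/READY columns converge.
kubectl get rc nginx-rc
kubectl get pods -l name=nginx-pod
```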

7.5 Create the service: nginx-service.yaml

kubectl create -f nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: nginx-pod    

  

7.6 Check the service's status

Note: the "kubernetes" service listed there is built into the kube system and can be ignored.

7.7 Test the published nginx service
From a browser on another machine, access port 30001 on the node1 machine.
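The same test can be done from a shell; assuming the node IP 192.168.0.200 used earlier in this document:

```shell
# Any node IP works for a NodePort service; kube-proxy forwards to the pod.
curl -s http://192.168.0.200:30001 | head -n 5
```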

[root@localhost ~]# kubectl describe svc nginx-service
Name:			nginx-service
Namespace:		default
Labels:			<none>
Selector:		name=nginx-pod
Type:			NodePort
IP:			10.254.98.243
Port:			<unset>	80/TCP
NodePort:		<unset>	30001/TCP
Endpoints:		<none>
Session Affinity:	None
No events.

  

[root@localhost build]# sh postinstall.sh
postinstall.sh: line 19: cd: ./node_modules/wiredep: No such file or directory
postinstall.sh: line 20: ../../build/patch/wiredep/wiredep.patch: No such file or directory
postinstall.sh: line 21: cd: lib: No such file or directory
postinstall.sh: line 22: ../../../build/patch/wiredep/detect-dependencies.patch: No such file or directory
postinstall.sh: line 29: go: command not found
postinstall.sh: line 33: cd: ./.tools/: No such file or directory
Cloning into 'xtbgenerator'...
remote: Enumerating objects: 229, done.
remote: Total 229 (delta 0), reused 0 (delta 0), pack-reused 229
Receiving objects: 100% (229/229), 27.78 MiB | 45.00 KiB/s, done.
Resolving deltas: 100% (73/73), done.
Note: checking out 'd6a6c9ed0833f461508351a80bc36854bc5509b2'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

HEAD is now at d6a6c9e... fix empty --js param, recompile bin/XtbGenerator.jar

 

[root@localhost build]# sh run-gulp-in-docker.sh
ERRO[0000] Can't add file /dashboard-1.8.3/build/xtbgenerator/.git/objects/pack/tmp_pack_eZzn8x to tar: archive/tar: write too long
Sending build context to Docker daemon 43.7 MB
Step 1/6 : FROM golang
Trying to pull repository docker.io/library/golang ...
latest: Pulling from docker.io/library/golang
e79bb959ec00: Already exists
d4b7902036fe: Already exists
1b2a72d4e030: Already exists
d54db43011fd: Pull complete
963c818ebafc: Pull complete
2c6333e9b74a: Pull complete
3b0c71504fac: Pull complete
Digest: sha256:62538d25400afa368551fdeebbeed63f37a388327037438199cdf60b7f465639
Status: Downloaded newer image for docker.io/golang:latest
---> 213fe73a3852
Step 2/6 : RUN curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -y --no-install-recommends openjdk-8-jre nodejs patch && rm -rf /var/lib/apt/lists/* && apt-get clean
---> Running in ef39b4003488

nsenter: could not ensure we are a cloned binary: Invalid argument
container_linux.go:247: starting container process caused "write parent: broken pipe"
oci runtime error: container_linux.go:247: starting container process caused "write parent: broken pipe"

Unable to find image 'kubernetes-dashboard-build-image:latest' locally
Trying to pull repository docker.io/library/kubernetes-dashboard-build-image ...
/usr/bin/docker-current: repository docker.io/kubernetes-dashboard-build-image not found: does not exist or no pull access.
See '/usr/bin/docker-current run --help'.

