Kubernetes Deployment: Automated Deployment with RKE


1. Introduction

RKE: Rancher Kubernetes Engine
An extremely simple, lightning-fast Kubernetes installer that runs anywhere.

2. Preparation

I. System configuration

OS: CentOS 7 / Ubuntu
After the OS is set up, install the required packages and prepare a data volume:

yum install lvm2 parted lrzsz -y
# Find the disk to configure
fdisk -l
# e.g. /dev/sda
fdisk /dev/sda # follow the prompts to create a partition
# Set up the LVM volume
pvcreate /dev/sda1
vgcreate disk1 /dev/sda1
lvcreate -n data -l +100%FREE disk1
# Format the logical volume
mkfs.xfs /dev/disk1/data
# Add an fstab entry so the volume mounts at boot
diskuuid=`blkid /dev/disk1/data | awk '{print $2}' | tr -d '"'`
echo "$diskuuid /data                   xfs     defaults        0 0" >> /etc/fstab
# Create /data if it does not exist, then mount everything in fstab
[ -d /data ] || mkdir /data
mount -a
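
To confirm the volume actually came up as intended, a quick sanity check (device, VG, and mount point as configured above):

# Verify the LVM layout and the mount
lsblk /dev/sda          # shows the partition and LVM hierarchy
vgs disk1 && lvs disk1  # volume group / logical volume status
df -h /data             # /data should be mounted with the expected size
grep /data /etc/fstab   # the boot-time entry should be present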

II. Install Docker

You can install one of the Docker versions validated by Rancher:

DOCKER VERSION   INSTALL SCRIPT
18.09.2          curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
18.06.2          curl https://releases.rancher.com/install-docker/18.06.2.sh | sh
17.03.2          curl https://releases.rancher.com/install-docker/17.03.2.sh | sh

Alternatively, install via yum:

# Configure the yum repository
sudo yum remove docker docker-common docker-selinux docker-engine
sudo wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
sudo yum install -y -q docker-ce-18.09.2 # specify whichever version you need here

Configure Docker's daemon.json:

systemctl enable docker
systemctl start docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn"
  ]
}
EOF
[ -d /data/docker ] || mkdir /data/docker
systemctl restart docker
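
To verify that Docker picked up the new daemon.json, check docker info (the expected values are the ones written above):

# Confirm the daemon now uses the new data root and mirrors
docker info | grep "Docker Root Dir"      # expect: /data/docker
docker info | grep -A3 "Registry Mirrors" # should list the mirrors above
docker run --rm hello-world               # optional end-to-end pull test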

At this point, the base environment is ready.

3. Installation and configuration

Assume the following initial environment:

IP            OS
192.168.0.1   CentOS 7
192.168.0.2   CentOS 7
192.168.0.3   CentOS 7

First, on every node, create the user that will build the cluster and set up passwordless SSH login:

useradd admin
usermod -aG docker admin
su - admin
ssh-keygen -t rsa # press Enter through every prompt (this also creates ~/.ssh)
# Append the public key of the host that will run rke:
echo <PublicKeys> >> /home/admin/.ssh/authorized_keys
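
If password authentication is still enabled on the nodes, ssh-copy-id is a convenient way to distribute the key from the host that will run rke; a small sketch (node IPs from the table above):

# Run as the admin user on the machine that will run rke
for host in 192.168.0.1 192.168.0.2 192.168.0.3; do
  ssh-copy-id admin@$host       # appends ~/.ssh/id_rsa.pub to the node's authorized_keys
  ssh admin@$host 'docker ps'   # verifies passwordless login and docker group membership
done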

With passwordless login in place, download the rke binary and create the cluster configuration file:

# Download the rke binary
# GitHub: https://github.com/rancher/rke
wget https://github.com/rancher/rke/releases/download/v1.0.4/rke_linux-amd64
chmod +x rke_linux-amd64
ln -s "$(pwd)/rke_linux-amd64" /usr/local/bin/rke
# Create the rke configuration file
[ -d /data/k8s ] || mkdir /data/k8s ; cd /data/k8s
rke config --name cluster.yml # answer the prompts to generate the config
# Once the configuration is complete
rke up # wait for the installation to finish
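
When rke up finishes, it writes kube_config_cluster.yml next to cluster.yml; a minimal health check, assuming kubectl is installed on this host:

export KUBECONFIG=/data/k8s/kube_config_cluster.yml
kubectl get nodes -o wide   # all nodes should report Ready
kubectl get pods -A         # system pods (network plugin, coredns, ingress) should be Running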

Example cluster.yml files:
Official example 1:

nodes:
    - address: 1.1.1.1
      user: ubuntu
      role:
        - controlplane
        - etcd
      ssh_key_path: /home/user/.ssh/id_rsa
      port: 2222
    - address: 2.2.2.2
      user: ubuntu
      role:
        - worker
      ssh_key: |-
        -----BEGIN RSA PRIVATE KEY-----

        -----END RSA PRIVATE KEY-----
    - address: example.com
      user: ubuntu
      role:
        - worker
      hostname_override: node3
      internal_address: 192.168.1.6
      labels:
        app: ingress

# If set to true, RKE will not fail when an unsupported Docker
# version is found
ignore_docker_version: false

# Cluster level SSH private key
# Used if no ssh information is set for the node
ssh_key_path: ~/.ssh/test

# Enable use of an SSH agent for SSH private keys with a passphrase
# This requires the environment variable `SSH_AUTH_SOCK` to point
# to your SSH agent, which must have the private key added
ssh_agent_auth: true

# List of registry credentials
# If you are using a Docker Hub registry, you can omit the `url`
# or set it to `docker.io`
# is_default set to `true` will override the system default
# registry set in the global settings
private_registries:
     - url: registry.com
       user: Username
       password: password
       is_default: true

# Bastion/Jump host configuration
bastion_host:
    address: x.x.x.x
    user: ubuntu
    port: 22
    ssh_key_path: /home/user/.ssh/bastion_rsa
# or
#   ssh_key: |-
#     -----BEGIN RSA PRIVATE KEY-----
#
#     -----END RSA PRIVATE KEY-----

# Set the name of the Kubernetes cluster  
cluster_name: mycluster


# The Kubernetes version used. The default versions of Kubernetes
# are tied to specific versions of the system images.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
# In case the kubernetes_version and kubernetes image in
# system_images are defined, the system_images configuration
# will take precedence over kubernetes_version.
kubernetes_version: v1.10.3-rancher2

# System Images are defaulted to a tag that is mapped to a specific
# Kubernetes Version and not required in a cluster.yml. 
# Each individual system image can be specified if you want to use a different tag.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
system_images:
    kubernetes: rancher/hyperkube:v1.10.3-rancher2
    etcd: rancher/coreos-etcd:v3.1.12
    alpine: rancher/rke-tools:v0.1.9
    nginx_proxy: rancher/rke-tools:v0.1.9
    cert_downloader: rancher/rke-tools:v0.1.9
    kubernetes_services_sidecar: rancher/rke-tools:v0.1.9
    kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.8
    dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.8
    kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
    pod_infra_container: rancher/pause-amd64:3.1

services:
    etcd:
      # if external etcd is used
      # path: /etcdcluster
      # external_urls:
      #   - https://etcd-example.com:2379
      # ca_cert: |-
      #   -----BEGIN CERTIFICATE-----
      #   xxxxxxxxxx
      #   -----END CERTIFICATE-----
      # cert: |-
      #   -----BEGIN CERTIFICATE-----
      #   xxxxxxxxxx
      #   -----END CERTIFICATE-----
      # key: |-
      #   -----BEGIN PRIVATE KEY-----
      #   xxxxxxxxxx
      #   -----END PRIVATE KEY-----
    # Note for Rancher v2.0.5 and v2.0.6 users: If you are configuring
    # Cluster Options using a Config File when creating Rancher Launched
    # Kubernetes, the names of services should contain underscores
    # only: `kube_api`.
    kube-api:
      # IP range for any services created on Kubernetes
      # This must match the service_cluster_ip_range in kube-controller
      service_cluster_ip_range: 10.43.0.0/16
      # Expose a different port range for NodePort services
      service_node_port_range: 30000-32767    
      pod_security_policy: false
      # Add additional arguments to the kubernetes API server
      # This WILL OVERRIDE any existing defaults
      extra_args:
        # Enable audit log to stdout
        audit-log-path: "-"
        # Increase number of delete workers
        delete-collection-workers: 3
        # Set the level of log output to debug-level
        v: 4
    # Note for Rancher 2 users: If you are configuring Cluster Options
    # using a Config File when creating Rancher Launched Kubernetes,
    # the names of services should contain underscores only:
    # `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.
    kube-controller:
      # CIDR pool used to assign IP addresses to pods in the cluster
      cluster_cidr: 10.42.0.0/16
      # IP range for any services created on Kubernetes
      # This must match the service_cluster_ip_range in kube-api
      service_cluster_ip_range: 10.43.0.0/16
    kubelet:
      # Base domain for the cluster
      cluster_domain: cluster.local
      # IP address for the DNS service endpoint
      cluster_dns_server: 10.43.0.10
      # Fail if swap is on
      fail_swap_on: false
      # Set max pods to 250 instead of default 110
      extra_args:
        max-pods: 250
      # Optionally define additional volume binds to a service
      extra_binds:
        - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"

# Currently, the only supported authentication strategy is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
    strategy: x509
    sans:
      - "10.18.160.10"
      - "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"

# Kubernetes Authorization mode
# Use `mode: rbac` to enable RBAC
# Use `mode: none` to disable authorization
authorization:
    mode: rbac

# If you want to set a Kubernetes cloud provider, you specify
# the name and configuration
cloud_provider:
    name: aws

# Add-ons are deployed using kubernetes jobs. RKE will give
# up on trying to get the job status after this timeout in seconds.
addon_job_timeout: 30

# Specify the network plug-in (canal, calico, flannel, weave, or none)
network:
    plugin: canal

# Specify DNS provider (coredns or kube-dns)
dns:
    provider: coredns

# Currently only nginx ingress provider is supported.
# To disable ingress controller, set `provider: none`
# `node_selector` controls ingress placement and is optional
ingress:
    provider: nginx
    node_selector:
      app: ingress
      
# All add-on manifests MUST specify a namespace
addons: |-
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      namespace: default
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

addons_include:
    - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
    - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml
    - /path/to/manifest
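
If the inline addons block above is used, RKE deploys the manifest through a kubernetes job during rke up; a quick check that the example pod came up (pod name and namespace as in the block above):

kubectl --kubeconfig kube_config_cluster.yml -n default get pod my-nginx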

Official example 2 (mainly to highlight the extended parameters):

nodes:
  - address: 1.1.1.1
    internal_address:
    user: ubuntu
    role:
      - controlplane
      - etcd
    ssh_key_path: /home/user/.ssh/id_rsa
    port: 2222
  - address: 2.2.2.2
    internal_address:
    user: ubuntu
    role:
      - worker
    ssh_key: |-
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
  - address: example.com
    user: ubuntu
    role:
      - worker
    hostname_override: node3
    internal_address: 192.168.1.6
    labels:
      app: ingress
      dns: "true"   # label keys must be unique; a second `app:` key would be invalid YAML

# If set to true, unsupported Docker versions will not cause a failure
ignore_docker_version: false

# Cluster-level SSH private key
## If a node has no SSH key of its own, RKE connects to it with this key
ssh_key_path: ~/.ssh/test

# Use an SSH agent to provide SSH private keys with a passphrase
## Requires the environment variable `SSH_AUTH_SOCK` to point to an SSH agent that has the key added
ssh_agent_auth: false

# Docker root directory
docker_root_dir: "/var/lib/docker"

# Private registries
## With `is_default: true`, images are pulled from this registry when building the cluster
## If you use the Docker Hub registry, you can omit `url` or set it to `docker.io`
## For an internal public registry, username and password can be omitted

private_registries:
  - url: registry.com
    user: Username
    password: password
    is_default: true

# Bastion/jump host
## If the cluster nodes are reached through a bastion host, configure it here so RKE can jump through it
bastion_host:
  address: x.x.x.x
  user: ubuntu
  port: 22
  ssh_key_path: /home/user/.ssh/bastion_rsa
# or
#   ssh_key: |-
#     -----BEGIN RSA PRIVATE KEY-----
#
#     -----END RSA PRIVATE KEY-----

# Set the Kubernetes cluster name
cluster_name: mycluster

# Kubernetes version
## Currently the version must match an entry in the rancher/types defaults map: https://github.com/rancher/types/blob/master/apis/management.cattle.io/v3/k8s_defaults.go#L14 (for later releases see: https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go )
## If both kubernetes_version and the kubernetes image in system_images are defined, system_images takes precedence over kubernetes_version
kubernetes_version: v1.14.3-rancher1

# `system_images` takes priority: images not set individually in `system_images` fall back to the defaults for `kubernetes_version`.
## Default tags: https://github.com/rancher/types/blob/master/apis/management.cattle.io/v3/k8s_defaults.go (for Rancher v2.3 / RKE v0.3 and later see: https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go )
system_images:
  etcd: rancher/coreos-etcd:v3.3.10-rancher1
  alpine: rancher/rke-tools:v0.1.34
  nginx_proxy: rancher/rke-tools:v0.1.34
  cert_downloader: rancher/rke-tools:v0.1.34
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.34
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  coredns: rancher/coredns-coredns:1.3.1
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  kubernetes: rancher/hyperkube:v1.14.3-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.1

services:
  etcd:
    # if external etcd is used
    # path: /etcdcluster
    # external_urls:
    #   - https://etcd-example.com:2379
    # ca_cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # key: |-
    #   -----BEGIN PRIVATE KEY-----
    #   xxxxxxxxxx
    #   -----END PRIVATE KEY-----
    # Note for Rancher 2 users: if you configure cluster options with a config file when creating Rancher Launched Kubernetes, service names must contain underscores only: `kube_api`. This only applies to Rancher v2.0.5 and v2.0.6.
    # The parameters below are only supported for etcd clusters deployed by RKE

    # Enable automatic backups
    ## For rke < 0.2.x or rancher < v2.2.0, use:
    snapshot: true
    creation: 5m0s
    retention: 24h
    ## For rke >= 0.2.x or rancher >= v2.2.0, use (pick one of the two blocks):
    backup_config:
      enabled: true           # true enables automatic etcd backups, false disables them
      interval_hours: 12      # snapshot interval; without this parameter the default is 5 minutes
      retention: 6            # number of etcd backups to keep
      # S3 options
      s3backupconfig:
        access_key: "myaccesskey"
        secret_key:  "myaccesssecret"
        bucket_name: "my-backup-bucket"
        folder: "folder-name" # available from v2.3.0
        endpoint: "s3.eu-west-1.amazonaws.com"
        region: "eu-west-1"
    # Extra arguments
    extra_args:
      auto-compaction-retention: 240 # (in hours)
      # Raise the space quota to $((6*1024*1024*1024)); default 2G, maximum 8G
      quota-backend-bytes: '6442450944'
  kube-api:
    # cluster_ip range
    ## Must match service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Port range for NodePort services
    service_node_port_range: 30000-32767
    # Pod security policy
    pod_security_policy: false
    # Extra arguments for the kubernetes API server
    ## These replace the default values
    extra_args:
      watch-cache: true
      default-watch-cache-size: 1500
      # Event retention time, default 1 hour
      event-ttl: 1h0m0s
      # Default 400; 0 means unlimited; as a rule of thumb, 15 in-flight requests per 25-30 pods
      max-requests-inflight: 800
      # Default 200; 0 means unlimited
      max-mutating-requests-inflight: 400
      # Kubelet operation timeout, default 5s
      kubelet-timeout: 5s
      # Enable audit logging to stdout
      audit-log-path: "-"
      # Increase the number of delete-collection workers
      delete-collection-workers: 3
      # Set log verbosity to debug level
      v: 4
  # Note for Rancher 2 users: if you configure cluster options with a config file when creating Rancher Launched Kubernetes, service names must contain underscores only: `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.
  kube-controller:
    # Pod IP range
    cluster_cidr: 10.42.0.0/16
    # cluster_ip range
    ## Must match service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16
    extra_args:
      # Subnet size per node (CIDR mask length); the default 24 gives 254 usable IPs; 23 gives 510; 22 gives 1022
      node-cidr-mask-size: '24'

      feature-gates: "TaintBasedEvictions=false"
      # How often the controller checks in with nodes for health; default 5s
      node-monitor-period: '5s'
      ## After node communication fails, kubernetes waits this long before marking the node NotReady.
      ## The period must be a multiple of the kubelet's nodeStatusUpdateFrequency (default 10s),
      ## where N is the number of retries the kubelet is allowed for syncing node status; default 40s.
      node-monitor-grace-period: '20s'
      ## If communication keeps failing beyond this, kubernetes marks the node unhealthy; default 1m0s.
      node-startup-grace-period: '30s'
      ## If the node stays unreachable beyond this, kubernetes starts evicting its pods; default 5m0s.
      pod-eviction-timeout: '1m'

      # Default 5. Number of deployments synced concurrently.
      concurrent-deployment-syncs: 5
      # Default 5. Number of endpoints synced concurrently.
      concurrent-endpoint-syncs: 5
      # Default 20. Number of garbage-collector workers synced concurrently.
      concurrent-gc-syncs: 20
      # Default 10. Number of namespaces synced concurrently.
      concurrent-namespace-syncs: 10
      # Default 5. Number of replica sets synced concurrently.
      concurrent-replicaset-syncs: 5
      # Default 5m0s. Resource-quota sync period. (Deprecated in newer versions)
      # concurrent-resource-quota-syncs: 5m0s
      # Default 1. Number of services synced concurrently.
      concurrent-service-syncs: 1
      # Default 5. Number of service-account tokens synced concurrently.
      concurrent-serviceaccount-token-syncs: 5
      # Default 5. Number of replication controllers synced concurrently.
      concurrent-rc-syncs: 5
      # Default 30s. Deployment sync period.
      deployment-controller-sync-period: 30s
      # Default 15s. PV and PVC sync period.
      pvclaimbinder-sync-period: 15s
  kubelet:
    # Cluster search domain
    cluster_domain: cluster.local
    # Internal DNS server address
    cluster_dns_server: 10.43.0.10
    # Do not fail when swap is enabled
    fail_swap_on: false
    # Extra arguments
    extra_args:
      # Enable static pods: create a manifest directory under /etc/kubernetes/ on the host and place pod YAML files in /etc/kubernetes/manifest/
      pod-manifest-path: "/etc/kubernetes/manifest/"
      root-dir:  "/var/lib/kubelet"
      docker-root: "/var/lib/docker"
      feature-gates: "TaintBasedEvictions=false"
      # Pause image
      pod-infra-container-image: 'rancher/pause:3.1'
      # MTU passed to the network plugin, overriding the default; 0 (zero) means the default 1460
      network-plugin-mtu: '1500'
      # Maximum number of pods per node
      max-pods: "250"
      # Secret and ConfigMap sync period, default 1 minute
      sync-frequency: '3s'
      # Number of files the kubelet process may open (default 1000000); tune it to the node
      max-open-files: '2000000'
      # Burst when talking to the apiserver, default 10
      kube-api-burst: '30'
      # QPS when talking to the apiserver, default 5; QPS = concurrency / average response time
      kube-api-qps: '15'
      # By default the kubelet pulls one image at a time; set to false to pull several in parallel,
      # provided the storage driver is overlay2 and Docker's download concurrency is raised accordingly, see [docker configuration](/rancher2x/install-prepare/best-practices/docker/)
      serialize-image-pulls: 'false'
      # Maximum concurrency for image pulls; registry-burst must not exceed registry-qps
      # and only takes effect when registry-qps is greater than 0 (zero) (default 10). A registry-qps of 0 means unlimited (default 5).
      registry-burst: '10'
      registry-qps: '0'
      cgroups-per-qos: 'true'
      cgroup-driver: 'cgroupfs'

      # Node resource reservation
      enforce-node-allocatable: 'pods'
      system-reserved: 'cpu=0.25,memory=200Mi'
      kube-reserved: 'cpu=0.25,memory=1500Mi'
      # Pod eviction; only memory and disk are supported.
      ## Hard eviction thresholds
      ### When available node resources drop below the reserved values, eviction is triggered immediately: pods are force-killed without waiting for a graceful exit.
      eviction-hard: 'memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%'
      ## Soft eviction thresholds
      ### The next four parameters work together. When available resources drop below a soft threshold but stay above the hard one, the kubelet waits for eviction-soft-grace-period;
      ### it re-checks every 10s, and if the last check still breaches the threshold, eviction begins: the pod first receives a stop signal and is given eviction-max-pod-grace-period to exit;
      ### if the pod has not exited after eviction-max-pod-grace-period, it is force-killed.
      eviction-soft: 'memory.available<500Mi,nodefs.available<50%,imagefs.available<50%,nodefs.inodesFree<10%'
      eviction-soft-grace-period: 'memory.available=1m30s'
      eviction-max-pod-grace-period: '30'
      eviction-pressure-transition-period: '30s'
      # How often the kubelet posts node status to the master. Note: it must work in concert with nodeMonitorGracePeriod in kube-controller. (default 10s)
      node-status-update-frequency: 10s
      # Interval of cAdvisor's global housekeeping, which mainly discovers new containers via kernel events. Default 1m0s
      global-housekeeping-interval: 1m0s
      # Data-collection frequency for each discovered container. Default 10s
      housekeeping-interval: 10s
      # Timeout for all runtime requests except long-running pull, logs, exec and attach. On timeout the kubelet cancels the request, raises an error and retries. (default 2m0s)
      runtime-request-timeout: 2m0s
      # Interval at which the kubelet computes and caches disk usage of all pods and volumes. Default 1m0s
      volume-stats-agg-period: 1m0s

    # Optionally bind additional volumes into the service
    extra_binds:
      - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
  kubeproxy:
    extra_args:
      # iptables is used for forwarding by default; set `ipvs` here to enable ipvs
      proxy-mode: ""
      # Burst when talking to the kubernetes apiserver, default 10
      kube-api-burst: 20
      # QPS when talking to the kubernetes apiserver, default 5; QPS = concurrency / average response time
      kube-api-qps: 10
    extra_binds: []
  scheduler:
    extra_args: {}
    extra_binds: []
    extra_env: []

# Currently only x509 authentication is supported
## Optionally add extra SANs (hostnames or IPs) to the API server PKI certificate.
## This is useful if you put a load balancer in front of the control plane servers.
authentication:
  strategy: "x509|webhook"
  webhook:
    config_file: "...."
    cache_timeout: 5s
  sans:
    # Alternate domain names or IPs; if the primary one becomes unreachable, the cluster stays reachable through these
    - "192.168.1.100"
    - "www.test.com"
# Kubernetes authorization mode
## Use `mode: rbac` to enable RBAC
## Use `mode: none` to disable authorization
authorization:
  mode: rbac
# To use a Kubernetes cloud provider, specify its name and configuration; leave it empty on non-cloud hosts
cloud_provider:
# Add-ons are deployed via kubernetes jobs. RKE gives up polling the job status after this timeout, in seconds.
addon_job_timeout: 30
# Several network plugins are available: `flannel, canal, calico`; Rancher 2 defaults to canal
network:
  # Available from rke v1.0.4+; with the canal network plugin, set the MTU to 1450
  mtu: 1450
  plugin: canal
  options:
    flannel_backend_type: "vxlan"
# Currently only the nginx ingress controller is supported
## Set `provider: none` to disable the ingress controller
ingress:
  provider: nginx
  node_selector:
    app: ingress
# Upstream DNS servers for the cluster DNS
## Available from rke v0.2.0
dns:
  provider: coredns
  upstreamnameservers:
  - 114.114.114.114
  - 1.2.4.8
  node_selector:
    dns: "true"
# Install add-ons
## Every add-on manifest MUST specify a namespace
addons: |-
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      namespace: default
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

addons_include:
    - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yml
    - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yml
    - /path/to/manifest
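
Besides the recurring backups configured under services.etcd above, rke can take and restore one-off etcd snapshots from the CLI. A sketch (the snapshot name is arbitrary; local snapshots are stored under /opt/rke/etcd-snapshots on the nodes):

# One-off snapshot of etcd for the whole cluster
rke etcd snapshot-save --config cluster.yml --name manual-snapshot-001
# Restore the cluster state from a named snapshot
rke etcd snapshot-restore --config cluster.yml --name manual-snapshot-001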

Personal example:

nodes:
- address: 192.168.0.1
  port: "22"
  internal_address: 192.168.0.1
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.1
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.2
  port: "22"
  internal_address: 192.168.0.2
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.2
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.3
  port: "22"
  internal_address: 192.168.0.3
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.3
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 172.26.96.0/20
    service_node_port_range: "30000-40000"
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args:
      # Subnet size per node (CIDR mask length); the default 24 gives 254 usable IPs; 23 gives 510; 22 gives 1022
      node-cidr-mask-size: '25'
    extra_binds: []
    extra_env: []
    cluster_cidr: 172.26.112.0/20
    service_cluster_ip_range: 172.26.96.0/20
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args:
      # Maximum number of pods per node
      max-pods: "120"
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 172.26.96.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: flannel
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
# All add-on manifests MUST specify a namespace
addons: |-
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app: flannel
        tier: node
      name: kube-flannel-cfg
      namespace: kube-system
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion":"0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "172.26.112.0/20",
          "Backend": {
            "Type": "vxlan",
            "VNI": 1,
            "Port": 8472
          }
        }
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.52
  nginx_proxy: rancher/rke-tools:v0.1.52
  cert_downloader: rancher/rke-tools:v0.1.52
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.52
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  kubernetes: rancher/hyperkube:v1.17.2-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.10.2
  calico_cni: rancher/calico-cni:v3.10.2
  calico_controllers: rancher/calico-kube-controllers:v3.10.2
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  canal_node: rancher/calico-node:v3.10.2
  canal_cni: rancher/calico-cni:v3.10.2
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: true
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null

OK — at this point the cluster installation is complete.
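
From here on, cluster.yml stays the single source of truth: to change anything (add a node, raise max-pods, switch the network plugin), edit the file and re-run rke against it. A sketch of the usual follow-up commands (rke remove tears the whole cluster down, so treat it with care):

rke up --config cluster.yml                 # re-apply after editing cluster.yml
rke up --config cluster.yml --update-only   # reconcile worker-node changes only
rke remove --config cluster.yml             # destroy the cluster (irreversible)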

Important notes

The following files are needed to maintain, troubleshoot, and upgrade the cluster.

Keep copies of the following files in a safe location:

  • cluster.yml: the RKE cluster configuration file.
  • kube_config_cluster.yml: the kubeconfig file for the cluster; it contains credentials for full access to the cluster.
  • cluster.rkestate: the Kubernetes cluster state file; it also contains credentials for full access to the cluster.

The cluster state file is only created when RKE v0.2.0 or later is used.
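
A minimal sketch of backing them up (the destination is an assumption; note that both the kubeconfig and the state file contain full-access credentials, so protect the archive accordingly):

cd /data/k8s
tar czf rke-backup-$(date +%F).tar.gz cluster.yml kube_config_cluster.yml cluster.rkestate
scp rke-backup-*.tar.gz backup-host:/backups/   # hypothetical off-host destination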

Official RKE documentation

