Preface
After deploying Kubernetes 1.13.0, I found that the master services start differently than in 1.10.4: kube-apiserver, kube-controller-manager, and kube-scheduler now each run from their own image instead of sharing a single hyperkube image. The official kube-controller-manager image, however, does not include the ceph client, so RBD volumes cannot be provisioned. The fix is to build a custom image with the ceph client installed.
1. Environment
OS: CentOS 7.2
Docker: 18.03.1-ce
Kubernetes: 1.13.0
2. Download the Kubernetes source
Use git clone with the -b flag to check out the source at the matching release tag:
# git clone -b v1.13.0 https://github.com/kubernetes/kubernetes.git
3. Download the base images
The build process uses the following base images:
k8s.gcr.io/kube-cross:v1.11.2-1
k8s.gcr.io/pause-amd64:3.1
k8s.gcr.io/debian-base-amd64:0.4.0
k8s.gcr.io/debian-iptables-amd64:v11.0
k8s.gcr.io/debian-hyperkube-base-amd64:0.12.0
These images can be built from source with make release, but for convenience I used the official prebuilt ones, pulled them, and pushed them to a private registry so they are available on any build machine.
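A minimal sketch of the mirroring step for one image (repeat for each of the five; registry.example.com/k8s_gcr is the private registry prefix that the patched build scripts below expect):

# docker pull k8s.gcr.io/debian-base-amd64:0.4.0
# docker tag k8s.gcr.io/debian-base-amd64:0.4.0 registry.example.com/k8s_gcr/debian-base-amd64:0.4.0
# docker push registry.example.com/k8s_gcr/debian-base-amd64:0.4.0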
4. Update the image registry addresses
Since a private registry is used, the registry addresses in the build scripts must be updated. The changes are as follows:
build/build-image/Dockerfile:
# git diff build/build-image/Dockerfile
diff --git a/build/build-image/Dockerfile b/build/build-image/Dockerfile
index ff4543b..976a377 100644
--- a/build/build-image/Dockerfile
+++ b/build/build-image/Dockerfile
@@ -13,7 +13,7 @@
 # limitations under the License.

 # This file creates a standard build environment for building Kubernetes
-FROM k8s.gcr.io/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
+FROM registry.example.com/k8s_gcr/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG

 # Mark this as a kube-build container
 RUN touch /kube-build-image
build/common.sh:
# git diff build/common.sh
diff --git a/build/common.sh b/build/common.sh
index b3b7748..902b08f 100755
--- a/build/common.sh
+++ b/build/common.sh
@@ -91,14 +91,15 @@ kube::build::get_docker_wrapped_binaries() {
   local arch=$1
   local debian_base_version=0.4.0
   local debian_iptables_version=v11.0
+  local registry="registry.example.com/k8s_gcr"

   ### If you change any of these lists, please also update DOCKERIZED_BINARIES
   ### in build/BUILD. And kube::golang::server_image_targets
   local targets=(
-    cloud-controller-manager,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
-    kube-apiserver,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
-    kube-controller-manager,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
-    kube-scheduler,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
-    kube-proxy,"k8s.gcr.io/debian-iptables-${arch}:${debian_iptables_version}"
+    cloud-controller-manager,"${registry}/debian-base-${arch}:${debian_base_version}"
+    kube-apiserver,"${registry}/debian-base-${arch}:${debian_base_version}"
+    kube-controller-manager,"${registry}/debian-base-${arch}:${debian_base_version}"
+    kube-scheduler,"${registry}/debian-base-${arch}:${debian_base_version}"
+    kube-proxy,"${registry}/debian-iptables-${arch}:${debian_iptables_version}"
   )

   echo "${targets[@]}"
build/root/WORKSPACE:
# git diff build/root/WORKSPACE
diff --git a/build/root/WORKSPACE b/build/root/WORKSPACE
index cee8962..f1a7c37 100644
--- a/build/root/WORKSPACE
+++ b/build/root/WORKSPACE
@@ -71,7 +71,7 @@ http_file(
 docker_pull(
     name = "debian-base-amd64",
     digest = "sha256:86176bc8ccdc4d8ea7fbf6ba4b57fcefc2cb61ff7413114630940474ff9bf751",
-    registry = "k8s.gcr.io",
+    registry = "registry.example.com/k8s_gcr",
     repository = "debian-base-amd64",
     tag = "0.4.0",  # ignored, but kept here for documentation
 )
@@ -79,7 +79,7 @@ docker_pull(
 docker_pull(
     name = "debian-iptables-amd64",
     digest = "sha256:d4ff8136b9037694a3165a7fff6a91e7fc828741b8ea1eda226d4d9ea5d23abb",
-    registry = "k8s.gcr.io",
+    registry = "registry.example.com/k8s_gcr",
     repository = "debian-iptables-amd64",
     tag = "v11.0",  # ignored, but kept here for documentation
 )
@@ -87,7 +87,7 @@ docker_pull(
 docker_pull(
     name = "debian-hyperkube-base-amd64",
     digest = "sha256:4a77bc882f7d629c088a11ff144a2e86660268fddf63b61f52b6a93d16ab83f0",
-    registry = "k8s.gcr.io",
+    registry = "registry.example.com/k8s_gcr",
     repository = "debian-hyperkube-base-amd64",
     tag = "0.12.0",  # ignored, but kept here for documentation
 )
build/lib/release.sh:
# git diff build/lib/release.sh
diff --git a/build/lib/release.sh b/build/lib/release.sh
index d7ccc01..47d9e37 100644
--- a/build/lib/release.sh
+++ b/build/lib/release.sh
@@ -327,7 +327,7 @@ function kube::release::create_docker_images_for_server() {
     local images_dir="${RELEASE_IMAGES}/${arch}"
     mkdir -p "${images_dir}"

-    local -r docker_registry="k8s.gcr.io"
+    local -r docker_registry="registry.example.com/k8s_gcr"
     # Docker tags cannot contain '+'
     local docker_tag="${KUBE_GIT_VERSION/+/_}"
     if [[ -z "${docker_tag}" ]]; then
5. Install the ceph client in kube-controller-manager
The master service images are built by kube::release::create_docker_images_for_server() in build/lib/release.sh, which also generates their Dockerfiles on the fly. Modify this function so that the generated Dockerfile includes the commands to install the ceph client.
The base image k8s.gcr.io/debian-base-amd64:0.4.0 is Debian 9.5 (Stretch). Ceph Mimic is written against the C++17 standard and needs GCC 8 to build, which Stretch does not provide, so Mimic is not available for Stretch. The options are to install the Luminous client instead, or to use third-party Mimic builds for Stretch, such as the croit mirror (see References).
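As a quick check (my addition, not from the original post), the Debian release of the base image can be confirmed directly; on a 9.5 image, /etc/debian_version should read 9.5:

# docker run --rm k8s.gcr.io/debian-base-amd64:0.4.0 cat /etc/debian_version
9.5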
Building on the change from step 4, modify build/lib/release.sh further:
# git diff build/lib/release.sh
diff --git a/build/lib/release.sh b/build/lib/release.sh
index d7ccc01..0d03da9 100644
--- a/build/lib/release.sh
+++ b/build/lib/release.sh
@@ -327,7 +327,7 @@ function kube::release::create_docker_images_for_server() {
     local images_dir="${RELEASE_IMAGES}/${arch}"
     mkdir -p "${images_dir}"

-    local -r docker_registry="k8s.gcr.io"
+    local -r docker_registry="registry.example.com/k8s_gcr"
     # Docker tags cannot contain '+'
     local docker_tag="${KUBE_GIT_VERSION/+/_}"
     if [[ -z "${docker_tag}" ]]; then
@@ -370,11 +370,21 @@ function kube::release::create_docker_images_for_server() {
       cat <<EOF > "${docker_file_path}"
 FROM ${base_image}
 COPY ${binary_name} /usr/local/bin/${binary_name}
+RUN echo "deb http://mirrors.aliyun.com/debian/ stretch main non-free contrib\ndeb-src http://mirrors.aliyun.com/debian/ stretch main non-free contrib" > /etc/apt/sources.list
+RUN apt-get update && apt-get -y install apt-transport-https gnupg2 wget curl
 EOF
       # ensure /etc/nsswitch.conf exists so go's resolver respects /etc/hosts
       if [[ "${base_image}" =~ busybox ]]; then
         echo "COPY nsswitch.conf /etc/" >> "${docker_file_path}"
       fi
+
+      # install ceph client
+      if [[ ${binary_name} =~ "kube-controller-manager" ]]; then
+        echo "RUN wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -" >> "${docker_file_path}"
+        echo "RUN echo 'deb https://download.ceph.com/debian-luminous/ stretch main' > /etc/apt/sources.list.d/ceph.list" >> "${docker_file_path}"
+        echo "RUN apt-get update && apt-get install -y ceph-common ceph-fuse" >> "${docker_file_path}"
+      fi
+
       "${DOCKER[@]}" build -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null
       "${DOCKER[@]}" save "${docker_image_tag}" > "${binary_dir}/${binary_name}.tar"
       echo "${docker_tag}" > "${binary_dir}/${binary_name}.docker_tag"
This switches the apt sources of all the server images to the aliyun mirror, and appends the ceph client installation commands to the kube-controller-manager Dockerfile only.
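For reference, the Dockerfile generated for kube-controller-manager should then look roughly like the following (the FROM line is filled in from the patched target list in build/common.sh; this is a reconstruction from the diff above, not output captured from a build):

FROM registry.example.com/k8s_gcr/debian-base-amd64:0.4.0
COPY kube-controller-manager /usr/local/bin/kube-controller-manager
RUN echo "deb http://mirrors.aliyun.com/debian/ stretch main non-free contrib\ndeb-src http://mirrors.aliyun.com/debian/ stretch main non-free contrib" > /etc/apt/sources.list
RUN apt-get update && apt-get -y install apt-transport-https gnupg2 wget curl
RUN wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
RUN echo 'deb https://download.ceph.com/debian-luminous/ stretch main' > /etc/apt/sources.list.d/ceph.list
RUN apt-get update && apt-get install -y ceph-common ceph-fuse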
With that, the preparation is done; the next step is to build the images.
6. Build the Kubernetes images
Build commands:
# cd kubernetes
# make clean
# KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images GOFLAGS=-v GOGCFLAGS="-N -l"
Here KUBE_BUILD_PLATFORMS=linux/amd64 restricts the build to the linux/amd64 platform; KUBE_BUILD_CONFORMANCE=n and KUBE_BUILD_HYPERKUBE=n skip the conformance and hyperkube images; GOFLAGS=-v enables verbose go build output; and GOGCFLAGS="-N -l" disables compiler optimizations and inlining, which makes the binaries easier to debug (it does not reduce their size, and can be omitted for optimized builds).
7. Import the images
Once the build finishes, the generated binaries and docker image tarballs are saved under _output/release-stage/server/linux-amd64/kubernetes/server/bin/. Load the kube-controller-manager image, retag it, and push it to the registry:
# docker load -i _output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-controller-manager.tar
# docker tag registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0 registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-mimic
# docker push registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-mimic
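As a sanity check (my addition, not part of the original workflow), the generated image defines no ENTRYPOINT, so the ceph client can be exercised directly; it should report a luminous release:

# docker run --rm registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-mimic ceph --version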
That completes the build. On the master nodes, point the image field in /etc/kubernetes/manifests/kube-controller-manager.yaml at the new tag; the kubelet applies changes to static pod manifests immediately, and creating an RBD PV will show the result.
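For example, on a master node (the sed expression is illustrative and assumes the manifest's image line matches this pattern; check the file afterwards):

# sed -i 's|image: .*kube-controller-manager.*|image: registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-mimic|' /etc/kubernetes/manifests/kube-controller-manager.yaml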
Testing showed that kube-controller-manager could create RBD volumes without problems, but mounting them on the nodes failed. Further testing showed the failure had little to do with the ceph client version; it depended on the kernel version, likely because the in-kernel rbd client must support the feature set enabled on the RBD image. After upgrading the node kernel from 3.10 to 4.17, the volumes mounted normally.
References
croit | Debian 9 (Stretch) Ceph Mimic mirror