Kubernetes III: Delivering Dubbo Services to a K8S Cluster


I. Introduction to Dubbo

1. What is Dubbo?

  • Dubbo is the core framework of Alibaba's SOA service governance solution, supporting 3 billion+ invocations per day for 2,000+ services, and is widely used across Alibaba Group's member sites.
  • Dubbo is a distributed service framework dedicated to providing high-performance, transparent RPC remote invocation, along with SOA service governance.
  • Simply put, Dubbo is a service framework. Without distributed requirements there is no need for it; it is only in distributed systems that a framework like Dubbo matters. At its core it is a distributed framework for remote service invocation.

2. What can Dubbo do?

  • Transparent remote method invocation: call remote methods as if they were local, with only simple configuration and no API intrusion.
  • Soft load balancing and fault tolerance: on the internal network it can replace hardware load balancers such as F5, cutting cost and removing single points of failure.
  • Automatic service registration and discovery: no more hard-coded provider addresses; the registry looks up provider IPs by interface name, and providers can be added or removed smoothly.

3. How Dubbo works

  • Simply put, Dubbo is a Java-based RPC framework. It involves four roles: service provider, service consumer, registry, and monitor center.
  • Its work splits into two phases: a deployment phase and a runtime phase.
  • In the original diagram, the deployment phase is drawn in blue (service registration and subscription) and the runtime phase in red (one complete RPC call).
  • During deployment, a provider exposes its services on a designated port at startup and reports its address to the registry.
  • A consumer subscribes to the services it is interested in from the registry at startup.
  • At runtime, the registry pushes the address list to the consumer; the consumer picks one address and invokes the remote side.
  • Throughout this process, the running state of consumers and providers is reported to the monitor center.
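
To make the registration flow concrete, here is a minimal sketch of the registry settings a provider and a consumer would carry. The property names are standard Dubbo properties (the same ones used in the dubbo-monitor config later in this article); the application names are hypothetical, and the zk address matches the cluster built below:

# provider side (hypothetical application name)
dubbo.application.name=demo-provider
dubbo.registry.address=zookeeper://zk1.phc-dow.com:2181
dubbo.protocol.name=dubbo
dubbo.protocol.port=20880        # port on which the provider exposes its services

# consumer side (hypothetical application name)
dubbo.application.name=demo-consumer
dubbo.registry.address=zookeeper://zk1.phc-dow.com:2181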

II. Hands-on: delivering a set of dubbo microservices to the kubernetes cluster

1. Lab topology

  • In the topology, tier one is deployed outside k8s, tier two runs inside k8s, and tier three is deployed on host 7-200.

2. Base infrastructure

Hostname               Role                               IP
kjdow7-11.host.com     k8s proxy node 1, zk1              10.4.7.11
kjdow7-12.host.com     k8s proxy node 2, zk2              10.4.7.12
kjdow7-21.host.com     k8s compute node 1, zk3            10.4.7.21
kjdow7-22.host.com     k8s compute node 2, jenkins        10.4.7.22
kjdow7-200.host.com    k8s ops node (docker registry)     10.4.7.200

3. Deploying ZooKeeper

3.1 Install the JDK

Deploy on kjdow7-11, kjdow7-12, and kjdow7-21:

 ~]# mkdir /usr/java
 ~]# tar xf jdk-8u221-linux-x64.tar.gz -C /usr/java
 ~]# ln -s /usr/java/jdk1.8.0_221 /usr/java/jdk
 ~]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
 ~]# source /etc/profile
 ~]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)

3.2 Install ZooKeeper (on the 3 zk hosts)

Deploy on kjdow7-11, kjdow7-12, and kjdow7-21:

ZooKeeper download address:

# download, unpack, configure
 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
 ~]# tar xf zookeeper-3.4.14.tar.gz -C /opt
 ~]# ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
 ~]# mkdir -p /data/zookeeper/data /data/zookeeper/logs
 ~]# vi /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.phc-dow.com:2888:3888
server.2=zk2.phc-dow.com:2888:3888
server.3=zk3.phc-dow.com:2888:3888

Note: the zk configuration is identical on all nodes.

On kjdow7-11:

[root@kjdow7-11 ~]# cat /data/zookeeper/data/myid
1

On kjdow7-12:

[root@kjdow7-12 ~]# cat /data/zookeeper/data/myid
2

On kjdow7-21:

[root@kjdow7-21 ~]# cat /data/zookeeper/data/myid
3
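
The myid files shown above can be created with a one-liner; run it on each node with that node's own id:

 ~]# echo 1 > /data/zookeeper/data/myid    # use 2 on kjdow7-12 and 3 on kjdow7-21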

3.3 Configure DNS resolution

On kjdow7-11:

[root@kjdow7-11 ~]# cat /var/named/phc-dow.com.zone

                                2020010206   ; serial   # bump the serial by one

zk1	60 IN      A         10.4.7.11                      # append these three lines at the end
zk2	60 IN      A         10.4.7.12
zk3	60 IN      A         10.4.7.21
[root@kjdow7-11 ~]# systemctl restart named
[root@kjdow7-11 ~]# dig -t A zk1.phc-dow.com @10.4.7.11 +short
10.4.7.11
[root@kjdow7-11 ~]# dig -t A zk2.phc-dow.com @10.4.7.11 +short
10.4.7.12
[root@kjdow7-11 ~]# dig -t A zk3.phc-dow.com @10.4.7.11 +short
10.4.7.21

3.4 Start zk on each node in turn

[root@kjdow7-11 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@kjdow7-11 ~]# netstat -lntup | grep 19333
tcp6       0      0 10.4.7.11:3888          :::*                    LISTEN      19333/java          
tcp6       0      0 :::36989                :::*                    LISTEN      19333/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      19333/java  

[root@kjdow7-21 ~]# netstat -lntup | grep 3675
tcp6       0      0 10.4.7.21:2888          :::*                    LISTEN      3675/java           
tcp6       0      0 10.4.7.21:3888          :::*                    LISTEN      3675/java           
tcp6       0      0 :::2181                 :::*                    LISTEN      3675/java           
tcp6       0      0 :::39301                :::*                    LISTEN      3675/java 

[root@kjdow7-12 ~]# netstat -lntup | grep 11949
tcp6       0      0 10.4.7.12:3888          :::*                    LISTEN      11949/java          
tcp6       0      0 :::46303                :::*                    LISTEN      11949/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      11949/java  

[root@kjdow7-11 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kjdow7-12 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kjdow7-21 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
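
As an extra health check, the ZooKeeper four-letter commands can be queried directly, assuming nc is available on the host:

 ~]# echo stat | nc zk1.phc-dow.com 2181 | grep Mode    # should match the zkServer.sh status output above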

4. Preparing to install and deploy Jenkins

Official Jenkins image:

4.1 Prepare the image

[root@kjdow7-200 ~]# docker pull jenkins/jenkins:2.190.3
[root@kjdow7-200 ~]# docker images | grep jenkins
jenkins/jenkins                                   2.190.3                    22b8b9a84dbe        2 months ago        568MB
[root@kjdow7-200 ~]# docker tag 22b8b9a84dbe harbor.phc-dow.com/public/jenkins:v2.190.3
[root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/jenkins:v2.190.3

4.2 Customize the Dockerfile

The image pulled from the official registry needs some customization before it can be deployed to the k8s cluster.

Edit the custom Dockerfile on the ops host kjdow7-200.host.com:

mkdir -p /data/dockerfile/jenkins
cd /data/dockerfile/jenkins
vim Dockerfile
FROM harbor.phc-dow.com/public/jenkins:v2.190.3
USER root
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD id_rsa /root/.ssh/id_rsa
ADD config.json /root/.docker/config.json
ADD get-docker.sh /get-docker.sh
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\
    /get-docker.sh

In this Dockerfile we mainly do the following:

  • Set the container user to root
  • Set the container's time zone
  • Add the ssh private key (needed when pulling code over git; the matching public key must be configured in GitLab)
  • Add the config file that logs us in to the self-hosted harbor registry
  • Relax the ssh client's host-key checking (StrictHostKeyChecking no) so connections never prompt interactively
  • Install a docker client
  • If the build fails for network reasons, append " --mirror Aliyun" to the /get-docker.sh invocation at the end
1) Generate an ssh key pair:
[root@kjdow7-200 jenkins]# ssh-keygen -t rsa -b 2048 -C "897307140@qq.com" -N "" -f /root/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bIajghsF/BqJouTeNvZXvQWvolAKWvhVSuZ3uVWoVXU 897307140@qq.com
The key's randomart image is:
+---[RSA 2048]----+
|             ...E|
|.           o   .|
|..   o .   o .   |
|..+ + oo  +..    |
|o=.+ +ooS+..o    |
|=o* o.++..o. o   |
|++...o  ..  +    |
|.o.=  .. . o     |
|..o.o.... .      |
+----[SHA256]-----+
[root@kjdow7-200 jenkins]# cp /root/.ssh/id_rsa .

2) Prepare the remaining files
[root@kjdow7-200 jenkins]# cp /root/.docker/config.json .
[root@kjdow7-200 jenkins]# curl -fsSL get.docker.com -o get-docker.sh
[root@kjdow7-200 jenkins]# chmod +x get-docker.sh 
[root@kjdow7-200 jenkins]# ll
total 28
-rw------- 1 root root   160 Jan 28 23:41 config.json
-rw-r--r-- 1 root root   355 Jan 28 23:38 Dockerfile
-rwxr-xr-x 1 root root 13216 Jan 28 23:42 get-docker.sh
-rw------- 1 root root  1675 Jan 28 23:38 id_rsa

3) Log in to the Harbor web UI and create the infra project

Create a project named infra with access level Private.

4) Build the image
[root@kjdow7-200 jenkins]# docker build -t harbor.phc-dow.com/infra/jenkins:v2.190.3 .
Sending build context to Docker daemon  19.46kB
Step 1/7 : FROM harbor.phc-dow.com/public/jenkins:v2.190.3
 ---> 22b8b9a84dbe
Step 2/7 : USER root
 ---> Running in 7604d600a620
Removing intermediate container 7604d600a620
 ---> c8d326bfe8b7
Step 3/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&    echo 'Asia/Shanghai' >/etc/timezone
 ---> Running in 1b72c3d69eea
Removing intermediate container 1b72c3d69eea
 ---> f839ab1701d0
Step 4/7 : ADD id_rsa /root/.ssh/id_rsa
 ---> 840bac71419f
Step 5/7 : ADD config.json /root/.docker/config.json
 ---> 2dcd61ef1c90
Step 6/7 : ADD get-docker.sh /get-docker.sh
 ---> 9430aa0cb5ad
Step 7/7 : RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/sshd_config &&    /get-docker.sh
 ---> Running in ff19d96b70da
# Executing docker install script, commit: f45d7c11389849ff46a6b4d94e0dd1ffebca32c1
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
+ sh -c curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
Removing intermediate container ff19d96b70da
 ---> 637a6cbc288d
Successfully built 637a6cbc288d
Successfully tagged harbor.phc-dow.com/infra/jenkins:v2.190.3

5) Push the image to the registry
[root@kjdow7-200 jenkins]# docker push harbor.phc-dow.com/infra/jenkins:v2.190.3

4.3 Prepare shared storage

Jenkins keeps its configuration under /var/jenkins_home, so we mount that directory from the host over NFS. Whichever compute node the pod lands on, and however often the pod is recreated, a newly started pod keeps the previous configuration; nothing is lost.

1) Run on all hosts
yum install nfs-utils -y

2) Configure the NFS service
[root@kjdow7-200 ~]# vim /etc/exports
/data/nfs-volume 10.4.7.0/24(rw,no_root_squash)
### start the NFS service
[root@kjdow7-200 ~]# mkdir -p /data/nfs-volume
[root@kjdow7-200 ~]# systemctl start nfs
[root@kjdow7-200 ~]# systemctl enable nfs
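
Before wiring the export into a pod, it can be verified from any node with showmount (part of nfs-utils):

[root@kjdow7-21 ~]# showmount -e 10.4.7.200
Export list for 10.4.7.200:
/data/nfs-volume 10.4.7.0/24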

4.4 Prepare the resource manifests

On the ops host kjdow7-200.host.com:

[root@kjdow7-200 ~]# mkdir /data/k8s-yaml/jenkins && mkdir -p /data/nfs-volume/jenkins_home && cd /data/k8s-yaml/jenkins

[root@kjdow7-200 ~]# vi dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels: 
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: jenkins
  template:
    metadata:
      labels: 
        app: jenkins 
        name: jenkins
    spec:
      volumes:
      - name: data
        nfs: 
          server: kjdow7-200
          path: /data/nfs-volume/jenkins_home
      - name: docker
        hostPath: 
          path: /run/docker.sock
          type: ''
      containers:
      - name: jenkins
        image: harbor.phc-dow.com/infra/jenkins:v2.190.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512m -Xms512m
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
      imagePullSecrets:
      - name: harbor
      securityContext: 
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

Note: the name under imagePullSecrets: must match the name given when the secret is created (see section 4.5).

Mounting the host's /run/docker.sock into the pod lets the pod talk directly to the docker daemon on its host.

[root@kjdow7-200 ~]# vim service.yaml
kind: Service
apiVersion: v1
metadata: 
  name: jenkins
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: jenkins

Note: targetPort is the container's port; Jenkins serves its web UI on 8080 by default.

port is the port exposed on the service's cluster IP, here 80.

[root@kjdow7-200 ~]# vim ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.phc-dow.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: jenkins
          servicePort: 80

4.5 Create the required resources from a compute node

[root@kjdow7-21 ~]# kubectl create ns infra
namespace/infra created
[root@kjdow7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.phc-dow.com --docker-username=admin --docker-password=Harbor_kjdow1! -n infra
secret/harbor created
### this creates a secret named harbor

Note: create the infra namespace; all ops pods live in this namespace.

The secret supplies the username and password used to pull images from the private infra project; the dp.yaml above references it by name.

4.6 Apply the resource manifests

[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/dp.yaml
deployment.extensions/jenkins created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/service.yaml
service/jenkins created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/ingress.yaml
ingress.extensions/jenkins created
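
A quick sanity check before opening the UI (resource names come from the manifests above):

[root@kjdow7-21 ~]# kubectl get pods -n infra
[root@kjdow7-21 ~]# kubectl get svc,ingress -n infra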

4.7 Open the web UI

[root@kjdow7-200 ~]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword
112f082a79ce4e389be1cf884cc652e8

Browse to jenkins.phc-dow.com, run through the initial setup, and create the user admin with password admin123.

Complete the remaining basic configuration in the UI.

Add the Blue Ocean plugin to Jenkins.

4.8 Verify the Jenkins setup

  • Verify the container user is root
  • Verify the time is correct
  • Verify docker ps -a shows the same output as on the host
  • Verify ssh connects without a yes/no prompt
  • Verify login to the harbor registry already works
  • Verify git can connect using the private key (these checks are sketched below)
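
A minimal sketch of those checks, run from inside the pod (the pod name is taken from a later listing in this article and will differ per rollout):

[root@kjdow7-21 ~]# kubectl exec -it jenkins-67d4b48b54-gd9g7 -n infra -- /bin/bash
root@jenkins-67d4b48b54-gd9g7:/# whoami                        # expect root
root@jenkins-67d4b48b54-gd9g7:/# date                          # expect Asia/Shanghai (CST) time
root@jenkins-67d4b48b54-gd9g7:/# docker ps -a                  # should match the host's container list
root@jenkins-67d4b48b54-gd9g7:/# ssh -T git@github.com         # should not prompt yes/no for the host key
root@jenkins-67d4b48b54-gd9g7:/# cat /root/.docker/config.json # harbor credentials baked into the image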

5. Installing and deploying Maven

Official Maven download address:

### check the java version inside the jenkins pod
[root@kjdow7-22 ~]# kubectl get pod -n infra -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
jenkins-67d4b48b54-gd9g7   1/1     Running   0          33m   172.7.22.7   kjdow7-22.host.com   <none>           <none>
[root@kjdow7-22 ~]# kubectl exec jenkins-67d4b48b54-gd9g7  -it  /bin/bash -n infra
root@jenkins-67d4b48b54-gd9g7:/# java -version
openjdk version "1.8.0_232"
OpenJDK Runtime Environment (build 1.8.0_232-b09)
OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)

### download the software
[root@kjdow7-200 ~]# wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
[root@kjdow7-200 ~]# tar xf apache-maven-3.6.1-bin.tar.gz -C /data/nfs-volume/jenkins_home/
[root@kjdow7-200 ~]# cd /data/nfs-volume/jenkins_home/
[root@kjdow7-200 jenkins_home]# mv apache-maven-3.6.1 maven-3.6.1-8u232
[root@kjdow7-200 ~]# vi /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/conf/settings.xml
  <mirrors>
    <mirror>
      <id>alimaven</id>
      <name>aliyun maven</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
    <!-- mirror
     | Specifies a repository mirror site to use instead of a given repository. The repository that
     | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used
     | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.
     |
     -->
  </mirrors>
### add this at the matching place in the file; since jenkins_home is shared over NFS, the jenkins pod picks up the change automatically
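
A quick check from inside the jenkins pod that the NFS-shared Maven is usable (path matches the directory created above):

root@jenkins-67d4b48b54-gd9g7:/# /var/jenkins_home/maven-3.6.1-8u232/bin/mvn -version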

6. Building the dubbo microservice base image

6.1 Custom Dockerfile

Note: we need a base image that provides a Java runtime.

[root@kjdow7-200 ~]# docker pull docker.io/stanleyws/jre8:8u112
[root@kjdow7-200 ~]# docker images | grep jre8
stanleyws/jre8                                    8u112                      fa3a085d6ef1        2 years ago         363MB
[root@kjdow7-200 ~]# docker tag fa3a085d6ef1 harbor.phc-dow.com/public/jre8:8u112
[root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/jre8:8u112
[root@kjdow7-200 ~]# mkdir /data/dockerfile/jre8
[root@kjdow7-200 ~]# cd /data/dockerfile/jre8
[root@kjdow7-200 jre8]# vim Dockerfile
FROM docker.io/stanleyws/jre8:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
WORKDIR /opt/project_dir
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]

Note: the third line adds the Prometheus monitoring config file.

The fourth line adds the jar that Prometheus uses as a javaagent to monitor the JVM.

### prepare the other required files
[root@kjdow7-200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
————————————————————————————————————————————————————————————————————————
[root@kjdow7-200 jre8]# vi config.yml
---
rules:
  - pattern: '.*'
————————————————————————————————————————————————————————————————————————    
[root@kjdow7-200 jre8]# vi entrypoint.sh
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
[root@kjdow7-200 jre8]# chmod +x entrypoint.sh 

[root@kjdow7-200 jre8]# ll
total 372
-rw-r--r-- 1 root root     29 Jan 29 23:11 config.yml
-rw-r--r-- 1 root root    297 Jan 29 22:54 Dockerfile
-rwxr-xr-x 1 root root    234 Jan 29 23:11 entrypoint.sh
-rw-r--r-- 1 root root 367417 May 10  2018 jmx_javaagent-0.3.1.jar


Notes on entrypoint.sh:

C_OPTS=${C_OPTS} passes through whatever value the k8s resource manifest injects via an environment variable.

${M_PORT:-"12346"} means: if M_PORT has not been assigned a value, default to 12346.

The last line starts with exec because once the shell script finished, the container would otherwise die; exec hands the shell's PID to the command that follows, so as long as java stays alive, the pod stays alive.

The shell builtin exec starts no new shell: it replaces the current shell process with the command to be executed, cleans up the old process's environment, and nothing after the exec line is executed.
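
A toy illustration of the two shell features at work here, safe to run in a scratch shell:

#!/bin/sh
unset M_PORT
echo ${M_PORT:-12346}      # M_PORT unset -> prints the default, 12346
M_PORT=9999
echo ${M_PORT:-12346}      # M_PORT set   -> prints 9999
exec sleep 60              # replaces this shell process with sleep
echo "never printed"       # unreachable: nothing after exec runs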

6.2 Create the project in the Harbor UI

In Harbor, create a project named base to hold all business base images; access level Public.

6.3 Build and push the image

[root@kjdow7-200 jre8]# docker build -t harbor.phc-dow.com/base/jre8:8u112 .
[root@kjdow7-200 jre8]# docker push harbor.phc-dow.com/base/jre8:8u112

7. Continuously building and delivering the dubbo service provider with Jenkins

7.1 Create a new project

Create a pipeline project named dubbo-demo.

7.2 Discard old builds: keep builds for 3 days, at most 30

Enable parameterized build.

7.3 The ten parameters of the Jenkins pipeline

  • app_name --> project name

  • image_name --> image name

  • git_repo --> git repository URL of the project

  • git_ver --> git version number or branch of the project

  • add_tag --> image tag suffix, a datetime stamp (e.g. 20200130_1421)

  • mvn_dir --> directory in which to run the maven build

  • target_dir --> directory containing the jar/war packages produced by the build

  • mvn_cmd --> command used to build the project

  • base_image --> docker base image of the project

  • maven --> maven version to use

7.4 Pipeline code

pipeline {
  agent any
    stages {
	  stage('pull') {
	    steps {
		  sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} &&  git checkout ${params.git_ver}"
		}
	  }
	  stage('build') {
	    steps {
		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
		}
	  }
	  stage('package') {
	    steps {
		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
		}
	  }
	  stage('image') {
	    steps {
		  writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.phc-dow.com/${params.base_image} 
		  ADD ${params.target_dir}/project_dir /opt/project_dir"""
		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.phc-dow.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.phc-dow.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
		}
	  }
	}
}

7.5 Pre-build preparation

In Harbor, create a private project named app.

7.6 Run the build

Open the Jenkins page, start a build, and fill in the parameter values.

Fill in / select, in order:
app_name:       dubbo-demo-service
image_name:     app/dubbo-demo-service
git_repo:       https://github.com/zizhufanqing/dubbo-demo-service.git
git_ver:        master
add_tag:        202001311655
mvn_dir:        ./
target_dir:     ./dubbo-server/target
mvn_cmd:        mvn clean package -Dmaven.test.skip=true
base_image:     base/jre8:8u112
maven:          3.6.1-8u232
Click Build and wait for the build to finish.

Note: the public key has already been added on GitHub.

  • After the build finishes, check the automatically pushed image in Harbor's app project.

7.7 Prepare the resource manifests

On kjdow7-200:

[root@kjdow7-200 ~]# mkdir /data/k8s-yaml/dubbo-demo-service
[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-service/dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: app
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.phc-dow.com/app/dubbo-demo-service:master_202001311655
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

Note: JAR_BALL is assigned its value here, and the pod is created from the image built above. Because Harbor's app project is private, the corresponding namespace and secret must be created in k8s first.

7.8 Preparation before applying the manifests

[root@kjdow7-21 ~]# kubectl create ns app
namespace/app created
[root@kjdow7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.phc-dow.com --docker-username=admin --docker-password=Harbor_kjdow1! -n app
secret/harbor created

Note: the secret's name must match the value of imagePullSecrets.name in the dp.yaml above; the name itself is arbitrary, but the reference must use the same name.

7.9 Apply the resource manifests

  • Before applying
[root@kjdow7-11 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kjdow7-11 zookeeper]# bin/zkCli.sh -server localhost:2181

WATCHER::
WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]

Note: at this point only the zookeeper znode exists.

  • Apply the resource manifest
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service created

  • After applying
[zk: localhost:2181(CONNECTED) 0] ls /
[dubbo, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /dubbo
[com.od.dubbotest.api.HelloService]

Note: the service has registered itself automatically. The demo code hard-codes the registry address zk1.od.com, while our domain is phc-dow.com, so either add an od.com zone file to bind or patch the source; a hedged bind sketch follows.
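
A hedged sketch of the bind route (the zone declaration and records are assumptions modeled on the phc-dow.com zone used throughout this article):

[root@kjdow7-11 ~]# vim /etc/named.rfc1912.zones
zone "od.com" IN {
        type master;
        file "od.com.zone";
};
[root@kjdow7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600    ; 10 minutes
@       IN SOA dns.od.com. dnsadmin.od.com. (
                                2020010201   ; serial
                                10800        ; refresh (3 hours)
                                900          ; retry  (15 minutes)
                                604800       ; expire (1 week)
                                86400        ; minimum (1 day)
                )
                        NS   dns.od.com.
$TTL 60 ; 1 minute
dns     A    10.4.7.11
zk1     A    10.4.7.11
zk2     A    10.4.7.12
zk3     A    10.4.7.21
[root@kjdow7-11 ~]# systemctl restart named
[root@kjdow7-11 ~]# dig -t A zk1.od.com @10.4.7.11 +short
10.4.7.11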

8. Delivering dubbo-monitor to the K8S cluster

8.1 Download the source

dubbo-monitor download address:

[root@kjdow7-200 ~]# wget https://github.com/Jeromefromcn/dubbo-monitor/archive/master.zip
[root@kjdow7-200 ~]# unzip master.zip
[root@kjdow7-200 ~]# mv dubbo-monitor-master /opt/src/dubbo-monitor

8.2 Modify the source

[root@kjdow7-200 ~]# vim /opt/src/dubbo-monitor/dubbo-monitor-simple/conf/dubbo_origin.properties
dubbo.application.name=kjdow-monitor
dubbo.application.owner=kjdow
dubbo.registry.address=zookeeper://zk1.phc-dow.com:2181?backup=zk2.phc-dow.com:2181,zk3.phc-dow.com:2181
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.charts.directory=/dubbo-monitor-simple/charts


8.3 Prepare the files and build the image

[root@kjdow7-200 ~]# mkdir /data/dockerfile/dubbo-monitor
[root@kjdow7-200 ~]# cp -r /opt/src/dubbo-monitor/* /data/dockerfile/dubbo-monitor/
[root@kjdow7-200 ~]# cd /data/dockerfile/dubbo-monitor/
[root@kjdow7-200 dubbo-monitor]# ls
Dockerfile  dubbo-monitor-simple  README.md
[root@kjdow7-200 dubbo-monitor]# vim  ./dubbo-monitor-simple/bin/start.sh
if [ -n "$BITS" ]; then
    JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
else
    JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
fi

echo -e "Starting the $SERVER_NAME ...\c"
exec  java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1 

Note: lines 59 and 61 of the script tune the JVM memory options.

On line 64, the java launch line is changed to start with exec and the trailing & is removed, so java runs in the foreground and takes over the shell's PID; everything below that line is deleted.

[root@kjdow7-200 dubbo-monitor]# docker build -t harbor.phc-dow.com/infra/dubbo-monitor:latest .
[root@kjdow7-200 ~]# docker push harbor.phc-dow.com/infra/dubbo-monitor:latest

8.4 Prepare the k8s resource manifests

[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-monitor
  namespace: infra
  labels: 
    name: dubbo-monitor
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-monitor
  template:
    metadata:
      labels: 
        app: dubbo-monitor
        name: dubbo-monitor
    spec:
      containers:
      - name: dubbo-monitor
        image: harbor.phc-dow.com/infra/dubbo-monitor:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-monitor
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  rules:
  - host: dubbo-monitor.phc-dow.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-monitor
          servicePort: 8080

8.5 Preparation before applying the manifests: DNS resolution

[root@kjdow7-11 ~]# vim /var/named/phc-dow.com.zone
$ORIGIN  phc-dow.com.
$TTL  600   ; 10 minutes
@        IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. (
                                2020010207   ; serial           # bump the serial by one
                                10800        ; refresh (3 hours)
                                900          ; retry  (15 minutes)
                                604800       ; expire (1 week)
                                86400        ; minimum (1 day)
                )
                        NS   dns.phc-dow.com.
$TTL  60 ; 1 minute
dns                A         10.4.7.11
harbor             A         10.4.7.200
k8s-yaml           A         10.4.7.200
traefik            A         10.4.7.10
dashboard          A         10.4.7.10
zk1     60 IN      A         10.4.7.11
zk2     60 IN      A         10.4.7.12
zk3     60 IN      A         10.4.7.21
dubbo-monitor      A         10.4.7.10                          # add this line
[root@kjdow7-11 ~]# systemctl restart named
[root@kjdow7-11 ~]# dig -t A dubbo-monitor.phc-dow.com @10.4.7.11 +short
10.4.7.10

8.6 Apply the k8s resource manifests

[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/dp.yaml
deployment.extensions/dubbo-monitor created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/svc.yaml
service/dubbo-monitor created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/ingress.yaml
ingress.extensions/dubbo-monitor created

8.7 Browse to the page

http://dubbo-monitor.phc-dow.com

9. Delivering the dubbo service consumer cluster to K8S

9.1 Continuously build the dubbo consumer image with Jenkins

Fill in / select, in order:
app_name:       dubbo-demo-consumer
image_name:     app/dubbo-demo-consumer
git_repo:       git@github.com:zizhufanqing/dubbo-demo-web.git
git_ver:        master
add_tag:        202002011530
mvn_dir:        ./
target_dir:     ./dubbo-client/target
mvn_cmd:        mvn clean package -Dmaven.test.skip=true
base_image:     base/jre8:8u112
maven:          3.6.1-8u232
Click Build and wait for the build to finish.

Note: after the build finishes, check the automatically pushed image in Harbor's app project.

9.2 Prepare the resource manifests

[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: app
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.phc-dow.com/app/dubbo-demo-consumer:master_202002011530
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-demo-consumer
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
[root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  rules:
  - host: demo.phc-dow.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-demo-consumer
          servicePort: 8080

9.3 Preparation before applying the manifests: DNS resolution

[root@kjdow7-11 ~]# vim /var/named/phc-dow.com.zone
$ORIGIN  phc-dow.com.
$TTL  600   ; 10 minutes
@        IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. (
                                2020010208   ; serial
                                10800        ; refresh (3 hours)
                                900          ; retry  (15 minutes)
                                604800       ; expire (1 week)
                                86400        ; minimum (1 day)
                )
                        NS   dns.phc-dow.com.
$TTL  60 ; 1 minute
dns                A         10.4.7.11
harbor             A         10.4.7.200
k8s-yaml           A         10.4.7.200
traefik            A         10.4.7.10
dashboard          A         10.4.7.10
zk1     60 IN      A         10.4.7.11
zk2     60 IN      A         10.4.7.12
zk3     60 IN      A         10.4.7.21
dubbo-monitor      A         10.4.7.10
demo               A         10.4.7.10
[root@kjdow7-11 ~]# systemctl restart named
[root@kjdow7-11 ~]# dig -t A demo.phc-dow.com @10.4.7.11 +short
10.4.7.10

9.4 Apply the resource manifests

[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/dp.yaml
deployment.extensions/dubbo-demo-consumer created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/svc.yaml
service/dubbo-demo-consumer created
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/ingress.yaml
ingress.extensions/dubbo-demo-consumer created

9.5 Verify

  • Check the dubbo-monitor page

http://dubbo-monitor.phc-dow.com/applications.html

The Applications page now shows the three deployed applications (provider, consumer, and monitor).

  • Open the demo page
http://demo.phc-dow.com/hello?name=wanglei

Note: this invokes the hello method through the consumer; the consumer calls the provider's hello method over RPC and returns the result.
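
The same call from the command line (the exact response text depends on the demo code):

[root@kjdow7-200 ~]# curl 'http://demo.phc-dow.com/hello?name=wanglei'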

III. Day-to-day maintenance of the dubbo cluster

1. Continuous integration and deployment with Jenkins

  • 1. Jenkins pulls the new code from git and builds it the same way as above.
  • 2. Jenkins automatically produces a new image of the app.
  • 3. Change the image used by the corresponding service in k8s; k8s performs a rolling update automatically (sketched below).
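
Step 3 can be done by editing the image tag in dp.yaml and re-applying it, or imperatively; a sketch of the imperative form, where the tag stands for a hypothetical newer build:

[root@kjdow7-21 ~]# kubectl set image deployment/dubbo-demo-service \
    dubbo-demo-service=harbor.phc-dow.com/app/dubbo-demo-service:master_202002020900 -n app
[root@kjdow7-21 ~]# kubectl rollout status deployment/dubbo-demo-service -n app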

2. Scaling services up and down

  • 1. Change the number of pod replicas declared in the deployment.
  • 2. Apply the updated manifest.
  • 3. k8s scales the service up or down automatically (sketched below).
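
A sketch of the equivalent imperative command (replica count chosen arbitrarily):

[root@kjdow7-21 ~]# kubectl scale deployment dubbo-demo-service --replicas=2 -n app
[root@kjdow7-21 ~]# kubectl get pods -n app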

