3. Delivering Dubbo Microservices to a Kubernetes Cluster


1. Infrastructure

1.1. Architecture Diagram

  • Zookeeper is the registry for the Dubbo microservice cluster
  • Its high-availability mechanism is the same as that of the k8s etcd cluster
  • It is written in Java, so it requires a JDK environment

1.2. Node Plan

Hostname             Role                              IP
hdss7-11.host.com    k8s proxy node 1, zk1             10.4.7.11
hdss7-12.host.com    k8s proxy node 2, zk2             10.4.7.12
hdss7-21.host.com    k8s compute node 1, zk3           10.4.7.21
hdss7-22.host.com    k8s compute node 2, jenkins       10.4.7.22
hdss7-200.host.com   k8s ops node (docker registry)    10.4.7.200

2. Deploying Zookeeper

2.1. Install JDK 1.8 (required on all 3 zk nodes)

// Extract and create a symlink
[root@hdss7-11 src]# mkdir /usr/java
[root@hdss7-11 src]# tar xf jdk-8u221-linux-x64.tar.gz  -C /usr/java/
[root@hdss7-11 src]# ln -s /usr/java/jdk1.8.0_221/ /usr/java/jdk
[root@hdss7-11 src]# cd /usr/java/
[root@hdss7-11 java]# ll
total 0
lrwxrwxrwx 1 root root  23 Nov 30 17:38 jdk -> /usr/java/jdk1.8.0_221/
drwxr-xr-x 7   10  143 245 Jul  4 19:37 jdk1.8.0_221

// Set environment variables
[root@hdss7-11 java]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

// Source the profile and verify
[root@hdss7-11 java]# source /etc/profile
[root@hdss7-11 java]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)

2.2. Install Zookeeper (on all 3 nodes)

Zookeeper official download site

2.2.1. Extract and create a symlink

[root@hdss7-11 src]# tar xf zookeeper-3.4.14.tar.gz  -C /opt/
[root@hdss7-11 src]# ln -s /opt/zookeeper-3.4.14/ /opt/zookeeper

2.2.2. Create the data and log directories

[root@hdss7-11 opt]# mkdir  -pv /data/zookeeper/data /data/zookeeper/logs
mkdir: created directory ‘/data’
mkdir: created directory ‘/data/zookeeper’
mkdir: created directory ‘/data/zookeeper/data’
mkdir: created directory ‘/data/zookeeper/logs’

2.2.3. Configuration

// Same on every node
[root@hdss7-11 opt]# vi /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.od.com:2888:3888
server.2=zk2.od.com:2888:3888
server.3=zk3.od.com:2888:3888
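
zoo.cfg is identical on all three nodes; one way to distribute it (a sketch, assuming root ssh access between the hosts, reachable here by IP since DNS is only configured in a later step):

[root@hdss7-11 opt]# scp /opt/zookeeper/conf/zoo.cfg 10.4.7.12:/opt/zookeeper/conf/
[root@hdss7-11 opt]# scp /opt/zookeeper/conf/zoo.cfg 10.4.7.21:/opt/zookeeper/conf/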

myid

// Different on each node
[root@hdss7-11 opt]# vi /data/zookeeper/data/myid
1
[root@hdss7-12 opt]# vi /data/zookeeper/data/myid
2
[root@hdss7-21 opt]# vi /data/zookeeper/data/myid
3

2.2.4. Configure DNS records

[root@hdss7-11 opt]# vi /var/named/od.com.zone

$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2019111006 ; serial                        // increment the serial by 1
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
zk1                A    10.4.7.11
zk2                A    10.4.7.12
zk3                A    10.4.7.21

[root@hdss7-11 opt]# systemctl restart named
[root@hdss7-11 opt]# dig -t A zk1.od.com @10.4.7.11 +short
10.4.7.11

2.2.5. Start each node in turn and verify

Start

[root@hdss7-11 opt]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hdss7-12 opt]# /opt/zookeeper/bin/zkServer.sh start

[root@hdss7-21 opt]# /opt/zookeeper/bin/zkServer.sh start

Check

[root@hdss7-11 opt]# netstat -ntlup|grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN      69157/java   

[root@hdss7-11 opt]# zookeeper/bin/zkServer.sh  status  
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@hdss7-12 opt]# zookeeper/bin/zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
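
Another quick health check uses ZooKeeper's four-letter commands (enabled by default in the 3.4.x series; nc comes from the nmap-ncat package). The Mode line should match what zkServer.sh status reported above:

[root@hdss7-11 opt]# echo stat | nc zk1.od.com 2181 | grep Mode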

3. Deploying Jenkins

Jenkins official site
Jenkins image (Docker Hub)

3.1. Prepare the image

On hdss7-200

[root@hdss7-200 ~]# docker pull jenkins/jenkins:2.190.3
[root@hdss7-200 ~]# docker images |grep jenkins
[root@hdss7-200 ~]# docker tag 22b8b9a84dbe harbor.od.com/public/jenkins:v2.190.3
[root@hdss7-200 ~]# docker push  harbor.od.com/public/jenkins:v2.190.3

3.2. Prepare the custom image

3.2.1. Generate an SSH key pair

[root@hdss7-200 ~]# ssh-keygen -t rsa -b 2048 -C "8614610@qq.com" -N "" -f /root/.ssh/id_rsa
  • Use your own email address here
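
The matching public key is what gets added to Gitee/GitLab as a deploy key; view it with (assuming the default path used above):

[root@hdss7-200 ~]# cat /root/.ssh/id_rsa.pub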

3.2.2. Prepare the get-docker.sh file

[root@hdss7-200 ~]#  curl -fsSL get.docker.com -o get-docker.sh
[root@hdss7-200 ~]# chmod +x  get-docker.sh

3.2.3. Prepare the config.json file

cp /root/.docker/config.json  .
cat  /root/.docker/config.json
{
	"auths": {
		"harbor.od.com": {
			"auth": "YWRtaW46SGFyYm9yMTIzNDU="
		}
	},
	"HttpHeaders": {
		"User-Agent": "Docker-Client/19.03.4 (linux)"
	}
}
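
The auth value is simply base64 of "username:password"; it can be reproduced (or generated for another harbor account) like this:

[root@hdss7-200 ~]# echo -n 'admin:Harbor12345' | base64
YWRtaW46SGFyYm9yMTIzNDU=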

3.2.4. Create the directory and prepare the Dockerfile

[root@hdss7-200 ~]# mkdir /data/dockerfile/jenkins -p
[root@hdss7-200 ~]# cd /data/dockerfile/jenkins/

[root@hdss7-200 jenkins]# vi Dockerfile
FROM harbor.od.com/public/jenkins:v2.190.3
USER root
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD id_rsa /root/.ssh/id_rsa
ADD config.json /root/.docker/config.json
ADD get-docker.sh /get-docker.sh
RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\
    /get-docker.sh
  • Run the container as the root user
  • Set the container's timezone
  • Add the generated SSH private key (used by git when pulling code; the matching public key is configured in GitLab/Gitee)
  • Add the config.json used to log in to the self-hosted harbor registry
  • Modify the ssh client config so host key (fingerprint) checking is skipped
  • Install a docker client // if the build fails, append --mirror=Aliyun to the get-docker.sh invocation

3.3. Build the custom image

// Gather the required files in /data/dockerfile/jenkins
[root@hdss7-200 jenkins]# pwd
/data/dockerfile/jenkins
[root@hdss7-200 jenkins]# ll
total 32
-rw------- 1 root root   151 Nov 30 18:35 config.json
-rw-r--r-- 1 root root   349 Nov 30 18:31 Dockerfile
-rwxr-xr-x 1 root root 13216 Nov 30 18:31 get-docker.sh
-rw------- 1 root root  1675 Nov 30 18:35 id_rsa

// Run the build
 docker build . -t harbor.od.com/infra/jenkins:v2.190.3 

// With the public key uploaded to Gitee, test whether this image can connect successfully
[root@hdss7-200 harbor]# docker run --rm harbor.od.com/infra/jenkins:v2.190.3 ssh -i /root/.ssh/id_rsa -T git@gitee.com
Warning: Permanently added 'gitee.com,212.64.62.174' (ECDSA) to the list of known hosts.
Hi StanleyWang (DeployKey)! You've successfully authenticated, but GITEE.COM does not provide shell access.
Note: Perhaps the current use is DeployKey.
Note: DeployKey only supports pull/fetch operations

3.4. Create the infra repository in harbor

3.5. Create the Kubernetes namespace and a secret in it

[root@hdss7-21 ~]# kubectl create namespace infra
[root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n infra
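
An optional check that the namespace and secret now exist:

[root@hdss7-21 ~]# kubectl get secret harbor -n infra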

3.6. Push the image

[root@hdss7-200 jenkins]# docker push harbor.od.com/infra/jenkins:v2.190.3

3.7. Prepare shared storage

On the ops host hdss7-200 and all compute nodes (here, hdss7-21 and hdss7-22)

3.7.1. Install nfs-utils

[root@hdss7-200 jenkins]# yum install nfs-utils -y

3.7.2. Configure the NFS service

On the ops host hdss7-200

[root@hdss7-200 jenkins]# vi /etc/exports
/data/nfs-volume 10.4.7.0/24(rw,no_root_squash)

3.7.3. Start the NFS service

On the ops host hdss7-200

[root@hdss7-200 ~]# mkdir -p /data/nfs-volume
[root@hdss7-200 ~]# systemctl start nfs
[root@hdss7-200 ~]# systemctl enable nfs
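
Verify the export is visible from a compute node (nfs-utils provides showmount):

[root@hdss7-21 ~]# showmount -e 10.4.7.200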

3.8. Prepare the resource manifests

On the ops host hdss7-200

[root@hdss7-200 ~]# cd /data/k8s-yaml/
[root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/jenkins && mkdir /data/nfs-volume/jenkins_home && cd  jenkins

dp.yaml

[root@hdss7-200 jenkins]# vi dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels: 
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: jenkins
  template:
    metadata:
      labels: 
        app: jenkins 
        name: jenkins
    spec:
      volumes:
      - name: data
        nfs: 
          server: hdss7-200
          path: /data/nfs-volume/jenkins_home
      - name: docker
        hostPath: 
          path: /run/docker.sock
          type: ''
      containers:
      - name: jenkins
        image: harbor.od.com/infra/jenkins:v2.190.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512m -Xms512m
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
      imagePullSecrets:
      - name: harbor
      securityContext: 
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

svc.yaml

[root@hdss7-200 jenkins]# vi svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: jenkins
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: jenkins

ingress.yaml

kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: jenkins
          servicePort: 80

3.9. Apply the resource manifests

On any compute node

[root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/dp.yaml
[root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/svc.yaml
[root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/ingress.yaml

[root@hdss7-21 etcd]# kubectl get pods -n infra
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-54b8469cf9-v46cc   1/1     Running   0          168m
[root@hdss7-21 etcd]# kubectl get all  -n infra
NAME                           READY   STATUS    RESTARTS   AGE
pod/jenkins-54b8469cf9-v46cc   1/1     Running   0          169m


NAME              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
service/jenkins   ClusterIP   192.168.183.210   <none>        80/TCP    2d21h


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1/1     1            1           2d21h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/jenkins-54b8469cf9   1         1         1       2d18h
replicaset.apps/jenkins-6b6d76f456   0         0         0       2d21h

3.10. Add the DNS record

On hdss7-11

[root@hdss7-11 ~]# vi /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2019111007 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
...
...
jenkins            A    10.4.7.10

[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A jenkins.od.com @10.4.7.11 +short
10.4.7.10

3.11. Access from a browser

Visit http://jenkins.od.com; you will be asked for the initial admin password.

View the initial password (it can also be found in the log):

[root@hdss7-200 jenkins]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword

3.12. Configure Jenkins in the web UI

3.12.1. Set the username and password

Username: admin, password: admin123 // later steps depend on this password, so be sure to set exactly this one

3.12.2. Configure Global Security

Allow anonymous read access

Uncheck the option that blocks cross-origin requests (CSRF protection)

3.12.3. Install the Blue Ocean pipeline plugin

Note: if plugin installation is slow, switch to the Tsinghua University mirror.
On hdss7-200

cd /data/nfs-volume/jenkins_home/updates
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
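
For the new update-center URL to take effect, restart Jenkins; one way (assuming the Deployment from sections 3.8/3.9) is to delete the pod and let it be recreated:

[root@hdss7-21 ~]# kubectl -n infra delete pod -l name=jenkins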

4. Final preparation

4.1. Check the docker client inside the Jenkins container

Verify the current user and the timezone, and that the docker.sock file is usable.

Verify that the secret created in the Kubernetes namespace can log in to the harbor registry. A sketch of these checks follows.
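
A minimal sketch of the checks, run from any compute node (replace the pod-name placeholder with the real one from `kubectl get pods -n infra`):

[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- whoami        // should be root
[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- date          // should show Asia/Shanghai time
[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- docker ps     // works because /run/docker.sock is mounted in
[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- docker pull harbor.od.com/infra/jenkins:v2.190.3   // uses the baked-in config.json against the private infra project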

4.2. Check the SSH key inside the Jenkins container

Verify that the private key inside the container can authenticate to Gitee and pull code; a sketch follows.
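
Same idea as the docker-run test in section 3.3, but against the running pod (pod name is a placeholder):

[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- ssh -i /root/.ssh/id_rsa -T git@gitee.com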

4.3. Deploy Maven

Java build tooling has evolved over the years: javac --> ant --> maven --> Gradle.
Deploy the maven-3.6.1 binary release on the ops host hdss7-200.
The mvn command is a shell script; if you need to use JDK 7, it can be changed inside that script.

4.3.1. Download the package

Maven official download site

4.3.2. Create the directory and extract

The 8u232 suffix in the directory name matches the JDK version of the Jenkins image running in the container; follow this naming exactly.

[root@hdss7-200 src]# mkdir /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
[root@hdss7-200 src]# tar xf apache-maven-3.6.1-bin.tar.gz  -C /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
[root@hdss7-200 src]# cd /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
[root@hdss7-200 maven-3.6.1-8u232]# ls
apache-maven-3.6.1
[root@hdss7-200 maven-3.6.1-8u232]# mv apache-maven-3.6.1/ ../ && mv ../apache-maven-3.6.1/* .
[root@hdss7-200 maven-3.6.1-8u232]# ll
total 28
drwxr-xr-x 2 root root     97 Dec  3 19:04 bin
drwxr-xr-x 2 root root     42 Dec  3 19:04 boot
drwxr-xr-x 3  501 games    63 Apr  5  2019 conf
drwxr-xr-x 4  501 games  4096 Dec  3 19:04 lib
-rw-r--r-- 1  501 games 13437 Apr  5  2019 LICENSE
-rw-r--r-- 1  501 games   182 Apr  5  2019 NOTICE
-rw-r--r-- 1  501 games  2533 Apr  5  2019 README.txt

4.3.3. Point settings.xml at a domestic mirror

Search and replace:
[root@hdss7-200 maven-3.6.1-8u232]# vi /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/conf/settings.xml
<mirror>
  <id>nexus-aliyun</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus aliyun</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
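
A quick way to confirm this Maven is usable from inside the Jenkins pod, where the pipeline will later call it as /var/jenkins_home/maven-3.6.1-8u232/bin/mvn (pod name is a placeholder):

[root@hdss7-21 ~]# kubectl -n infra exec -it <jenkins-pod-name> -- /var/jenkins_home/maven-3.6.1-8u232/bin/mvn -v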

4.4. Build the base image for the Dubbo microservices

On the ops host hdss7-200

4.4.1. Custom Dockerfile

[root@hdss7-200 dockerfile]# mkdir jre8
[root@hdss7-200 dockerfile]# cd jre8/
[root@hdss7-200 jre8]# pwd
/data/dockerfile/jre8

[root@hdss7-200 jre8]# vi Dockerfile
FROM harbor.od.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
WORKDIR /opt/project_dir
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]
  • config.yml: the Prometheus monitoring match rules
  • jmx_javaagent jar: the Java agent that collects JVM metrics
  • entrypoint.sh: the default startup script run when docker starts the container

4.4.2. Prepare the JRE base image (for JDK 7 there is a 7u80 variant)

[root@hdss7-200 jre8]# docker pull docker.io/stanleyws/jre8:8u112
[root@hdss7-200 jre8]# docker images |grep jre
stanleyws/jre8                     8u112                      fa3a085d6ef1        2 years ago         363MB
[root@hdss7-200 jre8]# docker tag fa3a085d6ef1 harbor.od.com/public/jre:8u112
[root@hdss7-200 jre8]# docker push  harbor.od.com/public/jre:8u112

4.4.3. Prepare the java-agent jar

[root@hdss7-200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar

4.4.4. Prepare config.yml and entrypoint.sh

[root@hdss7-200 jre8]# vi config.yml
---
rules:
  - pattern: '.*'

[root@hdss7-200 jre8]# vi entrypoint.sh
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}

[root@hdss7-200 jre8]# chmod +x entrypoint.sh 
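
entrypoint.sh stitches three environment variables into the final java command: M_PORT (the jmx_javaagent metrics port, default 12346), C_OPTS (extra JVM or application options) and JAR_BALL (the jar to run from the WORKDIR /opt/project_dir). For a pod that sets JAR_BALL=dubbo-server.jar, as in section 5.5.2, the effective command is roughly the following sketch (<pod-ip> is whatever `hostname -i` returns inside the pod):

exec java -jar -Duser.timezone=Asia/Shanghai \
  -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=<pod-ip>:12346:/opt/prom/config.yml \
  dubbo-server.jar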
  

4.4.5. Create the base project in harbor

4.4.6. Build the Dubbo base image and push it to harbor

[root@hdss7-200 jre8]# docker build . -t harbor.od.com/base/jre8:8u112

[root@hdss7-200 jre8]# docker push  harbor.od.com/base/jre8:8u112

5. Delivering Dubbo microservices to the Kubernetes cluster

5.1. Configure a new pipeline job

Add build parameters:
// The following settings were distilled by the instructor (王導) from years of production experience -- the ops "pass-the-buck" method (look sharp, move fast)

Log in to Jenkins -----> New Item -----> item name: dubbo-demo -----> Pipeline -----> OK

Set how many old builds to keep: here, keep builds for 3 days, at most 30 builds

Check "This project is parameterized" to use parameterized builds

Add 8 String Parameters (check "Trim the string" for each):
app_name

image_name

git_repo

git_ver

add_tag

mvn_dir

target_dir

mvn_cmd

Add 2 Choice Parameters:
base_image

maven

5.2. Pipeline Script

pipeline {
  agent any 
    stages {
      stage('pull') { //get project code from repo 
        steps {
          sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
        }
      }
      stage('build') { //exec mvn cmd
        steps {
          sh "cd ${params.app_name}/${env.BUILD_NUMBER}  && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
        }
      }
      stage('package') { //move jar file into project_dir
        steps {
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
        }
      }
      stage('image') { //build image and push to registry
        steps {
          writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir"""
          sh "cd  ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
        }
      }
    }
}

5.3. Create the app project in harbor to manage the Dubbo service images

5.4. Create the app namespace and add a secret

On any compute node
A secret is added because images will be pulled from the private app repository

[root@hdss7-21 bin]# kubectl create ns app
namespace/app created
[root@hdss7-21 bin]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n app
secret/harbor created

5.5. Deliver dubbo-demo-service

5.5.1. Pass parameters in Jenkins, build the dubbo-demo-service image, and push it to harbor
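
For reference, an illustrative set of parameter values for this build (the git_repo URL, mvn_dir/target_dir and mvn_cmd depend on your own repository layout and are only examples; the tag matches the image used in section 5.5.2):

app_name    : dubbo-demo-service
image_name  : app/dubbo-demo-service
git_repo    : <your dubbo-demo-service git URL>
git_ver     : master
add_tag     : 191201_1200
mvn_dir     : ./
target_dir  : ./dubbo-server/target
mvn_cmd     : mvn clean package -Dmaven.test.skip=true
base_image  : base/jre8:8u112
maven       : 3.6.1-8u232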



5.5.2. Create the resource manifests for dubbo-demo-service

Note: replace the image in dp.yaml with the name of the image you just built
On hdss7-200

[root@hdss7-200 dubbo-demo-service]# vi dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: app
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:master_191201_1200
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

5.5.3. Apply the dubbo-demo-service manifests

On any compute node

[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service created

5.5.4. Check startup status

Check the logs in the dashboard

Check the zk registry

5.6. Deliver dubbo-monitor

dubbo-monitor is simply a tool that reads data out of the registry and displays it.
There are two open-source options: 1. dubbo-admin 2. dubbo-monitor. We use dubbo-monitor here.

5.6.1. Prepare the docker image

5.6.1.1. Download and extract the source package
[root@hdss7-200 src]# ll|grep dubbo
-rw-r--r-- 1 root root  23468109 Dec  4 11:50 dubbo-monitor-master.zip
[root@hdss7-200 src]# unzip  dubbo-monitor-master.zip 
[root@hdss7-200 src]# ll|grep dubbo
drwxr-xr-x 3 root root        69 Jul 27  2016 dubbo-monitor-master
-rw-r--r-- 1 root root  23468109 Dec  4 11:50 dubbo-monitor-master.zip
5.6.1.2. Modify the following configuration items
[root@hdss7-200 conf]# pwd
/opt/src/dubbo-monitor-master/dubbo-monitor-simple/conf

[root@hdss7-200 conf]# vi dubbo_origin.properties 
dubbo.registry.address=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.statistics.directory=/dubbo-monitor-simple/statistics
dubbo.charts.directory=/dubbo-monitor-simple/charts
dubbo.log4j.file=logs/dubbo-monitor.log
5.6.1.3. Build the image
5.6.1.3.1. Prepare the environment
  • Because this runs in a VM, the Java heap given here is too large and must be reduced; replace nohup with exec so the process runs in the foreground, remove the trailing &, and delete every line after the nohup line
[root@hdss7-200 bin]# vi /opt/src/dubbo-monitor-master/dubbo-monitor-simple/bin/start.sh 
    JAVA_MEM_OPTS=" -server -Xmx2g -Xms2g -Xmn256m -XX:PermSize=128m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
else
    JAVA_MEM_OPTS=" -server -Xms1g -Xmx1g -XX:PermSize=128m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
fi

echo -e "Starting the $SERVER_NAME ...\c"
nohup java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1 &
  • The sed replacement below uses the sed pattern space
sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}'  ./dubbo-monitor-simple/bin/start.sh && sed  -r -i -e "s%^nohup(.*)%exec \1%"  /opt/src/dubbo-monitor-master/dubbo-monitor-simple/bin/start.sh 

// Reduce the memory settings, and remove the trailing & at the end of the nohup line!!!

  • For a tidy layout, copy the directory under /data
[root@hdss7-200 src]# mv dubbo-monitor-master dubbo-monitor
[root@hdss7-200 src]# cp -a dubbo-monitor /data/dockerfile/
[root@hdss7-200 src]# cd /data/dockerfile/dubbo-monitor/
5.6.1.3.2. Prepare the Dockerfile
[root@hdss7-200 dubbo-monitor]# pwd
/data/dockerfile/dubbo-monitor

[root@hdss7-200 dubbo-monitor]# cat Dockerfile 
FROM jeromefromcn/docker-alpine-java-bash
MAINTAINER Jerome Jiang
COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
CMD /dubbo-monitor-simple/bin/start.sh
5.6.1.3.3. Build the image and push it to harbor
[root@hdss7-200 dubbo-monitor]# docker build . -t harbor.od.com/infra/dubbo-monitor:latest
[root@hdss7-200 dubbo-monitor]# docker push harbor.od.com/infra/dubbo-monitor:latest

5.6.2. Add the DNS record

[root@hdss7-11 ~]# vi /var/named/od.com.zone 
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2019111008 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
  ... (omitted)
dubbo-monitor      A    10.4.7.10

[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A dubbo-monitor.od.com @10.4.7.11 +short
10.4.7.10

5.6.3. Prepare the k8s resource manifests

  • dp.yaml
[root@hdss7-200 k8s-yaml]# pwd
/data/k8s-yaml
[root@hdss7-200 k8s-yaml]# mkdir dubbo-monitor
[root@hdss7-200 k8s-yaml]# cd dubbo-monitor/
[root@hdss7-200 dubbo-monitor]# vi dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-monitor
  namespace: infra
  labels: 
    name: dubbo-monitor
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-monitor
  template:
    metadata:
      labels: 
        app: dubbo-monitor
        name: dubbo-monitor
    spec:
      containers:
      - name: dubbo-monitor
        image: harbor.od.com/infra/dubbo-monitor:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
  • svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-monitor
  • ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  rules:
  - host: dubbo-monitor.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-monitor
          servicePort: 8080

5.6.4. Apply the resource manifests

[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/dp.yaml
deployment.extensions/dubbo-monitor created
[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/svc.yaml
service/dubbo-monitor created
[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/ingress.yaml
ingress.extensions/dubbo-monitor created

5.6.5. Access from a browser

5.7. Deliver dubbo-demo-consumer

5.7.1. Pass parameters in Jenkins, build the dubbo-demo-consumer image, and push it to harbor

Note that Jenkins keeps a local cache of the jar dependencies (the Maven local repository), so this build is faster.

5.7.2. Create the resource manifests for dubbo-demo-consumer

On the ops host hdss7-200
Note: replace the image in dp.yaml with the name of the image you just built
dp.yaml

[root@hdss7-200 k8s-yaml]# pwd
/data/k8s-yaml
[root@hdss7-200 k8s-yaml]# mkdir dubbo-demo-consumer
[root@hdss7-200 k8s-yaml]# cd dubbo-demo-consumer/
[root@hdss7-200 dubbo-demo-consumer]# vi dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: app
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-consumer:master_191204_1307
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

svc.yaml

kind: Service
apiVersion: v1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-demo-consumer

ingress.yaml

kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  rules:
  - host: demo.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-demo-consumer
          servicePort: 8080

5.7.3. Apply the dubbo-demo-consumer manifests

On any compute node

[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/dp.yaml
deployment.extensions/dubbo-demo-consumer created
[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/svc.yaml
service/dubbo-demo-consumer created
[root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/ingress.yaml
ingress.extensions/dubbo-demo-consumer created

5.7.4. Add the DNS record

On hdss7-11

[root@hdss7-11 ~]# vi /var/named/od.com.zone 
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2019111009 ; serial                      
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
...
...
demo               A    10.4.7.10

[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A demo.od.com @10.4.7.11 +short 
10.4.7.10

5.7.5. Check startup status

6. Maintaining the Dubbo microservice cluster in practice

6.1. Updating (rolling update)

  • Modify the code and push it to Git (release)
  • Use Jenkins for CI (continuous build)
  • Modify and apply the k8s resource manifests
  • Or change the harbor image address directly in the yaml on k8s, as sketched below
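
One way to do the last item from the command line (a sketch; the tag here only stands for a newly built image):

[root@hdss7-21 ~]# kubectl -n app set image deployment/dubbo-demo-service dubbo-demo-service=harbor.od.com/app/dubbo-demo-service:master_191202_1400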

6.2. Scaling

  • Directly in the k8s dashboard: log in to the dashboard --> Deployments --> Scale --> change the replica count --> OK
  • Or scale from the command line, as in the examples below:
* Examples:
  # Scale a replicaset named 'foo' to 3.
  kubectl scale --replicas=3 rs/foo
  
  # Scale a resource identified by type and name specified in "foo.yaml" to 3.
  kubectl scale --replicas=3 -f foo.yaml
  
  # If the deployment named mysql's current size is 2, scale mysql to 3.
  kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
  
  # Scale multiple replication controllers.
  kubectl scale --replicas=5 rc/foo rc/bar rc/baz
  
  # Scale statefulset named 'web' to 3.
  kubectl scale --replicas=3 statefulset/web
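
Applied to this cluster, for example:

[root@hdss7-21 ~]# kubectl -n app scale deployment/dubbo-demo-service --replicas=2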

6.3. Handling a sudden host failure

Suppose hdss7-21 fails suddenly and goes offline

  1. On another compute node: delete the failed node first so that k8s triggers its self-healing mechanism and the pods are re-created on healthy nodes
[root@hdss7-22 ~]#  kubectl delete node hdss7-21.host.com
node "hdss7-21.host.com" deleted
  2. On the front-end proxy, edit the config files and comment out the failed node (hdss7-21) so traffic is no longer forwarded to it
[root@hdss7-11 ~]# vi /etc/nginx/nginx.conf
[root@hdss7-11 ~]# vi /etc/nginx/conf.d/od.com.conf 
[root@hdss7-11 ~]# nginx -t 
[root@hdss7-11 ~]# nginx -s reload
  3. Once the node is repaired, simply start it and it rejoins the cluster on its own; re-apply its labels and add it back to the front-end load balancer
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-21 bin]# kubectl get nodes
NAME                STATUS   ROLES         AGE   VERSION
hdss7-21.host.com   Ready    master,node   8d    v1.15.4
hdss7-22.host.com   Ready    master,node   10d   v1.15.4

6.4. FAQ

6.4.1. supervisor restart not taking effect?

Append to /etc/supervisord.d/xxx.ini (see the sketch below):

killasgroup=true
stopasgroup=true
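
A minimal sketch of the resulting ini section (the program name is only an example; use your own ini file), followed by reloading supervisor:

[program:kube-kubelet-7-21]
... (existing settings)
killasgroup=true
stopasgroup=true

[root@hdss7-21 ~]# supervisorctl update
[root@hdss7-21 ~]# supervisorctl restart kube-kubelet-7-21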

