Microservices have become the mainstream architecture for server-side development, and Go, being easy to learn, with built-in high concurrency, fast compilation, and a small memory footprint, is increasingly popular with developers. This Microservices in Action series takes a hands-on approach to learning about microservices: starting from a "blog system" and going from simple to advanced, we will build up a complete microservice system step by step.
This is the first article in the series. We will build a continuous-integration and automated-build-and-release system for microservices based on go-zero + gitlab + jenkins + k8s. First, a brief introduction to each component:
- go-zero is a web and rpc framework that bundles various engineering best practices. Its resilient design keeps high-concurrency services stable, and it has been thoroughly battle-tested in production
- gitlab is a fully integrated, Git-based software development platform that also provides a wiki, online editing, issue tracking, CI/CD, and more
- jenkins is a Java-based continuous-integration tool for monitoring continuous, repetitive work; it aims to provide an open, easy-to-use software platform that makes continuous integration possible
- kubernetes, commonly abbreviated K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation). It aims to provide "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts"
The walkthrough consists of five steps, each explained in detail below:
- Step 1, environment setup: I use two ubuntu16.04 servers with gitlab and jenkins installed respectively, plus an elastic k8s cluster from the xxx cloud provider
- Step 2, project generation: I use go-zero's goctl tool to generate the project quickly, then make small changes to it for easier testing
- Step 3, generating the Dockerfile and k8s deployment files: k8s deployment files are complex and error-prone to write by hand, and goctl can generate both the Dockerfile and the k8s deployment files, which is very convenient
- Step 4, building the Jenkins Pipeline with declarative syntax: we create a Jenkinsfile and keep it under version control in gitlab
- Step 5, testing the project to verify that the service works

Environment Setup
First we set up the experiment environment. I use two ubuntu16.04 servers, one with gitlab installed and one with jenkins. gitlab is installed directly with apt-get; after installation, start the service and check its status. When every component shows the run state, the service is up. The instance here is served on port 9090 and can be accessed directly at that port.
gitlab-ctl start // start the service
gitlab-ctl status // check service status
run: alertmanager: (pid 1591) 15442s; run: log: (pid 2087) 439266s
run: gitaly: (pid 1615) 15442s; run: log: (pid 2076) 439266s
run: gitlab-exporter: (pid 1645) 15442s; run: log: (pid 2084) 439266s
run: gitlab-workhorse: (pid 1657) 15441s; run: log: (pid 2083) 439266s
run: grafana: (pid 1670) 15441s; run: log: (pid 2082) 439266s
run: logrotate: (pid 5873) 1040s; run: log: (pid 2081) 439266s
run: nginx: (pid 1694) 15440s; run: log: (pid 2080) 439266s
run: node-exporter: (pid 1701) 15439s; run: log: (pid 2088) 439266s
run: postgres-exporter: (pid 1708) 15439s; run: log: (pid 2079) 439266s
run: postgresql: (pid 1791) 15439s; run: log: (pid 2075) 439266s
run: prometheus: (pid 10763) 12s; run: log: (pid 2077) 439266s
run: puma: (pid 1816) 15438s; run: log: (pid 2078) 439266s
run: redis: (pid 1821) 15437s; run: log: (pid 2086) 439266s
run: redis-exporter: (pid 1826) 15437s; run: log: (pid 2089) 439266s
run: sidekiq: (pid 1835) 15436s; run: log: (pid 2104) 439266s
jenkins is also installed directly with apt-get. Note that java must be installed before jenkins; the process is straightforward, so it is not shown here. jenkins listens on port 8080 by default, the default account is admin, and the initial password is located at /var/lib/jenkins/secrets/initialAdminPassword. Installing the recommended plugins during initialization is sufficient; more plugins can be installed later as needed.
Building a k8s cluster from scratch is fairly involved. Tools such as kubeadm can bootstrap one quickly, but the result is still some distance from a true production-grade cluster, and our services will eventually go to production. So here I chose the xxx cloud provider's elastic k8s cluster, version 1.16.9. An elastic cluster is billed on demand with no extra charges: once the experiment is finished, the resources can be released immediately with kubectl delete, incurring only a very small cost. The xxx cloud k8s cluster also provides a friendly monitoring console where all kinds of statistics can be viewed. After the cluster is created, we need to create cluster access credentials in order to access it.
- If the client currently has no cluster access credentials configured, i.e. ~/.kube/config is empty, simply copy the access credential content and paste it into ~/.kube/config
- If the client already has access credentials for other clusters configured, merge the credentials with the following commands
KUBECONFIG=~/.kube/config:~/Downloads/k8s-cluster-config kubectl config view --merge --flatten > ~/.kube/config
export KUBECONFIG=~/.kube/config
配置好訪問權限后通過如下命令可查看當前集群
kubectl config current-context
Check the cluster version; the output is as follows
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.9-eks.2", GitCommit:"f999b99a13f40233fc5f875f0607448a759fc613", GitTreeState:"clean", BuildDate:"2020-10-09T12:54:13Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
At this point our experiment environment is fully set up. Note that github could also be used for version management here.
Generating the Project
The whole project uses a monorepo layout, with the directory structure shown below. The outermost directory is named blog, and the app directory contains the microservices split by business domain. The user service, for example, is further divided into an api service and an rpc service: the api service is the aggregation gateway providing restful interfaces to the outside, while the rpc service serves internal communication and high-performance operations such as data caching.
├── blog
│ ├── app
│ │ ├── user
│ │ │ ├── api
│ │ │ └── rpc
│ │ ├── article
│ │ │ ├── api
│ │ │ └── rpc
With the project directories created, we enter the api directory and create a user.api file with the following content. It defines the service port as 2233 and a /user/info endpoint.
type UserInfoRequest struct {
    Uid int64 `form:"uid"`
}

type UserInfoResponse struct {
    Uid   int64  `json:"uid"`
    Name  string `json:"name"`
    Level int    `json:"level"`
}

@server(
    port: 2233
)
service user-api {
    @doc(
        summary: get user info
    )
    @server(
        handler: UserInfo
    )
    get /user/info(UserInfoRequest) returns(UserInfoResponse)
}
With the api file defined, run the following command to generate the api service code; one-command generation is a real productivity boost
goctl api go -api user.api -dir .
代碼生成后我們對代碼稍作改造以便后面部署后方便進行測試,改造后的代碼為返回本機的ip地址
func (ul *UserInfoLogic) UserInfo(req types.UserInfoRequest) (*types.UserInfoResponse, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	var name string
	for _, addr := range addrs {
		if ipnet, ok := addr.(*net.IPNet); ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
			name = ipnet.IP.String()
		}
	}
	return &types.UserInfoResponse{
		Uid:   req.Uid,
		Name:  name,
		Level: 666,
	}, nil
}
That completes the service generation part. Since this article covers the basic framework, only some test code has been added; the project code will be fleshed out in later articles.
Generating the Image and Deployment Files
Common images such as mysql or memcache can be pulled directly from an image registry, but our service image must be custom-built. There are several ways to define a custom image, and a Dockerfile is the most widely used. Writing a Dockerfile is not hard, but it is easy to get wrong, so here too we use tooling to generate it automatically. Credit to goctl once again: it can generate a Dockerfile with one command as well. In the api directory, run the following command
goctl docker -go user.go
生成后的文件稍作改動以符合我們的目錄結構,文件內容如下,采用了兩階段構建,第一階段構建可執行文件確保構建獨立於宿主機,第二階段會引用第一階段構建的結果,最終構建出極簡鏡像
FROM golang:alpine AS builder
LABEL stage=gobuilder
ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOPROXY https://goproxy.cn,direct
WORKDIR /build/zero
RUN go mod init blog/app/user/api
RUN go mod download
COPY . .
COPY /etc /app/etc
RUN go build -ldflags="-s -w" -o /app/user user.go
FROM alpine
RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
ENV TZ Asia/Shanghai
WORKDIR /app
COPY --from=builder /app/user /app/user
COPY --from=builder /app/etc /app/etc
CMD ["./user", "-f", "etc/user-api.yaml"]
然后執行如下命令創建鏡像
docker build -t user:v1 app/user/api/
Running the docker images command now shows that the user image has been created, with tag v1
REPOSITORY TAG IMAGE ID CREATED SIZE
user v1 1c1f64579b40 4 days ago 17.2MB
Likewise, k8s deployment files are complicated and error-prone to write by hand, so we also generate them automatically with goctl. In the api directory, run the following command
goctl kube deploy -name user-api -namespace blog -image user:v1 -o user.yaml -port 2233
The generated yaml file is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: blog
  labels:
    app: user-api
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
      - name: user-api
        image: user:v1
        lifecycle:
          preStop:
            exec:
              command: ["sh","-c","sleep 5"]
        ports:
        - containerPort: 2233
        readinessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 15
          periodSeconds: 10
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1024Mi
That completes the image and deployment-file generation step. The above is mainly for demonstration; in a real production environment, images are created automatically by the continuous-integration tooling.
Jenkins Pipeline
jenkins is a widely used continuous-integration tool that offers several build approaches, and pipeline is one of the most common. A pipeline can be written in declarative or scripted syntax. Scripted syntax is flexible and extensible, but that also means more complexity, and it requires learning Groovy, which raises the learning curve; hence declarative syntax, a simpler and more structured alternative. We will use declarative syntax from here on.
A word about the Jenkinsfile: a Jenkinsfile is just a plain-text file, the embodiment of the deployment-pipeline concept in Jenkins, much as a Dockerfile is to Docker. All deployment-pipeline logic can be defined in the Jenkinsfile. Note that Jenkins does not support Jenkinsfiles out of the box; the Pipeline plugin must be installed first, via Manage Jenkins -> Manage Plugins, then search and install. After that, pipelines can be built.

We could type the build script directly into the pipeline UI, but then it cannot be versioned, so this approach is not recommended except for throwaway tests. The more common way is to have jenkins pull the Jenkinsfile from a git repository and execute it.
First the Git plugin must be installed. We clone over ssh, so the git private key needs to be added to jenkins so that jenkins has permission to pull code from the git repository.
To add the git private key to jenkins: Manage Jenkins -> Manage credentials -> add a credential, choose the type SSH Username with private key, and follow the prompts, as shown below.

然后在我們的gitlab中新建一個項目,只需要一個Jenkinsfile文件
In the user-api project, set the pipeline definition to Pipeline script from SCM, and add the gitlab ssh address and the corresponding token, as shown below.

Now we can write the Jenkinsfile following the hands-on steps above.
- Pull the code from gitlab: fetch the code from our gitlab repository, and use the commit_id to distinguish different versions

stage('Pull code from gitlab') {
    steps {
        echo 'Pull code from gitlab'
        git credentialsId: 'xxxxxxxx', url: 'http://xxx.xxx.xxx.xxx:xxx/blog/blog.git'
        script {
            commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
        }
    }
}
- Build the docker image, using the Dockerfile generated by goctl

stage('Build image') {
    steps {
        echo 'Build image'
        sh "docker build -t user:${commit_id} app/user/api/"
    }
}
- Push the image to the image registry: push the image just built to the registry

stage('Push image to registry') {
    steps {
        echo "Push image to registry"
        sh "docker login -u xxx -p xxxxxxx"
        sh "docker tag user:${commit_id} xxx/user:${commit_id}"
        sh "docker push xxx/user:${commit_id}"
    }
}
- Deploy to k8s: substitute the version tag in the deployment file (the image tag in user.yaml is set to the placeholder <COMMIT_ID_TAG> for this purpose), so that the image is pulled from the remote registry, then deploy with kubectl apply

stage('Deploy to k8s') {
    steps {
        echo "Deploy to k8s"
        sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
        sh "cp app/user/api/user.yaml ."
        sh "kubectl apply -f user.yaml"
    }
}
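The tag-substitution step in the deploy stage can be tried locally without Jenkins. Below is a minimal sketch; the /tmp/user.yaml fragment and the hard-coded commit_id are illustrative stand-ins (in the pipeline the id comes from `git rev-parse --short HEAD` and the file is the goctl-generated user.yaml):

```shell
# Illustrative deployment fragment with the image tag placeholder
cat > /tmp/user.yaml <<'EOF'
      containers:
      - name: user-api
        image: user:<COMMIT_ID_TAG>
EOF

# Stand-in for the value the pipeline reads from git rev-parse
commit_id=b757e9e

# The same substitution the deploy stage runs before kubectl apply
sed -i "s/<COMMIT_ID_TAG>/${commit_id}/" /tmp/user.yaml
grep 'image:' /tmp/user.yaml   # now shows image: user:b757e9e
```

Because the substitution happens at deploy time, the yaml checked into the repository never hard-codes a version, and every pipeline run deploys exactly the image it just built.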
The complete Jenkinsfile is as follows

pipeline {
    agent any
    stages {
        stage('Pull code from gitlab') {
            steps {
                echo 'Pull code from gitlab'
                git credentialsId: 'xxxxxx', url: 'http://xxx.xxx.xxx.xxx:9090/blog/blog.git'
                script {
                    commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
        }
        stage('Build image') {
            steps {
                echo 'Build image'
                sh "docker build -t user:${commit_id} app/user/api/"
            }
        }
        stage('Push image to registry') {
            steps {
                echo "Push image to registry"
                sh "docker login -u xxx -p xxxxxxxx"
                sh "docker tag user:${commit_id} xxx/user:${commit_id}"
                sh "docker push xxx/user:${commit_id}"
            }
        }
        stage('Deploy to k8s') {
            steps {
                echo "Deploy to k8s"
                sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
                sh "kubectl apply -f app/user/api/user.yaml"
            }
        }
    }
}
At this point the configuration is basically complete and our base framework is essentially in place. Now run the pipeline: click Build Now on the left, and a build number appears under Build History. Click that number and then Console Output on the left to see the detailed build log; any errors during the build also show up there.
The detailed build output is below; every stage of the pipeline has its own detailed output.
Started by user admin
Obtained Jenkinsfile from git git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/user-api
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential gitlab_token
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git # timeout=10
Fetching upstream changes from git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
> git --version # timeout=10
> git --version # 'git version 2.7.4'
using GIT_SSH to set credentials
> git fetch --tags --progress git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 77eac3a4ca1a5b6aea705159ce26523ddd179bdf (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
Commit message: "add"
> git rev-list --no-walk 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Pull code from gitlab)
[Pipeline] echo
Pull code from gitlab
[Pipeline] git
The recommended git tool is: NONE
using credential gitlab_user_pwd
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://xxx.xxx.xxx.xxx:9090/blog/blog.git # timeout=10
Fetching upstream changes from http://xxx.xxx.xxx.xxx:9090/blog/blog.git
> git --version # timeout=10
> git --version # 'git version 2.7.4'
using GIT_ASKPASS to set credentials
> git fetch --tags --progress http://xxx.xxx.xxx.xxx:9090/blog/blog.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision b757e9eef0f34206414bdaa4debdefec5974c3f5 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git branch -D master # timeout=10
> git checkout -b master b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
Commit message: "Merge branch 'blog/dev' into 'master'"
> git rev-list --no-walk b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] echo
Build image
[Pipeline] sh
+ docker build -t user:b757e9e app/user/api/
Sending build context to Docker daemon 28.16kB
Step 1/18 : FROM golang:alpine AS builder
alpine: Pulling from library/golang
801bfaa63ef2: Pulling fs layer
ee0a1ba97153: Pulling fs layer
1db7f31c0ee6: Pulling fs layer
ecebeec079cf: Pulling fs layer
63b48972323a: Pulling fs layer
ecebeec079cf: Waiting
63b48972323a: Waiting
1db7f31c0ee6: Verifying Checksum
1db7f31c0ee6: Download complete
ee0a1ba97153: Verifying Checksum
ee0a1ba97153: Download complete
63b48972323a: Verifying Checksum
63b48972323a: Download complete
801bfaa63ef2: Verifying Checksum
801bfaa63ef2: Download complete
801bfaa63ef2: Pull complete
ee0a1ba97153: Pull complete
1db7f31c0ee6: Pull complete
ecebeec079cf: Verifying Checksum
ecebeec079cf: Download complete
ecebeec079cf: Pull complete
63b48972323a: Pull complete
Digest: sha256:49b4eac11640066bc72c74b70202478b7d431c7d8918e0973d6e4aeb8b3129d2
Status: Downloaded newer image for golang:alpine
---> 1463476d8605
Step 2/18 : LABEL stage=gobuilder
---> Running in c4f4dea39a32
Removing intermediate container c4f4dea39a32
---> c04bee317ea1
Step 3/18 : ENV CGO_ENABLED 0
---> Running in e8e848d64f71
Removing intermediate container e8e848d64f71
---> ff82ee26966d
Step 4/18 : ENV GOOS linux
---> Running in 58eb095128ac
Removing intermediate container 58eb095128ac
---> 825ab47146f5
Step 5/18 : ENV GOPROXY https://goproxy.cn,direct
---> Running in df2add4e39d5
Removing intermediate container df2add4e39d5
---> c31c1aebe5fa
Step 6/18 : WORKDIR /build/zero
---> Running in f2a1da3ca048
Removing intermediate container f2a1da3ca048
---> 5363d05f25f0
Step 7/18 : RUN go mod init blog/app/user/api
---> Running in 11d0adfa9d53
go: creating new go.mod: module blog/app/user/api
Removing intermediate container 11d0adfa9d53
---> 3314852f00fe
Step 8/18 : RUN go mod download
---> Running in aa9e9d9eb850
Removing intermediate container aa9e9d9eb850
---> a0f2a7ffe392
Step 9/18 : COPY . .
---> a807f60ed250
Step 10/18 : COPY /etc /app/etc
---> c4c5d9f15dc0
Step 11/18 : RUN go build -ldflags="-s -w" -o /app/user user.go
---> Running in a4321c3aa6e2
go: finding module for package github.com/tal-tech/go-zero/core/conf
go: finding module for package github.com/tal-tech/go-zero/rest/httpx
go: finding module for package github.com/tal-tech/go-zero/rest
go: finding module for package github.com/tal-tech/go-zero/core/logx
go: downloading github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/conf in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest/httpx in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/logx in github.com/tal-tech/go-zero v1.1.1
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/justinas/alice v1.2.0
go: downloading github.com/dgrijalva/jwt-go v3.2.0+incompatible
go: downloading go.uber.org/automaxprocs v1.3.0
go: downloading github.com/spaolacci/murmur3 v1.1.0
go: downloading github.com/google/uuid v1.1.1
go: downloading google.golang.org/grpc v1.29.1
go: downloading github.com/prometheus/client_golang v1.5.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/golang/protobuf v1.4.2
go: downloading github.com/prometheus/common v0.9.1
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading github.com/prometheus/procfs v0.0.8
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: downloading google.golang.org/protobuf v1.25.0
Removing intermediate container a4321c3aa6e2
---> 99ac2cd5fa39
Step 12/18 : FROM alpine
latest: Pulling from library/alpine
801bfaa63ef2: Already exists
Digest: sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
Status: Downloaded newer image for alpine:latest
---> 389fef711851
Step 13/18 : RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
---> Running in 51694dcb96b6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
v3.12.3-38-g9ff116e4f0 [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
v3.12.3-39-ge9195171b7 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
OK: 12746 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r4)
(2/2) Installing tzdata (2020f-r0)
Executing busybox-1.31.1-r19.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 10 MiB in 16 packages
Removing intermediate container 51694dcb96b6
---> e5fb2e4d5eea
Step 14/18 : ENV TZ Asia/Shanghai
---> Running in 332fd0df28b5
Removing intermediate container 332fd0df28b5
---> 11c0e2e49e46
Step 15/18 : WORKDIR /app
---> Running in 26e22103c8b7
Removing intermediate container 26e22103c8b7
---> 11d11c5ea040
Step 16/18 : COPY --from=builder /app/user /app/user
---> f69f19ffc225
Step 17/18 : COPY --from=builder /app/etc /app/etc
---> b8e69b663683
Step 18/18 : CMD ["./user", "-f", "etc/user-api.yaml"]
---> Running in 9062b0ed752f
Removing intermediate container 9062b0ed752f
---> 4867b4994e43
Successfully built 4867b4994e43
Successfully tagged user:b757e9e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image to registry)
[Pipeline] echo
Push image to registry
[Pipeline] sh
+ docker login -u xxx -p xxxxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
+ docker tag user:b757e9e xxx/user:b757e9e
[Pipeline] sh
+ docker push xxx/user:b757e9e
The push refers to repository [docker.io/xxx/user]
b19a970f64b9: Preparing
f695b957e209: Preparing
ee27c5ca36b5: Preparing
7da914ecb8b0: Preparing
777b2c648970: Preparing
777b2c648970: Layer already exists
ee27c5ca36b5: Pushed
b19a970f64b9: Pushed
7da914ecb8b0: Pushed
f695b957e209: Pushed
b757e9e: digest: sha256:6ce02f8a56fb19030bb7a1a6a78c1a7c68ad43929ffa2d4accef9c7437ebc197 size: 1362
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy to k8s)
[Pipeline] echo
Deploy to k8s
[Pipeline] sh
+ sed -i s/<COMMIT_ID_TAG>/b757e9e/ app/user/api/user.yaml
[Pipeline] sh
+ kubectl apply -f app/user/api/user.yaml
deployment.apps/user-api created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
可以看到最后輸出了SUCCESS說明我們的pipeline已經成了,這個時候我們可以通過kubectl工具查看一下,-n參數為指定namespace
kubectl get pods -n blog
NAME READY STATUS RESTARTS AGE
user-api-84ffd5b7b-c8c5w 1/1 Running 0 10m
user-api-84ffd5b7b-pmh92 1/1 Running 0 10m
We specified the namespace blog in the k8s deployment file, so before running the pipeline we need to create this namespace first
kubectl create namespace blog
With the service deployed, how do we access it from outside? Here we use the LoadBalancer approach. The Service deployment file is defined as follows: port 80 is mapped to the container's port 2233, and the selector is used to match the label defined in the Deployment
apiVersion: v1
kind: Service
metadata:
  name: user-api-service
  namespace: blog
spec:
  selector:
    app: user-api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2233
Run the command to create the service, then list services; the output is below. Be sure to pass the -n flag to specify the namespace
kubectl apply -f user-service.yaml
kubectl get services -n blog
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
user-api-service LoadBalancer <none> xxx.xxx.xxx.xx 80:32470/TCP 79m
The EXTERNAL-IP here is the ip exposed for public access, on port 80
That completes all of the deployment work. I encourage you to get hands-on and try it yourself.
Testing
最后我們來測試下部署的服務是否正常,使用EXTERNAL-IP來進行訪問
curl "http://xxx.xxx.xxx.xxx:80/user/info?uid=1"
{"uid":1,"name":"172.17.0.5","level":666}
curl http://xxx.xxx.xxx.xxx:80/user/info\?uid\=1
{"uid":1,"name":"172.17.0.8","level":666}
We called the /user/info endpoint twice with curl and both calls returned normally, so the service is working. The name field returned two different ips, which shows that the LoadBalancer uses the Round Robin load-balancing policy by default.
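The round-robin behavior can also be checked mechanically by extracting the name field from the responses and counting distinct values. A small sketch using the two responses captured above (against a live cluster you would produce these lines by calling curl in a loop instead):

```shell
# The two responses returned by /user/info above
responses='{"uid":1,"name":"172.17.0.5","level":666}
{"uid":1,"name":"172.17.0.8","level":666}'

# Extract the name field (the pod ip) and count distinct backends;
# a count of 2 means the requests reached both replicas
printf '%s\n' "$responses" \
  | grep -o '"name":"[^"]*"' \
  | sort -u \
  | wc -l
```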
Summary
We have now implemented the full DevOps flow from coding through version management to build and deployment, and the basic architecture is in place, though it is admittedly still quite bare. In the rest of this series we will gradually refine the whole architecture on top of this blog system: improving the CI and CD flow, adding monitoring, completing the blog system's features, high-availability best practices and the principles behind them, and more.
To do a good job, one must first sharpen one's tools. Good tools greatly improve productivity and reduce the chance of mistakes. As you can see, we leaned heavily on the goctl tool above, and it is hard not to love. See you next time!
My abilities are limited, so there are bound to be places where I have expressed things poorly; criticism and corrections from readers are very welcome!
Project address
https://github.com/tal-tech/go-zero
You are welcome to use it and star it to support us! 👏
The go-zero series of articles can be found on the 『微服務實踐』 WeChat official account