Source: https://www.qikqiak.com/post/harbor-quick-install/
Installing Harbor
Harbor supports several installation methods. The source tree ships with an install script (make/install.sh) that runs the Harbor components with docker-compose. As in the previous lessons, we will again install Harbor into a Kubernetes cluster. If you are very familiar with how the Harbor components fit together, you can of course write the resource manifests by hand, but the Harbor source tree also provides a script (make/kubernetes/k8s-prepare) that generates these manifests for us. Just run the following command to generate the YAML files we need:
$ python make/kubernetes/k8s-prepare
Of course, if the basic configuration above does not meet your needs, you can also do some more advanced configuration: all of Harbor's configuration templates can be found under the make/common/templates directory, and you can modify them as needed.
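Once the script finishes, the generated manifests can be applied to the cluster. A minimal sketch, assuming the rendered YAML files end up under the make/kubernetes directory tree (check the script output for the exact paths in your version):
$ kubectl apply -R -f make/kubernetes/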
However, here we will introduce another, simpler installation method: Helm. Harbor officially provides a corresponding Helm Chart, so it is very easy to install.
First, clone the Harbor chart onto the cluster where it will be installed:
$ git clone https://github.com/goharbor/harbor-helm
Switch to the branch we want to install; here we use the 1.0.0 branch:
$ cd harbor-helm
$ git checkout 1.0.0
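The install commands later in this article use Helm 2 syntax (helm install --name ...). If you want to double-check your client and Tiller versions before going further, a quick way is:
$ helm version --short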
The most important part of installing a Helm chart is of course the values.yaml file; we can change the configuration by overriding the properties in that file:
expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP" or "nodePort" and fill in the information in the corresponding section.
  type: ingress
  tls:
    # Whether to enable TLS. Note: if the type is "ingress" and TLS is disabled, the port must be included when pulling/pushing images. See https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # Fill in the name of the secret if you want to use your own TLS certificate and private key. The secret must contain the certificate and private key files named tls.crt and tls.key. They will be generated automatically if not set.
    secretName: ""
    # By default the Notary service will use the same certificate and private key as above. Fill in the field below if you want to use a separate one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate. Only needed when the type is "clusterIP" or "nodePort" and "secretName" is empty.
    commonName: ""
  ingress:
    hosts:
      core: core.harbor.domain
      notary: notary.harbor.domain
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    # The name of the ClusterIP service
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      # The port Notary listens on. Only effective when notary.enabled is set to true.
      notaryPort: 4443
  nodePort:
    # The name of the NodePort service
    name: harbor
    ports:
      http:
        port: 80
        nodePort: 30002
      https:
        port: 443
        nodePort: 30003
      notary:
        port: 4443
        nodePort: 30004
# The external URL for the Harbor core service. It is used to:
# 1) populate the docker/helm commands shown on the portal
# 2) populate the token service URL returned to the docker/notary clients
# Format: protocol://domain[:port].
# 1) if expose.type is "ingress", the "domain" should be the value of expose.ingress.hosts.core
# 2) if expose.type is "clusterIP", the "domain" should be the value of expose.clusterIP.name
# 3) if expose.type is "nodePort", the "domain" should be the IP address of a k8s node
# If Harbor is deployed behind a proxy, set it to the URL of the proxy
externalURL: https://core.harbor.domain
# Data persistence is enabled by default. A default StorageClass object is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in "storageClass" or set "existingClaim" if you already have existing persistent volumes to use.
#
# For storing docker images and Helm charts, you can also use "azure", "gcs", "s3", "swift" or "oss"; just set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Set it to "keep" to avoid removing the PVCs during a helm delete operation. Leaving it empty will delete the PVCs after the chart is deleted.
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use an existing PVC (it must be created manually before being bound)
      existingClaim: ""
      # Specify the "storageClass", or use the default StorageClass object. Set it to "-" to disable dynamic provisioning.
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If an external database is used, the settings below will be ignored
    database:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If an external Redis is used, the settings below will be ignored
    redis:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
  # Define which storage backend is used to store images and charts. Refer to https://github.com/docker/distribution/blob/master/docs/configuration.md#storage for details.
  imageChartStorage:
    # Specify whether to disable redirects for images and charts. It has to be disabled for backends that don't support it (e.g. an "s3" backend backed by minio). To disable redirects, simply set disableredirect to true. Refer to https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect for details.
    disableredirect: false
    # Specify the storage type: "filesystem", "azure", "gcs", "s3", "swift" or "oss", and fill in the information in the corresponding section.
    # The type must be "filesystem" if you want to use persistent volumes for the registry.
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory
imagePullPolicy: IfNotPresent
logLevel: debug
# The initial password of the Harbor admin. Change it from the Portal after Harbor is launched.
harborAdminPassword: "Harbor12345"
# The secret key used for encryption. Must be a string of 16 characters.
secretKey: "not-a-secure-key"
# The Nginx below will not be used if the service is exposed via "ingress".
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional annotations for the Deployment
  podAnnotations: {}
portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLogger: file
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  # The http(s) proxy used to update the vulnerability database from the Internet
  httpProxy:
  httpsProxy:
  # The interval of the Clair updaters, in hours. Set to 0 to disable the updaters.
  updatersInterval: 12
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
database:
  # If an external database is used, set "type" to "external" and fill in the connection information in the "external" section
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    # The initial superuser password for the internal database
    password: "changeit"
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    sslmode: "disable"
  podAnnotations: {}
redis:
  # If an external Redis is used, set "type" to "external" and fill in the connection information in the "external" section.
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.2"
    port: "6379"
    # The coreDatabaseIndex must be "0"
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    password: ""
  podAnnotations: {}
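Besides reading the listing above, you can also dump the chart's full default values to a local file to diff against your own overrides. A small sketch, assuming Helm 2 and the chart root as the current directory:
$ helm inspect values . > default-values.yaml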
With the configuration reference above, we can override these values according to our own needs. For example, here we create a new file named qikqiak-values.yaml with the following content:
expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: registry.qikqiak.com
      notary: notary.qikqiak.com
    annotations:
      kubernetes.io/ingress.class: "traefik"
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
externalURL: https://registry.qikqiak.com
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "harbor-data"
    chartmuseum:
      storageClass: "harbor-data"
    jobservice:
      storageClass: "harbor-data"
    database:
      storageClass: "harbor-data"
    redis:
      storageClass: "harbor-data"
Very little needs to be customized here: we replace the domains with our own and expose the service with the default Ingress approach. The only other part that requires manual work is data persistence: we need to create usable PVCs or a StorageClass object for the services above in advance. Here we use a StorageClass resource object named harbor-data; you can of course also adjust the accessMode or storage size according to your actual needs: (harbor-data-sc.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-data
provisioner: fuseim.pri/ifs
First create the StorageClass resource object above:
$ kubectl create -f harbor-data-sc.yaml
storageclass.storage.k8s.io "harbor-data" created
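Before the real installation, you can optionally render the chart with the override file to catch configuration mistakes early. A sketch using the Helm 2 dry-run flags:
$ helm install --dry-run --debug --name harbor -f qikqiak-values.yaml . --namespace kube-ops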
Once the StorageClass is created, install the chart with the custom values file above:
$ helm install --name harbor -f qikqiak-values.yaml . --namespace kube-ops
NAME: harbor
LAST DEPLOYED: Fri Feb 22 22:39:22 2019
NAMESPACE: kube-ops
STATUS: DEPLOYED
RESOURCES:
==> v1/StatefulSet
NAME DESIRED CURRENT AGE
harbor-harbor-database 1 1 0s
harbor-harbor-redis 1 1 0s
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
harbor-harbor-ingress registry.qikqiak.com,notary.qikqiak.com 80, 443 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
harbor-harbor-adminserver-58c855568c-jnpvq 0/1 ContainerCreating 0 0s
harbor-harbor-chartmuseum-58d6c9b898-4csmd 0/1 Pending 0 0s
harbor-harbor-clair-5c7689585-hd2br 0/1 ContainerCreating 0 0s
harbor-harbor-core-6f56879469-rbthd 0/1 ContainerCreating 0 0s
harbor-harbor-jobservice-74d7795cdb-bhzdm 0/1 ContainerCreating 0 0s
harbor-harbor-notary-server-69cdbdfb56-ggc49 0/1 Pending 0 0s
harbor-harbor-notary-signer-8499dc4db7-f78cd 0/1 Pending 0 0s
harbor-harbor-portal-55c45c558d-dmj48 0/1 Pending 0 0s
harbor-harbor-registry-5569fcbf78-5grds 0/2 Pending 0 0s
harbor-harbor-database-0 0/1 Pending 0 0s
harbor-harbor-redis-0 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
harbor-harbor-adminserver Opaque 4 1s
harbor-harbor-chartmuseum Opaque 1 1s
harbor-harbor-core Opaque 4 1s
harbor-harbor-database Opaque 1 1s
harbor-harbor-ingress kubernetes.io/tls 3 1s
harbor-harbor-jobservice Opaque 1 1s
harbor-harbor-registry Opaque 1 1s
==> v1/ConfigMap
NAME DATA AGE
harbor-harbor-adminserver 39 1s
harbor-harbor-chartmuseum 24 1s
harbor-harbor-clair 1 1s
harbor-harbor-core 1 1s
harbor-harbor-jobservice 1 1s
harbor-harbor-notary-server 5 1s
harbor-harbor-registry 2 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
harbor-harbor-chartmuseum Pending harbor-data 1s
harbor-harbor-jobservice Bound pvc-a8a35d0e-36af-11e9-bcd8-525400db4df7 1Gi RWO harbor-data 1s
harbor-harbor-registry Bound pvc-a8a466e9-36af-11e9-bcd8-525400db4df7 5Gi RWO harbor-data 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
harbor-harbor-adminserver ClusterIP 10.108.3.242 <none> 80/TCP 1s
harbor-harbor-chartmuseum ClusterIP 10.101.49.103 <none> 80/TCP 1s
harbor-harbor-clair ClusterIP 10.110.173.153 <none> 6060/TCP 1s
harbor-harbor-core ClusterIP 10.105.178.198 <none> 80/TCP 1s
harbor-harbor-database ClusterIP 10.102.101.155 <none> 5432/TCP 0s
harbor-harbor-jobservice ClusterIP 10.100.127.32 <none> 80/TCP 0s
harbor-harbor-notary-server ClusterIP 10.105.25.64 <none> 4443/TCP 0s
harbor-harbor-notary-signer ClusterIP 10.108.92.82 <none> 7899/TCP 0s
harbor-harbor-portal ClusterIP 10.103.111.161 <none> 80/TCP 0s
harbor-harbor-redis ClusterIP 10.107.205.3 <none> 6379/TCP 0s
harbor-harbor-registry ClusterIP 10.100.87.29 <none> 5000/TCP,8080/TCP 0s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
harbor-harbor-adminserver 1 1 1 0 0s
harbor-harbor-chartmuseum 1 1 1 0 0s
harbor-harbor-clair 1 1 1 0 0s
harbor-harbor-core 1 1 1 0 0s
harbor-harbor-jobservice 1 1 1 0 0s
harbor-harbor-notary-server 1 1 1 0 0s
harbor-harbor-notary-signer 1 1 1 0 0s
harbor-harbor-portal 1 1 1 0 0s
harbor-harbor-registry 1 0 0 0 0s
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://registry.qikqiak.com.
For more details, please visit https://github.com/goharbor/harbor.
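While you wait, the same deployment summary can be re-printed at any time (Helm 2 syntax assumed):
$ helm ls
$ helm status harbor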
The listing above shows all of the resource objects created by the Helm installation. Wait a few minutes for everything to come up, then check the status of the corresponding Pods:
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
harbor-harbor-adminserver-58c855568c-7dqqb 1/1 Running 0 37m
harbor-harbor-chartmuseum-58d6c9b898-4csmd 1/1 Running 0 49m
harbor-harbor-clair-5c7689585-hd2br 1/1 Running 0 49m
harbor-harbor-core-6f56879469-rbthd 1/1 Running 8 49m
harbor-harbor-database-0 1/1 Running 0 49m
harbor-harbor-jobservice-74d7795cdb-bhzdm 1/1 Running 7 49m
harbor-harbor-notary-server-69cdbdfb56-vklbt 1/1 Running 0 20m
harbor-harbor-notary-signer-8499dc4db7-f78cd 1/1 Running 0 49m
harbor-harbor-portal-55c45c558d-dmj48 1/1 Running 0 49m
harbor-harbor-redis-0 1/1 Running 0 49m
harbor-harbor-registry-5569fcbf78-5grds 2/2 Running 0 49m
They are all in the Running state now, so everything has started successfully. Check the corresponding Ingress object:
$ kubectl get ingress -n kube-ops
NAME HOSTS ADDRESS PORTS AGE
harbor-harbor-ingress registry.qikqiak.com,notary.qikqiak.com 80, 443 50m
If you have a real domain of your own, simply resolve the two domains above to the node where any one of your Ingress Controller Pods is running. For the convenience of this demo, we instead add mappings for registry.qikqiak.com and notary.qikqiak.com to our local /etc/hosts file.
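A minimal sketch of that mapping, where <node-ip> is a placeholder for the IP of the node running your Ingress Controller Pod:
# <node-ip> is hypothetical; replace it with your real node IP
$ sudo sh -c 'echo "<node-ip> registry.qikqiak.com notary.qikqiak.com" >> /etc/hosts'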
The very first installation went smoothly, but later installations kept failing: the logs of the database Pod showed the error database "registry" does not exist. If the registry database is not created automatically, we can exec into the database Pod and create it by hand:
# 1. Enter the database Pod
$ kubectl exec -it harbor-harbor-database-0 -n kube-ops /bin/bash
# 2. Connect to the database
root [ / ]# psql --username postgres
psql (9.6.10)
Type "help" for help.
# 3. Create the registry database
postgres=# CREATE DATABASE registry ENCODING 'UTF8';
CREATE DATABASE
postgres=# \c registry;
You are now connected to database "registry" as user "postgres".
registry=# CREATE TABLE schema_migrations(version bigint not null primary key, dirty boolean not null);
CREATE TABLE
registry-# \quit
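After the registry database and the schema_migrations table exist, the core service still has to reconnect. A simple way is to delete the core Pod and let its Deployment recreate it (the Pod name below is taken from the listing earlier; yours will differ):
$ kubectl delete pod harbor-harbor-core-6f56879469-rbthd -n kube-ops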