Harbor HA Deployment with a Ceph RADOS Backend


1. Introduction

Starting with version 1.4.0, Harbor supports an HA deployment mode. The main difference from a non-HA deployment is that the stateful services are split out into external clusters instead of running in local containers, while the stateless services can be deployed on multiple nodes behind a load balancer to form the HA setup.

These stateful services are:

  • Harbor database (MariaDB)
  • Clair database (PostgreSQL)
  • Notary database (MariaDB)
  • Redis

Our Harbor deployment does not use Notary or Clair, so we only need to prepare highly available MariaDB and Redis clusters in advance. Pointing multiple Harbor nodes at the same MariaDB and Redis addresses forms the HA cluster.

In addition, the multiple registry instances need shared storage; the options are Swift, NFS, S3, Azure, GCS, Ceph and OSS. We chose Ceph.

docker-registry removed the rados storage driver after version 2.4.0, recommending a Swift API gateway instead, since Ceph exposes Swift- and S3-compatible interfaces on top of RADOS. Harbor is an extension built on docker-registry, and the registry shipped with the Harbor 1.5.1 we use is version 2.6.2, so rados storage can no longer be configured; only the Swift or S3 drivers are available. We chose the Swift driver.

2. Configure a Ceph RadosGW User

On the Ceph RadosGW, create a user and a subuser for the Harbor registry:

# radosgw-admin user create --uid=registry --display-name="registry"
# radosgw-admin subuser create --uid=registry --subuser=registry:swift --access=full

The user is used for the S3 interface and the subuser for the Swift interface. We will use RadosGW's Swift interface, so record the subuser's secret_key.
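
If you need the secret key again later, radosgw-admin user info prints it (output abbreviated; the key below is a placeholder):

# radosgw-admin user info --uid=registry
{
    "user_id": "registry",
    ...
    "swift_keys": [
        {
            "user": "registry:swift",
            "secret_key": "xFTlZ1Lc5tgH78E7SSHYDmRuUyDK08BariFuR6jO"
        }
    ],
    ...
}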

 

3. Deploy an HA MariaDB Cluster

1) Install and configure MariaDB

# yum install MariaDB-server MariaDB-client
# cat /etc/my.cnf.d/server.cnf
[mariadb-10.1]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
log-error=/var/log/mysql/mysql.log
pid-file=/data/mysql/mysql.pid
max_connections=10000
binlog_format=ROW
expire_logs_days = 30
character-set-server = utf8
collation-server = utf8_general_ci
default-storage-engine=innodb
init-connect = 'SET NAMES utf8'
innodb_file_per_table
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address='gcomm://172.21.10.11,172.21.10.12,172.21.10.13?pc.wait_prim=no'
wsrep_cluster_name='Harbor'
wsrep_node_address=172.21.10.11
wsrep_node_name='registry01-prod-rg3-b28'
wsrep_sst_method=rsync
wsrep_sst_auth=galera:galera_sync
wsrep_slave_threads=1
innodb_flush_log_at_trx_commit=0
innodb_large_prefix=on
innodb_file_format=BARRACUDA

slow_query_log=1
slow_query_log_file=/var/log/mysql/mysql-slow.log
log_queries_not_using_indexes=1
long_query_time=3

The other two nodes use the same configuration; just remember to change wsrep_node_address and wsrep_node_name.
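
For example, on the second node (the node name here is illustrative):

wsrep_node_address=172.21.10.12
wsrep_node_name='registry02-prod-rg3-b28'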

2) Start the MariaDB cluster

Initialize on one node:

# mysql_install_db --defaults-file=/etc/my.cnf.d/server.cnf --user=mysql
# mysqld_safe --defaults-file=/etc/my.cnf.d/server.cnf --user=mysql  --wsrep-new-cluster &        # start mysqld and bootstrap a new cluster
# mysql_secure_installation        # set the root password and secure the installation

# mysql
MariaDB [(none)]> grant all privileges on *.* to 'galera'@'%' identified by 'galera_sync';

Start the MariaDB service on the other two nodes:

# systemctl start mariadb

3) Verify the cluster status

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

MariaDB [(none)]> show global status like 'ws%';
+------------------------------+-------------------------------------------------------+
| Variable_name                | Value                                                 |
+------------------------------+-------------------------------------------------------+
| wsrep_apply_oooe             | 0.000000                                              |
| wsrep_apply_oool             | 0.000000                                              |
| wsrep_apply_window           | 1.000000                                              |
| wsrep_causal_reads           | 0                                                     |
| wsrep_cert_deps_distance     | 20.630228                                             |
| wsrep_cert_index_size        | 94                                                    |
| wsrep_cert_interval          | 0.003802                                              |
| wsrep_cluster_conf_id        | 6                                                     |
| wsrep_cluster_size           | 3                                                     |
| wsrep_cluster_state_uuid     | f785b691-7513-11e8-bfdf-c2caa6c57ac0                  |
| wsrep_cluster_status         | Primary                                               |
| wsrep_commit_oooe            | 0.000000                                              |
| wsrep_commit_oool            | 0.000000                                              |
| wsrep_commit_window          | 1.000000                                              |
| wsrep_connected              | ON                                                    |
| wsrep_desync_count           | 0                                                     |
| wsrep_evs_delayed            |                                                       |
| wsrep_evs_evict_list         |                                                       |
| wsrep_evs_repl_latency       | 0/0/0/0/0                                             |
| wsrep_evs_state              | OPERATIONAL                                           |
| wsrep_flow_control_paused    | 0.000091                                              |
| wsrep_flow_control_paused_ns | 8931145224                                            |
| wsrep_flow_control_recv      | 19                                                    |
| wsrep_flow_control_sent      | 19                                                    |
| wsrep_gcomm_uuid             | 39a7ce90-7522-11e8-bbbc-feb5b5a38145                  |
| wsrep_incoming_addresses     | 10.212.29.38:3306,10.212.29.40:3306,10.212.29.39:3306 |
| wsrep_last_committed         | 1128                                                  |
| wsrep_local_bf_aborts        | 0                                                     |
| wsrep_local_cached_downto    | 77                                                    |
| wsrep_local_cert_failures    | 0                                                     |
| wsrep_local_commits          | 254                                                   |
| wsrep_local_index            | 0                                                     |
| wsrep_local_recv_queue       | 0                                                     |
| wsrep_local_recv_queue_avg   | 2.058603                                              |
| wsrep_local_recv_queue_max   | 29                                                    |
| wsrep_local_recv_queue_min   | 0                                                     |
| wsrep_local_replays          | 0                                                     |
| wsrep_local_send_queue       | 0                                                     |
| wsrep_local_send_queue_avg   | 0.000000                                              |
| wsrep_local_send_queue_max   | 1                                                     |
| wsrep_local_send_queue_min   | 0                                                     |
| wsrep_local_state            | 4                                                     |
| wsrep_local_state_comment    | Synced                                                |
| wsrep_local_state_uuid       | f785b691-7513-11e8-bfdf-c2caa6c57ac0                  |
| wsrep_protocol_version       | 7                                                     |
| wsrep_provider_name          | Galera                                                |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>                     |
| wsrep_provider_version       | 25.3.19(r3667)                                        |
| wsrep_ready                  | ON                                                    |
| wsrep_received               | 802                                                   |
| wsrep_received_bytes         | 340485                                                |
| wsrep_repl_data_bytes        | 62325                                                 |
| wsrep_repl_keys              | 1074                                                  |
| wsrep_repl_keys_bytes        | 14986                                                 |
| wsrep_repl_other_bytes       | 0                                                     |
| wsrep_replicated             | 278                                                   |
| wsrep_replicated_bytes       | 95103                                                 |
| wsrep_thread_count           | 2                                                     |
+------------------------------+-------------------------------------------------------+
58 rows in set (0.00 sec)

wsrep_cluster_size is 3, meaning the cluster has three nodes.

wsrep_cluster_status is Primary, meaning this node is in the primary component and serves reads and writes normally.

wsrep_ready is ON, meaning the cluster is up and working.
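
To spot-check these three values from the shell, something like this works:

# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_%'" | grep -E 'wsrep_(cluster_size|cluster_status|ready)'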

 

4. Deploy an HA Redis Cluster

Harbor 1.5.0 and 1.5.1 have a bug that prevents using a Redis cluster; only a single Redis instance works (see the related issue for details). Besides the Job Service mentioned in the issue, the UI component also keeps token data in Redis and likewise does not support Redis cluster, so logins fail with a token-not-found error:

Jun 22 17:26:19 172.18.0.1 ui[8651]: 2018/06/22 09:26:19 [E] [server.go:2619] MOVED 10216 172.21.10.12:7001
Jun 22 17:26:19 172.18.0.1 ui[8651]: 2018/06/22 09:26:19 [D] [server.go:2619] |   172.21.10.13| 503 |    636.771µs| nomatch| GET      /service/token

The Harbor team plans to fix this in Sprint 35. Until then, instead of building the Redis cluster described below, run a single standalone Redis.

 

1) Build Redis

# yum install gcc-c++ tcl ruby rubygems
# wget http://download.redis.io/releases/redis-4.0.10.tar.gz
# tar xzvf redis-4.0.10.tar.gz
# cd redis-4.0.10
# make
# make PREFIX=/usr/local/redis/ install

2) Prepare the Redis cluster environment

# mkdir -p /usr/local/redis-cluster/{7001,7002}
# find /usr/local/redis-cluster -mindepth 1 -maxdepth 1 -type d | xargs -n 1 cp /usr/local/redis/bin/*
# find /usr/local/redis-cluster -mindepth 1 -maxdepth 1 -type d | xargs -n 1 cp redis.conf
# cp src/redis-trib.rb /usr/local/redis-cluster/

Go into each of the /usr/local/redis-cluster/{7001,7002} directories and change the following four parameters in redis.conf, where port matches the directory name:

bind 0.0.0.0
port 7001
daemonize yes
cluster-enabled yes
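
Because each instance is started from its own directory, the default nodes.conf written in cluster mode does not clash between the two instances. If you prefer explicit per-instance files, redis.conf also accepts settings like these (paths are illustrative):

cluster-config-file nodes-7001.conf
pidfile /var/run/redis_7001.pid
logfile /usr/local/redis-cluster/7001/redis.log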

3) Install dependencies

redis-trib.rb uses the redis gem; redis-4.0.1.gem requires Ruby 2.2.2 or newer, and that Ruby in turn depends on openssl-libs 1.0.2, so install in the following order:

# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/openssl-1.0.2k-12.el7.x86_64.rpm
# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/openssl-libs-1.0.2k-12.el7.x86_64.rpm
# yum localinstall openssl-1.0.2k-12.el7.x86_64.rpm openssl-libs-1.0.2k-12.el7.x86_64.rpm
# yum install centos-release-scl-rh
# yum install rh-ruby25
# scl enable rh-ruby25 bash
# gem install redis

4) Start redis-server

Run on all nodes:

# cd /usr/local/redis-cluster/7001/; ./redis-server redis.conf
# cd /usr/local/redis-cluster/7002/; ./redis-server redis.conf

5) Create the cluster

# ./redis-trib.rb create --replicas 1 172.21.10.11:7001 172.21.10.12:7001 172.21.10.13:7001 172.21.10.11:7002 172.21.10.12:7002 172.21.10.13:7002
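
Once created, the cluster can be verified from any node; expect cluster_state:ok and six known nodes:

# /usr/local/redis/bin/redis-cli -h 172.21.10.11 -p 7001 cluster info
cluster_state:ok
cluster_slots_assigned:16384
...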

 

5. Deploy the Harbor Cluster

Perform the following steps on every Harbor node.

1) Download the Harbor installer

Release information is listed on the Harbor Releases page. The Harbor installer comes in an offline and an online flavor: the offline installer bundles the docker images used during deployment, while the online installer downloads them from Docker Hub as it runs. We use the 1.5.1 offline installer:

# wget https://storage.googleapis.com/harbor-releases/release-1.5.0/harbor-offline-installer-v1.5.1.tgz
# tar xvf harbor-offline-installer-v1.5.1.tgz

2) Edit harbor.cfg

The harbor.cfg file contains parameters used by the Harbor components; the key ones are the domain name, database, Redis, backend storage and LDAP settings.

The domain name, which must resolve to a node IP or the VIP:

hostname = registry.example.com

Database:

db_host = 172.21.10.11
db_password = password
db_port = 3306
db_user = registry

Redis:

redis_url = 172.21.10.11:6379

Backend storage:

registry_storage_provider_name = swift
registry_storage_provider_config = authurl: http://172.21.1.10:7480/auth/v1, username: registry:swift, password: xFTlZ1Lc5tgH78E7SSHYDmRuUyDK08BariFuR6jO, container: registry.example.com
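
For reference, install.sh renders these values into the registry's Swift storage driver configuration, roughly as follows (a sketch; the exact generated file layout may vary between Harbor versions):

storage:
  swift:
    authurl: http://172.21.1.10:7480/auth/v1
    username: registry:swift
    password: xFTlZ1Lc5tgH78E7SSHYDmRuUyDK08BariFuR6jO
    container: registry.example.com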

LDAP (these parameters can be skipped if you do not use LDAP authentication):

auth_mode = ldap_auth
ldap_url = ldap://172.21.2.101:389
ldap_searchdn = CN=admin,OU=Infra,OU=Tech,DC=example,DC=com
ldap_search_pwd = password
ldap_basedn = OU=Tech,DC=example,DC=com
ldap_filter = (objectClass=person)
ldap_uid = sAMAccountName
ldap_scope = 3
ldap_timeout = 5
ldap_verify_cert = false
ldap_group_basedn = OU=Tech,DC=example,DC=com
ldap_group_filter = (objectclass=group)
ldap_group_gid = sAMAccountName
ldap_group_scope = 3
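
Note: ldap_scope = 3 here requests a subtree search. Harbor 1.x releases have used both 1/2/3 and 0/1/2 numbering for base/one-level/subtree scopes, so check the comments in the harbor.cfg shipped with your version before copying these values.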

3) Initialize the database

MariaDB [(none)]> source harbor/ha/registry.sql
MariaDB [(none)]> grant all privileges on registry.* to 'registry'@'%' identified by 'password';
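
The grant can be verified from any Harbor node (the password here is the placeholder used above):

# mysql -h 172.21.10.11 -P 3306 -u registry -ppassword -e 'use registry; show tables;'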

4) Deploy Harbor

# cd harbor
# ./install.sh --ha

The deployment runs in four steps: essentially it generates yml files for a set of containers from harbor.cfg and then brings them up with docker-compose. Because we use the offline installer, no images are pulled from the internet, so the whole process takes only a few minutes. Once it finishes, check the container status with docker-compose:

# docker-compose ps
       Name                     Command                  State                                    Ports                              
-------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver   /harbor/start.sh                 Up (healthy)                                                                   
harbor-db            /usr/local/bin/docker-entr ...   Up (healthy)   3306/tcp
harbor-jobservice    /harbor/start.sh                 Up                                                                             
harbor-log           /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp
harbor-ui            /harbor/start.sh                 Up (healthy)                                                                   
nginx                nginx -g daemon off;             Up (healthy)   0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
redis                docker-entrypoint.sh redis ...   Up             6379/tcp
registry             /entrypoint.sh serve /etc/ ...   Up (healthy)   5000/tcp

With all component containers in the Up state, pointing a browser at a node IP, the VIP or the domain name brings up the Harbor UI.

Note that the deployment generates a secretkey shared by ui, adminserver and jobservice, stored at /data/secretkey by default. When deploying multiple nodes, make sure every node uses the same secretkey, e.g. by copying it over as shown below.
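
A simple way is to copy it from the first node before running install.sh on the others:

# scp /data/secretkey root@172.21.10.12:/data/secretkey
# scp /data/secretkey root@172.21.10.13:/data/secretkey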

 

Finally, use a load balancer you are familiar with to put VIPs in front of MariaDB, Redis, Ceph RadosGW and Harbor's HTTP endpoint, and change the addresses in harbor.cfg to point at those VIPs; this yields a fully HA deployment. A sketch of what this could look like with HAProxy follows.
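
A minimal HAProxy sketch, assuming 172.21.10.100 is the VIP (addresses, ports and health checks are illustrative; adapt them to your load balancer):

listen harbor_http
    bind 172.21.10.100:80
    mode http
    balance roundrobin
    option httpchk GET /api/systeminfo
    server registry01 172.21.10.11:80 check
    server registry02 172.21.10.12:80 check
    server registry03 172.21.10.13:80 check

listen mariadb
    bind 172.21.10.100:3306
    mode tcp
    balance leastconn
    server db01 172.21.10.11:3306 check
    server db02 172.21.10.12:3306 check backup
    server db03 172.21.10.13:3306 check backup

Directing writes to one Galera node at a time (the backup servers only take over on failure) avoids multi-master write conflicts.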

 

References

Harbor High Availability Guide

CentOS 7.2: deploying a 3-master MariaDB Galera Cluster (10.1.21-MariaDB)

Setting up standalone Redis and a Redis cluster

Registry Configuration Reference #storage

 

