CentOS 7: PostgreSQL + etcd + Patroni + HAProxy + Keepalived high-availability cluster deployment


I. Overview

1. Concept

  The pgsql high-availability cluster is built from postgresql, etcd, patroni, haproxy and keepalived: PostgreSQL serves as the database, etcd stores the cluster state, Patroni works with etcd to perform database failover, haproxy provides the highly available database entry points (read/write splitting), and keepalived floats the VIP between the haproxy nodes.

2. Topology
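In outline, using the addresses configured throughout this guide (a text sketch of the setup):

                 VIP 192.168.19.110 (keepalived)
                              |
               +--------------+--------------+
               |                             |
    haproxy 192.168.19.72          haproxy 192.168.19.73
    5000 -> primary, 5001 -> replicas (same on both)
               |                             |
    +----------+--------------+--------------+---------+
    |                         |                        |
 node1 192.168.19.71    node2 192.168.19.72     node3 192.168.19.73
 postgresql + patroni   postgresql + patroni    postgresql + patroni
    |                         |                        |
    +--------- etcd cluster (ports 2379/2380) ---------+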

Software download:

Link: https://pan.baidu.com/s/1VIWwXcfQRCumJjEXndSXPQ
Extraction code: 5bpz

 

II. PostgreSQL deployment (all three nodes)

1. Download and extract

https://www.enterprisedb.com/download-postgresql-binaries
mkdir -p /data/pg_data
tar xf postgresql-10.18-1-linux-x64-binaries.tar.gz -C /data/

2. Create the user and set ownership

useradd postgres
passwd postgres
chown -R postgres.postgres /data/

3. Initialize the database (as the postgres user)

 

Switch to the postgres user
[root@centos7 ~]# su - postgres
Initialize the data directory
[postgres@centos7 ~]$ /data/pgsql/bin/initdb -D /data/pg_data/

4. Configure environment variables

su - postgres
vim .bash_profile
PATH=$PATH:$HOME/bin
export PATH
export PGHOME=/data/pgsql
export PATH=$PATH:$PGHOME/bin
export PGDATA=/data/pg_data
export PGLOG=/data/pg_log/pg.log

source .bash_profile 
mkdir -p /data/pg_log
chown postgres.postgres /data/pg_data
chown postgres.postgres /data/pg_log
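A quick sanity check that the variables took effect (as the postgres user):

su - postgres
which psql      # expect /data/pgsql/bin/psql
echo $PGDATA    # expect /data/pg_data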

5. Create a systemd unit for PostgreSQL

vim /etc/systemd/system/postgresql.service
[Unit]
Description=PostgreSQL database server
After=network.target

[Service]
Type=forking
User=postgres
Group=postgres
ExecStart=/data/pgsql/bin/pg_ctl -D /data/pg_data/ start
ExecReload=/data/pgsql/bin/pg_ctl -D /data/pg_data/ reload
ExecStop=/data/pgsql/bin/pg_ctl -D /data/pg_data/ stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

6. Start and stop

systemctl daemon-reload
Start
systemctl start postgresql
Stop
systemctl stop postgresql
Restart
systemctl restart postgresql

7. Set a password for the postgres user

[postgres@pgsql-19 ~]$ psql -U postgres -h localhost
postgres=# alter user postgres with password 'P@sswrd';

8. Allow remote connections

vim /data/pg_data/pg_hba.conf
host    all             all             0.0.0.0/0               md5
vim /data/pg_data/postgresql.conf

listen_addresses = '*'
password_encryption = on

Restart the database
systemctl restart postgresql
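To verify, connect from any other host on the 192.168.19.0/24 network (the client host here is arbitrary):

psql -h 192.168.19.71 -U postgres -d postgres
# prompts for the password set above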

  

III. etcd deployment (all three nodes)

1. Download and extract

tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /usr/local/
ln -s /usr/local/etcd-v3.1.20-linux-amd64 /usr/local/etcd

2. Configuration files

node1 (192.168.19.71):
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml
name: pgsql_1971
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.71:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.71:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.71:2380
initial-advertise-peer-urls: http://192.168.19.71:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

node2 (192.168.19.72):
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml
name: pgsql_1972
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.72:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.72:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.72:2380
initial-advertise-peer-urls: http://192.168.19.72:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

node3 (192.168.19.73):
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml
name: pgsql_1973
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.73:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.73:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.73:2380
initial-advertise-peer-urls: http://192.168.19.73:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

3. Start etcd and add it to the boot auto-start

Start etcd in the background (and add the same command to the boot auto-start):
nohup /usr/local/etcd/etcd --config-file=/usr/local/etcd/conf.yml &
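Note that nohup alone will not bring etcd back after a reboot. A minimal systemd unit is one way to make the auto-start explicit (a sketch, assuming the paths above):

vim /etc/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/etcd/etcd --config-file=/usr/local/etcd/conf.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable etcd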

4. Cluster check

netstat -lntup|grep etcd
/usr/local/etcd/etcdctl member list
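All three members should be listed. The v2 API (the default for etcdctl in etcd 3.1) can also report quorum health:

/usr/local/etcd/etcdctl cluster-health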

IV. Patroni deployment (all three nodes)

1. Update postgresql.conf

postgresql.conf settings:

max_connections = '500'
max_wal_senders = '10'
port = '5432'
listen_addresses = '*'
synchronous_commit = on
full_page_writes = on
wal_log_hints = on
synchronous_standby_names = '*'
max_replication_slots = 10
wal_level = replica

Note: in my testing, wal_log_hints = on and synchronous_standby_names = '*' made the database feel sluggish, most likely because synchronous_standby_names = '*' enables synchronous replication, so every commit waits for a standby to confirm the WAL and hangs if no standby is connected. Readers who have investigated further are welcome to leave a comment.

2. Update pg_hba.conf

vim /data/pg_data/pg_hba.conf
Remove the entries added earlier at the end of the file and add the following:
local   all             all                                     peer
host    all             all             127.0.0.1/32            md5
host    all             postgres        127.0.0.1/32            md5
host    all             all             192.168.19.0/24         md5
host    all             all             ::1/128                 md5
local   replication     replicator                              peer
host    replication     replicator      127.0.0.1/32            md5
host    replication     replicator      ::1/128                 md5
host    replication     replicator      192.168.19.71/32        md5
host    replication     replicator      192.168.19.72/32        md5
host    replication     replicator      192.168.19.73/32        md5

After making these changes, restart the database.

3. On the primary node, create the replication user (important: Patroni relies on it)

postgres=# create user replicator replication login encrypted password '1qaz2wsx';

4. Configure streaming replication (on the two standby nodes)

systemctl stop postgresql
su - postgres
cd /data/ && rm -rf pg_data
/data/pgsql/bin/pg_basebackup -h 192.168.19.71 -D /data/pg_data -U replicator -v -P -R
Start the database
systemctl start postgresql
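On the primary, confirm that both standbys are streaming:

psql -U postgres -h localhost -c "select client_addr, state, sync_state from pg_stat_replication;"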

5. Install Patroni (all three nodes)

yum install -y python3 python-psycopg2 python3-devel
pip3 install --upgrade pip
pip3 install psycopg2-binary -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
pip3 install patroni[etcd] -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

Verify the installation:

which patroni
patronictl --help

6. Create the Patroni configuration files

mkdir -p /usr/patroni/conf
cd /usr/patroni/conf/
Save the following as /usr/patroni/conf/patroni_postgresql.yml on each node (the file name is referenced by the service unit below).

node1

scope: batman
namespace: /service/
name: postgresql1

restapi:
  listen: 192.168.19.71:8008
  connect_address: 192.168.19.71:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  #Provide host to do the initial discovery of the cluster topology:
  host: 192.168.19.71:2379
  #Or use "hosts" to provide multiple endpoints
  #Could be a comma separated string:
  #hosts: host1:port1,host2:port2
  #host: 192.168.19.71:2379,192.168.19.72:2379,192.168.19.73:2379
  #or an actual yaml list:
  #hosts:
  #- host1:port1
  #- host2:port2
  #Once discovery is complete Patroni will use the list of advertised clientURLs
  #It is possible to change this behavior by setting:
  #use_proxies: true

#raft:
#  data_dir: .
#  self_addr: 192.168.19.71:2222
#  partner_addrs:
#  - 192.168.19.71:2223
#  - 192.168.19.71:2224

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
#    synchronous_mode: false
    #standby_cluster:
      #host: 192.168.19.71
      #port: 1111
      #primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
#      use_slots: true
      parameters:
#        wal_level: hot_standby
#        hot_standby: "on"
#        max_connections: 100
#        max_worker_processes: 8
#        wal_keep_segments: 8
#        max_wal_senders: 10
#        max_replication_slots: 10
#        max_prepared_transactions: 0
#        max_locks_per_transaction: 64
#        wal_log_hints: "on"
#        track_commit_timestamp: "off"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 192.168.19.71/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 192.168.19.71/32 md5
  - host all all 0.0.0.0/0 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing the new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.19.71:5432
  connect_address: 192.168.19.71:5432
  data_dir: /data/pg_data
  bin_dir: /data/pgsql/bin
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: P@sswrd
    rewind:  # Has no effect on postgres 10 and lower
      username: postgres
      password: P@sswrd
  # Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#   krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'
  # Additional fencing script executed after acquiring the leader lock but before promoting the replica
  #pre_promote: /path/to/pre_promote.sh

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false
------------------------------------------------------------------------------
node2

scope: batman
namespace: /service/
name: postgresql2

restapi:
  listen: 192.168.19.72:8008
  connect_address: 192.168.19.72:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  #Provide host to do the initial discovery of the cluster topology:
  host: 192.168.19.72:2379
  #Or use "hosts" to provide multiple endpoints
  #Could be a comma separated string:
  #hosts: host1:port1,host2:port2
  #host: 192.168.19.71:2379,192.168.19.72:2379,192.168.19.73:2379
  #or an actual yaml list:
  #hosts:
  #- host1:port1
  #- host2:port2
  #Once discovery is complete Patroni will use the list of advertised clientURLs
  #It is possible to change this behavior by setting:
  #use_proxies: true

#raft:
#  data_dir: .
#  self_addr: 192.168.19.72:2222
#  partner_addrs:
#  - 192.168.19.72:2223
#  - 192.168.19.72:2224

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
#    synchronous_mode: false
    #standby_cluster:
      #host: 192.168.19.72
      #port: 1111
      #primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
#      use_slots: true
      parameters:
#        wal_level: hot_standby
#        hot_standby: "on"
#        max_connections: 100
#        max_worker_processes: 8
#        wal_keep_segments: 8
#        max_wal_senders: 10
#        max_replication_slots: 10
#        max_prepared_transactions: 0
#        max_locks_per_transaction: 64
#        wal_log_hints: "on"
#        track_commit_timestamp: "off"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 192.168.19.72/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 192.168.19.72/32 md5
  - host all all 0.0.0.0/0 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing the new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.19.72:5432
  connect_address: 192.168.19.72:5432
  data_dir: /data/pg_data
  bin_dir: /data/pgsql/bin
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: P@sswrd
    rewind:  # Has no effect on postgres 10 and lower
      username: postgres
      password: P@sswrd
  # Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#   krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'
  # Additional fencing script executed after acquiring the leader lock but before promoting the replica
  #pre_promote: /path/to/pre_promote.sh

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

------------------------------------------------------------------------------
node3

scope: batman
namespace: /service/
name: postgresql3

restapi:
  listen: 192.168.19.73:8008
  connect_address: 192.168.19.73:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  #Provide host to do the initial discovery of the cluster topology:
  host: 192.168.19.73:2379
  #Or use "hosts" to provide multiple endpoints
  #Could be a comma separated string:
  #hosts: host1:port1,host2:port2
  #host: 192.168.19.71:2379,192.168.19.72:2379,192.168.19.73:2379
  #or an actual yaml list:
  #hosts:
  #- host1:port1
  #- host2:port2
  #Once discovery is complete Patroni will use the list of advertised clientURLs
  #It is possible to change this behavior by setting:
  #use_proxies: true

#raft:
#  data_dir: .
#  self_addr: 192.168.19.73:2222
#  partner_addrs:
#  - 192.168.19.73:2223
#  - 192.168.19.73:2224

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
#    synchronous_mode: false
    #standby_cluster:
      #host: 192.168.19.73
      #port: 1111
      #primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
#      use_slots: true
      parameters:
#        wal_level: hot_standby
#        hot_standby: "on"
#        max_connections: 100
#        max_worker_processes: 8
#        wal_keep_segments: 8
#        max_wal_senders: 10
#        max_replication_slots: 10
#        max_prepared_transactions: 0
#        max_locks_per_transaction: 64
#        wal_log_hints: "on"
#        track_commit_timestamp: "off"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 192.168.19.73/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 192.168.19.73/32 md5
  - host all all 0.0.0.0/0 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing the new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.19.73:5432
  connect_address: 192.168.19.73:5432
  data_dir: /data/pg_data
  bin_dir: /data/pgsql/bin
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: P@sswrd
    rewind:  # Has no effect on postgres 10 and lower
      username: postgres
      password: P@sswrd
  # Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#   krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'
  # Additional fencing script executed after acquiring the leader lock but before promoting the replica
  #pre_promote: /path/to/pre_promote.sh

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

7. Start the Patroni service on each node in turn

Start it under the postgres user:
nohup patroni /usr/patroni/conf/patroni_postgresql.yml &

8. Create a systemd unit for Patroni

For convenient start-at-boot, Patroni is configured as patroni.service; all three nodes need this. Once patroni.service is in place, you can switch the leader, restart PostgreSQL nodes and so on directly from the root user.

[root@pgsql_1971 ~]# vim /etc/systemd/system/patroni.service
[Unit]
Description=patroni - a high-availability PostgreSQL
Documentation=https://patroni.readthedocs.io/en/latest/index.html
After=syslog.target network.target etcd.target
Wants=network-online.target
 
[Service]
Type=simple
User=postgres
Group=postgres
PermissionsStartOnly=true
ExecStart=/usr/local/bin/patroni /usr/patroni/conf/patroni_postgresql.yml
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536
KillMode=process
KillSignal=SIGINT
Restart=on-abnormal
RestartSec=30s
TimeoutSec=0
 
[Install]
WantedBy=multi-user.target

9. Disable the postgresql unit and manage the database through Patroni

Disable PostgreSQL's own auto-start; from here on, PostgreSQL is managed through Patroni.

systemctl stop postgresql
systemctl status postgresql
systemctl disable postgresql

systemctl status patroni
systemctl start patroni
systemctl enable patroni

V. Cluster checks

1. Database cluster check

patronictl -c /usr/patroni/conf/patroni_postgresql.yml list
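haproxy (part VI) health-checks the Patroni REST API on port 8008: /master returns HTTP 200 only on the current leader and /replica only on a standby. The endpoints can be probed directly:

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.19.71:8008/master
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.19.71:8008/replica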

2. etcd check

[root@pgsql_1971 ~]# /usr/local/etcd/etcdctl ls /service/batman
[root@pgsql_1971 ~]# /usr/local/etcd/etcdctl get /service/batman/members/postgresql1

VI. haproxy deployment (the two standby nodes)

1. Install haproxy

yum install -y haproxy
cp -r /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak

2. Configuration file

vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Global settings
global
    # log syntax: log <address> <facility> [max_level_1]
    # Global log configuration: send logs to the syslog service on 127.0.0.1,
    # device local0, at level info
#   log         127.0.0.1 local0 info
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid

    # Maximum number of connections per haproxy process. Each connection has a
    # client side and a server side, so the per-process TCP session count can
    # be up to twice this value.
    maxconn     4096

    # User and group
    user        haproxy
    group       haproxy

    # Run as a daemon
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
 
#---------------------------------------------------------------------
# Defaults
defaults
    # mode syntax: mode {http|tcp|health}. http is layer 7, tcp is layer 4,
    # health is a health check that simply returns OK
    mode tcp
    # Log errors to the syslog service on 127.0.0.1, device local3
    log 127.0.0.1 local3 err

    # if you set mode to http, then you must change tcplog into httplog
    option     tcplog

    # Do not log null connections, i.e. the periodic probe connections that an
    # upstream load balancer or monitoring system makes just to check whether
    # the service is alive or a port is listening. The official docs advise
    # against this option when there is no upstream load balancer, because
    # malicious scans from the internet would then go unlogged.
    option     dontlognull

    # Number of retries when a connection to a backend server fails; once this
    # is exceeded, the server is marked unavailable
    retries    3

    # When cookies are used, haproxy inserts the backend serverID into the
    # cookie for session persistence. If that backend goes down, the client's
    # cookie does not refresh; with this option set, the request is forced to
    # another backend server so service continues normally.
    option redispatch

    # Maximum queue wait. When a server's maxconn is reached, connections are
    # left pending in a queue which may be server-specific or global to the backend.
    timeout queue           525600m

    # Maximum time to wait for a successful connection to a server; the default
    # unit is milliseconds
    timeout connect         10s

    # Client inactivity timeout. The inactivity timeout applies when the client
    # is expected to acknowledge or send data.
    timeout client          525600m

    # Set the maximum inactivity time on the server side. The inactivity timeout
    # applies when the server is expected to acknowledge or send data.
    timeout server          525600m
    timeout check           5s
    maxconn                 5120
 
#---------------------------------------------------------------------
# haproxy web stats page
listen status
    bind 0.0.0.0:1080
    mode http
    log global

    stats enable
    # Refresh interval of the stats page
    stats refresh 30s
    stats uri /haproxy-stats
    # Prompt shown when authenticating to the stats page
    stats realm Private lands
    # Stats page credentials; to add more users, add another line like this one
    stats auth admin:passw0rd
    # Hide the haproxy version on the stats page
#    stats hide-version
     
#---------------------------------------------------------------------
listen master
    bind *:5000
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.19.71:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.19.72:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node3 192.168.19.73:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2

listen replicas
    bind *:5001
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.19.71:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.19.72:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node3 192.168.19.73:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2

3. Start the service and enable it at boot

systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy
Browse to http://192.168.19.72:1080/haproxy-stats and log in with username admin and password passw0rd.

Here, port 5000 provides the write service and port 5001 the read service. Applications that write to the database only need the port-5000 endpoint (once keepalived is in place, the VIP 192.168.19.110:5000). You can simulate a primary failure by shutting down the current master node and verifying that an automatic switchover occurs, as sketched below.
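A minimal switchover test against one haproxy node (once keepalived is up, the VIP 192.168.19.110 can be used instead):

Check which backend currently serves writes
psql -h 192.168.19.72 -p 5000 -U postgres -c "select inet_server_addr();"
Stop Patroni on the node reported above to simulate a primary failure
systemctl stop patroni
Re-run the query; after failover it should report a different node's address
psql -h 192.168.19.72 -p 5000 -U postgres -c "select inet_server_addr();"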

VII. keepalived deployment (the two standby nodes)

1. Install keepalived

yum install -y keepalived

2. Update the configuration

pg-node1
cat keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost   # mail recipient
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_01
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1221
    }
    virtual_ipaddress {
        192.168.19.110/24 dev eth0 label eth0:0
    }
}
-------------------------------------------------------------------------------
pg-node2
cat keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost   # mail recipient
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_02
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1221
    }
    virtual_ipaddress {
        192.168.19.110/24 dev eth0 label eth0:0
    }
}

3. Start the keepalived service

systemctl restart keepalived
systemctl enable keepalived
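Check that the VIP is bound on the MASTER node:

ip addr show eth0 | grep 192.168.19.110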

4. Externally exposed endpoints

VIP: 192.168.19.110
Port 5000: read/write
Port 5001: read-only
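Clients then connect only through the VIP, for example:

Writes (routed to the current primary)
psql -h 192.168.19.110 -p 5000 -U postgres
Reads (balanced across the replicas)
psql -h 192.168.19.110 -p 5001 -U postgres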

 

Note: one open issue with this deployment is the mutual dependency between haproxy and keepalived. My view is that failover should happen only when a machine actually dies, so I did not add a health-check script. If anything else looks off, suggestions are very welcome!

 

