A pgpool-II-Based Two-Node PostgreSQL High Availability and Load Balancing Solution


1 Introduction

1.1 Overview

pgpool-II is proxy software that sits between PostgreSQL servers and PostgreSQL database clients. It provides connection pooling, load balancing, automatic failover, online recovery, and other features. This document describes a pgpool-II-based solution that, on two servers, provides high availability for the pgpool-II service itself as well as high availability and load balancing for PostgreSQL.

1.2 Software Introduction

1.2.1 pgpool-II

pgpool-II is proxy software that sits between PostgreSQL servers and PostgreSQL database clients. It provides the following features:

Connection pooling

Pgpool-II maintains established connections to the PostgreSQL servers and reuses them whenever a new connection with the same properties (i.e. user name, database, protocol version, and other connection parameters, if any) arrives. This reduces connection overhead and improves the overall throughput of the system.

Load balancing

If the database is replicated (because it runs in replication mode or master/slave mode), executing a SELECT query on any server returns the same result. Pgpool-II takes advantage of replication to reduce the load on each PostgreSQL server: it distributes SELECT queries among the available servers, which improves the overall throughput of the system. In the ideal case, read performance scales in proportion to the number of PostgreSQL servers. Load balancing works best when many users execute many read-only queries at the same time.

Automatic failover

If one of the database servers goes down or becomes unreachable, Pgpool-II detaches it and continues operating with the remaining database servers. Several sophisticated features, including timeouts and retries, assist with automatic failover.

Online recovery

Pgpool-II can perform online recovery of a database node with a single command. When online recovery is used together with automatic failover, a node detached by failover can be automatically added back as a standby node. It is also possible to synchronize and add a new PostgreSQL server.

Replication

Pgpool-II can manage multiple PostgreSQL servers. Activating the replication feature makes it possible to keep a real-time backup on two or more PostgreSQL clusters, so that the service can continue without interruption if one of the clusters fails.

Watchdog

The watchdog coordinates multiple Pgpool-II instances to create a robust cluster and to avoid the single point of failure and split brain. The watchdog performs lifechecks against the other pgpool-II nodes to detect Pgpool-II failures. If the active Pgpool-II fails, a standby Pgpool-II can be promoted to active and take over the virtual IP.

In-memory query cache

The in-memory query cache stores pairs of SELECT statements and their results. When an identical SELECT arrives, Pgpool-II returns the value from the cache. Since no SQL parsing and no access to PostgreSQL are involved, serving from the in-memory cache is very fast. On the other hand, it may be slower than the normal path in some cases, because it adds the overhead of storing the cache data.

Pgpool-II speaks PostgreSQL's backend and frontend protocol and relays messages between backend and frontend. Therefore, a database application (frontend) thinks that Pgpool-II is the actual PostgreSQL server, and the server (backend) sees Pgpool-II as one of its clients. Because Pgpool-II is transparent to both server and client, an existing database application can almost always be used with Pgpool-II without any changes to its source code.
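As an illustration, once the cluster described later in this document is running, a client simply points psql at the address where pgpool-II listens (here, the virtual IP 10.40.239.240 and port 9999 configured in section 2.4.2) instead of at a PostgreSQL server directly:

[postgres@node228 bin]# ./psql -h 10.40.239.240 -p 9999 -U postgres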

1.2.2 PostgreSQL

PostgreSQL is a powerful open-source object-relational database system with more than 30 years of active development that has earned a strong reputation for reliability, feature robustness, and performance.

1.3 Architecture

As shown in the figure, PostgreSQL and pgpool-II are deployed on each of the two servers. The two PostgreSQL instances keep their data in sync through streaming replication. pgpool-II monitors the state of the database cluster, manages it, and dispatches user requests to the database nodes. In addition, the pgpool-II nodes monitor each other and share information, and the primary pgpool-II node brings up a virtual IP that serves as the externally visible service address.

 

 

1.4 Features of the Solution

1. High availability of the pgpool-II service

In this solution, when the primary pgpool-II node stops, the other node immediately takes over and continues serving clients, unless the pgpool service on both nodes is down.
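For example, once the cluster described in section 2.4 is running, the watchdog state of the two pgpool-II nodes can be inspected with the pcp_watchdog_info tool, using the pcp user configured in section 2.4.2 (9898 is the pcp_port set in pgpool.conf; the command prompts for the pcp password):

[postgres@node228 ~]$ pcp_watchdog_info -h localhost -p 9898 -U pgpool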

 

2. High availability and online recovery of PostgreSQL

If the primary database stops, a switchover is performed: the old standby becomes the new primary, and the old primary is turned into a standby of the new primary. If converting the old primary into a standby fails, its data is rebuilt with a full copy from the new primary.

If the server hosting the primary goes down, the standby is promoted to primary; when the old primary's server comes back up, it is automatically reconfigured as a standby of the new primary.

If the standby service stops, it is automatically restarted.

If the standby service stops and cannot be restarted, its data is rebuilt with a full copy from the primary.

If the server hosting the standby goes down, nothing is done; when the standby's server comes back up, it automatically rejoins as a standby.

 

3. Load balancing

Write requests that clients send to PostgreSQL through pgpool-II are routed to the primary, while read requests may be sent to either the primary or the standby.
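As a quick check once everything is running, you can ask pgpool-II itself how requests are being distributed: its show pool_nodes command, issued through a normal client connection, lists each backend node together with its role and the number of SELECT statements dispatched to it.

[postgres@node228 bin]# ./psql -h 10.40.239.240 -p 9999 -U postgres -c "show pool_nodes;"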

1.5 Runtime Environment

This solution requires two servers, configured as follows:

 

Hardware

Memory: 32 GB

CPU: 16 logical CPUs

 

Operating system:

CentOS 7.4

 

Main software:

yum 3.4.3

python 2.7.6

PostgreSQL 11.4

pgpool-II 4.1

 

The IP address of the primary server is 10.40.239.228 and its hostname is node228.

The IP address of the standby server is 10.40.239.229 and its hostname is node229.

The virtual IP through which the cluster provides service to clients is 10.40.239.240.
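The configuration below refers to the servers by hostname (node228, node229), so both machines must be able to resolve these names. In this write-up we assume this is done with entries in /etc/hosts on both servers (DNS works just as well):

10.40.239.228   node228
10.40.239.229   node229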

2 Implementation

2.1 Installing PostgreSQL

Perform the following operations on both servers.

 

1. Disable the firewall and SELinux:

[root@node228 ~]# systemctl  disable --now  firewalld

[root@node228 ~]# sed   -e   '/^ \{0,\}SELINUX=/c\SELINUX=disabled'  -i  /etc/selinux/config

[root@node228 ~]# setenforce 0

 

2. Create the user postgres; its group is postgres and its home directory is /var/lib/pgsql:

[root@node228 ~]# useradd -m -d /var/lib/pgsql postgres

 

3. Set a password for the user postgres; it will be needed in section 2.3 when configuring passwordless SSH authentication:

[root@node228 ~]# passwd postgres

 

4. Install the packages that PostgreSQL depends on:

[root@node228 ~]# yum -y install readline-devel.x86_64 zlib-devel.x86_64 gcc.x86_64

[root@node228 ~]# yum -y install python python-devel

 

5. Download the PostgreSQL 11.4 source code.


6. On the CentOS servers, extract the downloaded file:

[root@node228 ~]# tar zxf postgresql-11.4.tar.gz

 

7. Enter the extracted directory, then compile and install PostgreSQL. In this document, the installation directory is /opt/pg114:

[root@node228 ~]# cd postgresql-11.4

[root@node228 postgresql-11.4]# ./configure --prefix=/opt/pg114/ --with-perl --with-python

[root@node228 postgresql-11.4]# make world

[root@node228 postgresql-11.4]# make install-world

 

8. Change the owner of the installation directory to postgres:

[root@node228 postgresql-11.4]# chown -R postgres.postgres /opt/pg114/

 

9. As the user postgres, create the database data directory:

[root@node228 postgresql-11.4]# su postgres

[postgres@node228 postgresql-11.4]# cd /opt/pg114/bin

[postgres@node228 bin]# ./initdb -D /opt/pg114/data -E utf8 --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8'

 

10. Log in to the database and change the password of the postgres user; here we set it to abc12345:

[postgres@node228 bin]# ./psql -h 127.0.0.1 -U postgres -p 5432

After connecting successfully, execute the following commands in the database:

alter user postgres password 'abc12345';

\q

 

11. Edit /opt/pg114/data/pg_hba.conf to set the client addresses that are allowed to connect, with the authentication method md5:

#TYPE  DATABASE        USER             ADDRESS                 METHOD

local   all             all                                     md5

host    all             all             127.0.0.1/32            md5

host    all             all             ::1/128                 md5

 

Additional access control entries can be added as needed.

 

12. Adjust parameters in postgresql.conf as needed. For example:

listen_addresses = '*'

logging_collector = on

 

2.2 Setting Up the PostgreSQL Primary/Standby Environment

2.2.1 Operations on the primary node

1. Make sure the service is started. Run the following commands to switch to the postgres user, enter PostgreSQL's bin directory, and start the service:

[root@node228 ~]# su postgres

[postgres@node228 root]# cd /opt/pg114/bin/

[postgres@node228 bin]# ./pg_ctl start -D ../data

 

2. Create the user for streaming replication and the user for checking the status of the pgpool cluster. Run the following command to open the PostgreSQL console:

[postgres@node228 postgres]# ./psql -h 127.0.0.1 -p 5432 -U postgres

Here, 127.0.0.1 is the local loopback address, 5432 is the database port, and postgres is the user that connects to the database.

Execute the following statements to create the users:

create user repuser with login replication password 'repuser123';

create user checkuser with login password 'checkuser123';

 

Here, the user repuser is used for streaming replication and its password is "repuser123".

The user checkuser is used to check the status of the pgpool cluster and its password is "checkuser123".

 

3. Edit the pg_hba.conf file and add the following entries to allow the replication user repuser and the superuser postgres to log in from both machines:

host    replication     repuser         10.40.239.228/32        md5

host    replication     repuser         10.40.239.229/32        md5

host    all             all             10.40.239.228/32        md5

host    all             all             10.40.239.229/32        md5

 

Additional access control entries can be added as needed.

 

 

4. In the primary node's database configuration file postgresql.conf (located in the data directory under the installation directory), set these parameters:

listen_addresses = '*'

max_wal_senders = 10

wal_level = replica

wal_log_hints = on

wal_keep_segments = 128

wal_receiver_status_interval = 5s

hot_standby_feedback = on

hot_standby = on

 

The meanings of these parameters are as follows:

listen_addresses is the address the server listens on; * means any address.

max_wal_senders is the maximum number of concurrent connections from standby servers or streaming base-backup clients.

wal_level is the WAL level; for streaming replication it should be set to replica.

wal_log_hints = on means that, during the first modification of a page after a checkpoint, the PostgreSQL server writes the entire content of that disk page to WAL, even for non-critical modifications of so-called hint bits.

wal_keep_segments specifies the minimum number of past WAL segment files kept in the pg_wal directory (pg_xlog in PostgreSQL 9.6 and earlier), in case a standby server needs to fetch them for streaming replication.

log_connections controls whether client connections to the server are recorded in the log.

wal_receiver_status_interval specifies the minimum interval at which the WAL receiver process on the standby sends information about replication progress to the primary or upstream standby.

hot_standby_feedback specifies whether a hot standby sends feedback to the primary or upstream standby about queries currently being executed on the standby; here it is set to on.

hot_standby applies to a standby server in streaming replication and controls whether clients may connect and run queries on the standby during recovery; on means they may.

For details, refer to the official PostgreSQL documentation.

 

5. Restart the primary node:

[postgres@node228 bin]# ./pg_ctl restart -D /opt/pg114/data

 

6. After the restart, connect to the database and execute the following SQL to create a replication slot for the primary and one for the standby server:

select * from pg_create_physical_replication_slot('node228');

select * from pg_create_physical_replication_slot('node229');

 

The purpose of a replication slot is:

1. In streaming replication, when a standby disconnects, the feedback data it provided through hot_standby_feedback is lost. When the standby reconnects, it may run into query conflicts because the primary has meanwhile sent cleanup records. A replication slot keeps recording the standby's xmin (the oldest transaction ID that the slot requires the database to retain) even while the standby is disconnected, ensuring that no cleanup conflicts occur.

2. When a standby disconnects, information about which WAL files the standby still needs is also lost. Without a replication slot, the primary may already have discarded the required WAL files by the time the standby reconnects, in which case the standby has to be rebuilt from scratch.

A replication slot ensures that this node retains all the WAL files needed by its downstream nodes.
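After creating the slots, you can check their state at any time with a simple query (an optional sanity check, not required by the procedure):

select slot_name, slot_type, active, restart_lsn from pg_replication_slots;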

2.2.2 Operations on the standby node

1. Switch to the postgres user and make sure the database service on the standby node is stopped:

[root@node229 ~]# su postgres

[postgres@node229 root]# cd /opt/pg114/bin/

[postgres@node229 bin]# ./pg_ctl stop -D ../data

If the service was running, this stops it.

 

2. First, delete the files in the standby node's data directory:

[postgres@node229 bin]# rm -rf /opt/pg114/data/*

Then use pg_basebackup to copy the primary's data to the standby:

[postgres@node229 bin]# ./pg_basebackup -Xs -d "hostaddr=10.40.239.228 port=5432 user=repuser password=repuser123" -D /opt/pg114/data -v -Fp

 

Here, -Xs means the WAL is copied in stream mode, which does not copy WAL files that were already archived before the backup started; -d is followed by a connection string, in which "hostaddr=10.40.239.228" is the primary server's IP address, "port=5432" is the database port, "user=repuser" is the streaming replication user, and "password=repuser123" is its password; "-D /opt/pg114/data" writes the backup into the local /opt/pg114/data directory; "-v" prints verbose messages; and "-Fp" outputs the result as plain files.

 

3. Copy recovery.conf.sample from /opt/pg114/share/ to /opt/pg114/data and rename it recovery.conf:

[postgres@node229 bin]# cp /opt/pg114/share/recovery.conf.sample /opt/pg114/data/recovery.conf

 

Edit /opt/pg114/data/recovery.conf and set the following parameters:

recovery_target_timeline = 'latest'

standby_mode = on

primary_conninfo = 'host=10.40.239.228 port=5432 user=repuser password=repuser123'

primary_slot_name = 'node229'

trigger_file = 'tgfile'

 

The meanings of these parameters are as follows:

recovery_target_timeline specifies the point on the database's timeline to recover to; here it is set to latest, i.e. the most recent timeline.

standby_mode specifies whether to start the PostgreSQL server as a standby; here it is set to on, i.e. standby mode.

primary_conninfo specifies the connection string the standby uses to connect to the primary, in which "host=10.40.239.228" is the primary server's IP address, "port=5432" is the database port, "user=repuser" is the streaming replication user, and "password=repuser123" is its password.

primary_slot_name specifies an existing replication slot to use when connecting to the primary via streaming replication, in order to control resource removal on the upstream node. Here we specify node229, which was created in section 2.2.1. If no replication slot was created on the primary, do not set this parameter.

trigger_file specifies a trigger file whose presence ends recovery on the standby and turns it into a primary.
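For illustration only: since trigger_file is given as a relative path, it is resolved against the data directory (the server's working directory), so the standby could later be promoted manually by creating that file. This is not part of the normal setup procedure; in this solution promotion is driven by the failover command configured in pgpool.conf.

[postgres@node229 bin]# touch /opt/pg114/data/tgfile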

 

4. Start the database service on the standby node:

[postgres@node229 bin]# ./pg_ctl start -D /opt/pg114/data

2.2.3 Verifying the primary/standby environment

1. Create a table on the primary node and insert a row:

postgres=# create table man  (id int, name text);

CREATE TABLE

postgres=# insert into man  (id, name)  values  (1,'tom');

INSERT 0 1

 

2. Check it on the standby node:

postgres=# select * from man;

 id | name

----+------

  1 | tom

As you can see, the primary node's data has been replicated to the standby.

 

3. Meanwhile, writing data on the standby node fails:

postgres=# insert into man (id, name)  values  (2,'amy');

ERROR:  cannot execute INSERT in a read-only transaction
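In addition, you can confirm on the primary that the standby is attached and streaming (the exact output depends on your environment):

postgres=# select client_addr, state, sync_state from pg_stat_replication;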

 

2.3 Configuring Passwordless SSH Authentication

Perform the following operations on both servers.

1. As the user postgres, generate an SSH key pair (it contains a public key and a private key):

[root@node228 ~]# su - postgres

[postgres@node228 ~]$ ssh-keygen -t rsa

  

2. Run the following commands to copy this server's public key (the content of ~/.ssh/id_rsa.pub) into the authentication file (~/.ssh/authorized_keys) on both servers:

[postgres@node228 ~]# ssh-copy-id  postgres@10.40.239.228

[postgres@node228 ~]# ssh-copy-id  postgres@10.40.239.229
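You can then verify, from each server, that the user postgres can reach the other server without being prompted for a password, for example:

[postgres@node228 ~]$ ssh postgres@node229 hostname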

 

2.4 Setting Up the pgpool Primary/Standby Environment

We need to install and configure pgpool-II on node228 and node229. The steps are as follows:

2.4.1 Installing pgpool

1. Disable the firewall:

[root@node228 ~]# systemctl  disable --now  firewalld

 

2. Disable SELinux:

[root@node228 ~]# sed   -e   '/^ \{0,\}SELINUX=/c\SELINUX=disabled'  -i  /etc/selinux/config

[root@node228 ~]# setenforce 0

 

3. Install the pgpool yum repository (requires Internet access):

[root@node228 ~]# yum install http://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-release-4.1-1.noarch.rpm

 

4. Install the pgpool packages:

[root@node228 ~]# yum -y install  pgpool-II-pg11-debuginfo.x86_64  pgpool-II-pg11-devel.x86_64   pgpool-II-pg11-extensions.x86_64  pgpool-II-release.noarch

 

After pgpool is installed successfully, the directory /etc/pgpool-II/ is created; it contains pgpool's client-side and server-side configuration files.

 

5. Change the owner of the files in this directory to postgres:

[root@node228 ~]# chown postgres.postgres /etc/pgpool-II/*

2.4.2 Configuring pgpool

All operations in this section are performed as the user postgres.

 

1. Switch to the user postgres and enter the directory /etc/pgpool-II/:

[root@node228 ~]# su postgres

[postgres@node228 ~]# cd /etc/pgpool-II/

 

2. Edit the pgpool server configuration file pcp.conf and set its permissions so that only the owner can read and write it. It records the name and password of the user used to manage the pgpool-II cluster. Its format is:

username:password (MD5 hash)

 

In this document, the pcp user is pgpool and its password is pgpool. Run the following commands:

[postgres@node228 ~]# cd /etc/pgpool-II/

[postgres@node228 pgpool-II]# echo pgpool:$(pg_md5 -u pgpool pgpool) >/etc/pgpool-II/pcp.conf

[postgres@node228 pgpool-II]# chmod 600 /etc/pgpool-II/pcp.conf

 

3. Edit the pgpool server configuration file pool_hba.conf. It restricts access to pgpool and uses the same format as PostgreSQL's pg_hba.conf. We can first copy the contents of pg_hba.conf into pool_hba.conf and then add access control entries as needed. The command is:

 

[postgres@node228 pgpool-II]# cp /opt/pg114/data/pg_hba.conf  /etc/pgpool-II/pool_hba.conf

 

4. Create the pgpool server configuration file pool_passwd and set its permissions so that only the owner can read and write it. It records the names and passwords (as MD5 hashes) of the users that are allowed to access the database through pgpool. Its format is:

username:password (MD5 hash)

 

Specifically, run the following commands:

[postgres@node228 pgpool-II]# /opt/pg114/bin/psql -U postgres -h 10.40.239.228 -p 5432 -t -c "select rolname || ':' || rolpassword from pg_authid where rolpassword is not null ;"    > /etc/pgpool-II/pool_passwd

[postgres@node228 pgpool-II]# sed -e 's/ //g' -i /etc/pgpool-II/pool_passwd

[postgres@node228 pgpool-II]# chmod 600 /etc/pgpool-II/pool_passwd
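The resulting file should contain one line per database user, for example (the hashes below are placeholders, not real values):

postgres:md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
repuser:md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
checkuser:md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx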

 

 

 

5. In the current directory, create the PostgreSQL password file .pgpass and the password file .pcppass for the pcp user (i.e. the user that manages the pgpool-II cluster), and set their permissions so that only the owner can read and write them. They provide the credentials for accessing the databases and for managing pgpool-II, respectively.

 

Note that the format of the .pgpass file is:

hostname:port:database:username:password

The format of the .pcppass file is:

hostname:port:username:password

 

Edit the .pgpass file with the following content:

10.40.239.228:5432:replication:repuser:repuser123

10.40.239.229:5432:replication:repuser:repuser123

node228:5432:*:repuser:repuser123

node229:5432:*:repuser:repuser123

127.0.0.1:5432:*:postgres:abc12345

localhost:5432:*:postgres:abc12345

10.40.239.228:5432:*:postgres:abc12345

10.40.239.229:5432:*:postgres:abc12345

node228:5432:*:postgres:abc12345

node229:5432:*:postgres:abc12345

 

Edit the .pcppass file with the following content:

localhost:9898:pgpool:pgpool

127.0.0.1:9898:pgpool:pgpool

 

Set their permissions so that only the owner can read and write them:

[postgres@node228 pgpool-II]# chmod 600 /etc/pgpool-II/.pgpass

[postgres@node228 pgpool-II]# chmod 600 /etc/pgpool-II/.pcppass

 

[Tip]

As you may know, pgpool can also read .pgpass and .pcppass if they are placed in the home directory of the user postgres. However, we do not recommend this. These password files should only be kept in pgpool-II's working directory, to prevent other applications from reading or modifying them.

 

6. Edit the pgpool configuration file pgpool.conf and set its permissions so that only the owner can read and write it. By default this file is in the /etc/pgpool-II/ directory and serves as the configuration file of the pgpool cluster.

 

Upload the following pgpool.conf to the /etc/pgpool-II directory on node 10.40.239.228:

# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# Whitespace may be used.  Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload".  Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#


#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------

# - pgpool Connection Settings -

listen_addresses = '*'
                                   # Host name or IP address to listen on:
                                   # '*' for all, '' for no TCP/IP connections
                                    # (change requires restart)
port = 9999
                                   # Port number
                                   # (change requires restart)
socket_dir = '/opt/pgpool/'
                                   # Unix domain socket path
                                   # The Debian package defaults to
                                   # /var/run/postgresql
                                   # (change requires restart)
reserved_connections = 0
                                   # Number of reserved connections.
                                   # Pgpool-II does not accept connections if over
                                   # num_init_chidlren - reserved_connections.

# - pgpool Communication Manager Connection Settings -

pcp_listen_addresses = '*'
                                   # Host name or IP address for pcp process to listen on:
                                   # '*' for all, '' for no TCP/IP connections
                                   # (change requires restart)
pcp_port = 9898
                                   # Port number for pcp
                                   # (change requires restart)
pcp_socket_dir = '/opt/pgpool/'
                                   # Unix domain socket path for pcp
                                  
                                   # The Debian package defaults to
                                   # /var/run/postgresql
                                   # (change requires restart)
listen_backlog_multiplier = 2
                                   # Set the backlog parameter of listen(2) to
                                   # num_init_children * listen_backlog_multiplier.
                                   # (change requires restart)
serialize_accept = off
                                   # whether to serialize accept() call to avoid thundering herd problem
                                   # (change requires restart)

# - Backend Connection Settings -

backend_hostname0 = 'node228'
                                   # Host name or IP address to connect to for backend 0
backend_port0 = 5432
                                   # Port number for backend 0
backend_weight0 = 1
                                   # Weight for backend 0 (only in load balancing mode)
backend_data_directory0 = '/opt/pg114/data/'
                                   # Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
                                   # Controls various backend behavior
                                   # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
                                   # or ALWAYS_MASTER
backend_application_name0 = 'server0'
                                   # walsender's application_name, used for "show pool_nodes" command
                                   
backend_hostname1 = 'node229'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/opt/pg114/data/'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'server1'


# - Authentication -

enable_pool_hba = on
                                   # Use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
                                   # File name of pool_passwd for md5 authentication.
                                   # "" disables pool_passwd.
                                   # (change requires restart)
authentication_timeout = 60
                                   # Delay in seconds to complete client authentication
                                   # 0 means no timeout.

allow_clear_text_frontend_auth = off
                                   # Allow Pgpool-II to use clear text password authentication
                                   # with clients, when pool_passwd does not
                                   # contain the user password


# - SSL Connections -

ssl = off
                                   # Enable SSL support
                                   # (change requires restart)
#ssl_key = './server.key'
                                   # Path to the SSL private key file
                                   # (change requires restart)
#ssl_cert = './server.cert'
                                   # Path to the SSL public certificate file
                                   # (change requires restart)
#ssl_ca_cert = ''
                                   # Path to a single PEM format file
                                   # containing CA root certificate(s)
                                   # (change requires restart)
#ssl_ca_cert_dir = ''
                                   # Directory containing CA root certificate(s)
                                   # (change requires restart)

ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
                                   # Allowed SSL ciphers
                                   # (change requires restart)
ssl_prefer_server_ciphers = off
                                   # Use server's SSL cipher preferences,
                                   # rather than the client's
                                   # (change requires restart)
ssl_ecdh_curve = 'prime256v1'
                                   # Name of the curve to use in ECDH key exchange
ssl_dh_params_file = ''
                                   # Name of the file containing Diffie-Hellman parameters used
                                   # for so-called ephemeral DH family of SSL cipher.

#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------

# - Concurrent session and pool size -

num_init_children = 32
                                   # Number of concurrent sessions allowed
                                   # (change requires restart)
max_pool = 4
                                   # Number of connection pool caches per connection
                                   # (change requires restart)

# - Life time -

child_life_time = 300
                                   # Pool exits after being idle for this many seconds
child_max_connections = 0
                                   # Pool exits after receiving that many connections
                                   # 0 means no exit
connection_life_time = 0
                                   # Connection to backend closes after being idle for this many seconds
                                   # 0 means no close
client_idle_limit = 0
                                   # Client is disconnected after being idle for that many seconds
                                   # (even inside an explicit transactions!)
                                   # 0 means no disconnection


#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------

# - Where to log -

log_destination = 'syslog'
                                   # Where to log
                                   # Valid values are combinations of stderr,
                                   # and syslog. Default to stderr.
                                 

# - What to log -

log_line_prefix = '%t: pid %p: '   # printf-style string to output at beginning of each log line.

log_connections = on
                                   # Log connections
log_hostname = on
                                   # Hostname will be shown in ps status
                                   # and in logs if connections are logged
log_statement = off
                                   # Log all statements
log_per_node_statement = off
                                   # Log all statements
                                   # with node and backend informations
log_client_messages = off
                                   # Log any client messages
log_standby_delay = 'if_over_threshold'
                                   # Log standby delay
                                   # Valid values are combinations of always,
                                   # if_over_threshold, none

# - Syslog specific -

syslog_facility = 'LOCAL0'
                                   # Syslog local facility. Default to LOCAL0
syslog_ident = 'pgpool'
                                   # Syslog program identification string
                                   # Default to 'pgpool'

# - Debug -

#log_error_verbosity = default          # terse, default, or verbose messages

#client_min_messages = notice           # values in order of decreasing detail:
                                        #   debug5
                                        #   debug4
                                        #   debug3
                                        #   debug2
                                        #   debug1
                                        #   log
                                        #   notice
                                        #   warning
                                        #   error

#log_min_messages = warning             # values in order of decreasing detail:
                                        #   debug5
                                        #   debug4
                                        #   debug3
                                        #   debug2
                                        #   debug1
                                        #   info
                                        #   notice
                                        #   warning
                                        #   error
                                        #   log
                                        #   fatal
                                        #   panic

#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

pid_file_name = '/opt/pgpool/pgpool.pid'
                                   # PID file name
                                   # Can be specified as relative to the"
                                   # location of pgpool.conf file or
                                   # as an absolute path
                                   # (change requires restart)
logdir = '/opt/pgpool/'
                                   # Directory of pgPool status file
                                   # (change requires restart)


#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------

connection_cache = on
                                   # Activate connection pools
                                   # (change requires restart)

                                   # Semicolon separated list of queries
                                   # to be issued at the end of a session
                                   # The default is for 8.3 and later
reset_query_list = 'ABORT; DISCARD ALL'
                                   # The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'


#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------

replication_mode = off
                                   # Activate replication mode
                                   # (change requires restart)
replicate_select = off
                                   # Replicate SELECT statements
                                   # when in replication mode
                                   # replicate_select is higher priority than
                                   # load_balance_mode.

insert_lock = off
                                   # Automatically locks a dummy row or a table
                                   # with INSERT statements to keep SERIAL data
                                   # consistency
                                   # Without SERIAL, no lock will be issued
lobj_lock_table = ''
                                   # When rewriting lo_creat command in
                                   # replication mode, specify table name to
                                   # lock

# - Degenerate handling -

replication_stop_on_mismatch = off
                                   # On disagreement with the packet kind
                                   # sent from backend, degenerate the node
                                   # which is most likely "minority"
                                   # If off, just force to exit this session

failover_if_affected_tuples_mismatch = off
                                   # On disagreement with the number of affected
                                   # tuples in UPDATE/DELETE queries, then
                                   # degenerate the node which is most likely
                                   # "minority".
                                   # If off, just abort the transaction to
                                   # keep the consistency


#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------

load_balance_mode = on
                                   # Activate load balancing mode
                                   # (change requires restart)
ignore_leading_white_space = on
                                   # Ignore leading white spaces of each query
white_function_list = ''
                                   # Comma separated list of function names
                                   # that don't write to database
                                   # Regexp are accepted
black_function_list = 'currval,lastval,nextval,setval,func_*,f_*'
                                   # Comma separated list of function names
                                   # that write to database
                                   # Regexp are accepted

black_query_pattern_list = ''
                                   # Semicolon separated list of query patterns
                                   # that should be sent to primary node
                                   # Regexp are accepted
                                   # valid for streaming replicaton mode only.

database_redirect_preference_list = ''
                                   # comma separated list of pairs of database and node id.
                                   # example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
                                   # valid for streaming replicaton mode only.
app_name_redirect_preference_list = ''
                                   # comma separated list of pairs of app name and node id.
                                   # example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
                                   # valid for streaming replicaton mode only.
allow_sql_comments = off
                                   # if on, ignore SQL comments when judging if load balance or
                                   # query cache is possible.
                                   # If off, SQL comments effectively prevent the judgment
                                   # (pre 3.4 behavior).

disable_load_balance_on_write = 'transaction'
                                   # Load balance behavior when write query is issued
                                   # in an explicit transaction.
                                   # Note that any query not in an explicit transaction
                                   # is not affected by the parameter.
                                   # 'transaction' (the default): if a write query is issued,
                                   # subsequent read queries will not be load balanced
                                   # until the transaction ends.
                                   # 'trans_transaction': if a write query is issued,
                                   # subsequent read queries in an explicit transaction
                                   # will not be load balanced until the session ends.
                                   # 'always': if a write query is issued, read queries will
                                   # not be load balanced until the session ends.

statement_level_load_balance = off
                                   # Enables statement level load balancing

#------------------------------------------------------------------------------
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------

master_slave_mode = on
                                   # Activate master/slave mode
                                   # (change requires restart)
master_slave_sub_mode = 'stream'
                                   # Master/slave sub mode
                                   # Valid values are combinations stream, slony
                                   # or logical. Default is stream.
                                   # (change requires restart)

# - Streaming -

sr_check_period = 10
                                   # Streaming replication check period
                                   # Disabled (0) by default
sr_check_user = 'repuser'
                                   # Streaming replication check user
                                   # This is neccessary even if you disable streaming
                                   # replication delay check by sr_check_period = 0

sr_check_password = 'repuser123'
                                   # Password for streaming replication check user
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

sr_check_database = 'postgres'
                                   # Database name for streaming replication check
delay_threshold = 10000000
                                   # Threshold before not dispatching query to standby node
                                   # Unit is in bytes
                                   # Disabled (0) by default

# - Special commands -

follow_master_command = '/etc/pgpool-II/follow_master.sh %d %h %D %H %r '
                                   # Executes this command after master failover
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character

#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------

health_check_period = 10
                                   # Health check period
                                   # Disabled (0) by default
health_check_timeout = 20
                                   # Health check timeout
                                   # 0 means no timeout
health_check_user = 'checkuser'
                                   # Health check user
health_check_password = 'checkuser123'
                                   # Password for health check user
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

health_check_database = 'postgres'
                                   # Database name for health check. If '', tries 'postgres' frist, then 'template1'

health_check_max_retries = 3
                                   # Maximum number of times to retry a failed health check before giving up.
health_check_retry_delay = 1
                                   # Amount of time to wait (in seconds) between retries.
connect_timeout = 10000
                                   # Timeout value in milliseconds before giving up to connect to backend.
                                   # Default is 10000 ms (10 second). Flaky network user may want to increase
                                   # the value. 0 means no timeout.
                                   # Note that this value is not only used for health check,
                                   # but also for ordinary conection to backend.

#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'checkuser'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000

#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------

failover_command = '/etc/pgpool-II/failover.sh %d %h %D %m %H %r %P %R '
                                   # Executes this command at failover
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character
failback_command = ''
                                   # Executes this command at failback.
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character

failover_on_backend_error = on
                                   # Initiates failover when reading/writing to the
                                   # backend communication socket fails
                                   # If set to off, pgpool will report an
                                   # error and disconnect the session.

detach_false_primary = off
                                   # Detach false primary if on. Only
                                   # valid in streaming replicaton
                                   # mode and with PostgreSQL 9.6 or
                                   # after.

search_primary_node_timeout = 3
                                   # Timeout in seconds to search for the
                                   # primary node when a failover occurs.
                                   # 0 means no timeout, keep searching
                                   # for a primary node forever.

#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------

recovery_user = 'nobody'
                                   # Online recovery user
recovery_password = ''
                                   # Online recovery password
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

recovery_1st_stage_command = ''
                                   # Executes a command in first stage
recovery_2nd_stage_command = ''
                                   # Executes a command in second stage
recovery_timeout = 90
                                   # Timeout in seconds to wait for the
                                   # recovering node's postmaster to start up
                                   # 0 means no wait
client_idle_limit_in_recovery = 0
                                   # Client is disconnected after being idle
                                   # for that many seconds in the second stage
                                   # of online recovery
                                   # 0 means no disconnection
                                   # -1 means immediate disconnection


auto_failback = on
                                   # Dettached backend node reattach automatically
                                   # if replication_state is 'streaming'.
auto_failback_interval = 30
                                   # Min interval of executing auto_failback in
                                   # seconds.

#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------

# - Enabling -

use_watchdog = on
                                    # Activates watchdog
                                    # (change requires restart)

# -Connection to up stream servers -

trusted_servers = 'node228,node229'
                                    # trusted server list which are used
                                    # to confirm network connection
                                    # (hostA,hostB,hostC,...)
                                    # (change requires restart)
ping_path = '/bin'
                                    # ping command path
                                    # (change requires restart)

# - Watchdog communication Settings -

wd_hostname = 'node228'
                                    # Host name or IP address of this watchdog
                                    # (change requires restart)
wd_port = 9000
                                    # port number for watchdog service
                                    # (change requires restart)
wd_priority = 1
                                    # priority of this watchdog in leader election
                                    # (change requires restart)

wd_authkey = ''
                                    # Authentication key for watchdog communication
                                    # (change requires restart)

wd_ipc_socket_dir = '/opt/pgpool'
                                    # Unix domain socket path for watchdog IPC socket
                                    # The Debian package defaults to
                                    # /var/run/postgresql
                                    # (change requires restart)


# - Virtual IP control Setting -

delegate_IP = '10.40.239.240'
                                    # delegate IP address
                                    # If this is empty, virtual IP never bring up.
                                    # (change requires restart)
if_cmd_path = '/usr/sbin'
                                    # path to the directory where if_up/down_cmd exists
                                    # If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
                                    # (change requires restart)
if_up_cmd = 'ip addr add $_IP_$/24 dev ens33 label ens33:0'
                                    # startup delegate IP command
                                    # (change requires restart)
if_down_cmd = 'ip addr del $_IP_$/24 dev ens33'
                                    # shutdown delegate IP command
                                    # (change requires restart)
arping_path = '/usr/sbin'
                                    # arping command path
                                    # If arping_cmd starts with "/", if_cmd_path will be ignored.
                                    # (change requires restart)
arping_cmd = 'arping -U  $_IP_$  -w 1 -I ens33'
                                    # arping command
                                    # (change requires restart)

# - Behaivor on escalation Setting -

clear_memqcache_on_escalation = on
                                    # Clear all the query cache on shared memory
                                    # when standby pgpool escalate to active pgpool
                                    # (= virtual IP holder).
                                    # This should be off if client connects to pgpool
                                    # not using virtual IP.
                                    # (change requires restart)
wd_escalation_command = ''
                                    # Executes this command at escalation on new active pgpool.
                                    # (change requires restart)
wd_de_escalation_command = ''
                                    # Executes this command when master pgpool resigns from being master.
                                    # (change requires restart)

# - Watchdog consensus settings for failover -

failover_when_quorum_exists = on
                                    # Only perform backend node failover
                                    # when the watchdog cluster holds the quorum
                                    # (change requires restart)

failover_require_consensus = on
                                    # Perform failover when majority of Pgpool-II nodes
                                    # aggrees on the backend node status change
                                    # (change requires restart)

allow_multiple_failover_requests_from_node = on
                                    # A Pgpool-II node can cast multiple votes
                                    # for building the consensus on failover
                                    # (change requires restart)

enable_consensus_with_half_votes = on
                                    # apply majority rule for consensus and quorum computation
                                    # at 50% of votes in a cluster with even number of nodes.
                                    # when enabled the existence of quorum and consensus
                                    # on failover is resolved after receiving half of the
                                    # total votes in the cluster, otherwise both these
                                    # decisions require at least one more vote than
                                    # half of the total votes.
                                    # (change requires restart)

# - Lifecheck Setting -

# -- common --

wd_monitoring_interfaces_list = ''  # Comma separated list of interfaces names to monitor.
                                    # if any interface from the list is active the watchdog will
                                    # consider the network is fine
                                    # 'any' to enable monitoring on all interfaces except loopback
                                    # '' to disable monitoring
                                    # (change requires restart)


wd_lifecheck_method = 'heartbeat'
                                    # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
                                    # (change requires restart)
wd_interval = 10
                                    # lifecheck interval (sec) > 0
                                    # (change requires restart)

# -- heartbeat mode --

wd_heartbeat_port = 9694
                                    # Port number for receiving heartbeat signal
                                    # (change requires restart)
wd_heartbeat_keepalive = 2
                                    # Interval time of sending heartbeat signal (sec)
                                    # (change requires restart)
wd_heartbeat_deadtime = 30
                                    # Deadtime interval for heartbeat signal (sec)
                                    # (change requires restart)
heartbeat_destination0 = 'node229'
                                    # Host name or IP address of destination 0
                                    # for sending heartbeat signal.
                                    # (change requires restart)
heartbeat_destination_port0 = 9694 
                                    # Port number of destination 0 for sending
                                    # heartbeat signal. Usually this is the
                                    # same as wd_heartbeat_port.
                                    # (change requires restart)
heartbeat_device0 = 'ens33'
                                    # Name of NIC device (such like 'eth0')
                                    # used for sending/receiving heartbeat
                                    # signal to/from destination 0.
                                    # This works only when this is not empty
                                    # and pgpool has root privilege.
                                    # (change requires restart)

#heartbeat_destination1 = ''
#heartbeat_destination_port1 = 9694
#heartbeat_device1 = 'ens33'

# -- query mode --

wd_life_point = 3
                                    # lifecheck retry times
                                    # (change requires restart)
wd_lifecheck_query = 'SELECT 1'
                                    # lifecheck query to pgpool from watchdog
                                    # (change requires restart)
wd_lifecheck_dbname = 'postgres'
                                    # Database name connected for lifecheck
                                    # (change requires restart)
wd_lifecheck_user = 'checkuser'
                                    # watchdog user monitoring pgpools in lifecheck
                                    # (change requires restart)
wd_lifecheck_password = 'checkuser123'
                                    # Password for watchdog user in lifecheck
                                    # Leaving it empty will make Pgpool-II to first look for the
                                    # Password in pool_passwd file before using the empty password
                                    # (change requires restart)

# - Other pgpool Connection Settings -

other_pgpool_hostname0 = 'node229'
                                    # Host name or IP address to connect to for other pgpool 0
                                    # (change requires restart)
other_pgpool_port0 = 9999
                                    # Port number for other pgpool 0
                                    # (change requires restart)
other_wd_port0 = 9000
#                                    # Port number for other watchdog 0
#                                    # (change requires restart)
#other_pgpool_hostname1 = ''
#other_pgpool_port1 = 9999
#other_wd_port1 = 9000


#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
relcache_expire = 0
                                   # Life time of relation cache in seconds.
                                   # 0 means no cache expiration(the default).
                                   # The relation cache is used for cache the
                                   # query result against PostgreSQL system
                                   # catalog to obtain various information
                                   # including table structures or if it's a
                                   # temporary table or not. The cache is
                                   # maintained in a pgpool child local memory
                                   # and being kept as long as it survives.
                                   # If someone modify the table by using
                                   # ALTER TABLE or some such, the relcache is
                                   # not consistent anymore.
                                   # For this purpose, cache_expiration
                                   # controls the life time of the cache.

relcache_size = 256
                                   # Number of relation cache
                                   # entry. If you see frequently:
                                   # "pool_search_relcache: cache replacement happend"
                                   # in the pgpool log, you might want to increate this number.

check_temp_table = catalog
                                   # Temporary table check method. catalog, trace or none.
                                   # Default is catalog.

check_unlogged_table = on
                                   # If on, enable unlogged table check in SELECT statements.
                                   # This initiates queries against system catalog of primary/master
                                   # thus increases load of master.
                                   # If you are absolutely sure that your system never uses unlogged tables
                                   # and you want to save access to primary/master, you could turn this off.
                                   # Default is on.
enable_shared_relcache = on
                                   # If on, relation cache stored in memory cache,
                                   # the cache is shared among child process.
                                   # Default is on.
                                   # (change requires restart)

relcache_query_target = master     # Target node to send relcache queries. Default is master (primary) node.
                                   # If load_balance_node is specified, queries will be sent to load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
memory_cache_enabled = off
                                   # If on, use the memory cache functionality, off by default
                                   # (change requires restart)
memqcache_method = 'shmem'
                                   # Cache storage method. either 'shmem'(shared memory) or
                                   # 'memcached'. 'shmem' by default
                                   # (change requires restart)
memqcache_memcached_host = 'localhost'
                                   # Memcached host name or IP address. Mandatory if
                                   # memqcache_method = 'memcached'.
                                   # Defaults to localhost.
                                   # (change requires restart)
memqcache_memcached_port = 11211
                                   # Memcached port number. Mondatory if memqcache_method = 'memcached'.
                                   # Defaults to 11211.
                                   # (change requires restart)
memqcache_total_size = 67108864
                                   # Total memory size in bytes for storing memory cache.
                                   # Mandatory if memqcache_method = 'shmem'.
                                   # Defaults to 64MB.
                                   # (change requires restart)
memqcache_max_num_cache = 1000000
                                   # Total number of cache entries. Mandatory
                                   # if memqcache_method = 'shmem'.
                                   # Each cache entry consumes 48 bytes on shared memory.
                                   # Defaults to 1,000,000(45.8MB).
                                   # (change requires restart)
memqcache_expire = 0
                                   # Memory cache entry life time specified in seconds.
                                   # 0 means infinite life time. 0 by default.
                                   # (change requires restart)
memqcache_auto_cache_invalidation = on
                                   # If on, invalidation of query cache is triggered by corresponding
                                   # DDL/DML/DCL(and memqcache_expire).  If off, it is only triggered
                                   # by memqcache_expire.  on by default.
                                   # (change requires restart)
memqcache_maxcache = 409600
                                   # Maximum SELECT result size in bytes.
                                   # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
                                   # (change requires restart)
memqcache_cache_block_size = 1048576
                                   # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
                                   # Defaults to 1MB.
                                   # (change requires restart)
memqcache_oiddir = '/tmp/oiddir'
                                   # Temporary work directory to record table oids
                                   # (change requires restart)
white_memqcache_table_list = ''
                                   # Comma separated list of table names to memcache
                                   # that don't write to database
                                   # Regexp are accepted
black_memqcache_table_list = ''
                                   # Comma separated list of table names not to memcache
                                   # that don't write to database
                                   # Regexp are accepted

 

 

Upload the following pgpool.conf to the /etc/pgpool-II directory on node 10.40.239.229.
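Apart from the watchdog-related host settings, this copy is the same as the one on node228. Based on the two-node layout described above (stated here as an assumption about the intended configuration, not reproduced from the original file), the settings that should point at the peer node on node229 are, for example:

wd_hostname = 'node229'
heartbeat_destination0 = 'node228'
other_pgpool_hostname0 = 'node228'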

# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# Whitespace may be used.  Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload".  Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#


#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------

# - pgpool Connection Settings -

listen_addresses = '*'
                                   # Host name or IP address to listen on:
                                   # '*' for all, '' for no TCP/IP connections
                                    # (change requires restart)
port = 9999
                                   # Port number
                                   # (change requires restart)
socket_dir = '/opt/pgpool/'
                                   # Unix domain socket path
                                   # The Debian package defaults to
                                   # /var/run/postgresql
                                   # (change requires restart)
reserved_connections = 0
                                   # Number of reserved connections.
                                   # Pgpool-II does not accept connections if over
                                   # num_init_children - reserved_connections.

# - pgpool Communication Manager Connection Settings -

pcp_listen_addresses = '*'
                                   # Host name or IP address for pcp process to listen on:
                                   # '*' for all, '' for no TCP/IP connections
                                   # (change requires restart)
pcp_port = 9898
                                   # Port number for pcp
                                   # (change requires restart)
pcp_socket_dir = '/opt/pgpool/'
                                   # Unix domain socket path for pcp
                                  
                                   # The Debian package defaults to
                                   # /var/run/postgresql
                                   # (change requires restart)
listen_backlog_multiplier = 2
                                   # Set the backlog parameter of listen(2) to
                                   # num_init_children * listen_backlog_multiplier.
                                   # (change requires restart)
serialize_accept = off
                                   # whether to serialize accept() call to avoid thundering herd problem
                                   # (change requires restart)

# - Backend Connection Settings -

backend_hostname0 = 'node228'
                                   # Host name or IP address to connect to for backend 0
backend_port0 = 5432
                                   # Port number for backend 0
backend_weight0 = 1
                                   # Weight for backend 0 (only in load balancing mode)
backend_data_directory0 = '/opt/pg114/data/'
                                   # Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
                                   # Controls various backend behavior
                                   # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
                                   # or ALWAYS_MASTER
backend_application_name0 = 'server0'
                                   # walsender's application_name, used for "show pool_nodes" command
                                   
backend_hostname1 = 'node229'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/opt/pg114/data/'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'server1'


# - Authentication -

enable_pool_hba = on
                                   # Use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
                                   # File name of pool_passwd for md5 authentication.
                                   # "" disables pool_passwd.
                                   # (change requires restart)
authentication_timeout = 60
                                   # Delay in seconds to complete client authentication
                                   # 0 means no timeout.

allow_clear_text_frontend_auth = off
                                   # Allow Pgpool-II to use clear text password authentication
                                   # with clients, when pool_passwd does not
                                   # contain the user password


# - SSL Connections -

ssl = off
                                   # Enable SSL support
                                   # (change requires restart)
#ssl_key = './server.key'
                                   # Path to the SSL private key file
                                   # (change requires restart)
#ssl_cert = './server.cert'
                                   # Path to the SSL public certificate file
                                   # (change requires restart)
#ssl_ca_cert = ''
                                   # Path to a single PEM format file
                                   # containing CA root certificate(s)
                                   # (change requires restart)
#ssl_ca_cert_dir = ''
                                   # Directory containing CA root certificate(s)
                                   # (change requires restart)

ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
                                   # Allowed SSL ciphers
                                   # (change requires restart)
ssl_prefer_server_ciphers = off
                                   # Use server's SSL cipher preferences,
                                   # rather than the client's
                                   # (change requires restart)
ssl_ecdh_curve = 'prime256v1'
                                   # Name of the curve to use in ECDH key exchange
ssl_dh_params_file = ''
                                   # Name of the file containing Diffie-Hellman parameters used
                                   # for so-called ephemeral DH family of SSL cipher.

#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------

# - Concurrent session and pool size -

num_init_children = 32
                                   # Number of concurrent sessions allowed
                                   # (change requires restart)
max_pool = 4
                                   # Number of connection pool caches per connection
                                   # (change requires restart)

# - Life time -

child_life_time = 300
                                   # Pool exits after being idle for this many seconds
child_max_connections = 0
                                   # Pool exits after receiving that many connections
                                   # 0 means no exit
connection_life_time = 0
                                   # Connection to backend closes after being idle for this many seconds
                                   # 0 means no close
client_idle_limit = 0
                                   # Client is disconnected after being idle for that many seconds
                                   # (even inside an explicit transactions!)
                                   # 0 means no disconnection


#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------

# - Where to log -

log_destination = 'syslog'
                                   # Where to log
                                   # Valid values are combinations of stderr,
                                   # and syslog. Default to stderr.
                                 

# - What to log -

log_line_prefix = '%t: pid %p: '   # printf-style string to output at beginning of each log line.

log_connections = on
                                   # Log connections
log_hostname = on
                                   # Hostname will be shown in ps status
                                   # and in logs if connections are logged
log_statement = off
                                   # Log all statements
log_per_node_statement = off
                                   # Log all statements
                                   # with node and backend informations
log_client_messages = off
                                   # Log any client messages
log_standby_delay = 'if_over_threshold'
                                   # Log standby delay
                                   # Valid values are combinations of always,
                                   # if_over_threshold, none

# - Syslog specific -

syslog_facility = 'LOCAL0'
                                   # Syslog local facility. Default to LOCAL0
syslog_ident = 'pgpool'
                                   # Syslog program identification string
                                   # Default to 'pgpool'

# - Debug -

#log_error_verbosity = default          # terse, default, or verbose messages

#client_min_messages = notice           # values in order of decreasing detail:
                                        #   debug5
                                        #   debug4
                                        #   debug3
                                        #   debug2
                                        #   debug1
                                        #   log
                                        #   notice
                                        #   warning
                                        #   error

#log_min_messages = warning             # values in order of decreasing detail:
                                        #   debug5
                                        #   debug4
                                        #   debug3
                                        #   debug2
                                        #   debug1
                                        #   info
                                        #   notice
                                        #   warning
                                        #   error
                                        #   log
                                        #   fatal
                                        #   panic

#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

pid_file_name = '/opt/pgpool/pgpool.pid'
                                   # PID file name
                                   # Can be specified as relative to the
                                   # location of pgpool.conf file or
                                   # as an absolute path
                                   # (change requires restart)
logdir = '/opt/pgpool/'
                                   # Directory of pgPool status file
                                   # (change requires restart)


#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------

connection_cache = on
                                   # Activate connection pools
                                   # (change requires restart)

                                   # Semicolon separated list of queries
                                   # to be issued at the end of a session
                                   # The default is for 8.3 and later
reset_query_list = 'ABORT; DISCARD ALL'
                                   # The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'


#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------

replication_mode = off
                                   # Activate replication mode
                                   # (change requires restart)
replicate_select = off
                                   # Replicate SELECT statements
                                   # when in replication mode
                                   # replicate_select is higher priority than
                                   # load_balance_mode.

insert_lock = off
                                   # Automatically locks a dummy row or a table
                                   # with INSERT statements to keep SERIAL data
                                   # consistency
                                   # Without SERIAL, no lock will be issued
lobj_lock_table = ''
                                   # When rewriting lo_creat command in
                                   # replication mode, specify table name to
                                   # lock

# - Degenerate handling -

replication_stop_on_mismatch = off
                                   # On disagreement with the packet kind
                                   # sent from backend, degenerate the node
                                   # which is most likely "minority"
                                   # If off, just force to exit this session

failover_if_affected_tuples_mismatch = off
                                   # On disagreement with the number of affected
                                   # tuples in UPDATE/DELETE queries, then
                                   # degenerate the node which is most likely
                                   # "minority".
                                   # If off, just abort the transaction to
                                   # keep the consistency


#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------

load_balance_mode = on
                                   # Activate load balancing mode
                                   # (change requires restart)
ignore_leading_white_space = on
                                   # Ignore leading white spaces of each query
white_function_list = ''
                                   # Comma separated list of function names
                                   # that don't write to database
                                   # Regexp are accepted
black_function_list = 'currval,lastval,nextval,setval,func_*,f_*'
                                   # Comma separated list of function names
                                   # that write to database
                                   # Regexp are accepted

black_query_pattern_list = ''
                                   # Semicolon separated list of query patterns
                                   # that should be sent to primary node
                                   # Regexp are accepted
                                   # valid for streaming replicaton mode only.

database_redirect_preference_list = ''
                                   # comma separated list of pairs of database and node id.
                                   # example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
                                   # valid for streaming replicaton mode only.
app_name_redirect_preference_list = ''
                                   # comma separated list of pairs of app name and node id.
                                   # example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
                                   # valid for streaming replicaton mode only.
allow_sql_comments = off
                                   # if on, ignore SQL comments when judging if load balance or
                                   # query cache is possible.
                                   # If off, SQL comments effectively prevent the judgment
                                   # (pre 3.4 behavior).

disable_load_balance_on_write = 'transaction'
                                   # Load balance behavior when write query is issued
                                   # in an explicit transaction.
                                   # Note that any query not in an explicit transaction
                                   # is not affected by the parameter.
                                   # 'transaction' (the default): if a write query is issued,
                                   # subsequent read queries will not be load balanced
                                   # until the transaction ends.
                                   # 'trans_transaction': if a write query is issued,
                                   # subsequent read queries in an explicit transaction
                                   # will not be load balanced until the session ends.
                                   # 'always': if a write query is issued, read queries will
                                   # not be load balanced until the session ends.

statement_level_load_balance = off
                                   # Enables statement level load balancing

#------------------------------------------------------------------------------
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------

master_slave_mode = on
                                   # Activate master/slave mode
                                   # (change requires restart)
master_slave_sub_mode = 'stream'
                                   # Master/slave sub mode
                                   # Valid values are combinations stream, slony
                                   # or logical. Default is stream.
                                   # (change requires restart)

# - Streaming -

sr_check_period = 10
                                   # Streaming replication check period
                                   # Disabled (0) by default
sr_check_user = 'repuser'
                                   # Streaming replication check user
                                   # This is necessary even if you disable streaming
                                   # replication delay check by sr_check_period = 0

sr_check_password = 'repuser123'
                                   # Password for streaming replication check user
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

sr_check_database = 'postgres'
                                   # Database name for streaming replication check
delay_threshold = 10000000
                                   # Threshold before not dispatching query to standby node
                                   # Unit is in bytes
                                   # Disabled (0) by default

# - Special commands -

follow_master_command = 'sh /etc/pgpool-II/follow_master.sh %d %h %D %H %r '
                                   # Executes this command after master failover
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character

#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------

health_check_period = 10
                                   # Health check period
                                   # Disabled (0) by default
health_check_timeout = 20
                                   # Health check timeout
                                   # 0 means no timeout
health_check_user = 'checkuser'
                                   # Health check user
health_check_password = 'checkuser123'
                                   # Password for health check user
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

health_check_database = 'postgres'
                                   # Database name for health check. If '', tries 'postgres' first, then 'template1'

health_check_max_retries = 3
                                   # Maximum number of times to retry a failed health check before giving up.
health_check_retry_delay = 1
                                   # Amount of time to wait (in seconds) between retries.
connect_timeout = 10000
                                   # Timeout value in milliseconds before giving up to connect to backend.
                                   # Default is 10000 ms (10 second). Flaky network user may want to increase
                                   # the value. 0 means no timeout.
                                   # Note that this value is not only used for health check,
                                   # but also for ordinary connection to backend.

#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'checkuser'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000

#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------

failover_command = 'sh /etc/pgpool-II/failover.sh %d %h %D %m %H %r %P %R '
                                   # Executes this command at failover
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character
failback_command = ''
                                   # Executes this command at failback.
                                   # Special values:
                                   #   %d = failed node id
                                   #   %h = failed node host name
                                   #   %p = failed node port number
                                   #   %D = failed node database cluster path
                                   #   %m = new master node id
                                   #   %H = new master node hostname
                                   #   %M = old master node id
                                   #   %P = old primary node id
                                   #   %r = new master port number
                                   #   %R = new master database cluster path
                                   #   %N = old primary node hostname
                                   #   %S = old primary node port number
                                   #   %% = '%' character

failover_on_backend_error = on
                                   # Initiates failover when reading/writing to the
                                   # backend communication socket fails
                                   # If set to off, pgpool will report an
                                   # error and disconnect the session.

detach_false_primary = off
                                   # Detach false primary if on. Only
                                   # valid in streaming replicaton
                                   # mode and with PostgreSQL 9.6 or
                                   # after.

search_primary_node_timeout = 3
                                   # Timeout in seconds to search for the
                                   # primary node when a failover occurs.
                                   # 0 means no timeout, keep searching
                                   # for a primary node forever.

#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------

recovery_user = 'nobody'
                                   # Online recovery user
recovery_password = ''
                                   # Online recovery password
                                   # Leaving it empty will make Pgpool-II to first look for the
                                   # Password in pool_passwd file before using the empty password

recovery_1st_stage_command = ''
                                   # Executes a command in first stage
recovery_2nd_stage_command = ''
                                   # Executes a command in second stage
recovery_timeout = 90
                                   # Timeout in seconds to wait for the
                                   # recovering node's postmaster to start up
                                   # 0 means no wait
client_idle_limit_in_recovery = 0
                                   # Client is disconnected after being idle
                                   # for that many seconds in the second stage
                                   # of online recovery
                                   # 0 means no disconnection
                                   # -1 means immediate disconnection


auto_failback = on
                                   # Detached backend node is reattached automatically
                                   # if replication_state is 'streaming'.
auto_failback_interval = 30
                                   # Min interval of executing auto_failback in
                                   # seconds.

#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------

# - Enabling -

use_watchdog = on
                                    # Activates watchdog
                                    # (change requires restart)

# -Connection to up stream servers -

trusted_servers = 'node228,node229'
                                    # trusted server list which are used
                                    # to confirm network connection
                                    # (hostA,hostB,hostC,...)
                                    # (change requires restart)
ping_path = '/bin'
                                    # ping command path
                                    # (change requires restart)

# - Watchdog communication Settings -

wd_hostname = 'node229'
                                    # Host name or IP address of this watchdog
                                    # (change requires restart)
wd_port = 9000
                                    # port number for watchdog service
                                    # (change requires restart)
wd_priority = 1
                                    # priority of this watchdog in leader election
                                    # (change requires restart)

wd_authkey = ''
                                    # Authentication key for watchdog communication
                                    # (change requires restart)

wd_ipc_socket_dir = '/opt/pgpool'
                                    # Unix domain socket path for watchdog IPC socket
                                    # The Debian package defaults to
                                    # /var/run/postgresql
                                    # (change requires restart)


# - Virtual IP control Setting -

delegate_IP = '10.40.239.240'
                                    # delegate IP address
                                    # If this is empty, virtual IP never bring up.
                                    # (change requires restart)
if_cmd_path = '/usr/sbin'
                                    # path to the directory where if_up/down_cmd exists
                                    # If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
                                    # (change requires restart)
if_up_cmd = 'ip addr add $_IP_$/24 dev ens33 label ens33:0'
                                    # startup delegate IP command
                                    # (change requires restart)
if_down_cmd = 'ip addr del $_IP_$/24 dev ens33'
                                    # shutdown delegate IP command
                                    # (change requires restart)
arping_path = '/usr/sbin'
                                    # arping command path
                                    # If arping_cmd starts with "/", if_cmd_path will be ignored.
                                    # (change requires restart)
arping_cmd = 'arping -U  $_IP_$  -w 1 -I ens33'
                                    # arping command
                                    # (change requires restart)

# - Behaivor on escalation Setting -

clear_memqcache_on_escalation = on
                                    # Clear all the query cache on shared memory
                                    # when standby pgpool escalate to active pgpool
                                    # (= virtual IP holder).
                                    # This should be off if client connects to pgpool
                                    # not using virtual IP.
                                    # (change requires restart)
wd_escalation_command = ''
                                    # Executes this command at escalation on new active pgpool.
                                    # (change requires restart)
wd_de_escalation_command = ''
                                    # Executes this command when master pgpool resigns from being master.
                                    # (change requires restart)

# - Watchdog consensus settings for failover -

failover_when_quorum_exists = on
                                    # Only perform backend node failover
                                    # when the watchdog cluster holds the quorum
                                    # (change requires restart)

failover_require_consensus = on
                                    # Perform failover when majority of Pgpool-II nodes
                                    # agrees on the backend node status change
                                    # (change requires restart)

allow_multiple_failover_requests_from_node = on
                                    # A Pgpool-II node can cast multiple votes
                                    # for building the consensus on failover
                                    # (change requires restart)

enable_consensus_with_half_votes = on
                                    # apply majority rule for consensus and quorum computation
                                    # at 50% of votes in a cluster with even number of nodes.
                                    # when enabled the existence of quorum and consensus
                                    # on failover is resolved after receiving half of the
                                    # total votes in the cluster, otherwise both these
                                    # decisions require at least one more vote than
                                    # half of the total votes.
                                    # (change requires restart)

# - Lifecheck Setting -

# -- common --

wd_monitoring_interfaces_list = ''  # Comma separated list of interfaces names to monitor.
                                    # if any interface from the list is active the watchdog will
                                    # consider the network is fine
                                    # 'any' to enable monitoring on all interfaces except loopback
                                    # '' to disable monitoring
                                    # (change requires restart)


wd_lifecheck_method = 'heartbeat'
                                    # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
                                    # (change requires restart)
wd_interval = 10
                                    # lifecheck interval (sec) > 0
                                    # (change requires restart)

# -- heartbeat mode --

wd_heartbeat_port = 9694
                                    # Port number for receiving heartbeat signal
                                    # (change requires restart)
wd_heartbeat_keepalive = 2
                                    # Interval time of sending heartbeat signal (sec)
                                    # (change requires restart)
wd_heartbeat_deadtime = 30
                                    # Deadtime interval for heartbeat signal (sec)
                                    # (change requires restart)
heartbeat_destination0 = 'node228'
                                    # Host name or IP address of destination 0
                                    # for sending heartbeat signal.
                                    # (change requires restart)
heartbeat_destination_port0 = 9694 
                                    # Port number of destination 0 for sending
                                    # heartbeat signal. Usually this is the
                                    # same as wd_heartbeat_port.
                                    # (change requires restart)
heartbeat_device0 = 'ens33'
                                    # Name of NIC device (such like 'eth0')
                                    # used for sending/receiving heartbeat
                                    # signal to/from destination 0.
                                    # This works only when this is not empty
                                    # and pgpool has root privilege.
                                    # (change requires restart)

#heartbeat_destination1 = ''
#heartbeat_destination_port1 = 9694
#heartbeat_device1 = 'ens33'

# -- query mode --

wd_life_point = 3
                                    # lifecheck retry times
                                    # (change requires restart)
wd_lifecheck_query = 'SELECT 1'
                                    # lifecheck query to pgpool from watchdog
                                    # (change requires restart)
wd_lifecheck_dbname = 'postgres'
                                    # Database name connected for lifecheck
                                    # (change requires restart)
wd_lifecheck_user = 'checkuser'
                                    # watchdog user monitoring pgpools in lifecheck
                                    # (change requires restart)
wd_lifecheck_password = 'checkuser123'
                                    # Password for watchdog user in lifecheck
                                    # Leaving it empty will make Pgpool-II to first look for the
                                    # Password in pool_passwd file before using the empty password
                                    # (change requires restart)

# - Other pgpool Connection Settings -

other_pgpool_hostname0 = 'node228'
                                    # Host name or IP address to connect to for other pgpool 0
                                    # (change requires restart)
other_pgpool_port0 = 9999
                                    # Port number for other pgpool 0
                                    # (change requires restart)
other_wd_port0 = 9000
#                                    # Port number for other watchdog 0
#                                    # (change requires restart)
#other_pgpool_hostname1 = ''
#other_pgpool_port1 = 9999
#other_wd_port1 = 9000


#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
relcache_expire = 0
                                   # Life time of relation cache in seconds.
                                   # 0 means no cache expiration(the default).
                                   # The relation cache is used for cache the
                                   # query result against PostgreSQL system
                                   # catalog to obtain various information
                                   # including table structures or if it's a
                                   # temporary table or not. The cache is
                                   # maintained in a pgpool child local memory
                                   # and being kept as long as it survives.
                                   # If someone modify the table by using
                                   # ALTER TABLE or some such, the relcache is
                                   # not consistent anymore.
                                   # For this purpose, cache_expiration
                                   # controls the life time of the cache.

relcache_size = 256
                                   # Number of relation cache
                                   # entry. If you see frequently:
                                   # "pool_search_relcache: cache replacement happend"
                                   # in the pgpool log, you might want to increase this number.

check_temp_table = catalog
                                   # Temporary table check method. catalog, trace or none.
                                   # Default is catalog.

check_unlogged_table = on
                                   # If on, enable unlogged table check in SELECT statements.
                                   # This initiates queries against system catalog of primary/master
                                   # thus increases load of master.
                                   # If you are absolutely sure that your system never uses unlogged tables
                                   # and you want to save access to primary/master, you could turn this off.
                                   # Default is on.
enable_shared_relcache = on
                                   # If on, relation cache stored in memory cache,
                                   # the cache is shared among child process.
                                   # Default is on.
                                   # (change requires restart)

relcache_query_target = master     # Target node to send relcache queries. Default is master (primary) node.
                                   # If load_balance_node is specified, queries will be sent to load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
memory_cache_enabled = off
                                   # If on, use the memory cache functionality, off by default
                                   # (change requires restart)
memqcache_method = 'shmem'
                                   # Cache storage method. either 'shmem'(shared memory) or
                                   # 'memcached'. 'shmem' by default
                                   # (change requires restart)
memqcache_memcached_host = 'localhost'
                                   # Memcached host name or IP address. Mandatory if
                                   # memqcache_method = 'memcached'.
                                   # Defaults to localhost.
                                   # (change requires restart)
memqcache_memcached_port = 11211
                                   # Memcached port number. Mandatory if memqcache_method = 'memcached'.
                                   # Defaults to 11211.
                                   # (change requires restart)
memqcache_total_size = 67108864
                                   # Total memory size in bytes for storing memory cache.
                                   # Mandatory if memqcache_method = 'shmem'.
                                   # Defaults to 64MB.
                                   # (change requires restart)
memqcache_max_num_cache = 1000000
                                   # Total number of cache entries. Mandatory
                                   # if memqcache_method = 'shmem'.
                                   # Each cache entry consumes 48 bytes on shared memory.
                                   # Defaults to 1,000,000(45.8MB).
                                   # (change requires restart)
memqcache_expire = 0
                                   # Memory cache entry life time specified in seconds.
                                   # 0 means infinite life time. 0 by default.
                                   # (change requires restart)
memqcache_auto_cache_invalidation = on
                                   # If on, invalidation of query cache is triggered by corresponding
                                   # DDL/DML/DCL(and memqcache_expire).  If off, it is only triggered
                                   # by memqcache_expire.  on by default.
                                   # (change requires restart)
memqcache_maxcache = 409600
                                   # Maximum SELECT result size in bytes.
                                   # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
                                   # (change requires restart)
memqcache_cache_block_size = 1048576
                                   # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
                                   # Defaults to 1MB.
                                   # (change requires restart)
memqcache_oiddir = '/tmp/oiddir'
                                   # Temporary work directory to record table oids
                                   # (change requires restart)
white_memqcache_table_list = ''
                                   # Comma separated list of table names to memcache
                                   # that don't write to database
                                   # Regexp are accepted
black_memqcache_table_list = ''
                                   # Comma separated list of table names not to memcache
                                   # that don't write to database
                                   # Regexp are accepted
pgpool.conf

 

 

讀者可以根據實際情況修改配置信息。請自行比較兩個節點的配置文件差異。下面我們對重點配置項目進行說明。
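To compare the two nodes' configuration files directly, one option (a sketch, assuming root SSH access from node228 to node229) is:

[root@node228 pgpool-II]# ssh root@node229 cat /etc/pgpool-II/pgpool.conf | diff /etc/pgpool-II/pgpool.conf -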

       

      Set the permissions of pgpool.conf so that only its owner can read and write it:

    [postgres@node228 pgpool-II]# chmod 600 /etc/pgpool-II/pgpool.conf

 

 

 

#------------------------------------------------------------------------------

# CONNECTIONS

#------------------------------------------------------------------------------

# - pgpool 連接設置 -

listen_addresses = '*'

pgpool-II 監聽的 TCP/IP 連接的主機名或者IP地址。'*'表示全部, '' 表示不接受任何ip地址連接。

 

port = 9999

pgpool-II的端口號

 

socket_dir = '/opt/pgpool/'

建立和接受 UNIX 域套接字連接的目錄

                               

# - pgpool 通信管理連接設置 -

 

pcp_listen_addresses = '*'

pcp 進程監聽的主機名或者IP地址。'*'表示全部, '' 表示不接受任何ip地址連接。

 

pcp_port = 9898

pcp 的端口

 

pcp_socket_dir = '/opt/pgpool/'

pcp 的建立和接受 UNIX 域套接字連接的目錄
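These pcp settings are what the pcp management commands (pcp_node_info, pcp_watchdog_info, pcp_recovery_node, ...) connect to. For example, assuming the pcp user pgpool referenced in the failover scripts later in this document has been registered in pcp.conf, the state of backend node 0 can be queried with:

[postgres@node228 ~]# pcp_node_info -h 10.40.239.240 -p 9898 -U pgpool -n 0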

 

# - 后端連接設置,即PostgreSQL服務的設置

backend_hostname0 = 'node228'

后台PostgreSQL 節點0的IP地址或主機名

 

backend_port0 = 5432

The port of backend PostgreSQL node 0.

 

backend_weight0 = 1

The weight of PostgreSQL node 0 (only effective in load balancing mode).

 

backend_data_directory0 = '/opt/pg114/data/'

PostgreSQL節點0數據目錄

 

backend_flag0 = 'ALLOW_TO_FAILOVER'

Controls the behavior of the backend. Three values are accepted:

ALLOW_TO_FAILOVER: failover and disconnecting from the backend are allowed. This is the default value. It cannot be specified together with DISALLOW_TO_FAILOVER.

DISALLOW_TO_FAILOVER: failover and disconnecting from the backend are not allowed. This is useful when the backend is protected by HA (high availability) software such as Heartbeat or Pacemaker. It cannot be specified together with ALLOW_TO_FAILOVER.

ALWAYS_MASTER: only useful in streaming replication mode. If this flag is set on one of the backends, Pgpool-II does not look for the primary node by probing the backends; it always treats the flagged node as the primary.

 

 

backend_application_name0 = 'server0'

 The application_name of the walsender (WAL sender process) for backend 0; it is also used by the "show pool_nodes" command.

 

下面的參數則是另一個PostgreSQL 后端的配置。這里不再贅述。

backend_hostname1 = 'node229'

backend_port1 = 5432

backend_weight1 = 1

backend_data_directory1 = '/opt/pg114/data/'

backend_flag1 = 'ALLOW_TO_FAILOVER'

backend_application_name1 = 'server1'

 

 

# - 池的認證 -

enable_pool_hba = on

使用 pool_hba.conf 來進行客戶端認證,on表示同意

 

pool_passwd = 'pool_passwd'

指定用於 md5 認證的文件名。默認值為"pool_passwd";"" 表示禁止 pool_passwd.
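Entries in pool_passwd are usually generated with the pg_md5 utility shipped with pgpool-II. A sketch, assuming the database user postgres with a hypothetical password postgres123 (neither is defined by this document):

[root@node228 pgpool-II]# pg_md5 -f /etc/pgpool-II/pgpool.conf --md5auth --username=postgres postgres123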

 

authentication_timeout = 60

指定 pgpool 認證超時的時長。0 指禁用超時,默認值為 60 。

 

allow_clear_text_frontend_auth = off

如果PostgreSQL后端服務器需要md5或SCRAM身份驗證來進行某些用戶的身份驗證,但是該用戶的密碼不在“ pool_passwd”文件中,則啟用allow_clear_text_frontend_auth將允許Pgpool-II對前端客戶端使用明文密碼驗證。 從客戶端獲取純文本格式的密碼,並將其用於后端身份驗證。

 

#------------------------------------------------------------------------------

# LOGS

#------------------------------------------------------------------------------

 

log_destination = 'syslog'

pgpool-II 支持多種記錄服務器消息的方式,包括 stderr 和 syslog。默認為記錄到 stderr。Syslog 表示輸出到系統日志中

 

log_line_prefix = '%t: pid %p: '  

每行日志開頭的打印樣式字符串,默認值打印時間戳和進程號

 

log_connections = on

如果為 on,進入的連接將被打印到日志中。

 

log_hostname = on

如果為on,ps 命令和日志將顯示客戶端的主機名而不是 IP 地址。

 

log_statement = off

如果設置為 on ,所有 SQL 語句將被記錄。

 

log_per_node_statement = off

針對每個 DB 節點記錄各自的SQL查詢,要知道一個 SELECT 的結果是不是從查詢緩存獲得,需要啟用它

 

log_client_messages = off

如果設置為 on ,則記錄客戶端的信息

 

log_standby_delay = 'if_over_threshold'

指出如何記錄后備服務器的延遲。如果指定 'none',則不寫入日志。 如果為 'always',在每次執行復制延遲檢查時記錄延遲。 如果 'if_over_threshold' 被指定,只有當延遲到達 delay_threshold 時記錄日志。 log_standby_delay 的默認值為 'none'。

 

# - Syslog 具體設置 -

 

syslog_facility = 'LOCAL0'

 When logging to syslog is enabled, this parameter determines which syslog "facility" is used; LOCAL0, LOCAL1, LOCAL2, ..., LOCAL7 are available.

 

syslog_ident = 'pgpool'

 系統日志鑒別字符串,默認是'pgpool'
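Because log_destination is 'syslog' and syslog_facility is LOCAL0, the messages go wherever the local syslog daemon routes that facility. A minimal rsyslog sketch on CentOS 7 that writes them to a dedicated file (the file path is only an example):

[root@node228 ~]# echo 'local0.*    /var/log/pgpool.log' > /etc/rsyslog.d/30-pgpool.conf

[root@node228 ~]# systemctl restart rsyslog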

 

# - 文件位置 -

pid_file_name = '/opt/pgpool/pgpool.pid'

包含 pgpool-II 進程 ID 的文件的完整路徑名

 

logdir = '/opt/pgpool/'

保存日志文件的目錄。pgpool_status 將被寫入這個目錄。

                            

 

#------------------------------------------------------------------------------

# REPLICATION MODE

#------------------------------------------------------------------------------

 

replication_mode = off

Set this to on for pgpool's native replication mode (not used in this solution). In master/slave mode, replication_mode must be set to off and master_slave_mode to on.

 

 

#------------------------------------------------------------------------------

# LOAD BALANCING MODE

#------------------------------------------------------------------------------

 

load_balance_mode = on

當設置為 on時,SELECT 查詢將被分發到每個后台程序上用於負載均衡。
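Whether SELECTs are really being distributed can be verified later through pgpool itself: connect to the virtual IP and run the pgpool-specific command below; its select_cnt column shows how many SELECTs each backend has served (a usage sketch):

[postgres@node228 ~]# psql -h 10.40.239.240 -p 9999 -U postgres -c 'show pool_nodes;'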

 

ignore_leading_white_space = on

 當設置為on時,在負載均衡模式中 pgpool-II 忽略 SQL 查詢語句前面的空白字符。

 

white_function_list = ''

指定一系列用逗號隔開的不會更新數據庫的函數名。

 

black_function_list = 'currval,lastval,nextval,setval,func_*,f_*'

Specifies a comma-separated list of function names that write to the database. In replication mode, functions in this list are neither load balanced nor replicated. In master/slave mode, SELECT statements that call them are sent only to the primary node.

 

black_query_pattern_list = ''

Specifies a semicolon-separated list of SQL patterns; queries matching these patterns are sent only to the primary node.

 

 

#------------------------------------------------------------------------------

# MASTER/SLAVE MODE

#------------------------------------------------------------------------------

 

master_slave_mode = on

是否為主備模式

                               

master_slave_sub_mode = 'stream'

      使用 PostgreSQL 內置的復制系統(基於流復制)時被設置

 

# - 流復制 -

 

sr_check_period = 10

本參數指出基於流復制的延遲檢查的間隔,單位為秒

 

sr_check_user = 'repuser'

執行流復制檢查的用戶。用戶必須存在於所有的PostgreSQL后端上。

 

sr_check_password = 'repuser123'

 執行流復制檢測的用戶的密碼

 

sr_check_database = 'postgres'

 執行流復制檢查的數據庫

 

delay_threshold = 10000000

指定能夠容忍的備機上相對於主服務器上的 WAL 的復制延遲,單位為字節。 如果延遲到達了 delay_threshold,pgpool-II 不再發送 SELECT 查詢到備機。 所有的東西都被發送到主服務器,即使啟用了負載均衡模式,直到備機追趕上來。
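The delay that pgpool compares against delay_threshold is measured in WAL bytes. On the primary it can be inspected directly, for example (a sketch for PostgreSQL 11):

[postgres@node228 ~]# /opt/pg114/bin/psql -p 5432 -d postgres -c "select application_name, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as replay_lag_bytes from pg_stat_replication;"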

                                

# - 特殊命令 -

 

follow_master_command = 'bash /etc/pgpool-II/follow_master.sh %d %h %D %H %r '

This parameter specifies a command that is executed after failover of the primary node in master/slave streaming replication mode. pgpool-II replaces the following special characters with the corresponding backend information:

%d: backend ID of the detached node.
%h: host name of the detached node.
%p: port number of the detached node.
%D: database cluster directory of the detached node.
%M: old master node ID.
%m: new master node ID.
%H: host name of the new master node.
%P: old primary node ID.
%r: port number of the new master node.
%R: database cluster directory of the new master node.
%%: the '%' character.

 

如果你改變了這個值,需要重新加載 pgpool.conf 以使變動生效。

If follow_master_command is not empty, then once failover of the primary node in master/slave streaming replication completes, pgpool degenerates all nodes except the new primary node, starts new child processes, and becomes ready to accept client connections again. After that, pgpool runs the command specified by follow_master_command for each degenerated node. Typically this command invokes something like pcp_recovery_node to recover the standby from the new primary.
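In this document the actual recovery is performed by the failover.sh and follow_master.sh scripts shown later (they run pg_basebackup over SSH). In a setup where recovery_1st_stage_command is configured, the same effect could be achieved with pcp_recovery_node, for example (node 1 and the pcp user pgpool are taken from those scripts):

[postgres@node228 ~]# pcp_recovery_node -h 10.40.239.240 -p 9898 -U pgpool -n 1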

 

#------------------------------------------------------------------------------

# HEALTH CHECK GLOBAL PARAMETERS

#------------------------------------------------------------------------------

 

health_check_period = 10

pgpool-II 定期嘗試連接到后台以檢測服務器是否在服務器或網絡上有問題。 這種錯誤檢測過程被稱為“健康檢查”。如果檢測到錯誤, 則 pgpool-II 會嘗試進行故障恢復或者退化操作。本參數指出健康檢查的間隔,單位為秒。

 

health_check_timeout = 20

本參數用於避免健康檢查在例如網線斷開等情況下等待很長時間。 超時值的單位為秒。默認值為 20 。0 禁用超時(一直等待到 TCP/IP 超時)。

 

health_check_user = 'checkuser'

 The user used to perform health checks. This user must exist on the PostgreSQL backends.

health_check_password = 'checkuser123'

The password of the health check user.

   

health_check_database = 'postgres'

      執行健康檢查的數據庫名。
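The health check user must exist on both PostgreSQL backends. If it was not created during database setup, a minimal sketch would be (run on the primary; the role only needs LOGIN privilege):

[postgres@node228 ~]# /opt/pg114/bin/psql -p 5432 -d postgres -c "CREATE ROLE checkuser LOGIN PASSWORD 'checkuser123';"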

 

connect_timeout = 10000

使用 connect() 系統調用時候放棄連接到后端的超時毫秒值。 默認為 10000 毫秒。

                                                                 

 

#------------------------------------------------------------------------------

# FAILOVER AND FAILBACK

#------------------------------------------------------------------------------

 

failover_command = 'bash /etc/pgpool-II/failover.sh %d %h %D %m %H %r %P %R '

This parameter specifies a command that is executed when a node gets disconnected. pgpool-II replaces the following special characters with the corresponding backend information:

%d: backend ID of the detached node.
%h: host name of the detached node.
%p: port number of the detached node.
%D: database cluster directory of the detached node.
%M: old master node ID.
%m: new master node ID.
%H: host name of the new master node.
%P: old primary node ID.
%r: port number of the new master node.
%R: database cluster directory of the new master node.
%%: the '%' character.

如果你改變了這個值,需要重新加載 pgpool.conf 以使變動生效。

當進行故障切換時,pgpool 殺掉它的所有子進程,這將順序終止所有的到 pgpool 的會話。 然后,pgpool 調用 failover_command 並等待它完成。 然后,pgpool 啟動新的子進程並再次開始從客戶端接受連接。

 

 

failover_on_backend_error = on

 如果為 on,當往后台進程的通信中寫入數據時發生錯誤,pgpool-II 將觸發故障處理過程。 

 

#------------------------------------------------------------------------------

# WATCHDOG

#------------------------------------------------------------------------------

 

use_watchdog = on

如果為 on,則激活看門狗。

 

# - 連接到上游服務器 -

 

trusted_servers = 'node228,node229'

用於檢測上游連接的可信服務器列表(即pgpool所在的服務器)。每台服務器都應能響應 ping。 指定一個用逗號分隔的服務器列表例如 "hostA,hostB,hostC"。 如果沒有任何服務器可以 ping 通,則看門狗認為 pgpool-II 出故障了。

 

ping_path = '/bin'

This parameter specifies the path of the ping command used to monitor the upstream servers. Only the directory is needed, for example "/bin".

 

wd_hostname = 'node228'

指定 pgpool-II 的主機名或者 IP 地址。

 

wd_port = 9000

指定看門狗的通信端口

 

wd_priority = 1

本參數用於設定在主看門狗節點選舉時本地看門狗節點的優先權。 在集群啟動的時候或者舊的看門狗故障的時候,wd_priority 值較高的節點會被選為主看門狗節點。

 

wd_ipc_socket_dir = '/opt/pgpool'

建立 pgpool-II 看門狗 IPC 連接的本地域套接字建立的目錄。

 

# - 虛擬IP控制設置 -

 

delegate_IP = '10.40.239.240'

 指定客戶端的服務(例如應用服務等)連接到的 pgpool-II 的虛擬 IP (VIP) 地址。

 

if_cmd_path = '/usr/sbin'

本參數指定用於切換 IP 地址的命令的所在路徑。

 

if_up_cmd = 'ip addr add $_IP_$/24 dev ens33 label ens33:0'

This parameter specifies the command used to bring up the virtual IP, for example "ip addr add $_IP_$/24 dev ens33 label ens33:0". The token $_IP_$ is replaced with the value of delegate_IP. Replace ens33 with the actual network interface name of the server.

 

if_down_cmd = 'ip addr del $_IP_$/24 dev ens33'

本參數指定一個命令用以停用虛擬 IP

 

arping_path = '/usr/sbin'

本參數指定用於arp地址解析的命令的所在路徑。

 

arping_cmd = 'arping -U  $_IP_$  -w 1 -I ens33'

本參數指定一個命令用以在發生虛擬 IP 切換后用於發送一個 ARP 請求的命令。
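After the watchdog brings up the virtual IP, its presence on the active node can be checked directly (a usage sketch; ens33 is the interface configured above):

[root@node228 ~]# ip addr show ens33 | grep 10.40.239.240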

 

# - pgpool-II 存活情況檢查 -

 

wd_lifecheck_method = 'heartbeat'

本參數指定存活檢查的模式。

 

wd_interval = 10

參數指定 pgpool-II 進行存活檢查的間隔,單位為秒。

 

# -- 心跳模式 --

 

wd_heartbeat_port = 9694

本選項指定接收心跳信號的端口號。默認為 9694。

 

wd_heartbeat_keepalive = 2

本選項指定發送心跳信號的間隔(秒)。默認值為 2。

 

wd_heartbeat_deadtime = 30

如果本選項指定的時間周期內沒有收到心跳信號,則看門狗認為遠端的 pgpool-II 發生故障。

 

heartbeat_destination0 = 'node229'

This option specifies the destination of the heartbeat signal, i.e. the other host running pgpool-II. It can be an IP address or a host name; set it to the name of the other pgpool node.

 

heartbeat_destination_port0 = 9694

本選項指定由 heartbeat_destinationX 指定的心跳信號目標的端口號。

 

heartbeat_device0 = 'ens33'

本選項指定用於發送心跳信號到由 heartbeat_destinationX指定的目標的設備名。 

 

 

# -- 查詢模式 --

 

wd_lifecheck_dbname = 'postgres'

用於檢查 pgpool-II 時連接到的數據庫名。

 

wd_lifecheck_user = 'checkuser'

用於檢查 pgpool-II 的用戶名。

 

wd_lifecheck_password = 'checkuser123'

 用於檢查 pgpool-II 的用戶的密碼

 

# - 其他 pgpool 連接設置 -

 

other_pgpool_hostname0 = '10.40.239.229'

指定需要監控的 pgpool-II 的服務器主機

                                  

other_pgpool_port0 = 9999

指定需要監控的 pgpool-II 服務器的 pgpool 服務的端口

 

other_wd_port0 = 9000

指定需要監控的 pgpool-II 服務器的看門狗的端口。
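Once pgpool-II is running on both nodes, the overall watchdog state (which node is the coordinator and holds the virtual IP) can be inspected with pcp_watchdog_info, for example (using the pcp user pgpool referenced in the scripts below):

[postgres@node228 ~]# pcp_watchdog_info -h 10.40.239.240 -p 9898 -U pgpool -v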

 

 

  1. As the root user, set the setuid bit on the ip, arping, ifup and ifconfig commands, so that the pgpool-II watchdog can bring the virtual IP up and down without running as root:

[root@node228 pgpool-II]# chmod  u+s  /usr/sbin/ip

[root@node228 pgpool-II]# chmod  u+s  /usr/sbin/arping

[root@node228 pgpool-II]# chmod  u+s  /usr/sbin/ifup

[root@node228 pgpool-II]# chmod  u+s  /usr/sbin/ifconfig

 

  1. 將文件failover.sh 和 follow_master.sh 上傳至兩台主機上的 /etc/pgpool-II 目錄中。
  1 #!/bin/bash
  2 # This script is run by failover_command.
  3 
  4 set -o xtrace
  5 # exec > >(logger -i -p local1.info) 2>&1
  6 
  7 # Special values:
  8 #   %d = failed node id
  9 #   %h = failed node hostname
 10 #   %p = failed node port number
 11 #   %D = failed node database cluster path
 12 #   %m = new master node id
 13 #   %H = new master node hostname
 14 #   %M = old master node id
 15 #   %P = old primary node id
 16 #   %r = new master port number
 17 #   %R = new master database cluster path
 18 #   %N = old primary node hostname
 19 #   %S = old primary node port number
 20 #   %% = '%' character
 21 
 22 
 23 FAILED_NODE_ID="$1"
 24 FAILED_NODE_HOST="$2"
 25 FAILED_NODE_PGDATA="$3"
 26 NEW_MASTER_NODE_ID="$4"
 27 NEW_MASTER_NODE_HOST="$5"
 28 NEW_MASTER_NODE_PORT="$6"
 29 OLD_PRIMARY_NODE_ID="$7"
 30 NEW_MASTER_NODE_PGDATA="$8"
 31 
 32 PGHOME=/opt/pg114
 33 REPL_USER=repuser
 34 PCP_USER=pgpool
 35 PGPOOL_PATH=/usr/bin
 36 PCP_PORT=9898
 37 PGPOOL_LOG_DIR=/opt/pgpool
 38 
 39 RECOVERY_CONF=${FAILED_NODE_PGDATA}/recovery.conf
 40 PGPASSFILE=/etc/pgpool-II/.pgpass
 41 PCPPASSFILE=/etc/pgpool-II/.pcppass
 42 
 43 declare -A parameter_list
 44 
 45 parameter_list=(
 46     ["FAILED_NODE_ID"]=${FAILED_NODE_ID}
 47     ["FAILED_NODE_HOST"]=${FAILED_NODE_HOST}
 48     ["FAILED_NODE_PGDATA"]=${FAILED_NODE_PGDATA}
 49     ["NEW_MASTER_NODE_ID"]=${NEW_MASTER_NODE_ID}
 50     ["NEW_MASTER_NODE_HOST"]=${NEW_MASTER_NODE_HOST}
 51     ["NEW_MASTER_NODE_PORT"]=${NEW_MASTER_NODE_PORT}
 52     ["OLD_PRIMARY_NODE_ID"]=${OLD_PRIMARY_NODE_ID}
 53     ["NEW_MASTER_NODE_PGDATA"]=${NEW_MASTER_NODE_PGDATA}
 54     ["PGHOME"]=${PGHOME}
 55     ["REPL_USER"]=${REPL_USER}
 56     ["PCP_USER"]=${PCP_USER}
 57     ["PGPOOL_PATH"]=${PGPOOL_PATH}
 58     ["PCP_PORT"]=${PCP_PORT}
 59     ["PGPOOL_LOG_DIR"]=${PGPOOL_LOG_DIR}
 60     ["RECOVERY_CONF"]=${RECOVERY_CONF}
 61     ["PGPASSFILE"]=${PGPASSFILE}
 62     ["PCPPASSFILE"]=${PCPPASSFILE}
 63 )
 64 
 65 has_error=0
 66 for parameter in ${!parameter_list[@]}
 67 do
 68     if [ -z ${parameter_list[$parameter]} ]
 69     then
 70         echo -e "ERROR: parameter \"$parameter\" is not defined. Exit"
 71         has_error=1
 72     fi
 73 done
 74 
 75 if [ ${has_error} -eq 1 ]
 76 then
 77     exit 1
 78 fi
 79 
 80 
 81 export PGPASSFILE
 82 export PCPPASSFILE
 83 
 84 #Get Hostname from ip address or Hostname
 85 # parameters:
 86   # $1: Hostname of the server
 87 GetHostname () {
 88     ## Test passwordless SSH
 89     host_name=$(ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$1" -i ~/.ssh/id_rsa_pgpool "hostname") > /dev/null
 90     echo $host_name
 91 }
 92 
 93 # Check if the postgresql node is running and is the master
 94 # parameters:
 95    # $1: Hostname of postgresql node
 96    # $2: Port number of the PostgreSQL node
 97    # $3: Home directory of PostgreSQL
 98 CheckIfPostgresqlIsMaster () {
 99     pg_node_host=$1
100     pg_node_port=$2
101     PGHOME=$3
102 
103     is_master=$(ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$pg_node_host" -i ~/.ssh/id_rsa_pgpool "$PGHOME/bin/psql -h $pg_node_host -p $pg_node_port -U postgres -t -c \"select case when pg_is_in_recovery() then 1 else 0 end\"")  > /dev/null 2>&1
104 
105     if [ $? -eq 0 ] && [ $is_master -eq 0 ]
106     then
107         echo "yes";
108     else
109         echo "no"
110     fi
111 }
112 
113 
114 # do basebackup for postgresql by running pg_basebackup
115 # parameters:
116   # $1: Hostname of the detached node
117   # $2: Hostname of the new master node
118   # $3: Port number of the new master node
119   # $4: HOME directory of PostgreSQL
120   # $5: User for postgresql replication
121   # $6: failed node database cluster path
122   # $7: Log directory of pgpool
123 DoPgBasebackup () {
124     failed_node_host=$1
125     new_master_node_host=$2
126     new_master_node_port=$3
127     PGHOME=$4
128     repl_user=$5
129     failed_node_pgdata=$6
130     PGPOOL_LOG_DIR=$7
131 
132     RECOVERY_CONF=${failed_node_pgdata}/recovery.conf
133 
134     master_is_real_master=$(CheckIfPostgresqlIsMaster ${new_master_node_host} ${new_master_node_port} ${PGHOME}) > /dev/null 2>&1
135     if [ $master_is_real_master != 'yes' ]
136     then
137         echo "$(date +"%F %T") failover.sh ERROR: Postgres is not running as a master at ${new_master_node_host}. Exiting."
138         return 1
139     fi
140 
141     ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${failed_node_host} -i ~/.ssh/id_rsa_pgpool "
142 
143         set -o errexit
144 
145             # Execute pg_basebackup
146             rm -rf ${failed_node_pgdata}
147             ${PGHOME}/bin/pg_basebackup -X stream -h ${new_master_node_host} -U ${repl_user} -p ${new_master_node_port} -D ${failed_node_pgdata}
148 
149             cat > ${RECOVERY_CONF} << EOT
150 primary_conninfo = 'host=${new_master_node_host} port=${new_master_node_port} user=${repl_user} application_name=${failed_node_host} passfile=''/var/lib/pgsql/.pgpass'''
151 recovery_target_timeline = 'latest'
152 primary_slot_name = '${failed_node_host}'
153 standby_mode = 'on'
154 EOT
155         "
156 
157     if [ $? -ne 0 ]
158     then
159         return 1
160     fi
161     return 0
162 }
163 
164 echo "failover.sh: INFO: failed_node_id=${FAILED_NODE_ID} old_primary_node_id=${OLD_PRIMARY_NODE_ID} failed_host=${FAILED_NODE_HOST} new_master_host=${NEW_MASTER_NODE_HOST}" >> $PGPOOL_LOG_DIR/failover.log
165 
166 FAILED_NODE_HOST=$(GetHostname ${FAILED_NODE_HOST})
167 NEW_MASTER_NODE_HOST=$(GetHostname ${NEW_MASTER_NODE_HOST})
168 
169 ## If there's no master node anymore, skip failover.
170 if [ ${NEW_MASTER_NODE_ID} -lt 0 ]; then
171     echo "$(date +"%F %T") failover.sh ERROR: All nodes are down. Skipping failover." >> $PGPOOL_LOG_DIR/failover.log
172     exit 1
173 fi
174 
175 ## Test passwordless SSH
176 ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null 2>&1
177 if [ $? -ne 0 ]; then
178     echo "$(date +"%F %T") failover.sh ERROR: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH."  >> $PGPOOL_LOG_DIR/failover.log
179     exit 1
180 fi
181 
182 ## If Standby node is down, start and attach it to pgpool.
183 if [ ${FAILED_NODE_ID} -ne ${OLD_PRIMARY_NODE_ID} ]
184 then
185     echo "$(date +"%F %T") failover.sh INFO: Standby node ${FAILED_NODE_ID} is down." >> $PGPOOL_LOG_DIR/failover.log
186 
187     ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool "
188         set -o errexit
189         cat > ${RECOVERY_CONF} << EOT
190 primary_conninfo = 'host=${NEW_MASTER_NODE_HOST} port=${NEW_MASTER_NODE_PORT} user=${REPL_USER} application_name=${FAILED_NODE_HOST} passfile=''/var/lib/pgsql/.pgpass'''
191 recovery_target_timeline = 'latest'
192 primary_slot_name = '${FAILED_NODE_HOST}'
193 standby_mode = 'on'
194 EOT
195     "
196 
197     # start Standby node on ${FAILED_NODE_HOST}
198     ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool \
199         "${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${FAILED_NODE_PGDATA} start"
200 
201     if [ $? -eq 0 ]
202     then
203         echo "$(date +"%F %T") failover.sh INFO: node ${FAILED_NODE_ID} on ${FAILED_NODE_HOST} started as a standby node."
204         echo "$(date +"%F %T") failover.sh INFO: failover command complete."
205     else
206         echo "$(date +"%F %T") failover.sh INFO: Failed to start node ${FAILED_NODE_HOST}. Try pg_basebackup."
207 
208         # Create replication slot "${FAILED_NODE_HOST}"
209         ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${NEW_MASTER_NODE_HOST} -p ${NEW_MASTER_NODE_PORT} -c \"SELECT pg_create_physical_replication_slot('${FAILED_NODE_HOST}');\" "
210 
211         DoPgBasebackup ${FAILED_NODE_HOST} ${NEW_MASTER_NODE_HOST} ${NEW_MASTER_NODE_PORT} ${PGHOME} ${REPL_USER} ${FAILED_NODE_PGDATA} ${PGPOOL_LOG_DIR}
212         if [ $? -ne 0 ]
213         then
214            echo "$(date +"%F %T") failover.sh ERROR: pg_basebackup failed"
215            # drop replication slot
216            ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${NEW_MASTER_NODE_HOST} -p ${NEW_MASTER_NODE_PORT} -c \"SELECT pg_drop_replication_slot('${FAILED_NODE_HOST}');\" "
217            echo "$(date +"%F %T") failover.sh INFO: failover command failed."
218            exit 1
219         fi
220 
221         # start Standby node on ${FAILED_NODE_HOST}
222         ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool \
223         "${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${FAILED_NODE_PGDATA} start"
224 
225         # If Standby is running, attach this node
226         if [ $? -eq 0 ]
227         then
228             echo "$(date +"%F %T") failover.sh INFO: node ${FAILED_NODE_ID} on ${FAILED_NODE_HOST} started as a standby node."
229             echo "$(date +"%F %T") failover.sh INFO: failover command complete."
230         else
231             echo "$(date +"%F %T") failover.sh ERROR: failed to start standby node ${FAILED_NODE_HOST}"
232             ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool "
233             ${PGHOME}/bin/psql -h ${NEW_MASTER_NODE_HOST} -p ${NEW_MASTER_NODE_PORT} -c \"SELECT pg_drop_replication_slot('${FAILED_NODE_HOST}')\" "
234 
235             echo "$(date +"%F %T") failover.sh ERROR: failover command failed"
236             exit 1
237         fi
238 
239         # If start Standby failed, drop replication slot "${failed_node_host}"
240     fi
241 
242     ${PGPOOL_PATH}/pcp_attach_node -h localhost -U ${PCP_USER} -p ${PCP_PORT} -n ${FAILED_NODE_ID}
243     if [ $? -eq 0 ]
244     then
245         echo "$(date +"%F %T") failover.sh INFO: pcp_attach_node complete."
246         echo "$(date +"%F %T") failover.sh INFO: failover command complete."
247         exit 0
248     else
249         echo "$(date +"%F %T") failover.sh ERROR: pcp_attach_node failed." >> ${PGPOOL_LOG_DIR}/failover.log
250         exit 1
251     fi
252     exit 0
253 fi
254 
255 ## Promote Standby node.
256 echo "failover.sh: Primary node is down, promote standby node ${NEW_MASTER_NODE_HOST}."
257 
258 ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
259     postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ${PGHOME}/bin/pg_ctl -D ${NEW_MASTER_NODE_PGDATA} -w promote
260 
261 if [ $? -ne 0 ]
262 then
263     echo "$(date +"%F %T") failover.sh ERROR: new_master_host=${NEW_MASTER_NODE_HOST} promote failed."
264     exit 1
265 fi
266 
267 echo "failover.sh: INFO: node ${NEW_MASTER_NODE_ID} started as the primary node."
268 exit 0
failover.sh
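
For reference, the argument order read by this script ($1 through $8) corresponds to a failover_command setting in pgpool.conf along the following lines. This is only a sketch; the path /etc/pgpool-II/failover.sh is an assumption, and the authoritative value is the one actually configured in pgpool.conf earlier in this document:

failover_command = '/etc/pgpool-II/failover.sh %d %h %D %m %H %r %P %R'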

 

#!/bin/bash
# This script is run after failover_command to synchronize the Standby with the new Primary.
# First try pg_rewind; if pg_rewind fails, fall back to pg_basebackup.

set -o xtrace
# exec > >(logger -i -p local1.info) 2>&1

# Special values:
# %d    DB node ID of the detached node
# %h    Hostname of the detached node
# %p    Port number of the detached node
# %D    Database cluster directory of the detached node
# %M    Old master node ID
# %m    New master node ID
# %H    Hostname of the new master node
# %P    Old primary node ID
# %r    Port number of the new master node
# %R    Database cluster directory of the new master node
# %N    Hostname of the old primary node (Pgpool-II 4.1 or after)
# %S    Port number of the old primary node (Pgpool-II 4.1 or after)
# %%    '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PGDATA="$3"
NEW_MASTER_NODE_HOST="$4"
NEW_MASTER_NODE_PORT="$5"

PGHOME="/opt/pg114"
REPL_USER=repuser
PCP_USER=pgpool
PGPOOL_PATH=/usr/bin
PCP_PORT=9898
PGPOOL_LOG_DIR=/opt/pgpool

PGPASSFILE=/etc/pgpool-II/.pgpass
PCPPASSFILE=/etc/pgpool-II/.pcppass

declare -A parameter_list

parameter_list=(
    ["FAILED_NODE_ID"]=${FAILED_NODE_ID}
    ["FAILED_NODE_HOST"]=${FAILED_NODE_HOST}
    ["FAILED_NODE_PGDATA"]=${FAILED_NODE_PGDATA}
    ["NEW_MASTER_NODE_HOST"]=${NEW_MASTER_NODE_HOST}
    ["NEW_MASTER_NODE_PORT"]=${NEW_MASTER_NODE_PORT}
    ["PGHOME"]=${PGHOME}
    ["REPL_USER"]=${REPL_USER}
    ["PCP_USER"]=${PCP_USER}
    ["PGPOOL_PATH"]=${PGPOOL_PATH}
    ["PCP_PORT"]=${PCP_PORT}
    ["PGPOOL_LOG_DIR"]=${PGPOOL_LOG_DIR}
    ["PGPASSFILE"]=${PGPASSFILE}
    ["PCPPASSFILE"]=${PCPPASSFILE}
)

has_error=0
for parameter in ${!parameter_list[@]}
do
    if [ -z ${parameter_list[$parameter]} ]
    then
        echo -e "ERROR: parameter \"$parameter\" is not defined. Exit"
        has_error=1
    fi
done

if [ ${has_error} -eq 1 ]
then
    exit 1
fi

export PGPASSFILE
export PCPPASSFILE

#Check if passwordless SSH connections to a server can be made.
# parameters:
  # $1: Hostname of the server
CheckIfSSHIsPasswordless () {
    ## Test passwordless SSH
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$1" -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null
    if [ $? -eq 0 ]; then
       echo "yes"
    else
       echo "no"
    fi
}

#Get Hostname from ip address or Hostname
# parameters:
  # $1: Hostname of the server
GetHostname () {
    ## Test passwordless SSH
    host_name=$(ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$1" -i ~/.ssh/id_rsa_pgpool "hostname") > /dev/null
    echo $host_name
}

# Check if the postgresql node is running
# parameters:
   # $1: Hostname of postgresql node
   # $2: Home directory of PostgreSQL
   # $3: database cluster path
CheckIfPostgresqlIsRunning () {
    pg_node_host=$1
    PGHOME=$2
    pgdata=$3

    is_master=$(ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$pg_node_host" -i ~/.ssh/id_rsa_pgpool "$PGHOME/bin/pg_ctl -w -D ${pgdata} status")  > /dev/null 2>&1

    if [ $? -eq 0 ]
    then
        echo "yes";
    else
        echo "no"
    fi
}

# Check if the postgresql node is running and is the master
# parameters:
   # $1: Hostname of postgresql node
   # $2: Port number of the PostgreSQL node
   # $3: Home directory of PostgreSQL
CheckIfPostgresqlIsMaster () {
    pg_node_host=$1
    pg_node_port=$2
    PGHOME=$3

    is_master=$(ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@"$pg_node_host" -i ~/.ssh/id_rsa_pgpool "$PGHOME/bin/psql -h $pg_node_host -p $pg_node_port -U postgres -t -c \"select case when pg_is_in_recovery() then 1 else 0 end\"")  > /dev/null 2>&1

    if [ $? -eq 0 ] && [ $is_master -eq 0 ]
    then
        echo "yes";
    else
        echo "no"
    fi
}


# Synchronize the detached standby with the new master by running pg_rewind
# parameters:
  # $1: Hostname of the detached node
  # $2: Hostname of the new master node
  # $3: Port number of the new master node
  # $4: HOME directory of PostgreSQL
  # $5: User for postgresql replication
  # $6: database cluster path of failed node
  # $7: Log directory of pgpool
RunPgrewind ()
{
    failed_node_host=$1
    new_master_node_host=$2
    new_master_node_port=$3
    PGHOME=$4
    repl_user=$5
    failed_node_pgdata=$6
    PGPOOL_LOG_DIR=$7

    RECOVERY_CONF=${failed_node_pgdata}/recovery.conf

    master_is_real_master=$(CheckIfPostgresqlIsMaster ${new_master_node_host} ${new_master_node_port} ${PGHOME}) > /dev/null 2>&1
    if [ $master_is_real_master != 'yes' ]
    then
        echo "$(date +"%F %T") follow_master.sh ERROR: PostgreSQL is not running as a master at ${new_master_node_host}. Exiting.." >> ${PGPOOL_LOG_DIR}/follow_master.log
        return 1
    fi


    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${failed_node_host} -i ~/.ssh/id_rsa_pgpool "

        set -o errexit

        ${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${failed_node_pgdata} stop

        cat > ${RECOVERY_CONF} << EOT
primary_conninfo = 'host=${new_master_node_host} port=${new_master_node_port} user=${repl_user} application_name=${failed_node_host} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
primary_slot_name = '${failed_node_host}'
standby_mode = 'on'
EOT

        ${PGHOME}/bin/pg_rewind -D ${failed_node_pgdata} --source-server=\"user=postgres host=${new_master_node_host} port=${new_master_node_port}\"

    "
    if [ $? -ne 0 ]
    then
        return 1
    fi
    return 0
}


# do basebackup for postgresql by running pg_basebackup
# parameters:
  # $1: Hostname of the detached node
  # $2: Hostname of the new master node
  # $3: Port number of the new master node
  # $4: HOME directory of PostgreSQL
  # $5: User for postgresql replication
  # $6: failed node database cluster path
  # $7: Log directory of pgpool
DoPgBasebackup () {
    failed_node_host=$1
    new_master_node_host=$2
    new_master_node_port=$3
    PGHOME=$4
    repl_user=$5
    failed_node_pgdata=$6
    PGPOOL_LOG_DIR=$7

    RECOVERY_CONF=${failed_node_pgdata}/recovery.conf

    master_is_real_master=$(CheckIfPostgresqlIsMaster ${new_master_node_host} ${new_master_node_port} ${PGHOME}) > /dev/null 2>&1
    if [ $master_is_real_master != 'yes' ]
    then
        echo "$(date +"%F %T") follow_master.sh ERROR: Postgres is not running as a master at ${new_master_node_host}. Exiting." >> ${PGPOOL_LOG_DIR}/follow_master.log
        return 1
    fi

    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${failed_node_host} -i ~/.ssh/id_rsa_pgpool "

        set -o errexit

            # Execute pg_basebackup
            rm -rf ${failed_node_pgdata}
            ${PGHOME}/bin/pg_basebackup -X stream -h ${new_master_node_host} -U ${repl_user} -p ${new_master_node_port} -D ${failed_node_pgdata}

            cat > ${RECOVERY_CONF} << EOT
primary_conninfo = 'host=${new_master_node_host} port=${new_master_node_port} user=${repl_user} application_name=${failed_node_host} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
primary_slot_name = '${failed_node_host}'
standby_mode = 'on'
EOT
        "

    if [ $? -ne 0 ]
    then
        return 1
    fi
    return 0
}

# Recover the detached node, synchronize it with the new master, and re-attach it to pgpool-II
# parameters:
  # $1: DB node ID of the detached node
  # $2: Hostname of the detached node
  # $3: Database cluster directory of the detached node
  # $4: Hostname of the new master node
  # $5: port number of the new master node
  # $6: HOME directory of PostgreSQL
  # $7: User for postgresql replication
  # $8: pcp user
  # $9: Path of pgpool
  # $10: pcp port
  # $11: Log directory of pgpool
RecoverPostgresqlNode () {
    failed_node_id="$1"
    failed_node_host="$2"
    failed_node_pgdata="$3"
    new_master_node_host="$4"
    new_master_node_port="$5"

    PGHOME="$6"
    repl_user="$7"
    pcp_user="$8"
    pgpool_path="$9"
    pcp_port="${10}"
    PGPOOL_LOG_DIR="${11}"

    RECOVERY_CONF=${failed_node_pgdata}/recovery.conf

    # start Standby node on ${failed_node_host}
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${failed_node_host} -i ~/.ssh/id_rsa_pgpool "${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${failed_node_pgdata} start"

    standby_is_running=$(CheckIfPostgresqlIsRunning ${failed_node_host} ${PGHOME} ${failed_node_pgdata})  > /dev/null 2>&1

    master_is_real_master=$(CheckIfPostgresqlIsMaster ${new_master_node_host} ${new_master_node_port} ${PGHOME})  > /dev/null 2>&1
    if [ $master_is_real_master != 'yes' ]
    then
        echo "$(date +"%F %T") follow_master.sh ERROR: PostgreSQL is not running as a master at ${new_master_node_host}. Exiting.." >> ${PGPOOL_LOG_DIR}/follow_master.log
        return 1
    fi

    ## If Standby is running, synchronize it with the new Primary.
    if [ $standby_is_running == "yes" ]
    then
        echo "$(date +"%F %T") follow_master.sh INFO: Running pg_rewind for ${failed_node_host}"  >> ${PGPOOL_LOG_DIR}/follow_master.log

        # Create replication slot "${failed_node_host}"
        ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${new_master_node_host} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${new_master_node_host} -p ${new_master_node_port} -c \"SELECT pg_create_physical_replication_slot('${failed_node_host}');\" "

        RunPgrewind ${failed_node_host} ${new_master_node_host} ${new_master_node_port} ${PGHOME} ${repl_user} ${failed_node_pgdata} ${PGPOOL_LOG_DIR}

        if [ $? -ne 0 ]
        then
            echo "$(date +"%F %T") follow_master.sh INFO: pg_rewind failed. Try pg_basebackup." >> ${PGPOOL_LOG_DIR}/follow_master.log
            DoPgBasebackup  ${failed_node_host} ${new_master_node_host} ${new_master_node_port} ${PGHOME} ${repl_user} ${failed_node_pgdata} ${PGPOOL_LOG_DIR}

            if [ $? -ne 0 ]
            then
                echo "$(date +"%F %T") follow_master.sh ERROR: pg_basebackup failed." >> ${PGPOOL_LOG_DIR}/follow_master.log
                # drop replication slot
                ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${new_master_node_host} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${new_master_node_host} -p ${new_master_node_port} -c \"SELECT pg_drop_replication_slot('${failed_node_host}');\" "
                return 1
            fi
        fi

    else
        echo "$(date +"%F %T") follow_master.sh INFO: ${failed_node_host} is not running. Try pg_basebackup."  >> ${PGPOOL_LOG_DIR}/follow_master.log

        # Create replication slot "${failed_node_host}"
        ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${new_master_node_host} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${new_master_node_host} -p ${new_master_node_port} -c \"SELECT pg_create_physical_replication_slot('${failed_node_host}');\" "

        DoPgBasebackup  ${failed_node_host} ${new_master_node_host} ${new_master_node_port} ${PGHOME} ${repl_user} ${failed_node_pgdata} ${PGPOOL_LOG_DIR}
        if [ $? -ne 0 ]
        then
            echo "$(date +"%F %T") follow_master.sh ERROR: pg_basebackup failed."  >> ${PGPOOL_LOG_DIR}/follow_master.log
            # drop replication slot
            ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${new_master_node_host} -i ~/.ssh/id_rsa_pgpool " ${PGHOME}/bin/psql -h ${new_master_node_host} -p ${new_master_node_port} -c \"SELECT pg_drop_replication_slot('${failed_node_host}');\" "
            return 1
        fi
    fi

    # start Standby node on ${failed_node_host}
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${failed_node_host} -i ~/.ssh/id_rsa_pgpool \
        "${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${failed_node_pgdata} start"

    standby_is_running=$(CheckIfPostgresqlIsRunning ${failed_node_host} ${PGHOME} ${failed_node_pgdata})  > /dev/null 2>&1

    # If  Standby is running, attach this node
    if [ $standby_is_running == "yes" ]
    then
        # Run pcp_attact_node to attach Standby node to Pgpool-II.
        ${pgpool_path}/pcp_attach_node -h localhost -U ${pcp_user} -p ${pcp_port} -n ${failed_node_id}

        if [ $? -eq 0 ]
        then
            echo "$(date +"%F %T") follow_master.sh INFO: pcp_attach_node complete." >> ${PGPOOL_LOG_DIR}/follow_master.log
            echo "$(date +"%F %T") follow_master.sh INFO: follow master command complete." >> ${PGPOOL_LOG_DIR}/follow_master.log
            return 0
        else
            echo "$(date +"%F %T") follow_master.sh ERROR: pcp_attach_node failed." >> ${PGPOOL_LOG_DIR}/follow_master.log
            return 1
        fi

    # If start Standby failed, drop replication slot "${failed_node_host}"
    else
        ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${new_master_node_host} -i ~/.ssh/id_rsa_pgpool "
        ${PGHOME}/bin/psql -h ${new_master_node_host} -p ${new_master_node_port} -c \"SELECT pg_drop_replication_slot('${failed_node_host}')\" "

        echo "$(date +"%F %T") follow_master.sh ERROR: follow master command failed." >> ${PGPOOL_LOG_DIR}/follow_master.log
        return 1
    fi
}

if [ $(CheckIfSSHIsPasswordless ${FAILED_NODE_HOST}) == "no" ]
then
    echo "follow_master.sh ERROR: passwordless SSH to postgres@${FAILED_NODE_HOST} failed. Please setup passwordless SSH." >> ${PGPOOL_LOG_DIR}/follow_master.log
    exit 1
fi

if [ $(CheckIfSSHIsPasswordless ${NEW_MASTER_NODE_HOST}) == "no" ]
then
    echo "follow_master.sh ERROR: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH." >> ${PGPOOL_LOG_DIR}/follow_master.log
    exit 1
fi

FAILED_NODE_HOST=$(GetHostname ${FAILED_NODE_HOST})
NEW_MASTER_NODE_HOST=$(GetHostname ${NEW_MASTER_NODE_HOST})

RecoverPostgresqlNode ${FAILED_NODE_ID} ${FAILED_NODE_HOST} ${FAILED_NODE_PGDATA} ${NEW_MASTER_NODE_HOST} ${NEW_MASTER_NODE_PORT} ${PGHOME} ${REPL_USER} ${PCP_USER} ${PGPOOL_PATH} ${PCP_PORT} ${PGPOOL_LOG_DIR}

exit 0
follow_master.sh
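
Similarly, if this script is registered as the follow_master_command, an entry consistent with the five arguments it reads would look roughly as follows. This is a sketch only; the path /etc/pgpool-II/follow_master.sh is an assumption, and the setting actually used is the one in pgpool.conf:

follow_master_command = '/etc/pgpool-II/follow_master.sh %d %h %D %H %r'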

 

2.4.3 Setting up the pgpool service

Start the pgpool service on both hosts and enable it to start at boot:

[root@node228 pgpool-II]# systemctl enable pgpool

[root@node228 pgpool-II]# systemctl start pgpool

The pgpool log messages are written to /var/log/messages.
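
For example, the following commands can be used to confirm that the service is active and to look at recent pgpool messages. This is only a quick check and not part of the original procedure:

[root@node228 pgpool-II]# systemctl status pgpool

[root@node228 pgpool-II]# grep pgpool /var/log/messages | tail -n 20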

3 Verifying the Solution

3.1 Verifying the high availability of the pgpool-II service

1. First, check the IP addresses on both servers to determine which server currently holds the virtual IP. The command is as follows:

[root@node228 ~]# ip addr show | grep -i "10.40.239.240"

As shown in the figure, the active pgpool-II node is on server node228.

 

2. Stop the pgpool service on the active pgpool-II node:

[root@node228 ~]# systemctl stop pgpool

3. Check which server the virtual IP is on now:

[root@node228 ~]# ip addr show | grep -i "10.40.239.240"

As shown in the figure, the virtual IP has floated over to server node229.

4. On node229, go to the PostgreSQL bin directory and connect to the database through pgpool to check whether access still succeeds (see the example command below). The figure shows a successful connection.
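
A minimal way to perform this check, reusing the virtual IP and the pgpool listen port 9999 used elsewhere in this document, could be:

[root@node229 ~]# cd /opt/pg114/bin

[root@node229 bin]# ./psql -h 10.40.239.240 -p 9999 -U postgres -c "select 1;"

If the query returns a row, client access through the virtual IP is still being served, now by the pgpool instance on node229.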

 

 

 

3.2 Verifying the high availability of PostgreSQL

1. Go to the PostgreSQL bin directory and check the status of PostgreSQL on node228:

[root@node228 bin]# cd /opt/pg114/bin

[root@node228 bin]# ./psql -h 10.40.239.228 -p 5432 -U postgres -c "select pg_is_in_recovery();"

As shown in the figure, pg_is_in_recovery() returns f, so node228 is the primary node of the PostgreSQL cluster.

 

 

2. Check the status of PostgreSQL on node229:

[root@node228 bin]# ./psql -h 10.40.239.229 -p 5435 -U postgres -c "select pg_is_in_recovery();"

As shown in the figure, pg_is_in_recovery() returns t, so node229 is the standby node of the PostgreSQL cluster and is in recovery.

 

 

3. Check the streaming replication status:

[root@node228 bin]# ./psql -h 10.40.239.228 -p 5432 -U postgres -c "select * from pg_stat_replication;"

 

 

4. Stop the primary database on node228:

[root@node228 bin]# su postgres

[postgres@node228 bin]$ ./pg_ctl stop -D ../data

 

 

5. Check the database status on node229 again. As shown in the figure, the former standby has been promoted to the new primary.

 

 

6. Check the streaming replication status again. As shown in the figure, the database on node228 is now running as a standby of the new primary.

 

 

7. Check whether both database nodes are attached to the pgpool-II cluster:

[postgres@node228 bin]$ ./psql -h 10.40.239.240 -p 9999 -U postgres -c "show pool_nodes;"

As shown in the figure, both nodes are managed by pgpool-II.

 

 

3.3 Verifying the load balancing of pgpool-II

1. Go to the PostgreSQL bin directory, run the following command several times, and observe the results:

./psql -h 10.40.239.240 -p 9999 -U postgres -c "select inet_server_addr()"

This command connects to the database through the pgpool proxy port and returns the IP address of the backend server that handled the query. As the results show, read requests may be dispatched to either of the two database nodes.
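
To make the distribution easier to observe, the same query can be wrapped in a simple loop. This is only a sketch; it reuses the virtual IP, port 9999, and the postgres user from the command above, and each iteration opens a new session, so pgpool may pick a different backend each time:

[postgres@node228 bin]$ for i in $(seq 1 10); do ./psql -h 10.40.239.240 -p 9999 -U postgres -tAc "select inet_server_addr()"; done

Over several runs, both 10.40.239.228 and 10.40.239.229 are expected to appear in the output.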

 


