http://xmarker.blog.163.com/blog/static/22648405720131125560959/
I have been looking for a distributed solution for PostgreSQL for a while. I tried pgpool-II and PL/Proxy, but neither was really satisfying. Yesterday I stumbled upon an open-source distributed PostgreSQL solution called Stado, the successor of GridSQL developed by EnterpriseDB (open source, but no longer maintained). After testing it today I found it somewhat similar to Alibaba's Cobar, so here is a walkthrough of the installation and a first try.
(1). Download Stado from: http://www.stormdb.com/community/stado
(2). Plan the test environment:
Host:       pgtest4      pgtest5      pgtest6
IP address: 10.1.1.13    10.1.1.14    10.1.1.15
OS:         CentOS 6.4   CentOS 6.4   CentOS 6.4
pgtest4, pgtest5 and pgtest6 all act as data nodes; pgtest4 additionally acts as the Stado node.
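The Stado configuration in step (6) refers to the data nodes by hostname, so every host needs to resolve pgtest4/5/6. If DNS is not set up, /etc/hosts entries along these lines would do (an assumption; the original post does not show this step):
# /etc/hosts on every host -- hypothetical, matching the IP plan above
10.1.1.13   pgtest4
10.1.1.14   pgtest5
10.1.1.15   pgtest6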
(3). Create the stado user and the installation directories:
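The transcript below starts from the already downloaded tarball; the stado user and stadogrp group it assumes could be created roughly like this (a sketch run as root; the home directory path is an assumption):
# as root: create the group and user used by the commands below
groupadd stadogrp
useradd -g stadogrp -m -d /home/stado stado
# place stado_2_5.tar.gz in /home/stado before extracting it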
[stado@pgtest4 ~]$ tar -zxvf stado_2_5.tar.gz
[stado@pgtest4 ~]$ chown -R stado:stadogrp stado
[stado@pgtest4 ~]$ chmod 700 stado/bin/*.sh
[stado@pgtest4 ~]$ chmod 775 stado/log
[stado@pgtest4 ~]$ chmod 755 stado/bin/gs-cmdline.sh
[stado@pgtest4 ~]$ chmod 600 stado/config/*
(4). Install the PostgreSQL software and initialize the database on every node (omitted).
(5). Create the database user on every node
Create the user (on every node):
createuser -d -E stado -U postgres -P
Create a .pgpass file so that connections can be made without typing the password:
/home/postgres@pgtest4$cat .pgpass
*:*:*:stado:123456
chmod 600 .pgpass
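To confirm the entry works, connecting from the postgres account to a data node as the stado role should now succeed without a password prompt (a sketch; it assumes pg_hba.conf on the nodes allows md5 connections from this host):
/home/postgres@pgtest4$psql -h pgtest5 -U stado -d postgres -c "select current_user;"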
(6). Edit the configuration file on the Stado node (pgtest4)
[stado@pgtest4 stado]$ cd /home/stado/stado/config/
[stado@pgtest4 config]$ ls
stado_agent.config stado.config
[stado@pgtest4 config]$ vim stado.config
xdb.port=6453
xdb.maxconnections=10
xdb.default.dbusername=stado
xdb.default.dbpassword=123456
xdb.default.dbport=5432
xdb.default.threads.pool.initsize=5
xdb.default.threads.pool.maxsize=10
xdb.metadata.database=XDBSYS
xdb.metadata.dbhost=127.0.0.1
xdb.nodecount=3
xdb.node.1.dbhost=pgtest4
xdb.node.2.dbhost=pgtest5
xdb.node.3.dbhost=pgtest6
xdb.coordinator.node=1
Changing the settings above is enough; everything else can be left at its default.
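Before creating the metadata database, it is worth confirming that every host listed in xdb.node.*.dbhost accepts connections as the configured database user on the configured port (a sketch; the password is entered interactively unless the stado account also has a .pgpass file):
[stado@pgtest4 config]$ for h in pgtest4 pgtest5 pgtest6; do psql -h $h -p 5432 -U stado -d postgres -c "select 1;"; done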
(7). Create the metadata database on the Stado node (pgtest4)
[stado@pgtest4 bin]$ ./gs-createmddb.sh -u admin -p secret
Executed Statement: create table xsystablespaces ( tablespaceid serial, tablespacename varchar(255) not null, ownerid int not null, primary key(tablespaceid))
Executed Statement: create unique index idx_xsystablespaces_1 on xsystablespaces (tablespacename)
Executed Statement: create table xsystablespacelocs ( tablespacelocid int not null, tablespaceid int not null, filepath varchar(1024) not null, nodeid int not null, primary key(tablespacelocid))
Executed Statement: create unique index idx_xsystablespacelocs_1 on xsystablespacelocs (tablespaceid, nodeid)
....
Executed Statement: create unique index idx_xsyschecks_1 on xsyschecks (constid, seqno)
Executed Statement: alter table xsyschecks add foreign key (constid) references xsysconstraints (constid)
User admin is created
(8). Start Stado
[stado@pgtest4 bin]$ ./gs-server.sh
Starting....
(9). Create the user database
[stado@pgtest4 bin]$ ./gs-createdb.sh -d xtest -u admin -p secret -n 1,2,3
OK
After logging into each node, you should now see the corresponding xtest database, e.g.:
__xtest__N2=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+--------------+----------+-------------+-------------+-----------------------
__xtest__N2 | stado | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
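As the listing shows, the node-local databases are named __<dbname>__N<node number>; to inspect one directly you can connect to it with psql on that node, for example (a sketch):
psql -U stado -d __xtest__N2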
(10). Log into the database
Once the user database was created in step (9), Stado started it automatically; we can now log in through the Stado command line and work with the data:
[stado@pgtest4 bin]$ ./gs-cmdline.sh -d xtest -u admin -p secret
Stado -> show databases;
+----------------------------+
| DATABASE | STATUS | NODES |
+----------------------------+
| xtest | Started | 1,2,3 |
+----------------------------+
1 row(s).
Create a table:
Stado -> CREATE TABLE mytable1 (col1 INT)
PARTITIONING KEY col1 ON ALL;
OK
Stado -> INSERT INTO mytable1 VALUES (1);
1 row(s) affected
Stado -> INSERT INTO mytable1 VALUES (2);
1 row(s) affected
Stado -> SELECT * FROM mytable1;
+------+
| col1 |
+------+
| 2 |
| 1 |
+------+
2 row(s).
Stado -> INSERT INTO mytable1 VALUES (3);
1 row(s) affected
Stado -> SELECT * FROM mytable1;
+------+
| col1 |
+------+
| 2 |
| 1 |
| 3 |
+------+
3 row(s).
Stado -> show tables;
+----------------------------------------------------+
| TABLE | TABLE_PARTITIONING_COLUMN | TABLE_NODES |
+----------------------------------------------------+
| mytable1 | col1 | 1,2,3 |
+----------------------------------------------------+
1 row(s).
That covers the basic installation and a first test; the next post will introduce the common commands and operations.
http://xmarker.blog.163.com/blog/static/22648405720131126350993/
The previous post covered installing and configuring Stado; this one briefly introduces the common commands and table partitioning usage.
(1). Stopping Stado:
If we look at the Stado-related processes on the data node pgtest5, we find the following backends connected to the real database:
[root@pgtest5 ~]# ps -ef|grep stado
postgres 1942 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41465) idle
postgres 1943 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41470) idle
postgres 1944 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41471) idle
postgres 1945 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41472) idle
postgres 1946 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41473) idle
root 2085 2064 0 11:23 pts/1 00:00:00 grep stado
Stop the Stado processes:
[stado@pgtest4 bin]$ ./gs-dbstop.sh -d xtest -u admin -p secret
Database(s) xtest stopped.
[root@pgtest5 ~]# ps -ef|grep stado
root 4085 2064 0 18:38 pts/1 00:00:00 grep stado
(2). Starting Stado:
Start the Stado processes:
[stado@pgtest4 bin]$ ./gs-dbstart.sh -d xtest -u admin -p secret
Database(s) xtest started.
[root@pgtest5 ~]# ps -ef|grep stado
postgres 4101 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41496) idle in transaction
postgres 4102 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41499) idle
postgres 4103 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41502) idle
postgres 4104 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41505) idle
postgres 4105 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41507) idle
root 4112 2064 0 18:42 pts/1 00:00:00 grep stado
(3). Dropping a database:
[stado@pgtest4 bin]$ ./gs-dbstop.sh -d abc -u admin -p secret
Database(s) abc stopped.
[stado@pgtest4 bin]$ ./gs-dropdb.sh -d abc -u admin -p secret
OK
To drop a database and its connections, the database's connection processes must be stopped first; only then can the database itself be dropped.
(4). Creating a Stado database
[stado@pgtest4 bin]$ ./gs-createdb.sh -d abc -u admin -p secret -n 1,2,3
OK
(5). Creating a replicated table
Stado -> create table t (id int primary key,name text) REPLICATED ;
OK
Stado -> insert into t values(1,'abc');
1 row(s) affected
Stado -> insert into t values(2,'bcd');
1 row(s) affected
Check the data on each node:
__xtest__N1=# select * from t;
id | name
----+------
1 | abc
2 | bcd
(2 rows)
__xtest__N2=# select * from t;
id | name
----+------
1 | abc
2 | bcd
(2 rows)
(6). Creating a partitioned table
Stado -> create table t2 (area_id int,area_name varchar(20),descs text) partitioning key area_id on all;
OK
Insert some data:
insert into t2 values(10002,'xishan','test');
insert into t2 values(10002,'xishan','fasdasdf');
insert into t2 values(10002,'xishan','testfasdf');
insert into t2 values(10001,'jiangyin','testfasdf');
insert into t2 values(10003,'yichang','test22');
insert into t2 values(10003,'yichang','test22');
insert into t2 values(10003,'yichang','test2yichang');
insert into t2 values(10003,'yichang','test22fasdfas');
Stado -> select * from t2;
+-------------------------------------+
| area_id | area_name | descs |
+-------------------------------------+
| 10001 | jiangyin | test |
| 10002 | xishan | test |
| 10001 | jiangyin | testfasdf |
| 10002 | xishan | test |
| 10003 | yichang | test22 |
| 10002 | xishan | fasdasdf |
| 10003 | yichang | test22 |
| 10002 | xishan | testfasdf |
| 10003 | yichang | test2yichang |
| 10003 | yichang | test22fasdfas |
+-------------------------------------+
10 row(s).
Check the data distribution on each node:
pgtest4:
__xtest__N1=# select * from t2;
LOG: statement: select * from t2;
area_id | area_name | descs
---------+-----------+---------------
10001 | jiangyin | test
10001 | jiangyin | testfasdf
10003 | yichang | test22
10003 | yichang | test22
10003 | yichang | test2yichang
10003 | yichang | test22fasdfas
(6 rows)
pgtest5:
__xtest__N2=# select * from t2;
area_id | area_name | descs
---------+-----------+-----------
10002 | xishan | test
10002 | xishan | test
10002 | xishan | fasdasdf
10002 | xishan | testfasdf
(4 rows)
pgtest6:
__xtest__N3=# select * from t2;
area_id | area_name | descs
---------+-----------+-------
(0 rows)
Partitioning by area_id clearly does not give a balanced distribution: hash partitioning places every row with the same area_id on the same node, and with only three distinct key values one node can end up with nothing at all. This deserves further study.
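To see the skew directly, count the rows per partitioning key in each node-local database (a sketch; run in __xtest__N1, __xtest__N2 and __xtest__N3, output omitted):
select area_id, count(*) from t2 group by area_id order by area_id;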
(7). Creating a ROUND ROBIN table
Stado -> create table t3 (area_id int ,name varchar2(30)) ROUND ROBIN ON all;
OK
Stado -> insert into t3 values(1001,'jiangyi');
1 row(s) affected
Stado -> insert into t3 values(1002,'xishan');
1 row(s) affected
Stado -> insert into t3 values(1003,'yichang');
1 row(s) affected
Stado -> select * from t3;
+-------------------+
| area_id | name |
+-------------------+
| 1003 | yichang |
| 1001 | jiangyi |
| 1002 | xishan |
+-------------------+
3 row(s).
Check the data on each node:
__xtest__N1=# select * from t3;
LOG: statement: select * from t3;
area_id | name
---------+---------
1003 | yichang
(1 row)
__xtest__N2=# select * from t3;
area_id | name
---------+---------
1001 | jiangyi
(1 row)
__xtest__N3=# select * from t3;
area_id | name
---------+--------
1002 | xishan
(1 row)
As you can see, rows are inserted across the nodes in round-robin fashion.
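Three rows are too few to judge how even the distribution is; a quick check (a sketch, not from the original post) is to insert a larger batch through Stado and then count the rows in each node-local database:
Stado -> insert into t3 select generate_series(1,99)::int, 'x';
-- then, in __xtest__N1, __xtest__N2 and __xtest__N3:
select count(*) from t3;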
http://xmarker.blog.163.com/blog/static/2264840572013112105159991/
The previous two posts covered installing and configuring Stado and its common commands; this post goes further and experiments with table joins and sharding approaches.
(1). Create the three table types Stado supports:
Stado distributes tables by hash partitioning on a column value (partitioning), by round robin, or by replication (replicated). Note that range partitioning is not supported, i.e. you cannot partition a table by ranges of a column's values.
Stado -> create table t_partition (id int,name varchar(30)) partitioning key id on all;
OK
Stado -> insert into t_partition select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Stado -> create table t_replicate (id int,name varchar(30)) replicated;
OK
Stado -> insert into t_replicate select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Stado -> create table t_roundrobin (id int,name varchar(30)) round robin on all;
OK
Stado -> insert into t_roundrobin select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Each of the three table types above was created and loaded with 100 rows.
(2). Join queries across the three table types:
Stado -> select * from t_partition a,t_replicate b where a.id=b.id;
+-------------------------+
| id | name | id | name |
+-------------------------+
| 11 | mcl | 11 | mcl |
| 13 | mcl | 13 | mcl |
| 14 | mcl | 14 | mcl |
| 22 | mcl | 22 | mcl |
....
| 100 | mcl | 100 | mcl |
+-------------------------+
100 row(s).
Let's look at the execution plan:
Stado -> explain select * from t_partition a,t_replicate b where a.id=b.id;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT23_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT23_1"."id" AS "id","TMPTT23_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT23_1" CROSS JOIN "t_replicate" "b" WHERE ("TMPTT23_1"."id" = "b"."id") |
| Drop: |
| TMPTT23_1 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
As you can see, the join runs in two steps: step 0 creates the unlogged table TMPTT23_1 and fills it with the data from t_partition, and step 1 joins TMPTT23_1 with t_replicate to produce the final result.
Similarly, the partitioned table joined with the round-robin table returns the same data; the execution plan (much like the previous one) is:
Stado -> explain select * from t_partition a,t_roundrobin b where a.id=b.id;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT25_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT25_1"."id" AS "id","TMPTT25_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT25_1" CROSS JOIN "t_roundrobin" "b" WHERE ("TMPTT25_1"."id" = "b"."id") |
| Drop: |
| TMPTT25_1 |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
Likewise, the round-robin table joined with the replicated table also returns the same data:
Stado -> explain select * from t_roundrobin a,t_replicate b where a.id=b.id;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT27_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_roundrobin" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT27_1"."id" AS "id","TMPTT27_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT27_1" CROSS JOIN "t_replicate" "b" WHERE ("TMPTT27_1"."id" = "b"."id") |
| Drop: |
| TMPTT27_1 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
From the above we can see that all three distribution types support join queries.
(3). GROUP BY queries
Stado -> select a.id,count(*) from t_partition a,t_partition2 b where a.id=b.id and b.id<30 group by a.id order by 1;
+---------------+
| id | count(*) |
+---------------+
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |
| 5 | 1 |
| 6 | 1 |
| 7 | 1 |
| 8 | 1 |
| 9 | 1 |
| 10 | 1 |
| 11 | 1 |
| 12 | 1 |
| 13 | 1 |
| 14 | 1 |
| 15 | 1 |
| 16 | 1 |
| 17 | 1 |
| 18 | 1 |
| 19 | 1 |
| 20 | 1 |
| 21 | 1 |
| 22 | 1 |
| 23 | 1 |
| 24 | 1 |
| 25 | 1 |
| 26 | 1 |
| 27 | 1 |
| 28 | 1 |
| 29 | 1 |
+---------------+
29 row(s).
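One gap in the transcript: t_partition2, the second table in this query, is never created above. Judging from the description below (same structure as t_partition, half the data), it was presumably set up roughly like this (an assumption, not from the original):
Stado -> create table t_partition2 (id int,name varchar(30)) partitioning key id on all;
Stado -> insert into t_partition2 select generate_series(1,50)::int,'mcl'::varchar(30);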
This is a partitioned-to-partitioned join followed by a GROUP BY (note that t_partition2 has the same structure as t_partition but only half the data); it completes in three steps:
Stado -> explain select a.id,count(*) from t_partition a,t_partition2 b where a.id=b.id and b.id<30 group by a.id order by 1;
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT34_1" ( "id" INT) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT34_2" ( "XCOL1" INT, "XCOL2" INT) WITHOUT OIDS |
| Select: SELECT "TMPTT34_1"."id" AS "XCOL1",count(*) AS "XCOL2" FROM "TMPTT34_1" CROSS JOIN "t_partition2" "b" WHERE ("b"."id" < 30) AND ("TMPTT34_1"."id" = "b"."id") group by "TMPTT34_1"."id" |
| Drop: |
| TMPTT34_1 |
| |
| |
| Step: 2 |
| ------- |
| Select: SELECT "XCOL1" AS "id",SUM("XCOL2") AS "EXPRESSION67" FROM "TMPTT34_2" group by "XCOL1", "XCOL1" |
| Drop: |
| TMPTT34_2 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
20 row(s).
(4). How to shard by range
Often we would rather not distribute data by round robin or by hash partitioning, but Stado offers no range partitioning. If there is an obvious partitioning criterion, for example users from the Wuxi region always insert in Wuxi and users from the Suzhou region always insert in Suzhou, the application can write directly to the corresponding backend data node, and Stado can still aggregate the result sets from all regions at query time:
Stado -> create table t_area_record(area_id int,area_name varchar(30),name varchar(30)) partitioning key area_id on all;
OK
Use the first node as the Jiangyin database:
__xtest__N1=# insert into t_area_record select 1001,'jiangyin'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Use the second node as the Wuxi database:
__xtest__N2=# insert into t_area_record select 1002,'wuxi'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Use the third node as the Yixing database:
__xtest__N3=# insert into t_area_record select 1003,'yixing'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Then query on the central Stado node:
Stado -> select area_id,count(*) from t_area_record group by area_id order by 2;
+--------------------+
| area_id | count(*) |
+--------------------+
| 1003 | 50 |
| 1002 | 50 |
| 1001 | 50 |
+--------------------+
3 row(s).
Stado -> select * from t_area_record order by area_id limit 5;
+----------------------------+
| area_id | area_name | name |
+----------------------------+
| 1001 | jiangyin | 2dba |
| 1001 | jiangyin | 3dba |
| 1001 | jiangyin | 4dba |
| 1001 | jiangyin | 5dba |
| 1001 | jiangyin | 1dba |
+----------------------------+
5 row(s).
As you can see, writing into each shard directly while querying through the central node nicely makes up for Stado's lack of PARTITION BY RANGE. The MySQL-based Cobar has richer and more flexible partitioning rules, but Cobar cannot aggregate across shards: running select count(*) from t_area_record on Cobar returns three rows, because Cobar simply forwards the SQL to each data node and returns the per-node results without any further processing, whereas Stado clearly does process them:
Stado -> select count(*) from t_area_record;
+----------+
| count(*) |
+----------+
|      150 |
+----------+
1 row(s).
Stado -> explain select count(*) from t_area_record;
+-------------------------------------------------------------------------+
| Query Plan |
+-------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT56_1" ( "XCOL1" INT) WITHOUT OIDS |
| Select: SELECT count(*) AS "XCOL1" FROM "t_area_record" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT SUM("XCOL1") AS "EXPRESSION120" FROM "TMPTT56_1" |
| Drop: |
| TMPTT56_1 |
+-------------------------------------------------------------------------+
12 row(s).
There is an even better approach, though: combined with PostgreSQL's built-in table partitioning, the optimizer can also apply constraint exclusion, so it knows which child tables (and hence which data nodes) actually need to be scanned and skips the partitions that don't. I'll write about that next time; it's time for bed.
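As a preview of that follow-up, this is roughly what constraint exclusion looks like with plain PostgreSQL's inheritance-based partitioning (a sketch independent of Stado; table names are illustrative):
-- parent table plus one child per region, each child carrying a CHECK constraint on the key
create table area_record (area_id int, area_name varchar(30), name varchar(30));
create table area_record_1001 (check (area_id = 1001)) inherits (area_record);
create table area_record_1002 (check (area_id = 1002)) inherits (area_record);
set constraint_exclusion = partition;
-- the planner skips children whose CHECK contradicts the predicate,
-- so this scans only area_record_1001 (plus the empty parent):
explain select * from area_record where area_id = 1001;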
http://xmarker.blog.163.com/blog/static/2264840572013101062120936/
This post briefly introduces installing and using plproxy, a sharding solution for PostgreSQL; real sharding will be covered later after deeper study. The experiment uses three CentOS 6.4 virtual machines with IPs 10.1.1.2, 10.1.1.11 and 10.1.1.12; 10.1.1.12 is the plproxy node and the other two are data nodes.
1. How plproxy works (see Digoal's (德哥) related articles):
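In short, PL/Proxy is a procedural language whose functions are thin proxies: the body of a proxy function names a cluster and a RUN ON rule, and the call is forwarded to the data node(s) selected by that rule, where a function with the same signature does the real work. A minimal sketch (the cluster and function names are illustrative):
-- on the plproxy node: route the call to the shard chosen by hashing the key
CREATE FUNCTION get_user_email(i_username text)
RETURNS text AS $$
    CLUSTER 'usercluster';
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;
-- each data node must define a real get_user_email(text) that returns the value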
2. Download the software:
Just download the latest version; at the time of writing the latest is 2.5.
3. Extract the archive and cd into the directory (it only needs to be installed on the plproxy node):
/postgres/plproxy-2.5@pgtest4$pwd
/postgres/plproxy-2.5
/postgres/plproxy-2.5@pgtest4$ls
AUTHORS config COPYRIGHT debian doc Makefile META.json NEWS plproxy.control plproxy.so README sql src test
4. Build and install:
Note that the installation must be done as root, and the postgres user's .bash_profile also needs to be sourced first:
source /home/postgres/.bash_profile
/postgres/plproxy-2.5@pgtest4$make
bison -b src/parser -d src/parser.y
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include -DNO_SELECT=0 -I. -I. -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o src/scanner.o src/scanner.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include -DNO_SELECT=0 -I. -I. -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o src/