http://xmarker.blog.163.com/blog/static/22648405720131125560959/
I have been looking for a distributed solution for PostgreSQL for a while. I tried pgpool-II and PL/Proxy, but neither was satisfying. Yesterday I stumbled upon an open-source distributed PostgreSQL solution called Stado, formerly GridSQL developed by EnterpriseDB (open source, but no longer maintained). After testing it today I found it somewhat similar to Alibaba's Cobar, so here I share the installation and a first look.
(1). Download the Stado software from: http://www.stormdb.com/community/stado
(2). Lab environment:
Host:  pgtest4     pgtest5     pgtest6
IP:    10.1.1.13   10.1.1.14   10.1.1.15
OS:    CentOS 6.4  CentOS 6.4  CentOS 6.4
pgtest4, pgtest5, and pgtest6 all serve as data nodes; pgtest4 additionally runs the Stado coordinator.
(3). Create the stado user and the installation directories:
[stado@pgtest4 ~]$ tar -zxvf stado_2_5.tar.gz
[stado@pgtest4 ~]$ chown -R stado:stadogrp stado
[stado@pgtest4 ~]$ chmod 700 stado/bin/*.sh
[stado@pgtest4 ~]$ chmod 775 stado/log
[stado@pgtest4 ~]$ chmod 755 stado/bin/gs-cmdline.sh
[stado@pgtest4 ~]$ chmod 600 stado/config/*
(4). Install the PostgreSQL software and initialize a database on every node (omitted).
(5). Create a database user on every node:
Create the user (run on each node):
createuser -d -E stado -U postgres -P
Create a .pgpass file so connections work without typing a password:
/home/postgres@pgtest4$cat .pgpass
*:*:*:stado:123456
chmod 600 .pgpass
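One detail of .pgpass that trips people up: libpq silently ignores the file unless it is readable only by its owner, which is why the chmod 600 above matters. A minimal Python sketch (writing to a temporary directory rather than the real ~/.pgpass) of building the entry above and applying the required permissions:

```python
import os
import stat
import tempfile

# A .pgpass entry has the form host:port:database:username:password;
# "*" is a wildcard matching any value for that field.
entry = ":".join(["*", "*", "*", "stado", "123456"])

# For illustration, write the file into a temp directory
# (in practice it lives at ~/.pgpass).
path = os.path.join(tempfile.mkdtemp(), ".pgpass")
with open(path, "w") as f:
    f.write(entry + "\n")

# libpq ignores .pgpass unless its mode is 0600 or stricter
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
```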
(6). Edit the configuration file on the Stado node (pgtest4):
[stado@pgtest4 stado]$ cd /home/stado/stado/config/
[stado@pgtest4 config]$ ls
stado_agent.config stado.config
[stado@pgtest4 config]$ vim stado.config
xdb.port=6453
xdb.maxconnections=10
xdb.default.dbusername=stado
xdb.default.dbpassword=123456
xdb.default.dbport=5432
xdb.default.threads.pool.initsize=5
xdb.default.threads.pool.maxsize=10
xdb.metadata.database=XDBSYS
xdb.metadata.dbhost=127.0.0.1
xdb.nodecount=3
xdb.node.1.dbhost=pgtest4
xdb.node.2.dbhost=pgtest5
xdb.node.3.dbhost=pgtest6
xdb.coordinator.node=1
Only the settings above need to be changed; the rest can be left at their defaults.
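The file is plain key=value pairs, so it is easy to sanity-check from a script. A quick sketch of reading it (the sample text reproduces the settings edited above):

```python
# Parse a Java-properties-style key=value config such as stado.config.
sample = """\
xdb.port=6453
xdb.nodecount=3
xdb.node.1.dbhost=pgtest4
xdb.node.2.dbhost=pgtest5
xdb.node.3.dbhost=pgtest6
xdb.coordinator.node=1
"""

config = {}
for line in sample.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip blanks and comments
    key, _, value = line.partition("=")
    config[key] = value

# Derive the node list the coordinator will manage
nodes = [config[f"xdb.node.{i}.dbhost"]
         for i in range(1, int(config["xdb.nodecount"]) + 1)]
```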
(7). Create the metadata database on the Stado node (pgtest4):
[stado@pgtest4 bin]$ ./gs-createmddb.sh -u admin -p secret
Executed Statement: create table xsystablespaces ( tablespaceid serial, tablespacename varchar(255) not null, ownerid int not null, primary key(tablespaceid))
Executed Statement: create unique index idx_xsystablespaces_1 on xsystablespaces (tablespacename)
Executed Statement: create table xsystablespacelocs ( tablespacelocid int not null, tablespaceid int not null, filepath varchar(1024) not null, nodeid int not null, primary key(tablespacelocid))
Executed Statement: create unique index idx_xsystablespacelocs_1 on xsystablespacelocs (tablespaceid, nodeid)
....
Executed Statement: create unique index idx_xsyschecks_1 on xsyschecks (constid, seqno)
Executed Statement: alter table xsyschecks add foreign key (constid) references xsysconstraints (constid)
User admin is created
(8). Start Stado:
[stado@pgtest4 bin]$ ./gs-server.sh
Starting....
(9). Create a user database:
[stado@pgtest4 bin]$ ./gs-createdb.sh -d xtest -u admin -p secret -n 1,2,3
OK
After logging in to each node you should see the xtest database (created as __xtest__Nn on node n), e.g.:
__xtest__N2=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+--------------+----------+-------------+-------------+-----------------------
__xtest__N2 | stado | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
(10). Log in to the database:
Once the user database is created in step 9, Stado brings it online automatically, so you can log in to the Stado command line and start working with data:
[stado@pgtest4 bin]$ ./gs-cmdline.sh -d xtest -u admin -p secret
Stado -> show databases;
+----------------------------+
| DATABASE | STATUS | NODES |
+----------------------------+
| xtest | Started | 1,2,3 |
+----------------------------+
1 row(s).
Create a table:
Stado -> CREATE TABLE mytable1 (col1 INT)
PARTITIONING KEY col1 ON ALL;
OK
Stado -> INSERT INTO mytable1 VALUES (1);
1 row(s) affected
Stado -> INSERT INTO mytable1 VALUES (2);
1 row(s) affected
Stado -> SELECT * FROM mytable1;
+------+
| col1 |
+------+
| 2 |
| 1 |
+------+
2 row(s).
Stado -> INSERT INTO mytable1 VALUES (3);
1 row(s) affected
Stado -> SELECT * FROM mytable1;
+------+
| col1 |
+------+
| 2 |
| 1 |
| 3 |
+------+
3 row(s).
Stado -> show tables;
+----------------------------------------------------+
| TABLE | TABLE_PARTITIONING_COLUMN | TABLE_NODES |
+----------------------------------------------------+
| mytable1 | col1 | 1,2,3 |
+----------------------------------------------------+
1 row(s).
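PARTITIONING KEY col1 ON ALL means each inserted row is routed to exactly one node by hashing col1. Stado's actual hash function is internal to the planner; a hypothetical modulo scheme illustrates the routing idea:

```python
NODES = [1, 2, 3]  # node ids, per xdb.nodecount

def node_for(key_value):
    # Hypothetical hash: Stado's real function differs, but the
    # principle is the same -- one deterministic node per key value.
    return NODES[key_value % len(NODES)]

# Each of the three inserted rows lands on exactly one node
placement = {v: node_for(v) for v in (1, 2, 3)}
```

Because the mapping is deterministic, a later SELECT on col1 = 2 only needs to ask one node for the row.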
That covers basic installation and testing; the next post introduces common commands and operations.
http://xmarker.blog.163.com/blog/static/22648405720131126350993/
The previous post covered installing and configuring Stado; this one briefly introduces common commands and table partitioning.
(1). Stopping Stado:
On a data node such as pgtest5, checking for Stado processes shows connections into the underlying PostgreSQL database:
[root@pgtest5 ~]# ps -ef|grep stado
postgres 1942 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41465) idle
postgres 1943 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41470) idle
postgres 1944 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41471) idle
postgres 1945 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41472) idle
postgres 1946 1815 0 10:59 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41473) idle
root 2085 2064 0 11:23 pts/1 00:00:00 grep stado
Stop the Stado processes:
[stado@pgtest4 bin]$ ./gs-dbstop.sh -d xtest -u admin -p secret
Database(s) xtest stopped.
[root@pgtest5 ~]# ps -ef|grep stado
root 4085 2064 0 18:38 pts/1 00:00:00 grep stado
(2). Starting Stado:
Start the Stado processes:
[stado@pgtest4 bin]$ ./gs-dbstart.sh -d xtest -u admin -p secret
Database(s) xtest started.
[root@pgtest5 ~]# ps -ef|grep stado
postgres 4101 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41496) idle in transaction
postgres 4102 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41499) idle
postgres 4103 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41502) idle
postgres 4104 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41505) idle
postgres 4105 1815 0 18:41 ? 00:00:00 postgres: stado __xtest__N2 10.1.1.13(41507) idle
root 4112 2064 0 18:42 pts/1 00:00:00 grep stado
(3). Dropping a database:
[stado@pgtest4 bin]$ ./gs-dbstop.sh -d abc -u admin -p secret
Database(s) abc stopped.
[stado@pgtest4 bin]$ ./gs-dropdb.sh -d abc -u admin -p secret
OK
A database's connection processes must be stopped first; only then can the database itself be dropped.
(4). Creating a Stado database:
[stado@pgtest4 bin]$ ./gs-createdb.sh -d abc -u admin -p secret -n 1,2,3
OK
(5). Creating a replicated table:
Stado -> create table t (id int primary key,name text) REPLICATED ;
OK
Stado -> insert into t values(1,'abc');
1 row(s) affected
Stado -> insert into t values(2,'bcd');
1 row(s) affected
Check the data on each node:
__xtest__N1=# select * from t;
id | name
----+------
1 | abc
2 | bcd
(2 rows)
__xtest__N2=# select * from t;
id | name
----+------
1 | abc
2 | bcd
(2 rows)
(6). Creating a partitioned table:
Stado -> create table t2 (area_id int,area_name varchar(20),descs text) partitioning key area_id on all;
OK
Insert data:
insert into t2 values(10002,'xishan','test');
insert into t2 values(10002,'xishan','fasdasdf');
insert into t2 values(10002,'xishan','testfasdf');
insert into t2 values(10001,'jiangyin','testfasdf');
insert into t2 values(10003,'yichang','test22');
insert into t2 values(10003,'yichang','test22');
insert into t2 values(10003,'yichang','test2yichang');
insert into t2 values(10003,'yichang','test22fasdfas');
Stado -> select * from t2;
+-------------------------------------+
| area_id | area_name | descs |
+-------------------------------------+
| 10001 | jiangyin | test |
| 10002 | xishan | test |
| 10001 | jiangyin | testfasdf |
| 10002 | xishan | test |
| 10003 | yichang | test22 |
| 10002 | xishan | fasdasdf |
| 10003 | yichang | test22 |
| 10002 | xishan | testfasdf |
| 10003 | yichang | test2yichang |
| 10003 | yichang | test22fasdfas |
+-------------------------------------+
10 row(s).
Check the data distribution on each node:
pgtest4:
__xtest__N1=# select * from t2;
LOG: statement: select * from t2;
area_id | area_name | descs
---------+-----------+---------------
10001 | jiangyin | test
10001 | jiangyin | testfasdf
10003 | yichang | test22
10003 | yichang | test22
10003 | yichang | test2yichang
10003 | yichang | test22fasdfas
(6 rows)
pgtest5:
__xtest__N2=# select * from t2;
area_id | area_name | descs
---------+-----------+-----------
10002 | xishan | test
10002 | xishan | test
10002 | xishan | fasdasdf
10002 | xishan | testfasdf
(4 rows)
pgtest6:
__xtest__N3=# select * from t2;
area_id | area_name | descs
---------+-----------+-------
(0 rows)
Partitioning on area_id clearly does not give a perfectly even distribution here; this deserves further investigation.
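The imbalance is actually inherent to hash partitioning: every row sharing the same area_id lands on the same node, so with only three distinct key values at most three nodes hold data, and hash collisions between keys can leave a node completely empty (as pgtest6 was here). A hypothetical modulo hash shows the colocation effect:

```python
from collections import Counter

NODES = [1, 2, 3]

def node_for(area_id):
    # Hypothetical hash function, for illustration only;
    # Stado's real hash differs, but the skew mechanism is the same.
    return NODES[area_id % len(NODES)]

# area_id values of the rows stored above (2x 10001, 4x 10002, 4x 10003)
rows = [10001] * 2 + [10002] * 4 + [10003] * 4

per_node = Counter(node_for(a) for a in rows)
# All rows of one key colocate, so the per-node distribution mirrors
# the key frequencies rather than spreading rows evenly.
```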
(7). Creating a ROUND ROBIN table:
Stado -> create table t3 (area_id int ,name varchar2(30)) ROUND ROBIN ON all;
OK
Stado -> insert into t3 values(1001,'jiangyi');
1 row(s) affected
Stado -> insert into t3 values(1002,'xishan');
1 row(s) affected
Stado -> insert into t3 values(1003,'yichang');
1 row(s) affected
Stado -> select * from t3;
+-------------------+
| area_id | name |
+-------------------+
| 1003 | yichang |
| 1001 | jiangyi |
| 1002 | xishan |
+-------------------+
3 row(s).
Check the data on each node:
__xtest__N1=# select * from t3;
LOG: statement: select * from t3;
area_id | name
---------+---------
1003 | yichang
(1 row)
__xtest__N2=# select * from t3;
area_id | name
---------+---------
1001 | jiangyi
(1 row)
__xtest__N3=# select * from t3;
area_id | name
---------+--------
1002 | xishan
(1 row)
As you can see, rows are inserted in round-robin fashion across the nodes.
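Round-robin placement is essentially a cycling counter over the node list (an illustrative sketch; Stado's internal bookkeeping, including which node the cycle starts on, may differ):

```python
import itertools

NODES = [1, 2, 3]
next_node = itertools.cycle(NODES)  # yields 1, 2, 3, 1, 2, 3, ...

rows = [(1001, "jiangyi"), (1002, "xishan"), (1003, "yichang")]
placement = [(row, next(next_node)) for row in rows]
# Three inserts across three nodes: exactly one row per node,
# matching what the per-node queries above showed.
```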
http://xmarker.blog.163.com/blog/static/2264840572013112105159991/
The previous two posts covered installing and configuring Stado and its common commands; this one experiments further with table joins and sharding approaches.
(1). Create tables of the three types Stado supports:
The three distribution modes are hash partitioning on a column value (PARTITIONING KEY), round-robin (ROUND ROBIN), and replication (REPLICATED). Note that range partitioning is not supported; you cannot partition on ranges of a column's values.
Stado -> create table t_partition (id int,name varchar(30)) partitioning key id on all;
OK
Stado -> insert into t_partition select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Stado -> create table t_replicate (id int,name varchar(30)) replicated;
OK
Stado -> insert into t_replicate select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Stado -> create table t_roundrobin (id int,name varchar(30)) round robin on all;
OK
Stado -> insert into t_roundrobin select generate_series(1,100)::int,'mcl'::varchar(30);
100 row(s) affected
Three tables, one per distribution mode, each loaded with 100 rows.
(2). Joins across the three table types:
Stado -> select * from t_partition a,t_replicate b where a.id=b.id;
+-------------------------+
| id | name | id | name |
+-------------------------+
| 11 | mcl | 11 | mcl |
| 13 | mcl | 13 | mcl |
| 14 | mcl | 14 | mcl |
| 22 | mcl | 22 | mcl |
....
| 100 | mcl | 100 | mcl |
+-------------------------+
100 row(s).
Let's look at the execution plan:
Stado -> explain select * from t_partition a,t_replicate b where a.id=b.id;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT23_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT23_1"."id" AS "id","TMPTT23_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT23_1" CROSS JOIN "t_replicate" "b" WHERE ("TMPTT23_1"."id" = "b"."id") |
| Drop: |
| TMPTT23_1 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
The join runs in two steps: step 0 creates the unlogged table TMPTT23_1 and fills it with the rows of t_partition; step 1 joins TMPTT23_1 against t_replicate to produce the final result.
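In miniature, the plan can be simulated on a single node: step 0 materializes this node's slice of the partitioned table into a temp table, and step 1 joins it locally against the full replicated copy (a sketch of the idea with made-up sample rows, not Stado's executor):

```python
# One node's slice of t_partition (hypothetical sample rows)
t_partition_local = [(11, "mcl"), (13, "mcl")]
# t_replicate holds the full 100 rows on every node
t_replicate = {i: "mcl" for i in range(1, 101)}

# Step 0: CREATE UNLOGGED TABLE TMPTT ...; INSERT ... SELECT
tmptt = list(t_partition_local)

# Step 1: local join of the temp table against the replicated table
result = [(pid, name, pid, t_replicate[pid])
          for pid, name in tmptt
          if pid in t_replicate]
```

Because t_replicate is complete on every node, no further data shipping is needed after step 0.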
Likewise, joining the partitioned table with the round-robin table returns the same data; the plan is similar:
Stado -> explain select * from t_partition a,t_roundrobin b where a.id=b.id;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT25_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT25_1"."id" AS "id","TMPTT25_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT25_1" CROSS JOIN "t_roundrobin" "b" WHERE ("TMPTT25_1"."id" = "b"."id") |
| Drop: |
| TMPTT25_1 |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
Similarly, the round-robin and replicated tables join to the same result:
Stado -> explain select * from t_roundrobin a,t_replicate b where a.id=b.id;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT27_1" ( "id" INT, "name" VARCHAR (30)) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id","a"."name" AS "name" FROM "t_roundrobin" "a" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT "TMPTT27_1"."id" AS "id","TMPTT27_1"."name" AS "name","b"."id" AS "EXPRESSION1","b"."name" AS "EXPRESSION2" FROM "TMPTT27_1" CROSS JOIN "t_replicate" "b" WHERE ("TMPTT27_1"."id" = "b"."id") |
| Drop: |
| TMPTT27_1 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
12 row(s).
All three distribution modes thus support join queries.
(3). GROUP BY queries:
Stado -> select a.id,count(*) from t_partition a,t_partition2 b where a.id=b.id and b.id<30 group by a.id order by 1;
+---------------+
| id | count(*) |
+---------------+
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |
| 5 | 1 |
| 6 | 1 |
| 7 | 1 |
| 8 | 1 |
| 9 | 1 |
| 10 | 1 |
| 11 | 1 |
| 12 | 1 |
| 13 | 1 |
| 14 | 1 |
| 15 | 1 |
| 16 | 1 |
| 17 | 1 |
| 18 | 1 |
| 19 | 1 |
| 20 | 1 |
| 21 | 1 |
| 22 | 1 |
| 23 | 1 |
| 24 | 1 |
| 25 | 1 |
| 26 | 1 |
| 27 | 1 |
| 28 | 1 |
| 29 | 1 |
+---------------+
29 row(s).
A GROUP BY over a join of two partitioned tables (note: t_partition2 has the same structure as t_partition but half the rows) completes in three steps:
Stado -> explain select a.id,count(*) from t_partition a,t_partition2 b where a.id=b.id and b.id<30 group by a.id order by 1;
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT34_1" ( "id" INT) WITHOUT OIDS |
| Select: SELECT "a"."id" AS "id" FROM "t_partition" "a" |
| |
| |
| Step: 1 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT34_2" ( "XCOL1" INT, "XCOL2" INT) WITHOUT OIDS |
| Select: SELECT "TMPTT34_1"."id" AS "XCOL1",count(*) AS "XCOL2" FROM "TMPTT34_1" CROSS JOIN "t_partition2" "b" WHERE ("b"."id" < 30) AND ("TMPTT34_1"."id" = "b"."id") group by "TMPTT34_1"."id" |
| Drop: |
| TMPTT34_1 |
| |
| |
| Step: 2 |
| ------- |
| Select: SELECT "XCOL1" AS "id",SUM("XCOL2") AS "EXPRESSION67" FROM "TMPTT34_2" group by "XCOL1", "XCOL1" |
| Drop: |
| TMPTT34_2 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
20 row(s).
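Step 2's SUM("XCOL2") is the re-aggregation trick: each node computes partial group counts independently, and the coordinator adds the partials per group key. A toy version (the per-node partials here are made up for illustration):

```python
from collections import Counter

# Step 1: partial counts per group key, computed on each node
node_partials = [
    {1: 1, 4: 1, 7: 1},  # hypothetical result from node 1
    {2: 1, 5: 1, 8: 1},  # node 2
    {3: 1, 6: 1, 9: 1},  # node 3
]

# Step 2: coordinator sums the partial counts per key
# (Counter.update adds counts rather than replacing them)
final = Counter()
for partial in node_partials:
    final.update(partial)
```

Summing partial counts is valid because count(*) decomposes: the global count of a group is the sum of its per-node counts.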
(4). How to shard by range:
Often we would rather not distribute rows by round-robin or hash partitioning, but Stado offers no range partitioning. If there is an obvious sharding criterion, say users in the Wuxi region always insert into the Wuxi node and Suzhou users into the Suzhou node, the application can write directly to the matching backend data node and query through Stado, which aggregates the result sets from all regions:
Stado -> create table t_area_record(area_id int,area_name varchar(30),name varchar(30)) partitioning key area_id on all;
OK
Node 1 serves as the Jiangyin database:
__xtest__N1=# insert into t_area_record select 1001,'jiangyin'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Node 2 serves as the Wuxi database:
__xtest__N2=# insert into t_area_record select 1002,'wuxi'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Node 3 serves as the Yixing database:
__xtest__N3=# insert into t_area_record select 1003,'yixing'::varchar(30),generate_series(1,50)||'dba'::varchar(30);
INSERT 0 50
Then query through the Stado coordinator:
Stado -> select area_id,count(*) from t_area_record group by area_id order by 2;
+--------------------+
| area_id | count(*) |
+--------------------+
| 1003 | 50 |
| 1002 | 50 |
| 1001 | 50 |
+--------------------+
3 row(s).
Stado -> select * from t_area_record order by area_id limit 5;
+----------------------------+
| area_id | area_name | name |
+----------------------------+
| 1001 | jiangyin | 2dba |
| 1001 | jiangyin | 3dba |
| 1001 | jiangyin | 4dba |
| 1001 | jiangyin | 5dba |
| 1001 | jiangyin | 1dba |
+----------------------------+
5 row(s).
Writing per-shard while querying through the coordinator neatly works around Stado's missing PARTITION BY RANGE. The MySQL-based Cobar has richer, more flexible partitioning rules, but it cannot aggregate across shards: running select count(*) from t_area_record on Cobar returns three rows, because Cobar merely forwards the SQL to each data node and returns the per-node results without further processing, whereas Stado clearly post-processes them:
Stado -> select count(*) from t_area_record;
+----------+
| count(*) |
+----------+
|      150 |
+----------+
1 row(s).
Stado -> explain select count(*) from t_area_record;
+-------------------------------------------------------------------------+
| Query Plan |
+-------------------------------------------------------------------------+
| |
| Step: 0 |
| ------- |
| Target: CREATE UNLOGGED TABLE "TMPTT56_1" ( "XCOL1" INT) WITHOUT OIDS |
| Select: SELECT count(*) AS "XCOL1" FROM "t_area_record" |
| |
| |
| Step: 1 |
| ------- |
| Select: SELECT SUM("XCOL1") AS "EXPRESSION120" FROM "TMPTT56_1" |
| Drop: |
| TMPTT56_1 |
+-------------------------------------------------------------------------+
12 row(s).
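The plan makes the Cobar comparison concrete: step 0 runs count(*) on each node independently, and step 1 sums the partials on the coordinator, so the client sees one row instead of three. The arithmetic of the gather step:

```python
# Step 0: per-node count(*) results (50 rows were loaded on each node)
partial_counts = [50, 50, 50]

# Step 1: coordinator computes SUM("XCOL1") over the partials,
# collapsing three per-node rows into a single global count
total = sum(partial_counts)
```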
An even better approach combines this with PostgreSQL's built-in table partitioning: the optimizer can then apply constraint exclusion, i.e. skip child tables (and thus data nodes) whose constraints rule them out of the query. More on that in a later post.
http://xmarker.blog.163.com/blog/static/2264840572013101062120936/
This post briefly covers installing and using PL/Proxy, a sharding solution for PostgreSQL; I will go deeper on actual sharding after further study. The experiment uses three CentOS 6.4 virtual machines with IPs 10.1.1.2, 10.1.1.11, and 10.1.1.12; 10.1.1.12 is the PL/Proxy node and the other two are data nodes.
1. How PL/Proxy works (see 德哥's related articles; diagram omitted):
2. Download the software:
Grab the latest release; at the time of writing that is 2.5.
3. Unpack the software and enter the directory (install on the PL/Proxy node only):
/postgres/plproxy-2.5@pgtest4$pwd
/postgres/plproxy-2.5
/postgres/plproxy-2.5@pgtest4$ls
AUTHORS config COPYRIGHT debian doc Makefile META.json NEWS plproxy.control plproxy.so README sql src test
4. Build and install:
Note: build as root, and source the postgres user's .bash_profile first so the PostgreSQL paths are set:
source /home/postgres/.bash_profile
/postgres/plproxy-2.5@pgtest4$make
bison -b src/parser -d src/parser.y
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include -DNO_SELECT=0 -I. -I. -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o src/scanner.o src/scanner.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include -DNO_SELECT=0 -I. -I. -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o src/