ClickHouse Master-Replica Deployment (1 Shard, 2 Replicas)


 

Environment:
192.168.1.118
192.168.1.134

OS: CentOS 7

DB: ClickHouse 21.9.7

 

1. Install ZooKeeper
ZooKeeper can be deployed either as a cluster or as a single node.

For cluster installation steps, see: https://www.cnblogs.com/hxlasky/p/14665752.html

Here I installed a single-node ZooKeeper.

 

2. Install ClickHouse on every node
For installation steps, see:
https://www.cnblogs.com/hxlasky/p/15325313.html
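Once both installs finish, it is worth confirming that the nodes run the same ClickHouse version (21.9.7 in this walkthrough), since replicas on mismatched versions can misbehave. A quick check from either machine:

clickhouse-client --host 192.168.1.118 --port=9000 --query "SELECT version();"
clickhouse-client --host 192.168.1.134 --port=9000 --query "SELECT version();"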

 

3. Create the /etc/metrika.xml file

On node 192.168.1.118:

<?xml version="1.0"?>
<yandex>
  <clickhouse_remote_servers>
    <perftest_1shards_2replicas>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.118</host>
          <port>9000</port>
        </replica>
        <replica>
          <host>192.168.1.134</host>
          <port>9000</port>
        </replica>
      </shard>
    </perftest_1shards_2replicas>
  </clickhouse_remote_servers>
  <!-- ZooKeeper settings -->
  <zookeeper-servers>
    <node index="1">
      <host>192.168.1.118</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>
  <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.118</replica>
  </macros>
  <networks>
    <ip>::/0</ip>
  </networks>
  <clickhouse_compression>
    <case>
      <min_part_size>10000000000</min_part_size>
      <min_part_size_ratio>0.01</min_part_size_ratio>
      <method>lz4</method>
    </case>
  </clickhouse_compression>
</yandex>

 

 

On node 192.168.1.134 (identical except for the <replica> macro, which must name the local node):

<?xml version="1.0"?>
<yandex>
  <clickhouse_remote_servers>
    <perftest_1shards_2replicas>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.118</host>
          <port>9000</port>
        </replica>
        <replica>
          <host>192.168.1.134</host>
          <port>9000</port>
        </replica>
      </shard>
    </perftest_1shards_2replicas>
  </clickhouse_remote_servers>
  <!-- ZooKeeper settings -->
  <zookeeper-servers>
    <node index="1">
      <host>192.168.1.118</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>
  <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.134</replica>
  </macros>
  <networks>
    <ip>::/0</ip>
  </networks>
  <clickhouse_compression>
    <case>
      <min_part_size>10000000000</min_part_size>
      <min_part_size_ratio>0.01</min_part_size_ratio>
      <method>lz4</method>
    </case>
  </clickhouse_compression>
</yandex>

 

 

4. Include metrika.xml from the main configuration file

Edit /etc/clickhouse-server/config.xml and add or adjust the following elements:

<yandex>
    <!-- Pull in /etc/metrika.xml -->
    <include_from>/etc/metrika.xml</include_from>

    <!-- Reference the zookeeper-servers definition from metrika.xml -->
    <zookeeper incl="zookeeper-servers" optional="true" />

    <!-- Add the incl attribute to the existing remote_servers element;
         the built-in test_shard_localhost sample definitions inside it can stay -->
    <remote_servers incl="clickhouse_remote_servers">
        ...
    </remote_servers>

    <!-- Reference the macros definition from metrika.xml -->
    <macros incl="macros" optional="true"></macros>
    ...
</yandex>

 

Copy the configuration file to the other node (the node-specific interserver address is adjusted in the next step):
scp /etc/clickhouse-server/config.xml root@192.168.1.134:/etc/clickhouse-server/

 

5. Set the interserver communication address
interserver_http_host is the address other replicas use to fetch data parts from this node, so each node must advertise its own IP.

vi /etc/clickhouse-server/config.xml

On node 192.168.1.118:
<interserver_http_host>192.168.1.118</interserver_http_host>

On node 192.168.1.134:
<interserver_http_host>192.168.1.134</interserver_http_host>

 

6. Restart
Restart the ClickHouse server on both nodes:
systemctl stop clickhouse-server.service
systemctl start clickhouse-server.service
systemctl status clickhouse-server.service
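After the restart, a quick way to confirm that each node can actually reach ZooKeeper is to read the ZooKeeper root through ClickHouse's system.zookeeper table; the query fails outright if ZooKeeper is unreachable or not configured:

clickhouse-client --host 192.168.1.118 --port=9000 --query "SELECT name FROM system.zookeeper WHERE path = '/';"

Run the same check against 192.168.1.134.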

 

7. Check the cluster status
Check on both 192.168.1.118 and 192.168.1.134:

[root@localhost clickhouse219]# clickhouse-client --host 192.168.1.118 --port=9000

SELECT * FROM system.clusters;

Query id: cae1ecb0-35c9-445b-892e-9e40f3ceff5a

┌─cluster────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┐
│ perftest_1shards_2replicas │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        1 │ default │
│ perftest_1shards_2replicas │         1 │            1 │           2 │ 192.168.1.134 │ 192.168.1.134 │ 9000 │        0 │ default │
└────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┘

12 rows in set. Elapsed: 0.007 sec.

(The full 12-row output also lists the built-in sample clusters test_cluster_two_shards, test_cluster_two_shards_internal_replication, test_cluster_two_shards_localhost, test_shard_localhost, test_shard_localhost_secure, and test_unavailable_shard; only the two perftest_1shards_2replicas rows shown above matter here, and the trailing default_database and error-counter columns have been trimmed for readability.)
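Since the replicated tables created below rely on the {replica} macro from metrika.xml, it is also worth confirming on each node that the macros were picked up. On 192.168.1.118 this should return shard_name = 01 and replica = 192.168.1.118:

SELECT * FROM system.macros;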

 

8. Create a database

Run on one of the nodes; here I ran it on 192.168.1.118:

localhost :) CREATE DATABASE db_test ON CLUSTER perftest_1shards_2replicas;

Query id: 631e56fc-f4ed-4a69-a289-d3abb474965f

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.118 │ 9000 │      0 │       │                   1 │                1 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.134 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
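The ON CLUSTER statement travels through the distributed DDL queue that ClickHouse keeps in ZooKeeper (under /clickhouse/task_queue/ddl by default). If one node never applies the DDL, inspecting that queue is a useful first step; a sketch, assuming the default queue path:

SELECT name FROM system.zookeeper WHERE path = '/clickhouse/task_queue/ddl';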

 

 

Log in and verify the database on both nodes:

clickhouse-client --host 192.168.1.118 --port=9000 -m

localhost :) show databases;

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

clickhouse-client --host 192.168.1.134 --port=9000 -m

localhost :) show databases;

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

 

9. Create a table and insert data

Run on one of the nodes; here on 192.168.1.118:

clickhouse-client --host 192.168.1.118 --port=9000 -m

CREATE TABLE db_test.tb_test01 ON CLUSTER 'perftest_1shards_2replicas'
(
    `id` Int64,
    `p`  Int16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/replicated/tb_test01', '{replica}')
PARTITION BY p
ORDER BY id;

INSERT INTO db_test.tb_test01 VALUES (33, 33);
INSERT INTO db_test.tb_test01 VALUES (44, 44);
INSERT INTO db_test.tb_test01 VALUES (55, 55);
INSERT INTO db_test.tb_test01 VALUES (66, 66);

 

Check on the other node (each row appears in its own block because the table is partitioned by p):

clickhouse-client --host 192.168.1.134 --port=9000 -m

localhost :) select * from db_test.tb_test01;

Query id: c231b503-1d01-488b-8ea5-ccac69c02b78

┌─id─┬──p─┐
│ 55 │ 55 │
└────┴────┘
┌─id─┬──p─┐
│ 66 │ 66 │
└────┴────┘
┌─id─┬──p─┐
│ 33 │ 33 │
└────┴────┘
┌─id─┬──p─┐
│ 44 │ 44 │
└────┴────┘
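As a final check, either node can confirm that replication is healthy through the system.replicas table; with one shard and two replicas, total_replicas and active_replicas should both be 2:

SELECT database, table, is_leader, replica_name, total_replicas, active_replicas
FROM system.replicas
WHERE database = 'db_test' AND table = 'tb_test01';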

 

