ClickHouse High Availability: Data Consistency Through Node Failures and Hot Scaling


1. Cluster nodes and service layout

Notes:

1.1. Two ClickHouse services run on each node (how to set this up is described in detail later): one holds a data shard and the other holds a data replica (backup). To keep data consistent through a node failure, a shard and its replica must never sit on the same node. For example, the shard on gawh201 must not be backed up by the replica on gawh201; if it were, the data of that shard would be unreachable once gawh201 went down.

1.2. Because of 1.1, shards and replicas must be staggered across nodes, but not arbitrarily: they are staggered following the pattern shown in the figure above (the shard/replica distribution pattern for clusters with many nodes is covered later).

1.3. How data consistency survives a node failure: as the example in the figure above shows, whatever happens, it is enough that the data of all three shards can still be found.

Suppose gawh202 goes down. As the figure below shows, the data of all three shards can still be found (the data of gawh202's shard is available from the replica on gawh203).

Suppose gawh203 goes down. As the figure below shows, the data of all three shards can still be found (the data of gawh203's shard is available from the replica on gawh201).

Note:

a. Why is a gawh201 failure not discussed here? The cluster always has a distributed table; the distributed table stores no data itself and only aggregates the data of the individual shards. It has to be created on a node that acts as a kind of master, and once that master node is down the distributed table can no longer be queried (a solution for the distributed table's master node going down is given later). So in this setup gawh201 must not go down.
b. In the example above, gawh202 and gawh203 must not go down at the same time. If both are down, as the figure below shows, only two of the three shards can still be found.

If a three-node cluster had to keep data consistent with two nodes down, the only option would be to run a third ClickHouse service on gawh201. This is not recommended: three ClickHouse services on one node increase its resource consumption and hurt performance.

c. None of the cases above affects existing data; only inserts and queries that are executing at the time are affected.

2. Building and verifying the high-availability cluster

2.1. First make sure the single-node and basic cluster setups work; see the earlier documents.

2.2. Add a second ClickHouse service on gawh201

a. Copy /etc/clickhouse-server/config.xml and rename the copy
$: cp /etc/clickhouse-server/config.xml /etc/clickhouse-server/config1.xml
b. Edit /etc/clickhouse-server/config1.xml and change the following settings so the two services do not conflict
$: vim /etc/clickhouse-server/config1.xml
<log>/var/log/clickhouse-server/clickhouse-server1.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server1.err.log</errorlog>

<http_port>8124</http_port>
<tcp_port>9003</tcp_port>

<interserver_http_port>9010</interserver_http_port>

<path>/apps/clickhouse/data/clickhouse1</path>
<tmp_path>/var/lib/clickhouse1/tmp/</tmp_path>
<user_files_path>/var/lib/clickhouse1/user_files/</user_files_path>
For comparison, the original ClickHouse service's configuration looks like this:
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>

<http_port>8123</http_port>
<tcp_port>9002</tcp_port>

<interserver_http_port>9009</interserver_http_port>

<path>/apps/clickhouse/data/clickhouse</path>
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
c. Create the directories referenced in config1.xml above.
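A minimal sketch (the paths come from config1.xml above; the clickhouse service account is an assumption, adjust to your installation):
$: mkdir -p /apps/clickhouse/data/clickhouse1
$: mkdir -p /var/lib/clickhouse1/tmp /var/lib/clickhouse1/user_files
$: chown -R clickhouse:clickhouse /apps/clickhouse/data/clickhouse1 /var/lib/clickhouse1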
d. Point each service at its own cluster configuration file instead of the shared default metrika.xml; the include_from tag must be set in both config.xml and config1.xml
$: vim /etc/clickhouse-server/config.xml
<include_from>/etc/clickhouse-server/metrika.xml</include_from>
$: vim /etc/clickhouse-server/config1.xml
<include_from>/etc/clickhouse-server/metrika1.xml</include_from>
e. Add a startup script for the new instance
$: cp /etc/init.d/clickhouse-server /etc/init.d/clickhouse-server1

Change the following entries from the original:

$: vim /etc/init.d/clickhouse-server1
CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config1.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM-1.pid"

The original clickhouse-server script has:

$: vim /etc/init.d/clickhouse-server
CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM.pid"
f. Copy the /etc/metrika.xml file from the earlier cluster deployment to the paths configured in each config's include_from tag; do this for both instances
$: cp /etc/metrika.xml /etc/clickhouse-server/
$: cp /etc/metrika.xml /etc/clickhouse-server/metrika1.xml

2.3. After gawh201 is configured as above, repeat exactly the same steps on gawh202 and gawh203, then edit the metrika*.xml files of the two ClickHouse instances on each of the three nodes one by one.

a. The part that is identical in all six metrika*.xml files (except for the <macros> block, which is covered in b):
<yandex>
<clickhouse_remote_servers>
<!-- <perftest_3shards_2replicas> -->
<cluster>
<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<replica>
<host>gawh201</host>
<port>9002</port>
</replica>
<replica>
<host>gawh202</host>
<port>9003</port>
</replica>
</shard>

<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<replica>
<host>gawh202</host>
<port>9002</port>
</replica>
<replica>
<host>gawh203</host>
<port>9003</port>
</replica>
</shard>

<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<replica>
<host>gawh203</host>
<port>9002</port>
</replica>
<replica>
<host>gawh201</host>
<port>9003</port>
</replica>
</shard>
<!-- </perftest_3shards_2replicas> -->
</cluster>
</clickhouse_remote_servers>

<!-- ZooKeeper configuration -->
<zookeeper-servers>
<node index="1">
<host>gawh201</host>
<port>2182</port>
</node>
<node index="2">
<host>gawh202</host>
<port>2182</port>
</node>
<node index="3">
<host>gawh203</host>
<port>2182</port>
</node>
</zookeeper-servers>

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>01</shard>
<replica>cluster01-01-1</replica>
</macros>

<networks>
<ip>::/0</ip>
</networks>

<clickhouse_compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
b. The <macros> block is the part that differs between instances; set it as follows for each instance:

gawh201 instance 1 (the config.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>01</shard>
<replica>cluster01-01-1</replica>
</macros>

Here layer is the two-level sharding setting (01 for this cluster); shard is the shard number; replica is the replica identifier, written as cluster{layer}-{shard}-{replica}. For example, cluster01-02-1 means replica 1 of shard 02 in cluster01, which is both easy to read and uniquely identifies the replica.
gawh201 instance 2 (the config1.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>03</shard>
<replica>cluster01-03-2</replica>
</macros>

gawh202 instance 1 (the config.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>02</shard>
<replica>cluster01-02-1</replica>
</macros>

gawh202 instance 2 (the config1.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>01</shard>
<replica>cluster01-01-2</replica>
</macros>

gawh203 instance 1 (the config.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>03</shard>
<replica>cluster01-03-1</replica>
</macros>

gawh203 instance 2 (the config1.xml instance)

<macros>
<!-- <replica>gawh201</replica> -->
<layer>01</layer>
<shard>02</shard>
<replica>cluster01-02-2</replica>
</macros>

Note: the pattern follows the layout in section 1: instance 1 (config.xml) on each node hosts that node's shard as replica 1, while instance 2 (config1.xml) hosts replica 2 of the previous node's shard, wrapping around at the ends.

2.4. Start the high-availability ClickHouse cluster

Run the following on all three nodes:

$: /etc/init.d/clickhouse-server start
$: /etc/init.d/clickhouse-server1 start
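Optionally, a quick sanity check before using the client is to hit each instance's HTTP interface (ports 8123 and 8124 as configured in 2.2); each should answer with Ok.:

$: curl http://gawh201:8123/ping
$: curl http://gawh201:8124/ping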

Once both services start without errors, connect to each instance with the client to verify:
On gawh201:

$:clickhouse-client --host gawh201 --port 9002

$: clickhouse-client --host gawh201 --port 9003

On gawh202:

$:clickhouse-client --host gawh202 --port 9002

$:clickhouse-client --host gawh202 --port 9003

On gawh203:

$:clickhouse-client --host gawh203 --port 9002

$:clickhouse-client --host gawh203 --port 9003

Note: careful comparison shows the running instances match the planned design exactly, so the high-availability ClickHouse cluster has been deployed successfully; a query sketch for confirming this follows below.
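From any instance, system.clusters should list three shards with two replicas each, matching the metrika*.xml files above (a sketch, using the cluster name cluster from the configuration):

select cluster, shard_num, replica_num, host_name, port from system.clusters where cluster = 'cluster';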

3. Verifying query consistency over existing data when a node goes down

3.1. How the high availability works

ZooKeeper + ReplicatedMergeTree (replicated tables) + Distributed (distributed tables)

3.2. First create the ReplicatedMergeTree tables, using the sample flight (ontime) data as an example

a. The table has to be created on all six instances across the three nodes. The CREATE statement is:
CREATE TABLE `ontime`( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/ontime', 'cluster01-01-1') PARTITION BY toYYYYMM(FlightDate) ORDER BY (Year,FlightDate,intHash32(Year))

Note: the old-style way of creating a ReplicatedMergeTree table, which passes the date column, primary key and index granularity as extra engine arguments instead of using PARTITION BY / ORDER BY, has been deprecated.
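For reference, the deprecated old-style engine clause looked roughly like this (a sketch only, not used in this deployment):

ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/ontime', 'cluster01-01-1', FlightDate, (Year, FlightDate, intHash32(Year)), 8192)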

b. The CREATE statement is almost identical on all six instances; only the two ReplicatedMergeTree arguments differ, as follows:

gawh201 instance 1 (the config.xml instance):
'/clickhouse/tables/01-01/ontime', 'cluster01-01-1'
gawh201 instance 2 (the config1.xml instance):
'/clickhouse/tables/01-03/ontime', 'cluster01-03-2'
gawh202 instance 1 (the config.xml instance):
'/clickhouse/tables/01-02/ontime', 'cluster01-02-1'
gawh202 instance 2 (the config1.xml instance):
'/clickhouse/tables/01-01/ontime', 'cluster01-01-2'
gawh203 instance 1 (the config.xml instance):
'/clickhouse/tables/01-03/ontime', 'cluster01-03-1'
gawh203 instance 2 (the config1.xml instance):
'/clickhouse/tables/01-02/ontime', 'cluster01-02-2'
Explanation:
ReplicatedMergeTree('/clickhouse/tables/01-01/ontime', 'cluster01-01-1')
The first argument is the table's path in ZooKeeper.
The second argument is the table's replica name in ZooKeeper.
Note:
These values must match the <macros> block of each instance's metrika*.xml; the pattern mirrors the layout in 2.3 exactly, as the sketch below illustrates.
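Because the ZooKeeper path and replica name follow the macros exactly, the engine clause can alternatively be written once with macro substitution, so the same CREATE statement runs unchanged on every instance (a sketch using the {layer}/{shard}/{replica} macros defined above):

ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/ontime', '{replica}') PARTITION BY toYYYYMM(FlightDate) ORDER BY (Year, FlightDate, intHash32(Year))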

3.3. Once the ReplicatedMergeTree tables are created without errors, pick one master instance on which to create the Distributed table; here it is ClickHouse instance 1 on gawh201 (the config.xml instance).

CREATE TABLE ontime_all AS ontime ENGINE = Distributed(cluster, qwrenzixing, ontime, rand())
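The four Distributed() arguments are the cluster name from metrika*.xml (cluster), the database (qwrenzixing), the local table (ontime) and the sharding key (rand()). Once data has been loaded (section 3.4), a quick way to see how the distributed table fans out over the shards is (a sketch, assuming the names above):

select hostName() as host, count(1) as rows from qwrenzixing.ontime_all group by host;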

3.4. Write the data

In the directory containing the downloaded flight data, run:

for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=gawh201 --port=9002 --query="INSERT INTO qwrenzixing.ontime_all FORMAT CSVWithNames"; done

3.5. After the data is written, compare the row counts of the shard and the replica on every instance

select count(1) from qwrenzixing.ontime;

Node      Instance 1 (port 9002, shard)   Instance 2 (port 9003, replica)
gawh201   59667255                        59673242
gawh202   59670658                        59667255
gawh203   59673242                        59670658
Each replica holds exactly the same row count as the shard it backs up on another node (for example, gawh201's replica has 59673242 rows, the same count as gawh203's shard), which matches the design in section 1.

select count(1) from qwrenzixing.ontime_all;

Total row count: 179011155 (= 59667255 + 59670658 + 59673242, the sum of the three shards)

3.6. With the ReplicatedMergeTree tables created and loaded correctly, verify that queries over existing data stay consistent when a single node goes down

a. Stop both instance services on gawh202 to simulate a gawh202 outage:

On gawh202:

$: /etc/init.d/clickhouse-server stop
$: /etc/init.d/clickhouse-server1 stop
b. First verify querying: query the total row count through the distributed table
$:clickhouse-client --host gawh201 --port 9002
select count(1) from qwrenzixing.ontime_all;

The total is unchanged, which proves that a single-node outage does not break query consistency. Naturally, if only gawh203 goes down and the other nodes stay up, the result is the same as for gawh202.

3.7. Verify whether queries over existing data stay consistent when gawh202 and gawh203 (all four of their instances) go down together

a. Stop all the instance services on gawh202 and gawh203 to simulate both nodes going down:

Run on both gawh202 and gawh203:

$: /etc/init.d/clickhouse-server stop
$: /etc/init.d/clickhouse-server1 stop
b. First verify querying: query the total row count through the distributed table
$:clickhouse-client --host gawh201 --port 9002
select count(1) from qwrenzixing.ontime_all;

The query fails immediately. So in this scheme gawh201, as the master node, must not go down, and at most one of gawh202 and gawh203 may be down at any time.

4. Data consistency for writes when a single node goes down

4.1. Drop the local table on all six ClickHouse instances across the three nodes

drop table ontime

4.2. Drop the distributed table on instance 1 (the config.xml instance) of the master node gawh201

drop table ontime_all

Note:
ClickHouse does not support DELETE FROM, so the only way to clear the data is to drop the tables. A helper loop is sketched below.
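Since the local table has to be dropped on every instance, a small loop can do it in one pass (a minimal sketch, assuming the hosts/ports from section 2 and the qwrenzixing database):

for hp in gawh201:9002 gawh201:9003 gawh202:9002 gawh202:9003 gawh203:9002 gawh203:9003; do
  host=${hp%%:*}; port=${hp##*:}
  # drop the local replicated table on this instance
  clickhouse-client --host=$host --port=$port --query="DROP TABLE IF EXISTS qwrenzixing.ontime"
done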

4.3. Write the data

Load the data again following steps 3.2, 3.3 and 3.4.

4.4. While the data is being written, stop the two instances on gawh202 to simulate a gawh202 outage

Loading the sample flight data (179011155 rows) takes roughly 10 minutes here.
Stop the instances as in step 3.6 a.

4.5. After the write finishes, query the distributed table on the master node and compare the total

$:clickhouse-client --host gawh201 --port 9002
select count(1) from qwrenzixing.ontime_all;

The total matches the full dataset, so the write-path consistency scheme works with a single node down.

5. Summary

5.1. Drawbacks

As the steps above show, the high-availability and data-consistency scheme based on ZooKeeper + ReplicatedMergeTree (replicated tables) + Distributed (distributed tables) is quite tedious: every instance needs its own configuration and its own replicated-table DDL. ClickHouse handles clustering and high availability in a very manual way, unlike Hadoop-ecosystem databases such as Hive and HBase, where distribution is largely handled for you. As the cluster keeps growing, this scheme brings a large configuration and maintenance cost.

5.2. Do we need two ClickHouse instances on every node?

Two ClickHouse service instances per node are used here only because machines are scarce. With enough machines, run one instance per node and pair shards and replicas across nodes in the same interleaved way. On balance, though, that is not a great scheme either: while the cluster is healthy, one instance per node somewhat wastes resources, and many of the replica-only nodes do little work.

5.3. Scaling out the cluster

a. Four-node scheme (more nodes follow the same pattern)
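A sketch of the shard/replica layout for four nodes, following the same ring pattern as the three-node cluster (gawh204 is a hypothetical fourth host; ports as in section 2):

shard 1: gawh201:9002 (replica 1), gawh202:9003 (replica 2)
shard 2: gawh202:9002 (replica 1), gawh203:9003 (replica 2)
shard 3: gawh203:9002 (replica 1), gawh204:9003 (replica 2)
shard 4: gawh204:9002 (replica 1), gawh201:9003 (replica 2)

Under this layout a single-node outage always leaves every shard reachable; a two-node outage is safe only when the two failed nodes are not adjacent in the ring, since adjacent nodes hold both replicas of one shard.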

b. With the layout above, a single-node outage clearly keeps the data consistent, just as in the three-node case; now consider whether two nodes going down can still be tolerated.

In the example shown in the figure, where the two failed nodes do not hold both replicas of any shard, consistency is still guaranteed, but gawh204 then carries extra load. This also shows that for very large clusters, guaranteeing high availability and data consistency is conceptually simple but operationally tedious.

6. Hot-scaling scheme

Scaling out is usually needed when writes become the bottleneck. Here hot scaling is validated by running the steps of the single-node-outage write scheme from section 4 in reverse.

6.1. Drop the replicated table on all six ClickHouse instances across the three nodes

drop table ontime

6.2. Drop the distributed table on instance 1 (the config.xml instance) of the master node gawh201

drop table ontime_all

6.3. Stop all six ClickHouse instances on the three nodes

Run on gawh201, gawh202 and gawh203:

$: /etc/init.d/clickhouse-server stop
$: /etc/init.d/clickhouse-server1 stop

6.4. Edit the metrika*.xml files of the four instances on gawh201 and gawh202 and comment out every reference to gawh203; leave the two ClickHouse instances on gawh203 completely unmodified

The common part of the metrika*.xml files for the four gawh201/gawh202 instances becomes:

<cluster>
<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<replica>
<host>gawh201</host>
<port>9002</port>
</replica>
<replica>
<host>gawh202</host>
<port>9003</port>
</replica>
</shard>

<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<replica>
<host>gawh202</host>
<port>9002</port>
</replica>
<!--
<replica>
<host>gawh203</host>
<port>9003</port>
</replica>
-->
</shard>

<shard>
<weight>1</weight>
<internal_replication>true</internal_replication>
<!--
<replica>
<host>gawh203</host>
<port>9002</port>
</replica>
-->
<replica>
<host>gawh201</host>
<port>9003</port>
</replica>
</shard>
<!-- </perftest_3shards_2replicas> -->
</cluster>

The metrika*.xml files of the gawh203 instances keep the previous high-availability configuration.

6.5. Start only the four ClickHouse instances on gawh201 and gawh202

Run on gawh201 and gawh202:

$: /etc/init.d/clickhouse-server start
$: /etc/init.d/clickhouse-server1 start

6.6. Create the replicated tables on the four gawh201/gawh202 instances, and the distributed table on the gawh201:9002 instance

Create them as in 3.2 a and b and in 3.3, then start loading the data as in 3.4.

6.7. While the data is being written, uncomment the gawh203 sections in the metrika*.xml files of the four gawh201/gawh202 instances, restoring the common part described in 6.4 to its original form

6.8. Start the two ClickHouse instances on gawh203, whose configuration was never modified

On gawh203:

$: /etc/init.d/clickhouse-server start
$: /etc/init.d/clickhouse-server1 start

6.9. Create the replicated table on the two ClickHouse instances on gawh203

Create it as in 3.2 a and b.

6.10. After the tables are created on the newly added node, let the load continue for a few minutes, then check whether the new node and its freshly created replicated tables are receiving data

$:clickhouse-client --host gawh203 --port 9002

$:clickhouse-client --host gawh203 --port 9003

The replicated tables on the two newly added instances are indeed receiving data. So far this only shows that hot scaling is half successful; the total in the distributed table still has to be verified. It also shows that ClickHouse hot-reloads the metrika.xml configuration; see the check sketched below.
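One way to confirm both points from the gawh201:9002 instance is to check that the reloaded cluster definition contains gawh203 again and that the new replicas are filling up (a sketch, assuming the cluster and table names used above):

select cluster, shard_num, replica_num, host_name, port from system.clusters where cluster = 'cluster';
select count(1) from qwrenzixing.ontime; -- run this one on each gawh203 instance; the count should be non-zero and growing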

6.11. After the full dataset has been written, verify the total row count in the distributed table

The total matches the actual row count exactly, so hot scaling succeeds.

