Neo4j Enterprise Edition Cluster Setup


1. HA High-Availability Cluster Setup

The version used is neo4j-enterprise-3.5.3-unix.tar.gz.

1.1 Cluster IP Plan

192.168.56.10 neo4j-node1
192.168.56.11 neo4j-node2
192.168.56.12 neo4j-node3
192.168.56.13 neo4j-node4
192.168.56.14 neo4j-node5

1.2 Passwordless SSH Login

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate an RSA key pair
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # append the public key to the authorized_keys file
$ chmod 0600 ~/.ssh/authorized_keys

The steps above set up passwordless login; here only neo4j-node1 is given passwordless access to the other nodes.

Passwordless login is optional. I set it up purely for convenience, so I do not have to keep typing passwords; in a production environment it is safer to leave it disabled.
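To give neo4j-node1 access to the remaining nodes, the public key also has to be copied to each of them. A minimal sketch that prints the ssh-copy-id commands to run (the node names are assumed to match the plan above):

```shell
#!/bin/sh
# Print one ssh-copy-id invocation per target node; run the printed
# commands from neo4j-node1 (each prompts once for that node's password).
keycopy_cmds() {
  for node in "$@"; do
    printf 'ssh-copy-id -i ~/.ssh/id_rsa.pub %s\n' "$node"
  done
}
keycopy_cmds neo4j-node2 neo4j-node3 neo4j-node4 neo4j-node5
```

Piping the output to `sh` runs the copies in sequence.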

1.3 Hostname Configuration

/etc/hosts:

192.168.56.10 neo4j-node1
192.168.56.11 neo4j-node2
192.168.56.12 neo4j-node3
192.168.56.13 neo4j-node4
192.168.56.14 neo4j-node5
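The same entries are needed on every node. A small helper (a sketch, assuming the 192.168.56.10-14 plan above) prints them so its output can be appended to /etc/hosts on each machine, e.g. with `sudo tee -a /etc/hosts`:

```shell
#!/bin/sh
# Print the /etc/hosts lines for the five planned nodes:
# node N gets the address 192.168.56.(9+N).
gen_hosts() {
  i=1
  while [ "$i" -le 5 ]; do
    printf '192.168.56.%d neo4j-node%d\n' "$((9 + i))" "$i"
    i=$((i + 1))
  done
}
gen_hosts
```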

/etc/sysconfig/network:

HOSTNAME=neo4j-node1

Restart the network service:

$ systemctl restart network

Once this is done, log out and back in for the new hostname to take effect. (On systemd-based systems the hostname can also be set with `hostnamectl set-hostname neo4j-node1`.)

1.4 Neo4j Configuration Files

Official HA cluster setup tutorial (including the detailed configuration parameter list):

https://neo4j.com/docs/operations-manual/current/ha-cluster/tutorial/setup-cluster/

Edit the configuration file conf/neo4j.conf on each of the three machines:

neo4j-node1

# Unique server id for this Neo4j instance
# must be unique and must not be negative
ha.server_id=1

# List of other known instances in this cluster
ha.initial_hosts=neo4j-node1:5001,neo4j-node2:5001,neo4j-node3:5001
# Alternatively, use IP addresses:
#ha.initial_hosts=192.168.0.20:5001,192.168.0.21:5001,192.168.0.22:5001

# HA - High Availability
# SINGLE - Single mode, default.
dbms.mode=HA

# HTTP Connector
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474

neo4j-node2

# Unique server id for this Neo4j instance
# must be unique and must not be negative
ha.server_id=2

# List of other known instances in this cluster
ha.initial_hosts=neo4j-node1:5001,neo4j-node2:5001,neo4j-node3:5001
# Alternatively, use IP addresses:
#ha.initial_hosts=192.168.0.20:5001,192.168.0.21:5001,192.168.0.22:5001

# HA - High Availability
# SINGLE - Single mode, default.
dbms.mode=HA

# HTTP Connector
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474

neo4j-node3

# Unique server id for this Neo4j instance
# must be unique and must not be negative
ha.server_id=3

# List of other known instances in this cluster
ha.initial_hosts=neo4j-node1:5001,neo4j-node2:5001,neo4j-node3:5001
# Alternatively, use IP addresses:
#ha.initial_hosts=192.168.0.20:5001,192.168.0.21:5001,192.168.0.22:5001

# HA - High Availability
# SINGLE - Single mode, default.
dbms.mode=HA

# HTTP Connector
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474

With only the official configuration above, startup fails. The error looks like this:

logs/neo4j.log

2019-03-07 03:43:22.176+0000 INFO  ======== Neo4j 3.5.3 ========
2019-03-07 03:43:22.194+0000 INFO  Starting...
2019-03-07 03:43:24.254+0000 INFO  Write transactions to database disabled
2019-03-07 03:43:24.662+0000 INFO  Initiating metrics...
2019-03-07 03:43:26.000+0000 INFO  Attempting to join cluster of [neo4j-node1:5001, neo4j-node2:5001, neo4j-node3:5001]
2019-03-07 03:44:56.174+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@1e141e42' was successfully initialized, but failed to start. Please see the attached cause exception "Conversation-response mapping:
{1/13#=ResponseFuture{conversationId='1/13#', initiatedByMessageType=join, response=null}}". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@1e141e42' was successfully initialized, but failed to start. Please see the attached cause exception "Conversation-response mapping:
{1/13#=ResponseFuture{conversationId='1/13#', initiatedByMessageType=join, response=null}}".
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@1e141e42' was successfully initialized, but failed to start. Please see the attached cause exception "Conversation-response mapping:
{1/13#=ResponseFuture{conversationId='1/13#', initiatedByMessageType=join, response=null}}".
	at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:45)
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:184)
	at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:123)
	at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:90)
	at com.neo4j.server.enterprise.CommercialEntryPoint.main(CommercialEntryPoint.java:22)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase@1e141e42' was successfully initialized, but failed to start. Please see the attached cause exception "Conversation-response mapping:
{1/13#=ResponseFuture{conversationId='1/13#', initiatedByMessageType=join, response=null}}".
	at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:473)
	at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:177)
	... 3 more

The error above means the three machines cannot reach each other: they cannot discover one another and so cannot join the cluster. The following configuration needs to be added:

# add to neo4j.conf on neo4j-node1
dbms.connectors.default_listen_address=192.168.56.10
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687
# add to neo4j.conf on neo4j-node2
dbms.connectors.default_listen_address=192.168.56.11
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687
# add to neo4j.conf on neo4j-node3
dbms.connectors.default_listen_address=192.168.56.12
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687
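Combining the official settings with the additions above, the only per-node differences are ha.server_id and the listen address. A sketch that emits node N's full set of additions (assuming node N's address is 192.168.56.(9+N), as in the plan above):

```shell
#!/bin/sh
# Emit the HA settings for node N (N = 1, 2 or 3);
# append the output to that node's conf/neo4j.conf.
ha_conf() {
  n=$1
  cat <<EOF
ha.server_id=${n}
ha.initial_hosts=neo4j-node1:5001,neo4j-node2:5001,neo4j-node3:5001
dbms.mode=HA
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474
dbms.connectors.default_listen_address=192.168.56.$((9 + n))
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687
EOF
}
ha_conf 1
```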

Once the configuration above is complete, start the machines one after another; the startup order does not matter:

neo4j-node1$ ./bin/neo4j start
neo4j-node2$ ./bin/neo4j start
neo4j-node3$ ./bin/neo4j start

The startup log on neo4j-node1 (logs/neo4j.log):
2019-03-07 06:01:26.887+0000 INFO  ======== Neo4j 3.5.3 ========
2019-03-07 06:01:26.892+0000 INFO  Starting...
2019-03-07 06:01:28.897+0000 INFO  Write transactions to database disabled
2019-03-07 06:01:29.255+0000 INFO  Initiating metrics...
2019-03-07 06:01:30.911+0000 INFO  Attempting to join cluster of [neo4j-node1:5001, neo4j-node2:5001, neo4j-node3:5001]
2019-03-07 06:01:42.993+0000 INFO  Could not join cluster of [neo4j-node1:5001, neo4j-node2:5001, neo4j-node3:5001]
2019-03-07 06:01:42.993+0000 INFO  Creating new cluster with name [neo4j.ha]...
2019-03-07 06:01:42.998+0000 INFO  Instance 1 (this server)  entered the cluster
2019-03-07 06:01:43.012+0000 INFO  Instance 1 (this server)  was elected as coordinator
2019-03-07 06:01:43.094+0000 INFO  I am 1, moving to master
2019-03-07 06:01:43.170+0000 INFO  Instance 1 (this server)  was elected as coordinator
2019-03-07 06:01:43.256+0000 INFO  I am 1, successfully moved to master
2019-03-07 06:01:43.264+0000 INFO  Instance 1 (this server)  is available as master at ha://192.168.56.10:6001?serverId=1 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:43.288+0000 INFO  Sending metrics to CSV file at /root/neo4j-e-3.5.3/metrics
2019-03-07 06:01:43.300+0000 INFO  Database available for write transactions
2019-03-07 06:01:43.357+0000 INFO  Instance 1 (this server)  is available as backup at backup://127.0.0.1:6362 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:43.787+0000 INFO  Instance 2  joined the cluster
2019-03-07 06:01:43.893+0000 INFO  Instance 3  joined the cluster
2019-03-07 06:01:43.905+0000 INFO  Instance 1 (this server)  was elected as coordinator
2019-03-07 06:01:43.955+0000 INFO  Instance 1 (this server)  is available as master at ha://192.168.56.10:6001?serverId=1 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:43.973+0000 INFO  Instance 1 (this server)  is available as backup at backup://127.0.0.1:6362 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:44.082+0000 INFO  Bolt enabled on 192.168.56.10:7687.
2019-03-07 06:01:44.703+0000 INFO  Instance 2  is available as slave at ha://192.168.56.11:6001?serverId=2 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:46.444+0000 WARN  Server thread metrics not available (missing neo4j.server.threads.jetty.all)
2019-03-07 06:01:46.455+0000 WARN  Server thread metrics not available (missing neo4j.server.threads.jetty.idle)
2019-03-07 06:01:46.979+0000 INFO  Started.
2019-03-07 06:01:47.403+0000 INFO  Mounted REST API at: /db/manage
2019-03-07 06:01:47.514+0000 INFO  Server thread metrics have been registered successfully
2019-03-07 06:01:49.508+0000 INFO  Remote interface available at http://localhost:7474/
2019-03-07 06:01:49.895+0000 INFO  Instance 1 (this server)  was elected as coordinator
2019-03-07 06:01:49.923+0000 INFO  Instance 1 (this server)  is available as master at ha://192.168.56.10:6001?serverId=1 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:49.971+0000 INFO  Instance 1 (this server)  is available as backup at backup://127.0.0.1:6362 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:49.992+0000 INFO  Instance 2  is available as slave at ha://192.168.56.11:6001?serverId=2 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:50.222+0000 INFO  Instance 3  is available as slave at ha://192.168.56.12:6001?serverId=3 with StoreId{creationTime=1551928418454, randomId=5747408418777003467, storeVersion=16094931155187206, upgradeTime=1551928418454, upgradeId=1}
2019-03-07 06:01:50.872+0000 WARN  The client is unauthorized due to authentication failure.

As the log shows, each machine has joined the cluster. Now access the cluster in a browser.

As shown in the screenshot above, clicking through to the cluster details shows that every machine configured with an ha.server_id has joined the cluster, and which machine is the master node.

This confirms the configuration: any machine in the cluster can serve requests. To verify, create a node or relationship through one instance's endpoint and check on the others that it exists and the data has synchronized.
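The verification can also be scripted: the HA stack in Neo4j 3.x exposes status endpoints over HTTP, e.g. /db/manage/server/ha/available, which answers "master" or "slave". A sketch that asks each node for its role, printing "unreachable" when a node cannot be contacted (hostnames and the 7474 port as configured above; if authentication is enabled, curl additionally needs `-u neo4j:<password>`):

```shell
#!/bin/sh
# Query each instance's HA role; fall back to "unreachable" on error.
check_roles() {
  for host in "$@"; do
    role=$(curl -s --max-time 3 "http://${host}:7474/db/manage/server/ha/available" \
           || echo unreachable)
    printf '%s %s\n' "$host" "${role:-unreachable}"
  done
}
check_roles neo4j-node1 neo4j-node2 neo4j-node3
```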

2. Causal Clustering Setup

Official documentation: https://neo4j.com/docs/operations-manual/current/

2.1 Causal Cluster Architecture and Node Plan

The figure above shows the causal cluster architecture. It consists of two server roles, Core Servers and Read Replica Servers, described below:

Core Servers: this is a logical role made up of several physical machines. The Core group stores the data and is responsible for data consistency and transaction processing. It is itself highly available: members act as either Leader or Follower, and these roles can switch between machines. The Leader manages the cluster, storing metadata, handling cluster heartbeats, and distributing data. Core servers accept both writes and reads, and the Core group maintains the cluster topology.

Read Replica Servers: these nodes only serve queries. They can answer queries against any graph data held by the Core group, working from copies that the Core servers ship to them; losing data on a Read Replica therefore does not affect the cluster as a whole. Their main purpose is to scale out read workloads such as queries.

Causal consistency: Neo4j uses causal consistency to guarantee data consistency, relying on the Raft protocol to replicate all transactions safely and consistently. This differs from the Paxos protocol used by some other distributed systems.

An explanation of the Raft protocol (in Chinese): https://www.jianshu.com/p/8e4bbe7e276c

The official docs provide a diagram of how an application reads from and writes to the graph, with both the Core servers and the Read Replicas shown as groups.

The diagram shows how the data flows: Core servers can be both written to and read from, Read Replicas can only be read, and within the Core group only the Leader accepts writes; writing anywhere else fails with:

Neo.ClientError.Cluster.NotALeader: No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER

The node plan for this setup is:

neo4j-node2 Core
neo4j-node3 Core
neo4j-node4 Core
neo4j-node5 additional Core or additional Read Replica

2.2 Causal Cluster Configuration

Configuration for the causal cluster's Core servers:

neo4j-node2

dbms.connectors.default_listen_address=192.168.56.11
dbms.connectors.default_advertised_address=192.168.56.11
dbms.mode=CORE
causal_clustering.minimum_core_cluster_size_at_formation=3
causal_clustering.minimum_core_cluster_size_at_runtime=3
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000

neo4j-node3

dbms.connectors.default_listen_address=192.168.56.12
dbms.connectors.default_advertised_address=192.168.56.12
dbms.mode=CORE
causal_clustering.minimum_core_cluster_size_at_formation=3
causal_clustering.minimum_core_cluster_size_at_runtime=3
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000

neo4j-node4

dbms.connectors.default_listen_address=192.168.56.13
dbms.connectors.default_advertised_address=192.168.56.13
dbms.mode=CORE
causal_clustering.minimum_core_cluster_size_at_formation=3
causal_clustering.minimum_core_cluster_size_at_runtime=3
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000
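As in the HA section, the Core members differ only in their own address. A sketch that emits the settings for a given address, using the three fixed discovery members from the plan above and the Neo4j 3.5 setting name causal_clustering.initial_discovery_members:

```shell
#!/bin/sh
# Emit the causal-cluster Core settings for the node at the given address;
# append the output to that node's conf/neo4j.conf.
core_conf() {
  addr=$1
  cat <<EOF
dbms.connectors.default_listen_address=${addr}
dbms.connectors.default_advertised_address=${addr}
dbms.mode=CORE
causal_clustering.minimum_core_cluster_size_at_formation=3
causal_clustering.minimum_core_cluster_size_at_runtime=3
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000
EOF
}
core_conf 192.168.56.11
```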

After the configuration above is in place, start the Core nodes one after another:

neo4j-node2$ ./bin/neo4j start
neo4j-node3$ ./bin/neo4j start
neo4j-node4$ ./bin/neo4j start

After startup, log in to each node's web UI and query the cluster details to check whether the cluster is configured correctly.
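The cluster topology can also be checked from the command line by calling the dbms.cluster.overview() procedure through the HTTP transactional endpoint (a sketch; the neo4j/123456 credentials are an assumption, substitute your own):

```shell
#!/bin/sh
# POST a Cypher statement that lists the cluster members; prints the raw
# JSON response, or "unreachable" if the node cannot be contacted.
cluster_overview() {
  host=$1
  curl -s --max-time 3 -u neo4j:123456 \
       -H 'Content-Type: application/json' \
       -d '{"statements":[{"statement":"CALL dbms.cluster.overview()"}]}' \
       "http://${host}:7474/db/data/transaction/commit" \
    || echo unreachable
}
cluster_overview 192.168.56.11
```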

2.3 Adding a New Core Node to the Cluster

Here a new node, neo4j-node5, is added to the cluster.

conf/neo4j.conf

dbms.connectors.default_listen_address=192.168.56.14
dbms.connectors.default_advertised_address=192.168.56.14
dbms.mode=CORE
causal_clustering.minimum_core_cluster_size_at_formation=3
causal_clustering.minimum_core_cluster_size_at_runtime=3
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000

The configuration for a new Core node is essentially the same as above. Note, however, causal_clustering.initial_discovery_members: it is left unchanged and does not include the new machine, because this node is being added temporarily rather than as a permanent member, so it can simply be removed again when no longer needed. If the cluster membership is planned up front, list all members there from the start.

Once neo4j-node5 is configured, start it and log in to it; then check whether it has joined the cluster, either the same way as above or by looking in its log for the cluster join and the cluster id.

Start the node:

neo4j-node5$ ./bin/neo4j start

2.4 Adding a New Read Replica Node to the Cluster

Next, a node with the Read Replica role is added. It is optional: the cluster works fine without it, but having it adds nodes that can handle queries and analytical procedures, scaling out the read workload. All of its data comes from the Core nodes.

neo4j-node5 is used again for this experiment.

conf/neo4j.conf:

dbms.connectors.default_listen_address=192.168.56.14
dbms.mode=READ_REPLICA
causal_clustering.initial_discovery_members=192.168.56.11:5000,192.168.56.12:5000,192.168.56.13:5000

This node does not take part in the cluster's Leader election, so its configuration is relatively simple.

After making the changes, start it:

neo4j-node5$ ./bin/neo4j start

After startup, if this is a brand-new node, first open its web UI and set a password; then query any node in the cluster to check whether the new node has been added.

As the screenshot above shows, it has been added to the cluster successfully.

3. Accessing the Neo4j Causal Cluster Through a Driver

pom.xml

<dependency>
    <groupId>org.neo4j.driver</groupId>
    <artifactId>neo4j-java-driver</artifactId>
    <version>1.7.2</version>
</dependency>

Example of creating a routing driver:

import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;
import org.neo4j.driver.v1.*;

public Driver createDriver() throws URISyntaxException {
    List<URI> routingUris = new ArrayList<>();
    routingUris.add(new URI("bolt+routing://192.168.8.106"));
    routingUris.add(new URI("bolt+routing://192.168.8.107"));
    routingUris.add(new URI("bolt+routing://192.168.8.108"));
    return GraphDatabase.routingDriver(routingUris,
            AuthTokens.basic("neo4j", "123456"), Config.defaultConfig());
}

Driver access: configure the driver with the cluster's access addresses, and it decides which machine each request should go to. Load balancing is handled by the cluster itself; the application does not need to track machine availability, because the driver discovers which cluster members are available and routes each operation to one that can serve it.

Web access: the above accesses the cluster from code. When accessing it through the web UI instead, writes are only possible on the Leader node, while queries work on every node; this is the difference from driver access, which routes writes automatically.

4. Multi-Cluster Setup

Note: this part is omitted for now. I have not used it yet, most companies will not need it, and it is a paid feature.

