Recently, while multiple users were inserting data into HBase, the regionserver would shut down for no obvious reason, and the regionserver log was full of exceptions like the following:
org.apache.hadoop.hbase.DroppedSnapshotException: region: t,12130111020202,1369296305769.f14b9a1d05ae485981f6a8579f1324fb.
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1000)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:905)
at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:857)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:394)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:202)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:222)
2013-05-23 00:48:27,671 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://cloudgis4:9000/hbase/t/c85d7d3bc3a55a93a147f5c4f07f87b8/imageFamily/2223460197050463756.74f68489b6ea43b520c2adca643cbbdb; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:241)
at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:74)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2037)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readLong(DataInputStream.java:399)
at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1526)
at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:885)
at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:819)
at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382)
at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
ABORTING region server serverName=cloudgis1,60020,1369232412016, load=(requests=1662, regions=111, usedHeap=3758, maxHeap=4991): Replay of HLog required. Forcing server shutdown
2013-05-23 00:48:20,081 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
After searching the web for a long time without finding a solution, I went back through the log from the beginning and noticed this line:
2013-05-23 00:48:16,939 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.3.6:50010, add to deadNodes and continue
java.net.SocketException: Too many open files
So the cause was the Linux limit on the number of open files: once the datanode could not open any more files it started throwing exceptions, and the regionserver shut down as a result. The fix is as follows.
HBase is a database and uses a large number of file handles at the same time. The default limit of 1024 on most Linux systems is not enough and leads to the FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs? error. You may also see exceptions like this:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
So you need to raise the maximum file-handle limit; setting it to 10k or more is recommended. You should also raise the nproc limit for the user running HBase; if it is too low, it can cause OutOfMemoryError exceptions. [2] [3]
To be clear, these two settings apply to the operating system, not to HBase itself. A common mistake is that the user actually running HBase is not the user whose limits were raised. When HBase starts, the first line of its log prints the ulimit it sees, so it is worth checking there. [4]
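You can also check what limits the running processes actually have and how many file descriptors they are using. This is a minimal sketch; looking up the HRegionServer and DataNode PIDs with jps is an assumption about your setup:

# limits of the current shell/user
ulimit -n          # max open files
ulimit -u          # max user processes (nproc)

# limits the running daemons actually got (PIDs via jps are an assumption)
RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
DN_PID=$(jps | awk '/DataNode/ {print $1}')
grep -E 'open files|processes' /proc/$RS_PID/limits /proc/$DN_PID/limits

# file descriptors currently open by each daemon
ls /proc/$RS_PID/fd | wc -l
ls /proc/$DN_PID/fd | wc -l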
If you are using Ubuntu, you can configure this as follows.
In the file /etc/security/limits.conf add a line such as:
hadoop - nofile 32768
Replace hadoop with whatever user runs HBase and Hadoop; if you use two separate users, you need an entry for each. Also set the nproc hard and soft limits, for example:
hadoop soft nproc 32000
hadoop hard nproc 32000
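Put together, assuming a single hadoop user runs both HBase and HDFS (adjust the user name and values to your environment), appending the entries could look like this sketch:

cat <<'EOF' | sudo tee -a /etc/security/limits.conf
hadoop  -     nofile  32768
hadoop  soft  nproc   32000
hadoop  hard  nproc   32000
EOF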
In /etc/pam.d/common-session add this line:
session required pam_limits.so
Otherwise the settings in /etc/security/limits.conf will not take effect.
Also note that you must log out and log back in before these settings apply!
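To confirm that the new limits really apply to the hadoop user, you can open a fresh login session and print them. A minimal sketch (the user name hadoop and the values are assumptions from the example above):

su - hadoop -c 'ulimit -n; ulimit -u'
# expected with the entries above:
# 32768
# 32000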
A Hadoop HDFS datanode has an upper bound on the number of files it will serve at the same time. This parameter is called xcievers (yes, the Hadoop authors misspelled the word). Before doing any loading, make sure you have set the xcievers parameter in conf/hdfs-site.xml to at least 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

Remember to restart HDFS after changing this configuration.
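A minimal sketch of applying the change, assuming a Hadoop 1.x layout with the stock scripts under $HADOOP_HOME/bin and that the updated hdfs-site.xml has already been copied to every datanode:

# run on each datanode (or restart the whole HDFS with stop-dfs.sh / start-dfs.sh)
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode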
Without this setting you may run into strange failures: the datanode log will complain that the xcievers limit was exceeded, while clients report missing-blocks errors such as:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
[5]
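To check whether a datanode is actually hitting the limit, you can grep its log for the xcievers message. A sketch only; the log location is an assumption and depends on your installation:

grep -i xciever $HADOOP_HOME/logs/*datanode*.log | tail -n 5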