Common Druid Time-Series Database Problems and How to Fix Them


I recently ran into a number of problems while upgrading Druid from 0.10.0 to 0.12.1. To make future troubleshooting faster, this post records the problems I hit at work and how I resolved them, in the hope that others find them useful. Corrections are welcome if anything here is inaccurate.

1. The Overlord cannot create tasks

When the Overlord fails to create a task, there are two usual causes:

  • The task record already exists in the metadata store
  • The configured maximum direct (off-heap) memory is smaller than what the processing settings require
Error on the real-time (Tranquility) node:
Error: com.metamx.tranquility.druid.IndexServicePermanentException: Service[druid:overlord] call failed with status: 400 Bad Request
Error in the task log:
Not enough direct memory.  Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[1,908,932,608], memoryNeeded[2,684,354,560] = druid.processing.buffer.sizeBytes[536,870,912] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[2] + 1)
Fix:

Either raise -XX:MaxDirectMemorySize in jvm.config, or lower druid.processing.numThreads, druid.processing.numMergeBuffers, or druid.processing.buffer.sizeBytes in the runtime properties, so that the direct memory the processing settings require fits. For example:

-server
-Xms24g
-Xmx24g
-XX:MaxDirectMemorySize=4096m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

Note: this problem most commonly appears on Historical and MiddleManager nodes. When it does, the first thing to check is the MaxDirectMemorySize setting on the Historical and MiddleManager nodes.

On Historical and MiddleManager nodes, MaxDirectMemorySize must satisfy the following formula:

MaxDirectMemorySize >= druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)
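The check behind this formula can be done by hand before restarting anything. A minimal sketch, plugging in the example values from the error message above (512 MiB buffers, 2 merge buffers, 2 threads) against the 4096m MaxDirectMemorySize from the jvm config:

```shell
# Values from the error message above; substitute your own configuration.
sizeBytes=536870912                        # druid.processing.buffer.sizeBytes
numMergeBuffers=2                          # druid.processing.numMergeBuffers
numThreads=2                               # druid.processing.numThreads
maxDirectMemory=$((4096 * 1024 * 1024))    # -XX:MaxDirectMemorySize=4096m

# memoryNeeded = sizeBytes * (numMergeBuffers + numThreads + 1)
needed=$((sizeBytes * (numMergeBuffers + numThreads + 1)))
echo "memoryNeeded = $needed bytes"

if [ "$maxDirectMemory" -ge "$needed" ]; then
  echo "OK: MaxDirectMemorySize is large enough"
else
  echo "NOT OK: raise MaxDirectMemorySize or shrink the processing settings"
fi
```

With these numbers the script prints memoryNeeded = 2684354560 bytes and OK, which is exactly why the 4096m setting above resolves the maxDirectMemory[1,908,932,608] failure in the log.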

2. GroupBy queries fail when the number of aggregated groups is large

This error has two main causes:

  • druid.processing.buffer.sizeBytes is configured too small
  • disk spilling is disabled, because druid.query.groupBy.maxOnDiskStorage defaults to 0
Error message:
io.druid.query.QueryInterruptedException: Not enough dictionary space to execute this query. Try increasing druid.query.groupBy.maxMergingDictionarySize or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number.
Fix (either option works):
  • Increase druid.processing.buffer.sizeBytes on the Broker, Historical, and real-time nodes:
druid.processing.buffer.sizeBytes=536870912
  • Enable disk spilling on the Broker, Historical, and real-time nodes by setting druid.query.groupBy.maxOnDiskStorage to a positive number of bytes; the value also caps how much disk a single query may use, here 6 GiB:
druid.query.groupBy.maxOnDiskStorage=6442450944

Note: either one of the two fixes is enough on its own. After changing the configuration, restart the affected nodes.
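Applying the second option can be scripted. A sketch, assuming the Broker's config lives at conf/druid/broker/runtime.properties (adjust the path to your layout, and repeat for the Historical and real-time nodes):

```shell
CONF=conf/druid/broker/runtime.properties   # assumed path -- adjust to your layout
mkdir -p "$(dirname "$CONF")"               # no-op on a real installation

# Any positive value enables disk spilling; the number is also the
# per-query disk cap, here 6 GiB.
echo 'druid.query.groupBy.maxOnDiskStorage=6442450944' >> "$CONF"

# Confirm the property landed, then restart the node.
grep 'maxOnDiskStorage' "$CONF"
```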

Port already in use when Druid creates a task

In this failure mode, Druid still inserts the task record into MySQL, but it cannot create the segment under var/druid/task, so the data never makes it into Druid. This started appearing occasionally from version 0.12.0 on.

Error message:
2018-07-25T05:00:14,262 WARN [main] com.sun.jersey.spi.inject.Errors - The following warnings have been detected with resource and/or provider classes:
  WARNING: A HTTP GET method, public void io.druid.server.http.SegmentListerResource.getSegments(long,long,long,javax.servlet.http.HttpServletRequest) throws java.io.IOException, MUST return a non-void type.
2018-07-25T05:00:14,273 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@915d7c4{/,null,AVAILABLE}
2018-07-25T05:00:14,277 ERROR [main] io.druid.cli.CliPeon - Error when starting up.  Failing.
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_65]
	at sun.nio.ch.Net.bind(Net.java:433) ~[?:1.8.0_65]
	at sun.nio.ch.Net.bind(Net.java:425) ~[?:1.8.0_65]
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:1.8.0_65]
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) ~[?:1.8.0_65]
	at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
	at org.eclipse.jetty.server.Server.doStart(Server.java:401) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
	at io.druid.server.initialization.jetty.JettyServerModule$1.start(JettyServerModule.java:315) ~[druid-server-0.12.1.jar:0.12.1]
	at io.druid.java.util.common.lifecycle.Lifecycle.start(Lifecycle.java:311) ~[java-util-0.12.1.jar:0.12.1]
	at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:134) ~[druid-api-0.12.1.jar:0.12.1]
	at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:101) [druid-services-0.12.1.jar:0.12.1]
	at io.druid.cli.CliPeon.run(CliPeon.java:301) [druid-services-0.12.1.jar:0.12.1]
	at io.druid.cli.Main.main(Main.java:116) [druid-services-0.12.1.jar:0.12.1]
Fix:

Add the following line to the MiddleManager's runtime.properties:

druid.indexer.runner.startPort=40000

Note: first check which ephemeral port range the Linux server uses, then choose the task peons' start port with that range in mind:

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768	60999
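The reason the range matters: peon ports that fall inside the kernel's ephemeral range can occasionally be grabbed by outgoing connections, which is exactly the Address already in use failure above. A sketch that checks the chosen start port against the range (startPort=40000 is the value from the fix):

```shell
startPort=40000   # druid.indexer.runner.startPort from the fix above

# Read the kernel's ephemeral range; fall back to a common default if
# /proc is unavailable (e.g. on non-Linux systems).
{ read low high < /proc/sys/net/ipv4/ip_local_port_range; } 2>/dev/null ||
  { low=32768; high=60999; }

if [ "$startPort" -ge "$low" ] && [ "$startPort" -le "$high" ]; then
  echo "startPort $startPort overlaps ephemeral range $low-$high"
else
  echo "startPort $startPort is outside ephemeral range $low-$high"
fi
```

If there is overlap, you can either move startPort outside the range or reserve the peon ports from the kernel with the net.ipv4.ip_local_reserved_ports sysctl so that ephemeral allocation skips them.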

3. Segment compaction fails because the temporary directory does not exist

This error occurs when the temporary directory configured for the MiddleManager node does not exist.

The temporary directory is set in the MiddleManager's jvm.config:

-Djava.io.tmpdir=var/tmp
Error message:
java.lang.IllegalStateException: Failed to create directory within 10000 attempts (tried 1472453270713-0 to 1472453270713-9999)
Fix:

Create the temporary directory by hand. Note that var/tmp above is a relative path, resolved against the directory Druid is started from:

[work@localhost ~]$ mkdir -p var/tmp

