How to fix Druid writing data that is too large for ZooKeeper

The error looks like this:

org.apache.zookeeper.ClientCnxn - Session 0x102c87b7f880003 for server cweb244/10.17.2.241:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len6429452 is out of range!

This means the length of the incoming packet exceeds the maximum allowed by jute.maxbuffer.

The relevant ZooKeeper client source, from ClientCnxnSocket:

// Defaults to ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT (4096 * 1024 = 4 MB)
private int packetLen = ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT;

// Reads jute.maxbuffer from the client configuration once at startup;
// falls back to the 4 MB default if the property is not set.
protected void initProperties() throws IOException {
    try {
        packetLen = clientConfig.getInt(
            ZKConfig.JUTE_MAXBUFFER,
            ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT);
        LOG.info("{} value is {} Bytes", ZKConfig.JUTE_MAXBUFFER, packetLen);
    } catch (NumberFormatException e) {
        String msg = MessageFormat.format(
            "Configured value {0} for property {1} can not be parsed to int",
            clientConfig.getProperty(ZKConfig.JUTE_MAXBUFFER),
            ZKConfig.JUTE_MAXBUFFER);
        LOG.error(msg);
        throw new IOException(msg);
    }
}

// Called for every incoming response: the first four bytes carry the
// packet length, and any length at or above packetLen is rejected with
// the "Packet len ... is out of range!" IOException seen in the log.
void readLength() throws IOException {
    int len = incomingBuffer.getInt();
    if (len < 0 || len >= packetLen) {
        throw new IOException("Packet len " + len + " is out of range!");
    }
    incomingBuffer = ByteBuffer.allocate(len);
}

ZooKeeper's client-side default maximum, ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT, is 4096 * 1024 bytes (4 MB). The packet in the log above is 6,429,452 bytes, roughly 6.1 MB, so the read is rejected: some of the data Druid keeps in ZooKeeper grows past the default limit, and the error is thrown.
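One way to see how close a znode is to that limit is to estimate the size of a getChildren() reply, since a child list that has grown very long is a common way for a single response to exceed packetLen. The following is a rough diagnostic sketch, not part of Druid or ZooKeeper: the connection string and the /druid/segments path are illustrative assumptions (the real path depends on druid.zk.paths.base), and the per-entry overhead only approximates jute's length-prefixed string encoding.

import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeResponseSizeCheck {
    public static void main(String[] args) throws Exception {
        // Connection string is illustrative; use your own ensemble.
        ZooKeeper zk = new ZooKeeper("cweb244:2181", 30000, event -> {});
        // Illustrative path; the actual one depends on druid.zk.paths.base.
        String path = "/druid/segments";
        List<String> children = zk.getChildren(path, false);
        long bytes = 0;
        for (String child : children) {
            // jute serializes each string as a 4-byte length plus its bytes.
            bytes += 4 + child.getBytes(StandardCharsets.UTF_8).length;
        }
        System.out.printf("%s: %d children, ~%d bytes in one getChildren() reply%n",
                path, children.size(), bytes);
        zk.close();
    }
}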

Solution

Go into ZooKeeper's conf directory and create a java.env file (zkEnv.sh sources this file at startup if it exists), setting -Djute.maxbuffer to 10 MB (10485760 = 10 * 1024 * 1024 bytes):

#!/bin/sh
export JAVA_HOME=/...../
# heap size MUST be modified according to cluster environment
export JVMFLAGS="-Xms2048m -Xmx4096m $JVMFLAGS -Djute.maxbuffer=10485760"

Apply the same change on every ZooKeeper node, then restart them.
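Note that the readLength() check quoted above runs in the ZooKeeper client, so if the oversized reply is being read by Druid, the Druid JVMs (the clients) may need the larger limit too, for example by adding -Djute.maxbuffer=10485760 to their JVM flags. A minimal sketch of the client-side property, assuming you control the client JVM (the class name is made up for illustration):

import org.apache.zookeeper.ZooKeeper;

public class ZkClientWithLargerBuffer {
    public static void main(String[] args) throws Exception {
        // Must be set before the first ZooKeeper instance is created,
        // because initProperties() reads jute.maxbuffer once at startup.
        System.setProperty("jute.maxbuffer", String.valueOf(10 * 1024 * 1024));

        ZooKeeper zk = new ZooKeeper("cweb244:2181", 30000, event -> {});
        // ... use the client as usual ...
        zk.close();
    }
}

With the property in place, the initProperties() logging shown earlier prints "jute.maxbuffer value is 10485760 Bytes" when the client starts, which confirms the new limit took effect.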

It is strongly recommended to deploy a dedicated ZooKeeper cluster just for Druid.

