The datasource was initially configured as:

```properties
jdbc.initialSize=1
jdbc.minIdle=1
jdbc.maxActive=5
```
After the program had been running for a while, queries started failing with the following exception:
```
exception=org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 60000, active 5, maxActive 5
### The error may exist in ...
```
Suspecting that the maximum pool size was too small, the configuration was changed to:

```properties
jdbc.initialSize=1
jdbc.minIdle=1
jdbc.maxActive=20
```
After running for a while, the same exception appeared again. The following parameters were then added to the datasource configuration (following http://www.cnblogs.com/netcorner/p/4380949.html):
```xml
<!-- reclaim connections held longer than the time limit -->
<property name="removeAbandoned" value="true" />
<!-- timeout in seconds; 180 s = 3 min -->
<property name="removeAbandonedTimeout" value="180" />
<!-- log an error when an abandoned connection is closed -->
<property name="logAbandoned" value="true" />
```
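If the pool is built in code rather than XML, the same three settings (plus the pool sizes above) can be applied through `DruidDataSource`'s setters. A configuration sketch, assuming the druid dependency is on the classpath; the class name `PoolConfig` is illustrative:

```java
import com.alibaba.druid.pool.DruidDataSource;

public class PoolConfig {
    public static DruidDataSource build() {
        DruidDataSource ds = new DruidDataSource();
        ds.setInitialSize(1);
        ds.setMinIdle(1);
        ds.setMaxActive(20);
        // reclaim connections held longer than the timeout
        ds.setRemoveAbandoned(true);
        ds.setRemoveAbandonedTimeout(180); // seconds
        ds.setLogAbandoned(true);          // log the open stack trace on reclaim
        return ds;
    }
}
```

Note that `removeAbandoned` is a diagnostic/safety net, not a fix: with `logAbandoned` enabled it reveals where a leaked connection was originally acquired, which is exactly what the log below shows.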
With abandoned-connection tracking enabled, the pool then logged the following:
```
2016-12-27 14:35:22.773 [Druid-ConnectionPool-Destroy-1821010113] ERROR com.alibaba.druid.pool.DruidDataSource - abandon connection, owner thread: quartzScheduler_Worker-1, connected time nano: 506518587214834, open stackTrace
at java.lang.Thread.getStackTrace(Thread.java:1552)
at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:1014)
at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4544)
at com.alibaba.druid.filter.stat.StatFilter.dataSource_getConnection(StatFilter.java:662)
at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4540)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:938)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:930)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:102)
at com.xxx.doCopyIn(AppQustionSync.java:168)
at com.xxx.executeInternal(AppQustionSync.java:99)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
```
The open stack trace shows where the leaked connection was acquired (`com.xxx.doCopyIn`). Inspecting that code:
```java
try {
    InputStream input = new FileInputStream(dataFile);
    conn = copyInDataSource.getConnection();
    // unwrap the underlying PostgreSQL connection from the Druid proxy
    baseConn = (BaseConnection) conn.getMetaData().getConnection();
    baseConn.setAutoCommit(false);
    stmt = baseConn.createStatement();
    LOG.info("delete data: " + delSql);
    stmt.executeUpdate(delSql);
    CopyManager copyManager = new CopyManager(baseConn);
    LOG.info("copy in: " + copyInSql);
    copyManager.copyIn(copyInSql, input);
    baseConn.commit();
    jobDataMap.remove(data_file_path_key);
} catch (SQLException e) {
    try {
        LOG.warn(JobDataMapHelper.jobName(jobDataMap) + ":" + "batch update failed, rolling back...");
        baseConn.rollback();
    } catch (SQLException ex) {
        String errorMessage = String.format(
                "JobName:[%s] failed: ", JobDataMapHelper.jobName(jobDataMap));
        throw new SQLException(errorMessage, ex);
    }
} finally {
    stmt.close();
    baseConn.close(); // BUG: closes the physical connection behind the pool's back
    jobDataMap.remove(data_file_path_key);
    FileUtils.deleteQuietly(dataFile);
}
```
Here `baseConn` is the underlying PgSQL connection unwrapped from the connection that the Druid datasource handed out. When the `finally` block calls `baseConn.close()`, it closes that physical connection directly, but the Druid pool is never notified: the pool still counts the connection as checked out, even though it is no longer usable. Each execution of this code therefore permanently consumes one pool slot; after enough runs, all of the pool's connections are exhausted, and `getConnection()` fails with the `Could not get JDBC Connection` error above.
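The exhaustion mechanism can be illustrated with a minimal, self-contained simulation. The `MiniPool` class below is hypothetical (it is not Druid's implementation): the pool only decrements its active count when the proxy's `close()` is called, so closing the raw object directly leaves the slot permanently occupied.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical minimal pool showing why closing the physical connection
// (instead of the pool's proxy) exhausts the pool.
class MiniPool {
    static class Physical {
        boolean open = true;
        void close() { open = false; }          // the pool is NOT notified
    }
    static class Proxy implements AutoCloseable {
        final Physical raw;
        final MiniPool pool;
        Proxy(Physical raw, MiniPool pool) { this.raw = raw; this.pool = pool; }
        @Override public void close() { pool.release(this); } // returns slot to pool
    }

    final int maxActive;
    final AtomicInteger active = new AtomicInteger();

    MiniPool(int maxActive) { this.maxActive = maxActive; }

    Proxy getConnection() {
        if (active.incrementAndGet() > maxActive) {
            active.decrementAndGet();
            throw new IllegalStateException("active " + maxActive + ", maxActive " + maxActive);
        }
        return new Proxy(new Physical(), this);
    }

    void release(Proxy p) { active.decrementAndGet(); }

    public static void main(String[] args) {
        MiniPool pool = new MiniPool(5);
        // Closing only the physical connection: the active count never drops.
        for (int i = 0; i < 5; i++) pool.getConnection().raw.close();
        try {
            pool.getConnection();
        } catch (IllegalStateException e) {
            System.out.println("pool exhausted: " + e.getMessage());
        }
    }
}
```

Running `main` exhausts a pool of 5 after five leaked acquisitions, mirroring the `active 5, maxActive 5` message in the original exception; closing the proxy instead keeps the count at zero.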
The `finally` block was changed to:
```java
finally {
    stmt.close();
    conn.close(); // close the Druid proxy: returns the connection to the pool
    jobDataMap.remove(data_file_path_key);
    FileUtils.deleteQuietly(dataFile);
}
```
That is, `close()` is now called only on the Druid connection, which releases it back to the pool rather than closing the underlying physical connection. This fixed the problem.
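A more robust form of the fix is try-with-resources, which closes the pool-facing handle even when the copy throws, making the manual `finally` bookkeeping unnecessary. A stdlib-only sketch of the pattern, using a stub `Resource` class in place of the real `Connection`/`Statement`:

```java
import java.util.ArrayList;
import java.util.List;

class TryWithResourcesDemo {
    // Stub standing in for the Druid proxy Connection / Statement.
    static class Resource implements AutoCloseable {
        final String name;
        final List<String> closed;
        Resource(String name, List<String> closed) { this.name = name; this.closed = closed; }
        @Override public void close() { closed.add(name); }
    }

    // Resources are closed in reverse declaration order, even if the body throws.
    static List<String> run(boolean fail) {
        List<String> closed = new ArrayList<>();
        try (Resource conn = new Resource("conn", closed);
             Resource stmt = new Resource("stmt", closed)) {
            if (fail) throw new RuntimeException("copy failed");
        } catch (RuntimeException e) {
            // rollback would go here
        }
        return closed;
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // prints [stmt, conn]
        System.out.println(run(true));  // prints [stmt, conn]
    }
}
```

In the real job this would mean declaring `conn` (the Druid connection) and `stmt` in the try-with-resources header, so only the proxy is ever closed and the pool always gets its connection back.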
