Why use a database connection pool?
If we look at the steps involved in a typical "connect to the database" operation, it becomes clear why:
- Open a connection to the database using the database driver
- Open a TCP socket for reading/writing data
- Read/write data over the socket
- Close the connection
- Close the socket
Clearly, "connecting to the database" is a fairly expensive operation, so we should look for ways to reduce or avoid it as much as possible.
This is where database connection pools come in. By simply implementing a container of database connections that lets us reuse a number of existing connections, we effectively save the cost of performing huge numbers of expensive "connect to the database" operations and thereby improve the overall performance of a database-driven application.
↑ Translated from "A Simple Guide to Connection Pooling in Java", with some omissions and changes
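To make the idea concrete, a connection pool is essentially a container that hands out already-opened connections and takes them back when the caller is done. A toy sketch of the concept (this is not how HikariCP is implemented; NaivePool and its methods are invented purely for illustration):

import java.sql.Connection;
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** A toy pool: connections are opened once up front and then reused. */
class NaivePool {

    private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<>();

    /** The expensive "connect to the database" work is paid once here, not on every request. */
    NaivePool(Collection<Connection> openConnections) {
        idle.addAll(openConnections);
    }

    /** Borrow a connection; blocks while every connection is in use. */
    Connection borrow() throws InterruptedException {
        return idle.take();
    }

    /** Hand the connection back instead of closing it, so the next caller can reuse it. */
    void giveBack(Connection conn) {
        idle.offer(conn);
    }
}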
HikariCP Quick Start
HikariCP is a lightweight, high-performance JDBC connection pool. GitHub: https://github.com/brettwooldridge/HikariCP
1. Dependencies
- HikariCP
- slf4j (it runs even without a logging implementation)
- logback-core
- logback-classic
The first two, plus the JDBC driver for your database, are required; the logging implementation can be swapped for another option.
2. A simple scratch program

package org.sample.dao;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Test {

    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;

    static {
        config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8");
        config.setUsername("root");
        config.setPassword("???????");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    private Test() {}

    public static void main(String[] args) {
        Profile profile = new Profile();
        profile.setUsername("testname3");
        profile.setPassword("123");
        profile.setNickname("testnickname");

        int i = 0;
        // With IGNORE, a duplicate key does not throw an exception; executeUpdate simply returns 0
        String sql = "INSERT IGNORE INTO `profiles`.`profile` (`username`, `password`, `nickname`) "
                + "VALUES (?, ?, ?)";
        // close() on a pooled connection returns it to the pool rather than physically closing it
        try (Connection conn = Test.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, profile.getUsername());
            ps.setString(2, profile.getPassword());
            ps.setString(3, profile.getNickname());
            i = ps.executeUpdate();
        } catch (SQLException e) {
            throw new DaoException(e);
        }
        System.out.println(i);
    }
}
3. Setting pool parameters (only the common ones)
On a quad-core machine you can basically leave everything at its default? A sketch showing how these parameters are set in code follows the list below.
① autoCommit: controls the default auto-commit behavior of connections returned by the pool. Default: true.
② connectionTimeout: the maximum time to wait for a connection when none is available; if it is exceeded, a SQLException is thrown. The lowest allowed value is 250 (ms); the default is 30000 (30 seconds).
③ maximumPoolSize: the maximum number of connections the pool may hold, counting both in-use and idle connections. In practice this determines the maximum number of actual connections between the application and the database. The right value is best determined by your specific execution environment. Once the pool has reached maximumPoolSize and no connection is idle, a call to getConnection() blocks, waiting at most connectionTimeout. Choosing this value involves quite a few considerations; see "About Pool Sizing" for the details. A simple rule of thumb is connections = ((core count * 2) + effective spindle count); for example, 4 cores and one disk gives (4 * 2) + 1 = 9, which is close to the default of 10.
④ minimumIdle: the minimum number of idle connections. When the number of idle connections falls below minimumIdle and the total number of connections is no greater than maximumPoolSize, HikariCP will do its best to add new connections. For performance reasons it is recommended not to set this value and instead let HikariCP treat the pool as fixed-size; minimumIdle defaults to the value of maximumPoolSize.
⑤ maxLifetime: the maximum lifetime of a connection in the pool. An in-use connection is never retired; it is removed only once it has been closed. A small negative attenuation is applied so that the pool's connections do not all expire at the same moment. Setting this value is strongly recommended, and it should be several seconds shorter than any connection time limit imposed by the database. A value of 0 means an unlimited lifetime (still subject to idleTimeout). Default: 1800000 (30 minutes). (I don't fully understand this; isn't MySQL's limit 8 hours??? MySQL's default wait_timeout is indeed 8 hours; 30 minutes is simply a conservative value that stays well under whatever limit the database or network infrastructure imposes.)
⑥ idleTimeout: the maximum time a connection may sit idle in the pool. When the number of idle connections exceeds minimumIdle, any connection idle for longer than idleTimeout is removed. This setting only takes effect when minimumIdle is smaller than maximumPoolSize. Default: 600000 (10 minutes).
⑦ poolName: a user-defined name for the pool, shown mainly in log output and the JMX management console to identify the pool and its configuration. Default: auto-generated by HikariCP.
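For reference, a minimal sketch of setting the parameters above programmatically (the class name and the concrete values are placeholders chosen for illustration; the JDBC URL is the one used earlier):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSettings {

    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8");
        config.setAutoCommit(true);           // ① default auto-commit state of returned connections
        config.setConnectionTimeout(30_000);  // ② ms to wait for a free connection before SQLException
        config.setMaximumPoolSize(10);        // ③ e.g. (4 cores * 2) + 1 disk = 9, rounded up to 10
        // ④ minimumIdle deliberately not set, so the pool behaves as a fixed-size pool
        config.setMaxLifetime(1_800_000);     // ⑤ 30 min; keep it a few seconds below any DB-imposed limit
        config.setIdleTimeout(600_000);       // ⑥ only relevant when minimumIdle < maximumPoolSize
        config.setPoolName("profiles-pool");  // ⑦ appears in log output and JMX
        return new HikariDataSource(config);
    }
}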
4. MySQL configuration
jdbcUrl=jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8
username=root
password=test
dataSource.cachePrepStmts=true
dataSource.prepStmtCacheSize=250
dataSource.prepStmtCacheSqlLimit=2048
dataSource.useServerPrepStmts=true
dataSource.useLocalSessionState=true
dataSource.rewriteBatchedStatements=true
dataSource.cacheResultSetMetadata=true
dataSource.cacheServerConfiguration=true
dataSource.elideSetAutoCommits=true
dataSource.maintainTimeStats=false
5. Modifying the code from Java連接數據庫#02#
① HikariCPDataSource.java (hikari.properties is as shown above).
package org.sample.db;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class HikariCPDataSource {

    private static final String HIKARI_PROPERTIES_FILE_PATH = "/hikari.properties";
    private static HikariConfig config = new HikariConfig(HIKARI_PROPERTIES_FILE_PATH);
    private static HikariDataSource ds = new HikariDataSource(config);

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
}
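The class above never shuts the pool down. HikariDataSource implements Closeable, so an application that needs a clean shutdown can register a JVM shutdown hook; a small sketch of that variation (the static block with the hook is my addition, not part of the original class):

package org.sample.db;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class HikariCPDataSource {

    private static final String HIKARI_PROPERTIES_FILE_PATH = "/hikari.properties";
    private static final HikariDataSource ds =
            new HikariDataSource(new HikariConfig(HIKARI_PROPERTIES_FILE_PATH));

    static {
        // Close every pooled connection when the JVM exits
        Runtime.getRuntime().addShutdownHook(new Thread(ds::close));
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
}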
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
② ConnectionFactory.java
package org.sample.db;

import java.sql.Connection;
import java.sql.SQLException;

/**
 * Connection-pool version: each thread holds its own connection in a ThreadLocal.
 */
public class ConnectionFactory {

    private ConnectionFactory() {
        // Exists to defeat instantiation
    }

    private static final ThreadLocal<Connection> LocalConnectionHolder = new ThreadLocal<>();

    public static Connection getConnection() throws SQLException {
        Connection conn = LocalConnectionHolder.get();
        if (conn == null || conn.isClosed()) {
            conn = HikariCPDataSource.getConnection();
            LocalConnectionHolder.set(conn);
        }
        return conn;
    }

    public static void removeLocalConnection() {
        LocalConnectionHolder.remove();
    }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
③ ConnectionProxy.java (the code layering here is wrong!)
package org.sample.manager;

import org.sample.db.ConnectionFactory;
import org.sample.exception.DaoException;

import java.sql.Connection;

/**
 * Companion to the pool-backed ConnectionFactory; makes transaction control convenient in the service layer.
 */
public class ConnectionProxy {

    public static void setAutoCommit(boolean autoCommit) {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.setAutoCommit(autoCommit);
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void commit() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.commit();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void rollback() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.rollback();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void close() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.close(); // returns the connection to the pool
            ConnectionFactory.removeLocalConnection();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    // TODO: set isolation level
}
Everywhere else, just change LocalConnectionFactory to ConnectionFactory and LocalConnectionProxy to ConnectionProxy. If we later want to switch to a different connection pool, only a tiny bit of code in ConnectionFactory.java needs to change.
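To illustrate the intended usage, here is a sketch of a service-layer method driving a transaction through ConnectionProxy. ProfileService and saveBoth are hypothetical names invented for this example; ProfileDAO, ProfileDAOImpl.INSTANCE and saveProfile are the ones used in the test below.

package org.sample.service;

import org.sample.dao.ProfileDAO;
import org.sample.dao.impl.ProfileDAOImpl;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;
import org.sample.manager.ConnectionProxy;

/** Hypothetical service class, only to show the transaction pattern. */
public class ProfileService {

    private static final ProfileDAO PROFILE_DAO = ProfileDAOImpl.INSTANCE;

    public void saveBoth(Profile a, Profile b) {
        try {
            ConnectionProxy.setAutoCommit(false); // begin a transaction on this thread's connection
            PROFILE_DAO.saveProfile(a);           // both DAO calls share the same ThreadLocal connection
            PROFILE_DAO.saveProfile(b);
            ConnectionProxy.commit();             // make both inserts visible together
        } catch (DaoException e) {
            ConnectionProxy.rollback();           // undo everything on failure
            throw e;
        } finally {
            ConnectionProxy.close();              // return the connection to the pool and clear the ThreadLocal
        }
    }
}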
6. Testing
package org.sample.manager;

import org.junit.Test;
import org.sample.dao.ProfileDAO;
import org.sample.dao.impl.ProfileDAOImpl;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;

import static org.junit.Assert.assertTrue;

public class DaoTest {

    private static final Logger LOGGER = Logger.getLogger(DaoTest.class.getName());

    private static final String ORIGIN_STRING = "hello";

    private static String RandomString() {
        return Math.random() + ORIGIN_STRING + Math.random();
    }

    private static Profile RandomProfile() {
        Profile profile = new Profile(RandomString(), ORIGIN_STRING, RandomString());
        return profile;
    }

    private static final ProfileDAO PROFILE_DAO = ProfileDAOImpl.INSTANCE;

    private class Worker implements Runnable {

        private final Profile profile = RandomProfile();

        @Override
        public void run() {
            LOGGER.info(Thread.currentThread().getName() + " has started his work");
            try {
                // ConnectionProxy.setAutoCommit(false);
                PROFILE_DAO.saveProfile(profile);
                // ConnectionProxy.commit();
            } catch (DaoException e) {
                e.printStackTrace();
            } finally {
                try {
                    ConnectionProxy.close();
                } catch (DaoException e) {
                    e.printStackTrace();
                }
            }
            LOGGER.info(Thread.currentThread().getName() + " has finished his work");
        }
    }

    /**
     * NUM_TASKS is the number of concurrent threads.
     * -- Without a connection pool:
     *    NUM_TASKS <= 100 runs fine; 100 tasks take roughly 550ms~600ms
     *    NUM_TASKS > 100 fails with "too many connections" (occasionally it doesn't);
     *    this limit comes from the MySQL server itself
     * -- With the connection pool:
     *    NUM_TASKS > 10000 still runs fine; 10000 tasks take roughly 26s (pool size 10)
     */
    private static final int NUM_TASKS = 2000;

    @Test
    public void test() throws Exception {
        List<Runnable> workers = new LinkedList<>();
        for (int i = 0; i != NUM_TASKS; ++i) {
            workers.add(new Worker());
        }
        assertConcurrent("Dao test ", workers, Integer.MAX_VALUE);
    }

    public static void assertConcurrent(final String message, final List<? extends Runnable> runnables,
                                        final int maxTimeoutSeconds) throws InterruptedException {
        final int numThreads = runnables.size();
        final List<Throwable> exceptions = Collections.synchronizedList(new ArrayList<Throwable>());
        final ExecutorService threadPool = Executors.newFixedThreadPool(numThreads);
        try {
            final CountDownLatch allExecutorThreadsReady = new CountDownLatch(numThreads);
            final CountDownLatch afterInitBlocker = new CountDownLatch(1);
            final CountDownLatch allDone = new CountDownLatch(numThreads);
            for (final Runnable submittedTestRunnable : runnables) {
                threadPool.submit(new Runnable() {
                    public void run() {
                        allExecutorThreadsReady.countDown();
                        try {
                            afterInitBlocker.await();
                            submittedTestRunnable.run();
                        } catch (final Throwable e) {
                            exceptions.add(e);
                        } finally {
                            allDone.countDown();
                        }
                    }
                });
            }
            // wait until all threads are ready
            assertTrue("Timeout initializing threads! Perform long lasting initializations before passing runnables to assertConcurrent",
                    allExecutorThreadsReady.await(runnables.size() * 10, TimeUnit.MILLISECONDS));
            // start all test runners
            afterInitBlocker.countDown();
            assertTrue(message + " timeout! More than " + maxTimeoutSeconds + " seconds",
                    allDone.await(maxTimeoutSeconds, TimeUnit.SECONDS));
        } finally {
            threadPool.shutdownNow();
        }
        assertTrue(message + " failed with exception(s) " + exceptions, exceptions.isEmpty());
    }
}
I had planned to tune the pool parameters and observe the effect on performance, but it turned out that even with the parameters unchanged the run time fluctuates quite a bit. So I'll leave it here for now... the exact cause remains to be investigated!