Tomcat Performance Tuning
Changing the Tomcat Connector run mode to improve Tomcat performance
The Tomcat Connector has three run modes: bio, nio and apr.
http://www.365mini.com/page/tomcat-connector-mode.htm
1. bio (blocking I/O): as the name suggests, blocking I/O. Tomcat uses the traditional Java I/O API (the java.io package and its subpackages). By default Tomcat runs in bio mode. Unfortunately, bio is generally the lowest-performing of the three modes. The current server state can be viewed through Tomcat Manager:
http://server206:8080/manager/status
JVM
Free memory: 355.66 MB Total memory: 466.86 MB Max memory: 1016.12 MB
"ajp-apr-8009"
"http-apr-8080"
2. nio (new I/O): a newer I/O API introduced in Java SE 1.4 (the java.nio package and its subpackages). Java NIO is a buffer-oriented API that provides non-blocking I/O, so nio is also read as non-blocking I/O. It offers better concurrency than the traditional I/O (bio). Switching Tomcat to nio mode is simple: in <Tomcat install dir>/conf/server.xml, take the following configuration:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
and change the protocol attribute to org.apache.coyote.http11.Http11NioProtocol:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
redirectPort="8443" />
After this change, the server status page in Tomcat Manager shows that the HTTP Connector run mode has changed from http-bio-8080 to http-nio-8080.
3. apr (Apache Portable Runtime): the support library of the Apache HTTP Server. Put simply, Tomcat calls the Apache HTTP Server's core native library through JNI to handle file reads and network transfers, which greatly improves Tomcat's performance on static files. APR is also the preferred mode for running high-concurrency applications on Tomcat. As with the nio mode, change the protocol attribute of the corresponding Connector element to org.apache.coyote.http11.Http11AprProtocol.
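For example, a minimal sketch mirroring the nio example above (the port, timeout and redirect values are the same illustrative defaults, not recommendations):
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol"
connectionTimeout="20000"
redirectPort="8443" />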
The official Tomcat comparison of the three run modes:
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Connector_Comparison
                       | Java Blocking Connector (BIO) | Java Non Blocking Connector (NIO) | APR/native Connector (APR)
Classname              | Http11Protocol                | Http11NioProtocol                 | Http11AprProtocol
Tomcat Version         | 3.x onwards                   | 6.x onwards                       | 5.5.x onwards
Support Polling        | NO                            | YES                               | YES
Polling Size           | N/A                           | maxConnections                    | maxConnections
Read Request Headers   | Blocking                      | Non Blocking                      | Blocking
Read Request Body      | Blocking                      | Blocking                          | Blocking
Write Response         | Blocking                      | Blocking                          | Blocking
Wait for next Request  | Blocking                      | Non Blocking                      | Non Blocking
SSL Support            | Java SSL                      | Java SSL                          | OpenSSL
SSL Handshake          | Blocking                      | Non blocking                      | Blocking
Max Connections        | maxConnections                | maxConnections                    | maxConnections
How to choose between BIO and NIO, and the default settings
https://tomcat.apache.org/tomcat-8.0-doc/config/http.html
When to prefer NIO vs BIO depends on the use case.
- If you mostly have regular request-response usage, then it doesn’t matter, and even BIO might be a better choice (as seen in my previous benchmarks).
- If you have long-living connections, then NIO is the better choice, because it can serve more concurrent users without dedicating a blocked thread to each. The poller threads handle the sending of data back to the client, while the worker threads handle new requests. In other words, neither poller nor worker threads are blocked and reserved by a single user.
Both Tomcat 7 and Tomcat 8 default to protocol="HTTP/1.1". In Tomcat 7 this resolves to the blocking (BIO) Java connector (or APR/native if the native library is available), while in Tomcat 8 it resolves to the NIO connector (or APR/native); Tomcat 8 also adds a separate NIO2 connector.
The default value is always “HTTP/1.1″, but in Tomcat 7 that “uses an auto-switching mechanism to select either a blocking Java based connector or an APR/native based connector”, while in Tomcat 8 “uses an auto-switching mechanism to select either a non blocking Java NIO based connector or an APR/native based connector”. And to make things even harder, they introduced a NIO2 connector. And to be honest, I don’t know which one of the two NIO connectors is used by default.
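To pin a specific implementation instead of relying on the auto-switching, the protocol attribute can name the connector class directly. A sketch for the Tomcat 8 NIO2 connector, reusing the illustrative values from the earlier examples:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Nio2Protocol"
connectionTimeout="20000"
redirectPort="8443" />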
NIO shows no clear performance gain over BIO
http://techblog.bozho.net/non-blocking-benchmark/
“non-blocking doesn’t have visible benefits and one should not force himself to use the possibly unfriendly callback programming model for the sake of imaginary performance gains”
APR vs. NIO/BIO when using SSL
https://blog.eveoh.nl/2012/04/some-notes-on-tomcat-connector-performance/
Tomcat ships with 3 HTTP(s) connectors:
- Blocking IO (org.apache.coyote.Http11Protocol): this is the default and most stable connector.
- Non-blocking IO (org.apache.coyote.Http11NioProtocol): this is a non-blocking IO connector, which you will need for Comet/WebSockets and should offer slightly better performance in most cases.
- APR (org.apache.coyote.Http11AprProtocol): this connector uses Apache Portable Runtime and OpenSSL instead of their Java counterparts. Because of this, performance is generally better, especially when using SSL.
Some conclusions we can draw from these results:
- Apache Tomcat performs on a level comparable to Apache HTTPD. This is at least the case for dynamic requests, but I have seen similar results for static requests.
- When using SSL: use keep-alive, unless you have a specific reason not to.
- When using SSL on Tomcat: use APR. (A connector sketch follows this list.)
- Don’t expect NIO to give a better performance than BIO out of the box. Benchmark, benchmark, benchmark.
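A hedged sketch of an APR HTTPS connector along those lines (the certificate paths are placeholders; the SSL attribute names follow the standard Tomcat 7 APR SSL configuration):
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
SSLEnabled="true" scheme="https" secure="true"
SSLCertificateFile="/path/to/server.crt"
SSLCertificateKeyFile="/path/to/server.key" />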
Keep-alive settings
maxKeepAliveRequests: if not specified, this attribute is set to 100. Values of 100-200 are typical.
keepAliveTimeout: the default is to use the value set for the connectionTimeout attribute.
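A hedged sketch of how these two attributes sit on a Connector (the 150/30000 values are illustrative, not recommendations from the sources above):
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
maxKeepAliveRequests="150"
keepAliveTimeout="30000"
redirectPort="8443" />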
Startup log in NIO mode
...
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8080"]
Aug 16, 2016 1:38:31 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8443"]
Aug 16, 2016 1:38:31 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
...
Tomcat performance tuning: thread pool (Executor) configuration
http://jiangzhengjun.iteye.com/blog/852924
Thread pool configuration
<Service name="Catalina">
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="500" minSpareThreads="20" maxIdleTime="60000" /> http://zyycaesar.iteye.com/blog/299818
最大線程500(一般服務器足以),最小空閑線程數20,線程最大空閑時間60秒。
<Connector executor="tomcatThreadPool" port="8090" protocol="org.apache.coyote.http11.Http11AprProtocol"
connectionTimeout="45000" redirectPort="8445" />
or
<Connector executor="tomcatThreadPool" port="80" protocol="HTTP/1.1"
maxThreads="600"
minSpareThreads="100"
maxSpareThreads="300"
connectionTimeout="60000"
keepAliveTimeout="15000"
maxKeepAliveRequests="1"
redirectPort="443"
....../>
http://192.168.1.155:8090/manager/status
Default:
JVM
Free memory: 74.17 MB Total memory: 149.12 MB Max memory: 227.56 MB
"ajp-apr-8109"
Max threads: 200 Current thread count: 0 Current thread busy: 0 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-apr-8090"
Max threads: 200 Current thread count: 10 Current thread busy: 1 Keeped alive sockets count: 2
Max processing time: 702 ms Processing time: 3.743 s Request count: 83 Error count: 3 Bytes received: 0.00 MB Bytes sent: 0.35 MB
After configuring the thread pool:
JVM
Free memory: 76.36 MB Total memory: 153.81 MB Max memory: 227.56 MB
"ajp-apr-8109"
Max threads: 200 Current thread count: 0 Current thread busy: 0 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-apr-8090"
Max threads: 500 Current thread count: 2 Current thread busy: 1 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
Checking the number of Tomcat connections on Linux
netstat -na | grep ESTAB | grep 8080 | wc -l
$ netstat -na | grep ESTAB | grep 8090 | wc -l
2
$ netstat -na | grep ESTAB | grep 8090
tcp 0 0 123.56.104.184:8090 111.205.208.29:50634 ESTABLISHED
tcp 0 0 123.56.104.184:8090 111.205.208.29:50635 ESTABLISHED
tcp 0 0 123.56.104.184:8090 111.205.208.29:50633 ESTABLISHED
The Manager Status for "http-bio-8090" shows 3 busy threads, consistent with the 3 established connections:
Max threads: 500 Current thread count: 20 Current thread busy: 3
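A small hedged helper for keeping an eye on this over time (port 8090 and the 5-second interval are just examples):
# count established connections to port 8090 every 5 seconds
watch -n 5 "netstat -na | grep ESTAB | grep 8090 | wc -l"
# break all connections on that port down by TCP state
netstat -an | grep 8090 | awk '{print $6}' | sort | uniq -c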
Tomcat performance tuning: APR (Apache Portable Runtime) configuration
allows Tomcat to use certain native resources for performance
http://tomcat.apache.org/native-doc/
If APR is not installed, catalina.out shows the following at startup:
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1. install APR
In rpm based Linux those dependencies could be installed by something like:
yum install apr-devel openssl-devel
==========================================================================================================================
Package Arch Version Repository Size
==========================================================================================================================
Installing:
apr-devel x86_64 1.3.9-5.el6_2 base 176 k
openssl-devel x86_64 1.0.1e-30.el6_6.4 updates 1.2 M
Installing for dependencies:
apr x86_64 1.3.9-5.el6_2 base 123 k
keyutils-libs-devel x86_64 1.4-5.el6 base 29 k
krb5-devel x86_64 1.10.3-33.el6 base 498 k
libcom_err-devel x86_64 1.41.12-21.el6 base 32 k
libselinux-devel x86_64 2.0.94-5.8.el6 base 137 k
libsepol-devel x86_64 2.0.41-4.el6 base 64 k
zlib-devel x86_64 1.2.3-29.el6 base 44 k
Updating for dependencies:
e2fsprogs x86_64 1.41.12-21.el6 base 553 k
e2fsprogs-libs x86_64 1.41.12-21.el6 base 121 k
keyutils-libs x86_64 1.4-5.el6 base 20 k
krb5-libs x86_64 1.10.3-33.el6 base 765 k
libcom_err x86_64 1.41.12-21.el6 base 37 k
libselinux x86_64 2.0.94-5.8.el6 base 108 k
libselinux-utils x86_64 2.0.94-5.8.el6 base 82 k
libss x86_64 1.41.12-21.el6 base 41 k
openssl x86_64 1.0.1e-30.el6_6.4 updates 1.5 M
Transaction Summary
==========================================================================================================================
Install 9 Package(s)
Upgrade 9 Package(s)
Total download size: 5.4 M
Is this ok [y/N]: y
Downloading Packages:
(1/18): apr-1.3.9-5.el6_2.x86_64.rpm | 123 kB 00:00
(2/18): apr-devel-1.3.9-5.el6_2.x86_64.rpm | 176 kB 00:04
(3/18): e2fsprogs-1.41.12-21.el6.x86_64.rpm | 553 kB 00:00
(4/18): e2fsprogs-libs-1.41.12-21.el6.x86_64.rpm | 121 kB 00:02
(5/18): keyutils-libs-1.4-5.el6.x86_64.rpm | 20 kB 00:00
(6/18): keyutils-libs-devel-1.4-5.el6.x86_64.rpm | 29 kB 00:00
(7/18): krb5-devel-1.10.3-33.el6.x86_64.rpm | 498 kB 00:00
(8/18): krb5-libs-1.10.3-33.el6.x86_64.rpm | 765 kB 00:00
(9/18): libcom_err-1.41.12-21.el6.x86_64.rpm | 37 kB 00:00
(10/18): libcom_err-devel-1.41.12-21.el6.x86_64.rpm | 32 kB 00:00
(11/18): libselinux-2.0.94-5.8.el6.x86_64.rpm | 108 kB 00:00
(12/18): libselinux-devel-2.0.94-5.8.el6.x86_64.rpm | 137 kB 00:03
(13/18): libselinux-utils-2.0.94-5.8.el6.x86_64.rpm | 82 kB 00:00
(14/18): libsepol-devel-2.0.41-4.el6.x86_64.rpm | 64 kB 00:00
(15/18): libss-1.41.12-21.el6.x86_64.rpm | 41 kB 00:00
(16/18): openssl-1.0.1e-30.el6_6.4.x86_64.rpm | 1.5 MB 00:00
(17/18): openssl-devel-1.0.1e-30.el6_6.4.x86_64.rpm | 1.2 MB 00:41
(18/18): zlib-devel-1.2.3-29.el6.x86_64.rpm | 44 kB 00:00
--------------------------------------------------------------------------------------------------------------------------
Total 43 kB/s | 5.4 MB 02:09
[root@iZ94gt8lzgnZ logs]# rpm -ql apr
/usr/lib64/libapr-1.so.0
/usr/lib64/libapr-1.so.0.3.9
/usr/share/doc/apr-1.3.9
/usr/share/doc/apr-1.3.9/CHANGES
/usr/share/doc/apr-1.3.9/LICENSE
/usr/share/doc/apr-1.3.9/NOTICE
[root@iZ94gt8lzgnZ native]# rpm -ql apr-devel
/usr/bin/apr-1-config
/usr/include/apr-1
/usr/include/apr-1/apr-x86_64.h
/usr/include/apr-1/apr.h
/usr/include/apr-1/apr_allocator.h
/usr/include/apr-1/apr_atomic.h
/usr/include/apr-1/apr_dso.h
/usr/include/apr-1/apr_env.h
/usr/include/apr-1/apr_errno.h
/usr/include/apr-1/apr_file_info.h
/usr/include/apr-1/apr_file_io.h
/usr/include/apr-1/apr_fnmatch.h
/usr/include/apr-1/apr_general.h
/usr/include/apr-1/apr_getopt.h
/usr/include/apr-1/apr_global_mutex.h
/usr/include/apr-1/apr_hash.h
/usr/include/apr-1/apr_inherit.h
/usr/include/apr-1/apr_lib.h
/usr/include/apr-1/apr_mmap.h
/usr/include/apr-1/apr_network_io.h
/usr/include/apr-1/apr_poll.h
/usr/include/apr-1/apr_pools.h
/usr/include/apr-1/apr_portable.h
/usr/include/apr-1/apr_proc_mutex.h
/usr/include/apr-1/apr_random.h
/usr/include/apr-1/apr_ring.h
/usr/include/apr-1/apr_shm.h
/usr/include/apr-1/apr_signal.h
/usr/include/apr-1/apr_strings.h
/usr/include/apr-1/apr_support.h
/usr/include/apr-1/apr_tables.h
/usr/include/apr-1/apr_thread_cond.h
/usr/include/apr-1/apr_thread_mutex.h
/usr/include/apr-1/apr_thread_proc.h
/usr/include/apr-1/apr_thread_rwlock.h
/usr/include/apr-1/apr_time.h
/usr/include/apr-1/apr_user.h
/usr/include/apr-1/apr_version.h
/usr/include/apr-1/apr_want.h
/usr/lib64/apr-1
/usr/lib64/apr-1/build
/usr/lib64/apr-1/build/apr_rules.mk
/usr/lib64/apr-1/build/libtool
/usr/lib64/apr-1/build/make_exports.awk
/usr/lib64/apr-1/build/make_var_export.awk
/usr/lib64/apr-1/build/mkdir.sh
/usr/lib64/libapr-1.la
/usr/lib64/libapr-1.so
/usr/lib64/pkgconfig/apr-1.pc
/usr/share/aclocal/find_apr.m4
/usr/share/doc/apr-devel-1.3.9
/usr/share/doc/apr-devel-1.3.9/APRDesign.html
/usr/share/doc/apr-devel-1.3.9/canonical_filenames.html
/usr/share/doc/apr-devel-1.3.9/incomplete_types
/usr/share/doc/apr-devel-1.3.9/non_apr_programs
http://wander312.iteye.com/blog/1132975
/usr/lib64 is already on the library path, so the following is not strictly needed:
vi /etc/profile
# append the following at the end of the file
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64
# reload the profile
source /etc/profile
2. install Tomcat Native Connector
wget http://mirrors.cnnic.cn/apache/tomcat/tomcat-connectors/native/1.1.32/source/tomcat-native-1.1.32-src.tar.gz
tar zxvf tomcat-native-1.1.32-src.tar.gz
cd tomcat-native-1.1.32-src/jni/native
./configure --with-apr=/usr/bin/apr-1-config --with-java-home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64 --with-ssl=/usr/bin/openssl --prefix=/root/apache-tomcat-7.0.57
[root@iZ94gt8lzgnZ ~]# ls /usr/lib/jvm
java-1.7.0-openjdk-1.7.0.65.x86_64 jre jre-1.7.0 jre-1.7.0-openjdk.x86_64 jre-openjdk
[root@iZ94gt8lzgnZ native]# ls /usr/bin/open*
/usr/bin/open /usr/bin/openssl /usr/bin/openvt
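The notes stop at ./configure; the usual next step (not captured above) is to build and install the native library, roughly:
make && make install
# with --prefix=/root/apache-tomcat-7.0.57 the resulting libtcnative-1.so typically lands under that
# Tomcat directory's lib/ folder; make sure it is on java.library.path or LD_LIBRARY_PATH at startup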
3. Start Tomcat and check the log:
bin/startup.sh
head logs/catalina.out
You should see something like:
INFO: Loaded APR based Apache Tomcat Native library 1.1.14.
2009-1-13 11:12:51 org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Tomcat JVM performance tuning
About the JVM
Total JVM memory = young generation size + old generation size + permanent generation size (e.g. 1536 MB heap + 512 MB perm = 2 GB).
Sun's official recommendation is to size the young generation at 3/8 of the whole heap.
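As a quick worked example (my own arithmetic, not from the sources): with a 1536 MB heap, 3/8 of the heap is 1536 × 3/8 = 576 MB for the young generation, i.e. roughly -Xmn576m, leaving about 960 MB for the old generation.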
JVM memory regions
JVM Heap memory elements        | JVM Non-Heap memory parts
Eden Space                      | Permanent Generation
Survivor Space                  | Code Cache
Tenured Generation / PS Old Gen |
JVM memory layout diagram
The diagram at the link below illustrates the JVM memory parts and the JVM (java) command line switches used to preset/limit some of them:
http://www.jvmhost.com/articles/how-can-i-monitor-memory-usage-of-my-tomcat-jvm
Default NewRatio values on different platforms
https://blogs.oracle.com/jonthecollector/entry/the_second_most_important_gc
If your application has a large proportion of long lived data, the value of NewRatio may not put enough of the space into the tenured generation.
Something to add here is that the default value of NewRatio is platform dependent and runtime compiler (JIT) dependent. Below are the values as we get ready for the JDK6 release.
-server on amd64 2
-server on ia32 8
-server on sparc 2
-client on ia32 12
-client on sparc 8
IBM Heap Analyzer
G1GC
A new feature in JDK 7
http://dongliu.net/post/404142
The Garbage-First collector (G1 GC) is the new-generation garbage collector intended to eventually replace the Concurrent Mark-Sweep garbage collector (CMS GC).
G1 GC is already included in JVMs from JDK 1.6 update 14 onward and can be enabled with -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC.
https://blogs.oracle.com/g1gc/
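If this service were ever switched to G1, a hedged setenv.sh sketch would look like the following (heap sizes borrowed from the tuning below; on JDK 7u4 and later -XX:+UseG1GC alone is enough, the unlock flag is only needed on early 6uXX builds):
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1536m -Xmx1536m -XX:+UseG1GC"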
JVM memory limit (maximum)
Test at the command line with java -XmxXXXXM -version, gradually increasing XXXX; if the command runs normally the specified size is available, otherwise an error message is printed.
http://unixboy.iteye.com/blog/174173
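For example (the starting value is arbitrary):
java -Xmx2048m -version
java -Xmx4096m -version   # keep increasing until the JVM reports it cannot reserve enough space for the heap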
Tomcat JVM tuning log
Tomcat configuration and status, 2015-06-16
Total JVM memory = young generation size + old generation size + permanent generation size
Note: Tomcat has been running since 2015-06-04; DDTService was reloaded on 2015-06-12.
$ cat /opt/tomcat-ddtservice/bin/setenv.sh
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
admin@.184:~]
$ ps -auxw --sort=%cpu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 17733 0.1 0.1 991520 10048 ? Sl May20 76:51 /usr/local/aegis/alihids/AliHids
root 18847 0.2 0.1 793852 13936 ? Sl Apr03 249:44 /usr/local/aegis/aegis_client/aegis_00_73/AliYunDun
admin 20880 0.8 33.3 5557668 2686652 ? Sl Jun04 146:14 /opt/jdk1.7.0_71/bin/java -Djava.util.logging.config.file=/opt/tomca
admin 2480 3.0 7.2 3926212 584256 ? Sl Jan06 7151:06 /opt/jdk1.7.0_71/bin/java -Djava.util.logging.config.file=/opt/apac
admin@.184:~]
$ free
total used free shared buffers cached
Mem: 8058260 7873136 185124 0 149520 4179732
-/+ buffers/cache: 3543884 4514376
Swap: 0 0 0
http://.28:8161/admin/subscribers.jsp
Active Durable Topic Subscribers
46
http://.184:8090/manager/status
Server Information
Tomcat Version:  Apache Tomcat/7.0.57
JVM Version:     1.7.0_71-b14
JVM Vendor:      Oracle Corporation
OS Name:         Linux
OS Version:      2.6.32-358.6.2.el6.x86_64
OS Architecture: amd64
Hostname:        iZ25f12r4xgZ
IP Address:      10.170.207.50
"ajp-bio-8209"
Max threads: 200 Current thread count: 0 Current thread busy: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-bio-8090"
Max threads: 500 Current thread count: 28 Current thread busy: 18
Max processing time: 1320993 ms Processing time: 24161.775 s Request count: 380932 Error count: 63077 Bytes received: 55.11 MB Bytes sent: 3862.71 MB
JVM
Free memory: 1377.64 MB Total memory: 1899.50 MB Max memory: 1899.50 MB
Memory Pool       | Type            | Initial   | Total      | Maximum    | Used
PS Eden Space     | Heap memory     | 129.00 MB | 530.50 MB  | 674.50 MB  | 26.95 MB (3%)
PS Old Gen        | Heap memory     | 341.50 MB | 1365.00 MB | 1365.00 MB | 446.50 MB (32%)
PS Survivor Space | Heap memory     | 21.00 MB  | 4.00 MB    | 4.00 MB    | 0.00 MB (0%)
Code Cache        | Non-heap memory | 2.43 MB   | 45.37 MB   | 48.00 MB   | 44.79 MB (93%)
PS Perm Gen       | Non-heap memory | 128.00 MB | 310.00 MB  | 768.00 MB  | 309.57 MB (40%)
Tomcat issues, 2015-06-16
- Too little free system memory (185 MB).
- Code Cache usage is too high, close to its limit (ReservedCodeCacheSize and InitialCodeCacheSize).
- PS Perm Gen (Permanent Generation) is less than half used and is unlikely to grow much further.
- Overall heap memory usage is under half.
- On servers, -Xms and -Xmx are usually set equal to avoid resizing the heap after every GC.
Tomcat adjustment, 2015-06-16
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
changed to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
Follow-up after the 2015-06-16 adjustment
2015-06-18, 22:00, after deployment and testing:
JVM
Free memory: 1245.49 MB Total memory: 1436.00 MB Max memory: 1436.00 MB
Memory Pool       | Type            | Initial    | Total      | Maximum    | Used
PS Eden Space     | Heap memory     | 384.00 MB  | 311.50 MB  | 311.50 MB  | 61.70 MB (19%)
PS Old Gen        | Heap memory     | 1024.00 MB | 1024.00 MB | 1024.00 MB | 121.35 MB (11%)
PS Survivor Space | Heap memory     | 64.00 MB   | 100.50 MB  | 100.50 MB  | 7.44 MB (7%)
Code Cache        | Non-heap memory | 2.43 MB    | 9.43 MB    | 128.00 MB  | 9.24 MB (7%)
PS Perm Gen       | Non-heap memory | 256.00 MB  | 256.00 MB  | 512.00 MB  | 74.74 MB (14%)
2015-06-19, 16:47, after deployment and testing:
JVM
Free memory: 1199.51 MB Total memory: 1508.50 MB Max memory: 1508.50 MB
Memory Pool       | Type            | Initial    | Total      | Maximum    | Used
PS Eden Space     | Heap memory     | 384.00 MB  | 456.50 MB  | 456.50 MB  | 53.91 MB (11%)
PS Old Gen        | Heap memory     | 1024.00 MB | 1024.00 MB | 1024.00 MB | 247.09 MB (24%)
PS Survivor Space | Heap memory     | 64.00 MB   | 28.00 MB   | 28.00 MB   | 7.96 MB (28%)
Code Cache        | Non-heap memory | 2.43 MB    | 19.62 MB   | 128.00 MB  | 18.88 MB (14%)
PS Perm Gen       | Non-heap memory | 256.00 MB  | 256.00 MB  | 512.00 MB  | 98.51 MB (19%)
2015-06-29 10:30 (restarted on the evening of June 19), 10 days after deployment:
JVM
Free memory: 1180.61 MB Total memory: 1470.50 MB Max memory: 1470.50 MB
Memory Pool       | Type            | Initial    | Total      | Maximum    | Used
PS Eden Space     | Heap memory     | 384.00 MB  | 420.00 MB  | 439.00 MB  | 16.91 MB (3%)
PS Old Gen        | Heap memory     | 1024.00 MB | 1024.00 MB | 1024.00 MB | 246.81 MB (24%)
PS Survivor Space | Heap memory     | 64.00 MB   | 26.50 MB   | 26.50 MB   | 26.15 MB (98%)
Code Cache        | Non-heap memory | 2.43 MB    | 27.00 MB   | 128.00 MB  | 26.58 MB (20%)
PS Perm Gen       | Non-heap memory | 256.00 MB  | 256.00 MB  | 512.00 MB  | 121.63 MB (23%)
20150629 10:33
Free memory: 972.35 MB Total memory: 1492.00 MB Max memory: 1492.00 MB
Memory Pool       | Type            | Initial    | Total      | Maximum    | Used
PS Eden Space     | Heap memory     | 384.00 MB  | 421.50 MB  | 421.50 MB  | 274.89 MB (65%)
PS Old Gen        | Heap memory     | 1024.00 MB | 1024.00 MB | 1024.00 MB | 244.74 MB (23%)
PS Survivor Space | Heap memory     | 64.00 MB   | 46.50 MB   | 46.50 MB   | 0.00 MB (0%)
Code Cache        | Non-heap memory | 2.43 MB    | 27.00 MB   | 128.00 MB  | 26.59 MB (20%)
PS Perm Gen       | Non-heap memory | 256.00 MB  | 256.00 MB  | 512.00 MB  | 121.57 MB (23%)
Total JVM memory = young generation size + old generation size + permanent generation size (1536 MB + 512 MB = 2 GB).
Sun's official recommendation is to size the young generation at 3/8 of the whole heap.
-Xmn2g: sets the young generation size to 2 GB.
-XX:NewRatio=4: sets the ratio of the young generation (Eden plus the two Survivor spaces) to the old generation (excluding the permanent generation). At 4, young:old is 1:4 and the young generation takes 1/5 of the heap.
-XX:SurvivorRatio=4: sets the ratio of Eden to a Survivor space inside the young generation. At 4, the two Survivor spaces together relate to Eden as 2:4, so one Survivor space is 1/6 of the young generation.
-Xss128k: sets the stack size of each thread. Since JDK 5.0 the default thread stack size is 1 MB; before that it was 256 KB.
-XX:NewRatio=1 is used here because PS Eden Space fluctuates heavily while PS Old Gen barely changes, meaning many new objects are created and collected quickly; Eden Space should therefore be enlarged (see the worked sizing example below).
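A quick worked example of what the final settings below imply (my own arithmetic, using the HotSpot formulas young = heap / (1 + NewRatio) and Eden = SurvivorRatio / (SurvivorRatio + 2) of the young generation): with -Xms1536m -Xmx1536m -XX:NewRatio=1 -XX:SurvivorRatio=6, the young generation is 1536 / 2 = 768 MB, Eden is 6/8 × 768 = 576 MB, each Survivor space is 96 MB, and the old generation gets the remaining 768 MB.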
Tomcat further adjustments
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
changed to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
then to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1536m -Xmx1536m -XX:NewRatio=1 -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m -Xss256k"
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1536m -Xmx1536m -XX:NewRatio=1 -XX:SurvivorRatio=6 -XX:PermSize=256m -XX:MaxPermSize=512m -Xss256k -XX:ReservedCodeCacheSize=128m "
Debugging a Tomcat/Java web application that keeps consuming CPU
Approach:
1. Use top to find the PID of the process that is consuming CPU.
2. Use ps -ef | grep PID to identify what that process is, e.g. whether it is the java process started by Tomcat.
3. Use ps -mp pid -o THREAD,tid,time to print per-thread CPU usage for that process.
In this case the most expensive thread was 28802, which had accumulated almost two hours of CPU time.
4. Convert the thread ID to hexadecimal: printf "%x\n" tid
5. Finally print that thread's stack trace: jstack pid | grep tid -A 30
Locate the offending code and check whether the function contains a code segment that could loop forever.
The problem is usually in a while or for loop. (A combined command example follows this list.)
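Putting the steps together, a hedged sketch using the PID (20880) and thread ID (28802) that appear in these notes as stand-in values:
top                                  # 1. find the busy java PID (20880 here)
ps -ef | grep 20880                  # 2. confirm it is the Tomcat java process
ps -mp 20880 -o THREAD,tid,time      # 3. per-thread CPU time; pick the worst TID (28802)
printf "%x\n" 28802                  # 4. 28802 -> 7082 in hex
jstack 20880 | grep -A 30 0x7082     # 5. stack trace of that thread (matches nid=0x7082)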
jstack command not found on centos
[root@yfddt5Z logs]# sudo updatedb
[root@yfddt5Z logs]# locate jstack
/usr/java/jdk1.7.0_79/bin/jstack
/usr/java/jdk1.7.0_79/man/ja_JP.UTF-8/man1/jstack.1
/usr/java/jdk1.7.0_79/man/man1/jstack.1
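One hedged way to make jstack usable after locating it (path taken from the locate output above):
export PATH=$PATH:/usr/java/jdk1.7.0_79/bin
# or simply call it by its full path: /usr/java/jdk1.7.0_79/bin/jstack <pid>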
Reference:
http://blog.csdn.net/netdevgirl/article/details/53943224 — jstack (thread dumps), jmap (memory), and jstat (performance statistics) commands