Tomcat Performance Tuning
Changing the Tomcat Connector run mode to optimize Tomcat performance
The Tomcat Connector supports three run modes: bio, nio, and apr.
http://www.365mini.com/page/tomcat-connector-mode.htm
1. bio (blocking I/O): as the name suggests, blocking I/O, meaning Tomcat uses the traditional Java I/O API (the java.io package and its subpackages). Tomcat runs in bio mode by default. Unfortunately, bio is generally the lowest-performing of the three modes. You can check the server's current state through the Tomcat Manager.
http://server206:8080/manager/status
JVM
Free memory: 355.66 MB Total memory: 466.86 MB Max memory: 1016.12 MB
"ajp-apr-8009"
"http-apr-8080"
2. nio (new I/O): the I/O API introduced in Java SE 1.4 (the java.nio package and its subpackages). Java NIO is a buffer-based API that supports non-blocking I/O, so nio is also read as "non-blocking I/O". It offers better concurrency than traditional I/O (bio). Switching Tomcat to nio mode is simple: in the conf/server.xml file under the Tomcat installation directory, take the following configuration:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
and change its protocol attribute value to org.apache.coyote.http11.Http11NioProtocol:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
redirectPort="8443" />
The server status page in Tomcat Manager will then show that the HTTP Connector's run mode has changed from http-bio-8080 to http-nio-8080.
3. apr (Apache Portable Runtime): the support library of the Apache HTTP Server. Roughly speaking, Tomcat calls the Apache HTTP Server's core native libraries through JNI to handle file reads and network transfers, which greatly improves Tomcat's performance on static files. APR is also the preferred mode for running high-concurrency applications on Tomcat. As with nio, set the corresponding Connector's protocol attribute to org.apache.coyote.http11.Http11AprProtocol.
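For example, a Connector switched to apr mode might look like this (a sketch mirroring the nio example above; the port and timeout values are illustrative, and the Tomcat Native library must be installed, as described in the APR section later in these notes):

```xml
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />
```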
Tomcat's official comparison of the three run modes
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Connector_Comparison
                         Java Blocking Connector   Java Non Blocking Connector   APR/native Connector
                         BIO                       NIO                           APR
Classname                Http11Protocol            Http11NioProtocol             Http11AprProtocol
Tomcat Version           3.x onwards               6.x onwards                   5.5.x onwards
Support Polling          NO                        YES                           YES
Polling Size             N/A                       maxConnections                maxConnections
Read Request Headers     Blocking                  Non Blocking                  Blocking
Read Request Body        Blocking                  Blocking                      Blocking
Write Response           Blocking                  Blocking                      Blocking
Wait for next Request    Blocking                  Non Blocking                  Non Blocking
SSL Support              Java SSL                  Java SSL                      OpenSSL
SSL Handshake            Blocking                  Non blocking                  Blocking
Max Connections          maxConnections            maxConnections                maxConnections
How to choose between BIO and NIO, and the default settings
https://tomcat.apache.org/tomcat-8.0-doc/config/http.html
When to prefer NIO vs BIO depends on the use case.
- If you mostly have regular request-response usage, then it doesn’t matter, and even BIO might be a better choice (as seen in my previous benchmarks).
- If you have long-living connections, then NIO is the better choice, because it can serve more concurrent users without the need to dedicate a blocked thread to each. The poller threads handle the sending of data back to the client, while the worker threads handle new requests. In other words, neither poller, nor worker threads are blocked and reserved by a single user.
Both Tomcat 7 and Tomcat 8 default to HTTP/1.1. Tomcat 7 auto-switches between the blocking (BIO) Java connector and the APR/native connector, while Tomcat 8 auto-switches between the non-blocking Java NIO connector and APR/native, and additionally ships a NIO2 connector.
The default value is always “HTTP/1.1″, but in Tomcat 7 that “uses an auto-switching mechanism to select either a blocking Java based connector or an APR/native based connector”, while in Tomcat 8 “uses an auto-switching mechanism to select either a non blocking Java NIO based connector or an APR/native based connector”. And to make things even harder, they introduced a NIO2 connector. And to be honest, I don’t know which one of the two NIO connectors is used by default.
NIO shows no clear performance gain over BIO
http://techblog.bozho.net/non-blocking-benchmark/
“non-blocking doesn’t have visible benefits and one should not force himself to use the possibly unfriendly callback programming model for the sake of imaginary performance gains”
APR vs. NIO/BIO when using SSL
https://blog.eveoh.nl/2012/04/some-notes-on-tomcat-connector-performance/
Tomcat ships with 3 HTTP(s) connectors:
- Blocking IO (org.apache.coyote.Http11Protocol): this is the default and most stable connector.
- Non-blocking IO (org.apache.coyote.Http11NioProtocol): this is a non-blocking IO connector, which you will need for Comet/WebSockets and should offer slightly better performance in most cases.
- APR (org.apache.coyote.Http11AprProtocol): this connector uses Apache Portable Runtime and OpenSSL instead of their Java counterparts. Because of this, performance is generally better, especially when using SSL.
Some conclusions we can draw from these results:
- Apache Tomcat performs on a level comparable to Apache HTTPD. This is at least the case for dynamic requests, but I have seen similar results for static requests.
- When using SSL: use keep-alive, unless you have a specific reason not to.
- When using SSL on Tomcat: use APR.
- Don’t expect NIO to give a better performance than BIO out of the box. Benchmark, benchmark, benchmark.
On keep-alive settings
maxKeepAliveRequests: if not specified, this attribute is set to 100. Commonly set to 100-200.
keepAliveTimeout: the default is the value set for the connectionTimeout attribute.
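Applied to a Connector, these keep-alive attributes might look like this (a sketch; the values are illustrative, following the 100-200 guideline above):

```xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="15000"
           maxKeepAliveRequests="150"
           redirectPort="8443" />
```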
Startup log in NIO mode
...
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8080"]
Aug 16, 2016 1:38:31 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8443"]
Aug 16, 2016 1:38:31 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Aug 16, 2016 1:38:31 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
...
Tomcat Performance Tuning: Thread Pool Configuration
http://jiangzhengjun.iteye.com/blog/852924
Thread pool configuration
<Service name="Catalina">
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="500" minSpareThreads="20" maxIdleTime="60000" />
http://zyycaesar.iteye.com/blog/299818
Maximum of 500 threads (enough for a typical server), a minimum of 20 idle threads, and a maximum thread idle time of 60 seconds.
<Connector executor="tomcatThreadPool" port="8090" protocol="org.apache.coyote.http11.Http11AprProtocol"
connectionTimeout="45000" redirectPort="8445" />
or (note: when the executor attribute is set, the Connector's own maxThreads/minSpareThreads values are ignored, and maxSpareThreads is only honored by older Tomcat versions):
<Connector executor="tomcatThreadPool" port="80" protocol="HTTP/1.1"
maxThreads="600"
minSpareThreads="100"
maxSpareThreads="300"
connectionTimeout="60000"
keepAliveTimeout="15000"
maxKeepAliveRequests="1"
redirectPort="443"
....../>
http://192.168.1.155:8090/manager/status
Default:
JVM
Free memory: 74.17 MB Total memory: 149.12 MB Max memory: 227.56 MB
"ajp-apr-8109"
Max threads: 200 Current thread count: 0 Current thread busy: 0 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-apr-8090"
Max threads: 200 Current thread count: 10 Current thread busy: 1 Keeped alive sockets count: 2
Max processing time: 702 ms Processing time: 3.743 s Request count: 83 Error count: 3 Bytes received: 0.00 MB Bytes sent: 0.35 MB
After configuring the thread pool:
JVM
Free memory: 76.36 MB Total memory: 153.81 MB Max memory: 227.56 MB
"ajp-apr-8109"
Max threads: 200 Current thread count: 0 Current thread busy: 0 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-apr-8090"
Max threads: 500 Current thread count: 2 Current thread busy: 1 Keeped alive sockets count: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
Checking Tomcat connection counts on Linux
netstat -na | grep ESTAB | grep 8080 | wc -l
$ netstat -na | grep ESTAB | grep 8090 | wc -l
2
$ netstat -na | grep ESTAB | grep 8090
tcp 0 0 123.56.104.184:8090 111.205.208.29:50634 ESTABLISHED
tcp 0 0 123.56.104.184:8090 111.205.208.29:50635 ESTABLISHED
tcp 0 0 123.56.104.184:8090 111.205.208.29:50633 ESTABLISHED
Manager Status for "http-bio-8090" shows 3 busy threads, consistent with the 3 ESTABLISHED connections above:
Max threads: 500 Current thread count: 20 Current thread busy: 3
Tomcat Performance Tuning: APR (Apache Portable Runtime) Configuration
APR "allows Tomcat to use certain native resources for performance".
http://tomcat.apache.org/native-doc/
If APR is not installed, catalina.out shows the following at Tomcat startup:
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1. Install APR
In rpm based Linux those dependencies could be installed by something like:
yum install apr-devel openssl-devel
==========================================================================================================================
Package Arch Version Repository Size
==========================================================================================================================
Installing:
apr-devel x86_64 1.3.9-5.el6_2 base 176 k
openssl-devel x86_64 1.0.1e-30.el6_6.4 updates 1.2 M
Installing for dependencies:
apr x86_64 1.3.9-5.el6_2 base 123 k
keyutils-libs-devel x86_64 1.4-5.el6 base 29 k
krb5-devel x86_64 1.10.3-33.el6 base 498 k
libcom_err-devel x86_64 1.41.12-21.el6 base 32 k
libselinux-devel x86_64 2.0.94-5.8.el6 base 137 k
libsepol-devel x86_64 2.0.41-4.el6 base 64 k
zlib-devel x86_64 1.2.3-29.el6 base 44 k
Updating for dependencies:
e2fsprogs x86_64 1.41.12-21.el6 base 553 k
e2fsprogs-libs x86_64 1.41.12-21.el6 base 121 k
keyutils-libs x86_64 1.4-5.el6 base 20 k
krb5-libs x86_64 1.10.3-33.el6 base 765 k
libcom_err x86_64 1.41.12-21.el6 base 37 k
libselinux x86_64 2.0.94-5.8.el6 base 108 k
libselinux-utils x86_64 2.0.94-5.8.el6 base 82 k
libss x86_64 1.41.12-21.el6 base 41 k
openssl x86_64 1.0.1e-30.el6_6.4 updates 1.5 M
Transaction Summary
==========================================================================================================================
Install 9 Package(s)
Upgrade 9 Package(s)
Total download size: 5.4 M
Is this ok [y/N]: y
Downloading Packages:
(1/18): apr-1.3.9-5.el6_2.x86_64.rpm | 123 kB 00:00
(2/18): apr-devel-1.3.9-5.el6_2.x86_64.rpm | 176 kB 00:04
(3/18): e2fsprogs-1.41.12-21.el6.x86_64.rpm | 553 kB 00:00
(4/18): e2fsprogs-libs-1.41.12-21.el6.x86_64.rpm | 121 kB 00:02
(5/18): keyutils-libs-1.4-5.el6.x86_64.rpm | 20 kB 00:00
(6/18): keyutils-libs-devel-1.4-5.el6.x86_64.rpm | 29 kB 00:00
(7/18): krb5-devel-1.10.3-33.el6.x86_64.rpm | 498 kB 00:00
(8/18): krb5-libs-1.10.3-33.el6.x86_64.rpm | 765 kB 00:00
(9/18): libcom_err-1.41.12-21.el6.x86_64.rpm | 37 kB 00:00
(10/18): libcom_err-devel-1.41.12-21.el6.x86_64.rpm | 32 kB 00:00
(11/18): libselinux-2.0.94-5.8.el6.x86_64.rpm | 108 kB 00:00
(12/18): libselinux-devel-2.0.94-5.8.el6.x86_64.rpm | 137 kB 00:03
(13/18): libselinux-utils-2.0.94-5.8.el6.x86_64.rpm | 82 kB 00:00
(14/18): libsepol-devel-2.0.41-4.el6.x86_64.rpm | 64 kB 00:00
(15/18): libss-1.41.12-21.el6.x86_64.rpm | 41 kB 00:00
(16/18): openssl-1.0.1e-30.el6_6.4.x86_64.rpm | 1.5 MB 00:00
(17/18): openssl-devel-1.0.1e-30.el6_6.4.x86_64.rpm | 1.2 MB 00:41
(18/18): zlib-devel-1.2.3-29.el6.x86_64.rpm | 44 kB 00:00
--------------------------------------------------------------------------------------------------------------------------
Total 43 kB/s | 5.4 MB 02:09
[root@iZ94gt8lzgnZ logs]# rpm -ql apr
/usr/lib64/libapr-1.so.0
/usr/lib64/libapr-1.so.0.3.9
/usr/share/doc/apr-1.3.9
/usr/share/doc/apr-1.3.9/CHANGES
/usr/share/doc/apr-1.3.9/LICENSE
/usr/share/doc/apr-1.3.9/NOTICE
[root@iZ94gt8lzgnZ native]# rpm -ql apr-devel
/usr/bin/apr-1-config
/usr/include/apr-1
/usr/include/apr-1/apr-x86_64.h
/usr/include/apr-1/apr.h
/usr/include/apr-1/apr_allocator.h
/usr/include/apr-1/apr_atomic.h
/usr/include/apr-1/apr_dso.h
/usr/include/apr-1/apr_env.h
/usr/include/apr-1/apr_errno.h
/usr/include/apr-1/apr_file_info.h
/usr/include/apr-1/apr_file_io.h
/usr/include/apr-1/apr_fnmatch.h
/usr/include/apr-1/apr_general.h
/usr/include/apr-1/apr_getopt.h
/usr/include/apr-1/apr_global_mutex.h
/usr/include/apr-1/apr_hash.h
/usr/include/apr-1/apr_inherit.h
/usr/include/apr-1/apr_lib.h
/usr/include/apr-1/apr_mmap.h
/usr/include/apr-1/apr_network_io.h
/usr/include/apr-1/apr_poll.h
/usr/include/apr-1/apr_pools.h
/usr/include/apr-1/apr_portable.h
/usr/include/apr-1/apr_proc_mutex.h
/usr/include/apr-1/apr_random.h
/usr/include/apr-1/apr_ring.h
/usr/include/apr-1/apr_shm.h
/usr/include/apr-1/apr_signal.h
/usr/include/apr-1/apr_strings.h
/usr/include/apr-1/apr_support.h
/usr/include/apr-1/apr_tables.h
/usr/include/apr-1/apr_thread_cond.h
/usr/include/apr-1/apr_thread_mutex.h
/usr/include/apr-1/apr_thread_proc.h
/usr/include/apr-1/apr_thread_rwlock.h
/usr/include/apr-1/apr_time.h
/usr/include/apr-1/apr_user.h
/usr/include/apr-1/apr_version.h
/usr/include/apr-1/apr_want.h
/usr/lib64/apr-1
/usr/lib64/apr-1/build
/usr/lib64/apr-1/build/apr_rules.mk
/usr/lib64/apr-1/build/libtool
/usr/lib64/apr-1/build/make_exports.awk
/usr/lib64/apr-1/build/make_var_export.awk
/usr/lib64/apr-1/build/mkdir.sh
/usr/lib64/libapr-1.la
/usr/lib64/libapr-1.so
/usr/lib64/pkgconfig/apr-1.pc
/usr/share/aclocal/find_apr.m4
/usr/share/doc/apr-devel-1.3.9
/usr/share/doc/apr-devel-1.3.9/APRDesign.html
/usr/share/doc/apr-devel-1.3.9/canonical_filenames.html
/usr/share/doc/apr-devel-1.3.9/incomplete_types
/usr/share/doc/apr-devel-1.3.9/non_apr_programs
http://wander312.iteye.com/blog/1132975
/usr/lib64 is already on the library path, so this step is unnecessary:
vi /etc/profile
# append the following
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64
# make the profile take effect
source /etc/profile
2. Install the Tomcat Native Connector
wget http://mirrors.cnnic.cn/apache/tomcat/tomcat-connectors/native/1.1.32/source/tomcat-native-1.1.32-src.tar.gz
tar zxvf tomcat-native-1.1.32-src.tar.gz
cd tomcat-native-1.1.32-src/jni/native
./configure --with-apr=/usr/bin/apr-1-config --with-java-home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64 --with-ssl=/usr/bin/openssl --prefix=/root/apache-tomcat-7.0.57
[root@iZ94gt8lzgnZ ~]# ls /usr/lib/jvm
java-1.7.0-openjdk-1.7.0.65.x86_64 jre jre-1.7.0 jre-1.7.0-openjdk.x86_64 jre-openjdk
[root@iZ94gt8lzgnZ native]# ls /usr/bin/open*
/usr/bin/open /usr/bin/openssl /usr/bin/openvt
3. Start Tomcat and check the log:
bin/startup.sh
head logs/catalina.out
You should see something like:
INFO: Loaded APR based Apache Tomcat Native library 1.1.14.
2009-1-13 11:12:51 org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Tomcat JVM Performance Tuning
About the JVM
Total JVM memory = young generation + old generation + permanent generation. (1536 + 512 = 2 GB)
Sun officially recommends sizing the young generation at 3/8 of the whole heap.
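As arithmetic, the 3/8 rule is easy to check (a sketch; the 1536 MB heap is just an illustration, matching the -Xmx value used later in these notes):

```shell
heap=1536                   # heap size in MB, illustrative
young=$((heap * 3 / 8))     # recommended young-generation share: 3/8 of the heap
echo "recommended young generation: ${young} MB"
```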
JVM memory layout
JVM Heap memory: Eden Space, Survivor Space, Tenured Generation/PS Old Gen
JVM Non-Heap memory: Permanent Generation, Code Cache
JVM memory layout diagram
(The diagram at http://www.jvmhost.com/articles/how-can-i-monitor-memory-usage-of-my-tomcat-jvm illustrates the JVM memory parts and the java command-line switches used to preset/limit some of them.)
Recommended NewRatio values on different platforms
https://blogs.oracle.com/jonthecollector/entry/the_second_most_important_gc
If your application has a large proportion of long lived data, the value of NewRatio may not put enough of the space into the tenured generation.
Something to add here is that the default value of NewRatio is platform dependent and runtime compiler (JIT) dependent. Below are the values as we get ready for the JDK6 release.
-server on amd64 2
-server on ia32 8
-server on sparc 2
-client on ia32 12
-client on sparc 8
IBM's Heap Analyser
G1GC
One of the new features of JDK 7:
http://dongliu.net/post/404142
The Garbage-First garbage collector (G1 GC) is the new-generation collector that will eventually replace the Concurrent Mark-Sweep garbage collector (CMS GC).
JVMs from JDK 1.6 update 14 onward already ship with G1 GC; it can be enabled with -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC.
https://blogs.oracle.com/g1gc/
JVM memory limit (maximum)
Test on the command line with java -XmxXXXXM -version, gradually increasing XXXX; if the command runs normally, the specified heap size is usable, otherwise an error is printed.
http://unixboy.iteye.com/blog/174173
Tomcat JVM Tuning Log
Tomcat configuration and status, 2015-06-16
Note: this Tomcat instance had been running since 2015-06-04; DDTService was reloaded on 2015-06-12.
$ cat /opt/tomcat-ddtservice/bin/setenv.sh
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
admin@.184:~]
$ ps -auxw --sort=%cpu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 17733 0.1 0.1 991520 10048 ? Sl May20 76:51 /usr/local/aegis/alihids/AliHids
root 18847 0.2 0.1 793852 13936 ? Sl Apr03 249:44 /usr/local/aegis/aegis_client/aegis_00_73/AliYunDun
admin 20880 0.8 33.3 5557668 2686652 ? Sl Jun04 146:14 /opt/jdk1.7.0_71/bin/java -Djava.util.logging.config.file=/opt/tomca
admin 2480 3.0 7.2 3926212 584256 ? Sl Jan06 7151:06 /opt/jdk1.7.0_71/bin/java -Djava.util.logging.config.file=/opt/apac
admin@.184:~]
$ free
total used free shared buffers cached
Mem: 8058260 7873136 185124 0 149520 4179732
-/+ buffers/cache: 3543884 4514376
Swap: 0 0 0
http://.28:8161/admin/subscribers.jsp
Active Durable Topic Subscribers
46
http://.184:8090/manager/status
Server Information
Tomcat Version:  Apache Tomcat/7.0.57
JVM Version:     1.7.0_71-b14
JVM Vendor:      Oracle Corporation
OS Name:         Linux
OS Version:      2.6.32-358.6.2.el6.x86_64
OS Architecture: amd64
Hostname:        iZ25f12r4xgZ
IP Address:      10.170.207.50
"ajp-bio-8209"
Max threads: 200 Current thread count: 0 Current thread busy: 0
Max processing time: 0 ms Processing time: 0.0 s Request count: 0 Error count: 0 Bytes received: 0.00 MB Bytes sent: 0.00 MB
"http-bio-8090"
Max threads: 500 Current thread count: 28 Current thread busy: 18
Max processing time: 1320993 ms Processing time: 24161.775 s Request count: 380932 Error count: 63077 Bytes received: 55.11 MB Bytes sent: 3862.71 MB
JVM
Free memory: 1377.64 MB Total memory: 1899.50 MB Max memory: 1899.50 MB
Memory Pool        Type             Initial     Total       Maximum     Used
PS Eden Space      Heap memory      129.00 MB   530.50 MB   674.50 MB   26.95 MB (3%)
PS Old Gen         Heap memory      341.50 MB   1365.00 MB  1365.00 MB  446.50 MB (32%)
PS Survivor Space  Heap memory      21.00 MB    4.00 MB     4.00 MB     0.00 MB (0%)
Code Cache         Non-heap memory  2.43 MB     45.37 MB    48.00 MB    44.79 MB (93%)
PS Perm Gen        Non-heap memory  128.00 MB   310.00 MB   768.00 MB   309.57 MB (40%)
Issues observed, 2015-06-16
- Too little free system memory (185 MB).
- Code Cache usage is too high, close to its limit. (See ReservedCodeCacheSize and InitialCodeCacheSize.)
- PS Perm Gen (Permanent Generation) is less than half used and will hardly grow further.
- Overall heap usage is under half.
- On servers, -Xms and -Xmx are generally set equal to avoid resizing the heap after every GC.
Adjustments, 2015-06-16
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
changed to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
Follow-up after the 2015-06-16 adjustments
2015-06-18, 22:00, after release testing completed
JVM
Free memory: 1245.49 MB Total memory: 1436.00 MB Max memory: 1436.00 MB
Memory Pool        Type             Initial     Total       Maximum     Used
PS Eden Space      Heap memory      384.00 MB   311.50 MB   311.50 MB   61.70 MB (19%)
PS Old Gen         Heap memory      1024.00 MB  1024.00 MB  1024.00 MB  121.35 MB (11%)
PS Survivor Space  Heap memory      64.00 MB    100.50 MB   100.50 MB   7.44 MB (7%)
Code Cache         Non-heap memory  2.43 MB     9.43 MB     128.00 MB   9.24 MB (7%)
PS Perm Gen        Non-heap memory  256.00 MB   256.00 MB   512.00 MB   74.74 MB (14%)
2015-06-19, 16:47, after release testing completed
JVM
Free memory: 1199.51 MB Total memory: 1508.50 MB Max memory: 1508.50 MB
Memory Pool        Type             Initial     Total       Maximum     Used
PS Eden Space      Heap memory      384.00 MB   456.50 MB   456.50 MB   53.91 MB (11%)
PS Old Gen         Heap memory      1024.00 MB  1024.00 MB  1024.00 MB  247.09 MB (24%)
PS Survivor Space  Heap memory      64.00 MB    28.00 MB    28.00 MB    7.96 MB (28%)
Code Cache         Non-heap memory  2.43 MB     19.62 MB    128.00 MB   18.88 MB (14%)
PS Perm Gen        Non-heap memory  256.00 MB   256.00 MB   512.00 MB   98.51 MB (19%)
2015-06-29 10:30 (restarted on the evening of June 19), 10 days after release
JVM
Free memory: 1180.61 MB Total memory: 1470.50 MB Max memory: 1470.50 MB
Memory Pool        Type             Initial     Total       Maximum     Used
PS Eden Space      Heap memory      384.00 MB   420.00 MB   439.00 MB   16.91 MB (3%)
PS Old Gen         Heap memory      1024.00 MB  1024.00 MB  1024.00 MB  246.81 MB (24%)
PS Survivor Space  Heap memory      64.00 MB    26.50 MB    26.50 MB    26.15 MB (98%)
Code Cache         Non-heap memory  2.43 MB     27.00 MB    128.00 MB   26.58 MB (20%)
PS Perm Gen        Non-heap memory  256.00 MB   256.00 MB   512.00 MB   121.63 MB (23%)
2015-06-29 10:33
Free memory: 972.35 MB Total memory: 1492.00 MB Max memory: 1492.00 MB
Memory Pool        Type             Initial     Total       Maximum     Used
PS Eden Space      Heap memory      384.00 MB   421.50 MB   421.50 MB   274.89 MB (65%)
PS Old Gen         Heap memory      1024.00 MB  1024.00 MB  1024.00 MB  244.74 MB (23%)
PS Survivor Space  Heap memory      64.00 MB    46.50 MB    46.50 MB    0.00 MB (0%)
Code Cache         Non-heap memory  2.43 MB     27.00 MB    128.00 MB   26.59 MB (20%)
PS Perm Gen        Non-heap memory  256.00 MB   256.00 MB   512.00 MB   121.57 MB (23%)
-Xmn2g: sets the young generation size to 2 GB.
-XX:NewRatio=4: the ratio of the young generation (Eden plus the two Survivor spaces) to the old generation (excluding the permanent generation). With 4, young:old is 1:4 and the young generation takes 1/5 of the heap.
-XX:SurvivorRatio=4: the ratio of Eden to one Survivor space within the young generation. With 4, the two Survivor spaces together relate to Eden as 2:4, and one Survivor space takes 1/6 of the young generation.
-Xss128k: the stack size of each thread. Since JDK 5.0 the default is 1 MB per thread; earlier it was 256 KB.
-XX:NewRatio=1 was chosen here because PS Eden Space fluctuated heavily while PS Old Gen barely changed, meaning many newly created objects were collected quickly; Eden should therefore be enlarged.
Tomcat: further adjustments
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=768m"
changed to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
then to:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1536m -Xmx1536m -XX:NewRatio=1 -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m -Xss256k"
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1536m -Xmx1536m -XX:NewRatio=1 -XX:SurvivorRatio=6 -XX:PermSize=256m -XX:MaxPermSize=512m -Xss256k -XX:ReservedCodeCacheSize=128m"
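Given the final flags, the expected generation sizes follow from the ratios (a sketch; pure arithmetic, assuming -Xmx1536m, -XX:NewRatio=1, -XX:SurvivorRatio=6 as above):

```shell
heap=1536                        # -Xmx1536m, in MB
young=$((heap / (1 + 1)))        # NewRatio=1 -> young : old = 1 : 1
old=$((heap - young))
survivor=$((young / (6 + 2)))    # SurvivorRatio=6 -> Eden : one Survivor = 6 : 1
eden=$((young - 2 * survivor))   # young = Eden + 2 Survivor spaces
echo "young=${young}MB old=${old}MB eden=${eden}MB survivor=${survivor}MB (x2)"
```

So roughly 768 MB young (576 MB Eden plus two 96 MB Survivor spaces) and 768 MB old generation; actual pool sizes reported by the JVM may differ slightly.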
Debugging a Tomcat/Java web application that keeps consuming CPU
Approach:
1. Use top to find the PID of the process occupying the CPU.
2. Use ps -ef | grep PID to identify the process, e.g. whether it is a Java program started by Tomcat.
3. Use ps -mp pid -o THREAD,tid,time to print per-thread CPU usage within that process.
In this case the most expensive thread was 28802, which had consumed nearly two hours of CPU time.
4. Convert the thread ID to hexadecimal: printf "%x\n" tid
5. Print the thread's stack trace: jstack pid | grep tid -A 30
Locate the offending code and check whether the function contains a code path that can loop forever.
Such problems usually sit in while or for loops.
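The steps above can be sketched as a script (the PID is hypothetical and the TID is the 28802 from the example; the jstack call is left commented out since it assumes a JDK on the PATH and a live process):

```shell
pid=20880    # hypothetical Tomcat PID, found via top / ps -ef
tid=28802    # hottest thread, from: ps -mp $pid -o THREAD,tid,time
hex_tid=$(printf "%x" "$tid")    # jstack prints native thread ids in hex (nid=0x...)
echo "looking for nid=0x${hex_tid}"
# jstack "$pid" | grep -A 30 "nid=0x${hex_tid}"   # uncomment on a live system
```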
jstack command not found on centos
[root@yfddt5Z logs]# sudo updatedb
[root@yfddt5Z logs]# locate jstack
/usr/java/jdk1.7.0_79/bin/jstack
/usr/java/jdk1.7.0_79/man/ja_JP.UTF-8/man1/jstack.1
/usr/java/jdk1.7.0_79/man/man1/jstack.1
Reference:
http://blog.csdn.net/netdevgirl/article/details/53943224 - the jstack (thread dump), jmap (memory), and jstat (performance analysis) commands