Preface
Last week a colleague wrote some test code for ConcurrentHashMap and told me the map ran out of memory after he had put only 32 elements into it. I looked over his code and the JVM parameters he ran with, found it odd, and decided to dig in myself. First, the code:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapTest {
    public static void main(String[] args) {
        System.out.println("Before allocate map, free memory is "
                + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
        Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000);
        System.out.println("After allocate map, free memory is "
                + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
        int i = 0;
        try {
            while (i < 1000000) {
                System.out.println("Before put the " + (i + 1) + " element, free memory is "
                        + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
                map.put(String.valueOf(i), String.valueOf(i));
                System.out.println("After put the " + (i + 1) + " element, free memory is "
                        + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
                i++;
            }
        } catch (Exception e) {
            e.printStackTrace();
        } catch (Throwable t) {
            t.printStackTrace();
        } finally {
            System.out.println("map size is " + map.size());
        }
    }
}
```
Run it with the JVM parameters -Xms512m -Xmx512m, and the output is:
```
Before allocate map, free memory is 120M
After allocate map, free memory is 121M
Before put the 1 element, free memory is 121M
After put the 1 element, free memory is 121M
Before put the 2 element, free memory is 121M
After put the 2 element, free memory is 122M
Before put the 3 element, free memory is 122M
After put the 3 element, free memory is 122M
Before put the 4 element, free memory is 122M
After put the 4 element, free memory is 122M
Before put the 5 element, free memory is 122M
After put the 5 element, free memory is 114M
Before put the 6 element, free memory is 114M
java.lang.OutOfMemoryError: Java heap space
map size is 5
	at java.util.concurrent.ConcurrentHashMap.ensureSegment(Unknown Source)
	at java.util.concurrent.ConcurrentHashMap.put(Unknown Source)
	at com.j.u.c.tj.MapTest.main(MapTest.java:17)
```

(The "map size is 5" line from the finally block is interleaved with the stack trace because stdout and stderr are flushed independently.)
The original code had none of the log statements above, so at the time it was baffling: why would putting just a handful of elements into the map throw an OutOfMemoryError?
So I added the logging shown above and found that constructing the map had already eaten over two hundred megabytes of the heap, and that the put threw an OutOfMemoryError even though there was still free memory right before it. That made it even stranger: it is understandable that initializing the map takes some space, but why would putting one tiny element into it trigger an OutOfMemoryError?
The troubleshooting process
1. Step one: change Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000); in the code above to Map<String, String> map = new ConcurrentHashMap<String, String>();. This time the program ran normally. (This did not reveal the root cause, but it does suggest two rules of thumb for ConcurrentHashMap: first, prefer not to pass an initial capacity at all; second, if you do need one, pick a reasonably sized value.)
2. Step two: run with the JVM parameters -Xms1024m -Xmx1024m. The same failure still occurred.
3. Step three: analyze the ConcurrentHashMap source code (the segment-based implementation from JDK 7). First, its structure: a ConcurrentHashMap is made up of multiple Segments (each Segment holds its own lock, which is what makes ConcurrentHashMap thread-safe, though that is not the focus here), and each Segment contains an array of HashEntry objects. The problem lies in that HashEntry array.
4. Step four: look at ConcurrentHashMap's constructor. It eagerly initializes Segment[0]'s HashEntry array with a length of cap, which in this case works out to 67,108,864.
How cap is computed (you can step through the constructor in a debugger to verify):
1) initialCapacity is 2,000,000,000, while MAXIMUM_CAPACITY (the largest capacity ConcurrentHashMap supports) is 1 << 30, i.e. 2^30 = 1,073,741,824. Since initialCapacity exceeds MAXIMUM_CAPACITY, it is clamped to initialCapacity = 1,073,741,824.
2) c is computed as initialCapacity / ssize = 1,073,741,824 / 16 = 67,108,864 (ssize is 16 under the default concurrencyLevel of 16).
3) cap is the smallest power of two greater than or equal to c, which is again 67,108,864, since c is already a power of two (2^26).
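The sizing arithmetic above can be reproduced in isolation. Below is a small sketch that mirrors the JDK 7 constructor's computation; the class and method names are my own, not from the JDK:

```java
public class CapCalc {
    static final int MAXIMUM_CAPACITY = 1 << 30;
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;

    // Mirrors the sizing logic of the JDK 7 ConcurrentHashMap constructor.
    static int segmentTableCapacity(int initialCapacity, int concurrencyLevel) {
        int ssize = 1;                              // number of segments, a power of two
        while (ssize < concurrencyLevel)
            ssize <<= 1;                            // default concurrencyLevel 16 -> ssize 16
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;     // 2,000,000,000 clamps to 1,073,741,824
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;                                    // round up if the division truncated
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;                              // first power of two >= c
        return cap;
    }

    public static void main(String[] args) {
        System.out.println(segmentTableCapacity(2000000000, 16)); // prints 67108864
    }
}
```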
```java
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    while (cap < c)
        cap <<= 1;
    // create segments and segments[0]
    Segment<K,V> s0 =
        new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                         (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
```
5. Step five: look at ConcurrentHashMap's put method. When putting an element:
1) It computes the hash of the key and checks whether the segment that key maps to has already been initialized; if it has, the put proceeds on that segment.
2) If it has not, ensureSegment() is called, which allocates a new HashEntry array whose length is the same as that of the first initialized segment's HashEntry array (Segment[0] is used as the prototype).
```java
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
            == null) { // recheck
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                   == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}
```
6. At this point the cause is pinned down: the put created a HashEntry array of length 67,108,864, and that array alone occupies 67,108,864 * 4 bytes = 256M, which matches the test output above. Why does an array of null references take so much space? Consider what array allocation actually does: it creates, on the heap, an array of 67,108,864 HashEntry reference slots, and each reference (all of them initially null) occupies 4 bytes on a 32-bit JVM, or on a 64-bit JVM with compressed oops.
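The 256M figure is easy to double-check. A minimal sketch of the arithmetic, assuming 4-byte references (32-bit JVM or 64-bit with compressed oops) and ignoring the small fixed array header:

```java
public class ArraySizeCalc {
    public static void main(String[] args) {
        long length = 67_108_864L;       // HashEntry[] length allocated by ensureSegment
        long bytesPerRef = 4;            // reference size with compressed oops
        long megabytes = length * bytesPerRef / (1024 * 1024);
        System.out.println(megabytes + "M"); // prints 256M
    }
}
```

So each uninitialized segment that put touches tries to allocate another 256M table, which a 512M heap cannot accommodate.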
Problem summary
1. When constructing a ConcurrentHashMap, specify a sensible initial capacity. (That said, in a small test of my own, specifying an initial capacity did not show any measurable performance gain.)
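As a rough guideline (my own sketch, not from the original test; the names buildMap, expectedEntries, and toInsert are invented for illustration): size the map for the number of entries you actually expect, not a worst-case upper bound. With JDK 7's default 16 segments, an expected size of 1,000,000 gives each segment a starting table of 65,536 slots, roughly 256K of references per segment instead of 256M:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SizingExample {
    // Build a map sized for the number of entries we actually expect to insert.
    static Map<String, String> buildMap(int expectedEntries, int toInsert) {
        Map<String, String> map = new ConcurrentHashMap<String, String>(expectedEntries);
        for (int i = 0; i < toInsert; i++)
            map.put(String.valueOf(i), String.valueOf(i));
        return map;
    }

    public static void main(String[] args) {
        // 1,000,000 expected entries: each of the 16 default segments starts at
        // 1,000,000 / 16 rounded up to a power of two = 65,536 slots.
        System.out.println(buildMap(1000000, 1000).size()); // prints 1000
    }
}
```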