A Few Notes on Java Microbenchmarking


Everyone already knows how much unit tests contribute to code quality, so there is nothing more to say about that. Microbenchmarking is another important, and easily overlooked, way to safeguard code quality. It measures the performance of small pieces of code, and a well-maintained set of microbenchmarks makes it much easier to pin down performance bottlenecks. Like unit tests, microbenchmarks can run in continuous integration, so that when the code changes, the CI history shows exactly where performance was affected.

I previously used Google's Caliper, but it is still under heavy development: the API changes significantly between releases and quite a few parts are not yet stable, so I have set it aside for now.

JUnitBenchmarks

Here I will focus on using JUnitBenchmarks in practice: it is simple to use and produces intuitive charts.

Example:

Add the dependency:

   <dependency>
        <groupId>com.carrotsearch</groupId>
        <artifactId>junit-benchmarks</artifactId>
        <scope>test</scope>
        <version>0.7.0</version>
   </dependency> 

@BenchmarkMethodChart(filePrefix = "target/PinyinConvertersBenchmark")  // output path and file-name prefix for the per-method report
@BenchmarkHistoryChart(filePrefix = "target/PinyinConvertersBenchmark-history", labelWith = LabelType.CUSTOM_KEY, maxRuns = 20)  // settings for the history report
public class PinyinConvertersBenchmark extends AbstractBenchmark {
    final static Random random = new Random();

    final static HanyuPinyinOutputFormat hanyuPinyinOutputFormat = SimplePinyinConverter.getInstance()
                                                                                    .getDefaultPinyinFormat()
                                                                                    .getPinyin4jOutputFormat();

    @AfterClass
    public static void after() {
        CachedPinyinConverter cachedPinyinConverter = (CachedPinyinConverter) PinyinConverterFactory.CACHED_DEFAULT.get();
        cachedPinyinConverter.dumpCacheInfo(System.out);
        CachedConvertAccess.clear(cachedPinyinConverter);
    }

    // 200,000 benchmark rounds in total, plus 5 warmup rounds
    @Test
    @BenchmarkOptions(benchmarkRounds = 200000, warmupRounds = 5, clock = Clock.NANO_TIME)
    public void pinyinConverters_ConvertOneStr_CN() throws ConverterException {
        PinyinConverters.toPinyin("我們對發動過", "");
    }

    @Test
    @BenchmarkOptions(benchmarkRounds = 200000, warmupRounds = 5, clock = Clock.NANO_TIME)
    public void pinyin4j_ConvertOneStr_CN() throws BadHanyuPinyinOutputFormatCombination {
        PinyinHelper.toHanyuPinyinString("我們對發動過", hanyuPinyinOutputFormat, "");
    }

    // run with 100 concurrent threads
    @Test
    @BenchmarkOptions(benchmarkRounds = 200000, warmupRounds = 5, concurrency = 100, clock = Clock.NANO_TIME)
    public void pinyinConverters_ConvertOneStr_100Threads_CN() throws ConverterException {
        pinyinConverters_ConvertOneStr_CN();
    }
}

Then just run it as an ordinary unit test.

If you need to generate the reports:

1. Run with the JVM arguments -Djub.consumers=CONSOLE,H2 -Djub.db.file=./target/.benchmarks

The jub.db.file path is whatever you choose. (If you would rather not pass JVM arguments on the command line, see the runner sketched after the H2 dependency below.)

2. You also need to add the H2 dependency:

    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.3.170</version>
        <scope>test</scope>
    </dependency>
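If it is more convenient to launch the benchmark from code (for example from an IDE run configuration) than to edit JVM arguments, the same properties can be set programmatically before the benchmark runs. This is only a minimal sketch: it assumes junit-benchmarks picks up jub.consumers and jub.db.file from system properties when its consumers are initialized, and it reuses the PinyinConvertersBenchmark class from the example above (assumed to be in the same package).

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class BenchmarkRunner {
    public static void main(String[] args) {
        // Equivalent to -Djub.consumers=CONSOLE,H2 -Djub.db.file=./target/.benchmarks
        // (assumption: junit-benchmarks reads these when it initializes its consumers)
        System.setProperty("jub.consumers", "CONSOLE,H2");
        System.setProperty("jub.db.file", "./target/.benchmarks");

        // Run the benchmark class exactly like a normal JUnit test class
        Result result = JUnitCore.runClasses(PinyinConvertersBenchmark.class);
        System.out.println("Failures: " + result.getFailureCount());
    }
}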

After the run you will find an HTML report like the one below in the configured report directory; it compares total rounds, total time, per-round time for each method, GC count and GC time, and other data:

[benchmark report chart]

Shortcomings

JUnitBenchmarks also has some shortcomings. The reports and feature set are still limited, so it is only suitable for fairly simple microbenchmarks, and concurrent tests (for example with concurrency = 100) often fail; I have reported the bug and the author says it will be fixed as soon as possible.

There is also no ready-made Jenkins integration plugin yet. Still, given that JUnitBenchmarks is only at the alpha stage, what it already does is quite good.

Other Microbenchmark Frameworks

Below are some other microbenchmark frameworks, listed without detailed introductions; if you are interested, take your time to study them and pick whichever fits your needs.

JMH

From Oracle.

http://assylias.wordpress.com/2013/05/06/java-micro-benchmark-with-jmh-and-netbeans/

https://github.com/nitsanw/jmh-samples
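To give a feel for the API, here is a minimal sketch of what a JMH benchmark looks like. It uses the current annotation API (@Benchmark and friends); early releases used different annotations, so check the version you actually depend on. The class name and the measured operation are made up purely for illustration.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)          // report the average time per invocation
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)                   // warmup iterations, similar to warmupRounds above
@Measurement(iterations = 10)
@Fork(1)                                  // run in one forked JVM
@State(Scope.Thread)
public class StringConcatBenchmark {      // illustrative name, not from this article

    @Benchmark
    public String concat() {
        // the measured code; return the result so the JIT cannot eliminate it
        return "hello" + System.nanoTime();
    }
}

JMH benchmarks are normally run through JMH's own runner (for example the org.openjdk.jmh.Main class or the Runner API) rather than as JUnit tests; see the links above for the setup details.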

Japex

Requires XML configuration; the configuration looks a bit involved at first glance, but the charts are comprehensive.

https://japex.java.net/docs/manual.html

Benchmarking framework

http://www.ellipticgroup.com/misc/projectLibrary.zip

Create quick/reliable benchmarks with Java

not parameterizable; Java library; JVM micro benchmarking; no plotting; no persistence; no trend analysis; statistics.

Commons monitoring

not parameterizable!?; Java library; no JVM micro benchmarking!?; plotting; persistence through a servlet; no trend analysis!?; no statistics!?.

Supports AOP instrumentation.

JAMon

not parameterizable; Java library; no JVM micro benchmarking; plotting, persistence and trend analysis with additional tools (Jarep or JMX); statistics.

Good monitoring, intertwined with log4j, data can also be programmatically accessed or queried and your program can take actions on the results.
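To illustrate the programmatic access mentioned above, a minimal JAMon sketch might look like the following; the monitor label is arbitrary and the Thread.sleep call merely stands in for the code being measured.

import com.jamonapi.Monitor;
import com.jamonapi.MonitorFactory;

public class JamonExample {
    public static void main(String[] args) throws Exception {
        // start/stop a monitor around the code being measured
        Monitor mon = MonitorFactory.start("pinyin.convert");   // label is arbitrary
        Thread.sleep(10);                                        // stand-in for the measured work
        mon.stop();

        // the collected statistics can be read back programmatically
        System.out.println("hits=" + mon.getHits() + " avg=" + mon.getAvg() + " max=" + mon.getMax());
    }
}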

Java Simon

not parameterizable!?; Java library; no JVM micro benchmarking; plotting only with Jarep; persistence only with JMX; no trend analysis; no statistics!?.

Competitor of Jamon, supports a hierarchy of monitors.
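A minimal Java Simon sketch, assuming the usual Stopwatch/Split API; the hierarchical name is arbitrary and Thread.sleep stands in for the measured code.

import org.javasimon.SimonManager;
import org.javasimon.Split;
import org.javasimon.Stopwatch;

public class SimonExample {
    public static void main(String[] args) throws Exception {
        // names are hierarchical ("group.subgroup.name"), which gives the monitor tree
        Stopwatch stopwatch = SimonManager.getStopwatch("pinyin.convert");

        Split split = stopwatch.start();   // one measured interval
        Thread.sleep(10);                  // stand-in for the measured work
        split.stop();

        System.out.println(stopwatch);     // prints counter and timing totals
    }
}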

JETM

not parameterizable; Java library; JVM micro benchmarking; plotting; persistence; no trend analysis; no statistics.

Nice lightweight monitoring tool, no dependencies :) Does not offer sufficient statistics (no standard deviation), and extending the plugin accordingly looks quite difficult (Aggregators and Aggregates only have fixed getters for min, max and average).
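A minimal JETM sketch using programmatic configuration; the point name is arbitrary and Thread.sleep stands in for the measured code.

import etm.core.configuration.BasicEtmConfigurator;
import etm.core.configuration.EtmManager;
import etm.core.monitor.EtmMonitor;
import etm.core.monitor.EtmPoint;
import etm.core.renderer.SimpleTextRenderer;

public class JetmExample {
    public static void main(String[] args) throws Exception {
        // programmatic setup; XML and Spring configuration also exist
        BasicEtmConfigurator.configure();
        EtmMonitor monitor = EtmManager.getEtmMonitor();
        monitor.start();

        EtmPoint point = monitor.createPoint("pinyin.convert");  // arbitrary point name
        try {
            Thread.sleep(10);                                     // stand-in for the measured work
        } finally {
            point.collect();
        }

        monitor.render(new SimpleTextRenderer());                 // dump min/max/average to stdout
        monitor.stop();
    }
}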

junitperf

Mainly for doing trend analysis for performance (with the JUnit test decorator TimedTest) and scalability (with the JUnit test decorator LoadTest).

parameterizable; Java library; no JVM micro benchmarking; no plotting; no persistence; no statistics.
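A minimal junitperf sketch built on its JUnit 3 style decorators; ExampleTest and the thresholds here are made up for illustration.

import junit.framework.Test;
import junit.framework.TestCase;
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;

public class PerfDecoratorExample {

    // a made-up JUnit 3 test case standing in for the code under test
    public static class ExampleTest extends TestCase {
        public ExampleTest(String name) { super(name); }
        public void testConvert() throws Exception {
            Thread.sleep(5);
        }
    }

    public static Test suite() {
        Test convert = new ExampleTest("testConvert");
        // fail if a run takes longer than 50 ms (performance / trend check)
        Test timed = new TimedTest(convert, 50);
        // run the timed test with 10 concurrent users (scalability check)
        return new LoadTest(timed, 10);
    }
}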

perf4j

not parameterizable; Java library; no JVM micro benchmarking; plotting; persistence via JMX; trend analysis via a log4j appender; statistics.

Builds upon a logging framework, can use AOP.
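A minimal perf4j sketch; LoggingStopWatch writes timing messages that the logging/AOP integrations mentioned above can aggregate, and the tag is arbitrary.

import org.perf4j.LoggingStopWatch;
import org.perf4j.StopWatch;

public class Perf4jExample {
    public static void main(String[] args) throws Exception {
        // LoggingStopWatch prints to System.err by default; subclasses such as
        // Log4JStopWatch route the same messages through the logging framework
        StopWatch stopWatch = new LoggingStopWatch("pinyin.convert");   // arbitrary tag
        Thread.sleep(10);                                               // stand-in for the measured work
        stopWatch.stop();                                               // logs the tag, start time and elapsed time
    }
}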

