Let's say you have a benchmark written with google/benchmark for a part of your code, and now you make some changes that you think will improve the performance of that code.
The flow would be to:

1. run the benchmark for the initial code and save the results (so that you have a baseline against which you can compare the new implementation)
2. make the changes
3. run the benchmark for the new code
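For concreteness, here is a minimal sketch of what such a benchmark might look like (the `BM_foo/10` case matches the sample output shown at the end of this post; the loop body is just a stand-in for the code you actually want to measure):

```cpp
#include <benchmark/benchmark.h>

#include <vector>

// Toy benchmark: sums a vector of state.range(0) elements.
// The body is a placeholder for the code path you want to measure.
static void BM_foo(benchmark::State& state) {
  std::vector<int> v(state.range(0), 1);
  for (auto _ : state) {
    int sum = 0;
    for (int x : v) sum += x;
    // Prevent the compiler from optimizing the work away.
    benchmark::DoNotOptimize(sum);
  }
}
BENCHMARK(BM_foo)->Arg(10);  // runs as BM_foo/10

BENCHMARK_MAIN();
```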
Now you need to compare the benchmark results of the initial and new code.
In the official google/benchmark repo there is a tool that can compare the results of two benchmark runs.
But, for my taste, this tool is not very useful (who wants to waste time with Python dependencies!).
So I wrote my own tool for this in Go (gbenchdiff), based heavily on Go's benchstat. See installation instructions here.
The usage is very simple: you pass the benchmark results for the old code and the benchmark results for the new code, in JSON format. Benchmark results can be exported as JSON with the --benchmark_out=filename.json flag.
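For reference, here is an abridged sketch of what such a JSON file looks like (the values are made up and the exact set of fields depends on your google/benchmark version, but the overall shape is a context object plus a benchmarks array):

```json
{
  "context": {
    "date": "2020-01-01T10:00:00+00:00",
    "num_cpus": 8,
    "library_build_type": "release"
  },
  "benchmarks": [
    {
      "name": "BM_foo/10",
      "iterations": 165599356,
      "real_time": 4.33,
      "cpu_time": 4.32,
      "time_unit": "ns"
    }
  ]
}
```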
In order to get statistically relevant results (enough samples), you should run the benchmark multiple times (i.e. use the --benchmark_repetitions flag).
To generate the results you can do:
```
./benchmark_binary --benchmark_repetitions=10 --benchmark_out=old.json
# make the (possible) performance improvements
./benchmark_binary --benchmark_repetitions=10 --benchmark_out=new.json
```
Now you can compare the results by running:

```
gbenchdiff old.json new.json
```
This will output something like this (here are some details on what the output means):
```
real time  delta    note            old     new
---------  -----    ----            ---     ---
BM_foo/10  +11.73%  (p=0.01 n=5+5)  4.33ns  4.83ns
```
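To read the row: delta is the percent change in real time from old to new (here +11.73%, i.e. the new code is actually slower, going from 4.33ns to 4.83ns per iteration), and n=5+5 is the number of samples taken from each file. The p-value indicates how significant the difference is; since the tool is modeled on benchstat, a small p (benchstat's usual cutoff is 0.05) presumably means the change is unlikely to be just noise.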