Benchmarking
Currently we use the PkgBenchmark package for basic benchmarking of Mimi. The goal is to provide a straightforward way to run small performance checks to support design decisions and to ensure that changes made to Mimi do not significantly degrade performance.
To use these tools, use Julia 1.0 and a version of Mimi that has been ported to Julia 1.0. You will also need to work on the master branch of PkgBenchmark, as opposed to the latest tagged version. Note that you must work in your default Julia environment, as PkgBenchmark cannot yet pick up other environments. A setup for working with these tools may look like the following:
jl                        # start a Julia 1.0 REPL
] activate default        # activate the default environment
] add PkgBenchmark#master # add PkgBenchmark and check out the master branch
] add Mimi#master         # add Mimi and check out the master branch
# look at the status
] st
Status `~/.julia/dev/Mimi/default/Project.toml`
[e4e893b0] Mimi v0.5.0+ #master (https://github.com/anthofflab/Mimi.jl.git)
[32113eaa] PkgBenchmark v0.1.1+ #master (https://github.com/JuliaCI/PkgBenchmark.jl.git)
...
The benchmarking tool depends on the Mimi/benchmark/benchmarks.jl file, which tells it what code to run. Currently this script calls RegionTutorialBenchmarks.jl, which runs both the one-region and two-region tutorials. This file can be changed, but because it must be identical across the branches being compared in order to produce sensible results, do not change it without communicating with the rest of the development team. In summary, the benchmarking workflow is: run the test code on each of the two branches under consideration, save a results file for each, and then compare the two files, writing the comparison to a pre-formatted Markdown file.
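For reference, a benchmarks.jl file follows the BenchmarkTools convention that PkgBenchmark expects: it must define a top-level `BenchmarkGroup` named `SUITE`, whose `@benchmarkable` entries PkgBenchmark tunes and runs. The sketch below is illustrative only; the group names and benchmarked expression are placeholders, not the actual contents of RegionTutorialBenchmarks.jl:

```julia
# benchmark/benchmarks.jl -- illustrative sketch, not the actual Mimi file.
# PkgBenchmark expects this file to define a top-level BenchmarkGroup
# named SUITE; it tunes and runs every @benchmarkable entry in the suite.
using BenchmarkTools

const SUITE = BenchmarkGroup()

SUITE["tutorials"] = BenchmarkGroup()

# A real entry would build and run a Mimi model (e.g. the one-region
# tutorial); this placeholder just benchmarks a cheap computation.
SUITE["tutorials"]["placeholder"] = @benchmarkable sum(rand(1000))
```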
The first step is to produce the .json results files for both test runs.
using PkgBenchmark

# get first run results
benchmarkpkg("/Users/lisarennels/.julia/dev/Mimi/benchmark/benchmarks.jl", resultfile = "masterrun.json")

# change branch and get second run results
] add Mimi#branch
benchmarkpkg("/Users/lisarennels/.julia/dev/Mimi/benchmark/benchmarks.jl", resultfile = "enhancementrun.json")
The next step is to compare the two results files and output a formatted Markdown file, which presents the results and also notes any significant regressions or improvements.
export_markdown("comparison.md", judge(readresults("masterrun.json"), readresults("enhancementrun.json")))