Copyright | (c) 2009-2014 Bryan O'Sullivan |
---|---|
License | BSD-style |
Maintainer | bos@serpentine.com |
Stability | experimental |
Portability | GHC |
Safe Haskell | Trustworthy |
Language | Haskell2010 |
Wrappers for compiling and running benchmarks quickly and easily. See defaultMain below for an example.

All of the IO-returning functions in this module initialize the timer before measuring time (refer to the documentation for initializeTime for more details).
Synopsis
- data Benchmarkable
- data Benchmark
- env :: NFData env => IO env -> (env -> Benchmark) -> Benchmark
- envWithCleanup :: NFData env => IO env -> (env -> IO a) -> (env -> Benchmark) -> Benchmark
- perBatchEnv :: (NFData env, NFData b) => (Int64 -> IO env) -> (env -> IO b) -> Benchmarkable
- perBatchEnvWithCleanup :: (NFData env, NFData b) => (Int64 -> IO env) -> (Int64 -> env -> IO ()) -> (env -> IO b) -> Benchmarkable
- perRunEnv :: (NFData env, NFData b) => IO env -> (env -> IO b) -> Benchmarkable
- perRunEnvWithCleanup :: (NFData env, NFData b) => IO env -> (env -> IO ()) -> (env -> IO b) -> Benchmarkable
- toBenchmarkable :: (Int64 -> IO ()) -> Benchmarkable
- bench :: String -> Benchmarkable -> Benchmark
- bgroup :: String -> [Benchmark] -> Benchmark
- nf :: NFData b => (a -> b) -> a -> Benchmarkable
- whnf :: (a -> b) -> a -> Benchmarkable
- nfIO :: NFData a => IO a -> Benchmarkable
- whnfIO :: IO a -> Benchmarkable
- nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable
- whnfAppIO :: (a -> IO b) -> a -> Benchmarkable
- defaultMain :: [Benchmark] -> IO ()
- defaultMainWith :: Config -> [Benchmark] -> IO ()
- defaultConfig :: Config
- makeMatcher :: MatchType -> [String] -> Either String (String -> Bool)
- runMode :: Mode -> [Benchmark] -> IO ()
How to write benchmarks
The Benchmarkable type is a container for code that can be benchmarked. The value inside must run a benchmark the given number of times. We are most interested in benchmarking two things:

- IO actions. Most IO actions can be benchmarked directly.
- Pure functions. GHC optimises aggressively when compiling with -O, so it is easy to write innocent-looking benchmark code that doesn't measure the performance of a pure function at all. We work around this by benchmarking both a function and its final argument together.
Benchmarking IO actions
Most IO actions can be benchmarked easily using one of the following two functions:

    nfIO :: NFData a => IO a -> Benchmarkable
    whnfIO :: IO a -> Benchmarkable
In certain corner cases, you may find it useful to use the following variants, which take the input as a separate argument:
    nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable
    whnfAppIO :: (a -> IO b) -> a -> Benchmarkable

This is useful when the bulk of the work performed by the function is not bound by IO, but rather by pure computations that may optimize away if the argument is known statically, as in nfIO/whnfIO.
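For instance, here is a minimal, hedged sketch of a case where the App variants matter; the helper expensive is our own invention, not part of criterion:

    import Criterion.Main

    -- pure work driven by the argument; with a statically known
    -- argument, GHC may partially evaluate or share this work
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    main :: IO ()
    main = defaultMain
      [ -- risky: the pure work may be optimised away or shared
        bench "nfIO"    $ nfIO (return (expensive 10000) :: IO Int)
        -- safer: the argument is supplied separately, as with nf/whnf
      , bench "nfAppIO" $ nfAppIO (\n -> return (expensive n) :: IO Int) 10000
      ]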
Benchmarking pure code
Because GHC optimises aggressively when compiling with -O, it is potentially easy to write innocent-looking benchmark code that will only be evaluated once, for which all but the first iteration of the timing loop will be timing the cost of doing nothing.

To work around this, we provide two functions for benchmarking pure code. The first will cause results to be fully evaluated to normal form (NF):

    nf :: NFData b => (a -> b) -> a -> Benchmarkable

The second will cause results to be evaluated to weak head normal form (the Haskell default):

    whnf :: (a -> b) -> a -> Benchmarkable
As both of these types suggest, when you want to benchmark a function, you must supply two values:
- The first element is the function, saturated with all but its last argument.
- The second element is the last argument to the function.
Here is an example that makes the use of these functions clearer. Suppose we want to benchmark the following function:

    firstN :: Int -> [Int]
    firstN k = take k [(0::Int)..]

So in the easy case, we construct a benchmark as follows:

    nf firstN 1000
Fully evaluating a result
The whnf harness for evaluating a pure function only evaluates the result to weak head normal form (WHNF). If you need the result evaluated all the way to normal form, use the nf function to force its complete evaluation.

Using the firstN example from earlier, to naive eyes it might appear that the following code ought to benchmark the production of the first 1000 list elements:

    whnf firstN 1000

Since we are using whnf, in this case the result will only be forced until it reaches WHNF, so what this would actually benchmark is merely how long it takes to produce the first list element!
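Putting the pieces together, a minimal runnable program contrasting the two harnesses on firstN:

    import Criterion.Main

    firstN :: Int -> [Int]
    firstN k = take k [(0 :: Int) ..]

    main :: IO ()
    main = defaultMain
      [ bench "whnf" $ whnf firstN 1000 -- forces only the first cons cell
      , bench "nf"   $ nf   firstN 1000 -- forces all 1000 elements
      ]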
Types
data Benchmarkable #
A pure function or impure action that can be benchmarked. The Int64 parameter indicates the number of times to run the given function or action.
data Benchmark #

Specification of a collection of benchmarks and environments. A benchmark may consist of:

- An environment that creates input data for benchmarks, created with env.
- A single Benchmarkable item with a name, created with bench.
- A (possibly nested) group of Benchmarks, created with bgroup.
Creating a benchmark suite
env

:: NFData env | |
=> IO env | Create the environment. The environment will be evaluated to normal form before being passed to the benchmark. |
-> (env -> Benchmark) | Take the newly created environment and make it available to the given benchmarks. |
-> Benchmark |
Run a benchmark (or collection of benchmarks) in the given environment. The purpose of an environment is to lazily create input data to pass to the functions that will be benchmarked.
A common example of environment data is input that is read from a file. Another is a large data structure constructed in-place.
Motivation. In earlier versions of criterion, all benchmark inputs were always created when a program started running. By deferring the creation of an environment until its associated benchmarks need it, we avoid two problems that this strategy caused:
- Memory pressure distorted the results of unrelated benchmarks. If one benchmark needed e.g. a gigabyte-sized input, it would force the garbage collector to do extra work when running some other benchmark that had no use for that input. Since the data created by an environment is only available when it is in scope, it should be garbage collected before other benchmarks are run.
- The time cost of generating all needed inputs could be significant in cases where no inputs (or just a few) were really needed. This occurred often, for instance when just one out of a large suite of benchmarks was run, or when a user would list the collection of benchmarks without running any.
Creation. An environment is created right before its related benchmarks are run. The IO action that creates the environment is run, then the newly created environment is evaluated to normal form (hence the NFData constraint) before being passed to the function that receives the environment.
Complex environments. If you need to create an environment that contains multiple values, simply pack the values into a tuple.
Lazy pattern matching. In situations where a "real" environment is not needed, e.g. if a list of benchmark names is being generated, a value which throws an exception will be passed to the function that receives the environment. This avoids the overhead of generating an environment that will not actually be used.
The function that receives the environment must use lazy pattern matching to deconstruct the tuple (e.g., ~(x, y), not (x, y)), as use of strict pattern matching will cause a crash if an exception-throwing value is passed in.
Example. This program runs benchmarks in an environment that contains two values. The first value is the contents of a text file; the second is a string. Pay attention to the use of a lazy pattern to deconstruct the tuple in the function that returns the benchmarks to be run.
    setupEnv = do
      let small = replicate 1000 (1 :: Int)
      big <- map length . words <$> readFile "/usr/dict/words"
      return (small, big)

    main = defaultMain [
       -- notice the lazy pattern match here!
       env setupEnv $ \ ~(small,big) -> bgroup "main"
       [ bgroup "small"
         [ bench "length" $ whnf length small
         , bench "length . filter" $ whnf (length . filter (==1)) small
         ]
       , bgroup "big"
         [ bench "length" $ whnf length big
         , bench "length . filter" $ whnf (length . filter (==1)) big
         ]
       ] ]
Discussion. The environment created in the example above is intentionally not ideal. As Haskell's scoping rules suggest, the variable big is in scope for the benchmarks that use only small. It would be better to create a separate environment for big, so that it will not be kept alive while the unrelated benchmarks are being run, as sketched below.
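A minimal sketch of that improved structure, under the same setup as above (the name setupBig is ours, purely for illustration):

    import Criterion.Main

    main :: IO ()
    main = defaultMain
      [ bgroup "small"
          [ bench "length" $ whnf length small
          , bench "length . filter" $ whnf (length . filter (==1)) small
          ]
        -- big now lives in its own environment, created only when the
        -- "big" group actually runs and collectable afterwards
      , env setupBig $ \big -> bgroup "big"
          [ bench "length" $ whnf length big
          , bench "length . filter" $ whnf (length . filter (==1)) big
          ]
      ]
      where
        small = replicate 1000 (1 :: Int)
        setupBig = map length . words <$> readFile "/usr/dict/words"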
perBatchEnv

:: (NFData env, NFData b) | |
=> (Int64 -> IO env) | Create an environment for a batch of N runs. The environment will be evaluated to normal form before running. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Create a Benchmarkable where a fresh environment is allocated for every batch of runs of the benchmarkable.
The environment is evaluated to normal form before the benchmark is run.
When using whnf, whnfIO, etc., criterion creates a Benchmarkable which runs a batch of N repeated runs of that expression. Criterion may run any number of these batches to get accurate measurements. Environments created by env and envWithCleanup are shared across all these batches of runs.

This is fine for simple benchmarks on static input, but when benchmarking IO operations that can modify (and especially grow) the environment, later batches might have their accuracy affected by, for example, longer garbage collection pauses.
An example: suppose we want to benchmark writing to a Chan. If we allocate the Chan using the environment and our benchmark consists of writeChan env (), the contents, and thus the size, of the Chan will grow with every repeat. If criterion runs 1,000 batches of 1,000 repeats, the channel will have 999,000 items in it by the time the last batch is run. Since GHC's garbage collector has to copy the live set on every major GC, our last set of writes will suffer a lot of noise from the previous repeats.

By allocating a fresh environment for every batch of runs, this function should eliminate this effect.
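The following hedged sketch shows this shape. Chan has no NFData instance, so the ChanEnv wrapper (our own, not part of criterion) supplies the trivial instance that perBatchEnv requires:

    import Control.Concurrent.Chan (Chan, newChan, writeChan)
    import Control.DeepSeq (NFData (..))
    import Criterion.Main

    -- forcing a Chan to WHNF is all we can do, and all that is needed
    newtype ChanEnv = ChanEnv (Chan ())

    instance NFData ChanEnv where
      rnf (ChanEnv c) = c `seq` ()

    main :: IO ()
    main = defaultMain
      [ bench "writeChan" $
          perBatchEnv
            (\_batchSize -> ChanEnv <$> newChan)   -- fresh Chan per batch
            (\(ChanEnv chan) -> writeChan chan ()) -- the timed action
      ]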
perBatchEnvWithCleanup

:: (NFData env, NFData b) | |
=> (Int64 -> IO env) | Create an environment for a batch of N runs. The environment will be evaluated to normal form before running. |
-> (Int64 -> env -> IO ()) | Clean up the created environment. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Same as perBatchEnv, but allows for an additional callback to clean up the environment. Resource cleanup is exception safe; that is, it runs even if the Benchmark throws an exception.
perRunEnv

:: (NFData env, NFData b) | |
=> IO env | Action that creates the environment for a single run. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Create a Benchmarkable where a fresh environment is allocated for every run of the operation to benchmark. This is useful for benchmarking mutable operations that need a fresh environment, such as sorting a mutable Vector.
As with env and perBatchEnv, the environment is evaluated to normal form before the benchmark is run.

This introduces extra noise and results in reduced accuracy compared to other criterion benchmarks, but it allows easier benchmarking of mutable operations than was previously possible.
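Here is a hedged sketch of the mutable Vector case mentioned above. The MutEnv wrapper and the hand-rolled insertionSort are ours, purely illustrative (a real suite would likely use vector-algorithms for the sort):

    import Control.DeepSeq (NFData (..))
    import Criterion.Main
    import qualified Data.Vector as V
    import qualified Data.Vector.Mutable as MV

    -- wrapper supplying the NFData instance perRunEnv requires;
    -- forcing the mutable vector's structure to WHNF is all we can do
    newtype MutEnv = MutEnv (MV.IOVector Int)

    instance NFData MutEnv where
      rnf (MutEnv v) = v `seq` ()

    -- a simple in-place insertion sort, standing in for the mutable
    -- operation under test
    insertionSort :: MV.IOVector Int -> IO ()
    insertionSort v = mapM_ insert [1 .. MV.length v - 1]
      where
        insert i = do
          x <- MV.read v i
          let shiftRight j
                | j <= 0 = return j
                | otherwise = do
                    y <- MV.read v (j - 1)
                    if y > x
                      then MV.write v j y >> shiftRight (j - 1)
                      else return j
          j <- shiftRight i
          MV.write v j x

    main :: IO ()
    main = defaultMain
      [ bench "insertionSort" $
          perRunEnv
            (MutEnv <$> V.thaw (V.fromList [1000, 999 .. 1 :: Int]))
            (\(MutEnv vec) -> insertionSort vec) -- fresh, unsorted vector per run
      ]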
toBenchmarkable :: (Int64 -> IO ()) -> Benchmarkable #
Construct a Benchmarkable value from an impure action, where the Int64 parameter indicates the number of times to run the action.
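For instance, a hedged sketch of a hand-rolled Benchmarkable that performs an IORef update the requested number of times:

    import Control.Monad (replicateM_)
    import Criterion.Main
    import Data.IORef (modifyIORef', newIORef)

    main :: IO ()
    main = defaultMain
      [ bench "modifyIORef'" $ toBenchmarkable $ \n -> do
          ref <- newIORef (0 :: Int)
          -- run the unit of work exactly n times, as the contract requires
          replicateM_ (fromIntegral n) (modifyIORef' ref (+ 1))
      ]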
bench

:: String | A name to identify the benchmark. |
-> Benchmarkable | An activity to be benchmarked. |
-> Benchmark |
Create a single benchmark.
bgroup

:: String | A name to identify the group of benchmarks. |
-> [Benchmark] | Benchmarks to group under this name. |
-> Benchmark |
Group several benchmarks together under a common name.
Running a benchmark
nf :: NFData b => (a -> b) -> a -> Benchmarkable #
Apply an argument to a function, and evaluate the result to normal form (NF).
whnf :: (a -> b) -> a -> Benchmarkable #
Apply an argument to a function, and evaluate the result to weak head normal form (WHNF).
nfIO :: NFData a => IO a -> Benchmarkable #

Perform an action, then evaluate its result to normal form (NF).
whnfIO :: IO a -> Benchmarkable #
Perform an action, then evaluate its result to weak head normal form (WHNF). This is useful for forcing an IO action whose result is an expression to be evaluated down to a more useful value.

If the construction of the IO a value is an important factor in the benchmark, it is best to use whnfAppIO instead.
nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable #
Apply an argument to a function which performs an action, then evaluate its result to normal form (NF).

This function constructs the IO b value on each iteration, similar to nf.

This is particularly useful for IO actions where the bulk of the work is not bound by IO, but by pure computations that may optimize away if the argument is known statically, as in nfIO.
whnfAppIO :: (a -> IO b) -> a -> Benchmarkable #
Apply an argument to a function which performs an action, then evaluate its result to weak head normal form (WHNF).

This function constructs the IO b value on each iteration, similar to whnf.

This is particularly useful for IO actions where the bulk of the work is not bound by IO, but by pure computations that may optimize away if the argument is known statically, as in whnfIO.
Turning a suite of benchmarks into a program
defaultMain :: [Benchmark] -> IO () Source #
An entry point that can be used as a main function.

    import Criterion.Main

    fib :: Int -> Int
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

    main = defaultMain [
      bgroup "fib" [ bench "10" $ whnf fib 10
                   , bench "35" $ whnf fib 35
                   , bench "37" $ whnf fib 37
                   ]
      ]
defaultMainWith :: Config -> [Benchmark] -> IO () Source #
An entry point that can be used as a main function, with configurable defaults.

Example:

    import Criterion.Main.Options
    import Criterion.Main

    myConfig = defaultConfig {
                 -- Resample 10 times for bootstrapping
                 resamples = 10
               }

    main = defaultMainWith myConfig [
             bench "fib 30" $ whnf fib 30
           ]
If you save the above example as "Fib.hs", you should be able to compile it as follows:

    ghc -O --make Fib

Run "Fib --help" on the command line to get a list of command line options.
defaultConfig :: Config Source #
Default benchmarking configuration.
Other useful code
makeMatcher :: MatchType -> [String] -> Either String (String -> Bool) Source #

Create a function that can tell if a name given on the command line matches a benchmark.
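A small, hedged usage sketch, assuming the Prefix constructor of MatchType from Criterion.Main.Options:

    import Criterion.Main (makeMatcher)
    import Criterion.Main.Options (MatchType (Prefix))

    main :: IO ()
    main =
      case makeMatcher Prefix ["fib/"] of
        Left err      -> putStrLn err -- a pattern failed to parse
        Right matches -> print (filter matches ["fib/10", "fact/10"])
        -- prints ["fib/10"]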