miniterion-0.1.1.0: Simple and lightweight benchmark utilities
License: MIT
Safe Haskell: Safe-Inferred
Language: Haskell2010

Miniterion

Description

Simple benchmarking utilities with an API that is a subset of criterion (and also a subset of gauge and tasty-bench).

The goal of this package is to provide simple and lightweight benchmark utilities with a small amount of code and few dependency packages. For a robust and feature-rich benchmarking utility, use one of the other packages mentioned above.

This is the only module exposed from the miniterion package. The dependencies of miniterion are kept minimal (at the moment, base and deepseq) to keep compilation and installation times short, at the cost of some functionality and efficiency.

Synopsis

Types

data Benchmark Source #

Benchmarks are a simple tree structure with names, plus additional information to support envWithCleanup.

Drop-in replacement for Criterion.Benchmark.

data Benchmarkable Source #

Something that can be benchmarked, produced by nf, whnf, nfIO, whnfIO, nfAppIO, and whnfAppIO.

Drop-in replacement for Criterion.Benchmarkable.

Creating a benchmark suite

env Source #

Arguments

:: NFData env 
=> IO env

Action to create the environment.

-> (env -> Benchmark)

A function returning a benchmark.

-> Benchmark 

Run a benchmark (or collection of benchmarks) in the given environment, usually reading large input data from a file.

Drop-in replacement for Criterion.env.
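
For example, a large input can be built once and shared by several benchmarks. A minimal sketch (the list contents and benchmark names are illustrative, not part of the package):

  import Miniterion

  -- Build the input list once, force it to normal form, and share it
  -- between two benchmarks.
  sharedInputBench :: Benchmark
  sharedInputBench =
    env (pure [1 .. 100000 :: Int]) $ \xs ->
      bgroup "list"
        [ bench "sum"    (whnf sum xs)
        , bench "length" (whnf length xs)
        ]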

envWithCleanup Source #

Arguments

:: NFData env 
=> IO env

Action to create the environment.

-> (env -> IO a)

Action to cleanup the environment.

-> (env -> Benchmark)

A function returning a benchmark.

-> Benchmark 

Similar to env, but includes an additional argument to clean up the environment.

Drop-in replacement for Criterion.envWithCleanup.
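
A sketch of pairing setup with cleanup, assuming a temporary file; the file name "input.tmp" and the use of System.Directory.removeFile are illustrative, not part of miniterion:

  import Miniterion
  import System.Directory (removeFile)

  -- Create a file before the benchmark runs and delete it afterwards.
  fileBench :: Benchmark
  fileBench =
    envWithCleanup
      (do writeFile "input.tmp" (unlines (replicate 1000 "some input line"))
          pure "input.tmp")
      removeFile
      (\path -> bench "count-chars" (nfIO (length <$> readFile path)))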

perBatchEnv Source #

Arguments

:: (NFData env, NFData b) 
=> (Word64 -> IO env)

Action to create an environment for a batch of N runs.

-> (env -> IO b)

Benchmark body function.

-> Benchmarkable 

Create a Benchmarkable where a fresh environment is allocated for every batch of runs of the benchmarkable.

Drop-in replacement for Criterion.perBatchEnv.
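
A minimal sketch, where a fresh input list is built once per batch of n iterations (the sizes and names are illustrative):

  import Miniterion

  -- The environment is rebuilt for each batch; the body runs once per
  -- iteration within the batch.
  batchBench :: Benchmarkable
  batchBench =
    perBatchEnv
      (\n -> pure (replicate (fromIntegral n) (1 :: Int)))
      (\xs -> pure (sum xs))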

perBatchEnvWithCleanup Source #

Arguments

:: (NFData env, NFData b) 
=> (Word64 -> IO env)

Action to create an environment for a batch of N runs.

-> (Word64 -> env -> IO ())

Action to cleanup the environment.

-> (env -> IO b)

Benchmark body function.

-> Benchmarkable 

Same as perBatchEnv, but allows for an additional callback to clean up the environment.

Drop-in replacement for Criterion.perBatchEnvWithCleanup.
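
A sketch extending the perBatchEnv example with a cleanup callback that also receives the batch size; the do-nothing cleanup here is purely to show the shape of the callback:

  import Miniterion

  batchBenchWithCleanup :: Benchmarkable
  batchBenchWithCleanup =
    perBatchEnvWithCleanup
      (\n -> pure (replicate (fromIntegral n) (1 :: Int)))
      (\_n _xs -> pure ())   -- cleanup: Word64 -> env -> IO ()
      (\xs -> pure (sum xs))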

perRunEnv Source #

Arguments

:: (NFData env, NFData b) 
=> IO env

Action to create an environment for a single run.

-> (env -> IO b)

Benchmark body function.

-> Benchmarkable 

Create a Benchmarkable where a fresh environment is allocated for every run of the operation to benchmark.

Drop-in replacement for Criterion.perRunEnv.

NOTE: This function does not work well (or does not work at all) if the time spent in the initialization work is long relative to the time spent in the benchmark body function. In such cases, consider modifying the benchmark body so that it spends more elapsed time, or switch to the criterion package.
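
A sketch of per-run allocation, using an IORef so that a destructive update in the body does not leak state between runs (IORef has an NFData instance in deepseq; the counter itself is illustrative):

  import Miniterion
  import Data.IORef

  counterBench :: Benchmarkable
  counterBench =
    perRunEnv (newIORef (0 :: Int)) $ \ref -> do
      modifyIORef' ref (+ 1)   -- destructive update on a fresh ref
      readIORef ref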

perRunEnvWithCleanup Source #

Arguments

:: (NFData env, NFData b) 
=> IO env

Action to create an environment for a single run.

-> (env -> IO ())

Action to cleanup the environment.

-> (env -> IO b)

Benchmark body function.

-> Benchmarkable 

Same as perRunEnv, but allows for an additional callback to clean up the environment.

Drop-in replacement for Criterion.perRunEnvWithCleanup.

NOTE: See the note in perRunEnv.

toBenchmarkable :: (Word64 -> IO ()) -> Benchmarkable Source #

Construct a Benchmarkable value from an impure action, where the Word64 parameter indicates the number of times to run the action.

Drop-in replacement for Criterion.toBenchmarkable.
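
For instance, the iteration loop can be written by hand. A sketch (fib and the measured action are illustrative):

  import Miniterion
  import Control.Exception (evaluate)
  import Data.Word (Word64)

  fib :: Int -> Integer
  fib n = if n < 2 then toInteger n else fib (n - 1) + fib (n - 2)

  manualBench :: Benchmarkable
  manualBench = toBenchmarkable go
    where
      -- Run the measured action the requested number of times.
      go :: Word64 -> IO ()
      go 0 = pure ()
      go k = evaluate (fib 20) >> go (k - 1)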

bench Source #

Arguments

:: String

Name of this benchmark.

-> Benchmarkable

Benchmark target.

-> Benchmark 

Attach a name to a Benchmarkable.

The type signature is compatible with Criterion.bench.

bgroup Source #

Arguments

:: String

Name of this benchmark group.

-> [Benchmark]

List of benchmarks in the group.

-> Benchmark 

Attach a name to a group of Benchmarks.

The type signature is compatible with Criterion.bgroup.
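
Put together, named benchmarks and groups nest naturally. An illustrative layout:

  import Miniterion

  suite :: Benchmark
  suite =
    bgroup "strings"
      [ bench "words"   (nf words "lorem ipsum dolor sit amet")
      , bench "reverse" (nf reverse "lorem ipsum dolor sit amet")
      , bgroup "nested"
          [ bench "length" (whnf length "lorem ipsum") ]
      ]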

Running a benchmark

nf :: NFData b => (a -> b) -> a -> Benchmarkable Source #

nf f x measures time to compute a normal form (by means of rnf, not force) of an application of f to x. This does not include time to evaluate f or x themselves. Ideally x should be a primitive data type like Int.

Drop-in replacement for Criterion.nf.

whnf :: (a -> b) -> a -> Benchmarkable Source #

whnf f x measures time to compute a weak head normal form of an application of f to x. This does not include time to evaluate f or x themselves. Ideally x should be a primitive data type like Int.

Drop-in replacement for Criterion.whnf.
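
The difference between nf and whnf matters for lazy structures: in a sketch like the following, the whnf variant only forces the outermost cons cell of the result, while the nf variant forces the whole list (names and sizes are illustrative):

  import Miniterion

  lazinessBenchmarks :: [Benchmark]
  lazinessBenchmarks =
    [ bench "map-whnf" (whnf (map (+ 1)) [1 .. 1000 :: Int])
    , bench "map-nf"   (nf   (map (+ 1)) [1 .. 1000 :: Int])
    ]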

nfIO :: NFData a => IO a -> Benchmarkable Source #

nfIO x measures time to evaluate side-effects of x and compute its normal form (by means of force, not rnf).

Drop-in replacement for Criterion.nfIO.

whnfIO :: IO a -> Benchmarkable Source #

whnfIO x measures time to evaluate side-effects of x and compute its weak head normal form.

Drop-in replacement for Criterion.whnfIO.
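
A sketch contrasting the two for a lazy IO action; the path "input.txt" is illustrative, and nfIO matters here because readFile returns its contents lazily:

  import Miniterion

  readFileBenchmarks :: [Benchmark]
  readFileBenchmarks =
    [ bench "readFile-whnf" (whnfIO (readFile "input.txt"))  -- forces almost nothing
    , bench "readFile-nf"   (nfIO   (readFile "input.txt"))  -- forces the whole contents
    ]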

nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable Source #

nfAppIO f x measures time to evaluate side-effects of an application of f to x and compute its normal form (by means of force, not rnf). This does not include time to evaluate f or x themselves. Ideally x should be a primitive data type like Int.

Drop-in replacement for Criterion.nfAppIO.

whnfAppIO :: (a -> IO b) -> a -> Benchmarkable Source #

whnfAppIO f x measures time to evaluate side-effects of an application of f to x and compute its weak head normal form. This does not include time to evaluate f or x themselves. Ideally x should be a primitive data type like Int.

Drop-in replacement for Criterion.whnfAppIO.
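
These variants take the function and its argument separately, so the IO action is rebuilt from both on each iteration instead of being constructed once outside the measurement loop. A minimal sketch (the path is illustrative):

  import Miniterion

  appIOBenchmark :: Benchmark
  appIOBenchmark = bench "readFile" (nfAppIO readFile "input.txt")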

Turning a suite of benchmarks into a program

defaultMain :: [Benchmark] -> IO () Source #

Run benchmarks and report results, providing an interface compatible with Criterion.Main.defaultMain.
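
A complete benchmark executable is then just a main calling defaultMain. An illustrative sketch, following the usual cabal benchmark setup:

  module Main (main) where

  import Miniterion

  fib :: Int -> Integer
  fib n = if n < 2 then toInteger n else fib (n - 1) + fib (n - 2)

  main :: IO ()
  main = defaultMain
    [ bgroup "fib"
        [ bench "fib 10" (nf fib 10)
        , bench "fib 20" (nf fib 20)
        ]
    ]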

For interactive use

benchmark :: Benchmarkable -> IO () Source #

Run a benchmark interactively, providing an interface compatible with Criterion.benchmark.
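
For quick, one-off measurements from GHCi, something like the following works (an illustrative session):

  >>> import Miniterion
  >>> benchmark (whnf (foldl (+) 0) [1 .. 100000 :: Int])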