hyperion: Reliable performance measurement with robust data export.

[ benchmarking, bsd3, library, program ]

Please see README.md



Versions [RSS] 0.1.0.0
Dependencies aeson (>=0.11), ansi-wl-pprint, base (>=4.9 && <5), bytestring (>=0.10), clock (>=0.7.2), containers (>=0.5), deepseq (>=1.4), directory, exceptions (>=0.8), filepath, generic-deriving (>=1.11), hashable, hyperion, lens (>=4.0), mtl (>=2.2), optparse-applicative (>=0.12), process, random (>=1.1), random-shuffle (>=0.0.4), statistics (>=0.13), text (>=1.2), time (>=1.0), unordered-containers (>=0.2), vector (>=0.11) [details]
License BSD-3-Clause
Author Tweag I/O
Maintainer nicolas.mattia@tweag.io
Category Benchmarking
Home page https://github.com/tweag/hyperion#readme
Source repo head: git clone https://github.com/tweag/hyperion
Uploaded by MathieuBoespflug at 2017-09-06T11:46:14Z
Executables hyperion-end-to-end-benchmark-example, hyperion-micro-benchmark-example
Downloads 1001 total (5 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2017-09-06 [all 1 reports]

Readme for hyperion-0.1.0.0


Hyperion: Haskell-based systems benchmarking


Hyperion is a DSL for writing benchmarks to measure and analyze software performance. It is a lab for future [Criterion][criterion] features.

Getting started

Build

You can build and run the micro-benchmark example using stack:

$ stack build
$ stack exec hyperion-micro-benchmark-example

Example usage

The Hyperion DSL is a backwards-compatible extension of [Criterion][criterion]'s DSL (except for the rarely used `env` combinator, which has a safer type). Here is an example:

benchmarks :: [Benchmark]
benchmarks =
    [ bench "id" (nf id ())
    , series [0,5..20] $ \n ->
        bgroup "pure-functions"
          [ bench "fact" (nf fact n)
          , bench "fib" (nf fib n)
          ]
    , series [1..4] $ \n ->
        series [1..n] $ \k ->
          bench "n choose k" $ nf (uncurry choose) (n, k)
    ]

main :: IO ()
main = defaultMain "hyperion-example-micro-benchmarks" benchmarks
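The example above assumes definitions for `fact`, `fib`, and `choose`, which are not part of hyperion itself. One plausible set of definitions, as a sketch:

```haskell
-- Factorial via a product over [1..n].
fact :: Integer -> Integer
fact n = product [1 .. n]

-- Fibonacci numbers via the classic lazy self-referential list.
fib :: Int -> Integer
fib n = fibs !! n
  where
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Binomial coefficient "n choose k", defined in terms of fact.
choose :: Integer -> Integer -> Integer
choose n k = fact n `div` (fact k * fact (n - k))
```

Because `bench` takes an `nf`-wrapped computation, these functions must return values with an `NFData` instance; `Integer` satisfies that out of the box.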

By default, Hyperion runs your benchmarks and pretty-prints the results. Several command-line options control the output, such as writing the results to a JSON file or including the individual raw measurements. To see the full set of options, run the executable with --help:

$ stack exec hyperion-micro-benchmark-example -- --help
Usage: hyperion-micro-benchmark-example ([--pretty] | [-j|--json PATH] |
                                        [-f|--flat PATH]) ([-l|--list] | [--run]
                                        | [--no-analyze]) [--raw]
                                        [--arg KEY:VAL] [NAME...]

Available options:
  -h,--help                Show this help text
  --pretty                 Pretty prints the measurements on stdout.
  -j,--json PATH           Where to write the json benchmarks output. Can be a
                           file name, a directory name or '-' for stdout.
  -f,--flat PATH           Where to write the json benchmarks output. Can be a
                           file name, a directory name or '-' for stdout.
  --version                Display version information
  -l,--list                List benchmark names
  --run                    Run benchmarks and analyze them (default)
  --no-analyze             Only run the benchmarks
  --raw                    Include raw measurement data in report.
  --arg KEY:VAL            Extra metadata to include in the report, in the
                           format key:value.
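For instance, combining the flags above, a benchmark run that exports JSON along with raw measurements and a piece of extra metadata might look like this (`results.json` and the `commit` key are illustrative choices, not defaults):

```shell
$ stack exec hyperion-micro-benchmark-example -- \
    --json results.json --raw --arg commit:abc1234
```

Passing `-` instead of a file name sends the JSON report to stdout, which is convenient for piping into other tools.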