HUnit-1.5.0.0: A unit testing framework for Haskell

Safe Haskell: Safe
Language: Haskell2010

Test.HUnit.Base

Description

Basic definitions for the HUnit library.

This module contains what you need to create assertions and test cases and combine them into test suites.

This module also provides infrastructure for implementing test controllers (which are used to execute tests). See Test.HUnit.Text for a great example of how to implement a test controller.

Declaring tests

data Test Source #

The basic structure used to create an annotated tree of test cases.

Constructors

TestCase Assertion

A single, independent test case, consisting of one Assertion.

TestList [Test]

A set of Tests sharing the same level in the hierarchy.

TestLabel String Test

A name or description for a subtree of the Tests.
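The three constructors compose naturally into a labelled tree. A minimal sketch (the name arithmeticTests is illustrative, not part of HUnit):

```haskell
import Test.HUnit

-- A small annotated tree: TestLabel names a subtree, TestList
-- groups siblings, and TestCase wraps a single Assertion.
arithmeticTests :: Test
arithmeticTests = TestLabel "arithmetic" (TestList
  [ TestCase (assertEqual "addition" (4 :: Int) (2 + 2))
  , TestCase (assertBool "positivity" (1 > (0 :: Int)))
  ])
```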

Instances

(~=?) infix 1 Source #

Arguments

:: (HasCallStack, Eq a, Show a) 
=> a

The expected value

-> a

The actual value

-> Test 

Shorthand for a test case that asserts equality (with the expected value on the left-hand side, and the actual value on the right-hand side).

(~?=) infix 1 Source #

Arguments

:: (HasCallStack, Eq a, Show a) 
=> a

The actual value

-> a

The expected value

-> Test 

Shorthand for a test case that asserts equality (with the actual value on the left-hand side, and the expected value on the right-hand side).
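The two operators build the same kind of Test and differ only in which side the expected value appears on. A sketch (the names are illustrative):

```haskell
import Test.HUnit

sumExpectedLeft, sumActualLeft :: Test
sumExpectedLeft = 4 ~=? (2 + 2 :: Int)   -- expected value on the left
sumActualLeft   = (2 + 2 :: Int) ~?= 4   -- actual value on the left
```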

(~:) :: (HasCallStack, Testable t) => String -> t -> Test infixr 0 Source #

Creates a test from the specified Testable, with the specified label attached to it.

Since Test is Testable, this can be used as a shorthand way of attaching a TestLabel to one or more tests.
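Because [Test] is also Testable, (~:) can label a whole group at once. A sketch (squares is an illustrative name):

```haskell
import Test.HUnit

-- One label attached to a list of equality tests.
squares :: Test
squares = "squares" ~: [ 9  ~=? (3 * 3 :: Int)
                       , 16 ~=? (4 * 4 :: Int) ]
```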

(~?) infix 1 Source #

Arguments

:: (HasCallStack, AssertionPredicable t) 
=> t

A value of which the asserted condition is predicated

-> String

A message that is displayed on test failure

-> Test 

Creates a test case resulting from asserting the condition obtained from the specified AssertionPredicable.
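Since Bool is AssertionPredicable, a plain condition can be turned into a Test with a failure message attached. A sketch (nonEmptyTest is an illustrative name):

```haskell
import Test.HUnit

nonEmptyTest :: Test
nonEmptyTest = not (null [1 :: Int]) ~? "list should be non-empty"
```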

Making assertions

assertFailure Source #

Arguments

:: HasCallStack 
=> String

A message that is displayed with the assertion failure

-> Assertion 

Unconditionally signals that a failure has occurred. All other assertions can be expressed in the form:

   if conditionIsMet
       then return ()
       else assertFailure msg

assertBool Source #

Arguments

:: HasCallStack 
=> String

The message that is displayed if the assertion fails

-> Bool

The condition

-> Assertion 

Asserts that the specified condition holds.

assertEqual Source #

Arguments

:: (HasCallStack, Eq a, Show a) 
=> String

The message prefix

-> a

The expected value

-> a

The actual value

-> Assertion 

Asserts that the specified actual value is equal to the expected value. The output message will contain the prefix, the expected value, and the actual value.

If the prefix is the empty string (i.e., ""), then the prefix is omitted and only the expected and actual values are output.
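A minimal sketch (lengthAssertion is an illustrative name); the prefix here would be prepended to the failure output:

```haskell
import Test.HUnit

lengthAssertion :: Assertion
lengthAssertion = assertEqual "length of \"abc\"" 3 (length "abc")
```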

assertString Source #

Arguments

:: HasCallStack 
=> String

The message that is displayed with the assertion failure

-> Assertion 

Signals an assertion failure if a non-empty message (i.e., a message other than "") is passed.
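This fits validation functions that return "" on success and an error message otherwise. A sketch under that assumption (validate and checkNonNegative are illustrative names, not part of HUnit):

```haskell
import Test.HUnit

-- Returns "" on success, a message on failure.
validate :: Int -> String
validate n = if n >= 0 then "" else "negative input: " ++ show n

-- assertString passes iff the message is empty.
checkNonNegative :: Int -> Assertion
checkNonNegative = assertString . validate
```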

type Assertion = IO () Source #

When an assertion is evaluated, it will output a message if and only if the assertion fails.

Test cases are composed of a sequence of one or more assertions.

(@=?) infix 1 Source #

Arguments

:: (HasCallStack, Eq a, Show a) 
=> a

The expected value

-> a

The actual value

-> Assertion 

Asserts that the specified actual value is equal to the expected value (with the expected value on the left-hand side).

(@?=) infix 1 Source #

Arguments

:: (HasCallStack, Eq a, Show a) 
=> a

The actual value

-> a

The expected value

-> Assertion 

Asserts that the specified actual value is equal to the expected value (with the actual value on the left-hand side).
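Unlike (~=?) and (~?=), these operators build Assertions rather than Tests, so they can be sequenced inside a single TestCase. A sketch (pairAssertions is an illustrative name):

```haskell
import Test.HUnit

pairAssertions :: Assertion
pairAssertions = do
  "ab" @=? ("a" ++ "b")   -- expected value on the left
  ("a" ++ "b") @?= "ab"   -- actual value on the left
```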

(@?) infix 1 Source #

Arguments

:: (HasCallStack, AssertionPredicable t) 
=> t

A value of which the asserted condition is predicated

-> String

A message that is displayed if the assertion fails

-> Assertion 

Asserts that the condition obtained from the specified AssertionPredicable holds.

Extending the assertion functionality

class Assertable t where Source #

Allows the extension of the assertion mechanism.

Since an Assertion can be a sequence of Assertions and IO actions, there is a fair amount of flexibility in what can be achieved. As a rule, the resulting Assertion should be the body of a TestCase or part of a TestCase; it should not be used to assert multiple, independent conditions.

If more complex arrangements of assertions are needed, Tests and Testable should be used.

Minimal complete definition

assert

Instances

class ListAssertable t where Source #

A specialized form of Assertable to handle lists.

Minimal complete definition

listAssert

type AssertionPredicate = IO Bool Source #

The result of an assertion that hasn't been evaluated yet.

Most test cases follow these steps:

  1. Do some processing or an action.
  2. Assert certain conditions.

However, this flow is not always suitable. AssertionPredicate allows additional steps to be inserted without the initial action being affected by their side effects. Additionally, clean-up can be done before the test case has a chance to end. A potential workflow is:

  1. Write data to a file.
  2. Read data from a file, evaluate conditions.
  3. Clean up the file.
  4. Assert that the side effects of the read operation meet certain conditions.
  5. Assert that the conditions evaluated in step 2 are met.
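The deferred flow above can be sketched in miniature with an IORef standing in for the file (deferredCheck is an illustrative name, not part of HUnit): the condition is evaluated in step 2, clean-up happens in step 3, and the resulting Bool is only asserted afterwards via (@?).

```haskell
import Test.HUnit
import Data.IORef

deferredCheck :: Assertion
deferredCheck = do
  ref <- newIORef (0 :: Int)
  writeIORef ref 42                          -- 1. perform the action
  ok  <- (== 42) <$> readIORef ref           -- 2. evaluate the condition
  writeIORef ref 0                           -- 3. clean up
  ok @? "stored value should have been 42"   -- 4./5. assert the result
```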

class AssertionPredicable t where Source #

Used to signify that a data type can be converted to an assertion predicate.

Minimal complete definition

assertionPredicate

class Testable t where Source #

Provides a way to convert data into a Test or set of Tests.

Minimal complete definition

test

Methods

test :: HasCallStack => t -> Test Source #

Instances

Testable Test Source # 

Methods

test :: Test -> Test Source #

Testable t => Testable [t] Source # 

Methods

test :: [t] -> Test Source #

Assertable t => Testable (IO t) Source # 

Methods

test :: IO t -> Test Source #
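These instances let test accept a Test, a list of Testables, or an IO action interchangeably. A sketch (fromList and fromIO are illustrative names):

```haskell
import Test.HUnit

-- Testable t => Testable [t]
fromList :: Test
fromList = test [ TestCase (return ()), TestCase (return ()) ]

-- Assertable t => Testable (IO t), here with t = ()
fromIO :: Test
fromIO = test (assertBool "truth" True)
```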

Test execution

Note: the rest of the functionality in this module is intended for implementors of test controllers. If you just want to run your test cases, simply use a test controller, such as the text-based controller in Test.HUnit.Text.

data State Source #

Keeps track of the remaining tests and the results of the performed tests. As each test is performed, the path is removed and the counts are updated as appropriate.

Constructors

State 

Fields

  path :: Path
  counts :: Counts

data Counts Source #

A data structure that holds the results of the tests that have been performed up to this point.

Constructors

Counts 

Fields

  cases, tried, errors, failures :: Int

type Path = [Node] Source #

Uniquely describes the location of a test within a test hierarchy. Node order is from test case to root.

data Node Source #

Composed into Paths.

Constructors

ListItem Int 
Label String 

testCasePaths :: Test -> [Path] Source #

Determines the paths for all TestCases in a tree of Tests.

testCaseCount :: Test -> Int Source #

Counts the number of TestCases in a tree of Tests.
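A sketch of both functions on a small tree (tree is an illustrative name). Since paths run from test case back to root, the innermost Node comes first in each Path:

```haskell
import Test.HUnit

tree :: Test
tree = TestLabel "root" (TestList [ TestCase (return ())
                                  , TestCase (return ()) ])
-- testCaseCount tree gives 2; the path of the first case is
-- [ListItem 0, Label "root"].
```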

type ReportStart us = State -> us -> IO us Source #

Report generator for reporting the start of a test run.

type ReportProblem us = Maybe SrcLoc -> String -> State -> us -> IO us Source #

Report generator for reporting problems that have occurred during a test run. Problems may be errors or assertion failures.

performTest Source #

Arguments

:: ReportStart us

report generator for the test run start

-> ReportProblem us

report generator for errors during the test run

-> ReportProblem us

report generator for assertion failures during the test run

-> us 
-> Test

the test to be executed

-> IO (Counts, us) 

Performs a test run with the specified report generators.

This handles the actual running of the tests. Most developers will want to use Test.HUnit.Text.runTestTT instead. A developer could use this function to execute tests via another IO system, such as a GUI, or to output the results in a different manner (e.g., uploading XML-formatted results to a web service).

Note that the counts in a start report do not include the test case being started, whereas the counts in a problem report do include the test case just finished. The principle is that the counts are sampled only between test case executions. As a result, the number of test case successes always equals the difference of test cases tried and the sum of test case errors and failures.
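As a sketch of such a controller (collectProblems and demoSuite are illustrative names, not part of HUnit), the accumulated state us can simply be a list of the reported problem messages:

```haskell
import Test.HUnit

-- The start reporter passes the state through unchanged; the same
-- problem reporter is used for both errors and assertion failures,
-- prepending each reported message to the list.
collectProblems :: Test -> IO (Counts, [String])
collectProblems = performTest reportStart reportProblem reportProblem []
  where
    reportStart _ msgs         = return msgs          -- ReportStart
    reportProblem _ msg _ msgs = return (msg : msgs)  -- ReportProblem

demoSuite :: Test
demoSuite = TestList [ TestCase (return ())
                     , TestCase (assertFailure "boom") ]
```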