ad: Automatic Differentiation

[ bsd3, library, math ]

Forward-, reverse- and mixed-mode automatic differentiation combinators with a common API.

Type-level "branding" is used both to prevent the end user from confusing infinitesimals and to limit unsafe access to the implementation details of each Mode.

Each mode has a separate module full of combinators; a short usage sketch follows the list below.

  • Numeric.AD.Mode.Forward provides basic forward-mode AD. It is good for computing simple derivatives.

  • Numeric.AD.Mode.Reverse uses benign side-effects to compute reverse-mode AD. It is good for computing gradients in one pass. It generates a Wengert list (linear tape) using Data.Reflection.

  • Numeric.AD.Mode.Kahn uses benign side-effects to compute reverse-mode AD. It is good for computing gradients in one pass. It generates a tree-like tape that needs to be topologically sorted in the end.

  • Numeric.AD.Mode.Sparse computes a sparse forward-mode AD tower. It is good for higher derivatives or large numbers of outputs.

  • Numeric.AD.Mode.Tower computes a dense forward-mode AD tower useful for higher derivatives of single input functions.

  • Numeric.AD computes using whichever mode or combination thereof is suitable to each individual combinator.
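
For example, to take a single derivative with forward mode or a gradient with reverse mode (a minimal sketch; the outputs follow directly from the elementary derivatives involved, and with integer inputs they print exactly):

Prelude Numeric.AD.Mode.Forward> diff (\x -> x^2 + 3*x) 2
7
Prelude Numeric.AD.Mode.Reverse> grad (\[x,y,z] -> x * z + y * y) [1,2,3]
[3,4,1]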

While not every mode can provide all operations, the following basic operations are supported, modified as appropriate by the suffixes below (a brief example follows the list):

  • grad computes the gradient (partial derivatives) of a function at a point.

  • jacobian computes the Jacobian matrix of a function at a point.

  • diff computes the derivative of a function at a point.

  • du computes a directional derivative of a function at a point.

  • hessian computes the Hessian matrix (matrix of second partial derivatives) of a function at a point.
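
A couple of these in action (these mirror the style of the examples in the module documentation; with integer inputs the results print exactly):

Prelude Numeric.AD> grad (\[x,y,z] -> x * y + z) [1,2,3]
[2,1,1]
Prelude Numeric.AD> jacobian (\[x,y] -> [y, x, x * y]) [2,1]
[[0,1],[1,0],[1,2]]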

The following suffixes alter the meanings of the functions above as follows (see the example after the list):

  • ' also returns the answer

  • With lets the user supply a function to blend the input with the output

  • F is a version of the base function lifted to return a Traversable (or Functor) result

  • s means the function returns all higher derivatives in a list or f-branching Stream

  • T means the result is transposed with respect to the traditional formulation.

  • 0 means that the resulting derivative list is padded with 0s at the end.

  • NoEq means that an infinite list of converging values is returned rather than truncating the list when they become constant
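
For example, the ' suffix pairs the primal result with the derivative information (a small sketch; grad' returns the function's value alongside its gradient):

Prelude Numeric.AD> grad' (\[x,y,z] -> x * y + z) [1,2,3]
(5,[2,1,1])
Prelude Numeric.AD> diff' (exp . log) 2
(2.0,1.0)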



Flags

Manual Flags

Name      Description      Default
herbie                     Disabled

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.

Downloads

Note: This package has metadata revisions in the cabal description newer than included in the tarball. To unpack the package including the revisions, use 'cabal get'.

Versions [RSS] 0.12, 0.13, 0.15, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 0.24, 0.27, 0.28, 0.30.0, 0.31.0, 0.32.0, 0.33.0, 0.40, 0.40.1, 0.44.0, 0.44.1, 0.44.2, 0.44.3, 0.44.4, 0.45.0, 0.46.0, 0.46.1, 0.46.2, 0.47.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.1.0, 1.1.0.1, 1.1.1, 1.1.3, 1.2.0, 1.2.0.1, 1.2.0.2, 1.3, 1.3.0.1, 1.3.1, 1.4, 1.5, 1.5.0.1, 1.5.0.2, 3.0, 3.0.1, 3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.2, 3.2.1, 3.2.2, 3.3.0.1, 3.3.1, 3.3.1.1, 3.4, 4.0, 4.0.0.1, 4.1, 4.2, 4.2.0.1, 4.2.1, 4.2.1.1, 4.2.2, 4.2.3, 4.2.4, 4.3, 4.3.1, 4.3.2, 4.3.2.1, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.4, 4.4.1, 4.5, 4.5.1, 4.5.2, 4.5.3, 4.5.4, 4.5.5, 4.5.6
Change log CHANGELOG.markdown
Dependencies array (>=0.2 && <0.6), base (>=4.3 && <4.18), comonad (>=4 && <6), containers (>=0.2 && <0.7), data-reify (>=0.6 && <0.7), erf (>=2.0 && <2.1), free (>=4.6.1 && <6), nats (>=0.1.2 && <2), reflection (>=1.4 && <3), semigroups (>=0.16 && <1), tagged (>=0.7 && <1), transformers (>=0.3 && <0.6) [details]
Tested with ghc ==7.4.2, ghc ==7.6.3, ghc ==7.8.4, ghc ==7.10.3, ghc ==8.0.2, ghc ==8.2.2, ghc ==8.4.4, ghc ==8.6.5, ghc ==8.8.1
License BSD-3-Clause
Copyright (c) Edward Kmett 2010-2015, (c) Barak Pearlmutter and Jeffrey Mark Siskind 2008-2009
Author Edward Kmett
Maintainer ekmett@gmail.com
Revised Revision 2 made by ryanglscott at 2023-02-19T12:47:23Z
Category Math
Home page http://github.com/ekmett/ad
Bug tracker http://github.com/ekmett/ad/issues
Source repo head: git clone git://github.com/ekmett/ad.git
Uploaded by ryanglscott at 2020-02-03T12:52:08Z
Distributions LTSHaskell:4.5.6, NixOS:4.5.6, Stackage:4.5.6
Reverse Dependencies 24 direct, 26 indirect [details]
Downloads 84363 total (326 in the last 30 days)
Rating 2.5 (votes: 4) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2020-02-03 [all 1 reports]

Readme for ad-4.4


ad


A package that provides an intuitive API for Automatic Differentiation (AD) in Haskell. Automatic differentiation provides a means to calculate the derivatives of a function while evaluating it. Unlike numerical differentiation, which re-runs the program with perturbed inputs, or symbolic approaches, automatic differentiation typically incurs only a small constant-factor slowdown.

AD exploits the fact that any program y = F(x) that computes one or more values does so by composing many primitive operations. If the (partial) derivatives of each of those operations are known, they can be composed via the chain rule to obtain the derivative of the entire program at a point.
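
As a tiny worked instance of that composition, differentiating exp . sin at 0 multiplies the derivative of exp (evaluated at sin 0) by the derivative of sin (evaluated at 0), which is exactly what diff computes:

Prelude Numeric.AD> diff (exp . sin) 0 {- exp (sin 0) * cos 0 -}
1.0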

This library contains at its core a single implementation that describes how to compute the partial derivatives of a wide array of primitive operations. It then exposes an API that enables a user to safely combine them using standard higher-order functions, just as you would with any other Haskell numerical type.

There are several ways to compose these individual Jacobian matrices. We hide the choice used by the API behind an explicit "Mode" type-class and universal quantification. This prevents users from confusing infinitesimals. If you want to risk infinitesimal confusion in order to get greater flexibility in how you curry, flip and generally combine the differential operators, then the Rank1.* modules are probably your cup of tea.
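
Concretely, the branded combinators are rank-2: each differentiation quantifies over a phantom type s so that infinitesimals from one pass cannot leak into another, while the Rank1 variants drop that quantifier. Roughly (abridged from the haddocks; consult the modules for the precise contexts):

-- Numeric.AD.Mode.Reverse: the quantified s brands the tape
grad :: (Traversable f, Num a)
     => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a)
     -> f a -> f a

-- Numeric.AD.Rank1.Forward: unbranded, easier to curry and flip
diff :: Num a => (Forward a -> Forward a) -> a -> a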

Features

  • Provides forward- and reverse- mode AD combinators with a common API.
  • Optional type-level "branding" is available to prevent the end user from confusing infinitesimals.
  • Each mode has a separate module full of combinators, with a consistent look and feel.

Examples

You can compute derivatives of functions

Prelude Numeric.AD> diff sin 0 {- cos 0 -}
1.0

Or both the answer and the derivative of a function:

Prelude Numeric.AD> diff' (exp . log) 2
(2.0,1.0)

You can compute the derivative of a function with a constant parameter using auto:

Prelude Numeric.AD> let t = 2.0 :: Double
Prelude Numeric.AD> diff (\ x -> auto t * sin x) 0
2.0

You can use a symbolic numeric type, like the one from simple-reflect, to obtain symbolic derivatives:

Prelude Debug.SimpleReflect Numeric.AD> diff atanh x
recip (1 - x * x) * 1

You can compute gradients for functions that take non-scalar values in the form of a Traversable functor full of AD variables.

Prelude Numeric.AD Debug.SimpleReflect> grad (\[x,y,z] -> x * sin (x + log y)) [x,y,z]
[ 0 + (0 + sin (x + log y) * 1 + 1 * (0 + cos (x + log y) * (0 + x * 1)))
, 0 + (0 + recip y * (0 + 1 * (0 + cos (x + log y) * (0 + x * 1))))
, 0
]

which one can simplify to:

[ sin (x + log y) + cos (x + log y) * x, recip y * cos (x + log y) * x, 0 ]

If you need multiple derivatives, you can calculate them with diffs:

Prelude Numeric.AD> take 10 $ diffs sin 1
[0.8414709848078965,0.5403023058681398,-0.8414709848078965,-0.5403023058681398,0.8414709848078965,0.5403023058681398,-0.8414709848078965,-0.5403023058681398,0.8414709848078965,0.5403023058681398]

Or, if your function takes multiple inputs, you can use grads, which returns an 'f-branching stream' of derivatives that you can inspect lazily. Somewhat more intuitive answers can be obtained by converting the stream into the polymorphically recursive Jet data type. With that we can look at a single "layer" of the answer at a time:

The answer:

Prelude Numeric.AD> headJet $ jet $  grads (\[x,y] -> exp (x * y)) [1,2]
7.38905609893065

The gradient:

Prelude Numeric.AD> headJet $ tailJet $ jet $  grads (\[x,y] -> exp (x * y)) [1,2]
[14.7781121978613,7.38905609893065]

The Hessian (the n × n matrix of second partial derivatives):

Prelude Numeric.AD> headJet $ tailJet $ tailJet $ jet $  grads (\[x,y] -> exp (x * y)) [1,2]
[[29.5562243957226,22.16716829679195],[22.16716829679195,7.38905609893065]]

Or even higher-order tensors of derivatives, as a jet:

Prelude Numeric.AD> headJet $ tailJet $ tailJet $ tailJet $ jet $  grads (\[x,y] -> exp (x * y)) [1,2]
[[[59.1124487914452,44.3343365935839],[44.3343365935839,14.7781121978613]],[[44.3343365935839,14.7781121978613],[14.7781121978613,7.38905609893065]]]

Note the redundant values caused by the various symmetries in the tensors. The ad library is careful to compute each distinct derivative only once, lazily, and to share the resulting computation.

Overview

Modules

  • Numeric.AD computes using whichever mode or combination thereof is suitable to each individual combinator; this mixed mode is the default interface.
  • Numeric.AD.Mode.Forward provides basic forward-mode AD. It is good for computing simple derivatives.
  • Numeric.AD.Mode.Sparse computes a sparse forward-mode AD tower. It is good for higher derivatives or large numbers of outputs.
  • Numeric.AD.Mode.Kahn computes with reverse-mode AD. It is good for computing a few outputs given many inputs.
  • Numeric.AD.Mode.Reverse computes with reverse-mode AD. It is good for computing a few outputs given many inputs, when not using sparks heavily.
  • Numeric.AD.Mode.Tower computes a dense forward-mode AD tower useful for higher derivatives of single input functions.
  • Numeric.AD.Newton provides a number of combinators for root finding using Newton's method with quadratic convergence (a usage sketch follows this list).
  • Numeric.AD.Halley provides a number of combinators for root finding using Halley's method with cubic convergence.
  • Numeric.AD.Rank1.* provides combinators for AD that are strictly rank-1. This makes it easier to flip and contort them with higher-order functions, at the expense of type safety when it comes to infinitesimal confusion.
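
For root finding, a minimal sketch (findZero is the combinator from Numeric.AD.Newton; it lazily produces successively better approximations, here converging to the root 2):

Prelude Numeric.AD.Newton> take 3 $ findZero (\x -> x^2 - 4) 1
[1.0,2.5,2.05]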

Combinators

While not every mode can provide all operations, the following basic operations are supported, modified as appropriate by the suffixes below (a short example follows this list):

  • grad computes the gradient (vector of partial derivatives at a given point) of a function.
  • jacobian computes the Jacobian matrix of a function at a point.
  • diff computes the derivative of a function at a point.
  • du computes a directional derivative of a function at a point.
  • hessian computes the Hessian matrix (matrix of second partial derivatives) of a function at a point.
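
For instance, du contracts the gradient with a direction supplied alongside the point, and hessian collects the second partials (a small sketch; each pair passed to du is (coordinate, direction component), and integer inputs keep the output exact):

Prelude Numeric.AD> du (\[x,y] -> x * y) [(1,1),(2,0)]
2
Prelude Numeric.AD> hessian (\[x,y] -> x * y) [1,2]
[[0,1],[1,0]]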

Combinator Suffixes

The following suffixes alter the meanings of the functions above as follows (an example follows the list):

  • ' also returns the answer
  • With lets the user supply a function to blend the input with the output
  • F is a version of the base function lifted to return a Traversable (or Functor) result
  • s means the function returns all higher derivatives in a list or f-branching Stream
  • T means the result is transposed with respect to the traditional formulation (usually to avoid paying for transposing back)
  • 0 means that the resulting derivative list is padded with 0s at the end.
  • NoEq means that an infinite list of converging values is returned rather than truncating the list when they become constant
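
As a small illustration on a multi-output function, jacobian' pairs each output with its row of partial derivatives (a sketch with integer inputs so the result prints exactly):

Prelude Numeric.AD> jacobian' (\[x,y] -> [y, x, x * y]) [2,1]
[(1,[0,1]),(2,[1,0]),(2,[1,2])]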

Contact Information

Contributions and bug reports are welcome!

Please feel free to contact me through github or on the #haskell IRC channel on irc.freenode.net.

-Edward Kmett