ad-4.2.1.1: Automatic Differentiation

Copyright     (c) Edward Kmett 2010-2014
License       BSD3
Maintainer    ekmett@gmail.com
Stability     experimental
Portability   GHC only
Safe Haskell  None
Language      Haskell2010

Numeric.AD

Description

Mixed-Mode Automatic Differentiation.

Each combinator exported from this module chooses an appropriate AD mode. The following basic operations are supported, modified as appropriate by the suffixes below:

  • grad computes the gradient (partial derivatives) of a function at a point
  • jacobian computes the Jacobian matrix of a function at a point
  • diff computes the derivative of a function at a point
  • du computes a directional derivative of a function at a point
  • hessian computes the Hessian matrix (matrix of second partial derivatives) of a function at a point

The suffixes have the following meanings (see the short illustration after this list):

  • ' -- also return the answer
  • With lets the user supply a function to blend the input with the output
  • F is a version of the base function lifted to return a Traversable (or Functor) result
  • s means the function returns all higher derivatives in a list or f-branching Stream
  • T means the result is transposed with respect to the traditional formulation.
  • 0 means that the resulting derivative list is padded with 0s at the end.
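
For example, combining the basic operations with these suffixes (outputs taken from the per-combinator documentation below):

>>> diff sin 0                       -- the plain derivative
1.0
>>> diff' sin 0                      -- ' also returns the answer
(0.0,1.0)
>>> diffF (\a -> [sin a, cos a]) 0   -- F lifts to a Functor-valued function
[1.0,-0.0]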

Documentation

data AD s a Source

Instances

Bounded a => Bounded (AD s a) 
Enum a => Enum (AD s a) 
Eq a => Eq (AD s a) 
Floating a => Floating (AD s a) 
Fractional a => Fractional (AD s a) 
Num a => Num (AD s a) 
Ord a => Ord (AD s a) 
Read a => Read (AD s a) 
Real a => Real (AD s a) 
RealFloat a => RealFloat (AD s a) 
RealFrac a => RealFrac (AD s a) 
Show a => Show (AD s a) 
Erf a => Erf (AD s a) 
InvErf a => InvErf (AD s a) 
Mode a => Mode (AD s a) 
Typeable (* -> * -> *) AD 
type Scalar (AD s a) = Scalar a 

AD modes

class (Num t, Num (Scalar t)) => Mode t where Source

Associated Types

type Scalar t Source

Methods

auto :: Scalar t -> t Source

Embed a constant
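
Numeric literals are embedded automatically via fromInteger and fromRational; auto is how you embed a pre-existing constant of the scalar type. A small sketch (my example, not from the package docs):

>>> let k = 3 :: Double
>>> grad (\[x,y] -> x * y + auto k) [1,2]
[2.0,1.0]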

Instances

Mode Double 
Mode Float 
Mode Int 
Mode Int8 
Mode Int16 
Mode Int32 
Mode Int64 
Mode Integer 
Mode Word 
Mode Word8 
Mode Word16 
Mode Word32 
Mode Word64 
Mode Natural 
Mode ForwardDouble 
Integral a => Mode (Ratio a) 
RealFloat a => Mode (Complex a) 
Num a => Mode (Id a) 
Num a => Mode (Tower a) 
Num a => Mode (Sparse a) 
(Mode t, Mode (Scalar t)) => Mode (On t) 
Num a => Mode (Kahn a) 
Num a => Mode (Forward a) 
(Num a, Traversable f) => Mode (Dense f a) 
Mode a => Mode (AD s a) 
(Reifies * s Tape, Num a) => Mode (Reverse s a) 
(Mode a, Mode b, Chosen s, (~) * (Scalar a) (Scalar b)) => Mode (Or s a b) 

Gradients (Reverse Mode)

grad :: (Traversable f, Num a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> f a Source

The grad function calculates the gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

>>> grad (\[x,y,z] -> x*y+z) [1,2,3]
[2,1,1]

grad' :: (Traversable f, Num a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> (a, f a) Source

The grad' function calculates the result and gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

>>> grad' (\[x,y,z] -> x*y+z) [1,2,3]
(5,[2,1,1])

gradWith :: (Traversable f, Num a) => (a -> a -> b) -> (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> f b Source

gradWith g f calculates the gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

grad == gradWith (\_ dx -> dx)
id == gradWith const
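
For instance, pairing each input with its partial derivative (a sketch; the result follows from the identities above):

>>> gradWith (,) (\[x,y] -> x * y) [1,2]
[(1,2),(2,1)]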

gradWith' :: (Traversable f, Num a) => (a -> a -> b) -> (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> (a, f b) Source

gradWith' g f calculates the result and gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

grad' == gradWith' (\_ dx -> dx)

Higher Order Gradients (Sparse-on-Reverse)

grads :: (Traversable f, Num a) => (forall s. f (AD s (Sparse a)) -> AD s (Sparse a)) -> f a -> Cofree f a Source
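
A sketch of peeling the first two levels of the result, assuming the value sits at the root of the Cofree and the gradient one level down (Cofree(..) comes from Control.Comonad.Cofree in the free package):

>>> import Control.Comonad.Cofree (Cofree(..))
>>> let v :< ds = grads (\[x,y] -> x * y) [1,2]
>>> v
2
>>> [d | d :< _ <- ds]
[2,1]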

Variadic Gradients (Sparse or Kahn)

Variadic combinators for variadic mixed-mode automatic differentiation.

Unfortunately, variadicity comes at the expense of being able to use quantification to avoid sensitivity confusion, so be careful when counting the number of auto calls you use when taking the gradient of a function that takes gradients!

class Num a => Grad i o o' a | i -> a o o', o -> a i o', o' -> a i o Source

Minimal complete definition

pack, unpack, unpack'

Instances

Num a => Grad (Kahn a) [a] (a, [a]) a 
Grad i o o' a => Grad (Kahn a -> i) (a -> o) (a -> o') a 

vgrad :: Grad i o o' a => i -> o Source

vgrad' :: Grad i o o' a => i -> o' Source
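
Following the instances above (the base case for Kahn returns the gradient as a list, and vgrad' also returns the answer), a two-argument use should look like this (a sketch, my example):

>>> vgrad (\x y -> x * y) 1 2
[2,1]
>>> vgrad' (\x y -> x * y) 1 2
(2,[2,1])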

class Num a => Grads i o a | i -> a o, o -> a i Source

Minimal complete definition

packs, unpacks

Instances

Num a => Grads (Sparse a) (Cofree [] a) a 
Grads i o a => Grads (Sparse a -> i) (a -> o) a 

vgrads :: Grads i o a => i -> o Source

Jacobians (Sparse or Reverse)

jacobian :: (Traversable f, Functor g, Num a) => (forall s. Reifies s Tape => f (Reverse s a) -> g (Reverse s a)) -> f a -> g (f a) Source

The jacobian function calculates the Jacobian of a non-scalar-to-non-scalar function with reverse AD lazily in m passes for m outputs.

>>> jacobian (\[x,y] -> [y,x,x*y]) [2,1]
[[0,1],[1,0],[1,2]]

jacobian' :: (Traversable f, Functor g, Num a) => (forall s. Reifies s Tape => f (Reverse s a) -> g (Reverse s a)) -> f a -> g (a, f a) Source

The jacobian' function calculates both the result and the Jacobian of a non-scalar-to-non-scalar function, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobian.

>>> jacobian' (\[x,y] -> [y,x,x*y]) [2,1]
[(1,[0,1]),(2,[1,0]),(2,[1,2])]

jacobianWith :: (Traversable f, Functor g, Num a) => (a -> a -> b) -> (forall s. Reifies s Tape => f (Reverse s a) -> g (Reverse s a)) -> f a -> g (f b) Source

jacobianWith g f calculates the Jacobian of a non-scalar-to-non-scalar function f with reverse AD lazily in m passes for m outputs.

Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.

jacobian == jacobianWith (\_ dx -> dx)
jacobianWith const == (\f x -> const x <$> f x)
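
For instance, pairing each input with the corresponding matrix entry (a sketch, reusing the jacobian example above):

>>> jacobianWith (,) (\[x,y] -> [y,x,x*y]) [2,1]
[[(2,0),(1,1)],[(2,1),(1,0)],[(2,1),(1,2)]]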

jacobianWith' :: (Traversable f, Functor g, Num a) => (a -> a -> b) -> (forall s. Reifies s Tape => f (Reverse s a) -> g (Reverse s a)) -> f a -> g (a, f b) Source

jacobianWith' g f calculates both the result and the Jacobian of a non-scalar-to-non-scalar function f, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobianWith.

Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.

jacobian' == jacobianWith' (\_ dx -> dx)

Higher Order Jacobian (Sparse-on-Reverse)

jacobians :: (Traversable f, Functor g, Num a) => (forall s. f (AD s (Sparse a)) -> g (AD s (Sparse a))) -> f a -> g (Cofree f a) Source

Transposed Jacobians (Forward Mode)

jacobianT :: (Traversable f, Functor g, Num a) => (forall s. f (AD s (Forward a)) -> g (AD s (Forward a))) -> f a -> f (g a) Source

A fast, simple, transposed Jacobian computed with forward-mode AD.
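
Reusing the jacobian example from above, the transposed result indexes by input first (a sketch; compare with the output of jacobian):

>>> jacobianT (\[x,y] -> [y,x,x*y]) [2,1]
[[0,1,1],[1,0,2]]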

jacobianWithT :: (Traversable f, Functor g, Num a) => (a -> a -> b) -> (forall s. f (AD s (Forward a)) -> g (AD s (Forward a))) -> f a -> f (g b) Source

A fast, simple, transposed Jacobian computed with forward-mode AD that combines the output with the input.

Hessian (Sparse-on-Reverse)

hessian :: (Traversable f, Num a) => (forall s. Reifies s Tape => f (On (Reverse s (Sparse a))) -> On (Reverse s (Sparse a))) -> f a -> f (f a) Source

Compute the Hessian via the Jacobian of the gradient. The gradient is computed in reverse mode, and the Jacobian is then computed in sparse (forward) mode.

>>> hessian (\[x,y] -> x*y) [1,2]
[[0,1],[1,0]]

hessian' :: (Traversable f, Num a) => (forall s. f (AD s (Sparse a)) -> AD s (Sparse a)) -> f a -> (a, f (a, f a)) Source
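
Reading the type, hessian' pairs the result with, for each input, its partial derivative and the corresponding row of the Hessian. A sketch (my example, values per the hessian example above):

>>> hessian' (\[x,y] -> x * y) [1,2]
(2,[(2,[0,1]),(1,[1,0])])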

Hessian Tensors (Sparse or Sparse-on-Reverse)

hessianF :: (Traversable f, Functor g, Num a) => (forall s. Reifies s Tape => f (On (Reverse s (Sparse a))) -> g (On (Reverse s (Sparse a)))) -> f a -> g (f (f a)) Source

Compute the order 3 Hessian tensor on a non-scalar-to-non-scalar function using 'Sparse'-on-'Reverse'.

>>> hessianF (\[x,y] -> [x*y,x+y,exp x*cos y]) [1,2]
[[[0.0,1.0],[1.0,0.0]],[[0.0,0.0],[0.0,0.0]],[[-1.1312043837568135,-2.4717266720048188],[-2.4717266720048188,1.1312043837568135]]]

Hessian Tensors (Sparse)

hessianF' :: (Traversable f, Functor g, Num a) => (forall s. f (AD s (Sparse a)) -> g (AD s (Sparse a))) -> f a -> g (a, f (a, f a)) Source

Hessian Vector Products (Forward-on-Reverse)

hessianProduct :: (Traversable f, Num a) => (forall s. Reifies s Tape => f (On (Reverse s (Forward a))) -> On (Reverse s (Forward a))) -> f (a, a) -> f a Source

hessianProduct f wv computes the product of the Hessian H of a non-scalar-to-scalar function f at w = fst <$> wv with a vector v = snd <$> wv using "Pearlmutter's method" from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.6143, which states:

H v = (d/dr) grad_w (w + r v) | r = 0

Or in other words, we take the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.
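
As a sketch (my numbers): for f [x,y] = x*y the Hessian at any point is [[0,1],[1,0]], so with w = (1,2) and v = (0,1) the product H v should be (1,0):

>>> hessianProduct (\[x,y] -> x * y) [(1,0),(2,1)]
[1,0]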

hessianProduct' :: (Traversable f, Num a) => (forall s. Reifies s Tape => f (On (Reverse s (Forward a))) -> On (Reverse s (Forward a))) -> f (a, a) -> f (a, a) Source

hessianProduct' f wv computes both the gradient of a non-scalar-to-scalar function f at w = fst <$> wv and the product of the Hessian H at w with a vector v = snd <$> wv using "Pearlmutter's method". The outputs are returned wrapped in the same functor.

H v = (d/dr) grad_w (w + r v) | r = 0

Or in other words, we return the gradient and the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.

Derivatives (Forward Mode)

diff :: Num a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a Source

The diff function calculates the first derivative of a scalar-to-scalar function by forward-mode AD.

>>> diff sin 0
1.0

diffF :: (Functor f, Num a) => (forall s. AD s (Forward a) -> f (AD s (Forward a))) -> a -> f a Source

The diffF function calculates the first derivatives of a scalar-to-non-scalar function by forward-mode AD.

>>> diffF (\a -> [sin a, cos a]) 0
[1.0,-0.0]

diff' :: Num a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> (a, a) Source

The diff' function calculates the result and first derivative of a scalar-to-scalar function by forward-mode AD.

diff' sin == sin &&& cos
diff' f = f &&& d f
>>> diff' sin 0
(0.0,1.0)
>>> diff' exp 0
(1.0,1.0)

diffF' :: (Functor f, Num a) => (forall s. AD s (Forward a) -> f (AD s (Forward a))) -> a -> f (a, a) Source

The diffF' function calculates the result and first derivatives of a scalar-to-non-scalar function by forward-mode AD.

>>> diffF' (\a -> [sin a, cos a]) 0
[(0.0,1.0),(1.0,-0.0)]

Derivatives (Tower)

diffs :: Num a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> [a] Source

diffsF :: (Functor f, Num a) => (forall s. AD s (Tower a) -> f (AD s (Tower a))) -> a -> f [a] Source

diffs0 :: Num a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> [a] Source

diffs0F :: (Functor f, Num a) => (forall s. AD s (Tower a) -> f (AD s (Tower a))) -> a -> f [a] Source
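
As a sketch of what I expect from these (diffs f x lists f x and its successive derivatives, stopping once the derivative tower is exhausted, while diffs0 pads the tail with zeros):

>>> diffs (\x -> x ^ 3) 2
[8,12,12,6]
>>> take 6 $ diffs0 (\x -> x ^ 3) 2
[8,12,12,6,0,0]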

Directional Derivatives (Forward Mode)

du :: (Functor f, Num a) => (forall s. f (AD s (Forward a)) -> AD s (Forward a)) -> f (a, a) -> a Source

Compute the directional derivative of a function given a zipped up Functor of the input values and their derivatives.
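
For instance (my numbers): the gradient of x*y at (1,2) is (2,1), so the derivative in the direction (0,1) should be 2*0 + 1*1 = 1. A sketch:

>>> du (\[x,y] -> x * y) [(1,0),(2,1)]
1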

du' :: (Functor f, Num a) => (forall s. f (AD s (Forward a)) -> AD s (Forward a)) -> f (a, a) -> (a, a) Source

Compute the answer and directional derivative of a function given a zipped up Functor of the input values and their derivatives.

duF :: (Functor f, Functor g, Num a) => (forall s. f (AD s (Forward a)) -> g (AD s (Forward a))) -> f (a, a) -> g a Source

Compute a vector of directional derivatives for a function given a zipped up Functor of the input values and their derivatives.

duF' :: (Functor f, Functor g, Num a) => (forall s. f (AD s (Forward a)) -> g (AD s (Forward a))) -> f (a, a) -> g (a, a) Source

Compute a vector of answers and directional derivatives for a function given a zipped up Functor of the input values and their derivatives.

Directional Derivatives (Tower)

dus :: (Functor f, Num a) => (forall s. f (AD s (Tower a)) -> AD s (Tower a)) -> f [a] -> [a] Source

dus0 :: (Functor f, Num a) => (forall s. f (AD s (Tower a)) -> AD s (Tower a)) -> f [a] -> [a] Source

dusF :: (Functor f, Functor g, Num a) => (forall s. f (AD s (Tower a)) -> g (AD s (Tower a))) -> f [a] -> g [a] Source

dus0F :: (Functor f, Functor g, Num a) => (forall s. f (AD s (Tower a)) -> g (AD s (Tower a))) -> f [a] -> g [a] Source

Taylor Series (Tower)

taylor :: Fractional a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> a -> [a] Source

taylor0 :: Fractional a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> a -> [a] Source
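
A sketch of what I expect here: taylor f x dx yields the successive terms f^(n)(x) * dx^n / n! of the expansion of f around x, so for exp around 0 with dx = 1 the terms are 1/n! and the partial sums approach e:

>>> take 3 $ taylor exp 0 1
[1.0,1.0,0.5]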

Maclaurin Series (Tower)

maclaurin :: Fractional a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> [a] Source

maclaurin0 :: Fractional a => (forall s. AD s (Tower a) -> AD s (Tower a)) -> a -> [a] Source

Gradient Descent

gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source

The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

It uses reverse mode automatic differentiation to compute the gradient.
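
A sketch on a simple convex objective (my example; the exact iterates depend on the internal step-size control, so this only checks that the objective improves):

>>> let sq x = x * x
>>> let f [x,y] = sq (1 - x) + sq (y - 2)
>>> f (gradientDescent f [0,0] !! 100) < f [0,0]
True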

gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source

Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.

conjugateGradientDescent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source

Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient, and using forward-on-forward mode for computing extrema.

>>> let sq x = x * x
>>> let rosenbrock [x,y] = sq (1 - x) + 100 * sq (y - sq x)
>>> rosenbrock [0,0]
1
>>> rosenbrock (conjugateGradientDescent rosenbrock [0, 0] !! 5) < 0.1
True

conjugateGradientAscent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source

Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.

stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a] Source

The stochasticGradientDescent function approximates the true gradient of the cost function by a gradient at a single example. As the algorithm sweeps through the training set, it performs the update for each training example.

It uses reverse mode automatic differentiation to compute the gradient. The learning rate is constant throughout, and is set to 0.001.
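
As a sketch (my example, not from the package docs): least-squares fitting of a line y = m*x + b, one sample per update. Both the parameter vector [m, b] and each sample [x, y] are length-2 lists, so they share the same Traversable shape, as the type requires:

let loss [x, y] [m, b] = (auto y - (m * auto x + b)) ^ 2  -- squared residual for one sample
    samples = [[1, 2], [2, 4], [3, 6]] :: [[Double]]      -- points on the line y = 2*x
in stochasticGradientDescent loss samples [0, 0]          -- iterates should drift toward [2, 0]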