| Copyright | (c) Edward Kmett 2010-2015 |
|---|---|
| License | BSD3 |
| Maintainer | ekmett@gmail.com |
| Stability | experimental |
| Portability | GHC only |
| Safe Haskell | None |
| Language | Haskell2010 |
Synopsis
- findZero :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
- findZeroNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
- inverse :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a]
- inverseNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a]
- fixedPoint :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
- fixedPointNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
- extremum :: (Fractional a, Eq a) => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a]
- extremumNoEq :: Fractional a => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a]
- gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a]
- constrainedDescent :: forall f a. (Traversable f, RealFloat a, Floating a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> [CC f a] -> f a -> [(a, f a)]
- data CC f a where
- eval :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> a
- gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a]
- conjugateGradientDescent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a]
- conjugateGradientAscent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a]
- stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a]
Newton's Method (Forward AD)
findZero :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #
The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.
Examples:
>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0
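For intuition, the following is a minimal sketch of the Newton iteration that findZero performs, written against diff' from Numeric.AD. The name newtonSketch is hypothetical and this is not the library's actual implementation; module paths assume ad-4.x.

```haskell
{-# LANGUAGE RankNTypes #-}
import Numeric.AD (AD, diff')             -- diff' returns (f x, f' x) in one pass
import Numeric.AD.Mode.Forward (Forward)

-- Hypothetical helper: each Newton step moves from x to x - f x / f' x.
-- Unlike findZero, it never truncates and will blow up if f' x is zero.
newtonSketch :: Fractional a
             => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
newtonSketch f = iterate step
  where
    step x = let (fx, f'x) = diff' f x in x - fx / f'x
```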
findZeroNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #
The findZeroNoEq function behaves the same as findZero except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
inverse :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a] Source #
The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.
Example:
>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0
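The connection to findZero can be made explicit with the sketch below (an assumption about the idea, not necessarily the library's verbatim source): inverting f at y amounts to finding a zero of \x -> f x - auto y.

```haskell
{-# LANGUAGE RankNTypes #-}
import Numeric.AD (AD, auto)
import Numeric.AD.Mode.Forward (Forward)
import Numeric.AD.Newton (findZero)

-- Hypothetical helper: run Newton's method on \x -> f x - auto y from x0.
inverseSketch :: (Fractional a, Eq a)
              => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a]
inverseSketch f x0 y = findZero (\x -> f x - auto y) x0
```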
inverseNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a] Source #
The inverseNoEq function behaves the same as inverse except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
fixedPoint :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #
The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.
>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607
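A fixed point of f is a zero of \x -> f x - x, which suggests the sketch below (an assumption about the idea, not necessarily the library's verbatim source).

```haskell
{-# LANGUAGE RankNTypes #-}
import Numeric.AD (AD)
import Numeric.AD.Mode.Forward (Forward)
import Numeric.AD.Newton (findZero)

-- Hypothetical helper: a fixed point of f is a zero of \x -> f x - x.
fixedPointSketch :: (Fractional a, Eq a)
                 => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]
fixedPointSketch f = findZero (\x -> f x - x)
```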
fixedPointNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #
The fixedPointNoEq function behaves the same as fixedPoint except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
extremum :: (Fractional a, Eq a) => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a] Source #
The extremum function finds an extremum of a scalar function using Newton's method; it produces a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.
>>> last $ take 10 $ extremum cos 1
0.0
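A further usage sketch (assumed, not taken from the module's own examples): Newton steps toward the extremum of the quadratic x^2 - 2*x, whose derivative 2*x - 2 vanishes at x = 1.

```haskell
import Numeric.AD.Newton (extremum)

-- Hypothetical example; the iterates approach 1.0, where the derivative vanishes.
quadExtremum :: [Double]
quadExtremum = take 10 $ extremum (\x -> x * x - 2 * x) 5
```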
extremumNoEq :: Fractional a => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a] Source #
The extremumNoEq function behaves the same as extremum except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
Gradient Ascent/Descent (Reverse AD)
gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source #
The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.) It uses reverse mode automatic differentiation to compute the gradient.
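For example (a usage sketch with an assumed rosenbrock helper, not from the original docs), descending the Rosenbrock function from the origin:

```haskell
import Numeric.AD.Newton (gradientDescent)

-- Hypothetical helpers; only gradientDescent is the library's API.
rosenbrock :: Num a => [a] -> a
rosenbrock [x, y] = (1 - x) ^ 2 + 100 * (y - x * x) ^ 2
rosenbrock _      = error "rosenbrock expects exactly two coordinates"

main :: IO ()
main = print (gradientDescent rosenbrock [0, 0 :: Double] !! 100)
```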
constrainedDescent :: forall f a. (Traversable f, RealFloat a, Floating a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> [CC f a] -> f a -> [(a, f a)] Source #
constrainedDescent obj fs env optimizes the convex function obj subject to the convex constraints f <= 0 for each f in fs. This is done using a log barrier to model the constraints (i.e. Boyd, Chapter 11.3). The returned optimal point for the objective function must satisfy the constraints in fs, but the initial environment, env, needn't be feasible.
data CC f a where Source #

A convex constraint, CC, is a GADT wrapper that hides the existential (s) which is so prevalent in the rest of the API. This is an engineering convenience for managing the skolems.
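To make the log-barrier idea concrete, here is a hedged sketch of a single outer iteration of a barrier method built from gradientDescent. The wrapper Constraint' plays the same role as CC, and the names Constraint', barrierStep, and t are hypothetical; this is not the library's implementation of constrainedDescent, and module paths assume ad-4.x.

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}
import Data.Reflection (Reifies)
import Numeric.AD (auto)
import Numeric.AD.Internal.Reverse (Reverse, Tape)
import Numeric.AD.Newton (gradientDescent)

-- A wrapper in the spirit of CC: it hides the region parameter s so a list of
-- constraint functions can be passed around without ImpredicativeTypes.
data Constraint' f a where
  Constraint' :: (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a)
              -> Constraint' f a

-- One outer iteration of a log-barrier method (Boyd, Chapter 11.3):
-- minimize t * obj x - sum_i log (negate (f_i x)) from a strictly feasible start.
barrierStep
  :: (Traversable f, RealFloat a)
  => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a)  -- objective
  -> [Constraint' f a]                                             -- constraints f_i <= 0
  -> a                                                             -- barrier weight t
  -> f a                                                           -- strictly feasible start
  -> [f a]
barrierStep obj cs t =
  gradientDescent (\xs -> auto t * obj xs
                        - sum [ log (negate (c xs)) | Constraint' c <- cs ])
```

A full barrier method would rerun barrierStep with an increasing sequence of weights t, warm-starting each run from the previous optimum; that is the general strategy the log-barrier description above refers to.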
eval :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> a Source #
gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source #
Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.
conjugateGradientDescent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source #
Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient, and using forward-on-forward mode for computing extrema.
>>> let sq x = x * x
>>> let rosenbrock [x,y] = sq (1 - x) + 100 * sq (y - sq x)
>>> rosenbrock [0,0]
1
>>> rosenbrock (conjugateGradientDescent rosenbrock [0, 0] !! 5) < 0.1
True
conjugateGradientAscent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source #
Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.
stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a] Source #
The stochasticGradientDescent function approximates the true gradient of the cost function by the gradient at a single example. As the algorithm sweeps through the training set, it performs the update for each training example. It uses reverse mode automatic differentiation to compute the gradient. The learning rate is constant throughout and is set to 0.001.
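A usage sketch (assumed, not from the original docs): one sweep of stochastic gradient descent fitting a line y = m*x + b to synthetic points supplied as two-element lists [x, y]. The names points and fits are hypothetical.

```haskell
import Numeric.AD (auto)
import Numeric.AD.Newton (stochasticGradientDescent)

-- Training data: points on the line y = 3*x + 1, each stored as [x, y].
points :: [[Double]]
points = [[x, 3 * x + 1] | x <- [0, 0.1 .. 1]]

-- Squared error of the parameters [m, b] on a single example [x, y];
-- one parameter update is performed per training example.
fits :: [[Double]]
fits = stochasticGradientDescent
         (\[x, y] [m, b] -> let e = auto y - (m * auto x + b) in e * e)
         points
         [0, 0]
```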