sgd-0.6.0.0: Stochastic gradient descent

Safe Haskell: None
Language: Haskell98

Numeric.SGD


Description

Main module of the stochastic gradient descent (SGD) library.

SGD is a method for optimizing a global objective function defined as a sum of smaller, differentiable functions. The individual component functions share the same set of parameters, represented by the ParamSet class.

To perform SGD, the gradients of the individual functions need to be determined. This can be done manually or automatically, using one of the automatic differentiation libraries (ad, backprop) available in Haskell.

For instance, let's say we have a list of functions defined as:

funs = [\x -> 0.3*x^2, \x -> -2*x, const 3, sin]

The global objective is then defined as:

objective x = sum $ map ($x) funs

We can manually determine the individual derivatives:

derivs = [\x -> 0.6*x, const (-2), const 0, cos]

or use an automatic differentiation library, for instance:

import qualified Numeric.AD as AD
derivs = map
  (\k -> AD.diff (funs !! k))
  [0..length funs-1]

Finally, run allows us to approach a (potentially local) minimum of the global objective function:

>>> run (momentum def id) (take 10000 $ cycle derivs) 0.0
4.180177042912455

where:

  • (take 10000 $ cycle derivs) is the stream of training examples
  • (momentum def id) is the selected SGD variant (momentum), supplied with the default configuration (def) and the function (id) for calculating the gradient from a training example
  • 0.0 is the initial parameter value
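Putting these pieces together, a minimal, self-contained program might look as follows (a sketch, assuming only the sgd package and the manually derived gradients from above):

import Numeric.SGD

-- Component derivatives of the objective, determined manually above
derivs :: [Double -> Double]
derivs = [\x -> 0.6*x, const (-2), const 0, cos]

main :: IO ()
main = do
  -- Cycle through the derivatives 10000 times and follow the gradients
  -- starting from 0.0, using momentum with its default configuration
  let x = run (momentum def id) (take 10000 $ cycle derivs) 0.0
  print x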

SGD variants

momentum Source #

  :: (Monad m, ParamSet p)
  => Config            -- ^ Momentum configuration
  -> (e -> p -> p)     -- ^ Gradient on a training element
  -> SGD m e p

Stochastic gradient descent with momentum. See Numeric.SGD.Momentum for more information.

adaDelta Source #

  :: (Monad m, ParamSet p)
  => Config            -- ^ AdaDelta configuration
  -> (e -> p -> p)     -- ^ Gradient on a training element
  -> SGD m e p

Perform gradient descent using the AdaDelta algorithm. See Numeric.SGD.AdaDelta for more information.

adam Source #

  :: (Monad m, ParamSet p)
  => Config            -- ^ Adam configuration
  -> (e -> p -> p)     -- ^ Gradient on a training element
  -> SGD m e p

Perform gradient descent using the Adam algorithm. See Numeric.SGD.Adam for more information.
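Any of the three variants can be plugged into the drivers described below. For instance, reusing the derivatives from the module introduction, the pure run driver could be invoked as in the following sketch (results not shown; each variant approaches the same minimum at its own rate):

import Numeric.SGD

-- derivatives of the component functions from the module introduction
derivs :: [Double -> Double]
derivs = [\x -> 0.6*x, const (-2), const 0, cos]

viaAdaDelta, viaAdam :: Double
viaAdaDelta = run (adaDelta def id) (take 10000 $ cycle derivs) 0.0
viaAdam     = run (adam     def id) (take 10000 $ cycle derivs) 0.0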

Pure SGD

run Source #

  :: ParamSet p
  => SGD Identity e p  -- ^ Selected SGD method
  -> [e]               -- ^ Training data stream
  -> p                 -- ^ Initial parameters
  -> p

Traverse all the elements in the training data stream in one pass, calculate the subsequent gradients, and apply them progressively starting from the initial parameter values.

Consider using runIO if your training dataset is large.

IO-based SGD

data Config Source #

High-level IO-based SGD configuration

Constructors

Config 

Fields

  • iterNum :: Natural

    Number of iterations over the entire training dataset

  • batchRandom :: Bool

    Should the mini-batch be selected at random? If not, the subsequent training elements will be picked sequentially. Random selection gives no guarantee of seeing each training sample in every epoch.

  • reportEvery :: Double

    How often the value of the objective function should be reported (with 1 meaning once per pass over the training data)

Instances

Eq Config Source #
Ord Config Source #
Show Config Source #
Generic Config Source #
Default Config Source #
type Rep Config Source #

(All instances are defined in Numeric.SGD.)
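Since Config has a Default instance (see def below), a configuration is typically built by updating the default record, as in this sketch:

import Numeric.SGD

-- 20 passes over the data, sequential element selection, objective
-- reported twice per pass
cfg :: Config
cfg = def
  { iterNum     = 20
  , batchRandom = False
  , reportEvery = 0.5
  }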

runIO Source #

  :: ParamSet p
  => Config              -- ^ SGD configuration
  -> SGD IO e p          -- ^ Selected SGD method
  -> (e -> p -> Double)  -- ^ Value of the objective function on a sample element (needed for model quality reporting)
  -> DataSet e           -- ^ Training dataset
  -> p                   -- ^ Initial parameter values
  -> IO p

Perform SGD in the IO monad, regularly reporting the value of the objective function on the entire dataset. A higher-level wrapper which should be convenient to use when the training dataset is large.

An alternative is to use the simpler function run, or to build a custom SGD pipeline based on lower-level combinators (pipeSeq, adaDelta, every, result, etc.).
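As a rough sketch, the introductory example could be driven by runIO as below. The DataSet construction assumes the type is a record with size and elemAt fields, as defined in Numeric.SGD.DataSet; the quality function here merely reports the gradient magnitude, standing in for a real per-element objective:

import Numeric.SGD
import Numeric.SGD.DataSet (DataSet (..))  -- assumed: record with 'size' and 'elemAt'

derivs :: [Double -> Double]
derivs = [\x -> 0.6*x, const (-2), const 0, cos]

main :: IO ()
main = do
  let dataSet = DataSet
        { size   = length derivs
        , elemAt = \i -> return (derivs !! i)
        }
      -- gradient magnitude as a crude quality proxy (it approaches 0 near
      -- the minimum); a real application would supply the actual objective
      -- value of a training element here
      quality f x = abs (f x)
  x <- runIO def {iterNum = 10} (adam def id) quality dataSet 0.0
  print x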

Combinators

pipeSeq :: DataSet e -> Producer e IO () Source #

Pipe the dataset sequentially in a loop.

pipeRan :: DataSet e -> Producer e IO () Source #

Pipe the dataset randomly in a loop.
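For example, a looping producer can be previewed with the standard pipes combinators (a sketch; ds stands for an assumed DataSet value):

import Pipes (runEffect, (>->))
import qualified Pipes.Prelude as P
import Numeric.SGD (pipeSeq)
import Numeric.SGD.DataSet (DataSet)

-- Print the first five elements obtained by cycling over the dataset;
-- pipeRan could be substituted to sample elements at random instead.
preview :: Show e => DataSet e -> IO ()
preview ds = runEffect $ pipeSeq ds >-> P.take 5 >-> P.print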

result Source #

  :: Monad m
  => p                 -- ^ Default value (in case the stream is empty)
  -> Producer p m ()   -- ^ Stream of parameter sets
  -> m p

Extract the result of the SGD calculation (the last parameter set flowing downstream).

every :: Monad m => Int -> (p -> m ()) -> Pipe p p m x Source #

Apply the given monadic function to every k-th parameter set flowing downstream.
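As a small, self-contained illustration of the two combinators above, the sketch below feeds a plain list of values (standing in for the parameter sets produced by an SGD run) through every and extracts the last one with result:

import Pipes (each, (>->))
import Numeric.SGD (every, result)

main :: IO ()
main = do
  -- print every 3rd value flowing downstream, then keep the final one
  final <- result 0 (each [10, 9 .. 0 :: Double] >-> every 3 print)
  print final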

Re-exports

def :: Default a => a #

The default value for this type.