LambdaNet-0.2.0.0: A configurable and extensible neural network library

Safe Haskell: None
Language: Haskell98

Network.Trainer

Synopsis

Documentation

class Trainer a where Source

Trainer is a typeclass for all trainer types - a trainer will take in an instance of itself, a network, and a list of training data, and return a new network trained on the data.

Methods

fit :: Floating b => Selection b -> a -> Network b -> [TrainingData b] -> Network b

data BackpropTrainer a Source

A BackpropTrainer performs simple backpropagation on a neural network. It can be used as the basis for more complex trainers.

Constructors

BackpropTrainer 

Fields

eta :: a

The learning rate.

cost :: CostFunction a

The cost function used to measure the network's error.

cost' :: CostFunction' a

The derivative of the cost function, used during backpropagation.

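For illustration, here is a minimal sketch of constructing a trainer with record syntax, pairing an arbitrary learning rate of 0.3 with the quadraticCost and quadraticCost' functions defined below:

-- A minimal sketch; the 0.3 learning rate is an arbitrary choice.
trainer :: BackpropTrainer Double
trainer = BackpropTrainer
  { eta   = 0.3
  , cost  = quadraticCost   -- the library's quadratic cost, defined below
  , cost' = quadraticCost'  -- its derivative
  }
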
type CostFunction a = Vector a -> Vector a -> a Source

A CostFunction is used for evaluating a network's performance on a given input.

type CostFunction' a = Vector a -> Vector a -> Vector a Source

A CostFunction' (the derivative of a CostFunction) is used in backpropagation.
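
As a sketch, a custom cost/derivative pair can be written against these two synonyms. The names and the dropped 1/2 factor below are ours, not the library's, and the derivative follows the same sign convention as quadraticCost' documented later:

import Numeric.LinearAlgebra

-- A hypothetical CostFunction: the quadratic cost without the 1/2 factor.
doubledQuadraticCost :: (Floating (Vector a), Container Vector a) => CostFunction a
doubledQuadraticCost y a = sumElements ((y - a) ^ 2)

-- A hypothetical CostFunction': elementwise 2 * (y - a), mirroring quadraticCost'.
doubledQuadraticCost' :: (Floating (Vector a), Container Vector a, Num a) => CostFunction' a
doubledQuadraticCost' y a = scale 2 (y - a)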

type TrainingData a = (Vector a, Vector a) Source

A tuple of (input, expected output)
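
For example, a small training set for XOR, with vectors built by hmatrix's fromList:

import Numeric.LinearAlgebra (fromList)

-- XOR training data: each pair is (input, expected output).
xorData :: [TrainingData Double]
xorData = [ (fromList [0, 0], fromList [0])
          , (fromList [0, 1], fromList [1])
          , (fromList [1, 0], fromList [1])
          , (fromList [1, 1], fromList [0])
          ]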

type Selection a = [TrainingData a] -> [[TrainingData a]] Source

A selection function for performing gradient descent
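
A Selection only decides how the training data is grouped into batches for gradient descent. For instance, a hypothetical full-batch selection (our name, not the library's) uses the entire data set as a single batch:

-- A hypothetical Selection for full-batch gradient descent:
-- the whole training set forms one batch.
fullBatch :: Selection a
fullBatch ts = [ts]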

type TrainCompletionPredicate a = Network a -> BackpropTrainer a -> [TrainingData a] -> Int -> Bool Source

A predicate (given a network, a trainer, a list of training data, and the number of fit iterations performed so far) that tells the trainer when to stop training.
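
For example, a hypothetical predicate (ours, not the library's) that stops after a fixed number of fit iterations, ignoring the network's actual error:

-- A hypothetical TrainCompletionPredicate: stop after n iterations of fit.
afterIterations :: Int -> TrainCompletionPredicate a
afterIterations n _network _trainer _trainingData i = i >= n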

trainNTimes :: (Floating (Vector a), Container Vector a, Product a) => Network a -> BackpropTrainer a -> Selection a -> [TrainingData a] -> Int -> Network a Source

Given a network, a trainer, a list of training data, and N, this function trains the network with the list of training data N times
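
A usage sketch, assuming net is an already-initialized Network Double (network construction lives in Network.Network) and trainer and xorData are as in the examples above, using the online selection function documented below:

-- Train on the XOR data 1000 times, one example per update.
-- 'net' is assumed to be an initialized Network Double.
trained :: Network Double
trained = trainNTimes net trainer online xorData 1000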

trainUntilErrorLessThan :: (Floating (Vector a), Container Vector a, Product a, Ord a) => Network a -> BackpropTrainer a -> Selection a -> [TrainingData a] -> a -> Network a Source

Given a network, a trainer, a list of training data, and an error value, this function trains the network with the list of training data until the network's error (the average of the errors over each piece of training data) falls below the given value.
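
Under the same assumptions, a sketch that trains until the averaged error drops below 0.01, using minibatches of size 2 (see minibatch below):

-- Keep training until the network's average error falls below 0.01.
trained' :: Network Double
trained' = trainUntilErrorLessThan net trainer (minibatch 2) xorData 0.01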

trainUntil :: (Floating (Vector a), Container Vector a, Product a) => Network a -> BackpropTrainer a -> Selection a -> [TrainingData a] -> TrainCompletionPredicate a -> Int -> Network a Source

This function trains a network until a given TrainCompletionPredicate is satisfied.
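
A sketch using the hypothetical afterIterations predicate from above; the final Int is presumably the starting value of the iteration counter that is handed to the predicate:

-- Train online until 500 fit iterations have been performed,
-- starting the iteration counter at 0.
trained'' :: Network Double
trained'' = trainUntil net trainer online xorData (afterIterations 500) 0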

quadraticCost :: (Floating (Vector a), Container Vector a) => Vector a -> Vector a -> a Source

The quadratic cost function: (1/2) * sum ((y - a) ^ 2)
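
A quick worked example (the vectors are ours): with y = [1, 0] and a = [0.8, 0.1], the squared differences are 0.04 and 0.01, so the cost comes to (0.04 + 0.01) / 2:

-- Approximately 0.025, up to floating-point rounding.
exampleCost :: Double
exampleCost = quadraticCost (fromList [1, 0]) (fromList [0.8, 0.1])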

quadraticCost' :: Floating (Vector a) => Vector a -> Vector a -> Vector a Source

The derivative of the quadratic cost function, computed elementwise: y - a

minibatch :: (Floating (Vector a), Container Vector a) => Int -> [TrainingData a] -> [[TrainingData a]] Source

The minibatch function becomes a Selection when partially applied with the minibatch size
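
For instance, assuming minibatch partitions the list in order, the four-element xorData set from above splits into two batches of two:

-- With the four-element xorData set, the batch sizes are [2, 2].
batchSizes :: [Int]
batchSizes = map length (minibatch 2 xorData)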

online :: (Floating (Vector a), Container Vector a) => [TrainingData a] -> [[TrainingData a]] Source

Train the network online: each training example forms its own batch, so the network is updated after every example.

backprop :: (Floating (Vector a), Container Vector a, Product a) => BackpropTrainer a -> Network a -> [TrainingData a] -> Network a Source

Perform backpropagation on a single batch of training data, returning the network with updated weights and biases.

inputs :: (Floating (Vector a), Container Vector a, Product a) => Vector a -> Network a -> [Vector a] Source

The inputs function performs a similar task to outputs, but returns a list of the unactivated (pre-activation) input vectors for each layer.

outputs :: (Floating (Vector a), Container Vector a, Product a) => Vector a -> Network a -> [Vector a] Source

The outputs function scans over each layer of the network and stores the activated results

deltas :: (Floating (Vector a), Container Vector a, Product a) => BackpropTrainer a -> Network a -> TrainingData a -> [Vector a] Source

The deltas function returns a list of layer deltas.

hiddenDeltas :: (Floating (Vector a), Container Vector a, Product a) => Network a -> Vector a -> [Vector a] -> [Vector a] Source

Compute the hidden layer deltas

calculateNablas :: (Floating (Vector a), Container Vector a, Product a) => BackpropTrainer a -> Network a -> Network a -> TrainingData a -> Network a Source

Calculate the nablas for a single training example and return them as a network (so each weight and bias gets its own nabla).

fit :: (Floating (Vector a), Container Vector a, Product a) => Selection a -> BackpropTrainer a -> Network a -> [TrainingData a] -> Network a Source

The fit method declares BackpropTrainer to be an instance of Trainer: instance (Floating a) => Trainer (BackpropTrainer a).

evaluate :: (Floating (Vector a), Container Vector a, Product a) => BackpropTrainer a -> Network a -> TrainingData a -> a Source

Use the cost function to determine the error of a network on a single piece of training data.
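
As a closing sketch, a hypothetical helper (our name) that averages evaluate over a whole data set; this is the kind of averaged error that trainUntilErrorLessThan compares against its threshold:

-- A hypothetical helper: a network's average error over a data set.
averageError :: (Floating (Vector a), Container Vector a, Product a, Fractional a)
             => BackpropTrainer a -> Network a -> [TrainingData a] -> a
averageError t n ts = sum (map (evaluate t n) ts) / fromIntegral (length ts)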