neural-0.3.0.0: Neural Networks in native Haskell

Copyright    (c) Lars Brünjes, 2016
License      MIT
Maintainer   brunjlar@gmail.com
Stability    experimental
Portability  portable
Safe Haskell None
Language     Haskell2010
Extensions
  • ScopedTypeVariables
  • DataKinds
  • RankNTypes
  • TypeOperators
  • ExplicitNamespaces
  • ExplicitForAll

Numeric.Neural.Layer

Description

This module defines special "layer" components and convenience functions for creating such layers.

Documentation

type Layer i o = Component (Vector i) (Vector o)

A Layer i o is a component that maps a Vector of length i to a Vector of length o.
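
For example, a hidden layer mapping 784 inputs to 30 outputs (MNIST-sized, purely illustrative) can be given a concrete type; the sizes are type-level natural numbers, courtesy of DataKinds:

    {-# LANGUAGE DataKinds #-}

    -- assumes: import Numeric.Neural.Layer
    -- A hypothetical hidden layer: 784 input neurons, 30 output neurons.
    -- The sizes 784 and 30 live at the type level.
    type HiddenLayer = Layer 784 30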

linearLayer :: forall i o. (KnownNat i, KnownNat o) => Layer i o

Creates a linear Layer, i.e. a layer that multiplies the input by a weight Matrix and adds a bias to get the output.

Random initialization follows the recommendation from chapter 3 of the online book Neural Networks and Deep Learning by Michael Nielsen.
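
Concretely, the layer computes the affine map y = Wx + b; Nielsen's chapter 3 advice amounts to drawing each weight from a Gaussian with mean 0 and standard deviation 1/√i, and each bias from a standard Gaussian. The following self-contained sketch shows the computation on plain lists instead of the package's length-indexed Vector and Matrix types; the name affine is illustrative only:

    -- Sketch of the computation a linear layer performs, on plain lists.
    -- 'weights' holds one row per output neuron; 'bias' one entry per output.
    affine :: Num a => [[a]] -> [a] -> [a] -> [a]
    affine weights bias x =
        zipWith (+) bias [sum (zipWith (*) row x) | row <- weights]

    -- affine [[1, 0], [0, 1]] [10, 20] [3, 4]  ==  [13, 24]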

layer :: (KnownNat i, KnownNat o) => Diff' -> Layer i o

Creates a Layer as a combination of a linear layer and a non-linear activation function.
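
The specialized layers below can plausibly be written as one-liners in this style; a sketch, assuming Diff' is (morally) a differentiable scalar function, so that any analytic function such as tanh can be passed:

    -- Plausible definitions in terms of 'layer' (a sketch; the package's
    -- actual definitions may differ).
    myTanhLayer :: (KnownNat i, KnownNat o) => Layer i o
    myTanhLayer = layer tanh

    myLogisticLayer :: (KnownNat i, KnownNat o) => Layer i o
    myLogisticLayer = layer (\x -> 1 / (1 + exp (-x)))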

tanhLayer :: (KnownNat i, KnownNat o) => Layer i o

This is a simple Layer, specialized to tanh activation. Output values all lie in the interval [-1,1].

logisticLayer :: (KnownNat i, KnownNat o) => Layer i o

This is a simple Layer, specialized to the logistic function as activation. Output values all lie in the interval [0,1].

reLULayer :: (KnownNat i, KnownNat o) => Layer i o

This is a simple Layer, specialized to the rectified linear unit activation function. Output values are all non-negative.
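
The output ranges of the three activations above are easy to check directly; a self-contained demonstration in plain Haskell (logistic and relu are defined locally for the purpose):

    -- The three activation functions and their output ranges.
    logistic :: Floating a => a -> a
    logistic x = 1 / (1 + exp (-x))      -- values in (0, 1)

    relu :: (Num a, Ord a) => a -> a
    relu = max 0                         -- values >= 0

    main :: IO ()
    main = do
        print (map tanh     [-2, 0, 2 :: Double])  -- in (-1, 1)
        print (map logistic [-2, 0, 2 :: Double])  -- in (0, 1)
        print (map relu     [-2, 0, 2 :: Double])  -- non-negative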

softmax :: (Floating a, Functor f, Foldable f) => f a -> f a

The softmax function normalizes a vector so that all entries lie in [0,1] and sum to 1. The output entries can therefore be interpreted as probabilities.
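
A minimal implementation matching this signature could look as follows; this is a sketch of what the function computes, not necessarily the package's actual definition:

    -- Exponentiate every entry, then divide by the total: the results
    -- are positive and sum to 1. Works for any Foldable Functor.
    softmax' :: (Floating a, Functor f, Foldable f) => f a -> f a
    softmax' xs =
        let ys = fmap exp xs
            s  = sum ys
        in  fmap (/ s) ys

    -- softmax' [1, 2, 3 :: Double]  ~  [0.09, 0.24, 0.67]

In practice one would often subtract the maximum entry before exponentiating, for numerical stability; that variant needs an additional Ord constraint on a.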