hasktorch-indef-0.0.1.0: Core Hasktorch abstractions wrapping FFI bindings

Copyright: (c) Sam Stites 2017
License: BSD3
Maintainer: sam@stites.io
Stability: experimental
Portability: non-portable
Safe Haskell: None
Language: Haskell2010

Torch.Indef.Static.NN.Pooling

Documentation

_featureLPPooling_updateOutput :: Tensor d -> Tensor d -> Double -> Int -> Int -> Bool -> IO () Source #

featureLPPooling forward pass (updates the output tensor)

_featureLPPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> Tensor d -> Double -> Int -> Int -> Bool -> IO () Source #

featureLPPooling backward pass (updates the gradInput tensor)

1D pooling functions

_temporalMaxPooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> IO () Source #

temporalMaxPooling forward pass (updates the output tensor)

_temporalMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> IO () Source #

temporalMaxPooling backward pass (updates the gradInput tensor)

2D pooling functions

type SpatialDilationCheckC kH kW dH dW pH pW dilH dilW = (All KnownDim '[kH, kW, pH, pW, dH, dW, dilH, dilW], (kW > 0) ~ True, (kH > 0) ~ True, (dW > 0) ~ True, (dH > 0) ~ True, (dilW > 0) ~ True, (dilH > 0) ~ True, (Div kW 2 >= pW) ~ True, (Div kH 2 >= pH) ~ True) Source #

Constraint asserting that all hyperparameters are valid and requiring that all dimension values are KnownDims.

type CeilModeOutputDims i k d p o dil ceilMode = If (ceilMode && (Rem ((i - ((dil * (k - 1)) + 1)) + (2 * p)) d > 0)) ((2 + Div ((i - ((dil * (k - 1)) + 1)) + (2 * p)) d) ~ o) ((1 + Div ((i - ((dil * (k - 1)) + 1)) + (2 * p)) d) ~ o) Source #

Type-level conditional asserting what the output dimension must be, depending on whether ceilMode is enabled.
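
For reference, a value-level sketch of the arithmetic this family performs (the function name is illustrative and not part of the library; Div and Rem correspond to divMod here):

ceilModeOutputDim :: Bool -> Int -> Int -> Int -> Int -> Int -> Int
-- ^ illustrative only: ceilMode, input extent i, kernel k, step d, padding p, dilation dil
ceilModeOutputDim ceilMode i k d p dil
  | ceilMode && r > 0 = 2 + q   -- ceil mode keeps the partial window
  | otherwise         = 1 + q   -- floor mode drops it
  where
    effective = i - (dil * (k - 1) + 1) + 2 * p
    (q, r)    = effective `divMod` d

-- >>> ceilModeOutputDim False 32 3 2 1 1
-- 16
-- >>> ceilModeOutputDim True 32 3 2 1 1
-- 17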

type SpatialDilationC iH iW kH kW dH dW pH pW oW oH dilH dilW ceilMode = (SpatialDilationCheckC kH kW dH dW pH pW dilH dilW, CeilModeOutputDims iH kH dH pH oH dilH ceilMode, CeilModeOutputDims iW kW dW pW oW dilW ceilMode, All KnownDim '[oH, oW, iH, iW]) Source #

Top-level constraint that checks CeilModeOutputDims on the height and width dimensions and asserts that every check in SpatialDilationCheckC holds.

dilatedMaxPooling2d Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH dilH dilW ceilMode 
=> KnownDim inPlane 
=> Reifies s W 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> Dilation2d '(dilH, dilW)

dilation size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor '[inPlane, iW, iH]) 
-> BVar s (Tensor '[inPlane, oW, oH]) 

run a backprop-aware dilatedMaxPooling2d function
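
A usage sketch with backprop's evalBP. The parameter constructors are assumptions: Param2d as the shared constructor for Kernel2d/Step2d/Padding2d/Dilation2d and SFalse as the SBool singleton for 'False; adjust to the exports in your build.

import Numeric.Backprop (evalBP)

-- assumes Param2d/SFalse as described above
-- 3x3 kernel, stride 1, padding 1, dilation 2: '[3, 32, 32] -> '[3, 30, 30]
dilatedPool :: Tensor '[3, 32, 32] -> Tensor '[3, 30, 30]
dilatedPool = evalBP $
  dilatedMaxPooling2d
    (Param2d :: Kernel2d   '(3, 3))
    (Param2d :: Step2d     '(1, 1))
    (Param2d :: Padding2d  '(1, 1))
    (Param2d :: Dilation2d '(2, 2))
    (SFalse  :: SBool 'False)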

dilatedMaxPooling2dBatch Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH dilH dilW ceilMode 
=> KnownDim inPlane 
=> KnownDim b 
=> Reifies s W 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> Dilation2d '(dilH, dilW)

dilation size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor '[b, inPlane, iW, iH]) 
-> BVar s (Tensor '[b, inPlane, oW, oH]) 

run a backprop-aware dilatedMaxPooling2d function with a batch dimension.

_dilatedMaxPooling2d Source #

Arguments

:: All KnownDim '[kH, kW, pH, pW, dH, dW, dilH, dilW] 
=> All Dimensions '[d', d] 
=> Reifies s W 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> Dilation2d '(dilH, dilW)

dilation size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor d)

input

-> BVar s (Tensor d')

output

internal function of dilatedMaxPooling2d and dilatedMaxPooling2dBatch. Should not be used.

2D max pooling helpers

_maxPooling2d Source #

Arguments

:: All KnownDim '[kH, kW, pH, pW, dH, dW] 
=> All Dimensions '[d', d] 
=> Reifies s W 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size. Note: default in C is the kernel size.

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor d)

input

-> BVar s (Tensor d')

output

internal function of maxPooling2d and maxPooling2dBatch. Should not be used.

maxPooling2d Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH 1 1 ceilMode 
=> Reifies s W 
=> KnownDim inPlane 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor '[inPlane, iH, iW]) 
-> BVar s (Tensor '[inPlane, oH, oW]) 

backprop-aware maxPooling2d function.
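
A minimal sketch, under the same Param2d/SFalse assumptions as the dilatedMaxPooling2d example above:

import Numeric.Backprop (evalBP)

-- assumed constructors; 2x2 kernel, stride 2, no padding: '[3, 32, 32] -> '[3, 16, 16]
pool :: Tensor '[3, 32, 32] -> Tensor '[3, 16, 16]
pool = evalBP $
  maxPooling2d
    (Param2d :: Kernel2d  '(2, 2))
    (Param2d :: Step2d    '(2, 2))
    (Param2d :: Padding2d '(0, 0))
    (SFalse  :: SBool 'False)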

maxPooling2dBatch Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH 1 1 ceilMode 
=> Reifies s W 
=> KnownDim inPlane 
=> KnownDim b 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> BVar s (Tensor '[b, inPlane, iH, iW]) 
-> BVar s (Tensor '[b, inPlane, oH, oW]) 

backprop-aware maxPooling2d function with a batch dimension.

maxPooling2dWithIO Source #

Arguments

:: All KnownDim '[kH, kW, pH, pW, dH, dW] 
=> All Dimensions '[d', d] 
=> Maybe (IndexTensor d') 
-> Maybe (Tensor d') 
-> Maybe (Tensor d) 
-> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size. Note: default in C is the kernel size.

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> Tensor d 
-> IO (Tensor d', Tensor d' -> IO (Tensor d)) 

internal function of maxPooling2d and maxPooling2dBatch. Should not be used.

maxPooling2dIO Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH 1 1 ceilMode 
=> KnownDim inPlane 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> Tensor '[inPlane, iH, iW] 
-> IO (Tensor '[inPlane, oH, oW], Tensor '[inPlane, oH, oW] -> IO (Tensor '[inPlane, iH, iW])) 

maxPooling2d in IO: returns the output together with a function that computes the gradient of the input from a gradient of the output.

maxPooling2dBatchIO Source #

Arguments

:: SpatialDilationC iH iW kH kW dH dW pH pW oW oH 1 1 ceilMode 
=> KnownDim inPlane 
=> KnownDim b 
=> Kernel2d '(kH, kW)

kernel size

-> Step2d '(dH, dW)

step size

-> Padding2d '(pH, pW)

padding size

-> SBool ceilMode

ceil mode

-> Tensor '[b, inPlane, iH, iW] 
-> IO (Tensor '[b, inPlane, oH, oW], Tensor '[b, inPlane, oH, oW] -> IO (Tensor '[b, inPlane, iH, iW])) 

maxPooling2d with a batch dimension in IO: returns the output together with a function that computes the gradient of the input from a gradient of the output.
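
A sketch of the continuation-style IO API: the call returns the pooled output together with a function mapping a gradient with respect to the output back to a gradient with respect to the input (Param2d and SFalse assumed as above):

-- assumed constructors; shapes follow from the 2x2 / stride-2 / no-padding configuration
poolStep
  :: Tensor '[4, 3, 32, 32]   -- input
  -> Tensor '[4, 3, 16, 16]   -- gradient w.r.t. the output
  -> IO (Tensor '[4, 3, 16, 16], Tensor '[4, 3, 32, 32])
poolStep input gradOut = do
  (out, getGradIn) <- maxPooling2dBatchIO
    (Param2d :: Kernel2d  '(2, 2))
    (Param2d :: Step2d    '(2, 2))
    (Param2d :: Padding2d '(0, 0))
    (SFalse  :: SBool 'False)
    input
  gradIn <- getGradIn gradOut   -- push the output gradient back through the pool
  pure (out, gradIn)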

_spatialAdaptiveMaxPooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> IO () Source #

spatialAdaptiveMaxPooling forward pass (updates the output tensor)

_spatialAdaptiveMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> IO () Source #

spatialAdaptiveMaxPooling backward pass (updates the gradInput tensor)

_spatialFractionalMaxPooling_updateOutput :: Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> IndexTensor d -> Tensor d -> IO () Source #

spatialFractionalMaxPooling forward pass (updates the output tensor)

_spatialFractionalMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> IndexTensor d -> IO () Source #

spatialFractionalMaxPooling backward pass (updates the gradInput tensor)

_spatialMaxUnpooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> IO () Source #

spatialMaxUnpooling forward pass (updates the output tensor)

_spatialMaxUnpooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> IO () Source #

spatialMaxUnpooling backward pass (updates the gradInput tensor)

_spatialAdaptiveAveragePooling_updateOutput :: Tensor d -> Tensor d -> Int -> Int -> IO () Source #

spatialAdaptiveAveragePooling forward pass (updates the output tensor)

_spatialAdaptiveAveragePooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IO () Source #

spatialAdaptiveAveragePooling backward pass (updates the gradInput tensor)

type AvgPool2dOutputDim i k p s ceilMode o = (If (ceilMode && (Rem ((i + (2 * p)) - k) s > 0)) ((2 + Div ((i + (2 * p)) - k) s) ~ o) ((1 + Div ((i + (2 * p)) - k) s) ~ o), (k > 0) ~ True, (s > 0) ~ True, (o > 0) ~ True, (Div k 2 >= p) ~ True) Source #

Type-level constraint that determines the output dimension of average pooling (respecting ceilMode) and asserts that the kernel, step, padding, and output values are valid.
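
The value-level arithmetic, for reference (illustrative only, not a library function):

avgPool2dOutputDim :: Bool -> Int -> Int -> Int -> Int -> Int
-- ^ illustrative only: ceilMode, input extent i, kernel k, padding p, step s
avgPool2dOutputDim ceilMode i k p s
  | ceilMode && r > 0 = 2 + q
  | otherwise         = 1 + q
  where (q, r) = ((i + 2 * p) - k) `divMod` s

-- >>> avgPool2dOutputDim False 32 2 0 2
-- 16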

gapPool2dBatchIO Source #

Arguments

:: varlist ~ '[b, c, iH, iW] 
=> All KnownNat varlist 
=> All KnownDim varlist 
=> AvgPool2dOutputDim iH iH 0 iH False 1 
=> AvgPool2dOutputDim iW iW 0 iW False 1 
=> Tensor '[b, c, iH, iW]

input tensor

-> IO (Tensor '[b, c], Tensor '[b, c] -> IO (Tensor '[b, c, iH, iW])) 

spatial global average pooling on batches in IO
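
A sketch of the call; there is nothing to configure, since the whole spatial extent is averaged per channel (the shape below is just an example):

-- '[4, 512, 7, 7] -> '[4, 512]: average each 7x7 plane down to a scalar
gap :: Tensor '[4, 512, 7, 7] -> IO (Tensor '[4, 512])
gap input = fst <$> gapPool2dBatchIO input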

avgPool2dWithIO Source #

Arguments

:: All KnownNat '[c, iH, iW, oH, oW, kW, kH, dW, dH, padW, padH] 
=> All KnownDim '[c, iH, iW, oH, oW, kW, kH, dW, dH, padW, padH] 
=> AvgPool2dOutputDim iH kH padH dH ceil_mode oH 
=> AvgPool2dOutputDim iW kW padW dW ceil_mode oW 
=> Kernel2d '(kH, kW)

kernel sizes

-> Step2d '(dH, dW)

step sizes

-> Padding2d '(padH, padW)

pad sizes

-> SBool ceil_mode

ceiling mode: when True, will use ceil instead of floor to compute the output shape

-> SBool count_include_pad

count_include_pad: when True, will include the zero-padding in the averaging calculation

-> Tensor '[c, iH, iW]

input tensor

-> IO (Tensor '[c, oH, oW], Tensor '[c, oH, oW] -> IO (Tensor '[c, iH, iW])) 

spatial average pooling with backprop support in IO

avgPool2dBatchIO Source #

Arguments

:: All KnownNat '[b, c, iH, iW, oH, oW, kW, kH] 
=> All KnownDim '[b, c, iH, iW, oH, oW, kW, kH] 
=> AvgPool2dOutputDim iH kH 0 kH False oH 
=> AvgPool2dOutputDim iW kW 0 kW False oW 
=> Kernel2d '(kH, kW)

kernel sizes

-> Tensor '[b, c, iH, iW]

input tensor

-> IO (Tensor '[b, c, oH, oW], Tensor '[b, c, oH, oW] -> IO (Tensor '[b, c, iH, iW])) 

spatial average pooling on batches with backprop support in IO, using defaults (step size equal to the kernel size, no padding, floor mode)
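
A usage sketch of this defaulted variant; per the constraints above, the step size equals the kernel size, padding is zero, and floor mode is used (Param2d assumed as in the earlier examples):

-- assumed Param2d constructor; 2x2 average pooling with stride 2: '[4, 3, 32, 32] -> '[4, 3, 16, 16]
avgStep :: Tensor '[4, 3, 32, 32] -> IO (Tensor '[4, 3, 16, 16])
avgStep input = do
  (out, _getGradIn) <- avgPool2dBatchIO (Param2d :: Kernel2d '(2, 2)) input
  pure out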

avgPool2dBatchWithIO Source #

Arguments

:: All KnownNat '[b, c, iH, iW, oH, oW, kW, kH, dW, dH, padW, padH] 
=> All KnownDim '[b, c, iH, iW, oH, oW, kW, kH, dW, dH, padW, padH] 
=> AvgPool2dOutputDim iH kH padH dH ceil_mode oH 
=> AvgPool2dOutputDim iW kW padW dW ceil_mode oW 
=> Kernel2d '(kH, kW)

kernel sizes

-> Step2d '(dH, dW)

step sizes

-> Padding2d '(padH, padW)

pad sizes

-> SBool ceil_mode

ceiling mode: when True, will use ceil instead of floor to compute the output shape

-> SBool count_include_pad

count_include_pad: when True, will include the zero-padding in the averaging calculation

-> Tensor '[b, c, iH, iW]

input tensor

-> IO (Tensor '[b, c, oH, oW], Tensor '[b, c, oH, oW] -> IO (Tensor '[b, c, iH, iW])) 

spatial average pooling on batches with backprop support in IO

_avgPool2dWithIO Source #

Arguments

:: All KnownNat '[kW, kH, dW, dH, padW, padH] 
=> All KnownDim '[kW, kH, dW, dH, padW, padH] 
=> All Dimensions '[dout, din] 
=> Maybe (Tensor dout)

cached output (optional)

-> Maybe (Tensor din)

cached input gradient (optional)

-> Kernel2d '(kH, kW)

kernel sizes

-> Step2d '(dH, dW)

step sizes

-> Padding2d '(padH, padW)

pad sizes

-> SBool ceil_mode

ceiling mode: when True, will use ceil instead of floor to compute the output shape

-> SBool count_include_pad

count_include_pad: when True, will include the zero-padding in the averaging calculation

-> Tensor din

input tensor

-> IO (Tensor dout, Tensor dout -> IO (Tensor din)) 

generic spatial average pooling with backprop support in IO. The input and output shapes are not constrained relative to one another, so this can be applied to batch or non-batch tensors, but C-level errors may occur if the shapes are inconsistent.

3D pooling functions

_volumetricFractionalMaxPooling_updateOutput :: Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> Int -> Int -> IndexTensor d -> Tensor d -> IO () Source #

volumetricFractionalMaxPooling forward pass (updates the output tensor)

_volumetricFractionalMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> Int -> Int -> IndexTensor d -> IO () Source #

volumetricFractionalMaxPooling backward pass (updates the gradInput tensor)

_volumetricMaxPooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> IO () Source #

volumetricMaxPooling forward pass (updates the output tensor)

_volumetricMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> IO () Source #

volumetricMaxPooling backward pass (updates the gradInput tensor)

_volumetricDilatedMaxPooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> IO () Source #

volumetricDilatedMaxPooling forward pass (updates the output tensor)

_volumetricDilatedMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> IO () Source #

volumetricDilatedMaxPooling backward pass (updates the gradInput tensor)

_volumetricMaxUnpooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> IO () Source #

volumetricMaxUnpooling forward pass (updates the output tensor)

_volumetricMaxUnpooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> IO () Source #

volumetricMaxUnpooling backward pass (updates the gradInput tensor)

_volumetricAdaptiveMaxPooling_updateOutput :: Tensor d -> Tensor d -> IndexTensor d -> Int -> Int -> Int -> IO () Source #

volumetricAdaptiveMaxPooling forward pass (updates the output tensor)

_volumetricAdaptiveMaxPooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IndexTensor d -> IO () Source #

volumetricAdaptiveMaxPooling backward pass (updates the gradInput tensor)

_volumetricAveragePooling_updateOutput :: Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> Bool -> IO () Source #

volumetricAveragePooling forward pass (updates the output tensor)

_volumetricAveragePooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Int -> Bool -> Bool -> IO () Source #

volumetricAveragePooling backward pass (updates the gradInput tensor)

_volumetricAdaptiveAveragePooling_updateOutput :: Tensor d -> Tensor d -> Int -> Int -> Int -> IO () Source #

volumetricAdaptiveAveragePooling forward pass (updates the output tensor)

_volumetricAdaptiveAveragePooling_updateGradInput :: Tensor d -> Tensor d -> Tensor d -> IO () Source #

volumetricAdaptiveAveragePooling backward pass (updates the gradInput tensor)