grain.chain

A chain is an autograd operator in grain, equivalent to:
- pytorch: torch.nn.Module
- chainer: chainer.Chain or chainer.Link

Structs in grain.functions cannot be applied to a Variable without new or applyForward. In contrast, the chains in grain.chain can be applied to a Variable directly via opCall.

TODO test chains as functions
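
A minimal usage sketch (not taken from the library's own examples; the .variable constructor, the module paths, and the Linear constructor arguments (nInput, nOutput) are assumptions):

    import grain.autograd : HostStorage, variable; // module path assumed
    import grain.chain : Linear, relu;

    // a parametric chain is a plain struct; constructor arguments assumed to be (nInput, nOutput)
    auto fc = Linear!(float, HostStorage)(3, 2);
    // assumed: .variable converts a nested array into a Variable!(float, 2, HostStorage)
    auto x = [[1f, 2f, 3f], [4f, 5f, 6f]].variable;
    auto h = fc(x);   // chains are applied via opCall
    auto y = relu(h); // free functions in grain.chain are applied directly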

Members

Functions

abs
auto abs(Variable!(T, dim, Storage) x)

abs

addVec
auto addVec(Variable!(T, 2, Storage) a, Variable!(T, 1, Storage) b)

matrix + vector row-wise addition. TODO replace this with broadcasted addition
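
For example (a sketch assuming .variable construction on HostStorage):

    auto a = [[1f, 2f, 3f], [4f, 5f, 6f]].variable; // shape [2, 3]
    auto b = [10f, 20f, 30f].variable;              // shape [3]
    auto c = addVec(a, b);                          // b is added to every row of a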

convolution
auto convolution(Variable!(T, dim, Storage) x, Variable!(T, dim, Storage) w, int[imDim] stride, int[imDim] pad, int[imDim] dilation)

tensor convolution (computes cross-correlation by default)
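
A hypothetical 2-D call sketch; the [N, C, H, W] input layout and [C_out, C_in, kH, kW] filter layout are assumptions:

    // x: 4-D input Variable, w: 4-D filter Variable (layouts assumed above)
    auto y = convolution(x, w, [1, 1], [1, 1], [1, 1]); // stride 1, padding 1, dilation 1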

cos
auto cos(Variable!(T, dim, Storage) x)

cos x

crossEntropy
auto crossEntropy(Variable!(float, 2, Storage) x, Variable!(int, 1, Storage) t, int ignoreIndex)

cross-entropy loss (logSoftmax followed by negativeLogLikelihood)
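
For example (a sketch; the .variable construction and the concrete ignoreIndex value are only illustrations):

    auto x = [[0.1f, 2.0f, -1.0f], [1.5f, 0.0f, 0.3f]].variable; // scores, shape [2, 3]
    auto t = [1, 2].variable;                                    // gold class IDs, shape [2]
    auto loss = crossEntropy(x, t, -100); // targets equal to ignoreIndex (-100 here) are skipped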

dropout
auto dropout(Variable!(T, dim, Storage) x, float ratio, bool isTrain)

dropout: applies a random zeroing mask to x with drop probability ratio when isTrain is true
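
For example, assuming x is a float Variable:

    auto h = dropout(x, 0.5, true);  // training: elements are zeroed at random with probability 0.5
    auto y = dropout(x, 0.5, false); // evaluation: the random mask is not applied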

exp
auto exp(Variable!(T, dim, Storage) x)

exp x

log
auto log(Variable!(T, dim, Storage) x)

log x

logSoftmax
auto logSoftmax(Variable!(T, dim, Storage) x)

log(exp(x_i) / sum_j exp(x_j)), i.e. the logarithm of the softmax
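
Typically combined with negativeLogLikelihood, which is what crossEntropy does; using x and t from the crossEntropy sketch above:

    auto logp = logSoftmax(x);                        // log-probabilities
    auto loss = negativeLogLikelihood(logp, t, -100); // same as crossEntropy(x, t, -100)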

matMul
auto matMul(Variable!(T, 2, Storage) a, Variable!(T, 2, Storage) b)

matrix x matrix multiplication
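
For example (a sketch assuming .variable construction):

    auto a = [[1f, 2f], [3f, 4f], [5f, 6f]].variable; // shape [3, 2]
    auto b = [[1f, 0f], [0f, 1f]].variable;           // shape [2, 2]
    auto c = matMul(a, b);                            // shape [3, 2]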

neg
auto neg(Variable!(T, dim, Storage) x)

-x

negativeLogLikelihood
auto negativeLogLikelihood(Variable!(float, 2, Storage) x, Variable!(int, 1, Storage) t, int ignoreIndex)

negative log-likelihood loss, -log p(x); note that p(x) should be normalized (e.g. the output of logSoftmax)

opBinaryFunc
auto opBinaryFunc(Variable!(T, dim, Storage) a, Variable!(T, dim, Storage) b, T alpha1, T alpha2)

op(alpha1 * a, alpha2 * b)

pow
auto pow(Variable!(T, dim, Storage) x, T power)

x raised to power (element-wise)

reciprocal
auto reciprocal(Variable!(T, dim, Storage) x)

1 / x

relu
auto relu(Variable!(T, dim, Storage) x)

rectified linear unit (ReLU) nonlinearity

sigmoid
auto sigmoid(Variable!(T, dim, Storage) x)

sigmoid nonlinearity

sin
auto sin(Variable!(T, dim, Storage) x)

sin x

squeeze
auto squeeze(Variable!(T, dim, Storage) x)

squeeze: remove a redundant size-1 dimension (axis d)

sum
auto sum(Variable!(T, dim, Storage) x)

summation

tan
auto tan(Variable!(T, dim, Storage) x)

tan x

tanh
auto tanh(Variable!(T, dim, Storage) x)

tanh nonlinearity

unsqueeze
auto unsqueeze(Variable!(T, dim, Storage) x)

unsqueeze: add a redundant size-1 dimension (axis d)
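
Because the static dimension of the result changes, the axis d must be a compile-time (template) argument; assuming the call forms x.unsqueeze!d and x.squeeze!d, a sketch:

    auto v = [1f, 2f, 3f].variable; // shape [3]
    auto m = v.unsqueeze!0;         // assumed form: shape [3]    -> [1, 3]
    auto w = m.squeeze!0;           // assumed form: shape [1, 3] -> [3]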

view
auto view(Variable!(T, sdim, Storage) x, ptrdiff_t[tdim] shape)

reorganizes the shape while keeping the total number of elements, a.k.a. reshape. At most one dimension of the new shape can be -1; in that case its value is inferred from the size of the tensor and the remaining dimensions.
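
For example (a sketch; the exact typing of the shape literal may require ptrdiff_t values):

    auto x = [[1f, 2f, 3f], [4f, 5f, 6f]].variable; // shape [2, 3]
    auto y = x.view([3, 2]);                        // shape [3, 2]
    auto z = x.view([-1, 2]);                       // the -1 is inferred as 3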

Structs

Convolution
struct Convolution(T, size_t dim, alias Storage)

convolution operator (a parametric chain)

Embedding
struct Embedding(T, alias Storage)

embeds an ID into a vector

Linear
struct Linear(T, alias Storage)

linear operator
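
A sketch of composing parametric chains into a small network, in the spirit of chainer.Chain; the module paths and the Linear constructor arguments (nInput, nOutput) are assumptions:

    import grain.autograd : Variable, HostStorage; // module path assumed
    import grain.chain : Linear, relu;

    struct MLP(T, alias Storage)
    {
        Linear!(T, Storage) fc1, fc2;

        this(int nInput, int nHidden, int nOutput)
        {
            this.fc1 = Linear!(T, Storage)(nInput, nHidden);
            this.fc2 = Linear!(T, Storage)(nHidden, nOutput);
        }

        auto opCall(Variable!(T, 2, Storage) x)
        {
            // two linear chains with a ReLU nonlinearity in between
            return this.fc2(relu(this.fc1(x)));
        }
    }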

Meta