Compute the connectionist temporal classification loss between a sequence
of probabilities and a ground truth labeling. Optionally compute the
gradient with respect to the inputs.
\param [in] activations pointer to the activations in either CPU or GPU
            addressable memory, depending on options. We assume a fixed
            memory layout for this 3-dimensional tensor, which has dimensions
            (t, n, p), where t is the time index, n is the minibatch index,
            and p indexes over the probabilities of each symbol in the alphabet.
            The memory layout is (t, n, p) in C order (slowest to fastest changing
            index, aka row-major), or (p, n, t) in Fortran order (fastest to slowest
            changing index, aka column-major). We also assume the tensor is densely
            packed: strides equal the dimensions, with no padding between them.
            More precisely, element (t, n, p), for a problem with mini_batch examples
            in the minibatch and alphabet_size symbols in the alphabet, is located at:
            activations[(t * mini_batch + n) * alphabet_size + p]
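            For illustration, a minimal indexing helper that encodes this layout
            might look like the following sketch (the name act_index is
            hypothetical and not part of this API):
            \code
            /* Hypothetical helper, for illustration only: flat offset of
             * element (t, n, p) in the assumed (t, n, p) C-order layout. */
            static inline size_t act_index(int t, int n, int p,
                                           int mini_batch, int alphabet_size) {
                return ((size_t)t * mini_batch + n) * alphabet_size + p;
            }
            /* activations[act_index(t, n, p, mini_batch, alphabet_size)] is then
             * the activation for symbol p of example n at time step t. */
            \endcode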
\param [out] gradients if not NULL, then gradients are computed. Should be
             allocated in the same memory space as activations, and the memory
             ordering is identical.
\param [in] flat_labels Always in CPU memory. A concatenation
            of all the labels for the minibatch.
\param [in] label_lengths Always in CPU memory. The length of each label
            for each example in the minibatch.
\param [in] input_lengths Always in CPU memory. The number of time steps
            for each sequence in the minibatch.
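            As a sketch of how these three CPU-side arrays fit together, consider
            a minibatch of two examples with labels {1, 2} and {3, 4, 5} over
            10 and 12 time steps respectively (all values illustrative):
            \code
            /* Illustrative values for a minibatch of two examples. */
            int flat_labels[]   = {1, 2, 3, 4, 5}; /* labels concatenated in batch order */
            int label_lengths[] = {2, 3};          /* 2 labels for example 0, 3 for example 1 */
            int input_lengths[] = {10, 12};        /* time steps per example */
            \endcode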
\param [in] alphabet_size The number of possible output symbols. There
            should be this many probabilities for each time step.
\param [in] mini_batch The number of examples in the minibatch.
\param [out] costs Always in CPU memory. The cost of each example in the
             minibatch.
\param [in,out] workspace In the same memory space as activations. Should be of
                the size requested by get_workspace_size.
\param [in] options See struct ctcOptions.
\return Status information
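A minimal CPU-only usage sketch follows. It assumes this documents warp-ctc's
compute_ctc_loss, together with get_workspace_size and struct ctcOptions as
declared in that library's ctc.h; the activation values, problem sizes, and
include path are illustrative only.
\code
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctc.h>   /* warp-ctc public header (assumed include path) */

int main(void) {
    const int alphabet_size = 5;  /* symbol 0 is the blank by default */
    const int mini_batch = 1;

    /* (t, n, p) layout: 2 time steps, 1 example, 5 symbols.
     * Uniform activations purely for illustration. */
    float activations[2 * 1 * 5] = {
        0.2f, 0.2f, 0.2f, 0.2f, 0.2f,  /* t = 0 */
        0.2f, 0.2f, 0.2f, 0.2f, 0.2f   /* t = 1 */
    };
    int flat_labels[]   = {1};  /* single label: symbol 1 */
    int label_lengths[] = {1};
    int input_lengths[] = {2};
    float costs[1];

    ctcOptions options;
    memset(&options, 0, sizeof(options));
    options.loc = CTC_CPU;      /* run the CPU implementation */
    options.num_threads = 1;

    size_t workspace_bytes = 0;
    get_workspace_size(label_lengths, input_lengths,
                       alphabet_size, mini_batch, options, &workspace_bytes);
    void* workspace = malloc(workspace_bytes);

    /* Pass NULL for gradients to compute only the per-example costs. */
    ctcStatus_t status = compute_ctc_loss(activations, NULL,
                                          flat_labels, label_lengths,
                                          input_lengths, alphabet_size,
                                          mini_batch, costs, workspace,
                                          options);
    if (status == CTC_STATUS_SUCCESS)
        printf("cost[0] = %f\n", costs[0]);

    free(workspace);
    return 0;
}
\endcode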