PositionalEmbedding

class PositionalEmbedding(num_embeddings, embedding_dim, padding_idx, **kwargs)[source]

Class for learned positional embeddings. Behaviour is similar to torch.nn.Embedding. Each position index is assigned its own learned embedding vector. Note that this caps the maximum length of the input sequence at num_embeddings.

This is not the sinusoidal positional embedding introduced in the Transformer model architecture (Vaswani et al., 2017). These embeddings have no functional constraints on how they should behave; they are learned entirely from scratch.

forward(input_ids, incremental_state=None)[source]
input_ids : torch.LongTensor

LongTensor containing the token indices of a batch of input sequences. Shape: (batch size, sequence length).
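The behaviour described above can be sketched as a thin wrapper around torch.nn.Embedding that looks up position indices instead of token indices. This is a simplified illustration, not the library's implementation: the class name is illustrative, padding_idx handling and incremental_state are omitted.

```python
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Embedding):
    """Each position index gets its own learned embedding vector.
    num_embeddings bounds the maximum input sequence length.
    Simplified sketch: ignores padding_idx and incremental decoding."""

    def forward(self, input_ids):
        # input_ids: (batch size, sequence length) of token indices;
        # only its shape and device are used here.
        batch_size, seq_len = input_ids.shape
        # Position indices 0..seq_len-1, broadcast over the batch.
        positions = torch.arange(seq_len, device=input_ids.device)
        positions = positions.unsqueeze(0).expand(batch_size, seq_len)
        # Look up one learned vector per position.
        return super().forward(positions)

emb = LearnedPositionalEmbedding(num_embeddings=512, embedding_dim=64)
input_ids = torch.zeros(2, 10, dtype=torch.long)
out = emb(input_ids)  # shape (2, 10, 64)
```

In practice the position embeddings are added to the token embeddings of the same shape before the first Transformer layer.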