
Building a Transformer from Scratch
A Transformer is a neural network architecture that processes sequences by learning which parts of the input to pay attention to. The architecture has two main blocks:

Encoder: reads and understands the input
Decoder: generates the output

We are going to build each component of the Transformer one by one.

Task 1: Input Embeddings + Positional Encoding

Transformers take words as input, but neural networks need numbers, so we convert each word into a vector using an embedding layer. But here's the problem: unlike RNNs, Transformers process all words at once and have no sense of order. To fix this we add positional encoding, which tells the model where each word sits in the sequence.

The formula for positional encoding is:

for pos in sequence positions:
    for i in [0, 2, 4, ..., d_model - 2]:  # step = 2 over the embedding dimensions
        # even dimension
        PE(pos, i) = sin(pos / 10000**(i / d_model))
        # odd dimension
        PE(pos, i + 1) = cos(pos / 10000**(i / d_model))

Complete the code below:

import torch
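As a starting point, here is a minimal PyTorch sketch of these two components. The class names `InputEmbeddings` and `PositionalEncoding` and the `max_len` parameter are my own choices, not from the article; the sqrt(d_model) scaling of the embeddings follows the original "Attention Is All You Need" paper.

```python
import math
import torch
import torch.nn as nn

class InputEmbeddings(nn.Module):
    """Maps token ids to d_model-dimensional vectors."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.d_model = d_model
        self.embedding = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # Scale by sqrt(d_model), as in the original Transformer paper
        return self.embedding(x) * math.sqrt(self.d_model)

class PositionalEncoding(nn.Module):
    """Adds the sin/cos position signal from the formula above."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        # 1 / 10000**(i / d_model) for each even index i, computed via exp/log
        div_term = torch.exp(
            torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        # Register as a buffer: saved with the model but not trained
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the matching position rows
        return x + self.pe[:, : x.size(1)]
```

A quick way to sanity-check the shapes: embed a batch of token ids and add positions, e.g. `PositionalEncoding(16)(InputEmbeddings(16, 100)(tokens))` for `tokens` of shape `(batch, seq_len)` yields `(batch, seq_len, 16)`.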

