Understanding Seq2Seq Neural Networks – Part 6: Decoder Outputs and the Fully Connected Layer


By Rijul Rajesh, via Dev.to

In the previous article, we looked at the embedding values in the encoder and the decoder. The two have different input words and symbols (tokens) and different weights, which result in different embedding values for each token.

Because we have just finished encoding the English sentence “Let’s go,” the decoder starts with the embedding values for the start token. The decoder then performs its computations using two layers of LSTMs, each with two LSTM cells. The output values from the top layer of LSTM cells (the short-term memories, or hidden states) are then transformed by additional weights and biases in what is called a fully connected layer. We will explore this further in the next article.

Looking for an easier way to install tools, libraries, or entire repositories? Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance. Just run: ipm install repo-nam
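As a rough illustration of that last step, here is a minimal NumPy sketch of a fully connected layer acting on the decoder's top-layer hidden state. The hidden-state values, the random weights, and the small output vocabulary are all made up for the example; the article's actual network learns these values during training.

```python
import numpy as np

# Assumed setup, matching the article's description: the top LSTM
# layer has two cells, so its hidden state holds two values.
np.random.seed(0)

hidden_size = 2                          # two LSTM cells in the top layer
vocab = ["<EOS>", "ir", "vamos", "y"]    # hypothetical output vocabulary
vocab_size = len(vocab)

# Fully connected layer: additional weights and biases
# (randomly initialized here; learned in a real network).
W = np.random.randn(vocab_size, hidden_size)
b = np.random.randn(vocab_size)

# Hidden state from the decoder's top LSTM layer (made-up values).
h = np.array([0.25, -0.6])

# The fully connected layer transforms the hidden state into one
# score (logit) per vocabulary token.
logits = W @ h + b

# A softmax turns the scores into a probability for each token.
probs = np.exp(logits) / np.exp(logits).sum()

print(probs.shape)  # one probability per vocabulary token
```

The key point the sketch shows is the shape change: two hidden-state values go in, and one score per output token comes out, regardless of vocabulary size.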



