Understanding Teacher Forcing in Seq2Seq Models

Source: DEV Community
When we learn about seq2seq neural networks, there is a term we should know: teacher forcing. When we train a seq2seq model, the decoder generates one token at a time, building the output sequence step by step. At each step, it needs the previous token as input to predict the next one. So we have to decide what to provide as that previous token, since this choice directly affects how well the model learns.

Without teacher forcing, the model uses its own previous prediction as input. Suppose the target is "I am learning":

- Predicts "I" ✅
- Uses "I" and predicts "is" ❌
- Uses "is", and everything goes off track

Here, one small mistake early on makes all the following predictions wrong, and the errors keep compounding step by step. This makes training slow and unstable, and makes it harder for the model to learn correct sequences.

With teacher forcing, instead of feeding the model's own prediction back in, we feed the correct token from the dataset at every step.
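The two decoding loops above can be sketched in a few lines of plain Python. The "decoder" here is a hypothetical toy (a lookup table mapping the previous token to a predicted next token), standing in for a real seq2seq decoder step; `decode`, `toy_step`, and the `<sos>` start token are illustrative names, not part of any library.

```python
def decode(step_fn, target, teacher_forcing):
    """Generate len(target) tokens, starting from a <sos> start token."""
    prev = "<sos>"
    outputs = []
    for t in range(len(target)):
        pred = step_fn(prev)  # decoder predicts the next token
        outputs.append(pred)
        # The key choice: feed the ground-truth token (teacher forcing)
        # or the model's own prediction (free running) as the next input.
        prev = target[t] if teacher_forcing else pred
    return outputs

# Toy decoder that makes one early mistake: after "I" it predicts "is"
# instead of "am", so every later free-running step goes off track.
toy_step = {
    "<sos>": "I",
    "I": "is",        # wrong: ground truth is "am"
    "am": "learning",
    "is": "blue",     # garbage continuation after the mistake
    "blue": "blue",
}.get

target = ["I", "am", "learning"]

print(decode(toy_step, target, teacher_forcing=False))
# → ['I', 'is', 'blue']      the early error compounds
print(decode(toy_step, target, teacher_forcing=True))
# → ['I', 'is', 'learning']  the wrong prediction is still made at step 2,
#                            but step 3 sees the correct input "am"
```

Note that teacher forcing does not hide the step-2 mistake from the loss; it only prevents that mistake from corrupting the input to step 3, which is exactly why training stays stable.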