Hi there,
First of all, thanks for your great work on this project!
I noticed in the demo that the visualization focuses on the input token sequence "The quick brown fox jumps over the lazy dog." This is helpful, but I was wondering if there is a way to visualize the attention of the next predicted token. This would be incredibly valuable for understanding the model's "thinking" process and providing better explanations.
Is it possible to implement this feature, or are there any existing tools or methods that could achieve this?
Thank you!
Actually, each token in the visualization already represents the next predicted token given the preceding sequence. Just select any token to see its attention weights.
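If it helps, here is a minimal sketch of how you could inspect that same pattern outside the demo. It assumes a HuggingFace `transformers` GPT-2 model (the library and model choice are my assumption; the project's demo may use a different stack). The attention row at the last input position is what the model looks at while producing the next token:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assumed setup: GPT-2 via HuggingFace transformers, not
# necessarily the model/stack this project's demo uses.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "The quick brown fox jumps over the lazy"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
last_layer = outputs.attentions[-1]

# The row at the final position is the attention the model uses
# while predicting the *next* token.
next_token_attn = last_layer[0, :, -1, :]   # (num_heads, seq_len)
scores = next_token_attn.mean(dim=0)        # average over heads

next_id = outputs.logits[0, -1].argmax().item()
print("Predicted next token:", tokenizer.decode(next_id))
for tok, s in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                  scores.tolist()):
    print(f"{tok:>12}  {s:.3f}")
```

Selecting the last token in the visualization should show you the same distribution this script prints.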