Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0).
Hi,
Is there currently a way to support attribution for a pre-generated input/output pair (i.e., without relying on the model's own generation)?

For example, suppose I have the following input/output pair:

"I have two toys, a fluffy bunny and a race car. I want to pick the one that is fast and can go on a circuit, which one should I pick?" "The race car".

I want to be able to run the saliency analysis on the tokens of "The race car", but without having to rely on the model outputting those specific words; essentially, I want to force the model to "generate" a pre-defined output.
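To make the idea concrete, here is a minimal sketch of what I mean using plain transformers/PyTorch rather than Ecco's API (GPT-2 and gradient-x-input attribution are just assumptions for illustration): the pre-defined output is appended to the prompt, and gradients are taken with respect to the input embeddings of the score the model assigns to the forced tokens.

```python
# Sketch of "teacher-forced" saliency: instead of letting the model
# generate, append a pre-defined output and attribute its tokens back
# to the input. GPT-2 and gradient-x-input are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = ("I have two toys, a fluffy bunny and a race car. I want to pick "
          "the one that is fast and can go on a circuit, which one should I pick? ")
forced_output = "The race car"

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = tokenizer(forced_output, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, output_ids], dim=1)

# Run the forward pass through the input embeddings so gradients can
# be taken with respect to them.
embeddings = model.get_input_embeddings()(input_ids).detach()
embeddings.requires_grad_(True)
logits = model(inputs_embeds=embeddings).logits

# Sum the log-probabilities the model assigns to the forced output
# tokens (each token is predicted from the position just before it).
log_probs = torch.log_softmax(logits, dim=-1)
n_prompt = prompt_ids.shape[1]
score = sum(log_probs[0, pos - 1, input_ids[0, pos]]
            for pos in range(n_prompt, input_ids.shape[1]))

# Gradient-x-input saliency: one importance score per token.
score.backward()
saliency = (embeddings.grad[0] * embeddings[0]).sum(dim=-1).abs()
for tok, s in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), saliency):
    print(f"{tok!r}: {s.item():.4f}")
```

With teacher forcing like this, the saliency map is computed for exactly the output I specify, regardless of what the model would have sampled on its own.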
Thanks