Closed: benlipkin closed this issue 2 years ago
What happens now is: the special tokens are chopped off from each stimulus when extracting stimulus-level representations evaluated within a context. What remains is supporting extraction of the first-token/last-token/special-token representation for a single stimulus, since special tokens are now chopped off by default: within a context they represent the whole context rather than any individual stimulus.
Whoops, that was an incorrect reference to this issue; it should have been #18 instead.
Special tokens (e.g., those added by the tokenizer) cause off-by-one errors when using indices to extract sentence representations from a context.
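A minimal sketch of the indexing problem, with hypothetical helper and token names (not from this repo): a stimulus's token indices computed against the raw tokenization shift once the tokenizer prepends a special token, so slicing the context's hidden states by the raw indices is off by one unless the offset is accounted for.

```python
# Special tokens assumed for illustration (BERT-style names).
SPECIAL_TOKENS = {"[CLS]", "[SEP]"}

def strip_special(tokens):
    """Remove special tokens and report the index offset introduced
    by leading special tokens (e.g., a prepended [CLS])."""
    offset = 0
    while offset < len(tokens) and tokens[offset] in SPECIAL_TOKENS:
        offset += 1
    stripped = [t for t in tokens if t not in SPECIAL_TOKENS]
    return stripped, offset

# Tokenized context after the tokenizer adds special tokens.
context = ["[CLS]", "the", "cat", "sat", "[SEP]"]

# Stimulus "cat sat" sits at indices 1:3 in the raw tokenization,
# but at 2:4 once [CLS] is prepended -- the off-by-one error.
stripped, offset = strip_special(context)
raw = slice(1, 3)
corrected = slice(raw.start + offset, raw.stop + offset)

print(stripped[raw.start:raw.stop])  # ['cat', 'sat']
print(context[corrected])            # ['cat', 'sat'] -- shifted indices agree
```

Chopping the special tokens off before slicing (as the fix above does) avoids the shift entirely, at the cost of losing the special-token representations themselves.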