Butanium opened this issue 5 months ago
Oh wait, it seems like token indexing is only supposed to work with `tracer.invoke` calls.
It would be nice if it also worked on directly batched input; I'm not sure how easy that would be to add with the current implementation.
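In the meantime, one way to get the first non-pad token of each prompt from a directly batched input is to recompute it from the attention mask outside of nnsight. A minimal sketch with plain `transformers`/`torch` (same prompts as the repro below; this doesn't touch nnsight at all, it's just the bookkeeping I'd like `.token` to do for me):

```python
import torch
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default

batch = tok(["ab dfez zd", "a", "b"], return_tensors="pt", padding=True)
ids, mask = batch["input_ids"], batch["attention_mask"]

# First real (non-pad) position per row, valid for either padding side:
# fill padded positions with a large index and take the row-wise minimum.
pos = torch.arange(mask.size(1)).expand_as(mask)
first = pos.masked_fill(mask == 0, mask.size(1)).min(dim=1).values
first_tokens = ids[torch.arange(ids.size(0)), first]
print(first_tokens)  # tensor([397, 64, 65]) with the gpt2 tokenizer
```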
OK, but with `UnifiedTransformer` token indexing doesn't work, because the padding side is right by default:
```python
l = ["ab dfez zd", "a", "b"]

from nnsight import LanguageModel

model = LanguageModel("gpt2", device_map="cpu")

# Directly batched input: .token[0] on the whole batch
with model.trace(l):
    inp = model.input[1]['input_ids'].token[0].save()

# One invoke per prompt: .token[0] inside each invoke
with model.trace() as tracer:
    inp_l = []
    for s in l:
        with tracer.invoke(s):
            inp_l.append(model.input[1]['input_ids'].token[0].save())

from nnsight.models.UnifiedTransformer import UnifiedTransformer

umodel = UnifiedTransformer("gpt2", device="cpu")

# Same per-invoke indexing, but through UnifiedTransformer (pads on the right)
with umodel.trace() as tracer:
    inp_l2 = []
    for s in l:
        with tracer.invoke(s):
            inp_l2.append(umodel.input[1]['input'].token[0].save())

print(inp)
print([i.item() for i in inp_l])
print([i.item() for i in inp_l2])
```
Output:

```
You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Loaded pretrained model gpt2 into HookedTransformer
You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
tensor([ 397, 50256, 50256])
[397, 64, 65]
[397, 50256, 50256]
```
Using `.token[0]` returns the padding token (50256) instead of the first token of each prompt.
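A possible workaround for the `UnifiedTransformer` case (just a sketch, I haven't checked how nnsight slices the per-invoke input): since padding is on the right, the first real token of each prompt should sit at position 0, so plain positional indexing ought to avoid the left-padding offset that `.token[0]` applies:

```python
# Hypothetical workaround, assuming each invoke sees its own right-padded row(s):
# index position 0 directly instead of going through .token[0].
with umodel.trace() as tracer:
    inp_l3 = []
    for s in l:
        with tracer.invoke(s):
            inp_l3.append(umodel.input[1]['input'][:, 0].save())

print([i.item() for i in inp_l3])  # hopefully [397, 64, 65]
```

That said, it would still be nicer if `.token` took the model's padding side into account itself.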