x = model.generate_from_string("what do you take when do you go hunting? \n (A) gun (B) wow (C) beat (D) fuck (E) carry (F) cast (G) challenge (H) stop (I) drop (J) leave (K) turn (L) climb (M) no (N) enter (O) move ", tokenizer=tokenizer)
-> ['gun']
but if there are too many candidates, the output looks like this:
-> ['catalogue (O) working']
The model was trained on up to 5 candidates, but I think it generally works fine on more (though performance will obviously deteriorate as the candidate count grows).
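If degradation with long candidate lists is a problem, one hedged workaround (not part of this repo — `model.generate_from_string` and `tokenizer` are assumed to behave as in the example above, and the helper names below are hypothetical) is to split the candidates into chunks of at most 5, the size the model saw during training, and run an elimination round among the chunk winners:

```python
def format_prompt(question, candidates):
    """Render candidates as '(A) foo (B) bar ...' after the question."""
    letters = [chr(ord("A") + i) for i in range(len(candidates))]
    options = " ".join(f"({l}) {c}" for l, c in zip(letters, candidates))
    return f"{question} \n {options}"

def chunked(seq, size=5):
    """Split seq into consecutive chunks of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def pick_answer(model, tokenizer, question, candidates):
    # Round 1: ask the model once per chunk of <= 5 candidates.
    winners = []
    for chunk in chunked(candidates, 5):
        winners.extend(
            model.generate_from_string(format_prompt(question, chunk),
                                       tokenizer=tokenizer))
    if len(winners) == 1:
        return winners
    # Final round among the chunk winners; with up to 25 original
    # candidates this is again at most 5 options.
    return model.generate_from_string(format_prompt(question, winners),
                                      tokenizer=tokenizer)
```

This is only a sketch: it assumes each call returns a list with one winning candidate string, and with more than 25 candidates the final round would itself need another level of chunking.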