Closed: azrael05 closed this issue 5 hours ago
Could you please try the instruction-tuned model instead? It should give you better results.
Thanks! With the instruction-tuned model the output is perfect.
By the way, is there any reason why the gemma_2b_en model produced repetitive output instead of stopping?
It's kind of expected that the pre-trained models only try to complete text. One thing you could try is tuning the sampling parameters to see if you can get a bit more diversity in the output.
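To illustrate why tuning the sampling parameters helps, here is a minimal sketch using a hypothetical bigram table in place of the real Gemma model (the table, the `generate` helper, and its `top_k` parameter are all made up for illustration). Greedy (top-1) decoding can fall into a cycle that only `max_length` terminates, while sampling from the top-k candidates can break out of the loop:

```python
import random

# Toy next-token probabilities: a tiny Markov chain standing in for the LM.
# (Hypothetical data chosen so the greedy path forms a cycle.)
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"the": 0.9, "down": 0.1},  # greedy choice loops back to "the"
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
    "down": {"quietly": 1.0},
    "away": {},      # empty dict = end of text
    "quietly": {},
}

def generate(start, max_length, top_k=1, seed=0):
    """Decode from `start`, keeping only the top_k candidates at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_length and probs.get(out[-1]):
        ranked = sorted(probs[out[-1]].items(), key=lambda kv: -kv[1])[:top_k]
        tokens, weights = zip(*ranked)
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

# top_k=1 is greedy: it cycles "the cat sat the cat sat ..." until max_length.
print(generate("the", 12, top_k=1))
# top_k=2 can take the lower-probability branch and reach an end token.
print(generate("the", 12, top_k=2, seed=3))
```

The same idea applies to the real model: widening the candidate pool (top-k, top-p) or raising the temperature makes a pre-trained model less likely to lock into its highest-probability loop.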
I am just happy to be a part of this chat
Yeah, it's expected to complete the text, but it still shouldn't repeat itself, right? For example, other text generation models might produce half-finished sentences depending on the max_length value, but they don't produce repeating outputs.
I've noticed the 2b model repeating itself as well, although I found it does this when the context of my prompt would be hard even for a human to figure out.
These repetitions are expected on PT (pre-trained) models. It would be better to fine-tune them or use the IT (instruction-tuned) models.
Could you please confirm whether this issue is resolved for you with the above comment? Please feel free to close the issue if it is resolved.
Thank you.
Closing this issue due to lack of recent activity. Please reopen if this is still a valid request.
Thank you!
While generating text with a specified value of max_length, the generated text keeps repeating until the output spans max_length. An example of this can be reproduced with the following code.
As you can observe, the sentence keeps repeating to fill max_length, while it should ideally stop once it has produced the base text.
The code was run on Kaggle with the "gemma_2b_en" model (GPU: P100). To recreate the issue, you can run the given code.