Open S4lXLV opened 6 months ago
Hello! In its current form, AutoQuizzer is only a nice demonstration.
I invite you to fork the repo and replace the current Generator with another one that is compatible with the desired model.
Some docs:
Thank you. Another question: is it always going to give the same questions, and only take the beginning of the webpage, even if I use a bigger model like Command R+? Right now, every time I run it, it gives the exact same questions from the very beginning of the webpage. I tried increasing the tokens, but got the same result. I would like to get random questions from all over the article I pass in. Maybe it is possible to achieve this with Groq.
As mentioned in the README, I am truncating the text to the first 4k characters: in the online version, I do not want to hit Groq rate limits. https://github.com/anakin87/autoquizzer/blob/083f13e3a38e0d5cc9a484937a7469d56405cfa4/backend/pipelines.py#L35
If you are using the project locally, you can safely remove this limit.
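For context, the truncation is just a character slice on the fetched page text. A minimal sketch of the idea (the names here are illustrative, not the actual code in `pipelines.py`):

```python
MAX_CHARS = 4_000  # limit used by the hosted demo to avoid Groq rate limits

def prepare_text(page_text: str, truncate: bool = True) -> str:
    """Optionally cut the scraped article down to its first MAX_CHARS characters."""
    if truncate:
        return page_text[:MAX_CHARS]  # hosted demo: only the start of the page
    return page_text  # running locally: keep the whole article
```

Running locally, you would pass `truncate=False` (or simply delete the slice) so the whole article reaches the model.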
Thank you for being patient with me; I am kinda new to all of this. So by removing this line, you mean it would go through the full page, right? Also, do I need to increase the `max_tokens` here?
```python
generation_kwargs={"max_tokens": 1000, "temperature": 0.5, "top_p": 1},
```
Last question: how can I make it generate more than 5 questions at a time? Is changing "create 5 multiple choice..." enough?
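One way to get varied questions without removing the limit entirely is to feed the model a different slice of the article on each run. This is not something AutoQuizzer does; it is just a hypothetical helper you could add yourself (raising `temperature` in `generation_kwargs` also increases output variety):

```python
import random

def sample_chunk(text: str, chunk_size: int = 4_000) -> str:
    """Return a random chunk_size-character window of the article,
    so repeated runs quiz different parts of the page.
    (Hypothetical helper, not part of the AutoQuizzer codebase.)"""
    if len(text) <= chunk_size:
        return text
    start = random.randrange(len(text) - chunk_size + 1)
    return text[start : start + chunk_size]
```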
`max_tokens` refers to the generated tokens, not to the length of the original prompt, so there is no need to change it (unless you expect the text of the generated quiz to be longer).
Hey, as the title says: can you add Command R+ support?