gdgdandsz opened 2 months ago
Hi,
the present code is quite outdated. Nowadays, I would recommend implementing everything using Hugging Face. We have a tutorial on how to do this with some of the data from the CENTaUR paper here: https://github.com/Zak-Hussain/LLM4BeSci/tree/main/choice
More info in the corresponding paper: https://osf.io/preprints/psyarxiv/f7stn
You should be able to simply plug in the LLaMA models from Hugging Face (https://huggingface.co/docs/transformers/main/en/model_doc/llama) or use one of the newer models instead.
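For reference, loading a LLaMA model through `transformers` would look roughly like this. This is a minimal sketch, not code from the repository: it assumes `transformers` (and a PyTorch backend) is installed, and the model id `meta-llama/Llama-2-7b-hf` is just one example of a gated LLaMA checkpoint on the Hub — swap in whichever model you have access to.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_llama(model_name: str = "meta-llama/Llama-2-7b-hf"):
    """Load a causal LM and its tokenizer from the Hugging Face Hub.

    Note: official LLaMA checkpoints are gated, so you must accept the
    license on the Hub and authenticate (e.g. `huggingface-cli login`)
    before the download will succeed.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    return tokenizer, model


if __name__ == "__main__":
    # Example usage: tokenize a prompt and generate a short continuation.
    tokenizer, model = load_llama()
    inputs = tokenizer("You observe the following choice problem:", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same `AutoModelForCausalLM`/`AutoTokenizer` pair works for newer models as well, so switching checkpoints is just a matter of changing `model_name`.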
Thank you for your reply! I will check it out.
Hi there, thank you for this amazing project! I'm currently trying to reproduce your results and adjust some of the prompts. I'm wondering whether we have direct access to the 7B, 13B, 30B, and 65B models, or whether we need to obtain them ourselves?