gvijqb opened this issue 1 year ago
Hi,
I'm working (slowly) on a new version on the dev branch, which supports open-model inference. So far I've only tested it with LLaMA and haven't had time to compute the results thoroughly.
Cheers, Terry
Hi @terryyz
Thanks for the update. I am looking forward to it. Please keep me posted once you have an update.
At MonsterAPI we have developed a no-code LLM finetuner and are exploring different ways to quickly evaluate finetuned adapters.
Thanks, Gaurav
Hi @gvijqb,
No problem!
Please let me know if you'd like to collaborate on this project and beyond :)
Cheers, Terry
Sure, I would love to explore this.
Could you share how we can collaborate?
I'm not sure whether MonsterAPI could provide some computational resources. I'm a bit short of good GPUs these days 😞
I need a way to evaluate a model like this one: https://huggingface.co/qblocks/falcon-7b-python-code-instructions-18k-alpaca
It is finetuned for code generation on the open-source base model Falcon-7B, and the finetuning output is a LoRA adapter file. How can I evaluate it with your tool?
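For reference, what I have in mind is roughly the following: load the LoRA adapter on top of the Falcon-7B base with Hugging Face PEFT and then generate completions for the benchmark prompts. This is only a sketch of my setup, not your tool's API; the base-model id `tiiuae/falcon-7b` and the example prompt are my assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b"  # assumed base model for the adapter
adapter_id = "qblocks/falcon-7b-python-code-instructions-18k-alpaca"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # Falcon relied on custom modelling code in older transformers releases
)

# Attach the LoRA adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example prompt (assumed); in practice this would come from the evaluation set.
prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Ideally I'd like to point your evaluation tool at a model loaded this way (base + adapter) instead of a single merged checkpoint.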