Closed — gsamaras closed this 5 months ago
You can try running a LLaMA or Mistral model locally, as shown in demo.py
Yes exactly @ngruver, that is what I was trying in a Kaggle notebook, but it was asking for OpenAI and Mistral keys. Can you provide an example?
I am only interested in feeding the air passengers dataset to LLMTime and getting predictions, just as a basic demo.
@gsamaras
You just need to comment out the lines here,
or create a dummy `OPENAI_API_KEY` environment variable.
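For instance, a minimal sketch of the dummy-key workaround, assuming the OpenAI client only needs the variable to exist at import time and is never actually queried:

```python
import os

# A dummy value is enough to get past client construction;
# a real key is only needed if you actually call the OpenAI models.
os.environ["OPENAI_API_KEY"] = "sk-dummy"

print(os.environ["OPENAI_API_KEY"])
```

In a Kaggle notebook this cell just needs to run before any LLMTime imports that construct the OpenAI client.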
@zaizou thank you for your suggestions. Unfortunately they don't work, since it asks for an openai key, as you can see on this Kaggle notebook, in which you are welcome to play around.
Just comment out the models that you don't want to use in the `model_predict_fns` variable.
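A hypothetical sketch of what that edit looks like — the dict keys follow the model names mentioned in this thread, and `get_llmtime_predictions_data` is a stand-in for whatever predict function demo.py actually wires up:

```python
def get_llmtime_predictions_data(*args, **kwargs):
    """Stand-in for the real predict function used in demo.py."""
    raise NotImplementedError

# Keep only the entries for models you can run without a paid key;
# the commented-out entries are the ones that require API access.
model_predict_fns = {
    # 'LLMTime GPT-4': get_llmtime_predictions_data,       # needs an OpenAI key
    # 'mistral-api-medium': get_llmtime_predictions_data,  # needs a Mistral key
    'mistral': get_llmtime_predictions_data,               # local model, no key
}

print(list(model_predict_fns))
```

With only key-free entries left in the dict, the demo never constructs the paid API clients.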
@zaizou apologies, I had done that but didn't mention it. After commenting those out, it asks for a Mistral key instead. If I provide a dummy Mistral key, it says unauthorized. If I create a real Mistral key, it turns out to be a pay-as-you-go service. Adding the Mistral model via Kaggle makes no difference either, because LLMTime still tries to connect to the Mistral API client.
So what I did was to use `mistral`
instead of `mistral-api-medium`
and load the model locally. Now the notebook runs out of memory, but I'll see what I can do about that. Thanks!
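One thing that may help with the out-of-memory issue: loading the local checkpoint in half precision with automatic device placement. A minimal sketch, assuming the model is loaded through Hugging Face `transformers` (the kwargs below are real `from_pretrained` parameters; the checkpoint id is the standard Mistral-7B one and may differ from what LLMTime uses internally):

```python
# Load options that typically reduce memory on a small GPU/CPU box.
LOAD_KWARGS = {
    "torch_dtype": "float16",   # half the weight memory of float32
    "device_map": "auto",       # spread layers across GPU/CPU as needed
    "low_cpu_mem_usage": True,  # stream weights instead of a full in-RAM copy
}

def load_local_mistral(model_id="mistralai/Mistral-7B-v0.1"):
    """Sketch of a memory-conscious local load; not called here to avoid
    downloading the ~14 GB checkpoint."""
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(model_id, **LOAD_KWARGS)

print(LOAD_KWARGS["device_map"])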
I was wondering if I could quickly use your model without an LLM API key (e.g. an OpenAI key)?