sagemathinc / cocalc-docker

DEPRECATED (was -- Docker setup for running CoCalc as downloadable software on your own computer)
https://cocalc.com

support using a free local large language model instead of ChatGPT #190

Open williamstein opened 1 year ago

williamstein commented 1 year ago

At some point it would be very nice if there were a script one could run inside cocalc-docker, which would download a local LLM, which could then be used to implement the same things as ChatGPT.

One model https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi is already potentially good enough and is "free for noncommercial use", which would be fine for a substantial chunk of cocalc-docker users.

This might be useful on an airplane, when off grid, in an air-gapped network environment, etc.

I've tried the models in "GPT4free" and so far I think none are good enough. This one, which is bigger, may be just good enough to be useful for something: https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi. It probably requires high-end hardware, of course.

Including the qdrant vector database with index of all relevant-to-cocalc docs and including relevant info in any prompt could transform something like https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi into something that is actually really useful.
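As a rough illustration of that retrieve-then-prompt idea, here is a minimal Python sketch. It uses a toy in-memory cosine-similarity index with hand-made vectors in place of qdrant and a real embedding model; all document texts and vectors below are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(index, query_vec, k=2):
    """Return the k docs whose embeddings are closest to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(item[1], query_vec), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question, docs):
    """Prepend retrieved documentation snippets to the user's question."""
    context = "\n\n".join(docs)
    return f"Use the following documentation to answer.\n\n{context}\n\nQuestion: {question}"

# Toy stand-in for a qdrant collection: (doc_text, embedding) pairs.
index = [
    ("CoCalc supports collaborative Jupyter notebooks.", [1.0, 0.0, 0.1]),
    ("cocalc-docker runs CoCalc on your own hardware.", [0.1, 1.0, 0.0]),
    ("LaTeX editing is built into CoCalc.", [0.0, 0.1, 1.0]),
]

docs = retrieve(index, [0.2, 0.9, 0.0], k=1)
prompt = build_prompt("How do I self-host CoCalc?", docs)
```

A real setup would embed the docs once, store them in qdrant, and feed the assembled prompt to the local model; the retrieval-then-prompt structure stays the same.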

arm2arm commented 1 year ago

I am really interested in testing it. How much GPU RAM does it require? Would an A100 with 40GB work?

williamstein commented 1 year ago

> I am really interested in testing it. How much GPU RAM does it require? Would an A100 with 40GB work?

I don't understand what you're asking for here. If you just click on the link above you can try the model out in your web browser right now: https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi

If you meant testing, say, Guanaco integrated into cocalc-docker, that doesn't exist yet; hence this issue. So make sure to watch this issue.

skaunov commented 12 months ago

Extending the OpenAI integration to a more ubiquitous LLM interface would be an important step! Thank you for your work on the Docker image!!

williamstein commented 11 months ago

This project https://github.com/jmorganca/ollama might be a good way to support local models. It has a nice UI and momentum.

66Leo66 commented 6 months ago

I would suggest allowing a custom API URL to be set in place of api.openai.com. That would enable not only this use case (e.g. via https://github.com/mudler/LocalAI, which hosts models locally behind an OpenAI-compatible API) but also other services such as Azure OpenAI.
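A rough sketch of why a swappable base URL is enough for OpenAI-compatible backends such as LocalAI. The model names and API key below are illustrative placeholders, and Azure's URL scheme differs slightly in practice, but the principle is the same:

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, messages):
    """Build a request for an OpenAI-compatible chat completions endpoint.

    base_url is the swappable part: api.openai.com, a local LocalAI
    instance, or any other server speaking the same protocol.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Same client code, different backends: only the base URL changes.
openai_req = chat_request("https://api.openai.com", "sk-...", "gpt-3.5-turbo",
                          [{"role": "user", "content": "Hello"}])
local_req = chat_request("http://localhost:8080", "not-needed", "ggml-gpt4all-j",
                         [{"role": "user", "content": "Hello"}])
```

This is why a single configurable URL setting would cover local models and third-party hosted services at once.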