hatkyinc2 opened 1 year ago
Does LLaMA only have a 2k token limit?
Yeah, it seems to be 2,048 tokens from a skim. That might make this harder to do, but if we can manage it, the rest should work for sure. Also, the cost is close to zero and the speed should hopefully be high: no networking and dedicated local resources, so we could run an unlimited number of smaller queries.
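The "many smaller queries" idea implies splitting work to fit a 2,048-token window. A minimal sketch of that, assuming a whitespace word count as a crude stand-in for a real tokenizer (the `MAX_TOKENS` / `PROMPT_BUDGET` names and the helper are hypothetical, not part of this project):

```python
# Rough sketch: split a long input into chunks that each fit a small
# context window. Word count is a crude proxy for token count; a real
# implementation would use the model's own tokenizer.

MAX_TOKENS = 2048
PROMPT_BUDGET = 256          # reserve room for instructions + completion
CHUNK_LIMIT = MAX_TOKENS - PROMPT_BUDGET

def split_into_chunks(text: str, limit: int = CHUNK_LIMIT) -> list[str]:
    words = text.split()
    # Slice the word list into fixed-size windows under the budget.
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]

chunks = split_into_chunks("word " * 5000)
# Each chunk fits the budget and can be sent as its own local query.
```

Since local queries are nearly free, sending one request per chunk (instead of one big prompt) is the trade-off this enables.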
It's aspirational. IDK if something else comes along that is open to running locally. The goal is to run something local so as not to send source code to centralized companies. I don't care what the final model ends up being, as long as it can do the job.
Why: Users don't want to send their code to OpenAI.
What: Allow users to connect to different models. Maybe ask users to host the model themselves and provide an API?
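One way the "connect to different models" idea could be sketched (all names here are hypothetical illustrations, not this project's actual API): a small backend interface that OpenAI, gpt4all, or any user-hosted HTTP endpoint could implement, so the rest of the code never cares which model is behind it:

```python
import json
import urllib.request
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Hypothetical pluggable interface: any model the user hosts
    only has to implement complete()."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalHTTPBackend(LLMBackend):
    """Talks to a model the user hosts behind a local HTTP API.
    The /completions path and JSON shape are assumptions."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def complete(self, prompt: str) -> str:
        req = urllib.request.Request(
            f"{self.base_url}/completions",
            data=json.dumps({"prompt": prompt}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["text"]

class EchoBackend(LLMBackend):
    """Stub backend for offline runs and tests."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def ask(backend: LLMBackend, prompt: str) -> str:
    # Caller code is model-agnostic: swap backends without touching this.
    return backend.complete(prompt)
```

Usage would be something like `ask(LocalHTTPBackend("http://localhost:4891"), "...")`, so nothing ever leaves the machine.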
For example, support gpt4all: https://github.com/nomic-ai/gpt4all
https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
This would allow use on "secret"/proprietary code with fewer security concerns.