Closed Anindyadeep closed 5 months ago
@Anindyadeep sure thing, go for it, would love to have it implemented (also, love the name of the issue)
Awesome, I will be pushing a PR by tomorrow with tests (possibly). Thanks @gventuri
This is super. I have some questions.
Is the pull request completed? Where is the documentation to download and install GPT4All? And how does its quality compare to StarCoder?
Hi @thanhnew2001, we paused this feature's development for a bit. The reason is that gpt4all hosts a lot of models (including starcoder), so you can even choose which model runs pandas-ai. However, we were seeing that its performance was not very good compared to ChatGPT, so we paused for some time. I have since restarted development on this issue, because codellama is becoming the state of the art for open-source code generation LLMs.
The feature
Support for GPT4ALL.
Motivation, pitch
OpenAI is very costly when it comes to experimentation, so it is sometimes better to use open-source LLMs. One way of using open-source LLMs is through HuggingFace (currently implemented); another way to do the same locally is gpt4all. GPT4ALL has an awesome community providing a huge variety of models that are easy to load and that run on CPU. That way we get access to different LLMs while still using PandasAI on top of them.
Alternatives
I started with langchain. On that platform, we can either use pre-implemented LLMs or wrap around a Langchain LLM. I tried the latter for gpt4all, since langchain has gpt4all support, but I got several batches of errors. That alternative is buggy, so it would be better to have our own native support.
Additional context
I already implemented this. Here is some example code:
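A minimal sketch of what a native gpt4all wrapper for pandas-ai could look like. The class name, the `call` method, and the `generate_fn` injection hook are assumptions for illustration, not pandas-ai's actual LLM interface; the `gpt4all.GPT4All(...).generate(...)` calls reflect recent versions of the gpt4all Python bindings.

```python
class GPT4AllLLM:
    """Hypothetical thin wrapper that lazily loads a local gpt4all model.

    Not the actual pandas-ai implementation -- a sketch of the idea:
    load a model from the gpt4all catalog and expose a single text-in,
    text-out entry point that pandas-ai could call.
    """

    def __init__(self, model_name, generate_fn=None):
        self.model_name = model_name
        # generate_fn lets callers inject a stub (e.g. for testing);
        # by default, gpt4all is imported lazily on the first call so the
        # wrapper has no hard dependency at construction time.
        self._generate_fn = generate_fn

    def _ensure_model(self):
        if self._generate_fn is None:
            from gpt4all import GPT4All  # local CPU inference
            model = GPT4All(self.model_name)
            self._generate_fn = lambda prompt: model.generate(
                prompt, max_tokens=512
            )

    def call(self, prompt: str) -> str:
        """Generate a completion for the given prompt."""
        self._ensure_model()
        return self._generate_fn(prompt)
```

The injection hook also shows the point of the design: the LLM backend is swappable, so any model from the gpt4all catalog (starcoder, codellama variants, etc.) can sit behind the same interface.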
Lemme know if I can put a PR on this. Thanks