vanna-ai / vanna

🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.
https://vanna.ai/docs/
MIT License
12.11k stars · 970 forks

sqlcoder LLM support? #303

Open andreped opened 8 months ago

andreped commented 8 months ago

Is your feature request related to a problem? Please describe. Support has been added for several proprietary and open-source LLMs in Vanna.

However, it seems like one open-source LLM variant outperforms LLMs like GPT-4 and Claude-2.0 on SQL completion tasks:

[image: benchmark chart comparing sqlcoder against GPT-4 and Claude-2.0 on SQL generation]

I think it would be highly relevant to the community to add official support for it to the framework. Even the 7B-parameter model outperforms GPT-4, so for SQL completion tasks this model seems like a no-brainer: https://github.com/defog-ai/sqlcoder

Describe the solution you'd like

The different sqlcoder LLMs could be exposed through a common API, similar to the existing Ollama integration: https://github.com/vanna-ai/vanna/blob/main/src/vanna/ollama/ollama.py


@zainhoda I can make a PR to add support for this LLM.
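A hypothetical sketch of what such a wrapper could look like, mirroring the shape of the Ollama integration. The class name `SqlCoder`, the config keys, and the injected `generate_fn` are illustrative assumptions, not vanna's actual API; the backend call is stubbed out so the sketch is self-contained.

```python
# Hypothetical sketch of a sqlcoder wrapper, shaped like vanna's Ollama
# integration. Names and config keys here are illustrative assumptions.
from typing import Callable, Optional


class SqlCoder:
    def __init__(self, config: Optional[dict] = None,
                 generate_fn: Optional[Callable[[str], str]] = None):
        config = config or {}
        # Which sqlcoder variant to run (7b / 15b / 70b) -- assumed key.
        self.model = config.get("model", "sqlcoder-7b")
        # Injected generation backend (e.g. a transformers pipeline or an
        # HTTP call to a hosted endpoint); stubbed here for illustration.
        self.generate_fn = generate_fn or (lambda prompt: "")

    def submit_prompt(self, prompt: str, **kwargs) -> str:
        # Delegate to whichever backend was injected.
        return self.generate_fn(prompt)


# Usage with a stub backend standing in for the real model:
llm = SqlCoder(config={"model": "sqlcoder-7b"},
               generate_fn=lambda p: "SELECT 1;")
print(llm.submit_prompt("How many users are there?"))
```

The injected backend keeps the class agnostic about whether the model runs locally or behind a hosted API, which matches the "common API" idea above.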

zainhoda commented 8 months ago

I've used the 7b of sqlcoder via Ollama and found it to be extremely slow for some reason compared to models like mistral.

I think if we use sqlcoder 70b it pretty much has to be via some API. Is there an API you were thinking of using?

zainhoda commented 8 months ago

Here's a benchmark that I ran:

[benchmark chart: vanna-llm-sql-benchmark-2024-03-20]

The ones in purple were set up like this:

from vanna.ollama import Ollama
from vanna.chromadb import ChromaDB_VectorStore

class Vanna_Ollama(ChromaDB_VectorStore, Ollama):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        Ollama.__init__(self, config=config)

vn = Vanna_Ollama(config={'model': 'sqlcoder', 'path': path})

I'm not sure we need to do anything additional for running locally.

andreped commented 8 months ago

> I've used the 7b of sqlcoder via Ollama and found it to be extremely slow for some reason compared to models like mistral.
>
> I think if we use sqlcoder 70b it pretty much has to be via some API. Is there an API you were thinking of using?

There is documentation here on which API to use: https://github.com/defog-ai/sqlcoder/blob/main/inference.py#L67

I can do some simple performance benchmarks if you'd like, possibly in Colab. I have managed to run 7B models in Colab before, but this model might exceed the limits (RAM or VRAM).


EDIT: Could you share the exact code you used to produce the benchmark for sqlcoder, as well as which dataset you used? Perhaps it is public?

emraza1 commented 7 months ago

> Here's a benchmark that I ran:
>
> [benchmark chart: vanna-llm-sql-benchmark-2024-03-20]
>
> For the ones in purple, they were set up like this:
>
>     class Vanna_Ollama(ChromaDB_VectorStore, Ollama):
>         def __init__(self, config=None):
>             ChromaDB_VectorStore.__init__(self, config=config)
>             Ollama.__init__(self, config=config)
>
>     vn = Vanna_Ollama(config={'model': 'sqlcoder', 'path': path})
>
> I'm not sure we need to do anything additional for running locally

I assume your benchmark runs the vanna functions as-is, without catering to the prompt format of the SQL-expert open LLMs, hence the poor performance. Once that is catered for, it really is marginally better than gpt-3.5 and comparable to gpt-4.
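Catering to that format would mean wrapping the question and schema in sqlcoder's tagged template before handing the prompt to the model, instead of vanna's default chat-style prompt. A minimal sketch of such a helper; the exact template is approximated from the defog sqlcoder repo and should be verified against the prompt file there before use:

```python
# Sketch of formatting a request into sqlcoder's expected prompt shape.
# The template below is approximated from defog's sqlcoder repo and
# should be checked against their prompt file before relying on it.
def sqlcoder_prompt(question: str, schema_ddl: str) -> str:
    return f"""### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{schema_ddl}

### Answer
Given the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]
[SQL]
"""


# Example: build a prompt for a toy schema.
prompt = sqlcoder_prompt(
    "How many users signed up?",
    "CREATE TABLE users (id INT, signed_up_at TIMESTAMP);",
)
print(prompt)
```

A vanna wrapper could apply this formatting inside its prompt-building step, so the rest of the RAG pipeline stays unchanged.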