Open surapuramakhil opened 5 months ago
We'll need to cover a lot of things that are missing from this SIP: • What packages/licenses are needed, and are they compatible? • What are the security/privacy implications? • How do we (as an open-source solution) stay vendor-agnostic here? What's the abstraction layer?
This will need to be put up for a DISCUSS thread on the mailing list to move forward, but I think the proposal needs more detail/resolution.
We'll need to cover a lot of things that are missing from this SIP: • What packages/licenses are needed, and are they compatible? The Python langchain package, or the modules required for making HTTP calls.
• What are the security/privacy implications? The user configures the necessary API keys. LLM calls happen through the backend, since the schema needs to be passed to the RAG step for quality responses. We can support both options: self-hosted (protecting security & privacy) or a provider of the user's choice.
• How do we (as an open-source solution) stay vendor-agnostic here? What's the abstraction layer? We can stay vendor-agnostic by leaving the choice to the user: their preferred mode (self-hosted, LLM as a service, etc.) and their choice of LLM. As for the abstraction layer, I found https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query in langchain, which we can use directly.
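To make the abstraction-layer idea concrete, here is an illustrative sketch (not part of the SIP; all names are hypothetical) of a vendor-agnostic interface: Superset code would depend only on this contract, while concrete adapters wrap langchain, a self-hosted model, or a raw HTTP endpoint.

```python
# Illustrative sketch of a vendor-agnostic abstraction layer.
# LLMProvider, EchoProvider, and get_provider are hypothetical names,
# not existing Superset or langchain APIs.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Minimal contract every backend (langchain, HTTP, self-hosted) must satisfy."""

    @abstractmethod
    def generate_sql(self, question: str, schema: str) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in provider used only to show the contract; a real adapter
    would call langchain's SQL chain or a user-configured HTTP endpoint."""

    def generate_sql(self, question: str, schema: str) -> str:
        return f"-- would query LLM with schema ({len(schema)} chars): {question}"


def get_provider(mode: str) -> LLMProvider:
    # Hypothetical factory keyed on user configuration.
    providers = {"echo": EchoProvider}
    return providers[mode]()


print(get_provider("echo").generate_sql("How many users signed up?", "CREATE TABLE users (id INT)"))
```

The point of the interface is that swapping providers never touches calling code; only the factory's registry grows.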
Here are two draft options - feel free to add what you think (I probably need to find a way to make this collaborative). If maintainers can create a shared sheet, that works; otherwise I can create a spreadsheet for evaluating and suggesting the various approaches and implementation ideas.
| | LLM access using LangChain | LLM access using HTTP |
|---|---|---|
| Advantages | Scalability while adding/extending features | Little code; leaves the LLM choice to the end user |
| | Supports many LLMs, though switching may require changes | User configures the HTTP endpoint, giving a choice of self-hosted or LLM-as-a-service (just configure the endpoint) |
| Configuration | User configures the necessary API keys/options | User defines the HTTP endpoint and configures headers |
| Considerations | Leverages langchain | We might need to write code that frameworks like langchain already provide |
| | | Code solves one particular use case; extensibility is tough |
| | Provider-agnostic, as it supports almost all providers | Provider-agnostic; user configures the endpoint |
| | Fewer changes required, since these are SDKs | Requests need to change whenever provider releases happen |
The table above is a draft - I'm just dumping my thoughts.
Based on my evaluation, using langchain is better.
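To show what the "LLM access using HTTP" column would look like in practice, here is a hedged sketch: the user supplies an endpoint URL and headers, and the backend only assembles a generic chat-completion payload. This assumes an OpenAI-compatible endpoint shape; the function names are illustrative, not part of the SIP.

```python
# Sketch of the HTTP-based option: Superset stays vendor-agnostic because
# the operator configures the endpoint and headers; self-hosted and hosted
# LLMs look identical from this side. build_payload/call_llm are
# hypothetical names for illustration.
import json
from urllib import request


def build_payload(question: str, schema: str, model: str = "user-chosen-model") -> dict:
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Generate SQL for this schema:\n{schema}"},
            {"role": "user", "content": question},
        ],
    }


def call_llm(endpoint: str, headers: dict, payload: dict) -> str:
    """POST the payload to the user-configured endpoint (hosted or self-hosted)."""
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **headers},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


payload = build_payload("total sales per region?", "CREATE TABLE sales (region TEXT, amount REAL)")
print(json.dumps(payload, indent=2))
```

The trade-off from the table is visible here: very little code, but every provider quirk (auth scheme, response shape, streaming) lands on this one function.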
@surapuramakhil thanks. I think it makes sense to update the description with all new info and make sure you are covering all the technical/architectural considerations. First question that comes to mind, how do you intend to pull the right metadata from the database for the LLM to use? There is a limited context window and you just can't pull the whole schema for both context and performance limitations.
@geido based on my research langchain already solves this. https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query.
They wrote pipelines for generating queries from text. It works with any LLM model, so we can just piggyback on that. All I am planning to do is have an llm_provider or llm_factory that creates the LLM based on user needs and passes it to their pipeline.
@geido I've updated the description as you suggested.
First question that comes to mind, how do you intend to pull the right metadata from the database for the LLM to use? There is a limited context window, and you just can't pull the whole schema for both context and performance limitations.
Let's try with langchain and see its results.
It looks more like a toy for now:
Has definitions for all the available tables.
This won't work for production databases that might have hundreds of tables and columns.
I think having langchain in the repo might be a nice thing to have to enable LLM-related capabilities. However, that would be a separate SIP to illustrate how langchain could be leveraged in the repo. It looks like starting from SQL generation is hard.
It looks like starting from SQL generation is hard.
Why do you think so? It's the first use case that Apache Superset needs.
As someone who has actually implemented this exact idea in superset for a hackathon a few months back, this is a pipe-dream at best (to be fairly blunt). Using RAG to pull relevant table metadata at prompt-time still led to unmanageable levels of LLM hallucination that only grows worse as the size of the warehouse being queried increases.
Something like this may be feasible for a user with a handful of tables, but at-scale it simply doesn't work. And a query that is 99% correct is functionally worthless if this is intended to be utilized by folks who don't have the skills necessary to parse through AI-generated SQL.
like this may be feasible for a user with a handful of tables, but at-scale it simply doesn't work
This is a limitation of language models in general, and that's exactly why the LLM choice is given to users. If the scale is high, the best they can do is use a high-context-size model like Gemini 1.5 Pro. That's a separate data science problem which Apache Superset doesn't need to solve; we just leverage what is available.
Using RAG to pull relevant table metadata at prompt-time still led to unmanageable levels of LLM hallucination that only grows worse as the size of the warehouse being queried increases.
This is a separate data science problem that Apache Superset doesn't need to solve; the langchain community (quite popular in data science) is already solving it. We just leverage their work.
This might protect against hallucination: https://python.langchain.com/docs/use_cases/sql/query_checking/. Prompting/RAG strategies for working at scale: https://python.langchain.com/docs/use_cases/sql/large_db/
As both evolve over time, the quality of the generated queries will keep improving.
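The query-checking idea from the linked guide amounts to a second LLM pass: before showing generated SQL to the user, ask the model to review it for common mistakes. A hedged sketch of such a check prompt (the mistake list mirrors the langchain guide; the function itself is illustrative, not a langchain API):

```python
# Illustrative second-pass query checker: build a prompt asking the LLM to
# review generated SQL for common mistakes before it reaches the editor.
# build_check_prompt is a hypothetical helper, not a langchain function.
COMMON_MISTAKES = [
    "using NOT IN with NULL values",
    "using UNION when UNION ALL should have been used",
    "using BETWEEN for exclusive ranges",
    "data type mismatch in predicates",
    "using the incorrect number of arguments for functions",
    "casting to the incorrect data type",
]


def build_check_prompt(query: str) -> str:
    bullets = "\n".join(f"- {m}" for m in COMMON_MISTAKES)
    return (
        "Double check the SQL query below for common mistakes, including:\n"
        f"{bullets}\n\n"
        "Rewrite the query if needed; otherwise return it unchanged.\n\n"
        f"```sql\n{query}\n```"
    )


print(build_check_prompt("SELECT * FROM users WHERE id NOT IN (SELECT user_id FROM banned)"))
```

This doesn't eliminate hallucination, but it cheaply catches a class of mechanical SQL errors before the user ever sees the query.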
a query that is 99% correct is functionally worthless if this is intended to be utilized by folks who don't have the skills necessary to parse through AI-generated SQL.
I agree with you on this; it doesn't fully solve the problem for those who lack the knowledge needed to understand AI-generated SQL. It's a copilot rather than an autopilot.
@surapuramakhil do you still intend to move forward with this?
Hi @rusackas, we have implemented LLM-based query generation for our use case, which uses a self-hosted model. We have also developed an adapter that can support popular LLM-as-a-service platforms like ChatGPT via API key configuration. What is the process to move forward on this SIP now? Should we create a PR?
@ved-kashyap-samsung are you on Apache Superset Slack? You can find me there as "Diego Pucci". I was the lead engineer for AI Assist for Preset, I should be able to help with getting the SIP right. Please, get in touch.
@ved-kashyap-samsung I think adding AI to Superset requires a proposal for consensus on the approach. If you want to open a PR with what you have, you're more than welcome to, but it's unlikely it'll get merged without going through a SIP process. You can add your details/approach here if you want to use this SIP, or you can open your own SIP. Please reach out on Slack if you'd like assistance.
Last call for interested parties to sign up and dial in this proposal. In a couple more weeks, this will have gone 6 months without being brought up for discussion on the ASF mailing list, and will be closed as inactive. Thanks to everyone, however it plays out :)
Maybe a less exciting idea for it: could we just use a Chrome extension for this purpose? It would be much easier to use alongside Superset, without worrying about changes to Superset's main codebase.
You can absolutely use/create a chrome plugin. I think it could be a Superset plugin if you want to author such a thing, or something more generalized. Either way, you wouldn't need a SIP, but we'd be happy to help evangelize the effort if it comes to fruition.
Please make sure you are familiar with the SIP process documented here. The SIP will be numbered by a committer upon acceptance.
[SIP] Proposal for AI/LLM query generation in SQL lab
Motivation
To make Apache Superset dashboard/chart creation possible for users without a dev/SQL background. https://github.com/apache/superset/discussions/27272
Proposed Change
Describe how the feature will be implemented, or the problem will be solved. If possible, include mocks, screenshots, or screencasts (even if from different tools).
This is the current SQL Lab, used for showing the SQL editor box.
Forward user prompts to the LLM model along with system prompts that share database schema information (consider it RAG) for quality responses. (Optional) Some additional queries/prompts that are required for understanding the data, such as sharing the first 10 rows, or the distinct values of a column, etc. (whatever is necessary).
Populate the editor with the query generated by the model.
Query Generation: there are already pipelines in langchain for this: https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query.
We can use these pipelines for generating queries from text. It works with any LLM model, so we can just piggyback on that. All I am planning to do is have an llm_provider or llm_factory that creates the LLM based on user needs and passes it to their pipeline.
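The llm_factory idea above can be sketched as follows. This is a hedged illustration, not the final design: the provider names and config keys are assumptions, while `create_sql_query_chain`, `SQLDatabase`, `ChatOpenAI`, and `ChatOllama` are real langchain APIs from the linked quickstart.

```python
# Sketch of an llm_factory: pick a chat-model class from user configuration
# and hand it to langchain's text-to-SQL pipeline. Config keys ("provider",
# "model") are hypothetical. Imports are lazy so only the configured
# provider's package needs to be installed.
def make_llm(config: dict):
    provider = config.get("provider", "openai")
    if provider == "openai":
        from langchain_openai import ChatOpenAI  # hosted; needs an API key
        return ChatOpenAI(model=config.get("model", "gpt-4o-mini"), temperature=0)
    if provider == "ollama":
        from langchain_community.chat_models import ChatOllama  # self-hosted
        return ChatOllama(model=config.get("model", "llama3"))
    raise ValueError(f"Unsupported LLM provider: {provider}")


def generate_sql(config: dict, db_uri: str, question: str) -> str:
    # Hand the user-chosen LLM to langchain's existing SQL pipeline.
    from langchain.chains import create_sql_query_chain
    from langchain_community.utilities import SQLDatabase

    chain = create_sql_query_chain(make_llm(config), SQLDatabase.from_uri(db_uri))
    return chain.invoke({"question": question})
```

Because `create_sql_query_chain` accepts any langchain chat model, the factory is the only place that knows about concrete vendors.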
Few Technical Implementations / Considerations
Providing LLM access via an API gives users the choice of using existing services rather than deploying their own model. Packaging an LLM inside the Superset deployment is not feasible.
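As a sketch of what operator-facing configuration might look like, here are hypothetical `superset_config.py` keys. These are NOT existing Superset settings; they only illustrate that no model ships with Superset and the operator points the deployment at a hosted service or a self-hosted endpoint.

```python
# Hypothetical configuration keys (illustration only, not real Superset settings).
SQL_LAB_LLM_ENABLED = True
SQL_LAB_LLM_PROVIDER = "openai"  # or "ollama", "http", ...
SQL_LAB_LLM_MODEL = "gpt-4o-mini"
SQL_LAB_LLM_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # self-hosted option
SQL_LAB_LLM_API_KEY_ENV = "SQL_LAB_LLM_API_KEY"  # key read from the environment, never stored in the DB
```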
Backend Architecture Diagram
New or Changed Public Interfaces
Describe any new additions to the model, views or REST endpoints. Describe any changes to existing visualizations, dashboards and React components. Describe changes that affect the Superset CLI and how Superset is deployed.
New dependencies
Describe any npm/PyPI packages that are required. Are they actively maintained? What are their licenses?
Migration Plan and Compatibility
Describe any database migrations that are necessary, or updates to stored URLs.
Rejected Alternatives
Describe alternative approaches that were considered and rejected.