Azure-Samples / azure-search-openai-demo

A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
https://azure.microsoft.com/products/search
MIT License

azure-search-openai-demo vs chat-with-your-data-solution-accelerator #849

Open igforce opened 1 year ago

igforce commented 1 year ago

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Expected/desired behavior

Not sure if this is the right place to ask, but I was wondering what the difference is between this repo and https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator. There is a lot of activity and many features in this repo, and it is amazing. When should the other repo be considered instead of this one? Should some of the newer features be included in the other repo?

pamelafox commented 1 year ago

The other repo includes a lot more features around data ingestion, but not as much flexibility around the RAG (Retrieval Augmented Generation) approach, since it uses the dataSources parameter of the ChatGPT Completions API instead of the manual chaining approaches used here. If the other repo is sufficient for your needs, then you can go with that. If you need some of the flexibility of the approaches in this repo, then switch to this one. I think it may be possible to use the data ingestion from that repo with this one, but I haven't yet tried it. The key is that it needs to create an index that is compatible with our search queries, and I think it does.
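For context, here is a minimal sketch (not code from either repo; the resource names and keys are placeholders, and the endpoint shape follows the 2023 preview API, so check the current docs) of what the `dataSources` approach looks like: a single request to the Azure OpenAI *extensions* chat completions endpoint, with retrieval against an Azure Cognitive Search index handled by the service rather than chained manually:

```python
import requests

# Placeholders -- substitute your own resource names, deployment, and keys.
AOAI_ENDPOINT = "https://<resource>.openai.azure.com"
DEPLOYMENT = "<chat-deployment>"

# Note the /extensions/ segment: dataSources is part of the extensions API,
# not the plain chat completions API.
url = (
    f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    "/extensions/chat/completions?api-version=2023-08-01-preview"
)

body = {
    "messages": [{"role": "user", "content": "What does the plan cover?"}],
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<search-service>.search.windows.net",
                "key": "<search-query-key>",
                "indexName": "<index-name>",
            },
        }
    ],
}

resp = requests.post(url, json=body, headers={"api-key": "<aoai-key>"})
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The trade-off described above follows from this shape: the service decides how retrieval results are folded into the prompt, so there is less room for the manual prompt chaining this repo uses.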

igforce commented 1 year ago

@pamelafox Thanks for the quick turnaround on the answer. I've just started reviewing the content of the other repo, and I feel like the two have complementary strengths: one on the ingestion process, one on the prompt. Wouldn't it make sense to have just one repo? It has also been mentioned that the other repo is more production-ready than this one.

Extract from the other repo's README: *Have you seen [ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search demo](https://github.com/Azure-Samples/azure-search-openai-demo)? If you would like to play with prompts, understanding RAG pattern different implementation approaches and similar demo tasks, take a look at that repo. Note that the demo in that repo should not be used in Proof of Concepts (POCs) that later will be adapted for production environments. Instead, consider the use of this repo and follow the best practices outlined in this repo.*

pamelafox commented 1 year ago

I think "production ready" has different meanings. This repo has had quite a lot of usage already in the wild, since it was the first RAG ACS+AOAI repo available, so we've applied a lot of learnings from customers here. We're also using best practices such as concurrency (for performance) and managed identity (for security). The accelerator repo has better support for ingesting multiple data types, and it uses the dataSources parameter which has also been worked on quite a bit internally. If your data types work with that ingestion and dataSources parameter approach, then that could be a good fit. But if you need more flexibility to the prompt or other features used in this repo, then this might be a better fit.

The goal is to have one repo that's more on the bleeding edge (this one) and the other that's more stable, but we're not at that point yet. Hopefully our ingestion story will improve for this repo so that your decision making process is easier. Sorry for the confusion!

mattgotteiner commented 1 year ago

Track this issue for ingestion improvements in this repo: #747

pamelafox commented 1 year ago

Note: the README in the -accelerator repo has now been updated, so it no longer says that it's not appropriate for use in production scenarios.

leongj commented 1 year ago

This is great info, @pamelafox. With the recent changes to this repo (esp. moving to Quart, and the introduction of streaming), the Python backend has become more complex (e.g. how you had to deal with followup questions) and takes a bit more effort to understand. It seems one of the reasons is the developer tools in the frontend, which need those parameters passed as input (and the thought process back in the response). This repo has a lot of flexibility, but it's more challenging to change and debug.

The simpler frontend of the chat-with repo seems to have streamlined the interaction with the backend a lot.

A question -- is the dataSources parameter on the API always used? Also, I thought that was part of the Extensions API, not the normal Completions API?

tonybaloney commented 1 year ago

> With the recent changes to this repo (esp. moving to Quart, and the introduction of streaming), the Python backend has become more complex (e.g. how you had to deal with followup questions) and takes a bit more effort to understand. It seems one of the reasons is the developer tools in the frontend, which need those parameters passed as input (and the thought process back in the response). This repo has a lot of flexibility, but it's more challenging to change and debug.

Some extra background on this, regarding the usage of Quart in the backend. When running load tests (see the locustfile in the repo), the RAG end-to-end transaction takes a long time (5+ seconds), which ties up the backend web worker threads when used with Flask and sync APIs. Python with Gunicorn doesn't scale very well when you have long-running blocking requests in a multi-worker, multi-thread setup, and we saw a significant bottleneck at just 5 users in the initial load test. The move to Quart went hand in hand with the async APIs for Azure OpenAI and Azure Cognitive Search, which lets the app scale much better beyond 5 concurrent users.
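To make the difference concrete, here is a minimal sketch of the async pattern (illustrative, not the repo's actual code: the endpoint, index, and deployment names are placeholders, and it assumes the pre-1.0 openai Python SDK with its `acreate` method):

```python
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.aio import SearchClient
from quart import Quart, jsonify, request

# Azure OpenAI configuration for the pre-1.0 openai SDK (values are placeholders).
openai.api_type = "azure"
openai.api_base = "https://<resource>.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = "<aoai-key>"

app = Quart(__name__)
search_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<search-key>"),
)

@app.route("/chat", methods=["POST"])
async def chat():
    body = await request.get_json()

    # Retrieval: awaiting yields the event loop to other requests instead of
    # blocking a worker thread for the whole 5+ second round trip.
    results = await search_client.search(search_text=body["question"], top=3)
    sources = [doc["content"] async for doc in results]

    # Generation: acreate is the async variant of ChatCompletion.create.
    completion = await openai.ChatCompletion.acreate(
        engine="<chat-deployment>",
        messages=[
            {"role": "system",
             "content": "Answer using only these sources:\n" + "\n".join(sources)},
            {"role": "user", "content": body["question"]},
        ],
    )
    return jsonify({"answer": completion["choices"][0]["message"]["content"]})
```

With sync Flask, each in-flight request pins a Gunicorn worker thread for the whole transaction; with Quart on an ASGI server, a single worker can interleave many awaiting requests.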

"more challenging to change and debug." could you please provide some examples of complexity in debugging? I would like to help

leongj commented 1 year ago

Thanks @tonybaloney for the response -- I thought that might be the case, going async for scalability.

I'm not at my machine right now (typing on phone), but I'd been meaning to raise an issue which I spent some time last night trying to debug --

If you ask a question in the frontend that triggers the content filter (e.g. "how do I make a bomb?"), you get a "type error" in the frontend (please confirm).

I have a theory about what's happening, but I'm still new to Quart and really still learning Python TBH. I'd be keen to get your view.

I was also musing on how to solve it. I read that there's a generic catch-all error handler in Flask, but I'm not sure if there is one in Quart (@bp.error_handler?); I tried that but it didn't work. Another option would be a try/except in chatreadretrieveread around the completion blocks, returning a legitimate (stream!) response, but when I realised that was a tuple of [extra_info, coroutine] I thought: this is getting too hard 😄!!
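(For what it's worth, Quart does mirror Flask's error-handler API, though the decorator is spelled `errorhandler`. A minimal sketch follows; the blueprint and handler names are hypothetical, and the caveat is that once a streaming response has started sending, an exception raised inside the generator can no longer be converted into a clean error response this way:)

```python
# Sketch: a Flask-style catch-all error handler on a Quart blueprint.
from quart import Blueprint, jsonify

bp = Blueprint("chat", __name__)

@bp.errorhandler(Exception)  # note: errorhandler, not error_handler
async def handle_exception(error):
    # A content-filter rejection from the openai SDK surfaces as an exception,
    # so it would land here -- as long as the response hasn't started streaming.
    return jsonify({"error": str(error)}), 500
```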

Hope you can shed some light.

pamelafox commented 1 year ago

I'll work on that issue in the other thread, thanks for raising.

leongj commented 1 year ago

> I'll work on that issue in the other thread, thanks for raising.

Thanks Pamela!

blindman457 commented 1 year ago

> We saw a significant bottleneck at just 5 users in the initial load test. The move to Quart went hand in hand with the async APIs for Azure OpenAI and Azure Cognitive Search, which lets the app scale much better beyond 5 concurrent users.

5 users sounds pretty rough. How well does it perform now, @tonybaloney?

github-actions[bot] commented 10 months ago

This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this issue will be closed.