Zipstack / unstract

No-code LLM Platform to launch APIs and ETL Pipelines to structure unstructured documents
https://unstract.com
GNU Affero General Public License v3.0

feat: Backend - prompt service plumbing for error handling #315

Closed: chandrasekharan-zipstack closed this 2 months ago

chandrasekharan-zipstack commented 2 months ago

What

Why

How

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)

Relevant Docs

Notes on Testing

Screenshots

[Three screenshots attached]

Checklist

I have read and understood the Contribution Guidelines.

sonarcloud[bot] commented 2 months ago

Quality Gate passed

Issues
3 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

chandrasekharan-zipstack commented 2 months ago

@chandrasekharan-zipstack In screenshots 2 and 3, we need to show the error description without the JSON. Is that achievable only if we get rid of the general exception catch, or is it because the error happens outside our code?

@Deepak-Kesavan the error gets triggered from

OpenAI -> llama_index methods -> SDK -> Prompt Service

Ideally our LLM adapter should expose the .complete() method, and the error should flow as
OpenAI -> llama_index methods -> Adapter -> SDK -> Prompt Service
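
Purely for illustration, a minimal sketch of what such an adapter-level wrapper could look like. `OpenAIAdapter` and `AdapterError` are hypothetical names, not Unstract's actual adapter classes, and the llama_index import path depends on the installed version:

```python
# Hypothetical sketch of an adapter that owns .complete() and translates
# provider errors; names below are illustrative, not the actual adapter API.
from llama_index.llms.openai import OpenAI  # import path varies by llama_index version
from openai import OpenAIError  # base class of the openai client's errors


class AdapterError(Exception):
    """Clean, user-facing error the SDK and prompt service could rely on."""


class OpenAIAdapter:
    def __init__(self, model: str = "gpt-3.5-turbo", **kwargs):
        self._llm = OpenAI(model=model, **kwargs)

    def complete(self, prompt: str):
        try:
            return self._llm.complete(prompt)
        except OpenAIError as exc:
            # Re-raise with a readable description instead of letting the
            # provider's raw JSON payload bubble up to the prompt service.
            raise AdapterError(str(exc)) from exc
```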
  1. As things stand, we can only implement error handling in the SDK or the prompt service, and doing it in the SDK makes the most sense since it will then apply to every scenario where we call an LLM.
  2. We'll have to do this for every LLM provider: go through their codebase's error classes and work out a way to parse the error message (preferably via their base class so that all errors can be handled). I'll raise an SDK PR handling it for OpenAI; see the sketch after this list.
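
A minimal sketch of that SDK-level handling for OpenAI, assuming the openai>=1.x Python client. `PROVIDER_BASE_ERRORS`, `parse_error_message` and `complete` are hypothetical names used only for illustration, not existing SDK code; other providers would be added to the registry as their error hierarchies are reviewed:

```python
# Minimal sketch of SDK-level error handling; names are hypothetical.
import openai

# Base exception class per provider, so every subclass is caught in one place.
PROVIDER_BASE_ERRORS = {
    "openai": openai.OpenAIError,
}


def parse_error_message(exc: Exception) -> str:
    """Prefer the provider's structured message over its raw JSON body."""
    # openai.APIError and its subclasses expose .message; fall back to str().
    return getattr(exc, "message", None) or str(exc)


def complete(llm, prompt: str, provider: str = "openai"):
    base_error = PROVIDER_BASE_ERRORS.get(provider, Exception)
    try:
        return llm.complete(prompt)
    except base_error as exc:
        # Surface a clean description to the prompt service instead of the
        # raw provider payload.
        raise RuntimeError(f"LLM call failed: {parse_error_message(exc)}") from exc
```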