Closed: s-kostyaev closed this issue 6 months ago
This is fixed, but keep in mind that you can still get errors thrown from the initial (synchronous) part of llm calls. For example, badly behaved providers may throw errors, such as when an OpenAI provider isn't initialized with an API key. It's just that all asynchronous parts should go to the callback.
Hi @ahyatt
Simple code to reproduce:
There is no process listening on port 3333. This reproduces with both providers.
The message is never received.
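Since the repro snippet itself isn't shown above, here is a minimal sketch of what such a call might look like, assuming the Ollama provider as one of the two and the llm API of the time (`make-llm-ollama`, `llm-make-simple-chat-prompt`, `llm-chat-async`); the provider choice and prompt text are illustrative assumptions, not the reporter's exact code:

```elisp
;; Hypothetical repro sketch: provider and prompt are assumptions,
;; not the original report's exact code.
(require 'llm)
(require 'llm-ollama)

;; Point the provider at a port where nothing is listening.
(defvar my-repro-provider (make-llm-ollama :port 3333))

(llm-chat-async my-repro-provider
                (llm-make-simple-chat-prompt "hello")
                ;; Success callback.
                (lambda (response)
                  (message "response: %s" response))
                ;; Error callback: expected to fire with a
                ;; connection-refused error, but per the report
                ;; neither message ever arrives.
                (lambda (type err)
                  (message "error: %s %s" type err)))
```

The expectation is that the connection failure reaches the error callback rather than being thrown (or silently dropped) on the synchronous path.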