-
-
### Describe the bug
Hello dear team, hope you are doing well.
I am about to report a bug, although it may well work as expected; in that case I would ask you to clarify this behavior in the documentatio…
-
# Expected Behavior
Caching should improve performance
# Current Behavior
When running the server with caching enabled:
```bash
python -m llama_cpp.server --model --cache true --cache_typ…
-
Hello, I'm having some problems deploying a script that was working perfectly locally.
Since I needed some [Vercel AI utilities](https://github.com/vercel/ai) to deal with the OpenAI streaming API w…
-
We only recalculate fingerprints when we bump the fingerprint version, to save resources. This can cause issues for users whose data distribution changes at a fast pace. The most prominent of these is binnin…
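As a hedged illustration of the binning problem (the helper names below are hypothetical, not our actual implementation): bin edges computed from an old sample and cached under a fingerprint stop covering the data once its distribution drifts, so new values fall outside every bin.

```python
# Hypothetical sketch: bin edges are derived once from an old sample and
# cached under a fingerprint; when the distribution shifts, the stale
# edges no longer cover the new values.

def quantile_edges(values, n_bins):
    """Compute equal-frequency bin edges from a sample."""
    data = sorted(values)
    return [data[int(i * (len(data) - 1) / n_bins)] for i in range(n_bins + 1)]

def assign_bin(value, edges):
    """Return the bin index for value, or None if it falls outside the edges."""
    if value < edges[0] or value > edges[-1]:
        return None  # stale edges: value is outside the cached range
    for i in range(len(edges) - 1):
        if value <= edges[i + 1]:
            return i
    return len(edges) - 2

# Edges cached when the data looked like 0..99.
cached_edges = quantile_edges(range(100), 4)

# The distribution has since drifted upward; every new value misses the bins.
new_values = [150, 175, 200]
unbinned = [v for v in new_values if assign_bin(v, cached_edges) is None]
```

Until the fingerprint version is bumped and the edges are recomputed, all of the drifted values end up unbinned.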
-
### Describe the issue
When trying to use a custom-built WASM artifact, the following error is thrown after downloading:
`wasm streaming compile failed: LinkError: WebAssembly.instantiate(): Imp…
-
Hi,
I have built a RAG app and I am loading an LLM with LlamaCpp. However, I have problems making streaming work for FastAPI or LangServe requests. Streaming works in my terminal, but I don…
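For reference, this is the generator pattern I understand FastAPI's `StreamingResponse` expects: an async generator that yields each chunk as the model produces it, rather than returning the full completion at the end. A framework-free sketch (`fake_llm_stream` is a stand-in here, not the real LlamaCpp call):

```python
# Minimal sketch of the streaming shape StreamingResponse consumes:
# yield chunks immediately instead of concatenating them, which is why
# output can stream in the terminal (callback prints tokens) yet arrive
# all at once over HTTP if the endpoint only returns the final string.
import asyncio

async def fake_llm_stream(prompt):
    # Placeholder for a token-by-token model stream (not the LlamaCpp API).
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0)  # simulate waiting on the model
        yield token

async def token_stream(prompt):
    # The generator you would hand to StreamingResponse.
    async for token in fake_llm_stream(prompt):
        yield token

async def collect(prompt):
    # Test harness: drain the stream into a list of chunks.
    return [chunk async for chunk in token_stream(prompt)]

chunks = asyncio.run(collect("hi"))
```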
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
I used the streaming code example from https://tda-api.readthedocs.io/en/latest/streaming.html
But I received this error:
tda.streaming.UnexpectedResponseCode: unexpected response code: 3, msg is …
-
### Summary
In partnership with AVP and Brumfield Labs, UT-Austin would like to use their audio annotation tool AVAnnotate on AAPB streaming content.
To do this most efficiently on their end, they h…