-
I am using Next.js and Vercel's AI SDK to send additional data (the sources of retrieved documents) along with the streamed response. While developing locally, everything works as expected and `handleRetrieverEnd`…
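One small, self-contained piece of that flow can be sketched as follows: collecting source strings from the retrieved documents so they can be appended to the stream as extra data. The document shape (`metadata.source`) is an assumption based on LangChain's usual retriever output, and the helper name is illustrative:

```javascript
// Illustrative helper (assumed shape: LangChain-style documents carrying a
// `metadata.source` string). Deduplicates sources for the stream payload.
function extractSources(documents) {
  const sources = documents
    .map((doc) => doc.metadata && doc.metadata.source)
    .filter(Boolean);
  return [...new Set(sources)];
}
```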
-
const inference = new HfInference(HF_ACCESS_TOKEN);
It would be good to validate the HF token here.
I am following the Vercel AI SDK [HF example](https://sdk.vercel.ai/docs/guides/huggingface#use-t…
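A minimal sketch of that validation, assuming the usual `hf_`-prefixed user access token format; the function name is illustrative, not part of the SDK:

```javascript
// Hypothetical guard: fail fast if the token is missing or doesn't look like
// a Hugging Face user access token (these currently start with "hf_").
function assertValidHfToken(token) {
  if (typeof token !== "string" || token.length === 0) {
    throw new Error("HF_ACCESS_TOKEN is not set");
  }
  if (!token.startsWith("hf_")) {
    throw new Error("HF_ACCESS_TOKEN does not look like a Hugging Face token");
  }
  return token;
}

// Usage: const inference = new HfInference(assertValidHfToken(HF_ACCESS_TOKEN));
```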
-
### Description
"chromadb": "1.6.1",
"langchain": "0.0.200",
"next": "13.5.4",
When I'm querying Chroma on localhost there is no issue; everything works like a charm. But on the production build on V…
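One common cause worth checking (an assumption, since the snippet is truncated): the Chroma client defaults to `http://localhost:8000`, which exists in development but not in a deployed serverless environment. A sketch of resolving the server URL explicitly, with `CHROMA_URL` as a hypothetical env var name:

```javascript
// Sketch: refuse to fall back to localhost when running on Vercel
// (Vercel sets the VERCEL env var in deployed builds).
function resolveChromaUrl(env) {
  if (env.CHROMA_URL) return env.CHROMA_URL;
  if (env.VERCEL) {
    throw new Error("CHROMA_URL must be set for production deployments");
  }
  return "http://localhost:8000"; // Chroma's default local address
}

// Usage: new Chroma(embeddings, { url: resolveChromaUrl(process.env), ... })
```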
-
Hi,
I have a **backend API** with the code for `abc.com/api/chat` as follows:
```js
const nonStreamingModel = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
}, configuration);
…
-
### Astro Info
```block
Astro v3.0.12
Node v18.16.0
System macOS (arm64)
Package Manager yarn
Output server
Ad…
-
I have a simple working example where I use langchain and stream the response by calling a chain (as opposed to an LLM). The `route.ts` file contains this:
```typescript
import { StreamingTextRespon…
-
Can someone point me in the right direction on how to properly convert `route.ts` from OpenAI to Anthropic?
I have read the Vercel SDK documentation and I can't make it work.
Please help. Tha…
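Not a full answer, but one concrete difference when porting such a route: Anthropic's older completions API takes a single `\n\nHuman: …\n\nAssistant:` prompt string rather than an OpenAI-style message array. A hedged sketch of that conversion (the helper name is illustrative):

```javascript
// Convert a Vercel AI SDK-style `messages` array into Anthropic's legacy
// prompt format, ending with "\n\nAssistant:" so the model replies next.
function messagesToAnthropicPrompt(messages) {
  const turns = messages
    .map(({ role, content }) =>
      role === "user" ? `\n\nHuman: ${content}` : `\n\nAssistant: ${content}`
    )
    .join("");
  return `${turns}\n\nAssistant:`;
}
```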
-
I'm a bit lost as to how to actually use `stream: true` in this library.
Example incorrect syntax:
```javascript
const res = await openai.createCompletion({
model: "text-davinci-002",
promp…
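// Aside (an assumption, not from the truncated snippet above): with
// openai-node v3, `stream: true` also needs axios's responseType: "stream"
// passed as a second argument, and the body then arrives as raw
// server-sent events. A hedged sketch of parsing one SSE chunk:
function parseSseChunk(chunk) {
  return chunk
    .toString()
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]")
    .map((payload) => JSON.parse(payload));
}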
-
### Description
Instead of the server action streaming the response as it receives chunks from OpenAI, it streams everything only after completion. A video is attached to demonstrate.
Code is an almost exact copy from …
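A guess at the failure mode, since only part of the code is shown: if the action awaits the full completion before returning, nothing can stream. The contrast below uses a hypothetical async iterator standing in for the OpenAI stream:

```javascript
// Illustrative stand-in for an LLM token stream (not the template's code).
async function* fakeLlmChunks() {
  for (const piece of ["Hel", "lo, ", "world"]) yield piece;
}

// Buffered: the caller sees nothing until the whole loop finishes.
async function buffered() {
  let all = "";
  for await (const c of fakeLlmChunks()) all += c;
  return all;
}

// Streaming: wrap the iterator in a ReadableStream and return it immediately,
// so chunks reach the client as they are produced.
function streaming() {
  const iterator = fakeLlmChunks();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
  });
}
```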
-
Hey everyone. I am attempting to use this template and replace the direct OpenAI calls with LangChain.
I updated the code in `/app/api/chat/route.ts`. I used the LangChain example here as a referen…