-
![1723019406508](https://github.com/user-attachments/assets/167e0c32-31e7-4a08-8ceb-fbb546fa285c)
Here you can see that the custom agent does return a value, but the console ultimately prints the following:
![1723019786031](https://github.com/user-attachments/assets/24e428…
ntucz updated
4 weeks ago
-
Reason: to support continual learning with LLMs, for example:
* Use historical interaction logs to improve an agent's performance as interaction sessions accumulate.
* Summarize data that …
-
Just change the LLM chain to an agent that can reason. It should be able to interact better with the user when questions are asked.
-
Hello!
I'm trying to use SalesGPT with a locally-served model via Ollama.
I tested the LiteLLM part; it works.
```
from litellm import completion

response = completion(
    model="orca-min…
```
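For reference, a minimal sketch of calling a locally served Ollama model through LiteLLM. The model name `ollama/llama3` and the `api_base` are assumptions; substitute whatever `ollama list` reports and wherever your Ollama server actually listens.

```python
def build_messages(prompt, system_prompt=None):
    """Build an OpenAI-style chat message list, as expected by LiteLLM's completion()."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages


def ask_ollama(prompt, model="ollama/llama3", api_base="http://localhost:11434"):
    """Send a single-turn prompt to a local Ollama model via LiteLLM.

    `model` and `api_base` are assumed values for illustration; LiteLLM routes
    any model name prefixed with "ollama/" to the Ollama provider.
    """
    from litellm import completion  # imported lazily so build_messages stays dependency-free

    response = completion(model=model, messages=build_messages(prompt), api_base=api_base)
    return response.choices[0].message.content
```

Calling `ask_ollama("Hello")` then requires a running Ollama server with the chosen model pulled.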
-
### Summary
This issue is a list of enhancements aimed at improving tracing in the APM UI, particularly for RAG applications. The need for these improvements has emerged as the Security team has been…
-
- [ ] [Reader API](https://jina.ai/reader/#demo)
# Reader API
## Get LLM-friendly input from a URL or a web search, by simply adding `r.jina.ai` in front.
Add https://r.jina.ai/ to any URL in…
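Since the Reader API works purely by URL prefixing, a tiny helper is enough to use it from code; a sketch (the commented-out fetch requires network access):

```python
READER_PREFIX = "https://r.jina.ai/"


def reader_url(url):
    """Return the Reader API URL that yields LLM-friendly text for `url`."""
    return READER_PREFIX + url


# Fetching the converted text (requires network access):
# from urllib.request import urlopen
# text = urlopen(reader_url("https://example.com")).read().decode("utf-8")
```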
-
Hi, can you please provide a guide or support for using local LLM models via Ollama, such as Llama 3.1 8B or 70B?
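Independent of any framework integration, a local Ollama model can be queried directly over its REST API. A minimal sketch, assuming Ollama's default port 11434 and a `llama3.1:8b` tag (use whatever `ollama list` shows):

```python
import json
from urllib import request


def ollama_payload(prompt, model="llama3.1:8b"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    The model tag is an assumption; replace it with a tag from `ollama list`.
    `stream=False` asks for one complete JSON response instead of a stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt, model="llama3.1:8b", host="http://localhost:11434"):
    """POST a prompt to a local Ollama server and return the generated text."""
    body = json.dumps(ollama_payload(prompt, model)).encode("utf-8")
    req = request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```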
-
I am trying to build an agent using function calling and would like to implement it with DSPy, but I haven't found any instructions for using DSPy with function calling.
-
### Describe the bug
Hi, I assume nested chats should work even when using a custom `speaker_selection_method`, since the chat is nested and should follow the configuration.
### Steps to reproduce
```py
f…
```
-
Hello, I'm trying to use the langchain integration but I cannot figure out how to use it; I'm following some examples from langchain:
```
import { LLM } from "llama-node";
import { LLamaRS } from "…
```