-
- [ ] [Introduction to AI Agents - Cerebras Inference](https://inference-docs.cerebras.ai/agentbootcamp-section-1)
# Introduction to AI Agents - Cerebras Inference
## Overview
Cerebras Inference ho…
-
Description: CrewAI uses the LiteLLM library to route LLM requests to the appropriate model. Currently, LiteLLM throws an error, "LLM Provider NOT provided", whenever a request is made for a non-OpenA…
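For context, LiteLLM resolves the backend from the model string itself: a bare name like `gpt-4o` defaults to OpenAI, while other providers need an explicit `provider/model` prefix. A minimal sketch of the pattern (the `with_provider` helper is hypothetical, not part of CrewAI or LiteLLM):

```python
# LiteLLM routes by prefix in the model string, e.g. "groq/llama3-70b-8192"
# or "ollama/llama3"; an unprefixed non-OpenAI name triggers the
# "LLM Provider NOT provided" error described above.
def with_provider(model: str, provider: str) -> str:
    """Hypothetical helper: prefix a model name for LiteLLM routing."""
    return model if "/" in model else f"{provider}/{model}"

print(with_provider("llama3-70b-8192", "groq"))  # groq/llama3-70b-8192
print(with_provider("ollama/llama3", "ollama"))  # already prefixed, unchanged
```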
-
### What happened?
Hi there.
My llama-server works well with the following command:
```bash
/llama.cpp-b3985/build_gpu/bin/llama-server -m ../artifact/models/Mistral-7B-Instruct-v0.3.Q4_1.g…
```
-
### Description
My use case is that I have a LangChain/LangGraph agent that I would like to import and use in marimo.
I understand `mo.ai` enables LLM access, and `mo.ui.chat` renders a chat box to i…
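One way this could work is to wrap the agent's `invoke` call in a plain callable over the message history, which is the shape `mo.ui.chat` accepts. A sketch with a stand-in agent (`EchoAgent` and `make_chat_model` are illustrative names, not marimo or LangChain APIs):

```python
# Hedged sketch: adapt an agent object exposing .invoke() into the
# callable-over-message-history shape that marimo's mo.ui.chat accepts.
def make_chat_model(agent):
    def chat_model(messages, config=None):
        last = messages[-1]
        # marimo passes ChatMessage objects with a .content attribute;
        # fall back to plain strings so the sketch runs standalone.
        text = getattr(last, "content", last)
        return agent.invoke(text)
    return chat_model

class EchoAgent:
    """Stand-in for a LangGraph agent exposing .invoke()."""
    def invoke(self, text):
        return f"echo: {text}"

model = make_chat_model(EchoAgent())
print(model(["hello"]))  # echo: hello
```

In a marimo notebook the resulting callable would then be passed to `mo.ui.chat(...)` in place of a built-in model.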
-
### Bug Description
The agent function tools stop working after calling `.update_prompts()`.
Is this an error in my usage, or is it a bug?
Thank you in advance for your response.
### Versio…
-
I deployed Qwen2.5-14B-Instruct on my local server and started the LLM correctly with vLLM.
But when I executed the sample code,
```
from paperqa import Settings, ask
local_llm_config = dict(
…
```
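For reference, PaperQA's `Settings` takes a LiteLLM-router-style `model_list`. A hedged sketch of a config pointing at a local OpenAI-compatible vLLM endpoint (the port, api_key value, and model id are placeholder assumptions, not taken from the report above):

```python
# Hedged sketch: LiteLLM-style router config for a local OpenAI-compatible
# server such as vLLM. The "openai/" prefix selects the OpenAI-compatible
# route; api_base and api_key are placeholders for a local deployment.
local_llm_config = dict(
    model_list=[
        dict(
            model_name="Qwen2.5-14B-Instruct",
            litellm_params=dict(
                model="openai/Qwen2.5-14B-Instruct",
                api_base="http://localhost:8000/v1",
                api_key="EMPTY",
            ),
        )
    ]
)
print(local_llm_config["model_list"][0]["litellm_params"]["api_base"])
```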
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
```
task_manager = TaskManager(self.agent_config.get("agent_name", self.agent_config.get("assistant_name")),
  File "/app/bolna/agent_manager/task_manager.py", line 58, in __init__
…
```
-
# Problem
Prompt engineering is slow, and in a multi-agent system it is very hard. You need to be able to experiment quickly with a message arbitrarily deep in the stack. The current replay feature is helpful, but …
-
## System Information
- MacOS Version: 14.5
- Letta: main branch, Oct 31 (8 AM EDT)
## Agent Configuration
- Agent Name: GroqOne
- Model: llama3-70b-8192
- Embedding Model: Letta-free
- A…