-
### Description
When using memory=True for a crew that uses Azure OpenAI, an error occurs while creating long-term memory.
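For reference, a minimal configuration that exercises this path might look like the sketch below; the embedder provider key and config fields are assumptions for illustration, not a confirmed CrewAI API.

```python
# Hypothetical minimal sketch: a crew with memory=True backed by Azure OpenAI.
# The embedder provider key and config field names are assumptions, not confirmed API.
import os
from crewai import Agent, Task, Crew

agent = Agent(role="Researcher", goal="Summarize input", backstory="Test agent")
task = Task(description="Summarize the given text", expected_output="A short summary", agent=agent)

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,  # long-term memory triggers the embedding call that fails
    embedder={
        "provider": "azure",  # assumed provider key
        "config": {
            "model": "text-embedding-ada-002",
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "api_base": os.environ["AZURE_OPENAI_ENDPOINT"],
        },
    },
)

result = crew.kickoff()
```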
### Steps to Reproduce
```
import os
from chromadb.utils.embedding_…
-
# trtllm-bench --model models/Llama-2-7b-hf throughput --dataset experiments/synthetic_128_128.txt --engine_dir models/Llama2-7b-trt-engine
[TensorRT-LLM] TensorRT-LLM version: 0.15.0.dev2024111200
…
-
Description: CrewAI uses the LiteLLM library to route LLM requests to the appropriate model. Currently, LiteLLM throws an "LLM Provider NOT provided" error whenever a request is made for a non-OpenA…
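For context, LiteLLM resolves the provider from a prefix on the model string and raises this error when it cannot infer one. A minimal sketch of a prefixed Azure call (the deployment name and environment variable values are placeholders, and this is not the exact CrewAI routing code):

```python
# Minimal LiteLLM call with an explicit provider prefix (placeholders, not CrewAI internals).
# Without the "azure/" prefix, LiteLLM cannot infer the provider and raises
# "LLM Provider NOT provided".
import os
import litellm

os.environ["AZURE_API_KEY"] = "<your-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"

response = litellm.completion(
    model="azure/<your-deployment-name>",  # provider prefix tells LiteLLM how to route
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```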
-
Giving the LLM a bigger chunk of text as input and requesting some sort of "thinking" leads to hallucinations. With the current state of models we cannot get rid of this. Maybe in the future those will be g…
-
# Architecture
This document outlines the architecture of the AI Nutrition-Pro application, including system context, containers, and deployment views. The architecture is depicted using C4 diagram…
-
We have [recently announced](https://blog.langchain.dev/langgraph-platform-announce/) LangGraph Platform, a ***significantly*** enhanced solution for deploying agentic applications at scale.
We rec…
-
at the link
https://nayakpplaban.medium.com/building-an-llm-application-for-document-q-a-using-chainlit-qdrant-and-zephyr-7efca1965baa
-
### 🐛 Describe the bug
My current code:
```js
import { RAGApplicationBuilder, LocalPathLoader } from '@llm-tools/embedjs';
import { OpenAiEmbeddings } from '@llm-tools/embedjs-openai';
import { …
-
### Description
Currently, the cell output location is a user config display property. For some applications, it would be helpful to specify this as an application property that is maintained when sh…
-
### Jan version
0.5.7
### Describe the Bug
Using Jan v0.5.7 on a Mac with an M1 processor, running Llama 3.2 3B instruct q8 via the API. Occasionally, the server stops responding to POST requ…
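For reference, the request that stops getting a response would be an OpenAI-compatible chat-completion POST to the local server; a minimal sketch, assuming Jan's default local address and a placeholder model id:

```python
# Minimal sketch of the kind of request involved, assuming Jan's local API server
# is running at its default address (http://localhost:1337) with an OpenAI-compatible
# /v1/chat/completions endpoint; adjust host, port, and model id as needed.
import requests

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "llama3.2-3b-instruct",  # placeholder model id
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
    timeout=60,
)
print(resp.status_code, resp.json())
```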