-
Some changes could be made to this project so that it can use any LiteLLM-supported model. That would let everyone run 100+ LLMs easily without any issue, so people can enjoy all open…
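For context, LiteLLM addresses models with a `provider/model-name` string, so a project only needs to pass that string through instead of hard-coding one client. A minimal sketch of that convention, assuming a hypothetical `complete` wrapper (not this project's code):

```python
def provider_of(model: str) -> str:
    """Return the provider part of a LiteLLM-style model string.

    LiteLLM names models as "provider/model-name" (e.g. "ollama/llama3",
    "anthropic/claude-3-haiku-20240307"); bare names such as "gpt-4o"
    fall through to the OpenAI provider.
    """
    return model.split("/", 1)[0] if "/" in model else "openai"


def complete(model: str, prompt: str) -> str:
    """Hypothetical wrapper: route any supported model through LiteLLM."""
    import litellm  # deferred so provider_of() works without the package installed

    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # LiteLLM returns an OpenAI-shaped response object
    return resp.choices[0].message.content
```

With this shape, switching from one provider to another is just a change of the `model` string, e.g. `complete("ollama/llama3", "hi")` versus `complete("gpt-4o", "hi")`.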
-
Can GroundingDINO support TensorRT-LLM multimodal?
[TensorRT-LLM multimodal](https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/multimodal/README.md)
-
### Your current environment
hardware: A800
Driver Version: 535.54.03, CUDA Version: 12.2
vLLM commit: d3a245138acb358c7e1e5c5dcf4dcb3c2b48c8ff
model: Qwen-72B
### Model Input Dumps
_No response…
-
# Architecture
This document outlines the architecture of the AI Nutrition-Pro application, including system context, containers, and deployment views. The architecture is depicted using C4 diagram…
-
- [ ] xAI's grok
- [ ] AWS
-
### Which API Provider are you using?
Google Gemini
### Which Model are you using?
any Google LLM and OpenAI-compatible models
### What happened?
Under Ubuntu, when using the Cline extension in Visual S…
-
This plugin currently hard-codes to using Claude 3 Haiku and the Anthropic client library:
https://github.com/datasette/datasette-query-assistant/blob/a777a80bcb3b42933b2933de895f4f2eb9376e9d/datas…
-
### Bug Description
Build a custom output parser and pass it to the model
I dug into the code and found that in the file llama_index.llama-index-core.llama_index.core.program.utils.py
the output…
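For context on what such a parser does: an output parser exposes a `parse(output: str)` method that turns raw LLM text into a structured object. A framework-independent sketch of the usual shape (this standalone class mirrors the convention but is not llama-index's own base class):

```python
import json
import re


class JSONOutputParser:
    """Minimal custom output parser: pull a JSON object out of raw LLM text.

    A real llama-index parser would subclass the library's base parser and be
    passed to the program/LLM; this version only shows the parse step itself.
    """

    # Match a JSON object, optionally wrapped in a ```json fenced block
    _FENCE = re.compile(r"```(?:json)?\s*(\{.*?\})\s*```", re.DOTALL)

    def parse(self, output: str) -> dict:
        match = self._FENCE.search(output)
        text = match.group(1) if match else output
        return json.loads(text)
```

The fence-stripping step matters because many models wrap their JSON answer in a markdown code block, which `json.loads` alone would reject.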