-
### System Info
GPU: A10G
### Who can help?
@kaiyux
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task…
-
![image](https://github.com/paperswithlove/papers-we-read/assets/12858045/20087322-d388-45db-b0ed-2daab0ea5baf)
[https://arxiv.org/abs/2403.09611](https://arxiv.org/abs/2403.09611)
- Wow, Apple released an MLL…
-
Is there any way to pass hidden_states directly to the LLM when using inflight batching?
For example:
In the multimodal case, the image feature embedding is produced by the vision_tower and projector.
Ge…
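One common pattern for feeding precomputed vision embeddings to the LLM (used by the TensorRT-LLM multimodal examples) is the prompt-table mechanism: the projector output is passed as a prompt embedding table, and "fake" token ids beyond the vocabulary index into it. Below is a minimal NumPy sketch of just that id/table bookkeeping; `build_prompt_table` is a hypothetical helper, not a TensorRT-LLM API, and the shapes (576 visual tokens, hidden size 4096, vocab 32000) are illustrative LLaVA-like values.

```python
import numpy as np

def build_prompt_table(image_features: np.ndarray, vocab_size: int):
    """Hypothetical helper: pair projected image features with the fake
    token ids that reference them. Ids >= vocab_size are resolved from
    the prompt table instead of the regular embedding matrix."""
    num_visual_tokens = image_features.shape[0]
    # Fake token ids start right after the real vocabulary.
    fake_ids = np.arange(vocab_size, vocab_size + num_visual_tokens)
    return image_features, fake_ids

# Example: 576 visual tokens of hidden size 4096, vocabulary of 32000.
features = np.zeros((576, 4096), dtype=np.float16)
table, ids = build_prompt_table(features, vocab_size=32000)
print(ids[0], ids[-1])  # 32000 32575
```

The fake ids are spliced into the text token sequence at the image position; at runtime the engine looks up ids below `vocab_size` in the normal embedding table and ids at or above it in the supplied prompt table, which is how the hidden states reach the LLM without a custom embedding pass.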
-
### System Info
TRT-LLM version: 0.12.0.dev2024070900
### Who can help?
@ncomly-nvidia
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An offic…
-
### System Info
- RTX 4090
- x86_64 GNU/Linux
- main branch
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Ta…
-
### System Info
- GPU: L40S
- TensorRT-LLM: 0.11.0.dev2024060400
- CUDA: cuda_12.4.r12.4/compiler.34097967_0
- Driver: 535.129.03
- OS: DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS" (Docker)
-
### Who…
-
Does tensorrtllm_backend support multimodal LLMs like LLaVA, such as those listed in https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/multimodal?
-
In models.ts I changed the Anthropic and OpenAI baseURL as shown below:
export function getModelClient(model: LLMModel, config: LLMModelConfig) {
const { id: modelNameString, providerId } = m…
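Since the snippet above is cut off, here is a minimal sketch of how such a per-provider baseURL override can be wired up. The `LLMModel`/`LLMModelConfig` shapes and the default endpoint table are assumptions for illustration; `getClientConfig` is a hypothetical stand-in that builds the options object you would hand to the real Anthropic/OpenAI SDK constructors.

```typescript
// Assumed, simplified shapes for illustration only.
type LLMModel = { id: string; providerId: "anthropic" | "openai" };
type LLMModelConfig = { apiKey?: string; baseURL?: string };

function getClientConfig(model: LLMModel, config: LLMModelConfig) {
  const { id: modelNameString, providerId } = model;
  // Hypothetical default endpoints; a user-supplied baseURL wins.
  const defaults: Record<string, string> = {
    anthropic: "https://api.anthropic.com",
    openai: "https://api.openai.com/v1",
  };
  return {
    model: modelNameString,
    baseURL: config.baseURL ?? defaults[providerId],
    apiKey: config.apiKey,
  };
}

// Pointing the OpenAI-compatible client at a local server:
const cfg = getClientConfig(
  { id: "gpt-4o", providerId: "openai" },
  { baseURL: "http://localhost:8000/v1" }
);
console.log(cfg.baseURL); // http://localhost:8000/v1
```

Keeping the override in the config object (rather than hard-coding it in `getModelClient`) means the same routing works for any OpenAI-compatible endpoint without touching the provider switch.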
-
**Goal**: To collaborate with more RL researchers on ScrimBrain
**Problem**: @wkwan writes like a zoomer
**Solution**: Generate a scientific paper for ScrimBrain using LLM's and multimodal model…
-
### Duplicates
- [X] I have searched the existing issues
### Summary 💡
Adding support for Image, Video, and Audio inputs into the AutoGPT system is more than just supporting it at the fastapi serve…