-
Hey unsloth team, beautiful work being done here.
I am the author of [MachinaScript for Robots](https://github.com/babycommando/machinascript-for-robots) - a framework for building LLM-powered robo…
-
I'd like it to work on my existing project, which has multiple code files in nested folders, and to support multimodality with local models via Ollama and LiteLLM.
-
Is there a data table for the benchmarks?
-
We could add filters to the leaderboard, similar to what we have for the plots. These could be even more complex and lead to a re-ordering of the leaderboard. Basically, we could use all parameters that we a…
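As a rough illustration of the idea, here is a minimal sketch of filtering and re-ranking leaderboard entries by arbitrary parameters. All names (the entries, `params_b`, `filter_and_rank`) are hypothetical placeholders, not the project's actual data model:

```python
# Hypothetical leaderboard entries; field names are illustrative only.
leaderboard = [
    {"model": "a", "params_b": 7, "score": 61.2},
    {"model": "b", "params_b": 70, "score": 74.8},
    {"model": "c", "params_b": 13, "score": 65.0},
]

def filter_and_rank(entries, sort_key="score", **filters):
    """Keep entries whose values pass every filter predicate, then re-rank."""
    kept = [
        e for e in entries
        if all(pred(e.get(k)) for k, pred in filters.items())
    ]
    # Re-ordering happens here: the leaderboard is re-sorted over the
    # filtered subset, so filters can change the ranking shown.
    return sorted(kept, key=lambda e: e[sort_key], reverse=True)

# Example: restrict to models with <= 13B parameters, ranked by score.
small = filter_and_rank(leaderboard, params_b=lambda v: v <= 13)
```

Passing predicates as keyword arguments keeps the filter set open-ended, matching the idea of filtering on "all parameters" without hard-coding them.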
-
### Feature Description
Support Claude 3.5 Sonnet as a Multimodal LLM in llama-index-multi-modal-llms-anthropic
### Reason
llama-index-multi-modal-llms-anthropic does not support Claude 3.5 S…
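For context, here is a minimal sketch of the image-plus-text message format that the Anthropic Messages API expects, which any Claude 3.5 Sonnet multimodal integration would need to emit. The helper name `build_image_message` is hypothetical; the payload shape follows Anthropic's documented base64 image content blocks:

```python
import base64

def build_image_message(image_bytes: bytes, prompt: str,
                        media_type: str = "image/png") -> dict:
    """Build a single user message combining an image and a text prompt,
    in the content-block format used by the Anthropic Messages API."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,
                    # Image bytes must be base64-encoded as ASCII text.
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": prompt},
        ],
    }

msg = build_image_message(b"\x89PNG...", "Describe this image.")
```

Once the integration accepts the newer model id (e.g. `claude-3-5-sonnet-20240620`), the request body itself should need no structural changes, since 3.5 Sonnet uses the same multimodal message format as the Claude 3 family.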
IngLP updated
1 month ago
-
Paper : [https://arxiv.org/pdf/2406.16860](https://arxiv.org/pdf/2406.16860)
Website : [https://cambrian-mllm.github.io](https://cambrian-mllm.github.io)
Code : [https://github.com/cambrian-mllm/cam…
-
### System Info
- nvidia:535.129.03
- cuda_version:12.4
- GPU:L40S
- OS:Ubuntu 22.04.4 LTS(docker)
- tensorrt-llm: 0.11.0.dev2024060400
### Who can help?
_No response_
### Information
…
-
With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs?
Are there plans to support tuning vision or, more generally, multimodal models?
bhack updated
1 month ago
-
Hello,
Would you consider supporting multimodal LLMs such as LLaVA?
-
Hi,
Love your work!
Could you please provide information on when the data and code will be released?
Excited to dive in!
Best,
PhucNDA.