-
- [ ] [awesome-llm-planning-reasoning/README.md at main · samkhur006/awesome-llm-planning-reasoning](https://github.com/samkhur006/awesome-llm-planning-reasoning/blob/main/README.md?plain=1)
# awesom…
-
Hello! Could you please add the SALMONN series of models?
Title | Venue | Date | Code | Demo
-- | -- | -- | -- | --
[SALMONN: Towards Generic Hearing Abilities for Large Language Models](https://arxiv.o…
-
- [ ] [LLM-Agents-Papers/README.md at main · AGI-Edgerunners/LLM-Agents-Papers](https://github.com/AGI-Edgerunners/LLM-Agents-Papers/blob/main/README.md?plain=1)
# LLM-Agents-Papers
## :writing_hand…
-
- [ ] [system-2-research/README.md at main · open-thought/system-2-research](https://github.com/open-thought/system-2-research/blob/main/README.md?plain=1)
# OpenThought - System 2 Research Links
He…
-
Dear authors,
@shuyansy @UnableToUseGit
I would kindly suggest discussing VoCo-LLaMA [1] in the "Intro" section of your paper, at the very least.
As I find the citation and discussions related to …
-
*Sent by Google Scholar Alerts (scholaralerts-noreply@google.com). Created by [fire](https://fire.fundersclub.com/).*
---
### [PDF] [Attention Prompting on Image for Large Vision-Language…
-
- Here's a summary from consulting an LLM specialist:
---
- We have an initial thought in #74 as follows:
![image](https://github.com/user-attachments/assets/265a3d7d-0454-4e7b-9c99-a0dd9f9ecf7c…
-
To run LLaMA 3.1 (or similar large language models) locally, your hardware must meet certain requirements, especially for storage, memory, and GPU resources. Here's a breakdown of what you typically need:
### …
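As a rough back-of-the-envelope check before downloading weights, model memory scales with parameter count times bytes per parameter, plus runtime overhead (KV cache, activations). The sketch below is a minimal estimator, not a definitive sizing tool; the 1.2× overhead factor is an assumption, and real usage varies with context length and inference framework.

```python
def estimate_model_memory_gb(num_params_billion: float,
                             bits_per_param: int,
                             overhead_factor: float = 1.2) -> float:
    """Rough memory estimate for loading an LLM.

    num_params_billion: model size in billions of parameters
    bits_per_param: precision of the weights (16 = fp16, 4 = 4-bit quantized)
    overhead_factor: assumed multiplier for KV cache / activations (hypothetical)
    """
    bytes_per_param = bits_per_param / 8
    return num_params_billion * bytes_per_param * overhead_factor

# LLaMA 3.1 8B: fp16 needs roughly 19 GB; 4-bit quantization roughly 5 GB
print(f"fp16:  {estimate_model_memory_gb(8, 16):.1f} GB")  # ~19.2 GB
print(f"4-bit: {estimate_model_memory_gb(8, 4):.1f} GB")   # ~4.8 GB
```

Under these assumptions, an 8B model fits on a single 24 GB consumer GPU at fp16, and on much smaller cards when quantized to 4 bits; the 70B and 405B variants scale the same arithmetic up accordingly.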
-
Dear Authors,
We'd like to add "GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning", which has been accepted at NeurIPS 2024, to this repository. [**Paper**](https:/…
-
Hi,
Thanks for your efforts on such a valuable collection!
Could you please add the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate"?
M…