-
# URL
- https://arxiv.org/pdf/2401.12087
# Affiliations
- Keqin Peng, N/A
- Liang Ding, N/A
- Yancheng Yuan, N/A
- Xuebo Liu, N/A
- Min Zhang, N/A
- Yuanxin Ouyang, N/A
- Dacheng Tao, N/…
-
# Task Name
Text-Guided Speech In-Context Learning
## Task Objective
This task aims to utilize textual instructions to guide the interpretation of sequential audio clips, ultimately determini…
-
- [ ] [Structured Prompting: Overcoming Length Limits in In-Context Learning](https://arxiv.org/abs/2212.06713)
# Structured Prompting: Overcoming Length Limits in In-Context Learning
## Snippet
"St…
-
***Please do not request features for the model as an issue. You can refer to the pinned Discussion thread to make feature requests for the model/dataset.***
**Please describe what you are trying t…
-
- [ ] [[2202.12837] Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?](https://arxiv.org/abs/2202.12837)
# [2202.12837] Rethinking the Role of Demonstrations: What Makes In-…
-
Hello,
Thanks for your work. I tried the in-context learning training command from the experiment details but ran into a 'loss is NaN' error. Could you share the exact command you used? Appreciate …
-
### Question
Hello, I want to provide some in-context examples to LLaVA, but I cannot find any guidance on how to insert images into the input prompt. Could you give me some templates for multi-image i…
-
Hi,
I was really impressed by SPHINX's capabilities.
However, is it possible to do in-context learning with it?
Something similar to your example for Multimodal LLaMA2 https://alpha-vllm.github.i…
-
# URL
- https://arxiv.org/abs/2310.15916
# Affiliations
- Roee Hendel, N/A
- Mor Geva, N/A
- Amir Globerson, N/A
# Abstract
- In-context learning (ICL) in Large Language Models (LLMs) has emer…
-
Hi, thanks for the amazing models. I see the TTS models were added to the repo recently. Could you please give an example of how to provide the audio prompt to the TTS model (audioldm2-speech-gigaspeech) for …