-
Hello,
I would like to inquire whether the training data for the qwen2-audio-instruction model includes the IEMOCAP dataset for fine-tuning on speech emotion recognition tasks. Any clarification on …
-
I am experiencing a performance discrepancy when running the source code. The MELD results are consistent with the paper, but for IEMOCAP I am seeing a lower F1 score (66.85) than the paper's …
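For reference, one frequent source of an IEMOCAP F1 gap of this size is the averaging mode (weighted vs. macro vs. micro) or a different session/speaker split. A minimal check with scikit-learn, using placeholder labels, would be:

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and predictions for a 4-way IEMOCAP setup
# (e.g., angry/happy/neutral/sad); replace with the real test outputs.
y_true = [0, 1, 2, 3, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

# Compare all three averaging modes to see which one the paper reports.
for avg in ("weighted", "macro", "micro"):
    print(avg, f1_score(y_true, y_pred, average=avg))
```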
-
Hello author, I am currently working on using the IEMOCAP dataset with a multi-label approach on your architecture, with audio, video, and text as input. However, I ran into some problems with your code; here are t…
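For context, a generic sketch of a multi-label head over fused audio, video, and text features might look like the following. This is an illustration with made-up dimensions and label count, not the repository's actual code:

```python
import torch
import torch.nn as nn

class MultiLabelFusionHead(nn.Module):
    """Concatenation fusion of three modality features, multi-label output."""
    def __init__(self, d_audio=128, d_video=128, d_text=768, n_labels=9):
        super().__init__()
        self.proj = nn.Linear(d_audio + d_video + d_text, 256)
        self.out = nn.Linear(256, n_labels)

    def forward(self, a, v, t):
        fused = torch.cat([a, v, t], dim=-1)            # simple concatenation fusion
        return self.out(torch.relu(self.proj(fused)))   # raw logits, one per label

head = MultiLabelFusionHead()
logits = head(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 768))
targets = torch.randint(0, 2, (4, 9)).float()           # multi-hot label vectors
loss = nn.BCEWithLogitsLoss()(logits, targets)          # independent sigmoid per label
```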
-
I am trying to train Tacotron 2 on the IEMOCAP dataset. In order to take full advantage of all the speakers, I added a speaker reference encoder (concatenated with the original text encoder). How…
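As a point of comparison, one common way to condition a Tacotron-2-style model on a speaker is to tile a fixed-size speaker embedding along the time axis and concatenate it with the text-encoder outputs before the decoder. A minimal PyTorch sketch, with names and sizes that are illustrative rather than from any particular implementation:

```python
import torch
import torch.nn as nn

class SpeakerConditioning(nn.Module):
    def __init__(self, d_enc=512, d_spk=64, n_speakers=10):
        super().__init__()
        # IEMOCAP has 10 speakers (5 sessions x 2 actors).
        self.spk_table = nn.Embedding(n_speakers, d_spk)

    def forward(self, enc_out, speaker_id):
        # enc_out: (batch, time, d_enc); speaker_id: (batch,)
        spk = self.spk_table(speaker_id)                        # (batch, d_spk)
        spk = spk.unsqueeze(1).expand(-1, enc_out.size(1), -1)  # tile over time
        return torch.cat([enc_out, spk], dim=-1)                # (batch, time, d_enc + d_spk)

cond = SpeakerConditioning()
out = cond(torch.randn(2, 50, 512), torch.tensor([0, 7]))       # (2, 50, 576)
```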
-
* Name of dataset: IEMOCAP
* URL of dataset: https://sail.usc.edu/iemocap/iemocap_release.htm
* License of dataset: unknown
* Short description of dataset and use case(s):
The Interactive Emo…
-
* Name of dataset:
* URL of dataset:
* License of dataset:
* Short description of dataset and use case(s):
Folks who would also like to see this dataset in `tensorflow/datasets`, please thum…
-
Hi, I have a question. I know that the IEMOCAP dataset is multi-label, but if you look at the code, it seems that accuracy was measured with a single-label method. Maybe I misunderstood th…
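To make the distinction concrete: IEMOCAP annotators can assign more than one emotion to an utterance, and many codebases collapse these to a single majority label before scoring. A sketch of the two evaluation conventions, on synthetic data:

```python
import numpy as np

# Per-utterance label sets as annotated, and the collapsed majority labels.
gold_sets = [{"neu"}, {"ang", "fru"}, {"hap", "exc"}]
majority = ["neu", "ang", "hap"]
pred = ["neu", "fru", "exc"]

# Single-label: prediction must match the one majority label.   -> 1/3 here
single = np.mean([p == g for p, g in zip(pred, majority)])
# Set-match: prediction is correct if it appears anywhere in the
# annotated label set for that utterance.                        -> 3/3 here
multi = np.mean([p in s for p, s in zip(pred, gold_sets)])
print(f"single-label acc: {single:.2f}, set-match acc: {multi:.2f}")
```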
-
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc…
-
Hi,
I am currently working with the Codec-SUPERB dataset from the main branch and would like to obtain the test datasets for all downstream tasks. Specifically, I am looking to get the test dataset…