jaeyeonkim99 opened this issue 1 year ago
Hello! I am now using the CLAP model for my research, and the checkpoint from Hugging Face Transformers ("laion/clap-htsat-unfused") works best for me. However, unlike the models linked in this repository, I cannot find exactly which datasets were used for this checkpoint.
Can I get exact information about the datasets used for the Hugging Face pretrained models?
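For context, a minimal sketch of how the checkpoint in question is loaded through the standard Transformers CLAP API; the waveform below is a dummy placeholder, and the captions are arbitrary examples:

```python
import numpy as np
from transformers import ClapModel, ClapProcessor

# Load the unfused CLAP checkpoint discussed above.
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# 10 seconds of silence as a stand-in waveform; CLAP expects 48 kHz audio.
audio = np.zeros(48_000 * 10, dtype=np.float32)
texts = ["a dog barking", "an orchestra playing"]

inputs = processor(text=texts, audios=audio, sampling_rate=48_000,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_audio)  # audio-text similarity logits, shape (1, 2)
```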
Hi,
I think the unfused and fused models in Hugging Face Transformers come from these two checkpoints of ours:
They are the models we presented in the paper, while the other models (such as music, music+speech+...) come from continued training after the paper was published.
Best, Ke