Open · CJ416 opened this issue 2 months ago
Hi @CJ416, you can find the GPT-2 training-related code in this file: audioldm_train/modules/audiomae/sequence_gen/sequence_input.py
You might also need to modify the YAML config file so that the GPT-2 output is used as the LDM condition.
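In case it helps, here is a rough conceptual sketch of the idea, not the actual implementation in sequence_input.py; the class, parameter, and tensor names below are hypothetical. The gist: a GPT-2 backbone is trained to predict the AudioMAE token sequence (the LOA) given a conditioning embedding such as CLAP, and the sequence it produces is what would be handed to the LDM as its condition.

```python
# Conceptual sketch only (hypothetical names, not the repo's actual classes):
# a GPT-2 backbone regresses the next AudioMAE/LOA token given a conditioning
# prefix, and its output sequence can then serve as the LDM condition.
import torch
import torch.nn as nn
from transformers import GPT2Config, GPT2Model


class LOAGenerator(nn.Module):
    def __init__(self, cond_dim=512, loa_dim=768, hidden=768, n_layer=6):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, hidden)   # project CLAP/text embedding
        self.loa_proj = nn.Linear(loa_dim, hidden)     # project AudioMAE (LOA) tokens
        self.gpt2 = GPT2Model(GPT2Config(n_embd=hidden, n_layer=n_layer, n_head=12))
        self.out = nn.Linear(hidden, loa_dim)          # predict the next LOA token

    def forward(self, cond_emb, loa_tokens):
        # cond_emb: (B, cond_dim), loa_tokens: (B, T, loa_dim)
        prefix = self.cond_proj(cond_emb).unsqueeze(1)           # (B, 1, H)
        seq = torch.cat([prefix, self.loa_proj(loa_tokens)], 1)  # (B, 1+T, H)
        h = self.gpt2(inputs_embeds=seq).last_hidden_state
        return self.out(h[:, :-1])                               # (B, T, loa_dim)


# Training step: regress the GPT-2 predictions onto the ground-truth LOA.
model = LOAGenerator()
cond = torch.randn(2, 512)      # e.g. CLAP embedding of the caption
loa = torch.randn(2, 32, 768)   # AudioMAE features of the target audio
pred = model(cond, loa)
loss = nn.functional.mse_loss(pred, loa)
loss.backward()
# At inference, the generated sequence (rather than the ground-truth LOA)
# would be passed to the LDM as its conditioning signal.
```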
Wow~ Thanks for the reply~
Hello Haohe, I really appreciate your work, and thank you for your kindness in open-sourcing it. While studying the training code, I cannot find where GPT-2 is trained. In the original paper, embeddings of different modalities are fed into GPT-2, but the released code seems to directly use CLAP embeddings and FiLM to fuse the LOA. Or have I missed some details? I hope you can resolve my confusion!
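For reference, this is roughly what I understand by "FiLM to fuse the LOA": a global CLAP embedding predicts per-channel scale and shift that modulate the LOA features. The sketch below uses hypothetical names and is only my reading of the technique, not the actual code.

```python
# Conceptual FiLM sketch (hypothetical names, not the repo's actual modules):
# a global CLAP embedding produces per-channel gamma/beta that modulate
# the LOA feature sequence.
import torch
import torch.nn as nn


class FiLM(nn.Module):
    def __init__(self, cond_dim=512, feat_dim=768):
        super().__init__()
        # One linear layer predicts both gamma (scale) and beta (shift).
        self.to_scale_shift = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, features, cond_emb):
        # features: (B, T, feat_dim), cond_emb: (B, cond_dim)
        gamma, beta = self.to_scale_shift(cond_emb).chunk(2, dim=-1)
        return features * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)


film = FiLM()
loa = torch.randn(2, 32, 768)   # LOA token sequence
clap = torch.randn(2, 512)      # global CLAP embedding
fused = film(loa, clap)         # (2, 32, 768), CLAP-conditioned features
```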