IanZ2020 opened 2 weeks ago
I found that Uni-MoE v2 is not trained on audio understanding tasks and does not use the BEATs audio encoder. Is Uni-MoE v2 not designed for understanding general audio events, such as natural sounds?

Also, did Uni-MoE v1 train two separate MoE models for processing audio and speech, respectively? Is there any way to integrate Uni-MoE-Audio and Uni-MoE-Speech?

Thank you for your attention to our work. We are currently working on resolving this issue, and such features will be introduced in future versions.