declare-lab / MELD

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
GNU General Public License v3.0

Using pre-trained models with my own audio/video files. #16

Open monilshah98 opened 4 years ago

monilshah98 commented 4 years ago

Hello, I have downloaded the pre-trained models from the link provided in the repository, but baseline.py doesn't provide any way to use my own audio/video (.mp3/.mp4) files directly. The authors load pickle files instead.
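For what it's worth, the pickle-loading step could in principle be reproduced for your own data. Below is a minimal sketch, assuming the pickled features are a dict mapping utterance IDs (e.g. `dia0_utt0`, matching MELD's clip filenames) to fixed-length feature vectors; the exact key format, vector length, and dict layout are assumptions, so inspect the provided `.pkl` files to confirm before relying on this.

```python
import pickle
import random

# Assumed feature dimension -- check the shipped .pkl files for the real value.
FEATURE_DIM = 300

def save_features(features, path):
    """Pickle a {utterance_id: feature_vector} dict in the style baseline.py loads."""
    with open(path, "wb") as f:
        pickle.dump(features, f)

def load_features(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Stand-in features for two utterances of dialogue 0. These are random
# placeholders; a real pipeline would extract them from your .mp3/.mp4
# files with an audio toolkit such as openSMILE or librosa.
features = {
    "dia0_utt0": [random.random() for _ in range(FEATURE_DIM)],
    "dia0_utt1": [random.random() for _ in range(FEATURE_DIM)],
}

save_features(features, "my_audio_features.pkl")
loaded = load_features("my_audio_features.pkl")
print(len(loaded["dia0_utt0"]))  # 300
```

Once a pickle with the same structure as the authors' files exists, it could be pointed to from baseline.py in place of the originals, though the surrounding code may still assume MELD's specific dialogue/utterance numbering.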

Could somebody share a script so I can use the model with my own audio/video files?

Any help would be really appreciated.

She-yh commented 1 year ago

@monilshah98 Hello, did you find an answer? I have the same problem.

monilshah98 commented 1 year ago

Hello She-yh,

Hope you are doing well.

I'm not working on that project anymore.

But to answer your question: no, I did not find a working solution to the problem you're referring to.

Best Regards,
Monil