-
Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis
Xinyu Feng, Yuming Lin, Lihua He, You Li, Liang Chang, Ya Zhou
-
Hello, I am very interested in your Knowledge-Guided work. Your EMNLP 2024 paper, "Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis. Xinyu Feng, Yum…
-
### Proposal summary
## Feature Request
Enable Opik to display additional media formats, including audio, PDF, and video.
## Background
Opik currently supports only image display, which li…
-
Hello there! I'm currently trying to use emotion2vec for sentiment analysis tasks, and I appreciate your work. After reading the related papers and documentation, I noticed that you have provided instruc…
-
Thank you for your work on multimodal prompt learning for missing modalities.
I have a video dataset that is not for sentiment analysis or emotion recognition, but I want to use your architecture f…
-
Please tell me where the code for multimodal sentiment analysis is. Thank you!
-
### Author Pages
https://aclanthology.org/people/h/hui-chen/
### Type of Author Metadata Correction
- [X] The author page wrongly conflates different people with the same name.
- [ ] This author ha…
-
Hi Mai, your work, Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis, is excellent. I have some questions about it. Could you please share the source code?
-
Hello,
Can you share how you extract audio features in the work "Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis"? I have no idea how to extract 100-dimensional s…
-
Could you tell me exactly which Python version you used to run this project?
I am facing compatibility issues on both Python 3.8 and Python 3.9 when I install the necessary dependencies us…