Closed crinoiddream closed 3 months ago
Thank you for your interest.
1. For expense-reimbursement reasons, we suspended the Hugging Face demo service this morning (August 29). We will keep the Hugging Face demo running again until September 1, 2024, during which time you can continue to try the Emotion-LLaMA online demo. The link is as follows:
2. Once the code has been cleaned up, we will also open-source the local demo code.
Hi, the Hugging Face demo link has not been reachable recently. Can the existing code be deployed locally? Thanks for your answer.
We plan to reopen the Hugging Face demo in October. If you want to deploy locally, please refer to the code on Hugging Face; it supports local deployment.
Thank you very much for your patient answers; I have now reproduced your work.
However, I ran into some problems with training and the demo:
1. Could you share the dataset files corresponding to relative_train_NCEV.txt (similar to the first_frames, HL-UTT, etc. files for test_NCEV)? A related question: you mentioned using precomputed encoder features to save GPU resources, so why does the code also load image data?
2. Local demo deployment keeps failing; do you have any suggestions? The command I run is `CUDA_VISIBLE_DEVICES=# torchrun --nproc_per_node 1 app.py --cfg-path eval_configs/demo.yaml`.
3. When might your latest 2024 work be released? If possible, could it also support Chinese? Thanks!
1. Emotion-LLaMA has three visual encoders. The Local Encoder and Temporal Encoder use MAE and VideoMAE respectively; they need 16 input frames to extract expression features, which takes a lot of space, so we extract those features in advance. The Global Encoder uses an EVA model: from a single input image it can extract a feature with global information. Concretely, for simplicity of implementation we kept the Global Encoder, so a single image still has to be fed in separately.
2. I have run into this demo deployment problem before; it was caused by an unsuitable `gradio` version. We recommend version 3.47.1.
3. The latest work will have to wait until we are through this busy period (roughly the end of October) before we can clean up and open-source the code. We are not currently considering Chinese support.
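The precompute-then-load pattern described in point 1 can be sketched as follows (a minimal illustration, assuming a generic `encoder` callable and `.npy` feature files; the repo's actual extraction scripts may differ):

```python
import os
import numpy as np

def cache_features(frames, encoder, out_path):
    """Run the expensive 16-frame encoder (e.g. MAE/VideoMAE) once and
    cache the result, so training only loads small arrays from disk."""
    feat = encoder(frames)  # output shape depends on the encoder
    np.save(out_path, feat)
    return feat

def load_features(frames, encoder, out_path):
    """Load cached features if present; otherwise compute and cache them."""
    if os.path.exists(out_path):
        return np.load(out_path)
    return cache_features(frames, encoder, out_path)
```

Only the lightweight Global (EVA) forward pass then needs to run per sample at training time, on a single image.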
Thank you very much! Could you share the dataset corresponding to relative_train_NCEV.txt (similar to the first_frames, HL-UTT, etc. files for test_NCEV)? 🌟🌟💐
I have uploaded all the features to Google Drive:
https://drive.google.com/drive/folders/1DqGSBgpRo7TuGNqMJo9BYg6smJE20MG4?usp=sharing
Thanks a lot for sharing! The earlier demo problem turned out to be a missing file in the gradio package. Now I have a few more questions ✨✨✨:
1. When running inference, the demo errors at line 247 of conversation.py as shown below; I am not sure whether one of the models is loaded incorrectly. The hubert_model I use is [huggingface.co/TencentGameMate/chinese-hubert-large]
2. On the data side: the paper says the MERR dataset contains 28,618 coarse samples and 4,487 fine-grained samples, but train_NCEV and test3_NVEV have 3,373 and 834 entries respectively, and MERR and NCEV seem to be independent. How are the data used in training and inference? Also, eval_emotion.py does not seem to support "reason"; it errors, complaining that the MERR and NCEV data do not match.
3. On training: are Stage 1 (Pretraining) and Stage 2 (Multimodal Instruction Tuning) executed separately? The code seems to load both coarse and fine-grained data at the same time. If they run separately, is that configured via model_type in train_configs? Do the two stages each have their own checkpoint?
4. Could you share the 3,373 first_face files corresponding to train?
1. Are you deploying the code from Hugging Face locally? Judging from the output, the model loads correctly. I don't recall this error; could you provide more of the error message?
2. train_NCEV and test3_NVEV are the training and test splits of the MER2023 dataset and have no direct connection with the MERR dataset. The paper mainly covers two tasks, "emotion" and "reason". eval_emotion.py lets you measure Emotion-LLaMA's accuracy on the "emotion" recognition task; it does not include code for the "reason" part. We deployed a demo on Hugging Face where you can try Emotion-LLaMA's "reason" emotion-reasoning ability.
3. The two stages are executed separately. Although the code loads both the coarse and fine-grained files, the samples that actually take part in training are specified in minigpt4/configs/datasets/firstface/featureface.yaml.
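The selection mechanism described above can be illustrated roughly like this (a hypothetical sketch: the helper name and the "id plus label per line" annotation format are assumptions for illustration, not the repo's actual code):

```python
def filter_by_annotation(all_samples, ann_path):
    """Keep only the samples whose ids appear in the annotation file that
    the dataset yaml (e.g. featureface.yaml) points at; anything else that
    was loaded is simply never used for training."""
    with open(ann_path) as f:
        keep = {line.split()[0] for line in f if line.strip()}
    return [s for s in all_samples if s["id"] in keep]
```

This is why loading both coarse and fine-grained files does not by itself determine which stage trains on what; the yaml-referenced list does.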
4. This touches on dataset licensing; I will ask the MER2023 dataset officials whether sharing is allowed. In the meantime, you can apply for access to the MER2023 dataset and refer to the related open-source projects to extract features for the different modalities.
1. Yes, I am deploying the [code] from Hugging Face locally. The error is raised in the model_generate function of Emotion-LLaMA-demo/minigpt4/conversation/conversation.py. At that point `*args` is `()`, `kwargs['input_embeds'].shape` is `[1, 308, 4096]`, and `self.model.llama_model` is MiniGPTv2. The full error is:

```
cuBLAS API failed with status 15
error detectedA: torch.Size([308, 4096]), B: torch.Size([4096, 4096]), C: (308, 4096); (lda, ldb, ldc): (c_int(9856), c_int(131072), c_int(9856)); (m, n, k): (c_int(308), c_int(4096), c_int(4096))
Exception in thread Thread-13:
Traceback (most recent call last):
  File "/data/anaconda3/envs/emo_test/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/data/anaconda3/envs/emo_test/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/data/crinoid/Emotion-LLaMA-demo/minigpt4/conversation/conversation.py", line 247, in model_generate
    output = self.model.llama_model.generate(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/peft/peft_model.py", line 580, in generate
    return self.base_model.generate(**kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/generation/utils.py", line 1572, in generate
    return self.sample(
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/generation/utils.py", line 2619, in sample
    outputs = self(
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 688, in forward
    outputs = self.model(
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 578, in forward
    layer_outputs = decoder_layer(
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 292, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 194, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/peft/tuners/lora.py", line 502, in forward
    result = super().forward(x)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 242, in forward
    out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
    return MatMul8bitLt.apply(A, B, out, bias, state)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 377, in forward
    out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
  File "/data/anaconda3/envs/emo_test/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1410, in igemmlt
    raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!
```

3. featureface.yaml loads train_NCEV, which contains no MERR data at all; I don't quite understand how MERR participates in training. Could you clarify?
4. Thanks a lot. My email is lixi2021@ia.ac.cn, and I have already applied for MER2023. The emotion reasoning in this work is great, and I would like to study the training process carefully. Thanks again!
https://github.com/oobabooga/text-generation-webui/issues/379
Hello, we have uploaded the local demo code and configuration files. Please re-download the following files:
https://github.com/ZebangCheng/Emotion-LLaMA/blob/main/eval_configs/demo.yaml => Emotion-LLaMA/eval_configs/demo.yaml
https://github.com/ZebangCheng/Emotion-LLaMA/blob/main/app.py => Emotion-LLaMA/app.py
https://github.com/ZebangCheng/Emotion-LLaMA/blob/main/minigpt4/conversation/conversation.py => Emotion-LLaMA/minigpt4/conversation/conversation.py
https://github.com/ZebangCheng/Emotion-LLaMA/tree/main/examples => Emotion-LLaMA/examples
Then follow the steps in the README to run the demo locally. We have run it successfully on our machine; we are not sure whether the upload is complete, and look forward to your feedback.
You are awesome 👍👍, thank you very much! Besides conversation.py, some other files under minigpt4 (e.g. models and datasets) differ slightly from the [code] on Hugging Face, but that does not affect the demo.
1. About the earlier demo error: if the shapes are fine, it is probably a bitsandbytes incompatibility on H800/H100. Is there a way to bypass bitsandbytes?
2. Since I have not prepared the first_face data (is it the first frame of each MER2023 video?), training does not run yet. Does training also need the bitsandbytes library?
1. Try setting low_resource to False in the yaml configuration file under eval_configs; that may help.
2. first_face is the first frame of each video in the MER2023 dataset. The training code should not involve the bitsandbytes library.
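For context on why this flag may help: in MiniGPT-4-style codebases, low_resource typically switches the LLaMA weights to 8-bit loading through bitsandbytes, whose int8 cublasLt kernels are what fail on some GPUs. A hedged sketch of that mapping (the function name and kwargs are illustrative assumptions, not the repo's exact code):

```python
def llama_loading_kwargs(low_resource: bool) -> dict:
    """Map the eval_configs yaml's low_resource flag to plausible
    from_pretrained() options (illustrative; check the model code)."""
    if low_resource:
        # 8-bit path: pulls in bitsandbytes' int8 matmul kernels,
        # the source of the "cublasLt ran into an error!" exception.
        return {"load_in_8bit": True, "device_map": "auto"}
    # Full-precision path: bitsandbytes is not on the inference path.
    return {"torch_dtype": "float16"}
```

With low_resource set to False, the quantized matmul is never invoked, which is why it can sidestep the H800/H100 failure at the cost of more GPU memory.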
👍👍👍
Thanks for your great work! The Hugging Face demo link has been unreachable recently. I saw your new work [SZTU-CMU at MER2024]; will the demo and model be updated?