Closed · songxujay closed this issue 6 months ago
I found the problem: I hadn't followed your instruction "After installing the Mamba library, replace the mamba_simple.py file in the installation directory with the ./mamba_simple.py in this repository. The implementation of the Multi-modal Mamba Block (M3 Block) is located in this file." After performing this step, the problem was gone.
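For anyone hitting the same issue, here is a minimal sketch of that replacement step, assuming the repository's ./mamba_simple.py sits in the current working directory (the install location is taken from the imported package itself):

```python
# Minimal sketch: overwrite the installed mamba_simple.py with the copy shipped
# in the MambaDFuse repository. "./mamba_simple.py" is assumed to be the file
# from the repo root; adjust the path to wherever you cloned it.
import shutil
import mamba_ssm.modules.mamba_simple as installed_mamba_simple

target = installed_mamba_simple.__file__  # .../site-packages/mamba_ssm/modules/mamba_simple.py
shutil.copy("./mamba_simple.py", target)
print("Replaced", target)
```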
Hi, what do the test output images look like? Mine seem to come out all black.
I only ran the infrared-visible fusion; my result looks the same as results/MambaDFuse_VIF/00036N.png, and it is not all black.
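If you want to check whether a saved result is genuinely all black rather than just dark, here is a quick sanity check using Pillow and NumPy (the file below is the example output mentioned above):

```python
# Sanity check: an all-black image has (near-)zero pixel values everywhere.
import numpy as np
from PIL import Image

img = np.array(Image.open("results/MambaDFuse_VIF/00036N.png"))
print("min:", img.min(), "max:", img.max(), "mean:", float(img.mean()))
# max == 0 (or very close to it) indicates an all-black output.
```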
OK, thanks. I got it running and my result is the same.
I'd like to ask: after setting up the environment, if import mamba_ssm raises no error, does that mean the installation succeeded? I then replaced mamba_simple.py in the installation directory with the one provided by the authors and ran the test, but it failed with:
File "/root/miniconda3/lib/python3.8/site-packages/mamba_ssm/modules/mamba_simple.py", line 383, in forward
out = mamba_inner_fn(
TypeError: 'NoneType' object is not callable
I did not modify the data or the pretrained model; the only change was setting gpu 0 1 2 in the test code to 0 (I only have one GPU).
@SadInSummer Hi, please refer to this link: https://github.com/Lizhe1228/MambaDFuse/issues/4#issuecomment-2119812909
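For anyone else hitting this: one possible cause, and this is an assumption rather than the confirmed fix from the linked comment, is that the fused CUDA ops failed to import, leaving mamba_inner_fn set to None. A quick way to check whether the fused path is available:

```python
# Hedged diagnostic: if the compiled extensions below fail to import, some
# versions of mamba_simple.py silently fall back to mamba_inner_fn = None,
# which then raises "'NoneType' object is not callable" in forward().
try:
    import selective_scan_cuda  # CUDA extension built when installing mamba_ssm
    import causal_conv1d        # optional dependency used by the fused path
    from mamba_ssm.ops.selective_scan_interface import mamba_inner_fn
    print("mamba_inner_fn:", mamba_inner_fn)  # should be a callable, not None
except ImportError as exc:
    print("Fused kernels unavailable; consider rebuilding mamba_ssm / causal_conv1d:", exc)
```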
OK, it works now, thanks.
Hello, the images I get are not RGB, but the figures in the paper are in color. How should I change this?
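Not an official answer, but Y-channel fusion pipelines (note the VI_Y directory name used elsewhere in this thread) typically fuse only the luminance channel; a common way to get a color result is to merge the fused Y with the Cb/Cr channels of the original visible image and convert back to RGB. A sketch of that post-processing step, with hypothetical file names:

```python
# Hedged sketch (not necessarily the authors' pipeline): combine a fused Y channel
# with the chroma of the original visible image to obtain a color result.
# All file names below are hypothetical placeholders.
from PIL import Image

fused_y = Image.open("results/fused_Y.png").convert("L")   # fused luminance
visible = Image.open("visible_rgb.png").convert("YCbCr")   # original visible image
_, cb, cr = visible.split()                                 # keep its chroma channels
fused_color = Image.merge("YCbCr", (fused_y, cb, cr)).convert("RGB")
fused_color.save("fused_color.png")
```

Both images need the same resolution for the merge to work.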
Hi, I ran into the same problem: the test results are all black. Do you remember how you solved it?
Hello authors,
When I run python test_MambaDFuse.py --model_path=./Model/Infrared_Visible_Fusion/Infrared_Visible_Fusion/models/ --iter_number=10000 --dataset=VIR --A_dir=IR --B_dir=VI_Y,
there is an error: File "test_MambaDFuse.py", line 93, in define_model model.load_state_dict(pretrained_model[param_key_g] if param_key_g in pretrained_model.keys() else pretrained_model, strict=True) RuntimeError: Error(s) in loading state_dict for MambaDFuse: Unexpected key(s) in state_dict: "M3_block1.multi_modal_mamba_block.A_b_log", "M3_block1.multi_modal_mamba_block.D_b", "M3_block1.multi_modal_mamba_block.D_c", "M3_block1.multi_modal_mamba_block.conv1d_b.weight", "M3_block1.multi_modal_mamba_block.conv1d_b.bias", "M3_block1.multi_modal_mamba_block.conv1d_c.weight", "M3_block1.multi_modal_mamba_block.conv1d_c.bias", "M3_block1.multi_modal_mamba_block.x_proj_b.weight", "M3_block1.multi_modal_mamba_block.x_proj_c.weight", "M3_block1.multi_modal_mamba_block.dt_proj_b.weight", "M3_block1.multi_modal_mamba_block.dt_proj_b.bias", "M3_block1.multi_modal_mamba_block.dt_proj_c.weight", "M3_block1.multi_modal_mamba_block.dt_proj_c.bias", "M3_block1.multi_modal_mamba_block.in_proj_extra1.weight", "M3_block1.multi_modal_mamba_block.in_proj_extra2.weight", "M3_block2.multi_modal_mamba_block.A_b_log", "M3_block2.multi_modal_mamba_block.D_b", "M3_block2.multi_modal_mamba_block.D_c", "M3_block2.multi_modal_mamba_block.conv1d_b.weight", "M3_block2.multi_modal_mamba_block.conv1d_b.bias", "M3_block2.multi_modal_mamba_block.conv1d_c.weight", "M3_block2.multi_modal_mamba_block.conv1d_c.bias", "M3_block2.multi_modal_mamba_block.x_proj_b.weight", "M3_block2.multi_modal_mamba_block.x_proj_c.weight", "M3_block2.multi_modal_mamba_block.dt_proj_b.weight", "M3_block2.multi_modal_mamba_block.dt_proj_b.bias", "M3_block2.multi_modal_mamba_block.dt_proj_c.weight", "M3_block2.multi_modal_mamba_block.dt_proj_c.bias", "M3_block2.multi_modal_mamba_block.in_proj_extra1.weight", "M3_block2.multi_modal_mamba_block.in_proj_extra2.weight", "M3_block3.multi_modal_mamba_block.A_b_log", "M3_block3.multi_modal_mamba_block.D_b", "M3_block3.multi_modal_mamba_block.D_c", "M3_block3.multi_modal_mamba_block.conv1d_b.weight", "M3_block3.multi_modal_mamba_block.conv1d_b.bias", "M3_block3.multi_modal_mamba_block.conv1d_c.weight", "M3_block3.multi_modal_mamba_block.conv1d_c.bias", "M3_block3.multi_modal_mamba_block.x_proj_b.weight", "M3_block3.multi_modal_mamba_block.x_proj_c.weight", "M3_block3.multi_modal_mamba_block.dt_proj_b.weight", "M3_block3.multi_modal_mamba_block.dt_proj_b.bias", "M3_block3.multi_modal_mamba_block.dt_proj_c.weight", "M3_block3.multi_modal_mamba_block.dt_proj_c.bias", "M3_block3.multi_modal_mamba_block.in_proj_extra1.weight", "M3_block3.multi_modal_mamba_block.in_proj_extra2.weight", "M3_block4.multi_modal_mamba_block.A_b_log", "M3_block4.multi_modal_mamba_block.D_b", "M3_block4.multi_modal_mamba_block.D_c", "M3_block4.multi_modal_mamba_block.conv1d_b.weight", "M3_block4.multi_modal_mamba_block.conv1d_b.bias", "M3_block4.multi_modal_mamba_block.conv1d_c.weight", "M3_block4.multi_modal_mamba_block.conv1d_c.bias", "M3_block4.multi_modal_mamba_block.x_proj_b.weight", "M3_block4.multi_modal_mamba_block.x_proj_c.weight", "M3_block4.multi_modal_mamba_block.dt_proj_b.weight", "M3_block4.multi_modal_mamba_block.dt_proj_b.bias", "M3_block4.multi_modal_mamba_block.dt_proj_c.weight", "M3_block4.multi_modal_mamba_block.dt_proj_c.bias", "M3_block4.multi_modal_mamba_block.in_proj_extra1.weight", "M3_block4.multi_modal_mamba_block.in_proj_extra2.weight", "M3_block5.multi_modal_mamba_block.A_b_log", "M3_block5.multi_modal_mamba_block.D_b", "M3_block5.multi_modal_mamba_block.D_c", 
"M3_block5.multi_modal_mamba_block.conv1d_b.weight", "M3_block5.multi_modal_mamba_block.conv1d_b.bias", "M3_block5.multi_modal_mamba_block.conv1d_c.weight", "M3_block5.multi_modal_mamba_block.conv1d_c.bias", "M3_block5.multi_modal_mamba_block.x_proj_b.weight", "M3_block5.multi_modal_mamba_block.x_proj_c.weight", "M3_block5.multi_modal_mamba_block.dt_proj_b.weight", "M3_block5.multi_modal_mamba_block.dt_proj_b.bias", "M3_block5.multi_modal_mamba_block.dt_proj_c.weight", "M3_block5.multi_modal_mamba_block.dt_proj_c.bias", "M3_block5.multi_modal_mamba_block.in_proj_extra1.weight", "M3_block5.multi_modal_mamba_block.in_proj_extra2.weight".
Could you help me with this? Thank you very much.
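Given the resolution posted at the top of this thread, those unexpected M3_block*.multi_modal_mamba_block.* keys suggest the model was built from the stock mamba_simple.py rather than the one shipped in this repository, so the checkpoint's M3 Block parameters have nowhere to load into. A small diagnostic sketch (the checkpoint path is a hypothetical placeholder inside the --model_path directory from the command above):

```python
# Hedged diagnostic: confirm that the checkpoint contains the M3 Block tensors.
# If the instantiated model's state_dict() lacks these keys, the installed
# mamba_simple.py most likely has not been replaced with the repository's version.
import torch

state = torch.load("path/to/checkpoint.pth", map_location="cpu")  # placeholder path
if isinstance(state, dict) and "params" in state:  # some checkpoints nest weights under a key
    state = state["params"]

m3_keys = [k for k in state if "multi_modal_mamba_block" in k]
print(len(m3_keys), "M3 Block parameter tensors found in the checkpoint")
```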