Caoxuheng / HIFtool

A toolbox for HSI-MSI fusion/pan-sharpening, including MoGDCN, Fusformer, PSRT, MSST, DCTransformer, iDaFormer, HySure, HyMS, DBSR, UDALN, uHNTC, ZSL, and pretrained weights
MIT License

case_lst = ['model', 'unsupervised', 'supervised']; case = case_lst[1]; Method = 'FeafusFormer'; dataset_name = 'chikusei' #8

wiliankaien opened this issue 1 week ago

wiliankaien commented 1 week ago

GT,LRHSI,HRMSI = np.array(hrhsi["lms"]).T, np.array(hrhsi["ms"]).T, np.array(hrhsi["pan"]).T

Traceback (most recent call last):
  File "/root/autodl-tmp/project/Network_training.py", line 46, in <module>
    Fusion(model, model_folder=model_folder, blind=True, mat_save_path=mat_save_path, dataset_name=None, srf=None)
  File "/root/autodl-tmp/project/fusion_mode.py", line 23, in Unsupervisedfusion
    GT, LRHSI, HRMSI = np.array(hrhsi["lms"]).T, np.array(hrhsi["ms"]).T, np.array(hrhsi["pan"]).T  # extract the same data from the .h5 file; it first has to be converted to a NumPy array and transposed (.T) to match possible dimension differences
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/root/miniconda3/lib/python3.8/site-packages/h5py/_hl/group.py", line 357, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 241, in h5py.h5o.open
KeyError: "Unable to synchronously open object (object 'lms' doesn't exist)"
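
A quick way to see which datasets the simulated .h5/.mat file actually contains before indexing it (a minimal sketch; the file path is hypothetical and should be replaced with the file passed to Fusion()):

```python
import h5py

# Hypothetical path; substitute the simulated dataset file you are loading.
with h5py.File("Dataloader_tool/chikusei.h5", "r") as f:
    # The KeyError above means 'lms' is not among these keys, i.e. the simulated
    # file does not use the same key names as the real-world datasets.
    print(list(f.keys()))
```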

Caoxuheng commented 1 week ago

Thank you for your feedback! I’ve resolved the issue. Originally, blind fusion was designed for real-world datasets, but it can now also be tested with simulated datasets. When using it, please ensure you correctly configure the hsi_channel, msi_channel, and sf parameters in the config, as well as the sp_range, which defines the spectral response function's coverage range.
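
For reference, a minimal sketch of what those settings might look like for a 128-band Chikusei / 4-band MSI simulation, assuming an argparse-style config like Networks/FeafusFormer/config.py (the exact flag names and defaults in the repo may differ; the Chikusei sp_range values are the ones given later in this thread):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--hsi_channel', type=int, default=128)  # number of HSI bands
parser.add_argument('--msi_channel', type=int, default=4)    # number of MSI bands
parser.add_argument('--sf', type=int, default=32)            # spatial scale factor between HSI and MSI
opt = parser.parse_args([])

# sp_range: which HSI bands each MSI channel's spectral response covers.
sp_range = [list(range(30)), list(range(13, 50)),
            list(range(41, 84)), list(range(68, 128))]
```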

I’d also like to remind you that if you want to reproduce the results from uHNTC, you can find the code here.

wiliankaien commented 5 days ago

Under the same settings (hsi_channel=128, msi_channel=4, sf=32, dataset_name=Chikusei) an error is raised. In Networks/__init__.py:

elif 'FeafusFormer' in method:
    from .FeafusFormer.net import Feafusformer
    from .FeafusFormer.config import opt
    sp_range = np.array([range(4)])
    model = Feafusformer(opt, sp_range, device)

Traceback (most recent call last):
  File "/root/autodl-tmp/project/Network_training.py", line 48, in <module>
    Fusion(model, model_folder=model_folder, blind=False, mat_save_path=mat_save_path, dataset_name=None, srf=sio.loadmat('Dataloader_tool/srflib/chikusei_128_4.mat')['R'])
  File "/root/autodl-tmp/project/fusion_mode.py", line 56, in Unsupervisedfusion
    Re = model(LRHSI, HRMSI)
  File "/root/autodl-tmp/project/Networks/FeafusFormer/net.py", line 76, in __call__
    Couple_init(SpaDNet, SpeDNet, MSI, HSI)
  File "/root/autodl-tmp/project/Networks/FeafusFormer/net.py", line 35, in Couple_init
    loss = L1(lrmsi, torch.bmm(pre_phi[:, :, :, 0], c.view(1, k, -1)).view(1, msi.shape[1], Height, Weight)) + L1(lrmsi, spe(hsi))
RuntimeError: shape '[1, 4, 8, 8]' is invalid for input of size 64

In Networks.FeafusFormer.model.net:

def Couple_init(spa, spe, msi, hsi, k=3):
    pre_phi = spe(phi)
    lrmsi = spa(msi)
    loss = L1(lrmsi, torch.bmm(pre_phi[:, :, :, 0], c.view(1, k, -1)).view(1, msi.shape[1], Height, Weight)) + L1(lrmsi, spe(hsi))

For pre_phi = spe(phi): phi = [1, 128, 3, 1] and pre_phi = [1, 1, 3, 1]. For spe(hsi): hsi = [1, 128, 8, 8] and spe(hsi) = [1, 1, 8, 8].

Actual shapes: lrmsi = [1, 4, 8, 8], torch.bmm(pre_phi[:, :, :, 0], c.view(1, k, -1)).view(1, msi.shape[1], Height, Weight) = [1, 4, 4, 4], spe(hsi) = [1, 1, 8, 8]. The correct shapes should be lrmsi = [1, 4, 8, 8], the bmm/view result = [1, 4, 8, 8], and spe(hsi) = [1, 4, 8, 8].
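
A standalone sketch of the shape arithmetic (my own illustration, not code from the repo): if the spectral degradation net outputs only 1 channel instead of msi_channel = 4, the bmm result has 1 × 1 × 64 = 64 elements, so view(1, 4, 8, 8), which needs 256 elements, fails exactly as in the traceback.

```python
import torch

k, Height, Weight, msi_channels = 3, 8, 8, 4
c = torch.randn(1, k, Height * Weight)            # plays the role of c.view(1, k, -1): [1, 3, 64]

# Broken case: spe() produced a single output channel, so pre_phi is [1, 1, 3, 1].
pre_phi_bad = torch.randn(1, 1, k, 1)
out_bad = torch.bmm(pre_phi_bad[:, :, :, 0], c)   # [1, 1, 64] -> only 64 elements
# out_bad.view(1, msi_channels, Height, Weight)   # would raise: invalid for input of size 64

# Expected case: spe() outputs msi_channels channels, so pre_phi is [1, 4, 3, 1].
pre_phi_ok = torch.randn(1, msi_channels, k, 1)
out_ok = torch.bmm(pre_phi_ok[:, :, :, 0], c)     # [1, 4, 64] -> 256 elements
print(out_ok.view(1, msi_channels, Height, Weight).shape)  # torch.Size([1, 4, 8, 8])
```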

Thank you very much for your reply! While running the code I also ran into the other problem described above. Please take a look when you have time, thank you!

Caoxuheng commented 5 days ago

This happens because sp_range is not configured correctly; it should be sp_range = [list(range(30)), list(range(13, 50)), list(range(41, 84)), list(range(68, 128))]. This parameter specifies the band coverage range of the spectral response. In the file from the last update, line 52 holds the Chikusei setting; the earlier range(4) value was for WorldView.
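
If it helps, sp_range can also be read off from the SRF matrix already used in this thread (Dataloader_tool/srflib/chikusei_128_4.mat, key 'R'). A minimal sketch, assuming R has shape (msi_channel, hsi_channel) = (4, 128):

```python
import numpy as np
import scipy.io as sio

# Assumption: each row of R is one MSI channel's spectral response over the 128 HSI bands.
R = sio.loadmat('Dataloader_tool/srflib/chikusei_128_4.mat')['R']

# For each MSI channel, keep the HSI band indices with non-zero response.
sp_range = [list(np.nonzero(R[i])[0]) for i in range(R.shape[0])]
print(sp_range)  # should roughly match the ranges listed above
```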

wiliankaien commented 3 days ago

In the earlier package I changed Networks/__init__.py line 52 from sp_range = np.array([range(4)]) to sp_range = [list(range(30)), list(range(13, 50)), list(range(41, 84)), list(range(68, 128))].

Under the same settings (hsi_channel=128, msi_channel=4, sf=32, data_name=Chikusei), with Networks.FeafusFormer.model.config line 9 set to parser.add_argument('--pre_epoch', type=int, default=300, help=''), running Network_training.py gives:

Fusion Mode: unsupervised
Initialize Spectral Degradation Net Successfully. Epoch:499 lr:3.49e-03 PSNR:-0.01
Initialize Spatial Degradation Net Successfully. Epoch:499 lr:4.88e-07 PSNR:66.63
0 36.93536759115003 9.574825143681784 1.7931718584852887 0.9395353110498791 0.017333160013469334 0.8667411905520769
Process finished with exit code 0

The model finishes after only one round?

With the latest package I hit an error in Networks/FeafusFormer/Model/spe_down.py, at line 105 (layer = self.act(self.conv_lst[idx](...))) and line 160 (pre_msi = module(hsi_1)):

Fusion Mode: unsupervised
/root/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
Traceback (most recent call last):
  File "/root/autodl-tmp/project/Network_training.py", line 46, in <module>
    Fusion(model, model_folder=model_folder, blind=True, mat_save_path=mat_save_path, opt=opt, dataset_name=dataset_name, srf=None)
  File "/root/autodl-tmp/project/fusion_mode.py", line 39, in Unsupervisedfusion
    Re = model(LRHSI, HRMSI)
  File "/root/autodl-tmp/project/Networks/FeafusFormer/net.py", line 73, in __call__
    initialize_SpeDNet(module=SpeDNet, msi=MSI, hsi=HSI, sf=self.opt.sf)
  File "/root/autodl-tmp/project/Networks/FeafusFormer/Model/spe_down.py", line 194, in initialize_SpeDNet
    pre_msi = module(hsi_1)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/project/Networks/FeafusFormer/Model/spe_down.py", line 105, in forward
    layer = self.act(self.conv_lst[idx](...))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 301, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 297, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Expected 3-dimensional input for 3-dimensional weight[1, 1, 3], but got 2-dimensional input of size [1, 30] instead
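
For context on the final error (my own minimal reproduction, not repo code): torch.nn.Conv1d expects a 3-D input of shape (batch, channels, length), so the 2-D tensor of shape [1, 30] reaching it in spe_down.py trips this check, while inserting a channel dimension satisfies the layer. This only illustrates the dimensionality requirement, not necessarily where the fix belongs in the repo.

```python
import torch
import torch.nn as nn

# A Conv1d whose weight has shape [1, 1, 3], matching the error message above.
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, padding=1)

x = torch.randn(1, 30)        # 2-D tensor like the one reaching conv1d in spe_down.py
# Under the PyTorch version in the traceback, conv(x) fails because conv1d wants
# (batch, channels, length). Adding a channel dimension gives a valid input:
x_3d = x.unsqueeze(1)         # -> (1, 1, 30)
print(conv(x_3d).shape)       # torch.Size([1, 1, 30])
```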

Thank you very much for your reply! I ran into these new problems while running the code; looking forward to your reply, thank you!

Caoxuheng commented 2 days ago

1. The number of training epochs can be adjusted in the config.
2. I'll test the latter problem; if I can reproduce it, I'll push an update soon.
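
On the first point, a minimal sketch of where the epoch-related defaults could be raised, assuming the argparse-style config quoted above (Networks/FeafusFormer/config.py; apart from --pre_epoch, the option name below is my assumption and should be checked against the actual config.py):

```python
import argparse

parser = argparse.ArgumentParser()
# --pre_epoch is the option at line 9 of the config quoted above; it controls how long
# the spectral/spatial degradation networks are pre-initialized.
parser.add_argument('--pre_epoch', type=int, default=300, help='epochs for degradation-net initialization')
# Hypothetical name for the main fusion-stage epoch count; look up the real flag in config.py.
parser.add_argument('--max_epoch', type=int, default=2000, help='epochs for the fusion stage')
opt = parser.parse_args([])
print(opt.pre_epoch, opt.max_epoch)
```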