lseventeen / PHTrans

[MICCAI2022] PHTrans: Parallelly Aggregating Global and Local Representations for Medical Image Segmentation
Apache License 2.0

Dataset layout and preprocessing #7

Open Susu0812 opened 1 year ago

Susu0812 commented 1 year ago

Hello, the README does not explain in much detail how the dataset folders should be organized or what the preprocessing steps are, which makes it difficult for me to reproduce your code (I have previously reproduced part of nnFormer's training). Also, is the BCV dataset you provide inconsistent with the Synapse multi-organ segmentation data used in the code? I already had this question while reading the paper and hope you can clarify. Thank you.

lseventeen commented 1 year ago

Data preprocessing uses nnUNet's default pipeline, and the data split follows nnFormer's split (which in turn comes from TransUNet's split). See https://github.com/lseventeen/PHTrans/issues/5 for details.
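For reference, a minimal sketch (not the PHTrans or nnFormer script) of how such a fixed split could be written into nnUNet v1's splits_final.pkl after preprocessing; the task folder name and case identifiers below are placeholders, and the actual split should be taken from issue #5:

```python
# Minimal sketch: store a predefined train/val split in nnUNet v1's splits_final.pkl
# so training and validation use that split instead of a random 5-fold split.
# The task folder name and case identifiers are placeholders, not the real split.
import pickle
from collections import OrderedDict

preprocessed_task_dir = "nnUNet_preprocessed/Task002_Synapse"  # assumed task name
train_ids = ["case0005", "case0006"]  # placeholder training case identifiers
val_ids = ["case0001", "case0002"]    # placeholder held-out case identifiers

splits = [OrderedDict([("train", train_ids), ("val", val_ids)])]  # one entry per fold
with open(f"{preprocessed_task_dir}/splits_final.pkl", "wb") as f:
    pickle.dump(splits, f)
```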

Susu0812 commented 1 year ago

Traceback (most recent call last):
  File "/home/zlj/anaconda3/envs/sjy-PHTrans/bin/PHTrans_train", line 33, in <module>
    sys.exit(load_entry_point('phtrans', 'console_scripts', 'PHTrans_train')())
  File "/home/zlj/workspace/sjy/PHTrans/PHTrans/phtrans/run/run_training.py", line 182, in main
    trainer.validate(save_softmax=args.npz, validation_folder_name=val_folder,
  File "/home/zlj/workspace/sjy/PHTrans/PHTrans/phtrans/training/PHTransTrainer.py", line 274, in validate
    ret = super().validate(do_mirroring=do_mirroring, use_sliding_window=use_sliding_window, step_size=step_size,
  File "/home/zlj/workspace/sjy/PHTrans/nnUNet/nnunet/training/network_training/nnUNetTrainer.py", line 595, in validate
    softmax_pred = self.predict_preprocessed_data_return_seg_and_softmax(data[:-1],
  File "/home/zlj/workspace/sjy/PHTrans/PHTrans/phtrans/training/PHTransTrainer.py", line 294, in predict_preprocessed_data_return_seg_and_softmax
    ret = super().predict_preprocessed_data_return_seg_and_softmax(data,
  File "/home/zlj/workspace/sjy/PHTrans/nnUNet/nnunet/training/network_training/nnUNetTrainer.py", line 517, in predict_preprocessed_data_return_seg_and_softmax
    ret = self.network.predict_3D(data, do_mirroring=do_mirroring, mirror_axes=mirror_axes,
  File "/home/zlj/workspace/sjy/PHTrans/nnUNet/nnunet/network_architecture/neural_network.py", line 145, in predict_3D
    res = self._internal_predict_3D_3Dconv_tiled(x, step_size, do_mirroring, mirror_axes, patch_size,
  File "/home/zlj/workspace/sjy/PHTrans/nnUNet/nnunet/network_architecture/neural_network.py", line 332, in _internal_predict_3D_3Dconv_tiled
    gaussian_importance_map = gaussian_importance_map.cuda(self.get_device(), non_blocking=True)
AttributeError: 'numpy.ndarray' object has no attribute 'cuda'

Hello, after finishing training I hit the error above when running the test/validation step. It reports an AttributeError on cuda, which is the PyTorch method for moving a tensor to the GPU, but here it is being called on a NumPy array (numpy.ndarray). After several attempts at fixing it I still have not resolved it. Do you have any suggestions?
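For context, a minimal sketch of one possible workaround, not confirmed by the authors: since gaussian_importance_map arrives here as a NumPy array, it could be converted to a torch tensor before the .cuda() call in _internal_predict_3D_3Dconv_tiled. This kind of mismatch often points to a version difference between the bundled nnUNet copy and an installed one.

```python
# Sketch of a possible local patch around line 332 of
# nnUNet/nnunet/network_architecture/neural_network.py (torch is already
# imported in that module). Assumption: the Gaussian importance map only
# needs to become a torch tensor before being moved to the GPU; this is
# not an author-confirmed fix.
if not isinstance(gaussian_importance_map, torch.Tensor):
    gaussian_importance_map = torch.from_numpy(gaussian_importance_map)
gaussian_importance_map = gaussian_importance_map.cuda(self.get_device(), non_blocking=True)
```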