StanfordMIMI / DDM2

[ICLR2023] Official repository of DDM2: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models

RUNNING ERROR #6

Open qianquanfirstminds opened 1 year ago

qianquanfirstminds commented 1 year ago

Hi, thank you for your great work. I encountered this error while running the program. Can you guide me on where exactly the error occurred?

Traceback (most recent call last):
  File "D:\DDM2\train_noise_model.py", line 25, in <module>
    opt = Logger.parse(args, stage=1)
  File "D:\DDM2\core\logger.py", line 28, in parse
    with open(opt_path, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'config/sr_sr3_16_128.json'

Thank you very much!

qianquanfirstminds commented 1 year ago

My programming skills are relatively weak; I hope this question is not too foolish.

tiangexiang commented 1 year ago

Hi Quan, thanks for your interest in our work! This error simply means the corresponding config file: 'sr_sr3_16_128.json' cannot be found under the folder: 'config/'. Can you double-check to make sure the file does exist and there is not any typo in the file name? Feel free to leave further questions here :) Thanks!

qianquanfirstminds commented 1 year ago

Thank you very much for your reply. I have now solved the problem above, and I am working on integrating your method into our field. If I have further questions later, I will continue to ask you. Thanks again for your paper!

qianquanfirstminds commented 1 year ago


1. Is the code meant to be run on a Linux system? On Windows I get the following error: (DDM2) PS D:\DDM2> /bin/sh D:/DDM2/run_stage1.sh — /bin/sh: The term '/bin/sh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1

2. May I ask roughly how many images are in the original training set?

Thanks!

tiangexiang commented 1 year ago
  1. Yes, our code was developed/tested in a Linux environment, and we recommend you also use a Linux machine (e.g. Ubuntu) or a Windows virtual environment (e.g. WSL) to run and extend our code. This error indeed occurs because your Windows system cannot find /bin/sh to execute the shell script; running the corresponding command directly instead will work.
  2. Each of our models is trained on a single 4D volume. E.g., for the hardi_150 data, the size is [106, 81, 76, 150]: each "2D image" is 106 x 81, and there are 76 x 150 of them in total. Due to the nature of the algorithm, we require that the [106, 81, 76] 3D volume comes with 150 different descriptions of the same volume, not 150 unrelated images. That said, 150 descriptions is not a strict requirement: in our experiments we tried a dataset with only 6 different descriptions, and our algorithm still worked :)
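For reference, the data layout described above can be sketched as follows (shapes taken from the comment; zeros stand in for real data, and numpy is used only for demonstration):

```python
import numpy as np

# The hardi_150 4D DWI data: a single 106 x 81 x 76 spatial volume
# observed under 150 different "descriptions" of the same volume,
# not 150 unrelated images.
volume = np.zeros((106, 81, 76, 150), dtype=np.float32)

one_description = volume[..., 0]                  # one 3D view, shape (106, 81, 76)
n_2d_images = volume.shape[2] * volume.shape[3]   # 76 * 150 = 11400 images of size 106 x 81
```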