Project-MONAI / tutorials

MONAI Tutorials
https://monai.io/started.html
Apache License 2.0

ValueError in maisi_train_controlnet_tutorial.ipynb #1838

Closed. KumoLiu closed this issue 1 month ago

KumoLiu commented 1 month ago
INFO:notebook:Inference...
2024-09-23 06:17:47,215 - INFO - 'dst' model updated: 158 of 206 variables.

INFO:maisi.controlnet.infer:Number of GPUs: 2
INFO:maisi.controlnet.infer:World_size: 1
WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/environment_maisi_controlnet_train.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/config_maisi.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/config_maisi_controlnet_train.json' mode='r' encoding='UTF-8'>

INFO:maisi.controlnet.infer:trained autoencoder model is not loaded.
INFO:maisi.controlnet.infer:trained diffusion model is not loaded.
INFO:maisi.controlnet.infer:set scale_factor -> 1.0.
INFO:maisi.controlnet.infer:trained controlnet is not loaded.
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/Code/tutorials/generation/maisi/scripts/infer_controlnet.py", line 207, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/Code/tutorials/generation/maisi/scripts/infer_controlnet.py", line 159, in main
    check_input(None, None, None, output_size, out_spacing, None)
  File "/workspace/Code/tutorials/generation/maisi/scripts/sample.py", line 378, in check_input
    raise ValueError(
ValueError: The output_size[0] have to be chosen from [256, 384, 512], and output_size[2] have to be chosen from [128, 256, 384, 512, 640, 768], yet got (128, 128, 128).
E0923 06:17:50.348000 140369402987136 torch/distributed/elastic/multiprocessing/api.py:863] failed (exitcode: 1) local_rank: 0 (pid: 209184) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 919, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
scripts.infer_controlnet FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-09-23_06:17:50
  host      : yunliu-MS-7D31
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 209184)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
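
For reference, the failing constraint amounts to the size check sketched below. This is only an illustrative reconstruction of what the error message reports; the real check lives in `check_input` in `scripts/sample.py` and takes more arguments, as the traceback shows.

```python
# Illustrative sketch of the output_size constraint reported above; not the
# actual implementation of check_input in scripts/sample.py.
ALLOWED_XY = (256, 384, 512)                 # permitted values for output_size[0]
ALLOWED_Z = (128, 256, 384, 512, 640, 768)   # permitted values for output_size[2]

def validate_output_size(output_size):
    if output_size[0] not in ALLOWED_XY or output_size[2] not in ALLOWED_Z:
        raise ValueError(
            f"output_size[0] must be one of {ALLOWED_XY} and output_size[2] "
            f"must be one of {ALLOWED_Z}, yet got {tuple(output_size)}."
        )

validate_output_size((128, 128, 128))  # raises, matching the tutorial failure
```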
KumoLiu commented 1 month ago

Hi @guopengf, could you please take a look at this issue? Thanks.

KumoLiu commented 1 month ago
INFO:creating training data:Using device cuda:0 
WARNING:py.warnings:You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. 

**ERROR:creating training data:The trained_autoencoder_path does not exist!** 
INFO:creating training data:filenames_raw: ['tr_image_001.nii.gz', 'tr_image_002.nii.gz'] 
[rank0]:[W923 05:56:48.650222570 ProcessGroupNCCL.cpp:1207] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator()) 
WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir/./embeddings/tr_image_001_emb.nii.gz.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir/./embeddings/tr_image_002_emb.nii.gz.json' mode='r' encoding='UTF-8'>
[rank0]:[W923 10:10:11.778510110 ProcessGroupNCCL.cpp:1207] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())

In maisi_diff_unet_training_tutorial.ipynb
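
Incidentally, the two PyTorch warnings in the log above (the `torch.load` `weights_only` deprecation and the undestroyed NCCL process group) are unrelated to the failure. A minimal sketch of how they could be silenced, assuming a generic checkpoint path rather than anything from the tutorial scripts:

```python
import torch
import torch.distributed as dist

# Opt in to the safer default early: weights_only=True restricts unpickling,
# as recommended by the warning above ("checkpoint.pt" is a placeholder path).
state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)

# Tear the process group down explicitly before exit to avoid the
# ProcessGroupNCCL warning introduced in PyTorch 2.4.
if dist.is_initialized():
    dist.destroy_process_group()
```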

guopengf commented 1 month ago

Hi @KumoLiu, I think this error was introduced by https://github.com/Project-MONAI/tutorials/pull/1825, which added an input-check function to the controlnet inference script. We can change the toy data in the controlnet tutorial to [256, 256, 128] with spacing [1.5, 1.5, 1.5] to pass this input check.
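
A minimal sketch of generating such toy volumes with nibabel, not the tutorial's own data-preparation code; the file names simply mirror the ones in the log:

```python
import numpy as np
import nibabel as nib

shape = (256, 256, 128)                 # output_size[0] and output_size[2] satisfy check_input
affine = np.diag([1.5, 1.5, 1.5, 1.0])  # 1.5 mm isotropic voxel spacing

for name in ["tr_image_001.nii.gz", "tr_image_002.nii.gz"]:
    data = np.random.rand(*shape).astype(np.float32)
    nib.save(nib.Nifti1Image(data, affine), name)
```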

guopengf commented 1 month ago
(Quoting the same maisi_diff_unet_training_tutorial.ipynb log posted in the comment above.)

@dongyang0122 Would you like to look into this error for the diffusion UNet?