DavisMeee / LighTDiff

LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion
MIT License

Request for access to the dataset #6

Closed yjy-png closed 3 days ago

yjy-png commented 4 days ago

Hello, it is a great honor to see your research. Congratulations on your paper being the runner-up for the Best Paper Award at MICCAI 2024. I noticed that you mentioned you would make your dataset public. Could you please provide me with a copy?

DavisMeee commented 4 days ago

Sure, I am preparing that right now. The link has now been released in the README.md.

yjy-png commented 3 days ago

Thank you for your response. I would also like to ask about the hardware configuration you used. I am using an RTX 3090 with 24 GB of VRAM, and even with a batch size of 4, I still encounter an out-of-memory error.

DavisMeee commented 3 days ago

I am using an RTX 3090 with batch size = 8, and it only needs 8 GB of CUDA memory. We designed it to be lightweight, so it can even be trained on a GTX 1060.

yjy-png commented 3 days ago

Thank you again for your reply. I am currently experiencing the following error during validation (it stops at 0/596 images):

Traceback (most recent call last):
  File "E:\code\model\LighTDiff\LighTDiff\lightdiff\train.py", line 12, in <module>
    train_pipeline(root_path)
  File "e:\code\model\lightdiff\basicsr-light\basicsr\train.py", line 261, in train_pipeline
    model.validation(
  File "e:\code\model\lightdiff\basicsr-light\basicsr\models\base_model.py", line 48, in validation
    self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
  File "e:\code\model\lightdiff\lightdiff\lightdiff\models\lightdiff_model.py", line 317, in nondist_validation
    self.test()
  File "e:\code\model\lightdiff\lightdiff\lightdiff\models\lightdiff_model.py", line 211, in test
    self.output = self.bare_model.ddim_LighT_sample(
  File "C:\Users\swl\.conda\envs\lightdiff\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "e:\code\model\lightdiff\lightdiff\lightdiff\archs\ddpm_arch.py", line 242, in ddim_LighT_sample
    pred_noise = self.denoise_fn(torch.cat([F.interpolate(x_in, sample_img.shape[2:]), sample_img], dim=1), noise_level)
  File "C:\Users\swl\.conda\envs\lightdiff\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "e:\code\model\lightdiff\lightdiff\lightdiff\archs\denoising_arch.py", line 366, in forward
    x = layer(x, t)
  File "C:\Users\swl\.conda\envs\lightdiff\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "e:\code\model\lightdiff\lightdiff\lightdiff\archs\denoising_arch.py", line 249, in forward
    x = self.attn(x)
  File "C:\Users\swl\.conda\envs\lightdiff\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "e:\code\model\lightdiff\lightdiff\lightdiff\archs\denoising_arch.py", line 208, in forward
    attn = torch.einsum(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.00 GiB (GPU 0; 24.00 GiB total capacity; 27.58 GiB already allocated; 0 bytes free; 28.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
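The call that fails is the self-attention einsum in denoising_arch.py, and the attention map it builds grows quadratically with the number of spatial positions, so its memory use is very sensitive to the input resolution. A rough back-of-the-envelope (the resolutions, single head, and float32 below are illustrative assumptions, not values taken from the repo):

# Back-of-the-envelope size of one self-attention map: an (H*W) x (H*W)
# matrix per head per sample. All concrete numbers here are illustrative.
def attn_map_gib(h, w, heads=1, batch=1, bytes_per_el=4):
    tokens = h * w
    return batch * heads * tokens * tokens * bytes_per_el / 1024**3

# Doubling the side length multiplies the attention map size by 16.
print(f"64x64  : {attn_map_gib(64, 64):.2f} GiB")    # ~0.06 GiB
print(f"128x128: {attn_map_gib(128, 128):.2f} GiB")  # ~1.00 GiB
print(f"256x256: {attn_map_gib(256, 256):.2f} GiB")  # ~16.00 GiB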

DavisMeee commented 3 days ago

May I ask why you got 27.58 GiB allocated? TBH we have never run into this before.

yjy-png commented 3 days ago

I apologize, I am not yet sure what is causing this; I will investigate it. I also noticed that you have made your dataset public, and I appreciate that. Thank you very much.

yjy-png commented 3 days ago

Thanks to your dataset, I realized that my issue was caused by my own dataset not handling the image size properly. I built it from EndoVis18 using MATLAB for processing, but after looking at your dataset, I found that my processing results are nowhere near as good as yours. Could you please share the code for your preprocessing method? I greatly appreciate your helpful responses.

DavisMeee commented 3 days ago

I am using a basic resize with the scikit-image package, which is easy to replicate.
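In case it helps, a minimal sketch of that kind of preprocessing (the target size, directory layout, and anti-aliasing choice in this snippet are placeholders for illustration, not the exact values we used):

# Illustrative preprocessing sketch: resize frames with scikit-image.
# SRC_DIR, DST_DIR, and TARGET are placeholders; adjust them to your data.
from pathlib import Path

from skimage import io, img_as_ubyte
from skimage.transform import resize

SRC_DIR = Path("endovis18/frames")           # placeholder input directory
DST_DIR = Path("endovis18/frames_resized")   # placeholder output directory
TARGET = (256, 256)                          # placeholder target resolution (H, W)

DST_DIR.mkdir(parents=True, exist_ok=True)

for img_path in sorted(SRC_DIR.glob("*.png")):
    img = io.imread(img_path)
    # resize() returns a float image in [0, 1]; anti_aliasing avoids artifacts
    # when downscaling. img_as_ubyte converts back to uint8 for saving.
    out = resize(img, TARGET, anti_aliasing=True)
    io.imsave(DST_DIR / img_path.name, img_as_ubyte(out))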

yjy-png commented 3 days ago

Okay, I understand.