yjy-png closed this issue 3 days ago
Sure, I am preparing that right now. The link has just been added to README.md.
Thank you for your response. May I also ask about the hardware configuration you used? I am using a 3090 with 24 GB of VRAM, and even with a batch size of 4 I still encounter an out-of-memory error.
I am using an RTX 3090 with batch size = 8, and it needs only about 8 GB of CUDA memory. We designed the model to be lightweight, so it can even be trained on a GTX 1060.
Thank you again for your reply. I am currently getting the following error:
Traceback (most recent call last):
File "E:\code\model\LighTDiff\LighTDiff\lightdiff\train.py", line 12, in
May I ask why you got 27.58 GiB allocated? To be honest, we have never seen this before.
I apologize, I am not yet sure what caused it; I will investigate. I also noticed that you have made your dataset public, which I appreciate. Thank you very much.
Thanks to your dataset, I realized the cause of my issue: my dataset did not handle image sizes properly. I built it from Endovis18 using MATLAB for processing, but after looking at your dataset I found that my processed results are nowhere near as good as yours. Could you please share the code for your preprocessing method? I greatly appreciate your helpful responses.
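As a rough sanity check on the memory numbers in this thread: if activation memory scales roughly linearly with batch size (a common but assumed property, not stated by the authors), then 8 GiB at batch size 8 implies about 1 GiB per sample, and a 27.58 GiB allocation is consistent with inputs many times larger than intended, e.g. un-resized frames:

```python
# Back-of-the-envelope check; the linear-scaling-with-batch-size assumption
# is mine, not from the thread.
reported_total = 8.0            # GiB at batch size 8, per the authors
per_sample = reported_total / 8  # ~1 GiB per sample
observed = 27.58                 # GiB from the OOM report above
implied_batch = observed / per_sample  # effective "batch-size equivalent" of the allocation
print(f"~{per_sample:.1f} GiB/sample; {observed} GiB looks like ~{implied_batch:.0f} samples' worth")
```

If the inputs were never downscaled, each sample simply carries far more activation memory than the 1 GiB the authors budgeted for.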
I am just using a basic resize from the scikit-image package, which should be easy to replicate.
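A minimal sketch of that kind of preprocessing with scikit-image's `transform.resize`; the 256x256 target size and the uint8 round-trip are my assumptions, not settings confirmed in the thread:

```python
import numpy as np
from skimage.transform import resize
from skimage import img_as_ubyte

# Dummy RGB frame standing in for an Endovis18 image (H, W, 3), uint8.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# resize() returns float64 in [0, 1]; anti_aliasing smooths when downscaling.
small = resize(frame, (256, 256), anti_aliasing=True)

# Convert back to uint8 for saving to disk.
out = img_as_ubyte(np.clip(small, 0.0, 1.0))
assert out.shape == (256, 256, 3) and out.dtype == np.uint8
```

The same pattern, looped over a directory of frames, reproduces a basic resize-only preprocessing pipeline.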
Okay, I understand.
Hello, it is a great honor to see your research. Congratulations on your paper being runner-up for the Best Paper Award at MICCAI 2024. I noticed you mentioned that you would make your dataset public. Could you please provide me with a copy?