Closed yugeshav closed 1 year ago
Hi.
Regarding model size: It's a 20-layer autoencoder model. The kernel, stride, and channel sizes are detailed in Figure 1 of the paper https://arxiv.org/abs/2104.03838 (the figure is also in the README).
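If you want to estimate the parameter count yourself from the kernel and channel sizes in Figure 1, the standard per-layer formula for a 2-D convolution is a quick way to do it. The function and the example numbers below are an illustrative sketch, not the paper's actual layer sizes:

```python
def conv2d_params(in_channels, out_channels, kernel_h, kernel_w, bias=True):
    """Standard parameter count for a 2-D convolution layer:
    one (in_channels x kernel_h x kernel_w) filter per output channel,
    plus an optional bias term per output channel."""
    per_filter = in_channels * kernel_h * kernel_w
    return out_channels * (per_filter + (1 if bias else 0))

# Hypothetical example layer (NOT the paper's actual sizes):
# 45 filters of shape 1x7x5, with bias -> 45 * (1*7*5 + 1) = 1620 parameters
print(conv2d_params(1, 45, 7, 5))
```

Summing this over all 20 layers (using the real sizes from Figure 1) gives the full model's parameter count.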
Regarding computational benchmarking: It took us about 48 hours on an Nvidia K80 per noise type for each training method (this is the same GPU available on Colab or on Azure data science VMs). You will need 12 GB of GPU memory for our 20-layer model. If you're looking to train faster, you could use the smaller DCUnet10 model (see https://github.com/pheepa/DCUnet).
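Since each (noise type, training method) pair is a separate ~48-hour run, total GPU time scales multiplicatively. A minimal sketch for budgeting compute (the function name and example counts are hypothetical; only the 48 h per run figure comes from the numbers above):

```python
def total_gpu_hours(noise_types, training_methods, hours_per_run=48):
    """Each (noise type, training method) pair is one ~48 h run on a K80,
    so total GPU time is the product of the three factors."""
    return noise_types * training_methods * hours_per_run

# e.g. 4 noise types x 2 training methods -> 384 GPU-hours on a K80
print(total_gpu_hours(4, 2))
```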
Hello,
Could you please share the model size and the compute required for training?
If any other benchmarks are available, please post them here.
Regards, Yugesh