Closed fjug closed 5 years ago
Hi,
Thanks for reporting!
Do you know what the memory limits of Google Colab are?
One hot-fix would be to reduce the batch-size until it runs. This will result in less stable convergence and might take longer to train.
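Reducing the batch size is just a config change. A minimal sketch, assuming the n2v library's `N2VConfig` class and its `train_batch_size` keyword (check the signature of your installed version, as the exact parameter names here are assumptions):

```python
# Hypothetical config fragment -- parameter names assumed from n2v's N2VConfig;
# verify against the version you have installed.
from n2v.models import N2VConfig

config = N2VConfig(
    X,                             # your patched training data
    train_batch_size=8,            # reduced from a larger default to lower peak memory
    n2v_patch_shape=(32, 64, 64),  # patch shape left unchanged here
)
```

A smaller batch means noisier gradient estimates, which is why convergence may be less stable and training may take longer, as noted above.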
I actually don't know… and even Google doesn't really bring up an answer easily. If you ask the machine itself, it says:
MemTotal: 13335276 kB
MemFree: 11129432 kB
MemAvailable: 12643328 kB
You can see this by executing `!cat /proc/meminfo` in a cell. Even `!top` works if you prefer, but you'll have to cancel the execution manually.
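The same check works as plain shell outside a notebook (the leading `!` is only needed inside a Colab cell); for example, to print just the three fields quoted above:

```shell
# Print total, free, and available memory from the kernel's meminfo file
grep -E '^(MemTotal|MemFree|MemAvailable)' /proc/meminfo
```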
Thanks for the fast answer by the way.
Using `n2v_patch_shape = (32, 32, 32)` in the config seems to work on Google Colab (CPU only), but training then takes a very long time. Using `n2v_patch_shape = (16, 32, 32)` worked for the GPU setting on Google Colab, and training is reasonably fast.
I am somewhat hesitant to change the current `n2v_patch_shape = (32, 64, 64)` because it gives very nice results.
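A quick back-of-the-envelope comparison of the patch volumes shows why the smaller shapes fit (voxel counts per patch only; actual memory use also depends on batch size and network activations):

```python
# Voxels per training patch for each shape discussed above
default = 32 * 64 * 64   # (32, 64, 64): 131072 voxels
cpu_ok  = 32 * 32 * 32   # (32, 32, 32): 32768 voxels
gpu_ok  = 16 * 32 * 32   # (16, 32, 32): 16384 voxels

print(default // cpu_ok)  # → 4 (4x fewer voxels per patch)
print(default // gpu_ok)  # → 8 (8x fewer voxels per patch)
```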
Maybe it would be enough to mention it as a note in the notebook?
Is there a way to reduce the memory requirement so that this example can run through on Google Colab? That would be nice, but it's likely not the most pressing issue... ;)