juglab / n2v

This is the implementation of Noise2Void training.

RAM requirement for 4D dataset #90

Closed: ChengYJack closed this issue 4 years ago

ChengYJack commented 4 years ago

Hi,

I am trying to train a model with a set of 4D movies. Each file is about 950 MB and the shape of the images is [211, 9, 512, 512, 1]. How much RAM do I need for a dataset like this? Also, will the patch size affect the RAM usage?

Thanks!

Best,

Jack

tibuch commented 4 years ago

To process your data you would have to split it up along the time dimension and train a 3D network; a 4D network does not currently exist. If the temporal sampling of the movie is high enough, i.e. objects do not move by multiple pixels between consecutive frames, you could instead keep the time dimension and split along Z or any other spatial dimension.
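For illustration, here is a minimal numpy sketch of both splitting options, assuming the loaded array has TZYXC axes as in the shape above (the zero array is just a stand-in for your loaded movie; variable names are placeholders, not part of n2v):

```python
import numpy as np

# Stand-in for one loaded movie with axes TZYXC ([211, 9, 512, 512, 1]);
# in practice this would come from e.g. tifffile.imread on your file.
movie = np.zeros((211, 9, 512, 512, 1), dtype=np.float32)

# Option A: split along time -> 211 ZYXC volumes for a 3D network.
volumes = [movie[t] for t in range(movie.shape[0])]         # each (9, 512, 512, 1)

# Option B: keep time as the "depth" axis and split along Z instead,
# valid only if objects barely move between consecutive frames.
time_stacks = [movie[:, z] for z in range(movie.shape[1])]  # each (211, 512, 512, 1)
```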

To enable training on large images, training patches are extracted. Our example notebooks show how we extract such patches, and the patch sizes used there are our recommended default values.
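As a rough sketch of that workflow, following the pattern of the 3D example notebook (the directory path, `dims` string, and patch shape below are placeholders and need to be adapted to your data):

```python
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator

datagen = N2V_DataGenerator()

# Load 3D stacks from a folder of TIFFs (placeholder path) and cut them
# into 3D training patches; (32, 64, 64) is the patch shape used in the
# 3D example notebook and is a reasonable starting point.
imgs = datagen.load_imgs_from_directory(directory="data/", dims='ZYX')
patches = datagen.generate_patches_from_list(imgs, shape=(32, 64, 64))
print(patches.shape)  # (n_patches, 32, 64, 64, 1)
```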

RAM usage depends on the patch size, the network size, and the training batch size.
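As a back-of-envelope illustration only (the actual footprint also depends on the U-Net depth and feature maps, so treat these numbers as a lower bound; patch shape and counts are placeholders):

```python
import numpy as np

patch_shape   = (32, 64, 64, 1)  # placeholder 3D patch (Z, Y, X, C)
batch_size    = 16
bytes_per_val = 4                # float32

# Memory for one input batch (network activations come on top of this).
batch_bytes = batch_size * np.prod(patch_shape) * bytes_per_val
print(f"one input batch: {batch_bytes / 1e6:.1f} MB")   # ~8.4 MB

# The full patch array held in host RAM before training usually dominates.
n_patches = 20000                # placeholder count
all_bytes = n_patches * np.prod(patch_shape) * bytes_per_val
print(f"all patches in RAM: {all_bytes / 1e9:.2f} GB")  # ~10.5 GB
```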

I am assuming that your Z dimension has size 9, which is too small to extract large enough training patches. You could either pad the Z dimension with zeros on both sides or, as stated above, slice along the Z dimension. A last option would be to train a 2D network on XY slices.

I would recommend trying training on XY slices first and investigating other splitting options down the road.
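A minimal sketch of the XY-slice route, again assuming TZYXC axis order and using a zero array as a stand-in for the loaded movie:

```python
import numpy as np

# Stand-in for one loaded movie with axes TZYXC.
movie = np.zeros((211, 9, 512, 512, 1), dtype=np.float32)

# Merge time and Z into a single sample axis: 211 * 9 = 1899 XY slices,
# which can then be patched and used to train a standard 2D N2V model.
xy_slices = movie.reshape(-1, 512, 512, 1)
print(xy_slices.shape)  # (1899, 512, 512, 1)
```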

Best wishes

ChengYJack commented 4 years ago

Thank you for your help!