Hello Phy Devs!
I was curious whether there is a way to split computation across multiple GPUs. Our cluster recently received several A40 servers with 4 cards each, and I thought it would be really neat to process some of the huge datasets people are gathering across multiple cards somehow. Is this something that's possible? Looking over the docs, it sounds like the data is stored as a flat numpy binary, which I suspect could be parallelized across cards/machines. Alternatively, maybe a chunked format like zarr could be used during processing, with the results stitched back into a flat numpy binary at the end so as not to break the GUI?
Phy is cool and I'm excited to use it!