The IPP is a web portal for biologists to interact with High Performance Computing (HPC) systems. It was designed and funded by the IMB Microscopy Facility & the Research Computing Centre (RCC).
Memory calculations to assist users in tiling their large datasets!
E.g. an 11 GB dataset won't fit into the 16 GB of GPU VRAM available for deconvolution, so it needs to be cut up into tiles. Newer versions of Microvolution assist users by highlighting the GPU memory usage in red when it is too large.
But for the portal we need to do this automatically. I believe this has been done in the newer versions of Microvolution.
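A minimal sketch of what an automatic tiling check could look like. The function name, the equal-tile split, and the `overhead` factor (deconvolution working memory assumed here as a multiple of input size) are all assumptions for illustration, not Microvolution's actual formula:

```python
import math

def tiles_needed(dataset_gb, vram_gb, overhead=1.0):
    """Return the minimum number of equal tiles so each tile's
    working set (dataset / tiles * overhead) fits in GPU VRAM.
    The overhead factor is a hypothetical stand-in for the
    algorithm's real working-memory requirement."""
    if vram_gb <= 0:
        raise ValueError("VRAM must be positive")
    return max(1, math.ceil(dataset_gb * overhead / vram_gb))

# The 11 GB example dataset exceeds 16 GB VRAM once working memory
# (assumed ~2x the input here) is counted, so it must be tiled:
print(tiles_needed(11, 16, overhead=2.0))  # -> 2
```

The portal could run this check before job submission and pre-tick the auto-tiling option whenever the result is greater than 1.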
Default "un-ticked" tickbox to allow users to enable peer-to-peer GPU mode to share GPU VRAM.
Default "ticked" tickbox to enable auto-tiling mode, including a memory-usage estimator in the API.
JS: it is not clear how accurate the results are, and there are delays waiting for the Slurm job that runs the estimate. Can we provide an estimated Slurm scheduling time? Looking for a user experience similar to Huygens.
IMB to do more testing; raise with Marc Bruce?
ME: open a CRM issue if the IPP is found to be incorrectly interpreting Microvolution estimates; run resource-estimate jobs on the Slurm debug queue to minimise scheduling delay; can Microvolution produce an estimate from a list of available resources rather than having to run on the actual resources? Do Bunya walltime limits enable better scheduling-delay estimates?
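A sketch of what a resource-estimate job on the debug queue might look like. This is a config fragment under assumptions: the partition name, account, and especially the estimator command are hypothetical placeholders, not verified Microvolution CLI flags:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a quick resource estimate.
#SBATCH --job-name=mv-estimate
#SBATCH --partition=debug      # short-turnaround queue to minimise scheduling delay
#SBATCH --gres=gpu:1
#SBATCH --time=00:05:00        # tight walltime so the scheduler can backfill the job

# Placeholder invocation: the real estimate entry point and its
# arguments would come from the Microvolution documentation.
microvolution_estimate --input "$DATASET" --report memory
```

Keeping the requested walltime short is the main lever here: backfill scheduling tends to place small, short jobs sooner, which addresses the scheduling-delay concern above.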