When I run predictions on image stacks of a few GB, sometimes my kernel dies after I get the message
flow/core/framework/cpu_allocator_impl.cc:81] Allocation of x exceeds 10% of system memory.
I can see that all my CPUs are close to 100%, but my RAM is always below 50% usage and my GPU memory is barely used (~10%).
How can I increase the usage of GPU memory? By increasing n_tiles? Could you please characterize n_tiles a little more? Is it [block_size, factor_dim, factor_dim]?
Is it correct that what people call batch (or block) size is actually the size of the sub-matrices of a block-diagonal matrix, as explained here?
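For context, here is my current mental model of what n_tiles does, sketched in plain NumPy. This is my own illustration, not the library's code: I assume n_tiles gives the number of tiles per image axis, each tile is predicted separately, and the results are stitched back (ignoring the border overlap the real implementation would need). predict_tiled and predict_fn are hypothetical names.

```python
import numpy as np
from itertools import product

def predict_tiled(img, n_tiles, predict_fn):
    # Hypothetical sketch: split the stack into n_tiles[i] chunks along
    # axis i, run prediction on each tile, and stitch the outputs back.
    # (No tile overlap/blending here; a real implementation needs it.)
    axis_chunks = [np.array_split(np.arange(s), n)
                   for s, n in zip(img.shape, n_tiles)]
    out = np.empty_like(img, dtype=float)
    for idx in product(*axis_chunks):
        sl = tuple(slice(ix[0], ix[-1] + 1) for ix in idx)
        out[sl] = predict_fn(img[sl])  # only one tile in memory at a time
    return out

# Toy check: an identity "prediction" reproduces the input stack.
img = np.arange(2 * 4 * 6, dtype=float).reshape(2, 4, 6)
res = predict_tiled(img, n_tiles=(1, 2, 3), predict_fn=lambda t: t)
```

If that reading is right, increasing n_tiles would shrink each tile (and its memory footprint) rather than change any block-diagonal matrix structure, but please correct me if I'm wrong.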
Thank you.