tenorslowworm opened this issue 7 years ago
You can use `nvidia-smi -l 1`; it updates memory usage in real time while neural-style is running.
But that's no different from running it repeatedly, right? I can already run it in a loop faster than once per second myself. My fear is that I'll miss brief spikes in memory usage that way; I'm also hoping that a software solution would have the additional benefit of catching attempted allocations that fail because not enough memory is available.
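For what it's worth, higher-frequency polling is possible: `nvidia-smi` accepts `--query-gpu=memory.used --format=csv` for machine-readable output, and a small wrapper can sample it every few tens of milliseconds and keep the per-GPU maximum. Below is a minimal sketch of that idea; it assumes `nvidia-smi` is on the PATH, and the function names (`update_peaks`, `sample_gpu_memory`, `watch`) are mine, not from any existing tool. It still cannot catch failed allocations, only narrow the sampling gap.

```python
import subprocess
import time

def update_peaks(peaks, sample_mib):
    """Fold one reading (a list of MiB values, one per GPU) into the running peaks."""
    for gpu, used in enumerate(sample_mib):
        if gpu >= len(peaks):
            peaks.append(used)
        else:
            peaks[gpu] = max(peaks[gpu], used)
    return peaks

def sample_gpu_memory():
    """One memory.used reading per GPU, in MiB, via nvidia-smi's CSV query output."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True)
    return [int(line) for line in out.splitlines() if line.strip()]

def watch(interval_s=0.05):
    """Poll until Ctrl-C, then print the peak observed usage per GPU."""
    peaks = []
    try:
        while True:
            update_peaks(peaks, sample_gpu_memory())
            time.sleep(interval_s)
    except KeyboardInterrupt:
        pass
    for gpu, peak in enumerate(peaks):
        print(f"GPU {gpu}: peak {peak} MiB")
```

You would run `watch()` in a separate terminal alongside neural-style; 50 ms sampling is roughly the finest granularity worth attempting before the polling itself becomes the bottleneck.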
Could someone offer a code change that reports peak memory usage for each GPU at the end of a multi-GPU run? It would really help with determining an optimal multigpu_strategy. Or is there an external tool that can do this? The best I can manage is running nvidia-smi repeatedly, but that can miss short but significant spikes in demand.
Update: Or better yet, a report of peak memory usage per layer.
Thanks!
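Since neural-style is written in Torch7, the per-layer idea could perhaps be done from inside the process: cutorch exposes `cutorch.getMemoryUsage(dev)`, which returns free and total device memory in bytes. Below is an untested Lua sketch of wrapping each module's forward pass to log usage after it runs; the function name `instrument` and the wrapping approach are mine, not part of neural_style.lua.

```lua
require 'cutorch'

-- Wrap each module's updateOutput so memory is logged right after it runs.
local function instrument(net)
  for i = 1, net:size() do
    local layer = net:get(i)
    local orig = layer.updateOutput
    layer.updateOutput = function(self, input)
      local output = orig(self, input)
      for dev = 1, cutorch.getDeviceCount() do
        local free, total = cutorch.getMemoryUsage(dev)
        print(string.format('layer %d (%s) GPU %d: %.0f MiB in use',
          i, torch.type(self), dev, (total - free) / 2^20))
      end
      return output
    end
  end
end
```

Note this measures memory in use after each layer completes, so it can still miss transient peaks inside a layer's kernels, but it would at least attribute growth to specific layers, which is what multigpu_strategy tuning needs.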