cweld510 opened 3 weeks ago
Have you looked at https://github.com/google/gvisor/issues/10478 (which I believe was filed by one of your colleagues :))?
I believe `cuda-checkpoint` should work well within gVisor now that NVIDIA has fixed the issue described in that bug, and should allow GPU checkpointing to work in gVisor without the complexity of recording and replaying CUDA calls.
Interesting, I assumed that NVIDIA hadn't fixed the issue since https://github.com/NVIDIA/cuda-checkpoint/issues/4 is still open, but honestly, I haven't tried running `cuda-checkpoint` on PyTorch within gVisor again recently. I will do that.
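In case it's useful, here's a minimal sketch of the kind of target program I'd use for such a test. The program itself is hypothetical (written for this thread, not from any NVIDIA sample); the `--toggle --pid` invocation in the comment follows the usage shown in the cuda-checkpoint README, as I understand it:

```c
/* checkpoint_target.c: a long-running CUDA process to test
 * cuda-checkpoint against, e.g. inside a gVisor sandbox.
 * Build: nvcc -o checkpoint_target checkpoint_target.c
 */
#include <cuda_runtime.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char *dev_buf;
    const char msg[] = "state that must survive checkpoint/restore";
    char check[sizeof(msg)];

    /* Allocate device memory and fill it with known contents. */
    if (cudaMalloc((void **)&dev_buf, sizeof(msg)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    cudaMemcpy(dev_buf, msg, sizeof(msg), cudaMemcpyHostToDevice);

    printf("pid %d ready; suspend/resume CUDA state with:\n"
           "  cuda-checkpoint --toggle --pid %d\n",
           (int)getpid(), (int)getpid());

    /* Periodically read the buffer back; if a checkpoint/restore
     * cycle corrupted device state, the comparison will fail. */
    for (;;) {
        sleep(5);
        cudaMemcpy(check, dev_buf, sizeof(msg), cudaMemcpyDeviceToHost);
        printf("device buffer %s\n",
               memcmp(check, msg, sizeof(msg)) == 0 ? "intact" : "CORRUPTED");
    }
}
```

The idea being: run this under runsc with nvproxy enabled, toggle the CUDA state from outside, checkpoint/restore the sandbox, toggle back, and see whether the buffer survives.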
I would recommend trying the latest driver (R565 I believe).
Thanks! I'll reply back when I've had a chance to try the latest driver. Really appreciate the help on this.
Description
We're interested in some form of GPU checkpointing - is this something the gVisor team plans to support at any point?
Generally, existing GPU checkpointing implementations described in papers like Singularity or Cricket intercept CUDA calls via `LD_PRELOAD`. Prior to a checkpoint, they record stateful calls in a log, which is stored at checkpoint time along with the contents of GPU memory. At restore time, GPU memory is reloaded and the log is replayed. Both frameworks also have to do some virtualization of device pointers.

It seems (perhaps naively) that a similar scheme might be possible within nvproxy, which already intercepts calls to the GPU driver. In theory, nvproxy could record a subset of calls made to the GPU driver and replay them at checkpoint/restore time, virtualizing file descriptors and device pointers as needed; separately, it could support copying the contents of GPU memory off the device to a file and back.
This is clearly complex. I'm curious whether you all believe it to be viable, and whether you plan on exploring the scheme described above, or a different one, at any point?
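Purely to make the record/replay idea concrete, here's a minimal sketch of the interception layer such a scheme needs. This is illustrative, not taken from either paper: the log format is invented, and a real system would interpose many more entry points (streams, kernel launches, the driver API in `libcuda`), not just `cudaMalloc`:

```c
/* shim.c: illustrative LD_PRELOAD interposer in the style of
 * Singularity/Cricket: forward each CUDA runtime call to the real
 * library while appending stateful calls to a replay log.
 * Build: gcc -shared -fPIC -o shim.so shim.c -ldl
 * Use:   LD_PRELOAD=./shim.so ./app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int cudaError_t; /* stand-in for the runtime's enum; 0 == cudaSuccess */

cudaError_t cudaMalloc(void **devPtr, size_t size) {
    static cudaError_t (*real)(void **, size_t);
    if (!real)
        real = (cudaError_t (*)(void **, size_t))
                   dlsym(RTLD_NEXT, "cudaMalloc");

    cudaError_t err = real(devPtr, size);

    /* Record the call and its result so that, at restore time, the
     * allocation can be replayed and the (possibly different) new
     * device pointer mapped back to the recorded one. */
    if (err == 0)
        fprintf(stderr, "log: cudaMalloc(%zu) -> %p\n", size, *devPtr);
    return err;
}
```

Note this only catches applications that dynamically link `libcudart`, which is one reason doing the equivalent inside nvproxy (which sees the driver ioctls regardless of how the application links) seems appealing.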
Is this feature related to a specific bug?
No response
Do you have a specific solution in mind?
No response