I use VoVNet as my backbone. The VoVNetCP variant appears to use gradient checkpointing to save GPU memory, but I get the following error:
Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 328 with name img_backbone.stage5.OSA5_3.ese.fc.bias has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
Does this mean I cannot use gradient checkpointing in a distributed training setting?
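For reference, here is a minimal sketch of the kind of setup I mean. The `Stage` module is a hypothetical stand-in for a checkpointed OSA block, not the actual VoVNetCP code, and `use_reentrant` only exists on recent PyTorch versions:

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint

# Hypothetical stand-in for one checkpointed OSA stage; NOT the real VoVNetCP code.
class Stage(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        # Reentrant checkpointing replays this forward during backward, so
        # DDP's autograd hooks can fire more than once per parameter -- the
        # "marked as ready twice" error above. (On older PyTorch there is no
        # use_reentrant argument; checkpointing is always reentrant.)
        return checkpoint(self.body, x, use_reentrant=True)

def main():
    # Single-process "gloo" group just so the sketch runs standalone.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    # Combining reentrant checkpointing with find_unused_parameters=True is
    # a known trigger for the "mark a variable ready only once" error.
    model = DDP(Stage(8), find_unused_parameters=True)
    x = torch.randn(2, 8, 16, 16, requires_grad=True)
    model(x).sum().backward()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

From what I can tell, the PyTorch docs say non-reentrant checkpointing (`use_reentrant=False`) composes with DDP, but I have not confirmed whether that fixes the VoVNetCP case.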