Expected Behavior

Both single- and multi-GPU training setups should work correctly when storing interactions.
Current Behavior
Right now, if interactions are not aggregated and InteractionSaver is used, only the leader process saves data, so all the data on the other GPUs is lost.
When storing interactions, the current callback does not handle the case where interactions are not aggregated and training runs on multiple GPUs: https://github.com/facebookresearch/EGG/blob/master/egg/core/callbacks.py#L257-L258
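One possible fix is to have every rank write its own shard when interactions are not aggregated, instead of saving only on the leader. A minimal sketch of the path logic (the helper name and signature below are hypothetical, not EGG's actual API):

```python
import os


def interaction_save_path(dump_dir: str, mode: str, epoch: int,
                          rank: int, aggregated: bool) -> str:
    # Hypothetical helper: when interactions ARE aggregated, the leader
    # (rank 0) holds the full log, so a single file per epoch suffices.
    # When they are NOT aggregated, each rank holds only its own shard,
    # so the filename must include the rank; otherwise the non-leader
    # GPUs' interactions are silently dropped.
    if aggregated:
        fname = f"interactions_epoch{epoch}"
    else:
        fname = f"interactions_epoch{epoch}_rank{rank}"
    return os.path.join(dump_dir, mode, fname)
```

With this scheme, every process calls the saver and the per-rank files can be merged offline; alternatively, the shards could be gathered to the leader (e.g. with `torch.distributed.all_gather_object`) before a single save.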