pytorch / data

A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.

best practice for `snapshot_every_n_steps` #1283

Open · ShoufaChen opened 1 month ago

ShoufaChen commented 1 month ago

Hello,

Thank you for your awesome implementation of `StatefulDataLoader`.

I have a question about `snapshot_every_n_steps`: there doesn't seem to be a detailed explanation of this argument anywhere.

cc @andrewkho

andrewkho commented 1 month ago

Hi @ShoufaChen, thanks for the issue; we should update the documentation to explain this better.

To answer your question: it depends mainly on the size and composition of your state. If you're storing, e.g., an int representing an index or file offset, then snapshotting every step shouldn't be an issue. If your state is very large, e.g. it includes buffers of data for shuffling, then the overhead of creating a snapshot and passing it through a multiprocessing queue on every step may slow down training; this argument lets you decrease the snapshot frequency. For example, if you know you're checkpointing every 1000 steps, you can set this value to 1000.
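For concreteness, here's a minimal sketch of aligning `snapshot_every_n_steps` with a training checkpoint interval. The `StatefulDataLoader` constructor and its `state_dict()`/`load_state_dict()` methods are the real torchdata API; the dataset, batch/worker settings, interval, and file path are placeholders:

```python
import torch
from torchdata.stateful_dataloader import StatefulDataLoader

CHECKPOINT_EVERY = 1000  # hypothetical training-checkpoint interval, in steps

# Placeholder dataset; any map-style or iterable-style dataset works.
dataset = list(range(100_000))

# Align snapshot_every_n_steps with the checkpoint interval so worker
# state is only snapshotted and passed through the multiprocessing
# queue once per checkpoint window instead of on every step.
loader = StatefulDataLoader(
    dataset,
    batch_size=32,
    num_workers=4,
    snapshot_every_n_steps=CHECKPOINT_EVERY,
)

for step, batch in enumerate(loader, start=1):
    ...  # forward/backward/optimizer step (placeholder)
    if step % CHECKPOINT_EVERY == 0:
        # Save the dataloader state alongside model/optimizer state.
        torch.save(loader.state_dict(), "dataloader_ckpt.pt")

# On resume: build an identically configured loader, then restore its
# state before iterating.
loader.load_state_dict(torch.load("dataloader_ckpt.pt"))
```

The key point is that `snapshot_every_n_steps` only controls how often worker state is captured, not how often you save it; calling `state_dict()` on the same cadence as your checkpoints means the captured snapshot is fresh when you save.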