xuxumiao777 closed this issue 1 year ago
Hi! Checkpoint saving is implemented via the PyTorch Lightning checkpointing callback:
https://github.com/bennyguo/instant-nsr-pl/blob/2d8970ddf2cf405e99d09652560ed62e4d1aa7a5/launch.py#L71-L74
You could use all the parameters here in the `checkpoint` section of the config files to customize the checkpoint-saving behavior (currently we just save the checkpoint after the last training step). You can find the checkpoint in the `ckpt` directory inside your trial output directory, and you may use `torch.load` to inspect the state dict stored in it (including the hash tables, of course) for further processing.
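As a sketch of that last step, the loaded checkpoint can be inspected like this. The path below is a placeholder for a checkpoint in your own trial's `ckpt` directory, and the key substring `"encoding"` used to pick out the hash-grid parameters is an assumption that depends on the model definition:

```python
import torch


def inspect_checkpoint(path):
    # Lightning checkpoints are plain dicts; model weights live under "state_dict".
    ckpt = torch.load(path, map_location="cpu")
    state_dict = ckpt["state_dict"]

    # Print every parameter name and its shape.
    for name, tensor in state_dict.items():
        print(f"{name}: {tuple(tensor.shape)}")

    # Filter parameters by name to grab the hash tables; the substring
    # "encoding" is a guess here -- check the printed names for your model.
    return {k: v for k, v in state_dict.items() if "encoding" in k}


# Placeholder path -- substitute your own trial output directory.
# hash_params = inspect_checkpoint("exp/<name>/<trial>/ckpt/last.ckpt")
```

The returned dict maps parameter names to tensors, so the hash tables can be saved separately with `torch.save` or converted with `.numpy()` for further processing.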
Thank you, your answer helps me.
Hi bennyguo, I hope to find a simple way to save the trained model and the hash map. Do you know how to do that?