hansoogithub closed this issue 10 months ago
Hi @hansoogithub

If you only want to inspect the `state_dict`, you can simply do:

```python
ckpt = torch.load("mycheckpoint.ckpt")
```

`ckpt` is a dictionary and, if I am not mistaken, you will find what you want in `ckpt['state_dict']`. Otherwise, just inspect the keys in `ckpt` and you should easily find it :wink:
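As a self-contained illustration of that loading step (the checkpoint here is a tiny stand-in built on the fly; a real SPT Lightning checkpoint holds many more entries, and the key name below is made up):

```python
import os
import tempfile

import torch

# Build a tiny stand-in checkpoint. A real Lightning checkpoint also
# carries metadata such as 'epoch' and 'optimizer_states'.
dummy = {"state_dict": {"net.encoder.weight": torch.zeros(2, 2)}}

path = os.path.join(tempfile.mkdtemp(), "mycheckpoint.ckpt")
torch.save(dummy, path)

# torch.load returns a plain dictionary; the model weights live under
# the 'state_dict' key.
ckpt = torch.load(path)
print(list(ckpt.keys()))                # → ['state_dict']
print(list(ckpt["state_dict"].keys()))  # → ['net.encoder.weight']
```

From there you can inspect or modify individual tensors like any other dictionary entries.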
PS: if you are interested in this project, don't forget to give it a :star:, it matters to us!
I gave this project a star, thank you! Sorry, after trying your suggestion on the kitti360 checkpoint: is it possible to identify which transformer blocks are the encoders and decoders in that `state_dict`? Which sections are the point segmentation module? Do the kitti360 config files determine these?
> is it possible to identify which transformer blocks are the encoders and decoders in that `state_dict`? which sections are the point segmentation module?
To answer these questions, I invite you to have a look at the code; it is fairly well commented and should help you understand how the project works. In particular, check out:

- `PointSegmentationModule` is the `pytorch-lightning` `LightningModule` wrapper holding the SPT model in its `net` attribute. Being familiar with `pytorch-lightning` will help.
- `SPT` is the SPT model.

You want to understand how these two work to be able to parse whatever you are interested in from the `state_dict`.
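Once you know the module hierarchy, one way to locate sub-modules is to group `state_dict` keys by their leading prefixes. A sketch with a toy dict standing in for the real `state_dict` (the key names below are illustrative, not the actual SPT parameter names):

```python
from collections import defaultdict

# Toy stand-in for ckpt['state_dict']; real keys follow the module
# hierarchy of the LightningModule, typically 'net.<submodule>.<param>'.
state_dict = {
    "net.encoder.block1.weight": None,
    "net.encoder.block1.bias": None,
    "net.decoder.block1.weight": None,
    "net.head.weight": None,
}

# Group parameter names by their first two hierarchy levels.
groups = defaultdict(list)
for key in state_dict:
    prefix = ".".join(key.split(".")[:2])
    groups[prefix].append(key)

for prefix, keys in sorted(groups.items()):
    print(prefix, "->", len(keys), "tensors")
# → net.decoder -> 1 tensors
# → net.encoder -> 2 tensors
# → net.head -> 1 tensors
```

Running the same grouping on the real checkpoint should make the encoder and decoder sections stand out by prefix.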
> do the kitti360 config files determine these?
The config files rely on `hydra` and have a nested, inheriting structure; the model configurations are among them. To understand how these work, being familiar with `hydra` and `lightning-hydra` will help.
Sorry, I am having trouble accessing the trained model weights of the `PointSegmentationModule`, especially the encoder and the decoder. I would like to inspect and edit them. How can I do it? Thank you.