I have searched related issues but cannot get the expected help. Yes
I have read the FAQ documentation but cannot get the expected help. Yes
I have trained a SimCLR model on my custom dataset. Now I would like to get the latent representation for each image in my dataset. I have extracted the ResNet backbone from SimCLR using the tools/model_converters/extract_backbone_weights.py script. I have also checked mmselfsup/demo/mmselfsup_colab_tutorial.ipynb; there, the author uses a benchmark config for ResNet and runs a training loop, but with a val_loader. I don't think that example actually saves the latent representations anywhere.
How can I simply get the latent representation for each image by applying the same preprocessing steps SimCLR uses (including resizing my images to 224 x 224) and forward-passing them through my extracted ResNet backbone?