open-mmlab / mmselfsup

OpenMMLab Self-Supervised Learning Toolbox and Benchmark
https://mmselfsup.readthedocs.io/en/latest/
Apache License 2.0

How can I get the latent representations for each image in my custom dataset using the extracted backbone network? #760

Open artunboz opened 1 year ago

artunboz commented 1 year ago

Checklist

  1. I have searched related issues but cannot get the expected help. Yes
  2. I have read the FAQ documentation but cannot get the expected help. Yes

I have trained a SimCLR model on my custom dataset. Now I would like to get the latent representation for each image in my dataset. I have extracted the ResNet backbone from SimCLR using the tools/model_converters/extract_backbone_weights.py script. I have also checked mmselfsup/demo/mmselfsup_colab_tutorial.ipynb, where the author uses a benchmark config for ResNet and runs a training loop with a val_loader, but I don't think that example actually saves the latent representations anywhere.

How can I simply get the latent representation for each image by applying the same preprocessing steps that SimCLR uses (including resizing my images to 224 x 224) and forward-passing them through my ResNet backbone?

Lhc0623 commented 8 months ago

I have a similar question: is there a unified API to get image embeddings or representations?