Closed · krishnakanagal closed this issue 3 years ago
👋 Hello @krishnakanagal, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Here's a start:
import torch

from models import yolo  # noqa: F401 (needed so torch can unpickle the checkpoint)
from models.experimental import attempt_load

model = attempt_load("./yolov5s.pt", map_location='cpu')
print(model)

# The second module is the nn.Sequential holding all the layers
sequential_model = list(model.modules())[1]

# Remove the final Detect() layer, leaving only the backbone + neck
model.model = sequential_model[:-1]

# Test that a forward pass still works
x = torch.zeros((1, 3, 64, 64), dtype=torch.float)
res = model.forward(x)
print(res.shape)

# Export: wrap in a dict so attempt_load() can reload it
ckpt = {'model': model}
torch.save(ckpt, "yolov5s_backbone.pt")
assert attempt_load("yolov5s_backbone.pt")
You might need to play about with where in the model you chop the end off, but that should do it. You can call attempt_load on your exported model and run stuff through it as usual. Alternatively, you can load the full model, run your images through it, and then check the activations at a particular layer - that's less trivial.
@krishnakanagal if you want to sample images by variety, I would just look at the images directly. The augmented mosaics used during training don't have high correlation to the base image selected in the mosaic, i.e. I don't see much rationale in your strategy as I understand it.
But you can always extract any layer's outputs x in the model's forward() method here, or use the strategy proposed in https://github.com/ultralytics/yolov5/issues/4644#issuecomment-912993896:
https://github.com/ultralytics/yolov5/blob/fad57c29cd27c0fcbc0038b7b7312b9b6ef922a8/models/yolo.py#L155-L157
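A hedged sketch of the hook-based alternative: instead of editing forward(), you can register a forward hook on the layer whose activations you want. The tiny Sequential below is just a stand-in for the real YOLOv5 model (on the actual model you would hook one of its internal layers instead of net[2]), and the mean-pool to a fixed-length vector is my own choice, not part of YOLOv5:

```python
import torch
import torch.nn as nn

# Stand-in network; pretend net[2] is the backbone layer whose output we want
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, 1),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # stash the layer output by name
    return hook

handle = net[2].register_forward_hook(save_activation("backbone_out"))
x = torch.zeros((1, 3, 32, 32))
_ = net(x)        # hook fires during the forward pass
handle.remove()   # clean up so later passes aren't recorded

feat = activations["backbone_out"]   # (1, 16, 32, 32) feature map
embedding = feat.mean(dim=(2, 3))    # global average pool -> (1, 16) vector
print(embedding.shape)               # torch.Size([1, 16])
```

The same pattern works on a loaded YOLOv5 model without modifying any repo code, which makes it easy to keep in sync with upstream.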
Hi @glenn-jocher, my dataset is too large to manually look through the images for sampling. I am using entropy-based uncertainty as a scoring function to identify the images the model is uncertain about, and I want to use the embeddings to calculate similarity and sample diverse images. I am trying to implement this paper: https://arxiv.org/pdf/2004.04699.pdf
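For reference, a minimal numpy sketch of the entropy scoring described above. The per-detection class probabilities are a stand-in, and aggregating per-image by taking the mean over detections is my own simplification, not necessarily the paper's exact aggregation:

```python
import numpy as np

def prediction_entropy(class_probs):
    """Shannon entropy score for a (num_detections, num_classes) probability array."""
    p = np.clip(class_probs, 1e-12, 1.0)       # guard log(0)
    per_det = -(p * np.log(p)).sum(axis=1)     # entropy of each detection
    return float(per_det.mean())               # aggregate to one score per image

# An uncertain image (near-uniform class probabilities) scores high...
uncertain = np.full((3, 4), 0.25)
# ...while confident, near-one-hot predictions score low.
confident = np.tile([0.97, 0.01, 0.01, 0.01], (3, 1))

print(prediction_entropy(uncertain) > prediction_entropy(confident))  # True
```

Images would then be ranked by this score, with the highest-entropy ones treated as the most informative candidates for labeling.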
@jveitchmichaelis Thank you so much for your suggestion.
@krishnakanagal hmm interesting. Well, training data variety is just as important as quantity, so your approach should have merit.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
❔ Question
I want to extract the feature embeddings from the YOLOv5 backbone. Can someone give me some pointers on how I would do it? I am using the default backbone for my training.
Additional context
I am trying to implement active learning for my project, and I want to use the embeddings to make sure I am extracting diverse images. Using embeddings, I can rank the images by similarity and sample them so that I get diverse images for the next iteration of training.
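One way to turn embeddings into a diverse sample is greedy farthest-point selection on cosine similarity: repeatedly pick the image least similar to everything already chosen. This is only an illustrative strategy, not the paper's exact method, and the random embeddings below stand in for real backbone features:

```python
import numpy as np

def select_diverse(embeddings, k):
    """Greedily pick k row indices whose embeddings are mutually dissimilar."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T                       # pairwise cosine similarity matrix
    chosen = [0]                            # seed with the first image
    while len(chosen) < k:
        # each candidate's worst case: max similarity to anything already chosen
        worst = sim[:, chosen].max(axis=1)
        worst[chosen] = np.inf              # never re-pick a chosen image
        chosen.append(int(worst.argmin()))  # take the most dissimilar candidate
    return chosen

rng = np.random.default_rng(0)
embs = rng.normal(size=(10, 16))            # stand-in for 10 image embeddings
print(select_diverse(embs, 3))
```

In practice you might first filter to the top-N images by the uncertainty score and then run the diversity selection only on that shortlist, which keeps the O(N²) similarity matrix small.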
Please let me know if the question needs more clarity.