DINOv1 implementation in PyTorch

README.md for DINOv1 Repository (redo from group project)

Overview

This repository implements DINO (self-DIstillation with NO labels), a self-supervised learning method for computer vision. It is built on Vision Transformers (ViTs) and includes custom neural network architectures and data augmentation strategies.
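At the core of DINO is a self-distillation objective: a student network is trained to match the softmax output of a momentum teacher on different augmented views, with the teacher output centered and sharpened to avoid collapse. The sketch below is a minimal, hypothetical illustration of that loss in PyTorch; function and variable names are not taken from this repository.

```python
# Minimal sketch of the DINO self-distillation loss.
# Names (dino_loss, center, tau_s, tau_t) are illustrative, not this repo's API.
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between teacher and student softmax distributions.

    student_out, teacher_out: (batch, dim) projection-head outputs.
    center: (dim,) running mean of teacher outputs, used to avoid collapse.
    tau_s, tau_t: student/teacher temperatures (teacher is sharper).
    """
    # Teacher targets: centered, sharpened, and detached (no gradient).
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()
    log_s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()

# Example with random projection-head outputs:
student = torch.randn(8, 256)
teacher = torch.randn(8, 256)
center = torch.zeros(256)
loss = dino_loss(student, teacher, center)
```

In the full method this loss is averaged over pairs of global/local crops, and the center is updated as an exponential moving average of teacher outputs.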

Main Components

Training and Usage

To train the DINO model:

  1. Set up your environment and install the required dependencies using Poetry.
  2. Configure your model and dataset paths in the YAML files in the configs folder.
  3. Run train.py with the desired configuration file.
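The steps above might look like the following session. The exact flag names (e.g. `--config`) and config filename are assumptions; check the repository's CLI help for the actual interface.

```shell
# Install dependencies declared in pyproject.toml (assumes Poetry is installed)
poetry install

# Edit the YAML config to point at your dataset and model settings,
# then launch training inside the Poetry environment.
# NOTE: the --config flag and file name are hypothetical examples.
poetry run python train.py --config configs/dino_vit.yaml
```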

Evaluation

To evaluate the DINO model:

  1. Set up your environment and install the required dependencies using Poetry.
  2. Pick the pretrained model you want to evaluate from training_output.
  3. Run eval.py from the CLI with the desired configuration.
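A session following these steps could look like the sketch below. The flag names and the checkpoint filename are hypothetical placeholders; substitute the actual checkpoint produced under training_output and consult eval.py's CLI help for the real arguments.

```shell
# Reuse the environment set up via Poetry
poetry install

# Evaluate a chosen checkpoint.
# NOTE: --config / --checkpoint flags and the .pth name are assumptions.
poetry run python eval.py \
    --config configs/dino_vit.yaml \
    --checkpoint training_output/checkpoint.pth
```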

References

For more information on the underlying concepts and methodologies, refer to the original paper: Emerging Properties in Self-Supervised Vision Transformers.