Closed: dnth closed this PR 8 months ago
This PR adds a `TimmEncoder` class for computing embeddings with timm models.

Parameters for `TimmEncoder`:

- `model_name` (str): The name of the model architecture to use.
- `num_classes` (int): The number of classes for the model. Use `num_classes=0` to exclude the last layer. Default: `0`.
- `pretrained` (bool): Whether to load pretrained weights. Default: `True`.
- `device` (str): Which device to load the model on. Choices: `"cuda"` or `"cpu"`. Default: `None`.
- `torch_compile` (bool): Whether to use `torch.compile` to optimize the model. Default: `False`.

Usage:

```python
import fastdup
from fastdup.embeddings.timm import TimmEncoder

# Compute embeddings
timm_model = TimmEncoder('resnet18')
timm_model.compute_embeddings("../images")

# Run fastdup
fd = fastdup.create(input_dir=timm_model.img_folder)
fd.run(annotations=timm_model.file_paths, embeddings=timm_model.embeddings)
```
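For reference, the constructor interface described above can be sketched as follows. This is a hypothetical stand-in (`TimmEncoderSketch`), not the actual fastdup source; it only mirrors the parameter names and defaults listed in this PR:

```python
class TimmEncoderSketch:
    """Hypothetical sketch of the TimmEncoder constructor described above."""

    def __init__(self, model_name, num_classes=0, pretrained=True,
                 device=None, torch_compile=False):
        self.model_name = model_name
        self.num_classes = num_classes    # 0 -> drop the classifier head, keep features
        self.pretrained = pretrained      # load pretrained weights by default
        self.device = device              # None -> let the library pick "cuda"/"cpu"
        self.torch_compile = torch_compile  # opt-in torch.compile optimization

# All optional parameters fall back to the documented defaults:
enc = TimmEncoderSketch('resnet18')
```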
Here's a bare minimum Colab notebook to try the integration - https://colab.research.google.com/drive/1hDI8SNQU1lhp6d3Q03BhxDk8ILT7dxGl?usp=sharing
I will make another PR for a proper notebook once this integration is merged.
@dbickson @amiralush This PR is ready for review
Merged manually into version 1.46