Hello, if I understood correctly, you need to use both your original dataset and the new images to train a new PatchCore model; you cannot finetune it. PatchCore is not trained in the usual sense: instead of updating weights, it extracts features from the training dataset and then uses those features to measure how different new images are. You can find more details here.
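Concretely, if the new images are normal samples you want the model to learn from, the practical route is to fold them into the normal training folder and retrain so the memory bank is rebuilt. A minimal sketch, assuming the current anomalib Python API (`Folder` / `Patchcore` / `Engine`); names differ in the 0.x config-file workflow shown further down in this issue:

```python
# Minimal retraining sketch (current anomalib API; class/argument names are
# assumptions if you are on an older 0.x release).
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Point this at the dataset that now also contains the ~50 new normal images.
datamodule = Folder(
    name="anomaly",
    root="./datasets/Anomaly",
    normal_dir="normal",
    abnormal_dir="anomaly",
    mask_dir="mask/anomaly",
    task="segmentation",
)

model = Patchcore()  # pass backbone=... / layers=... here to match your config

engine = Engine()
engine.fit(model=model, datamodule=datamodule)  # rebuilds the coreset memory bank from scratch
```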
As @abc-125 mentioned, those images may not help with training the PatchCore model. They could, however, help with validation when tuning your threshold value for better performance.
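If you go the threshold-tuning route, one low-effort option is to score the new images with the ONNX model you already exported and look at the distribution of scores. A rough sketch with onnxruntime: the preprocessing mirrors the config below (resize 256, center-crop 224, ImageNet normalization), but the input/output names and whether the output is an anomaly map or an image-level score depend on your export, so check `session.get_inputs()` / `get_outputs()` first (the file paths here are placeholders):

```python
# Hedged sketch: score new images with the exported ONNX PatchCore model and
# inspect the score distribution to validate or pick a threshold.
# The output layout is an assumption -- verify it against your exported model.
import glob

import numpy as np
import onnxruntime as ort
from PIL import Image

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def preprocess(path: str) -> np.ndarray:
    """Resize to 256, center-crop to 224, normalize with ImageNet stats (as in the config)."""
    img = Image.open(path).convert("RGB").resize((256, 256))
    img = np.asarray(img, dtype=np.float32) / 255.0
    top = (256 - 224) // 2
    img = img[top:top + 224, top:top + 224]
    img = (img - MEAN) / STD
    return img.transpose(2, 0, 1)[None].astype(np.float32)  # NCHW, batch of 1


session = ort.InferenceSession("patchcore.onnx")  # path to your exported model
input_name = session.get_inputs()[0].name

scores = []
for path in glob.glob("./new_images/*.png"):  # the ~50 new images
    outputs = session.run(None, {input_name: preprocess(path)})
    # Assumption: the first output is a pixel-level anomaly map; its max is taken
    # as the image-level score. Adjust if your export returns a score directly.
    scores.append(float(np.max(outputs[0])))

print("min / mean / max score:", min(scores), sum(scores) / len(scores), max(scores))
```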
What is the motivation for this task?
We need to update the model in the system.
Describe the solution you'd like
I have trained a PatchCore model on my dataset and saved the weights as .ckpt and ONNX files. Now we have received a few new images (around 50). How can we update the trained PatchCore model with this small dataset? We are looking forward to your reply.
Additional context
```yaml
dataset:
  name: anomaly
  format: folder
  path: ./datasets/Anomaly
  normal_dir: normal
  abnormal_dir: anomaly
  mask_dir: mask/anomaly
  normal_test_dir: null
  extensions: null
  task: segmentation
  category: bottle
  train_batch_size: 1
  eval_batch_size: 1
  num_workers: 1
  image_size: 256
  center_crop: 224
  normalization: imagenet
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir
  test_split_ratio: 0.2
  val_split_mode: same_as_test
  val_split_ratio: 0.5
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: false
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
```
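For completeness, with a 0.x-style config like the one above, the retraining suggested in the comments amounts to adding the new images to `./datasets/Anomaly/normal` and re-running training. A rough sketch of the config-driven flow that anomalib 0.x's `tools/train.py` implements; the module paths are from 0.x and may have moved in later releases, and the config path is a placeholder:

```python
# Sketch of the anomalib 0.x config-driven training flow (roughly what tools/train.py does).
# Module paths below are from 0.x and may differ in your installed version.
from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks
from pytorch_lightning import Trainer

config = get_configurable_parameters(model_name="patchcore", config_path="config.yaml")

datamodule = get_datamodule(config)  # reads ./datasets/Anomaly, now including the new images
model = get_model(config)            # fresh PatchCore; the memory bank is rebuilt, not finetuned
callbacks = get_callbacks(config)    # checkpointing, metrics, and optional export, per the config

trainer = Trainer(**config.trainer, callbacks=callbacks)
trainer.fit(model=model, datamodule=datamodule)
trainer.test(model=model, datamodule=datamodule)
```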