Closed: shrinand1996 closed this issue 6 months ago
Hi, such small defects are quite a challenging problem. We have a system in development for this (#1226), but even with that it's still quite challenging if they are really small. Can the defect appear anywhere in the image? If it's limited to a specific region, I'd use cropping to reduce the image size.
Hi, thank you for answering. Can you elaborate on cropping? Would tiling work here? The defect only appears where it is present, not all over the image (so the area that has to be inspected stays quite big). It is like a dust particle on a sheet of paper.
Could you also provide an example of how to make it work? I'll share the results.
About cropping: my idea was to crop the image so it fits the area of interest as tightly as possible, to avoid large files. But if the area of interest already fills the image, then this is not relevant. Tiling could work here; the tiled ensemble was specifically designed for such cases, but given the scale of your anomalies (a dust particle on paper) it'll still be quite challenging.
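The cropping idea above can be sketched in a few lines (a minimal example using Pillow; the file name and box coordinates are placeholders, not values from this thread):

```python
from PIL import Image


def crop_roi(path: str, box: tuple[int, int, int, int]) -> Image.Image:
    """Crop an image to the region of interest before training/inference.

    box is (left, upper, right, lower) in pixels, as Pillow expects.
    """
    img = Image.open(path)
    return img.crop(box)


# Example: keep only a 1024x1024 window where defects can occur,
# so the file fed to the model stays small.
# roi = crop_roi("inspection.png", (0, 0, 1024, 1024))
# roi.save("inspection_roi.png")
```

This only helps when defects are confined to a known region; if they can appear anywhere, the crop degenerates to the full image.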
@shrinand1996 Can you share an example image of the anomaly you are trying to detect?
@TurboJonte Sure. This is a single image; multiply it by 4 to get the full image. As I was unable to upload such a big file, I cropped a part of it. To see the defect, you'll have to download the image and zoom in (really zoom in) to spot the black dirt spots. There are multiple dust spots across the image, but this is the minimum size I need to detect. * defect in red circle
I see the defect. It is really small, so I think that without the tiled ensemble this will probably not work. The defects would probably disappear with downscaling, but keeping the full image would cause memory problems. Check PR #1226 for the tiled ensemble, but note it's not updated for v1.
@blaz-r thanks for your swift response. I will give it a try. Could you provide a tutorial for it, like a notebook? Thanks. I tried following the tiling notebook in this repo without any luck; the results didn't change at all. I kept the stride at 256 and the tile size at 256.
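For reference, tiling with tile size 256 and stride 256 (the settings tried above) just cuts the image into a grid of patches that are processed at full resolution. A minimal NumPy sketch of the idea (not the anomalib implementation):

```python
import numpy as np


def tile_image(img: np.ndarray, tile: int = 256, stride: int = 256) -> list[np.ndarray]:
    """Cut an H x W x C image into tile x tile patches with the given stride.

    With stride == tile the patches do not overlap; a smaller stride
    overlaps them, which helps when a defect falls on a tile border.
    """
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patches.append(img[y:y + tile, x:x + tile])
    return patches


# A 1024x1024 image with tile=256, stride=256 yields a 4x4 grid of 16 tiles.
```

Because each tile keeps its native resolution, a 4-5 px defect stays 4-5 px inside its tile instead of being resized away.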
There isn't a notebook as of yet, but the fork also has docs; you'll need to check them manually. You'd need to use the training script inside the tools folder.
Closing this, as it seems to be a duplicate of https://github.com/openvinotoolkit/anomalib/issues/1727. You can follow the updates in that issue or in the PR itself.
I have a huge image obtained by horizontally concatenating the feeds from four 5 MP cameras. The defect in a single image is already very small (4-5 pixels), and I want to detect it on the final image, which is 4 times the size of a single image. Currently I am unable to detect these small defects even on a single image. It would be helpful if some guidance could be provided.
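A back-of-the-envelope calculation shows why such a defect vanishes at a typical training resolution (the 2592 px width is an assumed value for a 5 MP sensor, not a figure from this thread):

```python
# Assumed sensor geometry: a 5 MP camera at roughly 2592 x 1944,
# with four feeds concatenated horizontally.
single_width = 2592
full_width = 4 * single_width      # 10368 px
defect_px = 5                      # defect size in the original image

# Resizing the full image to 512 px wide (the image_size in the config)
# shrinks the defect by the same factor.
scale = 512 / full_width
resized_defect = defect_px * scale

print(f"defect after resize: {resized_defect:.2f} px")
assert resized_defect < 1          # the defect is now sub-pixel
```

Anything smaller than about one pixel after resizing is effectively invisible to the model, which is why tiling (processing 256 px patches at native resolution) is suggested instead of downscaling.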
I am running the model with this configuration. (I have removed tiling as it didn't give better results.) I have also tried the medium model with no luck.
```yaml
dataset:
  name: testing
  format: folder
  path: testing
  normal_dir: good
  abnormal_dir: bad
  normal_test_dir: good
  mask: null
  extensions: null
  task: classification
  train_batch_size: 1
  eval_batch_size: 16
  num_workers: 20
  image_size: 512 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: none # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)

model:
  name: efficient_ad
  teacher_out_channels: 384
  model_size: small # options: [small, medium]
  lr: 0.0001
  weight_decay: 0.00001
  padding: false
  pad_maps: true # relevant for "padding: false", see EfficientAd in lightning_model.py
  # generic params
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: False # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: torch # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 50
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 200
  min_epochs: null
  max_steps: 70000
  min_steps: null
  max_time: null
  limit_train_batches: 16.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: true
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
```