hmyao22 / DADF

The official implementation of the DADF paper for industrial visual anomaly detection (VAD)
MIT License

Performance per category in MVTec LOCO #3

Open mjack3 opened 1 month ago

mjack3 commented 1 month ago

Hello,

I want to congratulate you on this great work. I wonder if you have the performance per subset of the MVTec LOCO dataset in structural and logical terms. For example, what is the structural and logical performance of the "breakfast box" data subset? Regards.

hmyao22 commented 1 month ago

> Hello,
>
> I want to congratulate you for this great work. I wonder if you have the performance per subset of MVTec LOCO data in structural and logical terms. For example, what is the structural and logical performance of the "breakfast box" data subset? Regards.

Thank you for your attention to this work. Per-class results on the MVTec LOCO dataset are available in the full accepted article (H. Yao et al., "Dual-Attention Transformer and Discriminative Flow for Industrial Visual Anomaly Detection," IEEE Transactions on Automation Science and Engineering, doi: 10.1109/TASE.2023.3322156).

(attached image: LOCO_DADF)

mjack3 commented 1 month ago

Hi, I mean per category and per type of anomaly. For example, what is the structural and logical performance of the "breakfast box" data subset?

hmyao22 commented 1 month ago

> Hi, I mean per-category and type of anomaly. For example, what is the structural and logical performance of the "breakfast box" data subset?

Hey, I understand what you mean. Here are the results:

Breakfast box:

```json
{
  "classification": {
    "auc_roc": {
      "logical_anomalies": 0.7577368296716278,
      "mean": 0.7528335564480143,
      "structural_anomalies": 0.7479302832244008
    }
  },
  "localization": {
    "auc_spro": {
      "logical_anomalies": { "0.01": 0.23851688165194795, "0.05": 0.42030049268785186, "0.1": 0.5171552883706604, "0.3": 0.6932737185740211, "1.0": 0.8876883259847754 },
      "mean": { "0.01": 0.30339016998155405, "0.05": 0.5378453272294377, "0.1": 0.6309044485678232, "0.3": 0.7742798907730943, "1.0": 0.9186257028896639 },
      "structural_anomalies": { "0.01": 0.36826345831116014, "0.05": 0.6553901617710235, "0.1": 0.744653608764986, "0.3": 0.8552860629721676, "1.0": 0.9495630797945525 }
    }
  }
}
```

Bottle:

```json
{
  "classification": {
    "auc_roc": {
      "logical_anomalies": 0.9870392568175007,
      "mean": 0.9857672517677363,
      "structural_anomalies": 0.984495246717972
    }
  },
  "localization": {
    "auc_spro": {
      "logical_anomalies": { "0.01": 0.7045757602056713, "0.05": 0.8925472523245211, "0.1": 0.9444198073663437, "0.3": 0.9814014697440837, "1.0": 0.994420440923225 },
      "mean": { "0.01": 0.6788655484650101, "0.05": 0.871671292865424, "0.1": 0.9218357518363609, "0.3": 0.9681944837869352, "1.0": 0.9900926861636277 },
      "structural_anomalies": { "0.01": 0.6531553367243489, "0.05": 0.8507953334063268, "0.1": 0.8992516963063782, "0.3": 0.9549874978297866, "1.0": 0.9857649314040302 }
    }
  }
}
```

Pushpins:

```json
{
  "classification": {
    "auc_roc": {
      "logical_anomalies": 0.7666825927695493,
      "mean": 0.8100723752897666,
      "structural_anomalies": 0.8534621578099839
    }
  },
  "localization": {
    "auc_spro": {
      "logical_anomalies": { "0.01": 0.34047545512150906, "0.05": 0.6517577880415395, "0.1": 0.7670278940137965, "0.3": 0.9065423178191194, "1.0": 0.9719626953457359 },
      "mean": { "0.01": 0.40955201161586363, "0.05": 0.6637830423515244, "0.1": 0.7601456663276573, "0.3": 0.8913403497944923, "1.0": 0.9651557837533352 },
      "structural_anomalies": { "0.01": 0.4786285681102182, "0.05": 0.6758082966615093, "0.1": 0.7532634386415182, "0.3": 0.8761383817698652, "1.0": 0.9583488721609346 }
    }
  }
}
```

Screw bag:

```json
{
  "classification": {
    "auc_roc": {
      "logical_anomalies": 0.6623788440828049,
      "mean": 0.7731126527491193,
      "structural_anomalies": 0.8838464614154338
    }
  },
  "localization": {
    "auc_spro": {
      "logical_anomalies": { "0.01": 0.12210382663195764, "0.05": 0.3753745777212909, "0.1": 0.503156157102832, "0.3": 0.7438426171030394, "1.0": 0.9193061257459721 },
      "mean": { "0.01": 0.3080911160464428, "0.05": 0.570607389633623, "0.1": 0.6783363080090267, "0.3": 0.8437802984623054, "1.0": 0.95120238769088 },
      "structural_anomalies": { "0.01": 0.4940784054609279, "0.05": 0.765840201545955, "0.1": 0.8535164589152213, "0.3": 0.9437179798215714, "1.0": 0.9830986496357876 }
    }
  }
}
```

Connector:

```json
{
  "classification": {
    "auc_roc": {
      "logical_anomalies": 0.7848583877995643,
      "mean": 0.8634128814924663,
      "structural_anomalies": 0.9419673751853683
    }
  },
  "localization": {
    "auc_spro": {
      "logical_anomalies": { "0.01": 0.3972030896423277, "0.05": 0.622604697233868, "0.1": 0.739499826907959, "0.3": 0.8970071288936207, "1.0": 0.9683996637207837 },
      "mean": { "0.01": 0.49607379173534916, "0.05": 0.7244507585255396, "0.1": 0.8127777381850738, "0.3": 0.9245598768802799, "1.0": 0.976937182344492 },
      "structural_anomalies": { "0.01": 0.5949444938283706, "0.05": 0.8262968198172113, "0.1": 0.8860556494621887, "0.3": 0.9521126248669392, "1.0": 0.9854747009682003 }
    }
  }
}
```
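For anyone tabulating these numbers: in these metrics the reported `mean` AUC-ROC is the average of the logical and structural scores, so the per-category dicts above can be aggregated with a small script. This is just a sketch; the `summarize` helper and the `results` dict layout below are mine (mirroring the JSON posted here), not part of the DADF repository.

```python
# Sketch: aggregate per-type AUC-ROC scores from metrics dicts shaped like
# the JSON posted in this thread. `summarize` is a hypothetical helper,
# not an API of the DADF repository.

def summarize(results: dict) -> dict:
    """Return {category: (logical, structural, mean)} AUC-ROC tuples."""
    out = {}
    for category, metrics in results.items():
        auc = metrics["classification"]["auc_roc"]
        out[category] = (
            auc["logical_anomalies"],
            auc["structural_anomalies"],
            auc["mean"],
        )
    return out

# Example input re-using the breakfast box numbers from this thread.
results = {
    "breakfast_box": {
        "classification": {
            "auc_roc": {
                "logical_anomalies": 0.7577368296716278,
                "mean": 0.7528335564480143,
                "structural_anomalies": 0.7479302832244008,
            }
        }
    }
}

for cat, (log_auc, struct_auc, mean_auc) in summarize(results).items():
    # The reported mean is the average of the two anomaly types.
    assert abs(mean_auc - (log_auc + struct_auc) / 2) < 1e-12
    print(f"{cat}: logical={log_auc:.4f} structural={struct_auc:.4f} mean={mean_auc:.4f}")
```

Loading the real per-category files (e.g. via `json.load`) instead of the inline dict gives the same summary for the other categories.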

hmyao22 commented 1 month ago

> Hi, I mean per-category and type of anomaly. For example, what is the structural and logical performance of the "breakfast box" data subset?

By the way, my WeChat number is 19911813371; feel free to contact me if you run into any trouble.

mjack3 commented 1 month ago

Thank you very much!

mjack3 commented 3 weeks ago

Hello, I hope you are doing well.

Could you tell me how to run this repository to train the method on MVTec AD? To simplify things, let's assume the dataset is at the root of the project. I would like to reproduce some anomaly maps for a qualitative analysis.

Regards.