Benchmarking of Generative Anomaly Detection for Multiple Instance Learning problems. Inspired by GenerativeAD.jl.
To install, clone the repository and instantiate the project:

```bash
cd path/to/repo/GroupAD.jl
julia --project
```

Then, in the Julia REPL:

```julia
]instantiate

using GroupAD
data = GroupAD.load_data("Fox")
# the last line should ask for permission to download the datasets
```
To run a single experiment, e.g. the basic VAE model on the Tiger dataset:

```bash
cd scripts/experiments_mill
julia vae_basic.jl 5 Tiger
```

To evaluate the results:

```bash
julia GroupAD.jl/scripts/evaluate_performance_single.jl path/to/results
```
Source files can be found in `src`. It contains several utility modules as well as the model implementations. Since every experiment is a little bit different, each group of datasets has its own experiment folder inside the `scripts` folder.

Each model has its own run script and a bash script for submitting it to the cluster. Scripts for submitting experiments to run in parallel are also present. Always submit a run script from its own scripts folder.
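The parallel submission scripts follow a simple loop-over-datasets pattern. A hypothetical sketch of that pattern (the function and echoed command are illustrative assumptions; the actual `run_parallel.sh` in each folder may differ):

```shell
#!/bin/bash
# Illustrative sketch only -- the real run_parallel.sh in the repo may differ.
# Arguments mirror the usage shown in this README:
#   model name, number of experiments, CV repetitions, parallel processes, dataset list
submit_all() {
    local model=$1 n_exp=$2 reps=$3 n_proc=$4 datasets_file=$5
    while read -r dataset; do
        # on the cluster, this echo would be wrapped in an sbatch submission
        echo "julia --project -p ${n_proc} ${model}.jl ${reps} ${dataset}"
    done < "${datasets_file}"
}

# usage with a toy dataset list
printf 'Fox\nTiger\n' > /tmp/datasets_toy.txt
submit_all vae_basic 20 5 10 /tmp/datasets_toy.txt
```

Each line of the dataset file yields one submission, so adding a dataset is just adding a line to the text file.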
Note: for the LHCO dataset, Python is needed for data loading. Load Python/3.8 and install pandas, e.g. on the cluster:

```bash
ml Julia
ml Python/3.8
pip install pandas
```
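Before launching LHCO experiments, it may help to verify that pandas actually works in the loaded environment. A minimal, self-contained sanity check (the file path and columns below are made up for illustration, not the actual LHCO format):

```python
import pandas as pd

# Toy round trip -- only verifies that pandas is importable and functional.
# The real LHCO files and the loader in GroupAD.jl may use a different format.
df = pd.DataFrame({"px": [0.1, 0.2], "py": [0.3, 0.4], "label": [0, 1]})
df.to_csv("/tmp/pandas_check.csv", index=False)

loaded = pd.read_csv("/tmp/pandas_check.csv")
assert loaded.shape == (2, 3)
print("pandas OK")
```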
An example of submitting experiments via Slurm:

```bash
cd GroupAD.jl/scripts/experiments_mill
./run_parallel.sh vae_basic 20 5 10 datasets_mill.txt
```

This will run 20 experiments with the basic VAE model, each with 5 crossvalidation repetitions, on all datasets listed in the text file, with 10 parallel processes for each dataset.