Yun-Fu opened this issue 2 months ago

Thanks for your wonderful work! I am confused about how the training-set contamination in the robustness experiment is constructed. Is it done by repeatedly adding known abnormal edges from the training set, or by randomly constructing a certain number of edges connected to abnormal-state nodes? If it is the first method, how do you determine which edges to add? Specifically, assuming there are 100 abnormal edges in the training set and we want to add 50 anomalous edges, should we add the first 50 edges according to their timestamps? Thanks for your kind reply!

First, thank you for the great question. In this experiment, we aimed to investigate whether SLADE, which assumes all instances in the training set are normal, remains robust when the actual anomaly ratio in the training set is high (for example, exceeding 10%). Since no existing dataset has an actual anomaly ratio greater than 10%, we chose the first method you mentioned and repeatedly added known abnormal edges. Specifically, we randomly sampled the desired percentage of abnormal edges, with duplicates allowed, from the abnormal edges in the training set, and then assigned each sampled edge a random timestamp within the training window. Since the goal of contaminating the training set was simply to raise the actual anomaly ratio, we used this simple method, though there are certainly more rigorous ways to add abnormal edges. I hope this answer is helpful. Thank you!
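For concreteness, here is a minimal NumPy sketch of the contamination step described above. The array names (src, dst, t, msg, label), the label convention (1 = abnormal), the interpretation of the ratio relative to the training-set size, and the uniform timestamp sampling are assumptions for illustration, not the exact code used.

```python
import numpy as np

def contaminate_training_set(src, dst, t, msg, label, ratio, rng):
    """Inject duplicated abnormal edges to raise the training anomaly ratio.

    Abnormal edges are sampled with replacement (duplicates allowed) and
    re-inserted at random timestamps within the training window; their
    src, dst, and msg attributes are kept unchanged.
    """
    anom_idx = np.where(label == 1)[0]     # known abnormal edges (1 = abnormal)
    n_extra = int(ratio * len(src))        # number of edges to inject (assumed
                                           # relative to training-set size)
    picked = rng.choice(anom_idx, size=n_extra, replace=True)

    # Random timestamps within the training window (uniform is an assumption).
    new_t = rng.uniform(t.min(), t.max(), size=n_extra)

    src = np.concatenate([src, src[picked]])
    dst = np.concatenate([dst, dst[picked]])
    msg = np.concatenate([msg, msg[picked]])
    t = np.concatenate([t, new_t])
    # Injected copies are treated (labeled) as normal, as discussed below.
    label = np.concatenate([label, np.zeros(n_extra, dtype=label.dtype)])

    order = np.argsort(t)                  # keep the event stream time-ordered
    return src[order], dst[order], t[order], msg[order], label[order]
```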
Thank you very much for your kind response! I now have a general understanding of the anomalous-edge addition process: randomly select a certain percentage of existing abnormal edges from the training set, keep their src, dst, and msg attributes, assign them random timestamps t within the training window, and change their labels to normal, thereby constructing new anomalous edges. Is my understanding correct? Since I want to replicate your robustness experiment for comparison, I have two more questions about model training. First, did you construct 10 different anomalous datasets and train 10 models separately to obtain the average result, or did you create a single anomalous dataset and train on it 10 times? Second, did you use the same random seed across the different baselines, so that the randomly constructed anomalous datasets are consistent for a fair comparison, or did you create 10 randomly constructed anomalous datasets? I greatly appreciate your assistance and guidance, and I look forward to your reply!
Hello. Yes, your understanding of the dataset-contamination process is correct. Regarding your first question: conducting the experiments with 10 different anomalous datasets would be more rigorous; however, the scores reported in the current online appendix are averages over 10 training runs on a single anomalous dataset. We expected that, even if different anomalous datasets were generated from different random seeds, SLADE would not show significant performance variation, since it learns the major normal patterns. Regarding your second question, we used the same seed for the baselines and SLADE. Thank you!
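In case it helps with replication, a minimal sketch of this protocol is below: one contaminated dataset built with a seed shared across all baselines, then 10 training runs averaged. The function train_and_eval and the seed values are hypothetical placeholders, not the actual pipeline.

```python
import numpy as np

def train_and_eval(dataset, seed):
    """Hypothetical stand-in for one SLADE training + evaluation run;
    replace with the actual pipeline. Returns an AUC-like score."""
    rng = np.random.default_rng(seed)
    return 0.9 + 0.01 * rng.standard_normal()   # placeholder score

# One contaminated dataset, built with a seed shared by all baselines,
# so every method is trained on the exact same anomalous edges.
DATA_SEED = 0
data_rng = np.random.default_rng(DATA_SEED)
dataset = None  # e.g., contaminate_training_set(..., rng=data_rng)

# Ten training runs on that single dataset, averaged for the reported score.
scores = [train_and_eval(dataset, seed=s) for s in range(10)]
print(f"mean score over 10 runs: {np.mean(scores):.4f} (std {np.std(scores):.4f})")
```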