If that is fine, could you run the program with the same seed several times and see whether the results are stable? I suspect it may be an issue similar to previous ones, e.g. #17. Also, note that we obtained our results on GPU. While GPU runs should behave the same as CPU runs given the same seed, this is not guaranteed, as some libraries may introduce randomness in their implementations. For this case, you could also try a few other seeds.
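To make the "same seed, several runs" check concrete, here is a minimal sketch of verifying reproducibility. It uses Python's stdlib `random` as a stand-in for the training run; the commented `torch` calls are the usual PyTorch seeding/determinism knobs (this is an assumption about the setup here, not something confirmed in this thread).

```python
import random

def run_once(seed):
    # Re-seed before each run. In a PyTorch training script the analogous
    # calls would be torch.manual_seed(seed), numpy.random.seed(seed),
    # and torch.cuda.manual_seed_all(seed).
    random.seed(seed)
    return [random.random() for _ in range(5)]

# Two runs with the same seed should produce identical outputs on CPU.
a = run_once(42)
b = run_once(42)
assert a == b

# On GPU this can still diverge: some cuDNN kernels are nondeterministic
# even with a fixed seed. In PyTorch, deterministic behavior can be
# requested (at some speed cost) with, e.g.:
#   torch.backends.cudnn.deterministic = True
#   torch.backends.cudnn.benchmark = False
#   torch.use_deterministic_algorithms(True)
```

If repeated runs with the same seed already give different scores on your machine, the variance is coming from such nondeterministic operations rather than from the hyperparameters.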
I preprocessed the SWaT dataset according to scripts/process_swat.py.
I set the hyperparameters for model training according to your comment: https://github.com/d-ailin/GDN/issues/4#issuecomment-876086591
I got significantly different scores compared to the paper's results: F1 score 0.7699, precision 0.9728, recall 0.6372.
Can you confirm that I set the correct hyperparameters?