alex-golub99 closed this pull request 1 month ago
I know that the checks are failing, but the failing check is unrelated to this change and shouldn't have been affected by it at all. There seems to be some randomness in the `train_llp` function that causes the check to sometimes fail and sometimes pass. That's worth looking into, but I don't think this failed check should be a reason not to merge this PR.
I tried increasing the size of the datasets used in the failing test, which, as far as I understand, is basically just a check that the network is able to train. That didn't fix the issue, and I'm having trouble debugging it because I can't reproduce the error locally. Suggestions on what to try next would be welcome, as I'm not sure what to attempt at the moment.
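One way to try to reproduce the flakiness locally might be to rerun the failing test many times and count the failures. This is only a sketch: it assumes the test can be selected by name with pytest, and `test_train_llp` is a placeholder, not the actual test name.

```python
import subprocess

# Hypothetical sketch: rerun the (placeholder-named) flaky test repeatedly
# to estimate how often it fails locally.
n_runs = 50
failures = 0
for i in range(n_runs):
    result = subprocess.run(
        ["pytest", "-x", "-k", "test_train_llp"],  # placeholder test name
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        failures += 1
        print(f"run {i}: FAILED")
print(f"{failures}/{n_runs} runs failed")
```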
Note that the test failed once and passed once. The problem is that the random number generator sometimes picks out a set of events with nothing to train on for background or signal, and that causes the failure. If you keep re-running it, it will sometimes pass and sometimes fail. It is only important if it fails all the time! So, no worries on this!
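A minimal sketch of how the test's sampling could be made robust against this, assuming the events and labels are numpy arrays; the helper name and the redraw logic here are illustrative, not the repository's actual code:

```python
import numpy as np

# Fix the seed so the test is reproducible, and redraw until the sample
# contains both signal and background, so training always has something
# to learn from. All names below are assumptions for illustration.
rng = np.random.default_rng(seed=42)

def sample_events(events, labels, n, rng, max_tries=100):
    """Draw n events; retry until both classes are represented."""
    for _ in range(max_tries):
        idx = rng.choice(len(events), size=n, replace=False)
        if len(set(labels[idx])) > 1:  # both signal and background present
            return events[idx], labels[idx]
    raise RuntimeError("Could not draw a sample containing both classes")
```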
The function low_or_high_pt_selection_train in utils.py now correctly does the low/high mass splitting, in a way that does not require QCD and BIB events to have artificial mH and mS values. Also fixed a couple of typos in the convert_divert.py file.
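For illustration only, a sketch of the general idea of splitting on a kinematic quantity rather than on artificial mH/mS values; this is not the actual implementation of low_or_high_pt_selection_train, and the column name and cut value are assumptions:

```python
import pandas as pd

# Illustrative sketch: route events to the low- or high-pT training sample
# with a cut on the leading-jet pT, so QCD and BIB events do not need
# artificial mH and mS values to end up in the right sample.
def split_low_high_pt(events: pd.DataFrame, pt_cut: float = 120.0):
    """Return (low_pt_events, high_pt_events) based on a pT threshold."""
    is_high = events["jet_pt"] >= pt_cut  # "jet_pt" is a placeholder column
    return events[~is_high], events[is_high]
```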