Significant efficiency improvement to trainTMVAforGammaHadronSeparation, with a large reduction of the memory footprint (previously beyond 20 GB) and faster execution.
The main change is that a temporary ROOT file with the events passing the signal/background pre-cuts is written to disk and used in a second step for the training (TMVA unfortunately reads in all events before applying cuts).
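A minimal sketch of the two-step approach (not the actual Eventdisplay implementation); the file, tree, and variable names and the cut values below are hypothetical placeholders. Step 1 copies only the events passing the pre-cuts into a temporary ROOT file; step 2 trains on the slimmed trees, so TMVA never has to hold the full, uncut event sample in memory:

```cpp
#include "TCut.h"
#include "TFile.h"
#include "TTree.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

void trainFromPreCutFile()
{
    TFile fSig( "gamma_simulations.root", "READ" );   // hypothetical signal input
    TFile fBck( "background_data.root", "READ" );     // hypothetical background input
    TTree* sigIn = (TTree*)fSig.Get( "data" );
    TTree* bckIn = (TTree*)fBck.Get( "data" );
    TCut preCutSignal( "MSCW > -2. && MSCW < 2." );    // hypothetical pre-cuts
    TCut preCutBackground( "MSCW > -2. && MSCW < 2." );

    // step 1: write the pre-cut event samples into a temporary file on disk
    TFile fTmp( "tmva_precut_tmp.root", "RECREATE" );
    TTree* sig = sigIn->CopyTree( preCutSignal );
    sig->SetName( "signal" );
    TTree* bck = bckIn->CopyTree( preCutBackground );
    bck->SetName( "background" );
    fTmp.Write();

    // step 2: train on the slimmed trees; no further cuts are passed to TMVA,
    // so it does not read events that would be rejected anyway
    TFile fOut( "tmva_output.root", "RECREATE" );
    TMVA::Factory factory( "TMVAClassification", &fOut, "!V:AnalysisType=Classification" );
    TMVA::DataLoader loader( "dataset" );
    loader.AddVariable( "MSCW", 'F' );
    loader.AddVariable( "MSCL", 'F' );
    loader.AddSignalTree( sig, 1.0 );
    loader.AddBackgroundTree( bck, 1.0 );
    loader.PrepareTrainingAndTestTree( TCut( "" ), "SplitMode=Random:NormMode=NumEvents" );
    factory.BookMethod( &loader, TMVA::Types::kBDT, "BDT", "NTrees=200" );
    factory.TrainAllMethods();
}
```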
Notable algorithm changes:
a random fraction of events is selected from each input file to match the number of training events requested
If fewer events are available, the training option string is adjusted and the training proceeds with a lower number of events than requested (!!); a rough sketch of the event selection and this fallback follows the list.
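The sketch below illustrates both changes under stated assumptions: a hypothetical helper (selectRandomEvents, not a function in the repository) keeps a random subset of events from one input tree so that roughly the requested number enters the training, and falls back to all events while shrinking the nTrain_* setting in the TMVA option string; the exact option names and values used by trainTMVAforGammaHadronSeparation may differ.

```cpp
#include <string>
#include "TEntryList.h"
#include "TRandom3.h"
#include "TTree.h"

TEntryList* selectRandomEvents( TTree* tree, Long64_t nRequested, std::string& trainOptions )
{
    TRandom3 rndm( 0 );
    Long64_t nAvailable = tree->GetEntries();
    TEntryList* list = new TEntryList( "trainingEvents", "randomly selected training events" );

    if( nAvailable <= nRequested )
    {
        // fewer events than requested: take everything and adjust the
        // training option string to the available statistics
        for( Long64_t i = 0; i < nAvailable; i++ ) list->Enter( i );
        trainOptions = "nTrain_Signal=" + std::to_string( nAvailable )
                     + ":nTest_Signal=" + std::to_string( nAvailable );
        return list;
    }

    // otherwise keep each event with probability nRequested / nAvailable
    double fraction = (double)nRequested / (double)nAvailable;
    for( Long64_t i = 0; i < nAvailable; i++ )
    {
        if( rndm.Uniform() < fraction ) list->Enter( i );
    }
    return list;
}
```

The returned list could then be applied with tree->SetEntryList( list ) before the pre-cut events are copied to the temporary file.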