Closed — valassi closed this issue 3 months ago
The fact that Ntry/goodhel have a mirror index is as old as the matrix_madevent_group_v4.inc file (this is even the original reason for that file): https://github.com/mg5amcnlo/mg5amcnlo/tree/382e71efd61ef2c2f19be5664b1228220e095ebe (version number is 1.0.0 -> 2011)
Now I have added a comparison test that crashes at the Fortran stage if this is not the case (in LTS), and it crashed for p p > t t~ j j:
Added good helicity 4 3.1889029540868283 in event 1 local: 1 mirror 1
Added good helicity 8 0.63182311673402225 in event 1 local: 1 mirror 1
Added good helicity 12 2.3539048543925931 in event 1 local: 1 mirror 1
Added good helicity 16 2.8011885049441809 in event 1 local: 1 mirror 1
Added good helicity 18 1.3875388603326930 in event 1 local: 1 mirror 1
Added good helicity 19 8.1889062609589534 in event 1 local: 1 mirror 1
Added good helicity 22 0.44425339515692869 in event 1 local: 1 mirror 1
Added good helicity 23 1.6084314717877597 in event 1 local: 1 mirror 1
Added good helicity 26 1.0515001769765651 in event 1 local: 1 mirror 1
Added good helicity 27 3.2861119324407699 in event 1 local: 1 mirror 1
Added good helicity 30 2.2457791222478369 in event 1 local: 1 mirror 1
Added good helicity 31 4.8116593499408635 in event 1 local: 1 mirror 1
Added good helicity 34 4.8116593499408635 in event 1 local: 1 mirror 1
Added good helicity 35 2.2457791222478369 in event 1 local: 1 mirror 1
Added good helicity 38 3.2861119324407699 in event 1 local: 1 mirror 1
Added good helicity 39 1.0515001769765657 in event 1 local: 1 mirror 1
Added good helicity 42 1.6084314717877601 in event 1 local: 1 mirror 1
Added good helicity 43 0.44425339515692841 in event 1 local: 1 mirror 1
Added good helicity 46 8.1889062609589551 in event 1 local: 1 mirror 1
Added good helicity 47 1.3875388603326930 in event 1 local: 1 mirror 1
Added good helicity 49 2.8011885049441791 in event 1 local: 1 mirror 1
Added good helicity 53 2.3539048543925931 in event 1 local: 1 mirror 1
Added good helicity 57 0.63182311673402225 in event 1 local: 1 mirror 1
Added good helicity 61 3.1889029540868288 in event 1 local: 1 mirror 1
Added good helicity 1 1.2599838176532971 in event 2 local: 1 mirror 1
Added good helicity 5 0.41908560994560851 in event 2 local: 1 mirror 1
Added good helicity 9 0.93354495955340910 in event 2 local: 1 mirror 1
Added good helicity 13 2.3615765200130308 in event 2 local: 1 mirror 1
Added good helicity 52 2.3588857801434031 in event 2 local: 1 mirror 1
Added good helicity 56 0.93295325385162231 in event 2 local: 1 mirror 1
Added good helicity 60 0.41863175273867276 in event 2 local: 1 mirror 1
Added good helicity 64 1.2586012261521977 in event 2 local: 1 mirror 1
Added good helicity 1 1.2599838176532958 in event 1 local: 1 mirror 2
Added good helicity 4 10.209795896012128 in event 1 local: 1 mirror 2
Added good helicity 5 0.41908560994560839 in event 1 local: 1 mirror 2
Added good helicity 8 1.9285010395071618 in event 1 local: 1 mirror 2
Added good helicity 9 0.93354495955340966 in event 1 local: 1 mirror 2
Added good helicity 12 3.5860261579071175 in event 1 local: 1 mirror 2
Added good helicity 13 2.3615765200130303 in event 1 local: 1 mirror 2
Added good helicity 16 4.8956255377021494 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 19 2.3943788533168218 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 23 0.52860032119394884 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 27 1.9102840720006069 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 31 1.5725397269893320 in event 1 local: 1 mirror 2
Added good helicity 34 1.5725397269893304 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 38 1.9102840720006067 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 42 0.52860032119394840 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 46 2.3943788533168200 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 1 local: 1 mirror 2
Added good helicity 49 4.8967684628107904 in event 1 local: 1 mirror 2
Added good helicity 52 2.3588857801434022 in event 1 local: 1 mirror 2
Added good helicity 53 3.5867866708050791 in event 1 local: 1 mirror 2
Added good helicity 56 0.93295325385162242 in event 1 local: 1 mirror 2
Added good helicity 57 1.9288943118773216 in event 1 local: 1 mirror 2
Added good helicity 60 0.41863175273867259 in event 1 local: 1 mirror 2
Added good helicity 61 10.212733056325586 in event 1 local: 1 mirror 2
Added good helicity 64 1.2586012261521971 in event 1 local: 1 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 2 local: 1 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 3 local: 1 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 4 local: 1 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 5 local: 2 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 6 local: 2 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 7 local: 2 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 8 local: 2 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 9 local: 3 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 10 local: 3 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 11 local: 3 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 12 local: 3 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 13 local: 4 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 13 local: 4 mirror 2
RESET CUMULATIVE VARIABLE
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 14 local: 4 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 15 local: 4 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 16 local: 4 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 17 local: 5 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 18 local: 5 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 19 local: 5 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 20 local: 5 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 21 local: 6 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 22 local: 6 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 23 local: 6 mirror 2
not adding good helicity (good for other mirror) 18 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 22 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 26 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 30 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 35 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 39 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 43 0.0000000000000000 in event 24 local: 6 mirror 2
not adding good helicity (good for other mirror) 47 0.0000000000000000 in event 24 local: 6 mirror 2
Adding only those helicities to the Fortran code does not change the cross-section at all (so bit-to-bit compatibility, and these really are zero helicities):
Cross-section : 11.35 +- 0.03021 pb
Nb of events : 10000
versus
Cross-section : 11.35 +- 0.03021 pb
Nb of events : 10000
So the issue occurs because uu>uu has some helicities that contribute only for that process, while uu~ and u~u by themselves do have symmetric goodhel as expected; but goodhel here is a global variable for all processes. Making it non-global can create issues for the grid handling (and therefore for loop-induced/gridpack?).
But the good news is that for "gpu grouping" (i.e. what we have now), this is indeed symmetric. This still needs more investigation to answer the question of whether we can handle this without the need for imirror.
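A minimal Python sketch (all names are hypothetical stand-ins for the Fortran GOODHEL bookkeeping, not the actual implementation) of how the two mirror lists can diverge: a helicity that contributes only in one mirror sets its flag on that side and never on the other.

```python
# Toy model of GOODHEL(I, IMIRROR, IPROC): one boolean mask per mirror.
NCOMB = 4  # toy number of helicity combinations

def update_goodhel(goodhel, amp2, ans, limhel=1e-8):
    """Mark helicities whose contribution passes the LIMHEL-style threshold."""
    for i, t in enumerate(amp2):
        if abs(t) > ans * limhel / NCOMB:
            goodhel[i] = True
    return goodhel

# mirror 1 sees a contribution on helicity 2 (e.g. uu>uu only), mirror 2 does not
amp2_mirror1 = [1.0, 0.0, 0.5, 0.0]
amp2_mirror2 = [1.0, 0.0, 0.0, 0.0]
ans = sum(amp2_mirror1)

hel1 = update_goodhel([False] * NCOMB, amp2_mirror1, ans)
hel2 = update_goodhel([False] * NCOMB, amp2_mirror2, ans)
assert hel1 != hel2  # the two mirror lists differ -> the imirror index matters
```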
Hi @oliviermattelaer thanks for looking at this :-)
I was wondering this morning: does Madgraph include the option of polarised beams? One physics case where I can imagine different helicity lists is that of polarised beams. Typically that would be e+ against e-, which is not symmetric anyway, but in principle if you have something like e- against e- and the left and right beams have different polarisations, then the lists of helicities are different. Far-fetched, but maybe something similar is the reason why this was introduced at the time?
(Reminder from yesterday's meeting: you said that the computation of helicities in Fortran separately for each mirror was introduced by JohannA in one of the first bug-fix releases, 1.1.0 or similar, "to fix a bug" or similar, so maybe there was a use case. PS: oops, sorry, I see you mentioned it in the post above with details, thanks!)
Anyway, in the meantime as discussed I will look at introducing a sanity check that if two lists of helicities are computed in fortran, then they are the same.
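The check I have in mind can be sketched like this (a Python sketch with hypothetical names; the real check would live in the Fortran templates): once both mirrors have settled their lists, compare them element by element and abort on any mismatch.

```python
def check_mirror_goodhel(goodhel1, goodhel2):
    """Abort if the two mirrors selected different helicity lists.

    goodhel1/goodhel2: per-helicity boolean lists, Python stand-ins for
    GOODHEL(:,1,IPROC) and GOODHEL(:,2,IPROC).
    """
    mismatches = [i for i, (a, b) in enumerate(zip(goodhel1, goodhel2)) if a != b]
    if mismatches:
        raise RuntimeError(f"goodhel mirror mismatch at helicities {mismatches}")
    return True

# identical lists pass the check
assert check_mirror_goodhel([True, False, True], [True, False, True])
```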
So the commit message of Johan is: Fixed problem when different helicities are zero for main process and mirror process for grouped subprocess output
related information in the UpdateNotes:
JA: Improved helicity selection and automatic full helicity
sum if needed. Optimization of run parameters.
Hi Olivier, thanks again for looking at this :-)
I implemented my sanity checks, I see no issues in pp->ttjj (well, actually in pp_tt012j).
Now, there is one important difference: in the cudacpp generated code, remember that we split the processes (maybe this is what you mean by gpu grouping?). In particular pp -> tt 0,1,2 jets has
ls -d1 pp_tt012j.mad/SubProcesses/P*
pp_tt012j.mad/SubProcesses/P0_gg_ttx/
pp_tt012j.mad/SubProcesses/P0_uux_ttx/
pp_tt012j.mad/SubProcesses/P1_gg_ttxg/
pp_tt012j.mad/SubProcesses/P1_gu_ttxu/
pp_tt012j.mad/SubProcesses/P1_gux_ttxux/
pp_tt012j.mad/SubProcesses/P1_uux_ttxg/
pp_tt012j.mad/SubProcesses/P2_gg_ttxgg/
pp_tt012j.mad/SubProcesses/P2_gg_ttxuux/
pp_tt012j.mad/SubProcesses/P2_gu_ttxgu/
pp_tt012j.mad/SubProcesses/P2_gux_ttxgux/
pp_tt012j.mad/SubProcesses/P2_uc_ttxuc/
pp_tt012j.mad/SubProcesses/P2_ucx_ttxucx/
pp_tt012j.mad/SubProcesses/P2_uu_ttxuu/
pp_tt012j.mad/SubProcesses/P2_uux_ttxccx/
pp_tt012j.mad/SubProcesses/P2_uux_ttxgg/
pp_tt012j.mad/SubProcesses/P2_uux_ttxuux/
pp_tt012j.mad/SubProcesses/P2_uxcx_ttxuxcx/
pp_tt012j.mad/SubProcesses/P2_uxux_ttxuxux/
\grep MIRRORPROCS pp_tt012j.mad/SubProcesses/P* -r | \grep DATA
pp_tt012j.mad/SubProcesses/P0_gg_ttx/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
pp_tt012j.mad/SubProcesses/P0_uux_ttx/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P1_gg_ttxg/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
pp_tt012j.mad/SubProcesses/P1_gu_ttxu/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P1_gux_ttxux/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P1_uux_ttxg/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_gg_ttxgg/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
pp_tt012j.mad/SubProcesses/P2_gg_ttxuux/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
pp_tt012j.mad/SubProcesses/P2_gu_ttxgu/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_gux_ttxgux/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uc_ttxuc/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_ucx_ttxucx/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uu_ttxuu/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
pp_tt012j.mad/SubProcesses/P2_uux_ttxccx/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uux_ttxgg/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uux_ttxuux/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uxcx_ttxuxcx/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.TRUE./
pp_tt012j.mad/SubProcesses/P2_uxux_ttxuxux/mirrorprocs.inc: DATA (MIRRORPROCS(I),I=1,1)/.FALSE./
So, bottom line: remember that in the cudacpp generated code (including the generated Fortran) we do NOT mix uu_XXXX and uux_XXXX together, so if these two have separate helicity lists, this is handled because they are in separate directories. Changing this would require a much more general handling of nprocesses>2, which so far we do not use.
Rephrasing: I think that adding this sanity check should be enough, and in cudacpp we can safely keep two helicity computations in fortran and only one in cudacpp. Do you agree?
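To illustrate the directory-splitting point (a Python sketch with toy data modeled on the MIRRORPROCS listing above; the structure is hypothetical): in the cudacpp grouping each P* directory owns its own goodhel storage, so a process without a mirror can never pollute the list of another process.

```python
# Toy per-directory helicity bookkeeping: each subprocess directory gets its
# own goodhel list, plus a flag saying whether a mirror list exists at all.
subprocesses = {
    "P2_uu_ttxuu":   {"mirror": False},  # MIRRORPROCS = .FALSE.
    "P2_uux_ttxuux": {"mirror": True},   # MIRRORPROCS = .TRUE.
}

for name, info in subprocesses.items():
    # one goodhel list always; a second one only when a mirror exists
    info["goodhel"] = [[] for _ in range(2 if info["mirror"] else 1)]

assert len(subprocesses["P2_uu_ttxuu"]["goodhel"]) == 1
assert len(subprocesses["P2_uux_ttxuux"]["goodhel"]) == 2
```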
Yes, this is what I meant by GPU grouping.
Now this makes me realize that there are steps that I do not like in the way those helicities are handled, so I created a new branch (https://github.com/mg5amcnlo/mg5amcnlo/tree/goodhel) to implement some changes and to allow experimenting more on this without being affected by the issue spotted for tt~. I do agree that the above tt~ case is not relevant for this plugin.
Now I still need to investigate this. The obvious side effect of this Fortran refactoring is that your patching method will likely crash, since the base file will be different.
Hi Olivier, thanks. Yes good to investigate puzzles :-)
Just one question: are you trying to completely remove the second helicity calculation, or to understand why it is needed and keep it there? I mean, I'd like to understand whether eventually we need to implement two helicity lists in cudacpp.
The first step is to understand if/when it is needed. And if the answer is "it is not needed", then we can remove it.
Cheers,
Olivier
With the new goodhel branch, performing additional tests (first in the SM):
Running the test with the sm-ckm model (which CMS often uses), the crash was on the first process that I tested.
So I'm going to investigate that process now:
Added good helicity 18 for process 63 4.0343177724261636E-008 in event 12 local: 12 mirror 2
detected in G345:
fail for process 63
18 F T
So, given that the limit to be accepted is LIMHEL=1e-8, that here (after 12 events) we are hitting the first event with a value only 4 times bigger than the limit, and that many other channels (like G1) do not include helicity 18 for process 63 (in either imirror), I think it is safe to say that this is a threshold effect. So yes, non-identical lists do occur and are "fine"; a simple check of identical lists on the Fortran side would just lead to an unnecessary crash (and the result would be equivalent, up to single-precision effects).
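The threshold argument can be sketched in Python (hypothetical names; the factor is an assumption mirroring the 100x safety margin discussed in this thread): a mismatch is only suspicious when the matrix element sits well above the LIMHEL cut, and a value only ~4x the limit is in the grey zone.

```python
LIMHEL = 1e-8

def mismatch_severity(t, ans, ncomb, factor=100.0):
    """Classify a helicity that passed the cut in one mirror only.

    Returns 'threshold' when |t| is within `factor` of the LIMHEL cut
    (so the mismatch is numerically harmless), 'genuine' otherwise.
    """
    cut = ans * LIMHEL / ncomb
    return "threshold" if abs(t) <= factor * cut else "genuine"

# the observed case: 4.03e-8 against a toy cut of 1e-8 -> only ~4x the limit
assert mismatch_severity(4.03e-8, ans=1.0, ncomb=1) == "threshold"
```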
To avoid such threshold effects, I have updated the test to the following (quite heavy now):
diff --git a/madgraph/iolibs/template_files/matrix_madevent_group_v4.inc b/madgraph/iolibs/template_files/matrix_madevent_group_v4.inc
index f3d0ade09..92ad6a419 100644
--- a/madgraph/iolibs/template_files/matrix_madevent_group_v4.inc
+++ b/madgraph/iolibs/template_files/matrix_madevent_group_v4.inc
@@ -35,6 +35,8 @@ c
c global (due to reading writting)
c
LOGICAL GOODHEL(NCOMB,2, MAXSPROC)
+ LOGICAL GOODHEL_STRONG(NCOMB,2)
+ DATA GOODHEL_STRONG /THEL*.false./
INTEGER NTRY(2, MAXSPROC)
common/BLOCK_GOODHEL/NTRY,GOODHEL
@@ -137,6 +139,20 @@ C ----------
IF ((ISHEL(IMIRROR).EQ.0.and.ISUM_HEL.eq.0).or.(DS_get_dim_status('Helicity').eq.0).or.(HEL_PICKED.eq.-1)) THEN
DO I=1,NCOMB
IF (GOODHEL(I,IMIRROR,%(proc_id)s) .OR. NTRY(IMIRROR,%(proc_id)s).LE.MAXTRIES.or.(ISUM_HEL.NE.0).or.THIS_NTRY(IMIRROR).le.10) THEN
+ IF (IMIRROR.eq.2.and.NTRY(1,%(proc_id)s).ge.MAXTRIES.and.NTRY(2,%(proc_id)s).ge.MAXTRIES) then
+ IF (GOODHEL(I,1,%(proc_id)s).neqv.GOODHEL(I,2,%(proc_id)s))THEN
+ IF(GOODHEL_STRONG(I,1).or.GOODHEL_STRONG(I,2)) then
+
+ write(*,*) 'fail for process %(proc_id)s'
+ DO J=1,NCOMB
+ IF (GOODHEL(J,1,%(proc_id)s).neqv.GOODHEL(J,2,%(proc_id)s))THEN
+ write(*,*) J, GOODHEL(J,1,%(proc_id)s), GOODHEL(J,2,%(proc_id)s)
+ ENDIF
+ ENDDO
+ STOP 34
+ ENDIF
+ ENDIF
+ ENDIF
T=MATRIX%(proc_id)s(P ,NHEL(1,I),JC(1),I)
%(beam_polarization)s
IF (ISUM_HEL.NE.0.and.DS_get_dim_status('Helicity').eq.0.and.ALLOW_HELICITY_GRID_ENTRIES) then
@@ -166,6 +182,9 @@ C ----------
JHEL(IMIRROR) = 1
IF(NTRY(IMIRROR,%(proc_id)s).LE.MAXTRIES.or.THIS_NTRY(IMIRROR).le.10)THEN
DO I=1,NCOMB
+ IF (.NOT.GOODHEL_STRONG(I,IMIRROR) .AND. (DABS(TS(I)).GT.100d0*ANS*LIMHEL/NCOMB)) THEN
+ GOODHEL_STRONG(I,IMIRROR)=.TRUE.
+ ENDIF
IF(init_mode) THEN
IF (DABS(TS(I)).GT.ANS*LIMHEL/NCOMB) THEN
PRINT *, 'Matrix Element/Good Helicity: %(proc_id)s ', i, 'IMIRROR', IMIRROR
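In other words, the patch keeps two lists: GOODHEL (the usual LIMHEL cut) and GOODHEL_STRONG (the same cut scaled up by a safety factor of 100). A mirror mismatch is only treated as fatal if the disagreeing helicity is "strong" in at least one mirror. A rough Python model of that logic (identifiers and numbers illustrative, not the generated Fortran):

```python
# Sketch of the two-tier check added in the patch above: a GOODHEL
# mismatch between the two mirrors only aborts (STOP 34 in the Fortran)
# if the helicity is also above the stronger 100x threshold, i.e. the
# disagreement cannot be explained as threshold noise.

SAFETY = 100.0  # the factor discussed below (100 vs 1000)

def check_mirrors(ts1, ts2, ans, limhel, ncomb):
    """Return helicity indices whose mirror mismatch is a real failure."""
    cut = ans * limhel / ncomb
    failures = []
    for i in range(ncomb):
        good1 = abs(ts1[i]) > cut
        good2 = abs(ts2[i]) > cut
        strong = abs(ts1[i]) > SAFETY * cut or abs(ts2[i]) > SAFETY * cut
        if good1 != good2 and strong:
            failures.append(i)  # genuine discrepancy, not threshold noise
    return failures

ncomb, ans, limhel = 64, 1.0, 1e-8
cut = ans * limhel / ncomb
ts1 = [0.0] * ncomb
ts2 = [0.0] * ncomb
ts1[18] = 4 * cut     # borderline: mismatch is tolerated
ts1[22] = 500 * cut   # well above the 100x cut: mismatch is fatal
print(check_mirrors(ts1, ts2, ans, limhel, ncomb))  # -> [22]
```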
With this factor of "100", the above process still crashes for 4 G directories out of 183. With a factor of "1000", all of them pass the check at survey time ... but one crashes at the refine stage :-(
fail for process 60
18 T F
22 T F
STOP 34
helicity 18: -1 -1 1 0 -1 -1
helicity 22: -1 -1 -1 0 -1 -1
cross-section of helicity 18+12: 0.6 (contribution from process 60 only: 5.718838519992133e-12); cross-section reported for the sum over helicities: 0.29 (careful: these do not have the same normalization, a factor of 4 is possible here between the two cross-sections)
I will ignore the crash at the refine stage, since this might be an issue with how the grid is read/... While the spread in the returned helicity seems quite large here (more than a factor of 100), the "problematic" matrix-element/helicity combinations are clearly not physically relevant, so accepting such variance is OK here.
So the conclusions for this crashing test are:
Continue checking for more processes (MSSM):
So given that the limit to be accepted is LIMHEL=1e-8, and that here (after 12 events) we are hitting the first event with a value only 4 times bigger than the limit, and that many other channels (like G1) do not include helicity 18 for process 63 (in either imirror), I think it is safe to say that this is a threshold effect. So yes, non-identical lists do occur here and are "fine", so a simple check for identical lists on the Fortran side would just lead to an unnecessary crash (and the results would be equivalent, up to single-precision effects).
Hi Olivier, thanks, nice finds :-)
One point: in the Fortran generated through cudacpp, we have used LIMHEL=0 for more than two years, see #419. So there is no sensitivity to threshold effects. We had discussed this extensively at the time.
For the release, I would keep this as it is (LIMHEL=0) and add the check that the two helicity lists are the same, to ensure that cudacpp and fortran tests give bit by bit the same results. From a software testing point of view, this is essential.
Later on, this can be rediscussed. One can even think of computing helicities in fortran and passing them to cudacpp.
That said, I still kind of favour having LIMHEL=0 in production full stop. I wonder how much performance we gain and in which specific processes by using LIMHEL>0. But definitely I do not have the full picture here.
For the sm-ckm model, using limhel=0 can give a performance hit of a factor of 4 or more ... So this is not negligible. Obviously, I do not want to have a "normal" Fortran and a "cudacpp" Fortran. You can set limhel to 0 for tests if you want, but in production this should be the same Fortran code/design.
Cheers,
Olivier
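A back-of-the-envelope model of that performance hit (all numbers assumed, purely to illustrate the scaling): the cost of the helicity sum grows roughly with the number of retained helicities, so keeping every contributing combination (LIMHEL=0) instead of the physically relevant subset (LIMHEL>0) can cost a factor of several, consistent with the factor-of-4 observation above:

```python
# Back-of-envelope scaling sketch with assumed, illustrative numbers:
# the helicity-sum cost scales roughly with the number of retained
# helicities. LIMHEL=0 keeps every nonzero helicity; LIMHEL>0 filters
# down to the subset that actually contributes.

ncomb = 64               # total helicity combinations (process dependent)
kept_limhel_pos = 16     # assumed: survivors of a LIMHEL>0 filter
kept_limhel_zero = 64    # LIMHEL=0 keeps all contributing helicities

slowdown = kept_limhel_zero / kept_limhel_pos
print(f"approximate slowdown with LIMHEL=0: x{slowdown:.0f}")
# -> approximate slowdown with LIMHEL=0: x4
```

The real ratio depends on how many helicities are genuinely negligible in a given process, which is exactly why the sm-ckm numbers above matter.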
So I guess that we can close this issue with the conclusion: yes, it can be removed. This will be done independently of the cudacpp plugin; I will follow the normal procedure for including such a fix in 3.6.0. This means that this associated issue will automatically be fixed when the merge with 3.6.0 is complete (which is our next big task anyway).
So I guess that we can close this issue with the conclusion: Yes it can be removed. This will be done independently of the cudacpp plugin, I will follow the normal procedure for including such fix in 3.6.0
Thanks a lot Olivier :-)
In the meantime, while waiting for 3.6.0 etc., can we then merge #935, which adds the second reset cumulative call? Of course this will need to be removed in 3.6.0 (I can do that at that moment, no problem), but at least while waiting for 3.6.0 we have some internally consistent code where Fortran and cudacpp are expected to give the same xsec bit by bit. I prefer to add a fix now that we need to remove later, rather than do nothing and have tests that keep failing. Especially because I want to test the CMS xsec issues and I need that bit-by-bit equality.
You can set limhel to 0 for test if you want but in production this should be the same fortran code/design.
Note, LIMHEL=0 is now in cudacpp production (NOT just in the madgraph4gpu repo). I am OK to remove that and move it ONLY to the madgraph4gpu repo, but can we at least add a run-card option that sets limhel=0? (Or maybe it already exists?) I think that would be very useful for debugging.
Hi @oliviermattelaer ,
as discussed in https://github.com/madgraph5/madgraph4gpu/pull/935#issuecomment-2256328718
This is a reminder to check if the second helicity filtering in fortran for mirror processes can be removed.
For the moment, my proposal to solve #872 via PR #935 is to
If, as you suggest, we eventually find that helicities can be computed only once in fortran, then
I assign this to you... Low priority, in my opinion. Thanks, Andrea