madgraph5 / madgraph4gpu

GPU development for the Madgraph5_aMC@NLO event generator software package

Different helicity numbering in fortran and cudacpp? #569

Closed: valassi closed this issue 1 year ago

valassi commented 1 year ago

I am making progress on the random choice of helicity (#403).

However, the chosen helicities have different indices. Example in ggtt:

   1  0.64067963E-01  0.64067963E-01          1.00000000000 12 10
   2  0.58379673E-01  0.58379673E-01          1.00000000000 12 10
   3  0.70810768E-01  0.70810768E-01          1.00000000000 15 15
   4  0.67192668E-01  0.67192668E-01          1.00000000000  2  0
   5  0.71590585E-01  0.71590585E-01          1.00000000000 15 15
   6  0.72862110E-01  0.72862110E-01          1.00000000000 15 15
   7  0.14271254E-01  0.14271254E-01          1.00000000000  2  0
   8  0.63986754E-01  0.63986754E-01          1.00000000000 15 15
   9  0.46316382E-01  0.46316382E-01          1.00000000000 12 10
  10  0.35372741E-01  0.35372741E-01          1.00000000000 15 15
  11  0.73958407E-01  0.73958407E-01          1.00000000000 15 15
  12  0.70691203E-01  0.70691203E-01          1.00000000000  3  3
  13  0.70805000E-01  0.70805000E-01          1.00000000000 15 15
  14  0.30801404E-01  0.30801404E-01          1.00000000000  5  5
  15  0.64111868E-01  0.64111868E-01          1.00000000000 15 15
  16  0.74312047E-01  0.74312047E-01          1.00000000000  2  0
  17  0.60961835E-01  0.60961835E-01          1.00000000000  2  0
  18  0.67698020E-01  0.67698020E-01          1.00000000000  2  0
  19  0.49748773E-01  0.49748773E-01          1.00000000000 15 15
  20  0.71951996E-01  0.71951996E-01          1.00000000000  5  5
  21  0.52116331E-01  0.52116331E-01          1.00000000000 12 10
  22  0.69245648E-01  0.69245648E-01          1.00000000000  2  0
  23  0.64808141E-01  0.64808141E-01          1.00000000000  2  0
  24  0.66861231E-01  0.66861231E-01          1.00000000000 14 12
  25  0.70041112E-01  0.70041112E-01          1.00000000000 15 15
  26  0.61135249E-01  0.61135249E-01          1.00000000000 15 15
  27  0.66574932E-01  0.66574932E-01          1.00000000000  2  0
  28  0.67312068E-01  0.67312068E-01          1.00000000000 14 12
  29  0.47056643E-01  0.47056643E-01          1.00000000000 12 11
  30  0.70509435E-01  0.70509435E-01          1.00000000000  2  0
  31  0.23138767E-01  0.23138767E-01          1.00000000000 15 15
  32  0.76096234E-01  0.76096234E-01          1.00000000000  2  0

This is the "BothDebug" printout: event number, fortran ME, cudacpp ME, ratio, fortran helicity, cudacpp helicity.

It seems that 15=15, 2=0, 14=12, 3=3, 12=10 etc.

I will check the hardcoded helicity arrays.

valassi commented 1 year ago

This is ggtt cudacpp https://github.com/madgraph5/madgraph4gpu/blob/79a0b8f71f3a9e2922494493250c5fd8985706c0/epochX/cudacpp/gg_tt.mad/SubProcesses/P1_gg_ttx/CPPProcess.cc#L437

    // Helicities for the process [NB do keep 'static' for this constexpr array, see issue #283]
    static constexpr short tHel[ncomb][mgOnGpu::npar] = {
      { -1, -1, -1, -1 },
      { -1, -1, -1, 1 },
      { -1, -1, 1, -1 },
      { -1, -1, 1, 1 },
      { -1, 1, -1, -1 },
      { -1, 1, -1, 1 },
      { -1, 1, 1, -1 },
      { -1, 1, 1, 1 },
      { 1, -1, -1, -1 },
      { 1, -1, -1, 1 },
      { 1, -1, 1, -1 },
      { 1, -1, 1, 1 },
      { 1, 1, -1, -1 },
      { 1, 1, -1, 1 },
      { 1, 1, 1, -1 },
      { 1, 1, 1, 1 } };

This is the Fortran equivalent, I guess: https://github.com/madgraph5/madgraph4gpu/blob/79a0b8f71f3a9e2922494493250c5fd8985706c0/epochX/cudacpp/gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f#L604

      INTEGER NHEL(NEXTERNAL,0:NCOMB)
      DATA (NHEL(I,0),I=1,4) / 2, 2, 2, 2/
      DATA (NHEL(I,   1),I=1,4) /-1,-1,-1, 1/
      DATA (NHEL(I,   2),I=1,4) /-1,-1,-1,-1/
      DATA (NHEL(I,   3),I=1,4) /-1,-1, 1, 1/
      DATA (NHEL(I,   4),I=1,4) /-1,-1, 1,-1/
      DATA (NHEL(I,   5),I=1,4) /-1, 1,-1, 1/
      DATA (NHEL(I,   6),I=1,4) /-1, 1,-1,-1/
      DATA (NHEL(I,   7),I=1,4) /-1, 1, 1, 1/
      DATA (NHEL(I,   8),I=1,4) /-1, 1, 1,-1/
      DATA (NHEL(I,   9),I=1,4) / 1,-1,-1, 1/
      DATA (NHEL(I,  10),I=1,4) / 1,-1,-1,-1/
      DATA (NHEL(I,  11),I=1,4) / 1,-1, 1, 1/
      DATA (NHEL(I,  12),I=1,4) / 1,-1, 1,-1/
      DATA (NHEL(I,  13),I=1,4) / 1, 1,-1, 1/
      DATA (NHEL(I,  14),I=1,4) / 1, 1,-1,-1/
      DATA (NHEL(I,  15),I=1,4) / 1, 1, 1, 1/
      DATA (NHEL(I,  16),I=1,4) / 1, 1, 1,-1/

valassi commented 1 year ago

Yuck, yes they differ.

For example, 15=15 is actually {1,1,1,1}, which is 15 (in 1-16) of Fortran and 15 (in 0-15) of cudacpp.
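
To double-check, here is a small Python sketch (mine, not repo code) that rederives the full index map from the two listings above: the cudacpp order is plain binary counting with -1 before +1 for every particle, while the Fortran order (as seen in the DATA statements) flips the two states of the last particle only.

    # Sketch: rederive the Fortran-to-cudacpp helicity index map for ggtt.
    import itertools

    npar = 4

    # cudacpp tHel order: indices 0..15, -1 before +1 for every particle
    cudacpp = list(itertools.product([-1, 1], repeat=npar))

    # Fortran NHEL order: indices 1..16, +1 before -1 for the last particle only
    fortran = [head + (last,)
               for head in itertools.product([-1, 1], repeat=npar - 1)
               for last in (1, -1)]

    for ifor, combo in enumerate(fortran, start=1):
        print('Fortran %2d -> cudacpp %2d  %s' % (ifor, cudacpp.index(combo), combo))

This reproduces the pairs seen in the printout: 15->15, 2->0, 14->12, 3->3, 12->10.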

valassi commented 1 year ago

I manually changed the cudacpp values to mimic the Fortran order - much better!

   1  0.64067963E-01  0.64067963E-01          1.00000000000 12 11
   2  0.58379673E-01  0.58379673E-01          1.00000000000 12 11
   3  0.70810768E-01  0.70810768E-01          1.00000000000 15 14
   4  0.67192668E-01  0.67192668E-01          1.00000000000  2  1
   5  0.71590585E-01  0.71590585E-01          1.00000000000 15 14
   6  0.72862110E-01  0.72862110E-01          1.00000000000 15 14
   7  0.14271254E-01  0.14271254E-01          1.00000000000  2  1
   8  0.63986754E-01  0.63986754E-01          1.00000000000 15 14
   9  0.46316382E-01  0.46316382E-01          1.00000000000 12 11
  10  0.35372741E-01  0.35372741E-01          1.00000000000 15 14
  11  0.73958407E-01  0.73958407E-01          1.00000000000 15 14
  12  0.70691203E-01  0.70691203E-01          1.00000000000  3  2
  13  0.70805000E-01  0.70805000E-01          1.00000000000 15 14
  14  0.30801404E-01  0.30801404E-01          1.00000000000  5  4
  15  0.64111868E-01  0.64111868E-01          1.00000000000 15 14
  16  0.74312047E-01  0.74312047E-01          1.00000000000  2  1
  17  0.60961835E-01  0.60961835E-01          1.00000000000  2  1
  18  0.67698020E-01  0.67698020E-01          1.00000000000  2  1
  19  0.49748773E-01  0.49748773E-01          1.00000000000 15 14
  20  0.71951996E-01  0.71951996E-01          1.00000000000  5  4
  21  0.52116331E-01  0.52116331E-01          1.00000000000 12 11
  22  0.69245648E-01  0.69245648E-01          1.00000000000  2  1
  23  0.64808141E-01  0.64808141E-01          1.00000000000  2  1
  24  0.66861231E-01  0.66861231E-01          1.00000000000 14 13
  25  0.70041112E-01  0.70041112E-01          1.00000000000 15 14
  26  0.61135249E-01  0.61135249E-01          1.00000000000 15 14
  27  0.66574932E-01  0.66574932E-01          1.00000000000  2  1
  28  0.67312068E-01  0.67312068E-01          1.00000000000 14 13
  29  0.47056643E-01  0.47056643E-01          1.00000000000 12 11
  30  0.70509435E-01  0.70509435E-01          1.00000000000  2  1
  31  0.23138767E-01  0.23138767E-01          1.00000000000 15 14

Now I just need to account for the Fortran array starting at 1 vs the cudacpp array starting at 0.
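
As a quick sanity check (a sketch, not repo code), the (Fortran, cudacpp) pairs in the printout above should now differ only by that offset:

    # After the reorder, the two chosen indices should differ only by the
    # array base (Fortran counts helicities 1..16, cudacpp counts 0..15).
    pairs = [(12, 11), (15, 14), (2, 1), (3, 2), (5, 4), (14, 13)]  # from the printout
    assert all(ifor == icpp + 1 for ifor, icpp in pairs)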

valassi commented 1 year ago

OK, fixed in codegen: I just changed one False to True...

    # AV - replace the export_cpp.OneProcessExporterCPP method (fix helicity order and improve formatting)
    def get_helicity_matrix(self, matrix_element):
        """Return the Helicity matrix definition lines for this matrix element"""
        helicity_line = '    static constexpr short helicities[ncomb][mgOnGpu::npar] = {\n      '; # AV (this is tHel)
        helicity_line_list = []
        for helicities in matrix_element.get_helicity_matrix(allow_reverse=True): # AV was False: different order in Fortran and cudacpp! #569
            helicity_line_list.append( '{ ' + ', '.join(['%d'] * len(helicities)) % tuple(helicities) + ' }' ) # AV
        return helicity_line + ',\n      '.join(helicity_line_list) + ' };' # AV
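
With allow_reverse=True, get_helicity_matrix yields the helicity combinations in the same order as the Fortran DATA statements, so the generated tHel array now matches NHEL up to the 1- vs 0-based offset. For illustration, a minimal standalone mock of the string formatting above (mock input, not the real matrix_element API):

    # Sketch: standalone mock of the formatting done by get_helicity_matrix.
    def format_helicity_matrix(helicity_list):
        line = '    static constexpr short helicities[ncomb][mgOnGpu::npar] = {\n      '
        rows = ['{ ' + ', '.join('%d' % h for h in hels) + ' }' for hels in helicity_list]
        return line + ',\n      '.join(rows) + ' };'

    # First three rows of the Fortran-order list for ggtt
    print(format_helicity_matrix([(-1, -1, -1, 1), (-1, -1, -1, -1), (-1, -1, 1, 1)]))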

valassi commented 1 year ago

This is fixed in #570, which I will soon merge. Closing.