graphnet-team / graphnet

A Deep learning library for neutrino telescopes
https://graphnet-team.github.io/graphnet/
Apache License 2.0

Northern Tracks: CosineLoss #326

Closed RasmusOrsoe closed 1 year ago

RasmusOrsoe commented 2 years ago

**What is benchmarked** Is there a difference in zenith and azimuthal performance if dynedge doesn't provide uncertainty estimates along with its predictions?

**Target variables used for evaluation**: zenith, azimuth

**Step-by-step**

  1. Make a fresh branch
  2. In this fresh branch, make a new LossFunction called "CosineLoss" where `loss = 1 - cos(Δ) = 1 - cos(true_angle - prediction)` (a minimal sketch is included after this list)
  3. In `benchmark.py`, keeping the original settings, update the import statement

from graphnet.models.task.reconstruction import ZenithReconstructionWithKappa, AzimuthReconstructionWithKappa

to

from graphnet.models.task.reconstruction import ZenithReconstructionWithKappa, AzimuthReconstructionWithKappa, PassOutput1

  4. In `benchmark.py`, update the code that originally said
    
    if config["target"] =='zenith':
        task = ZenithReconstructionWithKappa(
            hidden_size=gnn.nb_outputs,
            target_labels=config["target"],
            loss_function=VonMisesFisher2DLoss(),
        )

elif config["target"] == 'azimuth': task = AzimuthReconstructionWithKappa( hidden_size=gnn.nb_outputs, target_labels=config["target"], loss_function=VonMisesFisher2DLoss(), )


to

if config["target"] =='zenith': task = PassOutput1( hidden_size=gnn.nb_outputs, target_labels=config["target"], loss_function=CosineLoss(), )

elif config["target"] == 'azimuth': task = PassOutput1( hidden_size=gnn.nb_outputs, target_labels=config["target"], loss_function=CosineLoss(), )


such that the model uses the PassOutput1 task with your CosineLoss function.

  5. Run the modified training script for zenith and azimuthal regression with the new task and loss function. Check the logs to verify that training converged within the specified number of epochs.
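For step 2, here is a minimal sketch of what `CosineLoss` could look like. It assumes the `LossFunction` base class is importable from `graphnet.training.loss_functions` and that subclasses implement a `_forward(prediction, target)` hook returning the per-event loss; both the module path and the hook name may differ between graphnet versions, so check against the version you are working on.

    import torch
    from torch import Tensor

    # NOTE: import path is an assumption; verify where LossFunction lives
    # in your graphnet version before copying this.
    from graphnet.training.loss_functions import LossFunction


    class CosineLoss(LossFunction):
        """Computes loss = 1 - cos(true_angle - predicted_angle) per event."""

        def _forward(self, prediction: Tensor, target: Tensor) -> Tensor:
            # `prediction` is the single scalar output of PassOutput1
            # (shape [batch, 1]); `target` is the true angle in radians.
            return 1.0 - torch.cos(target.flatten() - prediction.flatten())

The cosine handles the 2π periodicity of the azimuth automatically, so the same class can be used for both targets.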

**Benchmark deliverables**
Reply to this issue with the two plots below and attach the modified training script and any other relevant changes. Each plot is a `matplotlib.pyplot.subplots` figure, where the upper panel shows the resolution and the lower panel shows the relative improvement. See the attached example.

1. Zenith resolution vs. true energy with curves for `dynedge baseline` and `dynedge with CosineLoss`, with the relative improvement shown in the bottom panel.
2. Azimuth resolution vs. true energy with curves for `dynedge baseline` and `dynedge with CosineLoss`, with the relative improvement shown in the bottom panel.

Relative improvement of the modification w.r.t. the baseline is given by `relative improvement = (1 - resolution_modification/resolution_baseline)*100`.
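For illustration, a minimal sketch of how each two-panel figure could be put together, assuming the per-energy-bin resolutions for the baseline and the CosineLoss model have already been computed (all arrays and the output filename below are placeholders):

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder inputs: bin centres in true energy and the binned
    # resolutions (e.g. the 50% width of the residual distribution per bin).
    energy_bin_centers = np.logspace(3, 7, 10)
    resolution_baseline = np.linspace(1.0, 0.3, 10)
    resolution_cosineloss = np.linspace(0.9, 0.35, 10)

    fig, (ax_top, ax_bottom) = plt.subplots(
        2, 1, sharex=True, gridspec_kw={"height_ratios": [3, 1]}
    )

    # Upper panel: resolution vs. true energy for both models.
    ax_top.plot(energy_bin_centers, resolution_baseline, label="dynedge baseline")
    ax_top.plot(energy_bin_centers, resolution_cosineloss, label="dynedge with CosineLoss")
    ax_top.set_xscale("log")
    ax_top.set_ylabel("Zenith resolution [deg]")
    ax_top.legend()

    # Lower panel: relative improvement = (1 - modification / baseline) * 100.
    relative_improvement = (1.0 - resolution_cosineloss / resolution_baseline) * 100.0
    ax_bottom.plot(energy_bin_centers, relative_improvement)
    ax_bottom.axhline(0.0, linestyle="--", linewidth=0.5)
    ax_bottom.set_xlabel("True energy [GeV]")
    ax_bottom.set_ylabel("Rel. impr. [%]")

    fig.savefig("zenith_resolution_cosineloss_vs_baseline.png")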

Example Figure

![image](https://user-images.githubusercontent.com/48880272/197970973-7a67c472-81bb-4780-8bb6-4746902c51d3.png)
Andreas-MJ commented 1 year ago

Attached figures: northern_tracks_Zenith_resolution_new_dynedge_cosineloss, northern_tracks_Azimuth_resolution_new_dynedge_cosineloss, northern_tracks_Zenith_resolution_baseline_vs_SplineMPE_cosineloss, northern_tracks_azimuth_resolution_baseline_vs_SplineMPE_cosineloss.

troelspetersen commented 1 year ago

Hi Andreas

Seemingly, the SplineMPE is hard to beat. Only place I see it happening is in the last plot (azimuthal)… first three bins, the Baseline is better. Is this correctly understood?

Cheers, Troels


RasmusOrsoe commented 1 year ago

@Andreas-MJ Thanks for this. Can you confirm that the dynedge-CosineLoss converged?

Andreas-MJ commented 1 year ago

> @Andreas-MJ Thanks for this. Can you confirm that the dynedge-CosineLoss converged?

Yes, it did converge

Andreas-MJ commented 1 year ago

> Hi Andreas Seemingly, the SplineMPE is hard to beat. Only place I see it happening is in the last plot (azimuthal)… first three bins, the Baseline is better. Is this correctly understood? Cheers, Troels

Exactly, however I don't see an obvious reason why it beats SplineMPE.