Open · daralevian opened this issue 1 year ago
It seems like you did not place the models that you want to compare in the comparison folder mentioned in the other issues (#3, #5). That's why you only receive the loss for the persistence model (which gets calculated by default). As for the other point: I do not know exactly why you receive different results for that model.
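For reference, a minimal sketch of the layout I mean (the folder name here is just an example, not necessarily the exact path in the repo):

from pathlib import Path

# Every checkpoint you want to compare goes into one folder
model_folder = Path("checkpoints/comparison")  # hypothetical path
for ckpt in sorted(model_folder.glob("*.ckpt")):
    print(ckpt.name)  # should list every model you want to compare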
I put them like this and the picture is like this:
Interesting, so the plotting works, but the loss-calculation does not.
You can add print(os.listdir(model_folder))
in line 135 of test_precip_lightning.py
to check what models are visible at runtime for debugging. From where do you execute the script? It could be that the absolute path of the model folder is somewhere different than you expect.
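For example, a quick debugging sketch (model_folder being whatever variable holds that path in the script):

import os

# Show where the script actually looks and what it finds there
print(os.path.abspath(model_folder))
print(os.listdir(model_folder))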
I will open an issue about that and use the Path module to fix this when I have time.
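Roughly something like this (a sketch of the idea, not the final fix; the folder names are assumptions):

from pathlib import Path

# Resolve the model folder relative to the script file instead of the
# current working directory, so the script works from anywhere
model_folder = Path(__file__).resolve().parent / "checkpoints" / "comparison"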
I tried, and it responds like this:
Ah, I see. Everything works correctly. The losses are also calculated and plotted. You probably just want them printed, right? You now have a pkl file with the model losses. Inside is a dict with the model losses that you can use.
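For example, a minimal sketch for reading it (the pkl filename is an assumption; use the one produced by test_precip_lightning.py):

import pickle

with open("model_losses.pkl", "rb") as f:  # hypothetical filename
    losses = pickle.load(f)  # dict mapping model name -> loss
for name, loss in losses.items():
    print(name, loss)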
OK, I got it. Thank you so much!!
Sorry to bother you. Can I ask you some questions in private by e-mail? I can't find your e-mail address.
@codeGotham Is there a question?
@HansBambel I tried to read the pkl file, and the result looks like this. Is it right?
OK, my e-mail is 727416400@qq.com
Looks about right :)
But the values seem to be quite different from those in the paper. I do not know exactly why that is the case. It could be that you have a different PyTorch version or something like that, or that you chose a different checkpoint file.
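To rule out a version mismatch, you could print the versions you are running (quick sketch):

import torch
import pytorch_lightning

print(torch.__version__, pytorch_lightning.__version__)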
OK, I'll try to change it. But there's only one score for the model; how do I print the other scores like accuracy and CSI?
Yeah, but the pictures are really similar, so the scores are really bothering me.
You are right, I forgot to add the script for calculating those metrics. I just added it: calc_metrics_test_set.py.
I think I will refactor some code after setting up the environment when I find the time. Then all losses and metrics will be calculated together.
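In spirit, those metrics are threshold-based; a minimal sketch of the idea (the threshold value is a placeholder; see the actual script for the details):

import torch

def threshold_metrics(y_pred, y_true, threshold=0.5):
    # Binarize into rain/no-rain and count the confusion-matrix entries
    pred, true = y_pred > threshold, y_true > threshold
    tp = (pred & true).sum().item()
    fp = (pred & ~true).sum().item()
    fn = (~pred & true).sum().item()
    tn = (~pred & ~true).sum().item()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    csi = tp / (tp + fp + fn) if (tp + fp + fn) > 0 else 0.0
    return accuracy, csi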
OK, I got it. Could you remind me when the work is done, please? Thank you so much!
Will do, but until then you can already use the script as well.
The refactor will include upgrading packages such as PyTorch and Lightning, so I am not sure whether the experiments will remain reproducible. I made a snapshot of the code from the paper when it was published so that people can try to reproduce the experiments: https://github.com/HansBambel/SmaAt-UNet/tree/snapshot-paper
OK, I got it. Thanks!
I started a port: https://github.com/HansBambel/SmaAt-UNet/pull/8
This is still a WIP, so not everything works, and it could be that the experiments are not 100% reproducible. So, for reproducing the results, the aforementioned branch is necessary.
Sorry to bother you again. Could I ask about the code for calculating MSE and NMSE? I didn't find it in the code I use. @HansBambel
The loss is already NMSE:
loss_func(y_pred.squeeze() * factor, y_true * factor, reduction="sum") / y_true.size(0)
To get the MSE I think you need to do this:
loss_func(y_pred.squeeze() * factor, y_true * factor, reduction="mean")
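A toy example of the difference between the two reductions (the shapes, the factor value, and the use of F.mse_loss as loss_func are assumptions):

import torch
import torch.nn.functional as F

factor = 47.83  # hypothetical denormalization factor
y_pred = torch.rand(4, 1, 64, 64)  # (batch, channel, height, width)
y_true = torch.rand(4, 64, 64)

# "sum" divided by the batch size: squared error summed over all pixels of a sample
nmse = F.mse_loss(y_pred.squeeze() * factor, y_true * factor, reduction="sum") / y_true.size(0)
# "mean": averaged over every element, i.e. the per-pixel MSE
mse = F.mse_loss(y_pred.squeeze() * factor, y_true * factor, reduction="mean")
print(nmse.item(), mse.item())  # here nmse == mse * 64 * 64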
Sorry to bother you again! I've successfully generated the pictures through the ipynb file, and they are similar to those in the article, but the result of test_precip_lightning.py looks like this: there is only one model's scores. I'd like to ask how to get all the scores reported in the article. I don't know if the result is meant to look like the first one, or if there is a problem with the code. Thank you so much!