Open Colfi opened 2 years ago
Hi @Colfi, Do you mean the original data? Jeong-Hwan
Hi Jeong-Hwan, Yes, the training data which went into the script - the full dataset. I guess it is too big, right?
Just to double check and confirm - and then I will extract it myself once again - the data going into the script, which in your sample would be CMIP5.input.36mon.1861_2001.nc (https://zenodo.org/record/4646653/files/CMIP5.input.36mon.1861_2001.nc?download=1), with its corresponding label CMIP5.label.12mon.1861_2001.nc (https://zenodo.org/record/4646653/files/CMIP5.input.36mon.1861_2001.nc?download=1), contains 36 months of SST and HC anomalies, and for each month the 'corresponding' ENSO index for that month is in the CMIP5.label... data, right? The 12 months comes from the fact that for an array of 36 months and a lead of 23 months you can only initialise 12 'ENSO' forecasts, right? The number 36 here is taken just as a small example to test the code, but in reality the total data would contain all the years listed in the paper's table, so for, say, 100 years it would be 100*12 = 1,200 months. All this is done for each ensemble model, the average ENSO forecast is taken at the end, and the ensemble mean is what is plotted in the paper, right?
Thanks, Ioana
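The counting logic in the question above can be sketched as follows. This is only an illustration of the reasoning (the variable names and the 0-based off-by-one convention are my assumptions, not taken from the repository): a 36-month window with a maximum lead of 23 months leaves room for exactly 12 forecast initialisations.

```python
# Hedged sketch: how many forecasts fit in a 36-month window for a given lead.
# Convention assumed here: the initialisation month and the month `lead`
# steps ahead must both fall inside the window, so the last valid start
# index is n_months - lead - 1 (0-based).
n_months = 36
max_lead = 23

n_forecasts = n_months - max_lead - 1
print(n_forecasts)  # 12
```

The same counting applied to the full record gives 12 target months per year, so 100 years of data would contribute 100 * 12 = 1,200 months.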
Hi Ioana,
The data are full dataset for training and test (https://drive.google.com/file/d/1Ht6__G4bFWguZTJ3nKc3XuEY1KKLMIIN/view?usp=sharing).
The second dimension of the input is defined as 36 consecutive months, from January (-2yr) to December (0yr). The three months matching the target season and lead month are then automatically extracted in the Python script ('nino34.training.sample'). I have attached a PPT to help you understand (link).
Jeong-Hwan
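The extraction Jeong-Hwan describes could be sketched roughly like this. Note this is not the repository's 'nino34.training.sample' script; the array shape, the index convention, and the variable names are all assumptions made purely for illustration.

```python
import numpy as np

# Illustrative input: (samples, 36 months, lat, lon), with month index 0 =
# January (-2yr) and index 35 = December (0yr). The shape is made up.
inp = np.zeros((100, 36, 24, 72))

target_month = 0  # January of the target year, i.e. month index 36 (assumption)
lead = 12         # lead time in months (assumption)

# Step back `lead` months from the target month, then take the three
# consecutive predictor months ending there.
end = 36 + target_month - lead      # one past the last predictor month
predictors = inp[:, end - 3:end]    # three consecutive months

print(predictors.shape)  # (100, 3, 24, 72)
```

Under this convention, a January target at a 12-month lead pulls the predictor months at indices 21-23 (Oct-Dec of year -1) out of the 36-month window.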
Hi @jeonghwan723, This is a very good paper. I am a student and I am learning about deep learning. Could you please send me the data and the PPT you mentioned above again, so I can learn from them? I would appreciate it very much. I look forward to hearing from you. Thanks, frdxxn
Hi @jeonghwan723, This paper is very groundbreaking. I would like to reproduce your work, but all the existing source data links are no longer valid. Could you please provide them again? Thanks, Tang
Hi @Tih18 ,
We're having issues with our NAS, so here's a Google Drive link instead: https://drive.google.com/file/d/10EnwnYPJHLLyNFhKF29IXmPMb4mpnBRb/view?usp=sharing
Hope this helps you,
Jeong-Hwan
Hi @jeonghwan723, This is a very nice paper, as it is the second one with training for all the months. I reproduced the results of your second paper and I'm now looking into this one. Is there a data archive anywhere from which I could download the data? Thanks, Ioana