Hey, we downloaded the dataset from the different websites mentioned in our paper. After downloading the data, you'll have to organize it in a suitable manner. The data used in this study originally came in .nc (NetCDF) and .tif (TIFF) formats. To make the datasets easier to use in deep learning models, we converted them to .npy (NumPy array) format. The organizational structure involves four primary directories: hourly, daily, constant, and label. Within each directory, you'll find NumPy arrays with the following dimensions:
Hourly:   (1, 2, 400, 350, 8)  # (Number of samples, Time steps, Rows, Columns, Bands)
Daily:    (1, 2, 400, 350, 3)  # (Number of samples, Time steps, Rows, Columns, Bands)
Constant: (1, 400, 350, 5)     # (Number of samples, Rows, Columns, Bands)
Label:    (1, 400, 350, 1)     # (Number of samples, Rows, Columns, Bands)
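In case it helps anyone setting this up, here is a minimal sketch of that layout in NumPy. The directory and file names (`sample_000.npy`, the temporary root) are my own illustrative choices, not anything prescribed by the authors; only the four directory names and the array shapes come from the answer above.

```python
import os
import tempfile
import numpy as np

# Expected array shapes per directory, taken from the reply above.
# The leading 1 (number of samples) is illustrative; a real dataset
# will have many more samples.
EXPECTED_SHAPES = {
    "hourly":   (1, 2, 400, 350, 8),  # (samples, time steps, rows, cols, bands)
    "daily":    (1, 2, 400, 350, 3),  # (samples, time steps, rows, cols, bands)
    "constant": (1, 400, 350, 5),     # (samples, rows, cols, bands)
    "label":    (1, 400, 350, 1),     # (samples, rows, cols, bands)
}

# Hypothetical dataset root (stands in for something like Data/Train/).
root = tempfile.mkdtemp()

# Write dummy arrays into the four directories, the way converted
# .nc / .tif data would be stored after transformation to .npy.
for name, shape in EXPECTED_SHAPES.items():
    subdir = os.path.join(root, name)
    os.makedirs(subdir, exist_ok=True)
    np.save(os.path.join(subdir, "sample_000.npy"),
            np.zeros(shape, dtype=np.float32))

# Reload each array and verify it matches the documented dimensions.
for name, shape in EXPECTED_SHAPES.items():
    arr = np.load(os.path.join(root, name, "sample_000.npy"))
    assert arr.shape == shape, f"{name}: got {arr.shape}, expected {shape}"
print("all shapes OK")
```

A check like this is handy before training, since a mismatch between the hourly/daily time-step axis and the constant/label arrays (which have no time axis) is easy to introduce during conversion.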
Thank you very much for your reply! I have been working on a fire spread prediction project recently, and I came across your paper while looking for materials; it is very enlightening for my work. I still have some doubts about the processing of the dataset, but I will try to resolve them following your advice. Thank you again for your help!
You're welcome. If you want to learn about other datasets in this field, you can take a look at another paper of ours, available at https://arxiv.org/abs/2310.19231 . Please let me know if there are any further issues.
Thank you, I will read that paper of yours carefully. Thanks again for your help 😊
I'd like to ask how this data is organized and what the specific data format is:

E:/DeepTests/Data/Train/daily/
E:/DeepTests/Data/Test/daily/
E:/DeepTests/Data/Train/hourly/6Hourly/
E:/DeepTests/Data/Test/hourly/6Hourly/
E:/DeepTests/Data/Train/constant/
E:/DeepTests/Data/Test/constant/
E:/DeepTests/Data/Train/label/
E:/DeepTests/Data/Test/label/