Closed daviddemeij closed 4 years ago
Hi, I do not know this tool, but why not follow the instructions available here: https://github.com/CNES/Start-MAJA/blob/master/prepare_dtm/Readme.md ? SWBD already exists, so why would you want to recreate it? I am not sure a small lake in SRTM is provided as nodata, for instance. Best regards, Olivier
Hi @olivierhagolle,
Thanks for the quick response. The reason I was thinking about obtaining SWBD from SRTM is that I couldn't find a way to automatically download the SWBD files (which I did find for SRTM), and downloading these files manually can take quite some labour if you want to process many tiles. I was also hoping that by doing this I would understand how to use a different DEM / water body data source if these are not available for the region of interest.
OK, I understand, earthexplorer is not very convenient... There used to be an FTP site for SWBD download, but it seems to have disappeared.
I think my colleague @petket-5 has written something to replace SWBD with J.F.Peckel's water bodies mask. I'll ask him. Olivier
Hi David,
I automated the retrieval of SRTM and GSW (Global Surface Water) data. The latter, as mentioned by Olivier, is based on J.F. Peckel's algorithm. The data is downloaded from this website.
The script which combines both is accessible on the reprog-rc1 branch.
I attached a CI instance to it, which currently shows that it is not compatible with GDAL 3.x, so make sure that you're still using a 2.x version (Python 3.x is also required).
Please don't hesitate to ask any further questions about the script.
Kind regards, Peter
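Since the script reportedly breaks on GDAL 3.x, a small guard at startup can fail fast with a clear message instead of a cryptic crash. This is only a sketch, not part of the Start-MAJA code; the helper name is made up, and in real use you would pass `osgeo.gdal.__version__` to it:

```python
def is_supported_gdal(version: str) -> bool:
    # The DTM script is reported to break on GDAL >= 3.0,
    # so only 2.x versions are accepted here (hypothetical check,
    # not from the Start-MAJA repository).
    return version.split(".")[0] == "2"

# In practice: from osgeo import gdal; is_supported_gdal(gdal.__version__)
print(is_supported_gdal("2.3.3"))  # True
print(is_supported_gdal("3.0.4"))  # False
```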
Hi Peter, thank you very much, that is very helpful! I see there is also an option to give the dem_type as an argument; does that mean it also works with different DEM input sources? What should I set this to when using a different DEM source?
I already added this parameter in view of the upcoming modifications :) Currently it's only working with good old SRTM, though.
Peter
Ok, clear. Do you think this feature would be ready soon?
I'm afraid not. We are still uncertain about which DEM to prioritise for the future: ALOS, ASTER, EU-DEM, TanDEM-X...
Feel free to give us your choice if you have a specific one in mind!
We were thinking of using EU-DEM for Finland, but it is not ideal: if part of a tile falls outside the EU, there won't be a DEM available, and I guess that could mess up the processing.
Would it be possible to make it agnostic to the DEM input source, i.e. automatically resampling and cropping any arbitrary DEM input to the correct format using GDAL?
Maybe something that is globally available like ASTER would be better, so the process could be automated completely, falling back to ASTER when SRTM is unavailable.
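The resample-and-crop idea could in principle be a single gdalwarp call. A sketch only: the file names, EPSG code and the `-te` extent below are placeholders for illustration, not the actual Start-MAJA invocation or real tile coordinates:

```shell
# Sketch (placeholder names/extent): reproject an arbitrary DEM to the
# tile CRS, crop it to the tile bounding box, and resample to 90 m
# with cubic interpolation. Values are hypothetical.
gdalwarp -t_srs EPSG:32616 \
         -te 499980 1790220 609780 1900020 \
         -tr 90 90 -r cubic \
         -dstnodata -32768 \
         input_dem.tif dem_tile_90m.tif
```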
@petket-5 I am having trouble running your automatic DTM creation. Could you maybe share your Python environment (i.e. which exact versions you are using)?
What is your exact error message?
The following works with a conda env:
The rest (such as numpy) should be installed automatically with it.
Peter
I get a "core dumped" error after a while. I will try creating a new environment using the same library versions as you are using.
```
ubuntu@ip-172-31-46-119:~/efs/external_libraries/Start-MAJA/prepare_mnt$ python3 DTMCreation.py -p ~/efs/data/Sentinel2L1C/16PEV/S2A_MSIL1C_20190830T160901_N0208_R140_T16PEV_20190830T194823.SAFE/
Product: S2A_MSIL1C_20190830T160901_N0208_R140_T16PEV_20190830T194823.SAFE
Acq-Date: 2019-08-30 16:09:01
Platform: sentinel2
Level: l1c
Tile/Site: 16PEV
ERROR 1: NUMPY driver was compiled against GDAL 2.2, but the current library version is 2.4
0...10...20...30...40...50...60...70...80...90...100 - done.
Creating output file that is 10980P x 10980L.
Processing input file /tmp/prepare_mnt_unaft386/occurrence.tiff.
0...10...20...30...40...50...60...70...80...90...100 - done.
munmap_chunk(): invalid pointer
Aborted (core dumped)
```
It was a bit of a pain to get gdal=2.3.3 to install using conda, but I managed to do it and now it works :) Thanks a lot!
For people using Anaconda: make sure you remove the conda-forge channel from your ~/.condarc file (keeping only the default channel) before using `conda create -n maja python=3.6.8 gdal=2.3.3 scipy=1.3.0` to create the environment.
I am trying to automate the SRTM retrieval step using https://github.com/cmla/srtm4 and to derive the SWBD data from these downloaded SRTM files as well. As I understand it, water can actually be retrieved from the nodata values in the SRTM file, as these are water. Is this correct? And if this is possible, could you give me some guidance on how to implement this?
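If SRTM voids did correspond to water (which, as asked above, still needs confirming, since voids also occur over steep terrain), the mask extraction itself would be trivial. A sketch with a tiny synthetic tile, assuming the usual SRTM nodata value of -32768; in real use the array would come from reading the .hgt/.tif tile with GDAL or rasterio:

```python
import numpy as np

# Hypothetical sketch: flag SRTM nodata (void) cells as water.
# -32768 is the customary SRTM void value; helper name is made up.
SRTM_NODATA = -32768

def water_mask_from_srtm(dem: np.ndarray, nodata: int = SRTM_NODATA) -> np.ndarray:
    """Return a boolean mask that is True where the DEM has void cells."""
    return dem == nodata

# Tiny synthetic tile: two void cells standing in for a lake.
tile = np.array([[120, 118, SRTM_NODATA],
                 [119, SRTM_NODATA, 121],
                 [122, 123, 124]], dtype=np.int16)
mask = water_mask_from_srtm(tile)
print(int(mask.sum()))  # 2 void cells flagged as water
```

Note the caveat from earlier in this thread: a small lake may not actually appear as nodata in SRTM, which is presumably why SWBD/GSW exist as separate products.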