Esteban82 opened 2 years ago
You are running on the test server and I just updated that file. Remove your server/gmt_data_server.txt file and try again.
OK. Thanks.
Is it possible that there are other problems with the remote data?
I am getting errors with the zoom earth animations.
For example, I got these frames from your script in #5887 and published on YouTube.
Just in case, in my script I used gmt math -T1/5/72+n 10 T POW -o1 -I = altitude.txt instead.
@Esteban82 Could you please check if you still have the issue?
Yes. I just re-ran it.
Could you go into the relevant ...server/earth/earth_relief_xxx folder and look for files with unusually small file sizes? If a download failed and a tiny file was written (it may even contain nothing but an error message), but GMT thinks it is a nc or jp2 file, then you can get missing tiles. I am not sure why something goes wrong (too many things trying at once, perhaps, which of course will happen with movie and batch). Let me know if you find anything like that.
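A quick way to hunt for such truncated tiles is sketched below. The helper name and the 1 MB threshold are illustrative, not anything GMT provides; point it at whatever your ...server/earth subfolder is.

```shell
# Hypothetical helper: list cached tile files under 1 MB, which may be
# truncated downloads rather than real jp2/nc tiles.
find_small_tiles() {
    # $1: directory to scan; -size -1048576c means "smaller than 1 MB in bytes"
    find "$1" -type f \( -name '*.jp2' -o -name '*.nc' \) -size -1048576c -print
}
```

Anything this prints is a candidate for deletion before re-running the script. (Note the `c` suffix: `find -size -1M` would round sizes up to 1 MB units and match only empty files.)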
I managed to get the animation.
Paul, I looked in the folder. I deleted the tiny files (less than 1 MB) and re-ran the script up to 4 times. Progressively I got fewer frames with errors.
So, there is no error on the server.
But I think there is something we should do for the rest of the users. Maybe suggest running scripts like this with fewer cores?
Now, when I run the script I get the message below. All the frames are created correctly. The issue seems to be with ffmpeg, although I can't see anything wrong with it.
...
movie [INFORMATION]: Frame 71 of 72 completed [100.0 %]
movie [NOTICE]: Running: ffmpeg -loglevel warning -f image2 -framerate 24 -y -i "/home/federico/E:/Dropbox/Facu/Curso_Cartografia/5_Blog/Proximos/4_ParaPublicar/Zoom_Earth/Zoom_Relief_100c_2/Zoom_Relief_100c_2_%02d.png" -vcodec libx264 -pix_fmt yuv420p Zoom_Relief_100c_2.mp4
[png @ 0x55e7ab735e00] chunk too big
[png @ 0x55e7ab737680] chunk too big
[png @ 0x55e7ab738f00] chunk too big
[png @ 0x55e7ab73a780] chunk too big
Error while decoding stream #0:0: Invalid data found when processing input
Last message repeated 3 times
movie [INFORMATION]: MP4 movie built: Zoom_Relief_100c_2.mp4
movie [INFORMATION]: 72 frame PNG files saved in directory: /home/federico/E:/Dropbox/Facu/Curso_Cartografia/5_Blog/Proximos/4_ParaPublicar/Zoom_Earth/Zoom_Relief_100c_2
I just tried to run this script on my laptop (I have not run it since this first started on my desktop). It was fine at the start with the 05 and 04 data sets. When it started needing the 03m data it got crazy:
grdblend [NOTICE]: -> Download 90x90 degree grid tile (earth_relief_03m_p): S90W180
...ERROR 1: read: segment too long (5650) with max (17134) for codeblock 1 (p=1, b=1, r=1, c=0)
ERROR 1: Failed to decode.
ERROR 1: opj_get_decoded_tile() failed
ERROR 1: /Users/pwessel/.gmt/server/earth/earth_relief/earth_relief_03m_p/S90W180.earth_relief_03m_p.jp2, band 1: IReadBlock failed at X offset 0, Y offset 0: opj_get_decoded_tile() failed
ERROR 1: Stream too short
...
grdblend [ERROR]: ERROR - Unable to convert SRTM file /Users/pwessel/.gmt/server/earth/earth_relief/earth_relief_03m_p/S90W180.earth_relief_03m_p.jp2 to compressed netCDF format
These seem to come from GDAL when we try to convert the jp2 files to netCDF. Since those very same files were created by GDAL, it is possible something bad happened during the download (libcurl). We may need to check the return code from grdconvert and, if it failed, delete the failed file and try again, or something like that.
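In user-script form, the delete-and-retry idea could be sketched like this. This is illustration only: the real fix would live in GMT's C code, and retry_or_clean is a made-up name wrapped around whatever command produces the file.

```shell
# Run a command up to 3 times; if it fails, delete its (possibly corrupt)
# output file before retrying, so the next attempt starts clean.
retry_or_clean() {
    out=$1; shift       # $1: output file to remove on failure; rest: command
    for try in 1 2 3; do
        "$@" && return 0    # command succeeded, keep the output
        rm -f "$out"        # discard the damaged output and try again
    done
    return 1                # gave up after 3 tries
}
```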
Yes, I think I had a similar error when I got the maps with the green patches.
In the last error that I reported, I got all the grids fine.
I can think of a couple of improvements here. I think we are pretty sure this is related to an incomplete libcurl download, since if we manually delete incomplete files and rerun the scripts it eventually works; so the jp2 files we created are fine, they just don't always make it over unscathed. When we do get a damaged file, we of course end up with errors from GDAL or the openjpeg library, etc. So clearly the error check is letting these cases through, since otherwise we would not try the jp2 decoding:
if (error == 0) error = gmtremote_convert_jp2_to_nc (API, localfile);
Currently we have an optional check for file size, and GMT_DATA_SERVER_LIMIT can be set to skip large file downloads (the default is unlimited). Thus, most of the time gmtremote_skip_largefiles just returns immediately. However, we could insist that for our remote data sets we first get the file size from the server. This should be a simple HTTP request that only sends a handful of bytes, and then we know the size of the file we wish to read. Now we are at least in a position to check the size of what we actually got, and if it is not the same size then we know it was corrupted or interrupted, etc. We can then decide to delete it and try again, and give up after N such tries, where N is maybe 2 or 3.
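A shell sketch of that idea, assuming the server answers HEAD requests with a Content-Length header; the function name and default retry count are illustrative, not GMT code:

```shell
# Ask the server for the expected size first, then download and compare;
# delete and retry on a size mismatch, giving up after N tries.
download_with_size_check() {
    url=$1; out=$2; tries=${3:-3}
    # HEAD request: costs only a few bytes, yields the expected file size
    expected=$(curl -sIL "$url" | tr -d '\r' |
        awk 'tolower($1) == "content-length:" { len = $2 } END { print len }')
    for try in $(seq 1 "$tries"); do
        curl -sL -o "$out" "$url"
        actual=$(wc -c < "$out" | tr -d ' ')
        if [ -n "$expected" ] && [ "$actual" -eq "$expected" ]; then
            return 0    # sizes match: download looks complete
        fi
        rm -f "$out"    # truncated or corrupted: discard and retry
    done
    return 1
}
```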
I could probably craft a branch that does this, as it should be simple.
Sounds good.
I will change the name of the issue for the record.
OK, I will post a PR later today (too many zooms coming up now), but first I checked that deleting all of earth_relief and then running movie -x2 (one frame at a time) of course works fine. I fixed a few bugs that gave scary messages; in fact, the error returned by libcurl was not being used to prevent the jp2 to netCDF conversion (another bug). Still, a first attempt at running in parallel gave some unclear errors, so more work to do this afternoon.
Yes, I tried with -x2 and I have no problems with the frames.
However, I got this message, which I think is related to ffmpeg.
movie [INFORMATION]: Frame 71 of 72 completed [100.0 %]
*** buffer overflow detected ***: terminated
Zoom_Earth_MachuPicchu.sh: line 22: 106838 Aborted (core dumped) gmt movie main.sh -Sbpre.sh -NZoom_Relief_MachuPicchu_${dpc}c_v6 -Taltitude.txt -C15cx15cx100 -D24 -H8 -M30,png -Lc0+gwhite+f12p+t"Altitude = %6.1lf km" -V -x2 -Gblack -Fmp4
I forgot: the frames were created fine. The mp4 was not.
buffer overflow detected : terminated
I think this problem is related to the length of the frame names. They are too long and ffmpeg can't handle them. It happens to me frequently. I will run the script with a shorter name to confirm.
In this case the name was Zoom_Relief_MachuPicchu_100c_006.png. That is 36 characters including the extension.
Perhaps it is this issue https://github.com/GenericMappingTools/gmt/issues/5426 again?
The way to check is to re-run the ffmpeg command using what movie prints out after "Running: ". Assuming all the PNGs are fine and this still crashes, then I think we can say it is ffmpeg. If so, maybe it is repeatable with fewer than 72 frames (e.g., remove 10 from the end (62-72) and try again, and keep removing until we have a small data set that fails). The final check would be to rename the files to shorter names; if that ffmpeg command then works, we should post a bug report with ffmpeg.
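If the long names are indeed the trigger, one way to test that last step without touching the originals is to symlink the frames to short names before invoking ffmpeg by hand. This is a sketch; the helper name and the f_NN.png pattern are made up for illustration.

```shell
# Create short-named symlinks (f_00.png, f_01.png, ...) pointing at the
# long-named frames, so ffmpeg can be retried with minimal file names.
shorten_frames() {
    # $1: common prefix of the long frame names
    i=0
    for f in "$1"*.png; do
        [ -e "$f" ] || continue          # skip if the glob matched nothing
        ln -sf "$f" "$(printf 'f_%02d.png' "$i")"
        i=$((i + 1))
    done
}
```

Afterwards, point the same ffmpeg command at the f_%02d.png pattern instead of the long one.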
> Perhaps it is this issue #5426 again?
Yes, it was. I managed to get the mp4 with a shorter name.
> The way to check is to re-run the ffmpeg command using what movie prints out after "Running: "
The issue is that the ffmpeg command does not appear when I get the buffer overflow error. Would it be possible to get the command that gmt will use somehow?
BTW, I could recreate the ffmpeg command (for the long name frames) and the mp4 was created perfectly.
Paul, in a script like this, the intensity grid (used for the shading effect, -I+d) is calculated over and over again for each frame, right? So a lot of extra work is done. Would it be possible to reuse the same intensity grid (for the frames made with the same resolution)?
gmt grdimage @earth_relief -Rd -JG$lon/$lat/14.9c+z\${MOVIE_COL0}+v60 -Yc -Xc -B0 -Ctopo.cpt -I+d
Since the region keeps changing (you may not see it, but the grids returned by grdblend are all different sizes), it would be more trouble than it is worth, I think, to expand the script to be that clever.
Well, maybe not too much trouble, but some scripting is needed, compared to just letting it rip and going to eat dinner.
Of course it would likely be much more stable, but this is a test to improve the robustness and stability of the remote mechanism (which clearly needs more work).
I ran anim15 and got the message below.