Ijaswanth82 opened this issue 2 months ago
Do you mean that with GDAL versions < 3.6.0 the RAM usage is low?
I believe that next someone will ask you to give more details about what you are doing. Think of it this way: can someone reproduce my use case with the information that I have given?
As is, this report lacks the details needed to be actionable:
- Is this a regression compared to previous GDAL versions? If so, it would help immensely if you could identify the precise version where the behavior changed, and even better, if you could "git bisect" down to the offending commit.
- Do you set the GDAL_NUM_THREADS configuration option? It can increase RAM consumption (see the sketch after this list).
- Please share the output of gdalinfo on a typical TIFF file.
- If it's indeed a regression and you can't identify the offending commit yourself, then you'll likely have to produce a ready-made minimum reproducer for us to be able to investigate. This is too complicated a topic to guess at.
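For concreteness, a minimal sketch of how both configuration options can be inspected and pinned through the official Java bindings (org.gdal.gdal); the class name and the example values here are illustrative assumptions, not recommendations:

```java
import org.gdal.gdal.gdal;

public class GdalConfigCheck {
    public static void main(String[] args) {
        gdal.AllRegister();
        // Report what is currently in effect for each option.
        System.out.println("GDAL_NUM_THREADS = "
                + gdal.GetConfigOption("GDAL_NUM_THREADS", "(not set)"));
        System.out.println("GDAL_CACHEMAX    = "
                + gdal.GetConfigOption("GDAL_CACHEMAX", "(not set)"));
        // Pin them explicitly so RAM behaviour is reproducible between runs.
        gdal.SetConfigOption("GDAL_NUM_THREADS", "1");
        gdal.SetConfigOption("GDAL_CACHEMAX", "256");  // interpreted as megabytes
    }
}
```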
I have not tested with older versions (< 3.6.0), assuming that the memory issues would have been fixed in later versions.
1. The behaviour remained the same across all the versions we used (v3.6.0 to v3.9.2). There is no precise version where a change in behaviour was observed.
2. GDAL_NUM_THREADS is not set explicitly; only GDAL_CACHEMAX is set to 10%.
3. Sample gdalinfo output:
Driver: GTiff/GeoTIFF
Files:
What do you mean by "approximately 25 simultaneous executions"? Do you start that many gdalwarp processes with your code?
My application is a REST-based application that serves requests by using GDAL in the backend. With 25 simultaneous executions I am trying to simulate load; in short, I start that many gdalwarp processes from my application's code to serve that many requests.
So if I understand right, it is not multithreading but running 25 gdalwarp or gdal_translate programs at the same time. I guess that in that case the operating system allocates memory for the GDAL programs, and because they are started from Java, Java adds its own memory requirements into the mix. I would expect the memory consumption to grow linearly in the beginning. Have you tested with 1, 2, 3, 4... processes?
I am not a programmer and I do not know whether your approach is a good one, but I feel that maybe it is not. Perhaps you should warp or translate with Java code instead of starting GDAL programs.
I remember doing something similar by starting several simultaneous gdalwarp programs with a script on Windows. At the time I noticed that it did not help to start more gdalwarps than I had physical processor cores, and even that might be too much if the bottleneck was not processing but the speed of the file system.
I am actually starting warp and translate with Java code instead of starting GDAL programs as part of my application. It was only for reference that I said I am executing multiple gdalwarps; in reality my application launches multiple threads, and in each of them gdalwarp is executed via GDAL JNI calls in Java.
To give more context: for every request to my application there are 1-5 gdalwarp and 1 gdal_translate executions in the backend. Since my application is in Java, there is some heap memory allocated (2 GB in our case). The problem is that the memory used by the warp and translate functions does not count against the JVM heap, since we are calling GDAL's native C/C++ code. When we load tested, our application's memory usage crossed 4 GB even though the JVM heap limit is set to 2 GB. This additional memory utilization comes from the native memory management of the warp and translate functions in C/C++, which at the moment we don't have control over. We have also observed that the JVM heap occupies only on the order of 400 MB out of the 4 GB. I am looking for any suggestion to bring down the non-JVM/off-heap memory utilization so that my application will not face out-of-memory errors.
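To make the shape of that backend work concrete, here is a minimal sketch of one request going through the bindings, assuming the org.gdal Java API; the class name, paths and BBOX are hypothetical placeholders. The explicit delete() calls matter because Dataset objects hold native (off-heap) memory that the JVM garbage collector neither counts nor reclaims promptly:

```java
import java.util.Arrays;
import java.util.Vector;
import org.gdal.gdal.Dataset;
import org.gdal.gdal.TranslateOptions;
import org.gdal.gdal.WarpOptions;
import org.gdal.gdal.gdal;

public class RequestPipeline {

    /** Warp the inputs to a BBOX, then translate to PNG. All the memory
     *  used by these calls is native (C/C++) memory outside the JVM heap. */
    public static void handle(String[] inputTiffs, String outPng) {
        gdal.AllRegister();
        Dataset[] sources = new Dataset[inputTiffs.length];
        for (int i = 0; i < inputTiffs.length; i++) {
            sources[i] = gdal.Open(inputTiffs[i]);
        }
        WarpOptions warpOpts = new WarpOptions(new Vector<>(Arrays.asList(
                "-te", "10.0", "45.0", "11.0", "46.0",  // hypothetical BBOX
                "-of", "GTiff")));
        Dataset warped = gdal.Warp("/vsimem/warped.tif", sources, warpOpts);

        TranslateOptions trOpts = new TranslateOptions(new Vector<>(Arrays.asList(
                "-of", "PNG")));
        Dataset png = gdal.Translate(outPng, warped, trOpts);

        // Release the native datasets promptly; relying on garbage
        // collection keeps the native memory alive much longer than needed.
        png.delete();
        warped.delete();
        for (Dataset src : sources) {
            src.delete();
        }
    }
}
```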
As your use case involves multi-threading, you might want to rule out a potential issue with RAM fragmentation mentioned at https://gdal.org/en/latest/user/multithreading.html#ram-fragmentation-and-multi-threading
I have actually looked into that section and tried building tcmalloc, but my OS is Alpine and I encountered build issues. I will try once more.
Do you have any indication that it should be possible to run 25 concurrent computations in the amount of RAM you have available? I run 8 concurrent GDAL commands in the shell and my RAM usage for them is 0.5-1.2 GB per command.
I don't know how the JNI bindings work, but you can try tuning the GDAL_CACHEMAX value.
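If the percentage setting is hard to reason about under load, one option is to cap the shared raster block cache with an absolute size instead. A minimal sketch via the Java bindings; the 256 MB figure is an illustrative assumption, and note the cache is process-wide, so one cap covers every warp/translate thread in the JVM:

```java
import org.gdal.gdal.gdal;

public class CacheTuning {
    public static void main(String[] args) {
        gdal.AllRegister();
        // Absolute cap in bytes instead of the 10% value mentioned above.
        gdal.SetCacheMax(256 * 1024 * 1024);
        System.out.println("Block cache limit: " + gdal.GetCacheMax() + " bytes");
    }
}
```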
As @jratike80 is saying, you are most likely to just get churn when you go beyond the number of cores available for processing. If you make a thread/worker pool in Java with a fixed number of threads, and then have your Java threads submit jobs to that pool, you should be able to get bounded resource consumption in your application. There are a number of variations of worker pools available depending on which Java version you use, so with a modern Java you should be able to find something that suits your application.
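A minimal sketch of that suggestion using plain java.util.concurrent (the class and method names are illustrative); the fixed pool size is what bounds how many native warps run at once, and therefore how much off-heap memory is in flight:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WarpPool {
    // Size the pool to the physical cores, or fewer if the file system
    // rather than the CPU is the bottleneck.
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    /** Request handlers submit warp/translate work here instead of
     *  spawning their own threads; at most pool-size jobs run at once,
     *  the rest queue up. */
    public static Future<?> submit(Runnable warpJob) {
        return POOL.submit(warpJob);
    }
}
```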
What is the bug?
When gdalwarp and gdal_translate are put under load (approximately 25 simultaneous executions), the memory usage of the process rises very steeply and becomes unpredictable. I am manipulating multiple TIFF files to get a single raster of an area of interest by specifying a BBOX, and using gdal_translate to convert it to JPEG/PNG. The final images are on the order of 4-5 MB, yet even with 25 simultaneous requests the memory usage of the process running the warp exceeds 4 GB. The interaction with GDAL is via the Java bindings in my application.
Steps to reproduce the issue
Concurrent threads (approx. 25) running gdalwarp and gdal_translate through the Java bindings will replicate the issue.
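A hedged sketch of such a load test, reusing the hypothetical RequestPipeline.handle routine sketched earlier in this thread; the input and output paths are placeholders:

```java
import java.util.concurrent.CountDownLatch;

public class LoadRepro {
    public static void main(String[] args) throws InterruptedException {
        int concurrency = 25;
        CountDownLatch done = new CountDownLatch(concurrency);
        for (int i = 0; i < concurrency; i++) {
            final String out = "/tmp/out_" + i + ".png";  // placeholder path
            new Thread(() -> {
                try {
                    RequestPipeline.handle(
                            new String[]{"/data/a.tif", "/data/b.tif"}, out);
                } finally {
                    done.countDown();
                }
            }).start();
        }
        done.await();  // watch the RSS of this process while the threads run
    }
}
```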
Versions and provenance
The application runs in an Alpine OS Docker image, and the container runs on Amazon Linux 2. We faced the same issue with all GDAL versions > 3.6.0.
Additional context
My application is a REST-based application for serving rasters, which leverages GDAL for TIFF file manipulation in geospatial operations.