karpfen closed this issue 1 month ago.
Data + script here: reprex_error.zip
Processing these areas results in very large rasters (see below) which do not fit into memory on my machine. Maybe subdivide them into smaller chunks and take the sum? Though this error message looks like one that needs catching. Will look into it tomorrow.
# Error in df[[what]][index] <- as.numeric(my_sum) :
# NAs are not allowed in subscripted assignments
[[1]]
class : SpatRaster
dimensions : 22631, 10821, 1 (nrow, ncol, nlyr)
resolution : 0.00025, 0.00025 (x, y)
extent : 39.64975, 42.355, 8.8405, 14.49825 (xmin, xmax, ymin, ymax)
coord. ref. : lon/lat WGS 84 (EPSG:4326)
source(s) : memory
varname : vrt_Hansen_GFC-2022-v1.10_treecover2000_10N_030E156b53161f58
name : Hansen_GFC-2022-v1.10_treecover2000_10N_030E
min value : 0
max value : 95
[[2]]
class : SpatRaster
dimensions : 20216, 19822, 1 (nrow, ncol, nlyr)
resolution : 0.00025, 0.00025 (x, y)
extent : 35.257, 40.2125, 8.71475, 13.76875 (xmin, xmax, ymin, ymax)
coord. ref. : lon/lat WGS 84 (EPSG:4326)
source(s) : memory
varname : vrt_Hansen_GFC-2022-v1.10_treecover2000_10N_030E156b393b6a54
name : Hansen_GFC-2022-v1.10_treecover2000_10N_030E
min value : 0
max value : 100
[[3]]
class : SpatRaster
dimensions : 27510, 35363, 1 (nrow, ncol, nlyr)
resolution : 0.00025, 0.00025 (x, y)
extent : 34.13925, 42.98, 3.5095, 10.387 (xmin, xmax, ymin, ymax)
coord. ref. : lon/lat WGS 84 (EPSG:4326)
source(s) : memory
varname : vrt_Hansen_GFC-2022-v1.10_treecover2000_10N_030E156b1c6509cd
name : Hansen_GFC-2022-v1.10_treecover2000_10N_030E
min value : 0
max value : 100
[[4]]
class : SpatRaster
dimensions : 16075, 17020, 1 (nrow, ncol, nlyr)
resolution : 0.00025, 0.00025 (x, y)
extent : 34.877, 39.132, 4.439, 8.45775 (xmin, xmax, ymin, ymax)
coord. ref. : lon/lat WGS 84 (EPSG:4326)
source(s) : memory
varname : vrt_Hansen_GFC-2022-v1.10_treecover2000_10N_030E156b19830fd7
name : Hansen_GFC-2022-v1.10_treecover2000_10N_030E
min value : 0
max value : 100
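For reference, the dimensions printed above translate into substantial in-memory sizes once terra materializes the values (terra holds in-memory cell values as 8-byte doubles). A quick back-of-the-envelope check (plain Python arithmetic, just to illustrate the scale; the thread's actual workflow is R/terra):

```python
# Back-of-the-envelope memory estimate for the four SpatRasters above.
# Assumes 8 bytes per cell (double precision), one layer each.
rasters = {  # index -> (nrow, ncol), copied from the printouts above
    1: (22631, 10821),
    2: (20216, 19822),
    3: (27510, 35363),
    4: (16075, 17020),
}
BYTES_PER_CELL = 8

for i, (nrow, ncol) in rasters.items():
    cells = nrow * ncol
    gib = cells * BYTES_PER_CELL / 2**30
    print(f"[[{i}]]: {cells:,} cells, ~{gib:.1f} GiB")
```

Raster [[3]] alone comes to roughly 7.2 GiB for a single layer, which explains why these extents do not fit into memory on a typical machine.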
Could you please try with the improve-gfw branch? It should be slightly more efficient, though you would still run into memory issues eventually if you increase the size of your polygons. I will investigate the possibility of automated chunking, but that is a topic for another issue.
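The chunking idea can be sketched as follows (a minimal illustration in Python; the `read_rows` reader and parameter names are hypothetical, and mapme.biodiversity's eventual R/terra implementation differs): read the raster a band of rows at a time, accumulate a running sum, and never hold more than one chunk in memory. Skipping NA cells in each partial sum also guards against the kind of NA that triggered the subscripted-assignment error above.

```python
import numpy as np

def chunked_sum(read_rows, nrow, chunk_rows=1024):
    """Sum raster values block-by-block so only one chunk is in RAM.

    read_rows(r0, n) must return an array holding the values of rows
    r0 .. r0+n-1 (a hypothetical reader; on the R side something like
    terra's block-wise value reading would play this role).
    """
    total = 0.0
    for r0 in range(0, nrow, chunk_rows):
        block = read_rows(r0, min(chunk_rows, nrow - r0))
        total += np.nansum(block)  # treat NA/NaN cells as zero
    return total

# Toy usage: a 5000 x 4 "raster" of ones with a single NaN cell.
data = np.ones((5000, 4))
data[10, 2] = np.nan
result = chunked_sum(lambda r0, n: data[r0:r0 + n], nrow=5000)
print(result)  # 19999.0: 20000 cells minus the single NaN
```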
Cool, I'm currently running this, thanks!
Chunking for large assets is now implemented and available on main via d72e37e955991a6fe7b3efc9f1f526449c24a59c, thus closing. Feel free to re-open if you still encounter issues.
I was processing a bunch of polygons to calculate the GFW coverage and got quite a lot of missing values in the results. I'm not sure what caused this or whether it is entirely reproducible, because when I tried to reproduce the error, some regions gave sensible results when processed one-by-one.
I put together a gpkg of the regions for which I encountered the problem consistently when running it like this (I commented out the parallelization code; it didn't change the behavior):
Could you please have a look at this @goergen95 ?
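Since the problem only shows up for some regions, one way to narrow it down is to process the polygons one-by-one and record which ones fail or return missing values. A generic sketch (in Python for illustration; in the actual R workflow, tryCatch would play the role of try/except, and `process` stands in for the indicator calculation, so all names here are hypothetical):

```python
def isolate_failures(items, process):
    """Run process() per item, collecting results and failures separately."""
    ok, failed = {}, {}
    for key, item in items.items():
        try:
            ok[key] = process(item)
        except Exception as err:
            failed[key] = str(err)
    return ok, failed

# Toy usage: region "B" raises, mimicking a polygon that produces NAs.
regions = {"A": 1.0, "B": None, "C": 2.5}

def toy_process(v):
    if v is None:
        raise ValueError("NAs are not allowed in subscripted assignments")
    return v * 2

ok, failed = isolate_failures(regions, toy_process)
print(sorted(ok))      # ['A', 'C']
print(sorted(failed))  # ['B']
```

This at least separates the consistently failing regions (like the ones collected in the gpkg) from those that succeed when run in isolation.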