Open bettasimousss opened 2 months ago
Hello Sara,
thanks for your feedback and questions. We have not yet implemented scale and offset. There are two possible approaches for you:
We have plans to implement scale and offset for the EBVCube netCDFs. However, this will only happen in the medium term.
I started investigating the overwriting of the NoData value. Thanks for pointing it out!
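Regarding scale and offset in the meantime: if you want to experiment, it may be possible to write the scale_factor and add_offset attributes onto the data variable yourself with the ncdf4 package after the cube has been created. This is only a minimal sketch, assuming a scale of 1/10000 and a standard datacube path (check yours with ebv_datacubepaths()); we cannot guarantee yet that the portal visualization honours these attributes.

library(ncdf4)

nc <- nc_open("my_cube.nc", write = TRUE)        # file name is just an example
cube <- "scenario_1/metric_1/ebv_cube"           # assumed datacube path, verify with ebv_datacubepaths()
ncatt_put(nc, cube, "scale_factor", 0.0001, prec = "double")  # 1 / 10000
ncatt_put(nc, cube, "add_offset", 0, prec = "double")
nc_close(nc)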
Hello,
I have a set of raster datasets in GeoTIFF format; they are stored as UINT16 (scale = 10000, offset = 0) with NoData set to 65535. After setting up the metadata on the portal and downloading the JSON, I create and populate the netCDF file with these rasters as follows:
ebv_create_taxonomy(
  jsonpath = metadata_json,
  outputpath = newNc,
  taxonomy = taxo_path,
  sep = ',',
  lsid = FALSE,
  epsg = 3035,
  resolution = c(1000, 1000),
  prec = 'integer',
  fillvalue = 65535,
  extent = c(723000, 7700000, 160000, 6615000),
  overwrite = TRUE,
  verbose = FALSE
)
At this point, visualizing the data in Panoply, the arrays are filled with 65535 as expected. Now I wonder if I should add the data layers as paths to the TIFFs; in that case it seems ebvcube takes them as float, overrides the NoData value with the default one for float, and sets all valid values to 0.
Or is it preferable to add the data as arrays, providing the values as integers (without applying the scale)? In that case, how do I set up the scale so that the visualization on the portal is still in the [0, 1] range?
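For context, this is roughly what I have in mind for the array route, pre-applying the scale myself; the file name is just an example, and whether ebv_add_data() expects the data prepared like this is part of my question:

library(terra)

r <- rast("layer_2020.tif")        # one of my UINT16 GeoTIFFs (example name)
m <- as.matrix(r, wide = TRUE)     # raw integer values, 65535 = NoData
m[m == 65535] <- NA                # mask NoData before scaling
m <- m / 10000                     # apply the scale -> floats in [0, 1]
# m would then be passed to ebv_add_data() as the data argument (if that is the intended use)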
Thanks in advance for your support!