I'll add my 2 cents now: I think the choice of formats for imagery will come down to use cases. I'm not sure that we need to specify them in the conceptual model, but any profile or best practice should have at least one format that is best for the use case (data repo, performance, edge/constrained).
I could see JP2000 being good for the data repository case, with performance not as critical and having lossless compression being optimal. FLIF is interesting, but their web site only focuses on the compression ratio, not on speed, so might not be a good candidate without some experimentation.
For the performance case where you don't need lossless compression, KTX2 with .basis sounds good. But does this work for the edge case on mobile computing platforms? At one point, it seemed that desktop GPUs and mobile GPUs were using different types of image compression. Since KTX2 is just a container format and a variety of GPU compression types can be used within it, could we have compatibility issues taking a desktop CDB and moving to a mobile platform?
@lilleyse - Is there any benefit to storing uncompressed integer and floating point data in KTX2? I know that tiff supports some compression types (not very good types, but well supported).
I'll second Ryan's request to specify only a few recommended formats per best practice or application domain profile, even if we find full freedom necessary in the conceptual model. I'm moving a comment here from #8 that he missed, because I think it applies to both determinism and imagery formats:
I strongly oppose accommodating any image compression format (unless that's just conceptual, for future expansion) in the CDB X specification, because that's a moderately high burden, especially for a performance-sensitive or resource-constrained application to handle. If my customer states I have to be CDB compliant, they don't care that the data producer used a crazy, one-off, expensive-to-decompress format in their CDB. They just want their fast-jet simulator to fly it at full resolution without load latency or other artifacts, and it's currently the contractor's responsibility to make that happen somehow without changing the CDB data. I can see a similar situation for a cell phone app in a different, memory-constrained dimension. IMHO, we need a few well-chosen formats with different strengths, and profiles for their usage.
@ryanfranz It looks to me like the .basis format has taken mobile device format transcoding into account in the design pretty well, at least from the documentation here.
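To make that transcoding point concrete, here is a minimal sketch of the target-selection logic a Basis Universal runtime enables. The capability flags and function are hypothetical illustrations, not the actual Basis API; the format names mirror common desktop (BC7) and mobile (ASTC/ETC2) GPU formats.

```python
# Hypothetical sketch only: choosing a native GPU format to transcode a
# .basis/KTX2 texture into at load time. The capability flags and format
# names are illustrative, not the real Basis Universal API.

def pick_transcode_target(gpu_caps: set) -> str:
    if "BC7" in gpu_caps:    # typical modern desktop GPU
        return "BC7_RGBA"
    if "ASTC" in gpu_caps:   # typical modern mobile GPU
        return "ASTC_4x4_RGBA"
    if "ETC2" in gpu_caps:   # older OpenGL ES 3.0 mobile hardware
        return "ETC2_RGBA"
    return "RGBA32"          # uncompressed fallback, always available

# Desktop and mobile runtimes transcode the *same* stored texture:
print(pick_transcode_target({"BC7", "ETC2"}))   # -> BC7_RGBA
print(pick_transcode_target({"ASTC", "ETC2"}))  # -> ASTC_4x4_RGBA
```

If this holds up in practice, a single CDB would not need per-platform texture copies: the desktop/mobile split is resolved at transcode time rather than at data production time.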
FLIF decompression speed seems an order of magnitude slower than other similar formats. Is the small compression benefit worth that level of performance cost?
This is not an endorsement of the referenced commercial format, but rather simply a performance and compression ratio comparison data source. Excerpt: Oodle compresses to tiny sizes like the highest compression lossless image formats (WebP, FLIF, JPEG2000, PNG, etc.), but it destroys them in decode speed -- usually 5-10x faster than PNG and 200-500x faster than FLIF (not a typo).
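Since decode speed is the crux here, a minimal timing harness makes these claims easy to reproduce on representative CDB imagery. This sketch assumes Pillow built with PNG, WebP, and JPEG 2000 support (FLIF would need an external decoder), and the file names are placeholders:

```python
# Minimal decode-speed harness. Assumes Pillow with PNG/WebP/JPEG 2000
# support; the sample file names are placeholders.
import time
from PIL import Image

def avg_decode_seconds(path, repeats=20):
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        with Image.open(path) as im:
            im.load()  # force a full decode, not just a header parse
        total += time.perf_counter() - start
    return total / repeats

for f in ("tile.png", "tile.webp", "tile.jp2"):
    print(f, "%.1f ms" % (avg_decode_seconds(f) * 1000))
```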
Again, great discussion - and if I can summarize what I think the bottom line is: given the short time frame of the CDB X activity AND the broad usage and support for the current image formats in CDB, any experimentation, discussion, and so forth should be deferred to a later time, perhaps as an OGC Interoperability Experiment.
That said, at this time use case documentation, listing of issues and requirements are probably a good idea and should be in the ER. This approach is consistent with defining CDB X at an abstract level that could accommodate image formats as dictated by use case(s) and operational environments (simulator versus Warfighter at the Edge using a mobile device).
> @lilleyse - Is there any benefit to storing uncompressed integer and floating point data in KTX2? I know that tiff supports some compression types (not very good types, but well supported).
KTX2 is a bit limited in that respect, but it does support zstd and zlib compression schemes. We're hoping that a domain specific lossless compression gets added at some point while the spec is still young.
http://github.khronos.org/KTX-Specification/index.html#_supercompressionscheme
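For reference, a sketch of producing such a file by wrapping the toktx tool from KhronosGroup/KTX-Software. The --t2 and --zcmp flags are as documented at the time of writing; verify them against your installed version:

```python
# Sketch: wrap the toktx CLI to emit a KTX2 file with Zstandard
# supercompression. Flag names per the KTX-Software docs; verify locally.
import subprocess

def png_to_ktx2_zstd(src_png, dst_ktx2, level=18):
    subprocess.run(
        ["toktx",
         "--t2",                # write KTX2 rather than KTX1
         "--zcmp", str(level),  # Zstandard supercompression level
         dst_ktx2, src_png],
        check=True,
    )

png_to_ktx2_zstd("imagery_tile.png", "imagery_tile.ktx2")
```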
I think that we have to keep in mind one of the CDB X general objectives, which is to improve the repository/editing use case. To me, this imposes two constraints on the imagery format: a generally well supported encoding in GIS tools, and a standard way to georeference the images. The first (an adopted standard in tools) was not met by CDB 1.X, as J2K was not widely adopted at the time. We must be careful not to repeat this by putting our "bet" on an emerging encoding. The second (support for standard georeferencing in the image metadata) I believe is one of the reasons CDB 1.X selected JPEG2000, as its metadata allowed for georeferencing. (Another reason for the J2K selection was its improved encoding, supporting both the runtime extraction and the repository use cases while preserving most of the source quality.)
I support the general direction that the CDB X abstract model would not force an encoding, but some profile would pick a finite list of encodings, to avoid the issue pointed out by @ccbrianf that content producers disregard the runtime use case by selecting an inefficient encoding, pushing the performance issue to the runtime implementation.
Unless all users accept that a CDB datastore will always have to be converted to a selected profile for a selected use case... Going back to the exact thing that CDB was trying to fix 15 years ago: too many copies of the same terrain, too much time to convert, too many errors injected in conversions!
This is a more general question than imagery... How can we support multiple profiles while avoiding falling back into the original issue, where each user had to convert data offline before they could use it efficiently at runtime?
One technical addition to the JP2 discussion: There is an OGC Standard GML in JPEG 2000 for Geographic Imagery Encoding. The standard defines the means by which the OpenGIS® Geography Markup Language (GML) Standard http://www.opengeospatial.org/standards/gml is used within JPEG 2000 http://www.jpeg.org/jpeg2000/ images for geographic imagery. The standard also provides packaging mechanisms for including GML within JPEG 2000 data files and specific GML application schemas to support the encoding of images within JPEG 2000 data files.
This OGC standard is supported in GDAL as well as by a number of satellite organizations such as EUSC.
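As a concrete illustration, here is a hedged sketch of writing a JPEG 2000 file with an embedded GMLJP2 box through GDAL's Python bindings. This assumes a GDAL build with the JP2OpenJPEG driver, and the creation option names follow that driver's documentation:

```python
# Sketch: JPEG 2000 with an embedded OGC GML in JPEG 2000 (GMLJP2) box,
# via GDAL's JP2OpenJPEG driver. Input is any georeferenced raster.
from osgeo import gdal

src = gdal.Open("input_imagery.tif")
gdal.Translate(
    "output.jp2",
    src,
    format="JP2OpenJPEG",
    creationOptions=[
        "GMLJP2=YES",      # embed the GML georeferencing box
        "REVERSIBLE=YES",  # lossless wavelet transform
    ],
)
```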
Thank you Carl for pointing this out. A similar level of standardization on image georeferencing should apply to the other selected imagery encodings to achieve tools interoperability.
@cnreediii could you comment on the point that multiple profiles for the same datasets (imagery in this case) would force conversion of CDB, and the fact that it may go against one of the original CDB objectives of limiting/eliminating the need for conversion prior to runtime use? Are there other OGC standards that face this and implement converters from profile to profile?
@PresagisHermann - The official OGC definition for "Profile" as approved by the membership is:
Profile: [ISO 19106:2004, Type 1 Profile] A profile is a pure subset of an existing standard including restrictions on or deletions of conformance clauses related to the subsetting. An example of a profile is the GML Simple Feature Profile.
This OGC definition of a profile could be applied to an encoding or format - such as the OpenFlight requirements and conformance clauses as currently specified in the CDB 1.x standard. The OGC definition of "profile" would allow any implementation of an OGC standard that supports all core conformance clauses to keep on running with no changes if supporting a profile of that standard is required.
I think what this group is actually discussing are application profiles or profiles with extensions. The OGC definitions for these concepts are:
AP (Application Profile) - Set of one or more base standards and - where applicable - the identification of chosen clauses, classes, subsets, options and parameters of those base standards that are necessary for accomplishing a particular function [ISO 19101, ISO 19106]
Profile with Extension: A profile with extension is a set of one or more conformance clauses from a base standard that includes at least one new conformance clause (extension). An example is OpenGIS® Web Map Services - Profile for EO Products.
So, I think when we start speaking of "profiling" a format, dataset, or encoding and looking for conversion between these profiles, we are speaking of application profiles. Now, if these application profiles are all based on the same logical model and core concepts, then any conversions are fairly trivial. This is the approach CityGML/CityJSON has taken and continues to support.
All that said, I think for the CDB group to specify profiles of JP2000 or GeoTIFF may be a bridge too far and would definitely increase support/conversion/software costs! If everyone wants to purchase FME, then maybe . . . but the OGC has never ever suggested pursuing a technology approach that requires the purchase of licensed software. That is a separate discussion for sure.
Apologies for the ramble, but I think getting our terminology straight here will help avoid confusion in our discussions going forward!
Is it appropriate at the level of a conceptual model to proclaim that lossless compression is a requirement? Do we have strong feelings about lossy vs lossless?
@christianmorrow I have a strong feeling that a source data repository should at least support a lossless option.
@christianmorrow At the logical model level (see definitions below :-) ), an image property could be lossless or lossy. Then the actual CDB 2.0 standard could specify lossless as a mandatory requirement for compressed imagery but leave the option open for lossy compression. Another implementation standard based on the same logical model could then specify lossy compression. The requirements and requirements classes stated in an implementation standard such as CDB 1.2 are what drives physical implementations. The logical model provides the framework and the lingua-franca that underpins the implementation standards. This is exactly how GeoPackage is structured. The GeoPackage geometry model is based on the Simple Features logical model (aka abstract model). The GeoPackage CRS is based on the ISO 19111: Referencing by Coordinates (ISO 19111:2019 defines the conceptual schema for the description of spatial referencing by coordinates) and so on.
Conceptual Model: description of common concepts and their relationships, particularly in order to facilitate exchange of information between parties within a specific domain. A conceptual model is explicitly chosen to be independent of design or implementation concerns.
Logical Model: Logical models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. A logical model is based on a conceptual model. Once validated and approved, the logical model can become the basis of a physical model and, for example, form the design of a database.
Most of y'all have already covered the salient points, but from my perspective the appropriate properties for imagery would be (a rough sketch of these as a data structure follows below):

1. Native color model (sRGB, linear RGB, YUV, etc.) - we say "lossless" encoding, but lossless with respect to what color space and metric?
2. Channel presence and resolution relative to the color space - do we have alpha? Floating point vs. fixed point? Range and precision of each channel?
3. Lossless vs. a particular lossy algorithm (multiple formats may use the same or different lossy algorithms) - with metrics or parameters for lossiness to understand what we have.
4. Intended deployment target for lossy algorithms (i.e. is there a GPU I can directly splat this thing onto? Is it a human perception model? How do I know I can use it for my intended use case?)
5. Georegistration model, so standalone tools can know wtf to do with it without traversing the full CDB structure.
Points #1 and #2 also enable us to get into more generalized imagery that isn't just human visuals.
Some of these are more of a logical model than a conceptual model - although at the conceptual level we probably want to define what things like "lossless" mean as vocabulary at least.
The actual physical format gets into the bits and bytes of how these properties are captured into a storage format along with the actual coverage data (pixels, wavelets, etc.) and the performance implications thereof.
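For discussion purposes, here is a rough sketch of that property list as a data structure. This is purely illustrative vocabulary for the logical model, not a proposed schema:

```python
# Illustrative only: the five imagery properties above as a record.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageryProperties:
    color_model: str                  # 1) e.g. "sRGB", "linear RGB", "YUV"
    channels: tuple                   # 2) e.g. ("R", "G", "B", "A")
    bits_per_channel: int             # 2) range/precision of each channel
    floating_point: bool              # 2) fixed vs. floating point samples
    lossy_algorithm: Optional[str]    # 3) None means lossless
    loss_parameters: Optional[dict]   # 3) metrics/parameters for lossiness
    deployment_target: Optional[str]  # 4) e.g. "GPU", "human-perceptual"
    georegistration: str              # 5) e.g. "GMLJP2", "GeoTIFF tags"
```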
As for physical formats, GeoTIFF with conventional compression is "good enough" as a lossless option in many use cases for linear RGB (and supports high-precision 16-bit encoding where relevant) and is very widely supported.
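A minimal sketch of that repository option using GDAL's Python bindings; creation options per the GTiff driver, and PREDICTOR=2 usually helps LZW on integer imagery, though results should be checked on your own data:

```python
# Sketch: lossless LZW-compressed, internally tiled GeoTIFF via GDAL.
from osgeo import gdal

gdal.Translate(
    "repo_tile.tif",
    gdal.Open("source_imagery.tif"),
    format="GTiff",
    creationOptions=[
        "COMPRESS=LZW",   # lossless compression
        "PREDICTOR=2",    # horizontal differencing before LZW
        "TILED=YES",      # internal tiling for partial reads
    ],
)
```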
PNG would also be a good lossless option except there's no supported georegistration standard (to my knowledge). Conventional tools tend to be super sloppy regarding colorspace and PNG though.
JPEG is a widely supported lossy option targeted at human perception, but the loss metrics aren't so good and it cannot be directly deployed to a GPU.
JPEG-2000 as conventionally used is not lossless, though the loss metric is pretty small; it's not directly deployable to the GPU, and the resource use - CPU and memory - tends to make it unsuitable for a lot of my applications. There is a lossless option, but I don't know much about its resource use.
DDS/DXTN is a lossy compression that can be directly deployed to desktop GPUs, but the loss metrics aren't great and it doesn't support georegistration. I expect this will be replaced in the future by KTX2. But maybe still worth discussing.
KTX2 / Basis-U is interesting as a future capability for lossy storage targeted at rapid GPU deployment. I need to get smarter on it to understand the full implications though.
I don't see any reason to look at anything outside this list.
@UnclePoole Good overview of the best imagery options. Going back in CDB history, CDB 2.1 used compressed TIFF for imagery, and that got transitioned to JPEG 2000 for CDB 3.0 based on the inefficient compressed size (I think this was the reason).
I think that the format chosen would be based on the use case or CDB profile. I think that the data repo case needs to be lossless and use something like LZW-compressed TIFF (which can have more bit depth if source is available). For simulation, lossy data is likely OK, so choose a format like JPEG or KTX2/Basis U. An end user or constrained device case would argue for the simplest and smallest format, like JPEG.
I take lossless to mean no compression artifacts and keeping the gamma and color space the same as the original. Of course, the data might need to be transformed and warped from the original coordinate system and orientation into a particular CDB tile and WGS84, so the individual pixels might not be preserved. For color spaces, I am unaware of satellite or aerial imagery that uses color spaces other than sRGB, although if you talk about model textures or light point colors, other color spaces are likely to be used.
And there is already a CDB extension that supports imagery and textures in Near Infrared and Short-Wave Infrared, for supporting newer sensor types and NVG applications.
Just listened to a presentation on cloud optimized GeoTIFF. The presentation is here https://portal.ogc.org/files/?artifact_id=94908.
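For anyone who wants to try it, a sketch of converting a tile with GDAL's COG driver (available in GDAL 3.1 and later; option names per the driver docs):

```python
# Sketch: produce a Cloud Optimized GeoTIFF, which arranges internal
# tiling and overviews for efficient HTTP range reads.
from osgeo import gdal

gdal.Translate(
    "tile_cog.tif",
    gdal.Open("tile.tif"),
    format="COG",
    creationOptions=["COMPRESS=LZW"],  # lossless; DEFLATE/ZSTD also work
)
```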
Moving the imagery discussion here, since there are general comments and also some discussion from within the Tiling group. I'll try to summarize the state of the discussion here first
@christianmorrow Does it make sense to stick with JP2000?
Do we have time to experiment with OGC GML in lossy or lossless alternatives: FLIF? BPG? WebP? JPEG XR? AVIF? What are the trade-offs for storage size on disk, I/O performance, and CPU resource consumption between lossy and lossless - compressed or uncompressed - imagery in the context of CDB-X? Just a thought for discussion -- I think we all agree that JP2000 is a performance bottleneck... cost of disk storage isn't the problem it was 15 years ago.
@ccbrianf JP2k: I believe the tiling sub-group is discussing alternate imagery file formats (and I heard about a specific discussion of FLIF), but lossless compression seems strongly preferred in that group, and to support data repository use cases. Ironically, JP2k supports some of the large file and internal tiling schemes you suggest that are completely unused in current CDB, but it also has the obvious performance drawbacks you mention. I would support a few well-chosen format options per application domain profile though. For visualization, Basis Universal seems to make a lot of sense, but not so much for a data repository.
@lilleyse For JP2000 - the Cesium team has been looking into KTX2 as a next-gen imagery format as part of our SOCOM work - see Cesium Milestone 1 Report.pdf for a survey of KTX2 + Basis Universal. KTX2 also supports uncompressed integer and floating point data types but we haven't compared with JP2000 as carefully.
As a side note - is there a good place for imagery discussion on github? I don't want to derail this conversation too much since this is now getting a bit off topic.
@jerstlouis In our Tiles group, we discovered http://flif.info/ which seems like a very good alternative to JP2000.
@cnreediii I took a quick look at KTX2. The spec states, "The KTX file format, version 2 is a format for storing textures for GPU applications". This may not be appropriate for the general compression of any imagery that might be stored in a CDB data store. From the FLIF web site:
The Free Lossless Image Format (FLIF) is a lossless image compression format. It supports grayscale, RGB and RGBA, with a color depth from 1 to 16 bits per channel. Both still images and animations are supported. FLIF can handle sparse-color images (e.g. 256 color palettes) effectively. It has an interlaced and a non-interlaced mode; unlike PNG, interlacing usually does not come with worse compression. In terms of compression, FLIF usually outperforms other lossless compression formats like PNG, lossless WebP, BPG, JPEG 2000, JPEG XR, JPEG-LS, APNG, MNG, or GIF.
I think that the CDB X data store should accommodate the use of any image compression format but that after experiments, such as an OGC Interop Initiative, we would be in a much better position to recommend a best practice for image compression.