markcmiller86 opened this issue 3 years ago
Happy to join in! :-)
I'm attaching our development document for how API versioning is implemented. As Mark says, the current version of the library can always read/write files that are produced with earlier versions of the library.
How would you like to proceed from here?
Super. Thanks @qkoziol.
Also, @visit-dav/visit-developers by next major release, I mean 3.2 or 3.3, not 4.0
> How would you like to proceed from here?
Maybe it's worth submitting a draft PR on just one HDF5 plugin (I'm not sure which of the 20+ HDF5 plugins we have would be the most illustrative) to demonstrate the work involved, and continuing the conversation there?
My guess is that the SAMRAI plugin might be the best first case for illustrative purposes.
> > How would you like to proceed from here?
>
> Maybe it's worth submitting a draft PR on just one HDF5 plugin (I'm not sure which of the 20+ HDF5 plugins we have would be the most illustrative) to demonstrate the work involved, and continuing the conversation there? My guess is that the SAMRAI plugin might be the best first case for illustrative purposes.
OK, I can start there. Which version of the HDF5 library's API should I use? The API routines from the latest version (1.12.0), or do you need the plugin to link against an earlier HDF5 release?
I am inclined to go with the latest version (1.12.0) and see if we need, for some reason, to back away from that. I don't think we will, since 99.9% of the use cases for VisIt are read-only.
FYI: both Silo and Conduit will need changes to use the 1.12.0 API, due to changes in the compatibility macros:
https://github.com/LLNL/conduit/issues/711
Here is an example compile issue for Silo:
```
silo_hdf5.c: In function ‘db_hdf5_initiate_close’:
silo_hdf5.c:5442:47: warning: format ‘%d’ expects argument of type ‘int’, but argument 4 has type ‘hid_t {aka long int}’ [-Wformat=]
sprintf(tmp, "\"%.235s\" (id=%d), ", name, ooids[i]);
~^ ~~~~~~~~
%ld
In file included from /masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/H5public.h:32:0,
from /masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/hdf5.h:22,
from silo_hdf5_private.h:71,
from silo_hdf5.c:72:
silo_hdf5.c: In function ‘db_hdf5_SortObjectsByOffset’:
/masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/H5version.h:746:23: error: too few arguments to function ‘H5Oget_info3’
#define H5Oget_info H5Oget_info3
^
silo_hdf5.c:15779:18: note: in expansion of macro ‘H5Oget_info’
H5Oget_info(oid, &oinfo)<0 ||
^~~~~~~~~~~
In file included from /masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/H5Apublic.h:22:0,
from /masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/hdf5.h:23,
from silo_hdf5_private.h:71,
from silo_hdf5.c:72:
/masonry/build-mb-develop-ci-smoke/thirdparty_shared/third_party/hdf5/1.12.0/ci/include/H5Opublic.h:187:15: note: declared here
H5_DLL herr_t H5Oget_info3(hid_t loc_id, H5O_info2_t *oinfo, unsigned fields);
^~~~~~~~~~~~
silo_hdf5.c:15783:38: error: ‘H5O_info2_t {aka struct H5O_info2_t}’ has no member named ‘addr’
iop[i].offset = oinfo.addr;
^
Makefile:386: recipe for target 'silo_hdf5.lo' failed
make[4]: *** [silo_hdf5.lo] Error 1
```
Similar issue: #3621
The MCNP folks are interested in following progress on this, in particular Joel Kulesza from Los Alamos.
All (well, most) HDF5 plugins in VisIt use HDF5's 1.6 API (about 17 years old now). We can continue to build them against newer HDF5 libraries by defining the H5_USE_16_API macro before including hdf5.h. However, this approach is beginning to show its age, and it doesn't work for functions added to the HDF5 API after 1.6. So those functions, if they get changed in a later release, cause compilation errors.
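For context, the compatibility pin is just a preprocessor definition that must be visible before the first hdf5.h include (a build-configuration fragment, not runnable on its own; equivalently, -DH5_USE_16_API can go on the compile line):

```c
/* Pin HDF5's 1.6 compatibility API.  Must appear before the first
   hdf5.h include in the translation unit (or be passed as
   -DH5_USE_16_API in CFLAGS). */
#define H5_USE_16_API
#include <hdf5.h>
```

With the macro in effect, versioned names are mapped back to their 1.6-era forms, which is exactly why functions introduced after 1.6 fall outside its protection.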
In the next major release of VisIt, I would like to upgrade all the HDF5 code in VisIt to a newer (ideally the newest) HDF5 API. I even have an offer of coding support from an HDF5 developer (@qkoziol)! ;)
We should still be able to read any HDF5 files produced by any HDF5 producer.
However, for any HDF5 writers (Blueprint, H5Part, MOAB), we'd need to take care not to introduce HDF5 functionality that would produce data our older workflows (most of our codes are on HDF5 1.8) might not be able to read. That concern is probably limited to Blueprint, and now that I think about it, the relevant HDF5 calls are not in the plugin but in the TPL the plugin uses.
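One way to keep writer output readable by 1.8-era consumers is to cap the library-version bounds on the file-access property list. A hedged sketch, assuming an HDF5 1.10+ build (H5F_LIBVER_V18 does not exist in 1.8 itself; the function name open_compat_file is made up for illustration):

```c
/* Sketch: constrain the on-disk format so a file written with a new
   HDF5 stays readable by HDF5 1.8-based tools. */
#include <hdf5.h>

hid_t open_compat_file(const char *path)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Never produce on-disk structures newer than the 1.8 format. */
    H5Pset_libver_bounds(fapl, H5F_LIBVER_EARLIEST, H5F_LIBVER_V18);

    hid_t fid = H5Fcreate(path, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);
    return fid;
}
```

If the relevant calls live in the TPL rather than the plugin, the same property-list setting would have to be applied (or exposed) there.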
I would like to start a dialog here regarding any potential concerns.