aufdenkampe opened this issue 7 years ago
I recently came across the TsTables library, which is "a Python package to store time series data in HDF5 files using PyTables". It is a simple extension to PyTables and integrates cleanly with Pandas.
For documentation, see http://andyfiedler.com/projects/tstables/
Its target use case is minute-resolution data, and it boasts sub-second benchmarks for appending or fetching a random month of data from a 32-million-row file. The basic usage looks something like the sketch below.
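Here is a minimal sketch of its API, based on the TsTables README (the table name, column layout, and file path are my own illustrations, not ODM2-specific):

```python
import datetime

import pandas
import pytz
import tables
import tstables  # importing tstables adds create_ts() to PyTables File objects


class ValueSample(tables.IsDescription):
    timestamp = tables.Int64Col(pos=0)  # TsTables stores timestamps as int64
    value = tables.Float64Col(pos=1)


f = tables.open_file('site_values.h5', 'a')
ts = f.create_ts('/', 'SiteData', ValueSample)

# Append a DataFrame that has a timezone-aware DatetimeIndex.
rng = pandas.date_range('2017-05-01', periods=60, freq='min', tz=pytz.utc)
df = pandas.DataFrame({'value': [float(i) for i in range(60)]}, index=rng)
ts.append(df)

# Fetch an arbitrary time range. TsTables partitions rows by month
# internally, which is what makes range reads fast on large files.
start = datetime.datetime(2017, 5, 1, 0, 15, tzinfo=pytz.utc)
end = datetime.datetime(2017, 5, 1, 0, 45, tzinfo=pytz.utc)
rows = ts.read_range(start, end)

f.close()
```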
I'm imagining that all the results from a given site could be mirrored from ODM2 into a single HDF5 file, and that requests to fetch and subset the data would go to that file rather than to the ODM2 database. Clients, however, would only write to the database. A rough sketch of that one-way mirror follows.
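As an illustration, assuming ODM2's standard table and column names (the connection string, result ID, and file name are placeholders):

```python
# A hedged sketch of the one-way mirror: PostgreSQL remains the system of
# record for writes, and new values for a Result are periodically appended
# to the site's HDF5 file, which then serves all fetch/subset requests.
import pandas
import sqlalchemy

engine = sqlalchemy.create_engine('postgresql://user:pass@localhost/odm2')
last_mirrored = pandas.Timestamp('2017-08-01')  # high-water mark from the previous sync

query = """
    SELECT valuedatetime, datavalue
    FROM odm2.timeseriesresultvalues
    WHERE resultid = %(resultid)s AND valuedatetime > %(since)s
    ORDER BY valuedatetime
"""
new_values = pandas.read_sql(query, engine,
                             params={'resultid': 42, 'since': last_mirrored},
                             index_col='valuedatetime')

# Append only the new rows to the site's HDF5 mirror.
new_values.to_hdf('site_42.h5', 'result_42', format='table', append=True)
```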
Thoughts? This sounds less clunky than mirroring to an InfluxDB instance and fetching from there.
Here are the notes on how @roelofversteeg uses HDF5, from our Aug. 29 ODM2 team call:
Integrate with HDF5 for DTS (distributed temperature sensing)
@aufdenkampe - It isn't just speed of file access. We also need integrity of transactions and data across multiple simultaneous users with reads and writes in a web environment. Individual files are not well suited for this.
@horsburgh, the approach that I've read about uses a hybrid of an RDBMS (mostly PostgreSQL) with HDF5 files. The idea is that PostgreSQL would handle all the writes from multiple users, and also most (if not all) of the reads, so transactional integrity is maintained. I think there would be two approaches to integrating with HDF5 files. For both of these approaches, there would be a 1:1 map between a Result (or a Dataset) and its specific HDF5 file, so the read path could look something like the sketch below.
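A sketch of that read path, with writes still going through PostgreSQL and a sync job keeping the files current (the directory layout and key name are assumptions):

```python
# Read-side helper under the 1:1 Result-to-file mapping described above.
import os

import pandas

HDF5_ROOT = '/data/odm2_hdf5'  # illustrative cache location


def result_file(result_id):
    """Resolve a Result to its dedicated HDF5 file (the 1:1 map)."""
    return os.path.join(HDF5_ROOT, 'result_{}.h5'.format(result_id))


def read_values(result_id, start=None, end=None):
    """Serve a (possibly subsetted) read from the Result's HDF5 file."""
    where = []
    if start is not None:
        where.append("index >= '{}'".format(start))
    if end is not None:
        where.append("index <= '{}'".format(end))
    return pandas.read_hdf(result_file(result_id), 'values',
                           where=where or None)
```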
For the EnviroDIY (and perhaps most other) use cases, only one user has the privileges to write data to a results values table, and the writing happens via our applications (POST requests), so simultaneous writes from different users are not possible for those systems.
@horsburgh, for the record, and based on Roelof's experience, it appears that there are other approaches -- such as materialized views, storing the data as array objects, indexing, etc. (as described in issue #7) -- that would be worth trying before HDF5 integration.
@aufdenkampe - agreed; these may initially be more promising simply because they use the existing machinery of PostgreSQL.
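For reference, a hedged sketch of the materialized-view option mentioned above, run through SQLAlchemy (ODM2 table and column names are assumed from the standard schema; the connection string is a placeholder):

```python
# Pre-sort values by result and time in a materialized view, so range
# queries hit an index instead of scanning the full values table.
import sqlalchemy

engine = sqlalchemy.create_engine('postgresql://user:pass@localhost/odm2')

with engine.begin() as conn:
    conn.execute(sqlalchemy.text("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS odm2.mv_resultvalues AS
        SELECT resultid, valuedatetime, datavalue
        FROM odm2.timeseriesresultvalues
    """))
    conn.execute(sqlalchemy.text("""
        CREATE INDEX IF NOT EXISTS idx_mv_resultvalues
        ON odm2.mv_resultvalues (resultid, valuedatetime)
    """))

# After a batch of new writes, refresh rather than rebuild:
with engine.begin() as conn:
    conn.execute(sqlalchemy.text(
        'REFRESH MATERIALIZED VIEW odm2.mv_resultvalues'))
```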
One of the ideas I had back in 2012-2013 when we were developing ODM2 was to use the HDF5 file format in certain cases to improve performance, given the benefits of HDF5.
I had two use cases in mind.
Given that the ODM2PythonAPI uses the Pandas library, I think we could tap into HDF5 very easily for one or more of these uses, since Pandas reads and writes HDF5 natively through PyTables.
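For example, here is a minimal sketch of the Pandas/PyTables round trip (the file path and key are illustrative); `format='table'` enables on-disk querying, which is what makes subsetting large series cheap:

```python
import pandas

# Roughly 69 days of minute data, standing in for a Result's values.
rng = pandas.date_range('2017-01-01', periods=100000, freq='min')
df = pandas.DataFrame({'datavalue': range(100000)}, index=rng)

# Write once (or append in chunks as new data arrives).
df.to_hdf('results.h5', 'result_42', format='table')

# Read back just a slice, without loading the whole table into memory.
subset = pandas.read_hdf('results.h5', 'result_42',
                         where='index >= "2017-01-15" & index < "2017-02-15"')
```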
In writing this, I came across some recent posts from people who are not happy with HDF5 (such as http://cyrille.rossant.net/moving-away-hdf5/ and https://www.rustprooflabs.com/2014/11/data-processing-in-python). I read the comments on the first of these two articles, and it sounds like most of the issues have been with improper direct use of the C library (and with the lack of a JavaScript library).
People who use the Python libraries (h5py and PyTables) seem to have very positive experiences. The facts that PyTables is actively supported by ContinuumIO and NumFOCUS, that it is an important package in the SciPy ecosystem, and that HDF5 underlies Matlab's binary format all suggest that HDF5 is still well loved and useful to many.
If we approach the use of HDF5, we would want to use refined libraries (such as the Pandas/PyTables integration) and to take small steps, such as the time-series data caching work that Jeff was just describing to me, related to improving the CSV delivery of EnviroDIY datasets. A sketch of what that caching might look like follows.
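A hedged sketch of that caching step; `fetch_from_odm2` and the cache path are hypothetical names, not existing code:

```python
# Serve CSV downloads from an HDF5 cache, falling back to the database
# only for rows the cache does not have yet.
import pandas

CACHE = 'envirodiy_cache.h5'  # illustrative cache file


def csv_for_result(result_id):
    key = 'result_{}'.format(result_id)
    try:
        cached = pandas.read_hdf(CACHE, key)
        since = cached.index.max()
    except (IOError, KeyError):
        cached, since = pandas.DataFrame(), None

    # Hypothetical helper that queries ODM2 for values newer than `since`.
    new = fetch_from_odm2(result_id, since)
    if not new.empty:
        new.to_hdf(CACHE, key, format='table', append=True)
        cached = new if cached.empty else pandas.concat([cached, new])

    return cached.to_csv()
```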
I'm interested in your thoughts!
cc: @horsburgh, @sreeder, @emiliom, @lsetiawan, @miguelcleon