This is much closer now--`test_reverse_record_array` even shows the C accumulator machinery operating on a pre-filtered dataframe that has been converted into a buffer of records, and retrieves somewhat-reasonable file count values locally from the derived metrics. If I don't pre-filter I do see the same total files as described in gh-867 (1026 total files), so that's a good sign...
There's a memory access/free-related issue caused by something in `test_reverse_record_array`--feel free to give me a hand with that.. otherwise pretty close I think...
Ok, CI is passing now, so this is getting to be in pretty solid shape. I'll summarize some useful things reviewers could help with:

- use `memory_profiler` to stress test repeat calls
- help me figure out if we should be worried about the mismatch in accumulator file count values for the stat-filter scenario (see the one `xfail`ed test case in this PR and the gh-867 discussion); that will take some digging to figure out I suspect, but since we match `darshan-parser` perfectly when we don't filter the stat'd files, I'm not so sure we have any problem on the Python side here; still, it would be good to get an explanation for the filtered case
- suggestions for dealing with this warning from the test suite:

  ```
  tests/test_cffi_misc.py: 10 warnings
    /home/tyler/python_310_darshan_venv/lib/python3.10/site-packages/darshan/backend/cffi_backend.py:716: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
      buf[0] = ffi.from_buffer(rec_arr)
  ```

- determine if there are any modules that are supported by the accumulator machinery that wouldn't automatically work with the infrastructure that is working with `STDIO` and `POSIX` here--for example, `LUSTRE` may have variable-length fields that require additional shims, but I don't think `LUSTRE` is supported for accumulation anyway. Related to this--more relevant test cases you'd like to see.
- keep in mind that while the long-term plan is to hoist the analysis infrastructure to the Python level (C/C++ under the hood of course, but only in the context of the ecosystem that accepts Python objects/buffers by design), "eventually" might mean a few months or years, so that's why I'm providing this option

Another thing I thought of--it may be possible to simplify the complexity in `test_reverse_record_array` if we could replace stuff like `darshan_accumulator_create()` with an equivalent struct initialization on the CFFI/Python/NumPy recarray side (a rough sketch of the idea follows below). Whether it is worth it depends on how complex the logic is, and how annoying it is to do portably in Python across the different modules.
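To make that idea a bit more concrete, here is a minimal sketch of a NumPy-side "struct initialization"; the field names and array sizes are hypothetical stand-ins, not the actual darshan record layout:

```python
import numpy as np

# hypothetical record layout standing in for a darshan module record;
# the real structs differ per module
rec_dtype = np.dtype([
    ("id", np.uint64),
    ("rank", np.int64),
    ("counters", np.int64, (64,)),
    ("fcounters", np.float64, (17,)),
])

# zero-initialized "struct" -- roughly the Python-side analogue of what
# darshan_accumulator_create() would otherwise set up on the C side
rec = np.zeros(1, dtype=rec_dtype)
rec["id"] = 42
rec["rank"] = 0

# a raw bytes view like this could then be handed to C via CFFI
raw = rec.tobytes()
assert len(raw) == rec_dtype.itemsize
```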
I still need to review the tests, but the conversion code looks good to me. I don't have any suggestions yet, but maybe I will after I work through some of the issues you mention for reviewers. I'm not sure it's a big deal, but conceptually I would think you could always return a buffer of the underlying record type, as opposed to a `char[]` as you do in the multi-record case--I wonder if that has something to do with the warning you mention.
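On the warning itself: cffi is complaining about stuffing the `char *` that `ffi.from_buffer()` returns into a differently-typed pointer slot. A minimal illustration of the explicit-cast fix the warning message suggests (the struct and array here are toy stand-ins, not the actual darshan types):

```python
import numpy as np
import cffi

ffi = cffi.FFI()
ffi.cdef("struct rec { int64_t counters[4]; };")  # toy record type

# a NumPy buffer standing in for the record array
rec_arr = np.zeros(1, dtype=np.dtype([("counters", np.int64, (4,))]))

buf = ffi.new("struct rec *[1]")  # array of one record pointer

# implicit version (this is what triggers the UserWarning above):
#   buf[0] = ffi.from_buffer(rec_arr)
# explicit cast, as the warning message recommends:
buf[0] = ffi.cast("struct rec *", ffi.from_buffer(rec_arr))
```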
> determine if there are any modules that are supported by the accumulator machinery that wouldn't automatically work with the infrastructure that is working with `STDIO` and `POSIX` here--for example, `LUSTRE` may have variable-length fields that require additional shims, but I don't think `LUSTRE` is supported for accumulation anyway. Related to this--more relevant test cases you'd like to see.
Luckily, all of the modules that would ultimately support accumulation follow the same structure as what you already have implemented for POSIX/STDIO with little tweaks. Lustre is one that won't support that. The only issue I can think of is that the `H5D` and `PNETCDF_VAR` modules have an additional field in the structure that will need to be packed, but that's probably not a big hassle.
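Just to illustrate what that packing means on the Python side, here is a hedged sketch with hypothetical layouts (the extra-field name is illustrative, not the exact H5D/PNETCDF_VAR definition):

```python
import numpy as np

# hypothetical base record layout
base = np.dtype([
    ("id", np.uint64),
    ("rank", np.int64),
    ("counters", np.int64, (4,)),
])

# hypothetical H5D/PNETCDF_VAR-style record: one extra member that the
# DataFrame -> record converter would also need to pack into the buffer
extended = np.dtype([
    ("id", np.uint64),
    ("rank", np.int64),
    ("file_rec_id", np.uint64),  # the additional field (illustrative name)
    ("counters", np.int64, (4,)),
])

assert extended.itemsize == base.itemsize + 8
```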
> help me figure out if we should be worried about the mismatch in accumulator file count values for the stat-filter scenario (see the one `xfail`ed test case in this PR and the gh-867 discussion, "file category count mismatch between darshan-parser and darshan-job-summary.pl"); that will take some digging to figure out I suspect, but since we match `darshan-parser` perfectly when we don't filter the stat'd files, I'm not so sure we have any problem on the Python side here; still, it would be good to get an explanation for the filtered case
I just pushed a fix for this and a couple of other smaller things for this test. The short story is that these derived metrics presented in the old Perl report are actually summed across both POSIX and STDIO modules, so we needed to account for that difference in this test since it's POSIX only. I modified the expected values to reflect the POSIX module only values -- I started to add STDIO support for the test, but it was getting messy and I'm not sure there's much value in us testing the expected values for combining modules.
For a little more background, in the past we have often combined POSIX/STDIO data when doing some analyses, since they are both found at the lowest layer of the I/O stack (that we can instrument) and they don't layer on top of each other, so we don't have to worry about double counting statistics. If you recall, we had a similar issue with the "file access by category" plots, where we needed to update them to additionally include STDIO data. That said, I'm fine with our current approach of providing these stats on a per-module basis in the new PyDarshan job summary tool, so I don't think there's much to follow up on here.
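As a toy illustration of the per-module vs. combined accounting (made-up numbers, nothing from the real log):

```python
import pandas as pd

# made-up derived file counts per module
per_module = pd.Series({"POSIX": 3, "STDIO": 2})

# old Perl-report style: a single total summed across both modules
combined_total = per_module.sum()   # 5

# new per-module PyDarshan style, and what the updated test now expects
posix_only = per_module["POSIX"]    # 3
```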
I just pushed another commit that, among other things, addresses the warning you mentioned related to an implicit cast. I think it simplifies `_df_to_rec()` a little bit as an added bonus, which now always returns a bytes object--previously it was returning different things depending on whether you gave it one or multiple records.
The tests have been updated a bit, too, and are passing for me locally:

- `test_df_to_rec()` now casts the returned bytes object into an appropriate record pointer type it can use to sanity check specific fields
- `test_reverse_record_array()` was not handling one of the arguments to the `accumulator_emit()` routine properly, so I fixed that, though we aren't using this value yet

I checked all 4 of your commits and they look like clear improvements to me. I suppose you could also update the return value docstring for `_df_to_rec` now that it is simply `bytes`, but it is a private (`_`-prefixed) function anyway.
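For reference, the "cast the returned bytes into a record pointer" pattern looks roughly like the following; the struct layout is a trimmed-down stand-in and the packed values are fabricated for the example, not actual `_df_to_rec()` output:

```python
import struct
import cffi

ffi = cffi.FFI()
# toy record type standing in for a darshan module record
ffi.cdef("""
    struct mini_rec {
        int64_t id;
        int64_t rank;
        int64_t counters[2];
    };
""")

# stand-in for the bytes object _df_to_rec() returns
rec_bytes = struct.pack("=4q", 123, 0, 7, 42)

# view the raw bytes as a typed record pointer and sanity check fields,
# in the spirit of what test_df_to_rec() does
rec = ffi.cast("struct mini_rec *", ffi.from_buffer(rec_bytes))
assert rec.id == 123 and rec.counters[1] == 42
```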
I guess some decision will have to be made on whether you proceed with this PR first and then modify some of the older aggregation PRs to use this machinery, instead of the earlier approach of hand-feeding records from Python to the accumulator.
`mypy` 1.0.0 was released two days ago--maybe just pin to `mypy<1.0.0` to silence CI for now (probably easy to do the proper fix as well.. but not a priority).
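For example, the pin is a one-liner wherever CI installs mypy (the exact requirements file name varies by setup):

```
mypy<1.0.0
```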
I think all looks good here. Not an ideal solution, but our hands are tied until we can move more stuff into Python. In the meantime, this will let us use the existing C aggregation stuff while giving us flexibility (e.g., for filtering) using pandas.
- add a per-record converter from DataFrame to darshan-util C record format that passes tests for single records (rows) with `counters` and `fcounters` (but not `rank` & `id` yet)
- add a full df->multi-record converter; this one still leads to segfaults with the C inject func (probably corrupted data..)
- one of the two added tests passes, the other xfails, feel free to push in fixes
- as discussed in the meeting earlier today (Rob R. seems to agree), this likely shouldn't be a long-term solution--just temporary until we can hoist all analysis infrastructure to the Python level
- some of the accumulator API prototypes are copied in here from other PRs for testing purposes only.. since the main purpose of doing something like this would be to allow filtering DataFrames and then passing the raw records to the C accum interface (a rough sketch of that flow follows below).. (well, you also avoid re-reading the log file records)
- I'll leave backwards compatibility and support beyond STDIO/POSIX drafted here to Argonne folks most likely
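Since the filter-then-inject flow is the whole point of the PR, here is a rough, hedged sketch of the intended usage; the log path and counter name are placeholders, the exact `_df_to_rec()` call shape is assumed from the discussion above, and the C accumulator calls are only summarized in comments:

```python
import darshan
from darshan.backend.cffi_backend import _df_to_rec  # private helper discussed above

# placeholder log path
report = darshan.DarshanReport("example.darshan", read_all=True)
posix_df = report.records["POSIX"].to_df()  # dict of 'counters'/'fcounters' DataFrames

# filter at the pandas level, e.g. keep only records with at least one open
counters = posix_df["counters"]
keep = counters["POSIX_OPENS"] > 0
filtered = {
    "counters": counters[keep],
    "fcounters": posix_df["fcounters"][keep.values],
}

# convert the filtered DataFrame(s) back into a raw buffer of C records;
# the exact signature of _df_to_rec is an assumption here
rec_buf = _df_to_rec(filtered, "POSIX")

# the raw records can then be fed to darshan-util's C accumulator API
# (names from the discussion above; CFFI plumbing omitted):
#   darshan_accumulator_create(...)  ->  darshan_accumulator_inject(rec_buf, ...)
#   ->  darshan_accumulator_emit(...) for the derived metrics
```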