Open rrdrake opened 2 years ago
Did not expect a model to have over 2billion element blocks, so haven't checked that code path very thoroughly... Will take a look. Would be interesting to see what else breaks (internally and externally) with >2 billion blocks...
Truth is (no pun indented :) that in this use case, the results variables would be uniform across all blocks, so there would be no need to store or get the truth table. Thus, I consider this issue low priority. It is just a consistency thing.
Yes, but there are many other areas in the code which would currently fail with >2 billion blocks... I'm also not sure whether the underlying HDF5 or netCDF format would support this...
I've looked a little and found a few areas that would need fixing... Will create a branch that I can work on periodically. Not sure how soon you will need >2 billion blocks?
The SNL SABLE code is adaptive block structured and big problems could contain more than 2.x billion element blocks.
I noticed in the exodus source that `ex_get_truth_table()` and `ex_put_truth_table()` are using an `int` for the number of blocks. In other places, like `ex_get_init()`, a `void_int*` is used in order to allow for 64-bit integers, and `int64_t` is used for `ex_put_init()`.

This is not an issue with `ex_get_object_truth_vector()` because it receives an `ex_entity_id` for the block id.

Is this a lingering 64-bit TODO thing, or is there a reason that an `int` is being used here?