BigDataset(RecordType, File): a dataset that maps each column of the RecordType (defined below) to a block in the File.
big_dataset_read(bd, offset, size, buf), big_dataset_write(bd, offset, size, buf)
Use BigArray to view buf with the correct strides, and big_block_read / big_block_write to do the I/O on each block in the dataset (see the sketch after this list).
big_dataset_read_mpi, big_dataset_write_mpi:
same as above; some flags may need to be plumbed through.
big_dataset_grow(bd, size)
big_dataset_grow_mpi(bd, size)
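A minimal sketch of what the serial read path could look like, assuming the existing bigfile C calls big_file_open_block, big_array_init, big_block_seek, big_block_read and big_block_close with their current signatures. The BigRecordField / BigRecordType / BigDataset struct layouts and the column_offset helper are illustrative assumptions, not part of the proposal.

```c
#include <stddef.h>
#include "bigfile.h"   /* BigFile, BigBlock, BigBlockPtr, BigArray */

/* Illustrative record / dataset structs (layout is an assumption). */
typedef struct {
    const char * name;   /* column name == block name */
    const char * dtype;  /* e.g. "f8", "i4" */
    size_t itemsize;     /* bytes per item of dtype */
    int nmemb;           /* items per record, e.g. 3 for a position */
} BigRecordField;

typedef struct {
    BigRecordField * fields;
    int nfield;
    size_t itemsize;     /* bytes per packed record */
} BigRecordType;

typedef struct {
    BigFile * bf;
    const BigRecordType * rtype;
} BigDataset;

/* Byte offset of column icol inside one packed record. */
static size_t column_offset(const BigRecordType * rt, int icol)
{
    size_t off = 0;
    int i;
    for(i = 0; i < icol; i ++) {
        off += rt->fields[i].itemsize * rt->fields[i].nmemb;
    }
    return off;
}

/* Read `size` records starting at record `offset` into the packed buffer
 * `buf`: one strided BigArray view of buf per column, one big_block_read
 * per block.  big_dataset_write would mirror this with big_block_write. */
int big_dataset_read(BigDataset * bd, ptrdiff_t offset, size_t size, void * buf)
{
    const BigRecordType * rt = bd->rtype;
    int icol;
    for(icol = 0; icol < rt->nfield; icol ++) {
        const BigRecordField * f = &rt->fields[icol];
        BigBlock bb;
        BigBlockPtr ptr;
        BigArray array;
        size_t dims[2] = { size, (size_t) f->nmemb };
        ptrdiff_t strides[2] = { (ptrdiff_t) rt->itemsize,
                                 (ptrdiff_t) f->itemsize };

        if(0 != big_file_open_block(bd->bf, &bb, f->name)) return -1;

        /* View the icol-th column of buf with record-sized strides. */
        big_array_init(&array, (char *) buf + column_offset(rt, icol),
                       f->dtype, 2, dims, strides);

        big_block_seek(&bb, &ptr, offset);
        if(0 != big_block_read(&bb, &ptr, &array)) {
            big_block_close(&bb);
            return -1;
        }
        big_block_close(&bb);
    }
    return 0;
}
```

Opening and closing each block inside the loop is exactly what Concern 1 below worries about; caching open BigBlock handles in BigDataset would avoid it.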
Concern 1: each read or write may need to open and close 2 x Ncolumn physical files.
Is that fast enough for a PM step? Probably fine.
Concern 2: what if some blocks in the record type exist and some do not? We may need to create and grow them on the fly (see the sketch below).
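One possible shape for the create-on-the-fly fallback of Concern 2, assuming the serial bigfile call big_file_create_block(bf, &bb, blockname, dtype, nmemb, Nfile, fsize[]); the ensure_column helper, its single-file layout and the reuse of the illustrative BigRecordField above are assumptions.

```c
/* Open the block backing one column, creating it empty when it is missing
 * (Concern 2). */
static int ensure_column(BigFile * bf, BigBlock * bb,
                         const BigRecordField * f, size_t initial_size)
{
    if(0 == big_file_open_block(bf, bb, f->name)) {
        return 0;                        /* the block already exists */
    }
    /* Start with a single physical file holding initial_size records;
     * big_dataset_grow / big_dataset_grow_mpi would extend it later. */
    size_t fsize[1] = { initial_size };
    return big_file_create_block(bf, bb, f->name, f->dtype, f->nmemb,
                                 1, fsize);
}
```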
These changes would allow us to use bigfile for journals (e.g. recording blackhole details per step).
RecordType = [ ( column name, dtype ) ]: a list of (column name, dtype) pairs describing one record.
big_record_set(rt, void * record_buf, icol, const void * value) and big_record_get(rt, const void * record_buf, icol, void * value) set / get the value of column icol in a packed record buffer.
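A minimal sketch of these record accessors, reusing the illustrative BigRecordType / column_offset definitions from the sketch above; the back-to-back packed column layout is an assumption.

```c
#include <string.h>

/* Set column icol of one packed record from *value. */
void big_record_set(const BigRecordType * rt, void * record_buf,
                    int icol, const void * value)
{
    const BigRecordField * f = &rt->fields[icol];
    memcpy((char *) record_buf + column_offset(rt, icol),
           value, f->itemsize * f->nmemb);
}

/* Copy column icol of one packed record into *value. */
void big_record_get(const BigRecordType * rt, const void * record_buf,
                    int icol, void * value)
{
    const BigRecordField * f = &rt->fields[icol];
    memcpy(value, (const char *) record_buf + column_offset(rt, icol),
           f->itemsize * f->nmemb);
}
```

For the journal use case, the caller would big_record_set each column of one record and big_dataset_write a batch of such records per step.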