TonyXiang8787 opened this issue 7 months ago
This will make the PyArrow conversions also easier ( https://github.com/orgs/PowerGridModel/discussions/2 )
Relates to https://github.com/PowerGridModel/power-grid-model-io/issues/190
Additional arguments to the deserialization functions
Old proposal: two arguments:

```python
set[ComponentType] | list[ComponentType] | None | dict[ComponentType, set[str] | list[str] | None]
```

New proposal: only one additional argument:

```python
set[ComponentType] | list[ComponentType] | None | ... | dict[ComponentType, set[str] | list[str] | None | ...]
```

PGM deduces the dataset format from that `data_filter`. Note: the difference between the two options is the `...` option.
```python
# All row-based
data_filter = None

# All columnar
data_filter = ...

# Specific components, row-based
data_filter = ["node", "line"]
data_filter = {"node", "line"}

# Specific components, mixed row-based and columnar data
data_filter = {
    "node": None,              # row-based
    "line": ["id", "p_from"],  # columnar with these attributes
    "transformer": ...,        # columnar with all attributes; (proposed) Ellipsis is a Python built-in constant
}
```
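As a toy illustration of the filtering semantics these values imply (the helper name `apply_data_filter` and the plain-dict dataset shape are hypothetical, for illustration only, not the actual API):

```python
def apply_data_filter(dataset, data_filter):
    # Hypothetical sketch of the proposed data_filter semantics.
    if data_filter is None or data_filter is ...:
        return dataset  # keep everything (row-based resp. columnar)
    if not isinstance(data_filter, dict):
        # a set/list of components means: keep those components entirely
        data_filter = {component: ... for component in data_filter}
    filtered = {}
    for component, attributes in data_filter.items():
        if component not in dataset:
            continue
        if attributes is None or attributes is ...:
            filtered[component] = dataset[component]  # whole component
        else:
            filtered[component] = {
                name: values
                for name, values in dataset[component].items()
                if name in attributes
            }
    return filtered


dataset = {
    "node": {"id": [1, 2], "u_rated": [150e3, 10e3]},
    "line": {"id": [3], "p_from": [1e6], "q_from": [2e5]},
}
result = apply_data_filter(dataset, {"line": ["id", "p_from"]})
# result keeps only "line" with the "id" and "p_from" attributes
```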
Implementation proposal step 7.ii:

> Make `id` optional in the update dataset: in the main core, we need special treatment in `is_update_independent` to make `id` an optional attribute in the batch update dataset. ii. For a certain component, if the buffer is row-based, we use the same logic as now.

Loosening the requirement that `id` be present in columnar data means that the serialized data may not contain `id` anymore. If a columnar dataset without `id`s is serialized and then deserialized as a row-based dataset, the row-based dataset does not contain `id`s anymore either (all `na_ID`). According to your proposed logic, this is not allowed to be used as update data. That breaks the symmetry between row-based and columnar datasets. I don't think we should do that, which means that the id-less lookup logic should also be updated for row-based datasets.
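The symmetry argument can be made concrete with a small numpy sketch (the two-field dtype is a simplified stand-in for the real struct; PGM encodes a missing id as the minimum int32 value):

```python
import numpy as np

NA_ID = np.iinfo(np.int32).min  # PGM's null value for int32 ids

# Columnar update data without an "id" buffer...
columnar = {"p_specified": np.array([1.0e6, 2.0e6])}

# ...round-trips through serialization into a row-based buffer whose "id"
# column is entirely null:
row_dtype = np.dtype([("id", np.int32), ("p_specified", np.float64)])
row_based = np.empty(len(columnar["p_specified"]), dtype=row_dtype)
row_based["id"] = NA_ID
row_based["p_specified"] = columnar["p_specified"]

# Both carry exactly the same information, so an id-less update that is
# legal for columnar data should also be legal for its row-based round-trip.
assert (row_based["id"] == NA_ID).all()
```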
Good point. I have adjusted the step proposal.
Continuing this thread and this thread from #799 for potential follow-up improvements to columnar data here because this issue is still alive and kicking. We can convert this to one or more new issues if necessary:
- Since we don't have a create-buffer function for columnar buffers (probably because there is no value in it, given the nature of columnar data), we should document this somewhere, as there is an "inconsistency" in how row-based and columnar data are managed at the C API level. Or at least those are my thoughts; it may lead to confusion from the user's perspective.
I've also been thinking about that, but I don't know where to put it. Do you have a good proposal?
Not sure yet; it probably makes sense to comment something about it directly under `PGM_create_buffer`, but it should probably also be documented standalone somewhere else.
- Should we (at some point) make something like `is_columnar` available through the C API? It may not be needed, but given that we offer functions to check whether something is a batch and what its size is, it would probably go along with those (and may even provide value on the Python side).

I think that makes sense, but maybe out of scope for `v1.10.0`. @TonyXiang8787 do you have any thoughts on this?
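On the Python side such a check could be as simple as the sketch below (a hypothetical helper, not an existing PGM function), since columnar component data is a mapping of attribute names to arrays while row-based data is a single structured array:

```python
import numpy as np

def is_columnar(component_data):
    # Hypothetical helper: columnar data is a dict of attribute-name ->
    # homogeneous array; row-based data is one structured numpy array.
    return isinstance(component_data, dict)

row_based = np.zeros(2, dtype=[("id", np.int32), ("u_rated", np.float64)])
columnar = {"id": np.array([1, 2]), "u_rated": np.array([150e3, 10e3])}
assert not is_columnar(row_based)
assert is_columnar(columnar)
```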
No action items, but since there was a miscommunication within the team, I will add the full customer journey illustrated in https://github.com/PowerGridModel/power-grid-model/issues/548#issuecomment-2333399651 here:

1. `json_serialize_to_file` (without any additional options) so that they have a repro case
   - b. the update data now contains no IDs
2. debugging in progress
   - a. (assuming row-based and columnar behave differently):
   - b. (assuming row-based and columnar do behave the same):
Background
Power Grid Model currently uses the row-based buffer to share the data across the C-API. For example, the memory layout of a node input buffer looks like:
Where `X` is a meaningful byte and `O` is an empty byte to align the memory. In this way we can match the input/update/output data structs exactly as we do in the calculation core. This delivers performance benefits, since we avoid any copies and the memory layout matches exactly.
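The alignment effect can be reproduced with a numpy structured dtype. This is an illustrative sketch, not the actual PGM dtype definition; it assumes a node input struct with an int32 `id` and a float64 `u_rated`:

```python
import numpy as np

# A C-aligned struct: int32 "id" followed by float64 "u_rated".
node_dtype = np.dtype(
    {"names": ["id", "u_rated"], "formats": [np.int32, np.float64]},
    align=True,  # insert padding bytes, like a C compiler would
)
# 4 meaningful bytes (id) + 4 alignment bytes + 8 meaningful bytes (u_rated)
assert node_dtype.itemsize == 16
assert node_dtype.fields["u_rated"][1] == 8  # u_rated starts after the padding
```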
When we need to leave some attributes unspecified in input/update, we set the pre-defined `null` value in place, so the core knows the value should be ignored.

Problem
While this design is CPU-wise very efficient, it can be memory-wise inefficient: attributes that are not needed still occupy space in the buffer and have to be filled with `null`. There is a strong case to support columnar data buffers. We give two real-world examples of this issue.
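The difference between the two buffer styles can be sketched with numpy. The `sym_load` update struct below is a simplified assumption (int32 `id`, int8 `status`, float64 `p_specified`/`q_specified`), chosen so that the aligned row size comes out at 24 bytes:

```python
import numpy as np

# Assumed, simplified sym_load update struct; aligned size is 24 bytes.
sym_load_update = np.dtype(
    {"names": ["id", "status", "p_specified", "q_specified"],
     "formats": [np.int32, np.int8, np.float64, np.float64]},
    align=True,
)
n = 1000 * 1000  # 1000 scenarios x 1000 sym_load

# Row-based: one structured buffer, every attribute always present.
row_based = np.zeros(n, dtype=sym_load_update)

# Columnar: only the attributes we actually need.
columnar = {
    "id": np.zeros(n, dtype=np.int32),
    "p_specified": np.zeros(n, dtype=np.float64),
}
assert row_based.nbytes == 24_000_000
assert sum(a.nbytes for a in columnar.values()) == 12_000_000  # 50% saved
```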
Example of update buffer

If we have an update buffer of 1000 scenarios of 1000 `sym_load`, the buffer size is `24 * 1000 * 1000 = 24,000,000` bytes. However, we might only need to specify `id` and `p_specified`. If we could provide these two arrays separately, the total buffer size would be `(8 + 4) * 1000 * 1000 = 12,000,000` bytes. The reduction in memory footprint is 50%!

Example of output buffer
If we get a result buffer of 1000 scenarios of 1000 `line`, the buffer size is `80 * 1000 * 1000 = 80,000,000` bytes. However, we might only need the `loading` output, not even `id`, since we already know the `id` order from the input. The buffer size would then be `8 * 1000 * 1000 = 8,000,000` bytes. We can save 90% of the memory footprint!

Proposal
We propose to support columnar data buffers across the C-API (and further in the Python API). Both the PGM core and serialization need to support that.
C-API
We already have the `dataset` concept at the C-API boundary, so this feature should not introduce breaking changes in the C-API. Concretely, we add additional functions like `PGM_dataset_*_add_attribute_buffer` to allow the user to add columnar attribute buffers to a dataset. The user can call the dataset as below:

Python API
In the Python API, four non-breaking changes are expected.

- Input/update data per component can be either a structured numpy array (e.g. created via `initialize_array`) or a dictionary of numpy homogeneous arrays (e.g. `{"id": [1, 2], "u_rated": [150e3, 10e3]}`).
- For calculation output, `output_component_types` is used. The Python wrapper needs to decide whether to create a structured array or a dictionary of homogeneous arrays per component. We need to figure out how to maintain backwards compatibility.

Decision made on step 3 (deserialization): for deserialization we support either row-based or columnar deserialization (function argument: an Enum). If a user wants to deserialize to columnar data, the default is to deserialize all data present. A user can give an optional function argument to specify the desired components and attributes; in that case, deserialization plus a filter (for the specific components and attributes) happens. Let's call this optional function argument `filter`. Make sure this behavior is documented well, and document that providing a filter might result in loss of data.

Make id optional for batch update dataset in columnar format
From the user's perspective, one would definitely like to provide a columnar batch dataset where `id` is not provided for a certain component. In that case, it should be inferred that the elements whose attributes are updated via the columnar buffer are in exactly the same sequence as the input data. This is a realistic use-case and will be appreciated by users, as it saves the additional step of assigning exactly the same `id` as in the input data. The following Python code should work:

Implementation Proposal
To make this feature possible, the following implementation suggestions are proposed for the C++ core:

- Rename `DatasetHandler` to `Dataset`.
- In `Dataset`, add buffer control and iteration functionality. It can detect whether a component buffer is row-based or columnar and, in the columnar case, generate a temporary object with the full struct for `MainModel` to consume.
- Refactor `MainModel` to use the new `Dataset`. This also relates to #431.
- In the `Serializer`, directly read the row-based and columnar buffers and serialize them to `msgpack` and `json`.
- In the `Deserializer`, write the attributes either row-based or columnar depending on which buffers are set in the `WritableDataset`.
- Make `id` optional in the update dataset: in the main core, we need special treatment in `is_update_independent` to make `id` an optional attribute in the batch update dataset.
  - `is_update_independent` should be per component instead of per whole dataset, so we can allow an individual `sequence` for each component.
  - For a row-based buffer:
    - If the `id` of the row-based buffer is not all `NaN`, we use the current logic to determine whether the component is independent.
    - If the `id` of the row-based buffer is all `NaN`:
      - If `elements_per_scenario` is not the same as the number of elements in the input data (in the model), an error should be raised.
      - Otherwise, produce a `sequence` from `0` to `n_comp` for this component. This will be consumed by the update function, so the update function does not do `id` lookup.
  - For a columnar buffer:
    - If the `id` attribute buffer is provided and it is not all `NaN`, we look at `id` to judge whether the component is independent. We do not need to create proxy stuff, which is a waste of time; just look directly at the `id` buffer.
    - If the `id` attribute buffer is not provided, or it is provided but all `NaN`:
      - If `elements_per_scenario` is not the same as the number of elements in the input data (in the model), an error should be raised.
      - Otherwise, produce a `sequence` from `0` to `n_comp` for this component. This will be consumed by the update function, so the update function does not do `id` lookup.
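The per-component decision above can be sketched in Python (the function name and the null-id encoding are assumptions for illustration; the real logic lives in the C++ core):

```python
import numpy as np

NA_ID = np.iinfo(np.int32).min  # PGM's null value for int32 ids

def component_update_sequence(ids, elements_per_scenario, n_input_elements):
    # Sketch of the proposed per-component logic, not the core implementation.
    # Returns None when the usual id-based independence check applies, or an
    # explicit 0..n_comp sequence when ids are absent or all null.
    if ids is not None and not np.all(ids == NA_ID):
        return None  # fall back to the current id-based logic
    if elements_per_scenario != n_input_elements:
        raise ValueError(
            "id-less update requires elements_per_scenario to match the model"
        )
    return np.arange(n_input_elements)  # no id lookup needed downstream

# id buffer absent: elements map 1:1 onto the input order
assert list(component_update_sequence(None, 3, 3)) == [0, 1, 2]
# id buffer present and meaningful: use the existing logic
assert component_update_sequence(np.array([5, 6, 7]), 3, 3) is None
```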