FHomps opened this issue 2 years ago
I thought about trying to fix this transparently by adding PDR type checks to `encode_get_pdr_resp()` and `decode_get_pdr_resp()`, and copying the PDR structure into the PLDM message piece by piece, removing the unwanted padding where needed. There would thus be two PDR representations in the library: one padded out by the unions, and a temporary, spec-compliant byte representation used only within the scope of those functions.

However, this will not work, because in the current API the caller owns the message (for `encode_get_pdr_resp()`) and the record data (for `decode_get_pdr_resp()`). The caller needs to initialize both of those structures, and is expected to know their size, before calling the functions.
Fixing this issue will thus require changes to the API. I see three possible options to solve this:

- Add a `compute_pdr_transmission_size()` function (or equivalent) to be called before the encode / decode functions (a rough sketch is given below). The record data / message size would then depend on the returned dynamic size, allowing the above-mentioned solution to be implemented.

I am personally very much against option 2, and have a preference for option 3. However, all three do break the current API or the current usage of the library.
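To make that option concrete, here is a minimal sketch for the numeric effecter value PDR. Nothing below exists in libpldm today: `compute_pdr_transmission_size()` and `pldm_value_width()` are hypothetical names, and the struct member names (`effecter_data_size`, `max_settable`, `min_settable`) are assumed to match libpldm's platform.h.

```c
#include <stddef.h>
#include <stdint.h>

#include <libpldm/platform.h> /* struct pldm_numeric_effecter_value_pdr */

/* Hypothetical helper: wire width implied by an effecterDataSize value
 * (DSP0248 table 87). Returns 0 for unrecognised values. */
static size_t pldm_value_width(uint8_t effecter_data_size)
{
	switch (effecter_data_size) {
	case PLDM_EFFECTER_DATA_SIZE_UINT8:
	case PLDM_EFFECTER_DATA_SIZE_SINT8:
		return sizeof(uint8_t);
	case PLDM_EFFECTER_DATA_SIZE_UINT16:
	case PLDM_EFFECTER_DATA_SIZE_SINT16:
		return sizeof(uint16_t);
	case PLDM_EFFECTER_DATA_SIZE_UINT32:
	case PLDM_EFFECTER_DATA_SIZE_SINT32:
		return sizeof(uint32_t);
	default:
		return 0;
	}
}

/* Hypothetical API for the option above: how many bytes this PDR occupies
 * on the wire, i.e. the in-memory struct size minus the padding the unions
 * add to the variable-width fields. Only maxSettable / minSettable are
 * handled here; the rangeFieldFormat-keyed fields (nominalValue,
 * normalMax, ...) would be handled the same way. */
size_t compute_pdr_transmission_size(
	const struct pldm_numeric_effecter_value_pdr *pdr)
{
	size_t wire = pldm_value_width(pdr->effecter_data_size);
	if (wire == 0)
		return 0;

	/* Each union field is as large as its biggest member in memory,
	 * but only 'wire' bytes of it are transmitted. */
	size_t padding = sizeof(pdr->max_settable) - wire;
	return sizeof(*pdr) - 2 * padding;
}
```

A caller would then size the record data / response message from this return value instead of from `sizeof()` of the padded struct.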
Following DMTF DSP0248-1.2.1 (and earlier), some PDR fields are of variable length, with the width determined by value-type fields specified earlier in the PDR. For example, per section 28.11, table 87, the maxSettable field (and others) should have a length determined by the effecterDataSize field present earlier in the PDR. libpldm instead represents these fields as unions:

https://github.com/openbmc/pldm/blob/ef773059fdead2135c96c4a4c3520e4752012ef0/libpldm/platform.h#L426:L455

Using unions for such a purpose is not possible, as a union always has the size of its largest member; GCC's `__attribute__((packed))` does not change this. This effectively makes all PDRs following this layout incompatible with the DMTF specification.
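A minimal, self-contained illustration of why the packed attribute does not help (the union below only mirrors the shape of the ones in platform.h; it is not copied from the library):

```c
#include <stdint.h>
#include <stdio.h>

/* Mirrors the shape of the unions used for variable-width PDR fields
 * such as maxSettable / minSettable (illustrative, not copied from
 * platform.h). */
union effecter_value {
	uint8_t value_u8;
	int8_t value_s8;
	uint16_t value_u16;
	int16_t value_s16;
	uint32_t value_u32;
	int32_t value_s32;
} __attribute__((packed));

int main(void)
{
	/* Prints 4: a union is always as large as its largest member, and
	 * __attribute__((packed)) only removes alignment padding, it never
	 * shrinks the union below that size. */
	printf("sizeof(union effecter_value) = %zu\n",
	       sizeof(union effecter_value));

	/* So a PDR whose effecterDataSize selects uint8 still occupies
	 * 4 bytes in memory for this field, while DSP0248 puts only 1 byte
	 * on the wire -- the struct cannot simply be memcpy'd into the
	 * response and remain spec-compliant. */
	return 0;
}
```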