There is a line of code here:
https://github.com/smuellerDD/acvpparser/blob/master/parser/parser_sha_mct_helper.h#L43
which casts a uint64_t to a size_t. This isn't an issue on 64-bit targets, but on 32-bit targets it can lose precision; our compiler caught it because "-Wshorten-64-to-32" is enabled.
Based on the spec for the long data tests, ldt_expansion_size can be as large as 2^36 (bits). If that value were cast to a 32-bit size_t (even after the division by 8, which gives 2^33 bytes), it would truncate to 0. To fix the build, and to address the underlying problem of performing the cast before the SIZE_MAX check, I revised the code as follows:
@@ -40,15 +40,15 @@ static inline int sha_ldt_helper(struct sha_data *data, struct buffer *msg_p)
int ret = 0;
if (data->ldt_expansion_size) {
- size_t ldt_exp_bytes = data->ldt_expansion_size / 8;
- size_t i, len;
+ uint64_t ldt_exp_bytes = data->ldt_expansion_size / 8;
+ uint64_t i, len;
if (SIZE_MAX < ldt_exp_bytes) {
logger(LOGGER_ERR, "LDT size not supported on IUT\n");
return -EINVAL;
}
- CKINT(alloc_buf(ldt_exp_bytes, msg_p));
+ CKINT(alloc_buf((size_t)ldt_exp_bytes, msg_p));
for (i = 0; i < msg_p->len; i += len) {