When I create an unsigned 64-bit signal in the DBC (full range, no offset/scaling), the generated code fails to compile because the `MAX` constant has a value that exceeds `u64` limits.
The issue seems to be:

- `min` and `max` signal bounds are both stored as `f64`
- `f64` has a 53-bit significand (52 explicit mantissa bits plus one implicit bit), so it generally cannot store integers above 2^53 with full accuracy
- rounding occurs, and the value stored in `max` (an `f64`) becomes exactly 2^64, which is larger than 2^64 - 1 (`u64::MAX`)
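The rounding can be demonstrated directly in Rust; this is a minimal sketch of the effect, not dbc-codegen's actual code:

```rust
fn main() {
    // The DBC max of a full-range unsigned 64-bit signal is 2^64 - 1.
    let dbc_max: u64 = u64::MAX; // 18446744073709551615

    // Stored as f64, the nearest representable value is exactly 2^64,
    // one past the top of the u64 range.
    let as_f64 = dbc_max as f64;
    assert_eq!(as_f64, 2f64.powi(64));

    // Writing this f64 back out as an integer literal for codegen yields
    // 18446744073709551616, which no u64 constant can hold.
    println!("{}", as_f64 as u128); // prints 18446744073709551616
}
```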
This seems like a flaw in the design. Here are a few workarounds I can think of:
- If `min`/`max` are not interesting anyway and you can tweak the DBC, you can do that.
- `dbc-codegen` could add `#[cfg(feature = "range_checked")]` to the `MIN` and `MAX` constants, so that with range checking disabled there is at least no compilation error.
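For the second workaround, the generated code might look roughly like this (a hypothetical sketch of what dbc-codegen could emit; the constant names are illustrative). With the feature disabled, the overflowing literal is stripped by `cfg` before it can cause a compile error:

```rust
// Only compiled when range checking is requested.
#[cfg(feature = "range_checked")]
pub const COUNTER_MIN: u64 = 0;
#[cfg(feature = "range_checked")]
pub const COUNTER_MAX: u64 = 18446744073709551616; // 2^64: overflows u64!

fn main() {
    // Without the "range_checked" feature, the constants above are
    // cfg'd out entirely, so this program builds and runs.
    println!("range checking disabled, bounds constants omitted");
}
```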
Has anyone else faced this issue before? Do you see a better way to work around the issue?
I'm willing to spend time on this but need some guidance.