Closed marcelbuesing closed 3 years ago
Thanks for continuously adding awesome features! My coworker @bitbleep wrote bitsh, so maybe he'd like to have a look here too. :)
I'm all for picking one implementation, ideally a well-maintained and widely used crate ;)
Yeah, agree @killercup. If we can leverage the bitvec crate in the DBC code generation, that would be awesome. Bitsh was something we threw together as a hack to try out the codegen. Going forward, using a more solid crate seems like the best option to me.
Thanks for the quick update! There are some tests failing; once they pass I think we can merge this.
@bitbleep if you have some time I'd love your review of this too :)
Yes, still looking into those. I'm not yet sure whether the issue is actually in the test or in the big-endian encoding. Besides this, I want to highlight two things I'm currently unsure about.
Ok, so got the chance to look into this today. I think LocalBits is correct too.
Got the tests passing after making the change to the BigEndian offsets I commented on above.
Regarding signed signals, there seems to be very little information on how those are actually encoded. I've assumed they are two's complement for what I did in bitsh, but I haven't been able to confirm this. If we are unlucky, there is no standard way of dealing with signed signals. We should investigate this further to see if we can make sense of it, but I don't see this stopping this PR from being merged.
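If the two's-complement assumption holds, decoding a signed signal boils down to sign-extending the raw unsigned field. A minimal sketch of that assumption (not code from this PR; the helper name is made up):

```rust
/// Sign-extend a raw `size`-bit field (assumed two's complement) into an i64.
/// Sketch only: `size` must be in 1..=64.
fn sign_extend(raw: u64, size: u32) -> i64 {
    let shift = 64 - size;
    // Shifting the field up to the top and arithmetic-shifting back down
    // replicates the field's sign bit across the upper bits.
    ((raw << shift) as i64) >> shift
}

fn main() {
    // A 4-bit field holding 0b1111 is -1 in two's complement.
    assert_eq!(sign_extend(0b1111, 4), -1);
    // 0b0111 is the largest positive 4-bit value, 7.
    assert_eq!(sign_extend(0b0111, 4), 7);
    println!("ok");
}
```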
Thanks, I integrated your changes.
I'd say this is good to merge now, @killercup?
And also thanks for your time and work @marcelbuesing, awesome!
Looking at this I think the PR should incorporate this again.
I tried to generate using following dbc file:
toyota_corolla_2017_pt_generated.dbc.zip
But got error:
Error: Could not parse dbc file: NomError(
Error(
Code(
CompleteByteSlice(
[
67,
77,
95,
32,
34,
65,
85,
84,
79,
71,
69,
78,
69,
82,
65,
84,
69,
68,
...
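The numeric values in that nom error are just the raw input bytes; decoding the visible prefix as ASCII shows where the parser stopped. A quick check (not part of the PR):

```rust
fn main() {
    // First bytes reported by the nom error above.
    let bytes: &[u8] = &[
        67, 77, 95, 32, 34, 65, 85, 84, 79, 71, 69, 78, 69, 82, 65, 84, 69, 68,
    ];
    // Decodes to `CM_ "AUTOGENERATED`, i.e. the parser choked on a
    // CM_ comment line in the dbc file.
    println!("{}", String::from_utf8_lossy(bytes));
}
```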
This originates in can-dbc; I created an issue.
Here comma.ai toyota_corolla_2017_pt_generated_fixed.dbc.zip is modified a little bit so that can-dbc is able to parse it.
However, it now panics here:
fn signal_from_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
let read_fn = match signal.byte_order() {
...
can_dbc::ByteOrder::BigEndian => format!(
"self.raw.view_bits::<LocalBits>()[{start}..{end}].load_be::<{typ}>()",
typ = signal_to_rust_uint(signal),
start = signal.start_bit - (signal.signal_size - 1),
end = signal.start_bit - (signal.signal_size - 1) + signal.signal_size
),
};
...
thread 'main' panicked at 'attempt to subtract with overflow', src/main.rs:416:21
So it is this line:
start = signal.start_bit - (signal.signal_size - 1),
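Since start_bit and signal_size are unsigned, this subtraction underflows for any big-endian signal whose start bit is smaller than signal_size - 1. A minimal reproduction, using the start bit and size of a signal like `SG_ One : 0|2@0+` from the example further down (sketch, not generator code):

```rust
fn main() {
    // Values as parsed from a big-endian signal `SG_ One : 0|2@0+`:
    // start bit 0, signal size 2.
    let start_bit: u64 = 0;
    let signal_size: u64 = 2;
    // `start_bit - (signal_size - 1)` would compute 0 - 1, which panics with
    // "attempt to subtract with overflow" in debug builds. checked_sub makes
    // the failure explicit instead.
    assert_eq!(start_bit.checked_sub(signal_size - 1), None);
    println!("underflows for start_bit=0, size=2");
}
```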
@marcelbuesing's suggestion fixes the panic:
@@ -405,16 +411,14 @@ fn render_signal(mut w: impl Write, signal: &Signal, dbc: &DBC, msg: &Message) -
fn signal_from_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
let read_fn = match signal.byte_order() {
can_dbc::ByteOrder::LittleEndian => format!(
- "self.raw.view_bits::<LocalBits>()[{start}..{end}].load_le::<{typ}>()",
- typ = signal_to_rust_uint(signal),
- start = signal.start_bit,
- end = signal.start_bit + signal.signal_size
+ r#"self.raw.view_bits_mut::<Lsb0>()[{start_bit}..{end_bit}].store_le(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
),
can_dbc::ByteOrder::BigEndian => format!(
- "self.raw.view_bits::<LocalBits>()[{start}..{end}].load_be::<{typ}>()",
- typ = signal_to_rust_uint(signal),
- start = signal.start_bit - (signal.signal_size - 1),
- end = signal.start_bit - (signal.signal_size - 1) + signal.signal_size
+ r#"self.raw.view_bits_mut::<Msb0>()[{start_bit}..{end_bit}].store_be(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
),
};
@@ -454,7 +458,7 @@ fn signal_to_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
can_dbc::ByteOrder::LittleEndian => {
writeln!(
&mut w,
- r#"self.raw.view_bits_mut::<LocalBits>()[{start_bit}..{end_bit}].store_le(value);"#,
+ r#"self.raw.view_bits_mut::<Lsb0>()[{start_bit}..{end_bit}].store_le(value);"#,
start_bit = signal.start_bit,
end_bit = signal.start_bit + signal.signal_size,
)?;
@@ -462,9 +466,9 @@ fn signal_to_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
can_dbc::ByteOrder::BigEndian => {
writeln!(
&mut w,
- r#"self.raw.view_bits_mut::<LocalBits>()[{start_bit}..{end_bit}].store_be(value);"#,
- start_bit = signal.start_bit - (signal.signal_size - 1),
- end_bit = signal.start_bit - (signal.signal_size - 1) + signal.signal_size,
+ r#"self.raw.view_bits_mut::<Msb0>()[{start_bit}..{end_bit}].store_be(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
)?;
}
};
However, I am still too much of a noob with dbc files to verify whether everything actually works.
VAL_ 1162 TSGN3 0 "none" 1 "speed sign" 2 "0 unlimited" 7 "unlimited" 16 "highway" 17 "no highway" 18 "motorway" 19 "no motorway" 20 "in city" 21 "outside city" 22 "pedestrian area" 23 "no pedestrian area" 65 "no overtaking left" 66 "no overtaking right" 67 "overtaking allowed again" 129 "no entry";
produces
#[derive(Clone, Copy, PartialEq)]
#[cfg_attr(feature = "debug", derive(Debug))]
pub enum Rsa2Tsgn3 {
None,
SpeedSign,
0Unlimited,
Unlimited,
Highway,
NoHighway,
...
Notice the 0Unlimited variant there; enum variants cannot start with a digit.
BO_ 610 EPS_STATUS: 5 EPS
SG_ IPAS_STATE : 3|4@0+ (1,0) [0|15] "" XXX
SG_ LKA_STATE : 31|7@0+ (1,0) [0|127] "" XXX
SG_ TYPE : 24|1@0+ (1,0) [0|1] "" XXX
SG_ CHECKSUM : 39|8@0+ (1,0) [0|255] "" XXX
produces
impl EpsStatus {
pub const MESSAGE_ID: u32 = 610;
/// Construct new EPS_STATUS from values
pub fn new(ipas_state: u8, lka_state: u8, type: bool, checksum: u8) -> Result<Self, CanError> {
let mut res = Self { raw: [0u8; 5] };
res.set_ipas_state(ipas_state)?;
res.set_lka_state(lka_state)?;
res.set_type(type)?;
res.set_checksum(checksum)?;
Ok(res)
}
Notice the keyword type here, which is not allowed as an identifier.
Hey, awesome to see this discussion going on! I'm not actively working on this crate, so I'm hesitant to interrupt you; I just want to point out that we can merge this PR and iteratively add improvements in smaller chunks :)
Looking at this I think the PR should incorporate this again.
@marcelbuesing This wasn't working before, was it? Or at least there was no test for it. Wanna implement this here or in a follow up? Not sure I follow the Lsb0 discussion.
@andresv thanks for checking out this crate! As you probably noticed by now, it's in a pretty basic state. A quick fix for the invalid identifiers it produces is to make every function/field/variant/type name a raw identifier, i.e. turning foo into r#foo. Wanna give this a shot and see if it works out? :)
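A rough sketch of that raw-identifier idea, assuming a hand-maintained (and here deliberately partial) keyword list; the helper name is made up for illustration:

```rust
/// Escape names that collide with Rust keywords as raw identifiers (`r#name`).
/// The list is deliberately partial and avoids keywords that cannot be
/// raw identifiers (`self`, `super`, `crate`, `Self`).
fn escape_keyword(name: &str) -> String {
    const KEYWORDS: &[&str] = &["type", "fn", "match", "loop", "move", "ref"];
    if KEYWORDS.contains(&name) {
        format!("r#{}", name)
    } else {
        name.to_string()
    }
}

fn main() {
    // The TYPE signal from EPS_STATUS would become a valid field name.
    assert_eq!(escape_keyword("type"), "r#type");
    assert_eq!(escape_keyword("checksum"), "checksum");
    println!("ok");
}
```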
I created a PR where enum fields that start with a number are prefixed with X: https://github.com/technocreatives/dbc-codegen/pull/9
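The fix in that PR can be sketched roughly like this (the helper name enum_variant_name is made up for illustration):

```rust
/// Prefix enum variant names that would start with a digit (e.g. the
/// `0Unlimited` variant generated from `2 "0 unlimited"`) with `X`
/// so they are valid Rust identifiers.
fn enum_variant_name(name: &str) -> String {
    if name.chars().next().map_or(false, |c| c.is_ascii_digit()) {
        format!("X{}", name)
    } else {
        name.to_string()
    }
}

fn main() {
    assert_eq!(enum_variant_name("0Unlimited"), "X0Unlimited");
    assert_eq!(enum_variant_name("SpeedSign"), "SpeedSign");
    println!("ok");
}
```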
I created a lot of unrelated noise here. Let's go back to the main discussion:
Looking at this I think the PR should incorporate this again.
@marcelbuesing This wasn't working before, was it? Or at least there was no test for it. Wanna implement this here or in a follow up? Not sure I follow the Lsb0 discussion.
I have a dbc file that panics without @marcelbuesing's modifications (look a couple of commits back for the diff). So those modifications should be reviewed; then we can merge this and add follow-up PRs on top of it.
@marcelbuesing what to do with the issue that you brought up? It seems this is a step in the right direction; I cannot process my dbc file without those changes. So maybe you could commit them and then we could merge this.
@@ -405,16 +411,14 @@ fn render_signal(mut w: impl Write, signal: &Signal, dbc: &DBC, msg: &Message) -
fn signal_from_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
let read_fn = match signal.byte_order() {
can_dbc::ByteOrder::LittleEndian => format!(
- "self.raw.view_bits::<LocalBits>()[{start}..{end}].load_le::<{typ}>()",
- typ = signal_to_rust_uint(signal),
- start = signal.start_bit,
- end = signal.start_bit + signal.signal_size
+ r#"self.raw.view_bits_mut::<Lsb0>()[{start_bit}..{end_bit}].store_le(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
),
can_dbc::ByteOrder::BigEndian => format!(
- "self.raw.view_bits::<LocalBits>()[{start}..{end}].load_be::<{typ}>()",
- typ = signal_to_rust_uint(signal),
- start = signal.start_bit - (signal.signal_size - 1),
- end = signal.start_bit - (signal.signal_size - 1) + signal.signal_size
+ r#"self.raw.view_bits_mut::<Msb0>()[{start_bit}..{end_bit}].store_be(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
),
};
@@ -454,7 +458,7 @@ fn signal_to_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
can_dbc::ByteOrder::LittleEndian => {
writeln!(
&mut w,
- r#"self.raw.view_bits_mut::<LocalBits>()[{start_bit}..{end_bit}].store_le(value);"#,
+ r#"self.raw.view_bits_mut::<Lsb0>()[{start_bit}..{end_bit}].store_le(value);"#,
start_bit = signal.start_bit,
end_bit = signal.start_bit + signal.signal_size,
)?;
@@ -462,9 +466,9 @@ fn signal_to_payload(mut w: impl Write, signal: &Signal) -> Result<()> {
can_dbc::ByteOrder::BigEndian => {
writeln!(
&mut w,
- r#"self.raw.view_bits_mut::<LocalBits>()[{start_bit}..{end_bit}].store_be(value);"#,
- start_bit = signal.start_bit - (signal.signal_size - 1),
- end_bit = signal.start_bit - (signal.signal_size - 1) + signal.signal_size,
+ r#"self.raw.view_bits_mut::<Msb0>()[{start_bit}..{end_bit}].store_be(value);"#,
+ start_bit = signal.start_bit,
+ end_bit = signal.start_bit + signal.signal_size,
)?;
}
};
I wonder if it fails because of this issue that I managed to introduce: https://github.com/technocreatives/dbc-codegen/commit/ee84b9bae04889a1c67b412c2703ac0e720a3dd7#diff-db06603f82c0fa96425947567eb2a093f55f85cc496b1a569150391717088a97R367-R374
Found the issue, I am going to fix it in https://github.com/technocreatives/dbc-codegen/pull/13
No no, this is something else; those changes are not in this PR.
I think what's missing in the PR now is basically the signed-signals part. FYI, I just rebased on the main branch.
It only works with normally aligned messages like this:
BO_ 1024 Amet: 8 Sit
SG_ One : 0|2@0+ (1,0) [0|3] "" Dolor
SG_ Two : 2|8@0+ (0.39,0) [0.00|100] "%" Dolor
SG_ Three : 10|3@0+ (1,0) [0|7] "" Dolor
SG_ Four : 13|2@0+ (1,0) [0|3] "" Dolor
SG_ Five : 15|1@0+ (1,0) [0|1] "boolean" Dolor
Bar has overlapping fields, and this is not working.
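Under the patched generator's assumption that a big-endian signal occupies bits start_bit..start_bit + signal_size, "normally aligned" means those ranges tile the message without overlap. A sketch of such a check (the signal list is taken from the Amet message above; the helper name is made up):

```rust
/// Check whether any two signals overlap, assuming each occupies the linear
/// bit range `start_bit..start_bit + signal_size` (the patched generator's
/// interpretation for big-endian signals).
fn ranges_overlap(signals: &[(u64, u64)]) -> bool {
    for (i, &(s1, n1)) in signals.iter().enumerate() {
        for &(s2, n2) in &signals[i + 1..] {
            // Two half-open ranges overlap iff each starts before the other ends.
            if s1 < s2 + n2 && s2 < s1 + n1 {
                return true;
            }
        }
    }
    false
}

fn main() {
    // (start_bit, signal_size) for signals One..Five of `BO_ 1024 Amet`.
    let amet = [(0, 2), (2, 8), (10, 3), (13, 2), (15, 1)];
    // These tile bits 0..16 with no overlap, which is why Amet round-trips.
    assert!(!ranges_overlap(&amet));
    println!("ok");
}
```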
Yes, though I'm not sure whether the overlapping fields are valid.
Edit: I just accidentally clicked close ..
@killercup has written tests for that: https://github.com/technocreatives/dbc-codegen/blob/main/testing/rust-integration/tests/compare_to_socketcan.rs#L23. Do you remember how you generated the test data? Does some other lib support such overlapping fields?
I think it could actually have been me introducing this error in the dbc. I mean, overlapping fields could be possible when mixing little and big endian in a message with the right "range", e.g. you only want to encode 0-7 as unsigned but specify 5 bits as the size. Or it was related to the bitsh big-endian thing.
I just generated the C version using cantools. Maybe that helps as a reference for some of the questions as well.
I generated the test message with node-socketcan FYI
I'm no expert on this on any level, so I very much appreciate how thorough you guys are being! Let me know when to merge which PR :)
I think the correct approach is to add more test cases here so we can be sure that the cantools tests and our generator produce the same output.
Ok I think by now it's really just the negative numbers being incorrectly encoded. Packing unsigned fields in big endian now gives the same results as cantools.
Ok, it's a little ugly due to the unsafe now; I am open to alternative ideas. Although there aren't a large number of tests yet, I'm fairly confident now that it works as intended and might just need some cleanup here and there (e.g. the unsafe). The cantools tests are no longer ignored.
Thanks for your continued efforts here! I'm looking forward to merging this :)
Ok just rebased it to include the arbitrary changes. I think this branch should be squashed once it's ready to be merged.
Unless there are objections or things I missed I think this could be squashed and merged now.
Thanks again for all your work on this!
TL;DR: Add the ability to generate decoding/encoding code that uses bitvec under the hood.
This may be a slightly controversial PR, as it somewhat bloats the code. I basically had some big-endian encoding/decoding issues with bitsh and created a PR with some property-based tests for that. I tried to find the issue but have been unable to so far. Quite some time ago I noticed the bitvec crate, which, to my understanding, basically does the same as bitsh, and also seems to work on no_std.
I also added an encoding/decoding comparison between bitvec and bitsh here, which produces the same results for little-endian encoding but different results for big endian.
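For reference, the round-trip property those comparisons check can be sketched without any crate at all. This is a simplified plain-std stand-in for the little-endian store/load pair, not the actual bitvec API:

```rust
/// Store an unsigned field into bits `start..start + size` of a payload,
/// little-endian bit numbering. Sketch only: `size` must be in 1..=63.
fn store_le(payload: &mut u64, start: u32, size: u32, value: u64) {
    let mask = (1u64 << size) - 1;
    *payload = (*payload & !(mask << start)) | ((value & mask) << start);
}

/// Load the field back out of the same bit range.
fn load_le(payload: u64, start: u32, size: u32) -> u64 {
    (payload >> start) & ((1u64 << size) - 1)
}

fn main() {
    let mut frame = 0u64;
    store_le(&mut frame, 3, 5, 0b10110);
    // Round-trip: loading the field returns exactly the stored value,
    // which is the property the bitvec/bitsh comparison tests assert.
    assert_eq!(load_le(frame, 3, 5), 0b10110);
    println!("ok");
}
```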