Closed thvdveld closed 7 months ago
Merging #840 (9a850cf) into main (c790174) will increase coverage by 0.13%. Report is 26 commits behind head on main. The diff coverage is 47.59%.
@@            Coverage Diff            @@
##             main     #840      +/-  ##
=========================================
+ Coverage   79.39%   79.52%    +0.13%
=========================================
  Files          76       78        +2
  Lines       27504    27848      +344
=========================================
+ Hits        21836    22147      +311
- Misses       5668     5701       +33
Files Changed | Coverage Δ
---|---
src/socket/tcp.rs | 96.62% <0.00%> (+0.13%)
src/iface/interface/mod.rs | 44.42% <16.90%> (-1.21%)
src/iface/interface/ipv4.rs | 69.25% <55.55%> (+0.74%)
src/iface/interface/ethernet.rs | 89.79% <58.33%> (-1.70%)
src/iface/interface/ieee802154.rs | 97.50% <66.66%> (+8.75%)
src/iface/interface/sixlowpan.rs | 87.61% <71.42%> (ø)
src/iface/interface/ipv6.rs | 91.38% <78.78%> (ø)
src/iface/interface/igmp.rs | 50.57% <100.00%> (ø)
src/tests.rs | 98.09% <100.00%> (+0.09%)
... and 6 files with indirect coverage changes
Can you show some example error messages from before and after this patch? Reading through the code, it looks as though it will differ.
I don't have any logs to show where I had error messages. I would have to create fake packets for that. I removed the check macro, which was mostly used for unwrapping the checked and parsed packets. Now they return an error, which is captured in https://github.com/smoltcp-rs/smoltcp/pull/840/files#diff-9797218dceaf8c0bdfeddfab917f17b8d1028fd1e817a5b3de18bc4814fd894eR657-R729. Previously, the logging happened at each check macro, and I think the compiler was unable to optimize it. I did extra measurements:
This is the server example on main, where I removed all the logging in the example so that I can disable the log feature:
text data bss dec hex filename
709317 34640 400 744357 b5ba5 server
This is the server example on this branch, also with all logging disabled:
text data bss dec hex filename
709781 34640 400 744821 b5d75 server
As you can see, it's a bit bigger on this branch.
If I enable the log feature, but with all the logging removed in the example, and on main:
text data bss dec hex filename
820809 57048 416 878273 d66c1 server
The same but on this branch:
text data bss dec hex filename
813977 54904 416 869297 d43b1 server
The difference is very noticeable.
When all the logging is enabled (also in the example):
text data bss dec hex filename
1527745 80928 536 1609209 188df9 server
And on this branch:
text data bss dec hex filename
1520929 78784 536 1600249 186af9 server
We can see that the difference is again around 7kB.
I think that with my changes the logged messages are still the same, but the logging now happens in one place instead of at every use of the check macro.
With these changes, I noticed that the PAN ID was not set for the examples. I noticed it with the ieee802154_wrong_pan_id test, since that one failed with the changes made by this PR. The purpose of this test is to not handle packets when the PAN ID is not the PAN ID of the interface. However, the interface previously returned None because the packet was malformed, not because the PAN IDs didn't match. Since a None value is returned in both cases, the test did not catch this. For the test to actually be correct, the interface should know the PAN ID of the device. Since I added the PAN ID, it is now also set on outgoing packets, which is why I had to change the data of the other tests.
@thvdveld My main question here is: if you've removed the check! macro to get rid of the check!(new_checked(...)) construct, why do you add Result to return types? I think I'm missing something, so could you explain this to me please?
Previously, all functions returned Option<Packet>, where the Packet is a response to a processed packet. When new_checked failed, the check macro logged this and just returned None. But with my changes, the logging happens in one place, meaning that if None were returned there, we could not know whether it is because there is no response to the processed packet or because new_checked returned an error.
Okay. This makes sense to me in principle. I would really like another reviewer to sign off on the change because I'm honestly pretty tired and am not sure I'll spot everything.
check! was introduced in #617 because the goal was to stop propagating errors up. The rationale is that returning errors to the user from iface poll() is not actionable, because they can't know which packet was malformed nor handle it, and errors also occur all the time in real-world networks because we don't support all the protocols out there.
This PR reverts that, which I'm not sure is something we should do. Why is propagating errors up and ignoring them in a centralized place better than ignoring them when we get them?
Also, it doesn't make sense to me that this reduces code size. It's strictly more code paths (the previous code could return None; the new code can also return Err). Are you sure it's not because of the trace! call in the check! macro? Real-world code would compile with the trace! level disabled.
This PR reverts that, which I'm not sure is something we should do.
But the return type for poll did not change. So I don't think it is noticeable for someone calling poll.
Also, it doesn't make sense to me that this reduces code size. It's strictly more code paths (previous code could return None, new code can also return Err). Are you sure it's not because of the trace! call in the check! macro? Real-world code would compile with trace!-level disabled.
If I disable all the logging, the binary sizes are nearly equal (main branch vs this branch): the impact of returning Result<Option<T>> is an increase of 460 bytes. When logging is enabled, there is a difference of around 7000 bytes, which the 460-byte difference does not explain. I'm not able to explain why there is a change, since I think the logging behavior (main vs this branch) is the same.
Real-world code would compile with trace!-level disabled.
True, however, a decrease of 7kB for just logging is a big impact if you want to run with traces enabled.
But the return type for poll did not change. So I don't think it is noticeable for someone calling poll.
If you want to actually act on errors returned from poll, it is noticeable.
If you want to actually act on errors returned from poll it is noticeable.
I don't understand this, I'm really sorry. How did my changes affect this? I didn't change anything in poll. If there was an error when processing a packet, that error would surface in socket_ingress, where it is logged. Previously, it was logged in different places and None was returned. My changes also just return None after logging in socket_ingress.
Removing the check macro and just returning with the question mark operator reduces the size of the binary.
Server example (main):
Server example (now):
Sixlowpan example (main):
Sixlowpan example (now):