Open · henrikssn opened this issue 4 years ago
I agree that functionality like this would be nice in `#[bitfield]`. Until then you should be able to wrap your bitfield with a newtype that does exactly the conversion you have already described.
I might be able to contribute this, what would be your suggested API for it?
Tough question. I haven't given it much thought yet, but I always try to keep the API simple and contained; if that is not possible with the addition of a new feature, I usually take it as a sign that the feature does not naturally belong in the crate.
That is to say, there are some options forward, none of which I am sure I like:

1. An `endian` specifier, e.g. `#[bitfield(endian = "big")]`, that would entirely change how your bitfield is internally laid out. I kind of dislike this because it is a bit inflexible and also introduces yet another bitfield parameter, while I am on the other hand trying hard to get rid of some of them to reduce the complexity of the crate's design.
2. `into_bytes_{le,be}` and `from_bytes_{le,be}` methods, very similar to the already generated `from_bytes` and `into_bytes` methods. This approach is more lightweight, yet introduces some complexity that should IMO not be inherent to this crate and therefore to bitfield types. Bitfield types according to the current spec are always internally laid out as little-endian, so having APIs for both does not feel natural to me.

Don't get me wrong: I think it might be nice for some projects that work mainly on big-endian architectures to have this feature, but I really don't want to introduce more complexity if we can prevent it.
Now I am curious about your ideas, @henrikssn ! :)
Thanks, option 1 would work for my use cases, and I can't come up with one where the same register needs to be sent both MSB- and LSB-first (but it is certainly possible that someone else can). I don't think we can easily store e.g. integers as MSB if the platform is little-endian, because each setter call would come with a non-trivial conversion cost.
Actually, what is the current memory layout on big-endian platforms? I assume integers are stored MSB-first then? What about the result of `into_bytes`?
Are there any performance implications (memory) to this choice, as in: are there situations where the current implementation can optimize away the copy into a byte buffer?
Hm, if I may put in my two cents. My intuition leans towards option 2, both because e.g. `u32` has `to_be_bytes` and `to_le_bytes`, and also because it doesn't force the person making the `#[bitfield]` to decide between one or the other until they actually need it.
In particular, I'm working on a program which generates Rust-friendly wrappers around register maps, so my code is generating `#[bitfield]`s, but I don't know how the user ultimately interacts with their devices (whether they want it in LE or BE). (In practice, because of the way I'm doing things, I end up forcing u32 numbers on the user, but they could use the bitfield structs directly, and I don't know whether they would want LE or BE or both if they did.)
I can't think of a third option though, sadly.
Actually, what is the current memory layout on big endian platforms? I assume integers are stored MSB then? What about the result of into_bytes?
It is always stored as little-endian at the moment. Even with `#[repr(uN)]` we still store it as a byte array, which is kind of lying to the user in some way. However, if you provide `#[repr(C)]` (which you should do anyway in those cases) then it is basically the same, as long as you do not unsafely transmute but use the generated `From` impls.
Are there any performance implications (memory) to this choice, as in: are there situations where the current implementation can optimize away the copy into a byte buffer?
I think this is where we should put the focus. Do we actually need this additional complexity, or will the compiler take care of it anyway? I actually believe the compiler will optimize all these conversions down to the bare minimum needed, as compilers nowadays are brilliant when it comes to bit-twiddling tricks.
Hm, if I may put my two cents. My intuition leans towards option 2, both because e.g. u32 has to_be_bytes and to_le_bytes, and also because it doesn't force the person making the #[bitfield] to decide between one or the other until they actually need it.
Thank you very much for putting in your use case! This basically disqualifies option 1 in my opinion.
I can't think of a third option though, sadly.
I think the third option is to create some proper benchmarks using criterion and check whether there are any performance implications from not introducing this feature for big-endian systems; if there are none, we can simply close this issue. If the benchmarks show that the absence of such a feature has performance implications on those systems, we should find a solution.
Actually, what is the current memory layout on big endian platforms?
It is always stored as little-endian at the moment. Even with `#[repr(uN)]` we still store it as a byte array which is kind of lying to the user in some way.
My 2¢: in my opinion this is also deserving of attention, because I think being able to `transmute()` to and from bitfield structs is valuable (even if those structs have to be tagged with specific `#[repr()]` attributes), and I think the memory layout matching the bitfield declaration is one of the strengths of this crate.
My 2¢: in my opinion this is also deserving of attention, because I think being able to `transmute()` to and from bitfield structs is valuable (even if those structs have to be tagged with specific `#[repr()]` attributes), and I think the memory layout matching the bitfield declaration is one of the strengths of this crate.
Can you please elaborate on why transmuting between bitfield structs and their underlying representation is important? On little-endian systems the `From` impl is already a no-op, and I can imagine that even for big-endian systems the compiler optimizer can do an amazing job as well. So no unsafe transmute is needed from this perspective, but please tell me if I am missing something!
So in conclusion, I think we really should have some benchmarks to measure this impact before we further complicate the API of bitfields in this crate. I am willing to implement this (or have it implemented by you) if it turns out to be useful.
To emphasize what I mean: `test.as_bytes()` returns `[0x34, 0x12]` but I need it to be `[0x12, 0x34]`. While I could reverse the order of the result from `as_bytes`, it is error-prone and would IMO be better captured by the bitfield itself.
With `#[repr(u32)]` you could do `u32::from(bitfield).to_be_bytes()` or `u32::from(bitfield).to_be()` (depending on whether you need an array or an actual `u32`) and be done with what you need, and my current untested assumption is that the compiler will do a pretty good job at optimizing the byte-flip (mostly) away in the end.
Hm. I didn't realize the `From` impl was a no-op on little-endian systems. Still, I am imagining a situation where you have a section in memory that you do not directly control (like hardware), but you can have a pointer to it, and thus you can transmute the pointer to a pointer to a bitfield struct.
Hm. I didn't realize the `From` impl was a no-op on little-endian systems. Still, I am imagining a situation where you have a section in memory that you do not directly control (like hardware), but you can have a pointer to it, and thus you can transmute the pointer to a pointer to a bitfield struct.
But why would you do that if you could just as well read out the bitfield behind the pointer and use the (mostly) no-op `From` impl? This way you can stay in safe Rust and do not risk undefined behavior induced by using unsafe Rust `transmute`.
Hm, I think I see the use case (correct me if I'm wrong)... imagine it's some HW register somewhere, you could:
```rust
#[bitfield]
struct WwdtTimeReg {
    count: B24,
    reserved: B8,
}

let reg = 0x4000_000c as *const WwdtTimeReg;
let reg: &WwdtTimeReg = unsafe { reg.as_ref().unwrap() };
// ...
let current_count = reg.count();
```
vs. having to do a two-step process:
```rust
let reg = 0x4000_000c as *const u32;
// ...
let register_value = WwdtTimeReg::from_le_bytes(unsafe { *reg }); // Sorry, I forget the exact syntax right now
let current_count = register_value.count();
```
It'd prevent you from being able to atomically set/get all fields simultaneously, but that may be fine. In this case there's only one interesting field anyway.
Just to summarize here, if we do these changes:

1. Rename `into_bytes` and `from_bytes` to `into_le_bytes` and `from_le_bytes`
2. Add `into_be_bytes` and `from_be_bytes`

This should not have any performance implications, as it is essentially just a rename for existing use cases.

I think allowing the user to additionally specify the memory layout of the bitfield could probably be a FR on its own, as it would just determine whether `into_le_bytes` or `into_be_bytes` is the faster operation.
Going a step further... being able to reverse the internal bit storage order would be nice. I am trying to write a driver for this (look at page 63+) and am finding that a crate like this would be quite helpful for defining how to speak with the device, but the LE ordering of the crate means I have to do a lot of driver-internal fiddling, even on the `i8` register values. It gets way more obvious when you start looking at `i16` and `i24` values, thanks to the combination of some registers. Even with an `into_be_bytes` option this crate wouldn't let me store correct values in such cases.
I know you can swap the SPI controller's read/write mode to support LSB-first, but most controllers I've seen are MSB-first by default. It'd be nice to have a "protocol mode" as a result (for lack of a better term).
Yeah, in the meanwhile I have even worked with protocols which use mixed LSB/MSB encoding (yes, I am talking to you, IEEE 802.15.4).
Probably the proper way is to introduce `LE16` and `BE16` types in place of `B16` so that the endianness can be specified on a field-by-field basis. This gets very close to the zerocopy crate, but that one doesn't have bitfield support (which is kind of the selling point of this library), so I don't see a conflict here. We would need to define what the odd types (e.g. `B15`) mean for the two variants. For example:
```rust
#[bitfield]
pub struct MyBitfield {
    foo: B15,
    bar: B1,
}
```
How does the memory layout differ when `foo` is stored as LE versus as BE? What does it even mean to define endianness for 15 bits? Maybe we should just not support this for fields that are not a whole number of bytes (i.e. `LE16` but no `LE15`)?
I'm interested in this feature as well. Getting bitfields and endianness to work correctly is very tricky and needs a library solution.
From my experience, the only way to handle this situation is to carefully define a bitfield layout in a way that's independent of endianness. You do this by defining it in terms of the underlying representation. So, taking the example from the previous comment and using `#[repr(u16)]`, it has the following layout:

```
 1  1
 5  4              0
 -------------------
 |B1|      B2      |
 -------------------
```
Notice that this layout works regardless of the machine endianness: within the 16 bits of a `u16`, the MSB is `B1` and everything else is `B2`. No bytes are mentioned in this definition.
(Note, however, that the order of the fields within this representation is completely ambiguous. `B1` could be the LSB, for example. In fact, GCC actually reverses the field layout based on endianness. This is a mistake IMO, and should not be repeated by this library. Pick one order, document it, and stick to that. Backwards compatibility will likely determine your choice.)
As a separate step, you define the byte order when this representation needs to be viewed as bytes. There are three options: host endian, little endian, and big endian. The representation in memory should always be in the host endianness (so the underlying `u16` is just that), with options to read and write it explicitly in either big or little endian. In the end, that means providing `from_le_bytes` and `from_be_bytes`. Also, don't forget `from_ne_bytes` for native endianness (like Rust does with primitive types, which is guaranteed to be fast). I'd also suggest renaming to `to_{le,be,ne}_bytes` for consistency with std.
The above solution works with every interface I've worked with and layout I've seen from a C or C++ compiler. It's also compatible with how people do bitfields manually (bit shifts on integer values). (With the possible exception of mixed endianness mentioned above, though to be honest I don't even know what that really means in this context. Regardless, I think it's rare enough to ignore from a library perspective.)
This interpretation apparently departs from the current implementation, which uses byte arrays under the hood. To be frank, I think that's a dubious implementation to begin with. However, it does allow for space optimization when the number of bits is something like 24 bits instead of 32, and it allows for very large bitfields. I'm not sure the advantages are significant (you can break up a bitfield if needed and the space overhead is likely minimal). As a compromise if that's still desired, I suggest using a byte array when a representation isn't provided, but consider not providing a way to access the bytes.
In summary, I recommend doing the following:

1. Replace `into_bytes` and `from_bytes` with `in_le_bytes`, `in_be_bytes`, `in_ne_bytes`, `from_le_bytes`, `from_be_bytes`, and `from_ne_bytes`. These should basically just be calls to the equivalent function provided by the underlying type.

With the possible exception of mixed endianness mentioned above, though to be honest I don't even know what that really means in this context. Regardless, I think it's rare enough to ignore from a library perspective.
Mixed endianness means the following:
```rust
#[bitfield]
pub struct MyBitfield {
    foo: LE<U16>, // Little-endian u16
    bar: BE<U16>, // Big-endian u16
}

let s = MyBitfield {
    foo: 0x1234.into(),
    bar: 0x5678.into(),
};
```

`s.into_bytes()` should produce `[0x34, 0x12, 0x56, 0x78]`. How the struct is stored in memory is an implementation detail which I care less about (most likely using the host-native representation makes the most sense there).
How common is mixed endianness in structs that could profit from `modular_bitfield` supporting field-wise endianness?

I have the impression it is a niche feature, and I very much like the idea of struct-wise endianness, e.g. stating at the top whether all fields should be serialized and deserialized as big- or little-endian. Then later, if that works, we could refine this feature to also work field-wise if there is greater need for it.
Until then, users are totally free to define custom bitfield structs to be used in `#[bitfield]` structs, as with the proposed `LE` and `BE` types. It should be easy to add some conversion methods to those.
I do not like the idea of limiting `modular_bitfield` types to 128 bits, since that causes a lot of restrictions and hinders generalization of the framework, which is meant to be used in a modular way.
Yeah, if you can specify endianness at the struct level and it is supported to mix struct fields of different endianness in a bitfield struct, then I am happy (I would just wrap `foo` and `bar` into structs in my example above and call it a day).
For me to understand correctly: the current standing proposal is to add an `endianness` parameter to the `#[bitfield]` macro, which controls the endianness of all (non-struct) fields?
For me to understand correctly: the current standing proposal is to add an `endianness` parameter to the `#[bitfield]` macro, which controls the endianness of all (non-struct) fields?
There have already been many proposals in this thread, but this one I consider implementable without the need for too many design discussions about potential pitfalls.
If a proc macro cannot provide a proper solution to a problem, I do not really consider implementing it, since that is then better done at the user level. Proc macros are already complicated enough, so let's not load them up with additional cognitive complexity that can be avoided and shifted onto the user layer.
@henrikssn That makes sense when it comes to fields that are byte-aligned... but what if you have something like this?
```rust
struct BitAlignedMixedEndian(LE<U4>, LE<U16>, BE<U16>, LE<U28>);
```
Which bits of which bytes does each field come from?
@Robbepop 128 bits is only a limitation if the intention is to put everything inside a single bitfield, even things that don't have bitwise alignment requirements. Which, I will admit, is very tempting to do and simulates what other languages (C, C++, Ada) that have native bitfields provide.
But there's some significant downsides, most of which come because you're forcing this to be a packed representation. Even aligned fields could require unaligned reads (unless you're aligning the byte array?) which can have significant performance impacts, you're forcing everything to go through getters and setters (also unfortunate for other reasons, a topic for a different day), etc. (On the plus side, the interface doesn't allow forming a pointer to a field so you can't fall into that particular undefined behavior trap).
It takes a bit more work and has indirection to get at the fields you want, but I think it's overall better to see this as a member of a larger structure. Something like this, for example:
```rust
#[bitfield]
#[repr(u16)]
pub struct MyBits {
    a: B3,
    b: B13,
}

pub struct MyData {
    c: u16,
    d: MyBits,
    e: u32,
}
```
(It's slightly unfortunate that Rust doesn't have nested anonymous structures like C, which makes this a bit less messy.)
From here, we can use something like the BinRead crate to handle the I/O. If we assume `from_be_bytes()` is provided to read from a byte array as big endian:
```rust
#[derive(BinRead)]
#[br(big)]
pub struct MyData {
    c: u16,
    #[br(map = MyBits::from_be_bytes)] // BinRead uses from_be_bytes to populate the data in d
    d: MyBits,
    e: u32,
}
```
(Incidentally, you can also solve byte-aligned mixed endianness this way since BinRead can apply different endianness to individual fields).
We're actually tantalizingly close to this already, come to think of it. Just change it to:
```rust
#[br(map = |x| MyBits::from(u16::from_be_bytes(x)))]
```
This workaround seems to do the right thing from my testing, but I haven't tried doing the reverse on a big-endian machine (and I wonder if this library is broken on big endian anyway...).
Anyway, this is why I'm suggesting `from_be_bytes()` instead of a parameter to `#[bitfield]`. It should be dead simple to implement since the infrastructure is already there. The same bitfield can be serialized in either endianness, too, which is a nice bonus.
I originally used the suggestion of creating `LE<U16>` and `BE<U16>` types, but I had the need to define multiple enums backed by big-endian ints, and defining `into_bytes`/`from_bytes` on a number of enums seemed like a lot of toil.
I considered the `from_be_bytes` approach, but it's unclear whether folks actually need to sometimes read their struct as LE and other times as BE, and it seems to make decisions at runtime instead of generating the code once.
As a proof of concept that allows setting `#[endian = "big"]` on `BitfieldSpecifier` structs, e.g.:
```rust
#[derive(BitfieldSpecifier)]
#[bits = 16]
#[endian = "big"]
pub enum QClass {
    Reserved = 0, // [RFC6895]
    Internet = 1, // [RFC1035]
    // ...
}
```
The changes to the procedural macros seem simple enough, and this seems flexible enough to mix and match endianness within a struct while otherwise staying opinionated about the struct's endianness, without changing the API.
I plan to support:

```
endian = "host"   // the default and today's behaviour
endian = "big"    // big endian
endian = "little" // little endian
```
I'm happy to extend `bitfield` to support endianness at the struct level and at the per-field level. That seems a simple enough API, is backwards compatible, and will no doubt solve 90% of cases. If you need something more custom, it is easy enough to implement `modular_bitfield::Specifier` to do that.
Hi all. I also needed a way to represent big-endian byte orders for a network protocol. While @CirrusNeptune's fork (https://github.com/CirrusNeptune/modular-bitfield-msb) works well for me, I decided to continue on the POC from @bramp. You can check it out here: https://github.com/vhdirk/modular-bitfield/tree/endian

I've modified the original POC by creating an enum for specifying the byte order, and also by supporting structs. The only thing I have left to do is to add support for little-endian order for B2 - B128. Any suggestions on how you may want that to work?
@vhdirk I would vary the order of the bytes being written out for types that cross a byte boundary. This means that byte-aligned types like `B16` get the same representation as `u16`, and all flags are written out in the same order as they are specified. The drawback is that it takes a bit of time to wrap your head around it. Take the following example:
```rust
#[bitfield]
#[endian(Little)]
struct Example {
    // First two bytes
    foo: U15,
    bit_one: bool,
    // Third byte is always unaffected by endianness
    bit_two: bool,
    bar: U7,
}

// In serialized format:
// [foo(lsb)] [foo(msb) bit_one] [bit_two bar]
// With big endian:
// [foo(msb)] [foo(lsb) bit_one] [bit_two bar]
```
I've been trying something like that, but I'm afraid it is still buggy. See PR here: https://github.com/Robbepop/modular-bitfield/pull/90
Basically, you can also set the `endian` flag on each individual field. If it's not specified, the field derives its byte order from the struct it's in.
```rust
#[bitfield(endian = "big")]
struct Example {
    // First two bytes
    foo: B15,
    bit_one: bool,
    // Third byte is always unaffected by endianness
    bit_two: bool,
    #[endian = "little"]
    bar: B15,
}
```
This would yield:

```
// [foo(msb)] [foo(lsb) bit_one] [bit_two bar(lsb)] [bar(msb)]
```
I'm quite sure my implementation is still not quite correct, and there are some conflicts in the specification as well:
```rust
#[bitfield(endian = "little", bits = 2)]
struct Inner {
    bit_one: bool,
    bit_two: bool,
}

#[bitfield(endian = "big")]
struct Example {
    foo: B6,
    #[endian = "little"]
    inner: Inner,
}
```
What would be the byte order of `inner` here?
I don't think a boolean should be affected by endianness. Additionally, the serialization of your example should be unaffected by endianness, since it is just one byte long.
Even if `foo` spanned multiple bytes, I would expect `Inner` to come after `foo`, and the order of the fields in `Inner` should be respected.
I would advise waiting before supporting specifying (or altering) endianness on nested structs, since it makes the proposal a lot more complex, with more edge cases to consider.
The order of the booleans in `Inner` definitely does matter. It's exactly why I need this.
You're right that alternating endianness is a mess, but I don't think it should necessarily be that hard. The range of the bits to write into is specified by the container; how it is filled exactly is the responsibility of the field.
At least that's how I understand this crate.
In my case (assembler instructions), handling endianness of structs and fields is essential.
Basically, it is the order of whatever unit of data you are referring to... which is not part of the definition.
The unit of data can be a bit, `u8`, `u16`, `u32`, `u64`, `u128`, or even exotic stuff like 5 bits (the spr field of PowerPC, to retain compatibility with the POWER architecture).
Endianness can be different for each unit of data.
Rust only handles endianness at the byte level (`le` and `be`), which is fine because the minimum alignment of struct fields is 1 byte. In this crate the alignment of `#[bitfield]` struct fields is 1 bit, so it should view structs from a bit perspective. The internal representation can be anything, but the transitions of `into_bytes`, `from_bytes`, and `From` have to respect the intended endianness.
As a reference... maybe check how C bitfields are laid out on little-endian and big-endian systems? Maybe the modular-bitfield-msb crate already matches big-endian systems?
The endianness of a field only means something after defining the endianness of the `#[bitfield]` struct. A `le` field in a `be` struct and a `be` field in a `le` struct need to have the order of their units of data reversed. A `le` field in a `le` struct and a `be` field in a `be` struct are already fine.
Instead of defining the endianness of a field, it is more appropriate to define which units of data are reversed and in what order.
Example:
use modular_bitfield::prelude::*;
#[bitfield(order = Msb)]
pub struct MsbStruct {
msb_field: B24,
lsb_field: ReverseBytes<B24>, // TODO should bits be reversed?
signed_field: Signed<B6>,
weird_field: Reverse5Bits<B10>,
wtf_field: CustomSpecifier<ReverseBytes<ReverseBits<B32>>>,
}
If you can have a `Specifier` that calls another `Specifier` in a chain, then you can implement any kind of weird or composite "endianness" in fields.

If it's possible, also allow custom orders: `#[bitfield(order = ::path::to::CustomOrder<Msb>)]`.
Not sure how it would interact with `#[derive(BitfieldSpecifier)]`.
I already tried to implement something like `Signed` but failed at generics, so instead I implemented `I1` to `I128` with a macro and 128 macro invocations... :/
Currently (at least on my platform, ARMv7), bytes are returned in LSB order but my driver requires them to be MSB. How tricky would it be to add a configuration option for this?

Example: `test.as_bytes()` returns `[0x34, 0x12]` but I need it to be `[0x12, 0x34]`. While I could reverse the order of the result from `as_bytes`, it is error-prone and would IMO be better captured by the bitfield itself.