Just some quick notes here: it looks like the following is what Linux does with I2C_RDWR, and it might be a great way to go if we're okay implementing a fallback when the feature isn’t supported at runtime:
ioctl(file, I2C_RDWR, struct i2c_rdwr_ioctl_data *msgset)
Do combined read/write transaction without stop in between.
Only valid if the adapter has I2C_FUNC_I2C. The argument is a pointer to a

struct i2c_rdwr_ioctl_data {
    struct i2c_msg *msgs; /* ptr to array of simple messages */
    int nmsgs;            /* number of messages to exchange */
}

The msgs[] themselves contain further pointers into data buffers.
The function will write or read data to or from those buffers depending
on whether the I2C_M_RD flag is set in a particular message or not.
The slave address and whether to use ten bit address mode has to be
set in each message, overriding the values set with the above ioctl's.
(from https://www.kernel.org/doc/Documentation/i2c/dev-interface)
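For reference, a Rust-side mirror of those kernel structures might look like the following. This is a sketch based on the <linux/i2c.h> and <linux/i2c-dev.h> headers, not code from this thread:

const I2C_M_RD: u16 = 0x0001; // message flag: this message reads data

#[allow(non_camel_case_types)]
#[repr(C)]
struct i2c_msg {
    addr: u16,    // slave address
    flags: u16,   // I2C_M_RD for a read, 0 for a write
    len: u16,     // length of buf in bytes
    buf: *mut u8, // pointer to the caller-owned data buffer
}

#[allow(non_camel_case_types)]
#[repr(C)]
struct i2c_rdwr_ioctl_data {
    msgs: *mut i2c_msg, // ptr to array of messages
    nmsgs: u32,         // number of messages to exchange
}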
Probably borrowing the buffers as they go in would be a good plan. Again, we can allow better ergonomics than C, but this shows the basic way it could work.
Here's an example
The buffer-copy issue is one that I've run into time and again with embedded C code. It would be really cool if Rust provided a way to elegantly deal with complex transactions without unnecessary allocation/copying.
I think it will be important to provide access to a lower-level API, for use-cases where the concept of a transaction breaks down.
Well, the i2c calls in Linux just take pointers to memory. I have a quick test I’m dabbling with to find out. Because we need to stick to the common denominators I’m not sure what the final API should look like, but I’m not sure it’s going to be faster for small buffers; instead it’ll just be more ergonomic.
Totally onboard with improved ergonomics as well.
Also totally onboard with improvements! For anyone else reading, there is related discussion in #64.
I've been wondering whether a builder-style API would work; something like SPI.start().write(&[W]).read(&mut [W]).end() could support transactions while allowing the underlying implementation to group operations if required/available?
I’m glad you guys are keen to help here! Should be better for everyone once we’re done.
I like this builder pattern plan. I’ve pushed some (pretty broken) attempts to get an underlying Linux rdwr thing going just to start mocking things up and haven’t done well (the ioctl is saying I haven’t packed things well, feel free to school me).
The trick at the moment for linux-embedded-hal is that an array needs to be defined on that platform to pass into the kernel and it might be tricky to have a builder trait. Mainly, I’m not sure how to create a buffer at compile time based on it.
I was playing around with the idea of allocating a fixed buffer of X elements. Maybe the builder could pack elements of the array?
If you check my library I linked, you’ll see the idea was to build an array of actions which rolls pretty close to the builder pattern but it makes it the crate consumer’s responsibility to own the stack allocation. It looks something like this:
let mut read_buf = [0u8; 13];
let items = [
    Message::write(&[0u8; 12]),
    Message::read(&mut read_buf),
    Message::write(&[0u8; 14]),
];
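For illustration, Message here could be as simple as an enum over borrowed buffers. This is a hypothetical sketch, not the actual definition from the library linked above:

enum Message<'a> {
    Write(&'a [u8]),    // bytes to send on the bus
    Read(&'a mut [u8]), // caller-owned buffer to fill
}

impl<'a> Message<'a> {
    fn write(data: &'a [u8]) -> Self { Message::Write(data) }
    fn read(buf: &'a mut [u8]) -> Self { Message::Read(buf) }
}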
I’m not sure which I prefer to be honest, so feel free to bikeshed a bit and we’ll see where folks lean. I’m going back to hacking on this to see if I can get the kernel to play nice or the ioctl! macro from the nix crate to properly import.
A little update: After banging on it a few hours I’m stuck on getting the ioctl to play ball. I’ll reach out to the resident master of these things @posborne.
I'm back after yet more head bashing. I got stuck on a weird drop problem, but my weird little POC is working and there are no memory copies in userland.
Currently at its lowest level I think the API could look like this:
const ADDRESS: u16 = 0x53;
const MESSAGE: &'static [u8] = &[0x01, 0x32, 0x04];
let mut data = [0u8; 1];
let bus = Bus::new(some_param_struct, ADDRESS);
let mut items = [
    bus::write(MESSAGE),
    bus::read(&mut data),
    bus::custom_thing(_, _, _),
];
bus.transfer(&mut items);
Is that a pattern that works for folks? I want to discuss implementation of this with @posborne for inclusion into his i2cdev, then port the idea up to the HAL once that's done. Does anyone have a better idea for the structure here? I'm not sure what the lowest common denominators are for this interface.
Hey @RandomInsano; this would definitely be a fine feature to add to the i2cdev crate. If you have something I think we could probably move forward with a PR or proposal for an API on https://github.com/rust-embedded/rust-i2cdev/issues/42.
@RandomInsano: Your comment in #35 made me aware of this issue. I am strongly against breaking downstream drivers. One of the main reasons for the design of shared-bus was to keep API compatibility. What about changing your API to something like this?
let mut data = [0u8; 1];
let bus = Bus::new(some_param_struct);
bus.transfer(0x53, &mut [
    bus::write(&[0x01, 0x32, 0x04]),
    bus::read(&mut data),
    bus::custom_thing(_, _, _),
]);
This would work perfectly fine with the existing ecosystem and would not reduce flexibility. For example, some complex driver might need to talk to multiple devices. Such a driver would then need multiple i2c peripherals/proxies for no real benefit. I can see why you'd want to bind a bus to an address but I don't think the increase in usability would be big enough to warrant a breaking API change.
On a completely different note, I really like @ryankurte's API proposal so I gave it a shot. What I came up with looks like this:
let mut buf = [0x00; 8];
bus.transaction(0x44)                 // Transaction for address 0x44
    .write(&[0xde, 0xad, 0xbe, 0xef]) // Write step
    .read(&mut buf)                   // Read step
    .commit().unwrap();               // Actually perform the transaction
The full implementation can be found here.
I think from a user's perspective this API looks a lot nicer. Under the hood it is a lot more complex because of the use of a recursive type to circumvent the need for allocation, but as this should not have too big of an effect on runtime/stack size I would argue that it is ok. The complexity is also hidden from peripheral implementations, which would look like this:
impl Transactor for Bus {
    fn commit<'a, S: Transact<'a>>(&'a mut self, steps: &'a mut S) -> Result<(), ()> {
        println!(" I2C: Start");
        // The closure is called for each step of the transaction
        steps.transact(|s| match s {
            Step::Write(d) => println!(" I2C: Write {:?}", d),
            Step::Read(buf) => {
                println!(" I2C: Read {:?} bytes", buf.len());
                for b in buf.iter_mut() {
                    *b = 0xff;
                }
            }
        });
        println!(" I2C: End");
        Ok(())
    }
}
One big downside I can see right away is that transactions whose steps are not known at compile time are not easily possible. I don't know how often they are needed though.
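For readers following along, the recursive-type trick roughly takes this shape; this is a guess at the idea, not the playground's exact code:

struct Start;

struct Write<'a, P> {
    prev: P,        // all earlier steps, nested
    data: &'a [u8], // bytes this step writes
}

struct Read<'a, P> {
    prev: P,           // all earlier steps, nested
    buf: &'a mut [u8], // buffer this step fills
}

// bus.transaction(0x44).write(a).read(b) then builds a value of type
// Read<Write<Start>> (lifetimes elided), whose size is known at compile
// time, so the whole chain lives on the stack and commit() can walk it
// recursively.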
@Rahix I like your suggested alteration to @RandomInsano's suggestion, though the address only makes sense for the i2c implementation anyway, I think.
I think at the moment I am in favour of the array-of-parts approach for its simplicity, as an additional trait. I wonder whether a default impl over other types would be viable; I think that raises the (SPI at least) question of from where and when the CS pin is controlled.
It should then be possible to experiment and implement a builder api on top as a separate crate if those ergonomics are preferred?
Ooo! I like the idea of something that actually works, and I'm really comfortable with the look of option number 2 as it maps to other builder interfaces I've used.
I'll try to counter-argue how we might benefit more from having an abstraction that owns the bus address / select lines. I've been mulling it over, and we don't need to break existing drivers, and the porting effort would (hopefully) be minimal and beneficial.
In the end, this is a good thought exercise and as long as the community benefits it's all fun for me! I don't mind throwing my work out if it's not the right path.
Let's say we have a few abstractions:
I2CProxy(u16)
SPIProxy(OutputPin)
OneWireProxy(u64)
LegacySPIProxy(OutputPin)
LegacyI2CProxy(u16)
We should be able to have the Legacy flavors provide the familiar interface but no-op the addresses that are sent by the address parameter of read() and write(). This would allow backward compatibility with a new system while others opt in to the new interface. The back end of this new system would ideally start by consuming the existing APIs provided by the device crates, but over time implementations that work closer to the metal would surface. For i2c, it also allows for devices with multiple addresses to be selected by the consumer.
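A sketch of what such a legacy wrapper could look like; the I2cBus trait here is made up to stand in for whatever back end the manager wraps:

// Hypothetical minimal back-end trait, just for this sketch:
trait I2cBus {
    type Error;
    fn read(&mut self, address: u16, buffer: &mut [u8]) -> Result<(), Self::Error>;
}

struct LegacyI2CProxy<B> {
    bus: B,
    address: u16, // fixed when the proxy is created
}

impl<B: I2cBus> LegacyI2CProxy<B> {
    // Familiar signature, but the per-call address is no-op'd in favour
    // of the one stored in the proxy.
    fn read(&mut self, _address: u16, buffer: &mut [u8]) -> Result<(), B::Error> {
        self.bus.read(self.address, buffer)
    }
}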
In the end, i2c, SPI, serial and 1-wire all transfer data either full- or half-duplex. I like the idea of having a consistent experience across bus types even if access patterns under the hood are different.
I'm not aware of any devices that require two distinct addresses / busses. If anyone has one, that'll be a great curve ball to think through.
// Device / driver consumer examples for the legacy case

// SPI example
let bus_master = BusManager::new_spi(spi);
let select_expander = some_device::gimme_pin();
let bus_node = bus_master.legacy_spi(select_expander);
let stretch_armstrong = Mcp3008::new(bus_node, BusManager::dummy_pin());

// I2C example
let bus_master = BusManager::new_i2c(i2c);
let bus_node = bus_master.legacy_i2c(Mag3110::ADDRESS);
let magna_doodle = Mag3110::new(bus_node);

// Multiple light sensors
let bus_master = BusManager::new_i2c(i2c);
let base_address = Tsl2561::BASE_ADDRESS;
let light_bright1 = Tsl2561::new(bus_master.legacy_i2c(base_address), 0x00);
let light_bright2 = Tsl2561::new(bus_master.legacy_i2c(base_address + 0x10), 0x00);
let light_bright3 = Tsl2561::new(bus_master.legacy_i2c(base_address + 0x20), 0x00);
@Rahix That looks great already, but I'm wondering if there's a way to turn that into an iterator based approach, so instead of commit() you'd simply iterate on your iterator to do the read byte by byte lazily.
It should then be possible to experiment and implement a builder api on top as a separate crate if those ergonomics are preferred?
@ryankurte: Hmm, unfortunately the builder API can't easily be built on top of a slice API. The issue is that I cannot transform the builder type (which is recursive) into a slice without allocation.
I noticed both APIs could be implemented simultaneously if we were to add an impl for slices of Steps:
impl<'a> Transact<'a> for &'a mut [Step<'a>] {
    fn transact<F: FnMut(&mut Step<'a>)>(&'a mut self, mut f: F) -> F {
        for s in self.iter_mut() {
            f(s);
        }
        f
    }
}
// ------------------------------------------------------------------------ //
bus.commit(&mut [
    Step::Write(&[0x35, 0xc3]),
    Step::Read(&mut buf1),
    Step::Write(&[0x35, 0xc3]),
].as_mut()).unwrap();
We could even go a step further and implement Transact for Iterators to allow for maximum flexibility:
impl<'a, T: std::iter::Iterator<Item = &'a mut Step<'a>>> Transact<'a> for T {
    fn transact<F: FnMut(&mut Step<'a>)>(&'a mut self, mut f: F) -> F {
        for s in self {
            f(s);
        }
        f
    }
}
// ------------------------------------------------------------------------ //
bus.commit(&mut [
    Step::Write(&[0x35, 0xc3]),
    Step::Read(&mut buf1),
    Step::Write(&[0x35, 0xc3]),
].iter_mut()).unwrap();
An updated playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=661c092720504bafc93bb2d9e7e07979
If you don't want this API in embedded-hal, we could alternatively use an iterator based API. This would allow a separate crate for the builder to be implemented on top. The slice example would look like this:
bus.transaction([
    Step::Write(&[0x35, 0xc3]),
    Step::Read(&mut buf1),
    Step::Write(&[0x35, 0xc3]),
].iter_mut()).unwrap();
That looks great already, but I'm wondering if there's a way to turn that into an iterator based approach, so instead of commit() you'd simply iterate on your iterator to do the read byte by byte lazily.
@therealprof: Yes, iterators would be way better here. I have not yet gotten the implementation running with iterators (mainly because I haven't had time yet) but I am positive that it can be done. Some design idea/proof of concept: https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=dbc4ee112032a569cd61d2ed224cb2a7
The more I think about it the more I'd like the transaction API to be fully Iterator based, kind of like this:
User -> <Steps as Iterator> -> Bus -> <Bytes as Iterator> -> User
or in Rust (code not tested):
enum Step<'a> {
    Write(&'a mut std::iter::Iterator<Item = u8>), // Write bytes from an iterator
    WriteSlice(&'a [u8]),                          // Write a slice (mainly for convenience)
    Read(u8),                                      // Read N bytes; these will be available in the return iterator
    ReadSlice(&'a mut [u8]),                       // Read bytes into a buffer (for convenience again)
}

trait Transactor {
    type TransactionIterator: std::iter::Iterator<Item = u8>;
    fn transaction<'a, S: std::iter::Iterator<Item = Step<'a>>>(&mut self, steps: S) -> Self::TransactionIterator;
}
@RandomInsano: I think we should move the discussion about owning addresses to a separate issue and keep this one purely about transactions. Would you be ok with opening a new issue and pasting your arguments there?
Yup, can do. I’m just happy to see the transaction idea getting some love.
@ryankurte: Hmm, unfortunately the builder API can't easily be built on top of a slice API. The issue is that I cannot transform the builder type (which is recursive) into a slice without allocation.
Ahh, right. I forgot we can't do C99-style dynamically sized types on the stack.
Iterator based approach to transactions seems good to me! I have a few concerns about iterators instead of slices for data though:
Linux's i2c_rdwr ioctl receives an array of pointers to buffers, which is what my proposal was based on. You could use the standard read/write interface with single-byte buffers, but the context-switch overhead between userland/kernel for individual bytes is inefficient.
I agree on the DMA issue. Technically, the HAL doesn't support this as it stands AFAICT but it would be nice to allow consistency in interfaces when it does.
I suppose the question is: what is the value of iterators in this use case? What are the benefits versus the drawbacks?
Aha, I thought this was familiar: we've already had this discussion w.r.t. iterator-based SPI, which was designed to address part of this issue (https://github.com/rust-embedded/embedded-hal/pull/47). Both the Linux overhead and DMA were raised there; I am back to my original perspective that I much prefer passing known-sized types.
Agreed about interface consistency. The kind of use case I am thinking of in an embedded context is: if an OS wanted to provide SPI/I2C either backed by DMA or using a syscall (both needing understanding of size), it should be possible to build that to the same interface and hide the implementation detail, so any other SPI consumer is still compatible with it.
The benefits of an iterator I can see are that you can build arbitrary iterator types (and probably compute things on the fly if that was a requirement), that you get all the magic iterator operations, and that you can assemble things (but imo the transaction approach achieves that more ergonomically).
I think the read function above would need to take a mutable iterator for ownership to make sense?
read does not take input, so it does not need an iterator. Instead it returns an iterator of bytes. Take a look at the second playground link for clarification (sorry for the messy code ...).
Maybe as an example for what I had in mind:
let t = bus.transaction()
    .write(&[0xde, 0xad])
    .read(4)
    .write(&[0xbe, 0xef])
    .read(8)
    .commit().unwrap();

for (i, b) in t.enumerate() {
    // First iterates over the 4 bytes from the first read,
    // then the 8 bytes from the second read.
    //
    // Maybe it'd make sense to include some sort of marker in
    // the iterator indicating which read a byte is from.
    println!("Byte [{:02}]: 0x{:02X}", i, b);
}
Linux's i2c_rdwr ioctl receives an array of pointers to buffers
This is an array of "steps", right? Do we absolutely need to supply all steps at once, or would it be possible to preallocate e.g. a 16-step buffer and reuse it multiple times? I feel like this would be an ok solution on Linux. On bare-metal devices we don't need it because we don't need an array.
Apart from that, in my experience transactions with lots of steps are pretty rare. Because of this, performance should be good anyway if we use the iterator based approach.
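The preallocation idea might look roughly like this on the Linux side; MsgDesc is a dummy descriptor standing in for the kernel's struct i2c_msg, and all names here are hypothetical:

const MAX_STEPS: usize = 16;

#[derive(Clone, Copy, Default)]
struct MsgDesc {
    addr: u16,
    read: bool,
    len: u16,
}

struct LinuxTransactor {
    msgs: [MsgDesc; MAX_STEPS], // reused for every transaction, no allocation
}

impl LinuxTransactor {
    fn transfer(&mut self, descs: &[MsgDesc]) -> Result<(), ()> {
        if descs.len() > MAX_STEPS {
            return Err(()); // transaction has more steps than the fixed buffer
        }
        self.msgs[..descs.len()].copy_from_slice(descs);
        // ... hand self.msgs[..descs.len()] to the I2C_RDWR ioctl here
        Ok(())
    }
}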
As raised on the other thread, I suspect DMA read or copy can't work from an iterator? That would preclude automated transfers with yields, which might be nice for large transfers and multitasking.
I agree on the DMA issue. Technically, the HAL doesn't support this as it stands AFAICT but it would be nice to allow consistency in interfaces when it does.
Hmm, DMA is asynchronous which this API is not. At least that is what I had in mind while designing my proposal.
I agree that an asynchronous API shouldn't be iterator based. But it could still look similar:
let mut buffer_future = Future::new([0x00; 10]);
bus.dma_transaction()
    .write(&[0x12])
    .read(&mut buffer_future)
    .commit().unwrap();
let buffer = buffer_future.wait().unwrap();
Well, I don't think we actually need transactions in the form start/commit to do the iterators. Instead I'd simply map what's happening on the I2C bus directly into an iterator interface which works lazily, with writes acting as a sort of barrier and read returning an iterator. I also wouldn't allow making this endless, i.e. one "transaction" can either be a write, a read, or a write followed by a read.
Dropping of the transaction or iterator would clean up the I2C peripheral for the next transaction, i.e. also if the iterator is not exhausted for some reason.
I also wouldn't specify the amount of bytes to read() but leave that to iterator operations or the peripheral. So your example would look like:
for (i, b) in bus.transaction()
        .write(&[0xde, 0xad])
        .read()
        .enumerate() {
    println!("Byte [{:02}]: 0x{:02X}", i, b);
}

for (i, b) in bus.transaction()
        .write(&[0xbe, 0xef])
        .read()
        .enumerate() {
    println!("Byte [{:02}]: 0x{:02X}", i + 4, b);
}
or
for (i, b) in bus.transaction()
        .write(&[0xde, 0xad])
        .read()
        .chain(bus.transaction()
            .write(&[0xbe, 0xef])
            .read())
        .enumerate() {
    println!("Byte [{:02}]: 0x{:02X}", i, b);
}
or similar; there are tons of options for how to use the iterator. NB: I assume you left off the address for brevity reasons?
But yes, this is a blocking operation. For async I'd also go the Future route.
read does not take input, so it does not need an iterator. Instead it returns an iterator of bytes. Take a look at the second playground link for clarification (sorry for the messy code ...).
I can't seem to grasp from the example how the iterator over those returned bytes could be used? Also if we return an iterator it seems to me like the .next() function would hide the bus timing?
Hmm, DMA is asynchronous which this API is not. At least that is what I had in mind while designing my proposal.
The HAL API is synchronous / blocking; passing iterators precludes the HAL internals being asynchronous. This is an important distinction imo, the caller shouldn't have to know what occurs underneath.
I also wouldn't allow making this endless
I don't know about endless, but I don't see any reason to limit it, and have SPI use cases that are WriteRead(cmd, &mut status) -> Read(&mut len), Read(&mut data[0..len]) and WriteRead(cmd, &mut status) -> Write(body) -> Write(crc).
I also wouldn't specify the amount of bytes to read() but leave that to iterator operations or the peripheral. [...]
I also vastly prefer handling explicitly sized buffers rather than unknown-length iterators.
But yes, this is a blocking operation. For async I'd also go the Future route.
I agree, but async is a different problem.
Also if we return an iterator it seems to me like the .next() function would hide the bus timing?
I'm sorry, I don't follow. You don't see the bus timing anyway; that's why you have to set a speed at the master and let the peripheral work at its own pace.
This is an important distinction imo, the caller shouldn't have to know what occurs underneath.
I disagree, a caller always needs to know whether a call is blocking or non-blocking even if just for the error handling. (A-)synchronicity is often related to blocking / non-blocking but not always the same thing.
I also vastly prefer handling explicitly sized buffers rather than unknown-length iterators.
Why? You can still easily transfer the values into your sized buffer, like:
let mut values: [u8; 8] = [0; 8];
for (v1, v2) in values.iter_mut().zip(bus.transaction()
        .write(&[0xde, 0xad])
        .read()
        .take(8)) {
    *v1 = v2;
}
Why? You can still easily transfer the values into your sized buffer, like:
I'd agree that we don't need size specifications if we were to only allow single reads in a transaction. But that would entirely miss the point of transactions: To allow more complex bus operations to be done atomically.
I don't know about endless, but I don't see any reason to limit it, and have SPI use cases that are WriteRead(cmd, &mut status) -> Read(&mut len), Read(&mut data[0..len]) and WriteRead(cmd, &mut status) -> Write(body) -> Write(crc).
This is exactly what transactions are for. Limiting would miss the point; in that case you could simply use the existing traits.
Because of this, I agree with @ryankurte that we need size specifications.
We could allow iterating beyond the length of the transaction's steps and map that to further reads, but that could get confusing or lead to bugs that aren't easy to see.
I disagree, a caller always needs to know whether a call is blocking or non-blocking even if just for the error handling. (A-)synchronicity is often related to blocking / non-blocking but not always the same thing.
I agree with you on that; we should not hide this implementation detail.
I'm sorry I don't follow. You don't see the bus timing anyway, that's why you have to set a speed at the master and let the peripheral work at it's own pace.
I was imagining the returned iterator being returned directly and passed into other read functions or something, but, maybe that's not possible?
I disagree, a caller always needs to know whether a call is blocking or non-blocking even if just for the error handling. (A-)synchronicity is often related to blocking / non-blocking but not always the same thing.
Agreed, but a function that appears blocking could have other scheduling underneath it: an OS with a syscall-based approach to peripheral access could preempt the running thread and schedule something else, or put itself to sleep, while the transfer occurred. To the caller this appears blocking, but on the processor other things can be happening while it blocks. Similar to the Linux example, it's not possible to implement this effectively without knowing the transfer size.
Why? You can still easily transfer the values into your sized buffer, like:
I'm not good enough at rust to work out what your iterator example is doing :-/
As opposed to something like this?
let mut values: [u8; 8] = [0; 8];
bus.transaction([
    Step::Write(&[0xde, 0xad]),
    Step::Read(&mut values),
].iter_mut()).unwrap();
We could allow iterating beyond the length of the transaction's steps and map
Sounds very complicated; I think I agree / don't think that's ideal for the hal?
I was imagining the returned iterator being returned directly and passed into other read functions or something, but, maybe that's not possible?
Oh, in theory it is possible but passing Iterators is a bit iffy (especially in embedded) so unless absolutely necessary I wouldn't think anyone does that on purpose.
Agreed, but a function that appears blocking could have other scheduling underneath it: an OS with a syscall-based approach to peripheral access could preempt the running thread and schedule something else, or put itself to sleep, while the transfer occurred. To the caller this appears blocking, but on the processor other things can be happening while it blocks.
How's this different from embedded? You can also serve other interrupts, or, if you're handling it in an interrupt handler, run stuff from your main routine.
Similar to the Linux example, it's not possible to implement this effectively without knowing the transfer size.
Well, the other side could throw you a curveball and simply not comply with what you're expecting in transfer size so you (and the system) need to be prepared to handle other amounts as well.
I'm not good enough at rust to work out what your iterator example is doing :-/
It packs 8 bytes from a write-then-read operation into an array of your choosing (you can leave out the take(8), actually; I just added it for more clarity that you can limit the amount of data read explicitly).
Iterators allow you to do just so much more than fixed arrays:
- Variable amounts of data
- On the spot data conversion and processing
- early abort on faulty data
- introspection of incoming data while it is coming in (for me this is huge!)
In many cases I'm just reading into arrays/slices so I can then iterate over the data which is rather annoying.
I'd agree that we don't need size specifications if we were to only allow single reads in a transaction. But that would entirely miss the point of transactions: To allow more complex bus operations to be done atomically.
Okay, my point is you're trying to model something that the hardware does not guarantee you. In I2C there's exactly the three mentioned cases of write/read/write-then-read which can be considered one transaction. So from my perspective it does not make too much sense to build an abstraction of a "super-transaction" on top of that.
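For reference, those three primitives map directly onto embedded-hal's existing blocking I2C traits; abridged here from embedded_hal::blocking::i2c:

trait Read {
    type Error;
    fn read(&mut self, address: u8, buffer: &mut [u8]) -> Result<(), Self::Error>;
}

trait Write {
    type Error;
    fn write(&mut self, address: u8, bytes: &[u8]) -> Result<(), Self::Error>;
}

trait WriteRead {
    type Error;
    fn write_read(&mut self, address: u8, bytes: &[u8], buffer: &mut [u8]) -> Result<(), Self::Error>;
}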
How's this different from embedded? You can also serve other interrupts, or, if you're handling it in an interrupt handler, run stuff from your main routine.
Only in that you can't make a syscall without knowing how much memory to transfer. I think that's an argument for using slices for both steps and data. Also, if a similar approach works for Linux, maybe it's a design worth considering?
Well, the other side could throw you a curveball and simply not comply with what you're expecting in transfer size so you (and the system) need to be prepared to handle other amounts as well.
Sure, but it should always be less than expected, and knowing the size lets you work out whether it's ok or not.
Iterators allow you to do just so much more than fixed arrays:
They do, but they're also more complicated, and imo less easy to write determinate code around. You can't rationalise about the size until they're evaluated, and as you mentioned you can't really pass them. Slices imo are a simpler thing to base everything on. I'm also worried that having both could be bad for compatibility between hal dependents?
- Variable amounts of data
Interesting, I'm not sure how to handle this in a single transaction with slices. IME I usually read the size in one transaction from one memory location, then the data in a separate one from another location, in which case it's not a huge issue.
- On the spot data conversion and processing
I like in place / zero copy parsing of data to structs, I think they're mutually exclusive?
- early abort on faulty data
Like, in between operations in a transaction?
- introspection of incoming data while it is coming in (for me this is huge!)
Yeah, that'd be neat, what do you usually use it for?
In many cases I'm just reading into arrays/slices so I can then iterate over the data which is rather annoying.
I find the opposite :-/
In I2C there's exactly the three mentioned cases of write/read/write-then-read which can be considered one transaction
I think maybe the term transaction is not clear here, my understanding is that we're talking about one transaction as one of the listed write/read/write-then-read, but, assembled from a set of different slices to make the hal more friendly? For example an eeprom read might consist of a const address slice, an immutable offset slice, and a mutable data slice for returned data.
I had a bash at a slice based transactional api over the existing SPI trait, though it's not quite expressive enough I think.
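The eeprom read described above might look like this with the slice-based Step API from earlier (address handling elided as in the other examples; the offset layout and sizes are made up):

let offset = [0x00u8, 0x10]; // 16-bit memory offset inside the eeprom
let mut data = [0u8; 16];    // returned data lands here

bus.transaction([
    Step::Write(&offset),  // set the eeprom's read pointer
    Step::Read(&mut data), // then clock the data out
].iter_mut()).unwrap();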
I think maybe the term transaction is not clear here, my understanding is that we're talking about one transaction as one of the listed write/read/write-then-read, but, assembled from a set of different slices to make the hal more friendly?
This is my definition. See the OP for two examples. I should go looking around for more.
They do, but they're also more complicated, and imo less easy to write determinate code around.
I could not disagree more.
You can't rationalise about the size until they're evaluated
Well, you can evaluate them in place, same as the transaction with slices.
as you mentioned you can't really pass them.
You can pass them, but it's not really a good idea; the same can be said for slices, though. You really want to parse your data into useful structures ASAP.
Slices imo are a simpler thing to base everything on.
You can still use slices if you want to do that. I do find them rather annoying; converting them into anything useful is a major PITA and often involves iteration anyway, cf. https://github.com/rust-embedded/cortex-m-rt/pull/107/files#diff-cdc6ef2a70d26bf65a4057f6a49a336d
I like in place / zero copy parsing of data to structs, I think they're mutually exclusive?
Not sure what you're trying to say. With iterators everything is happening in place, no need to create a buffer on the stack, fill it and then pull out the relevant data. If you have a huge dump of data with iterators you can simply skip the irrelevant data (or just validate that you get what you're expecting).
Like, in between operations in a transaction?
As in: You're reading a record of data and there's an invalid flag pretty much in the beginning of it (like GPS records). In that case you can simply abort the bus operation by exiting the iteration.
Yeah, that'd be neat, what do you usually use it for?
Debugging. When you're working with devices which have less than optimal documentation it's great if you can simply pop in an inspect() in your iterator chain and have a peek at the data you're getting. The next best way to debug data on a bus is to physically hook up a logic analyser or my oscilloscope with a protocol decoder to look at the communication. (Yeah, I know some people really enjoy running a dev build in a debugger but I'm not one of them, and sometimes it's not even possible because that seriously screws with the timing.)
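Concretely, the inspect() trick on the proposed iterator API might look like this; bus.transaction() is the hypothetical API from above, while Iterator::inspect is the real standard-library adapter:

let mut values = [0u8; 8];
for (v, b) in values.iter_mut().zip(
    bus.transaction()
        .write(&[0xde, 0xad])
        .read()
        .inspect(|b| println!("raw byte: 0x{:02X}", b))
        .take(8),
) {
    *v = b;
}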
IIRC there were a few discussions about iterator interfaces in other issues. Maybe it'd be worth looking through these...
Taking a step back from the conversation and surveying the landscape so far, it looks like the sides of the argument are driven by personal preference. One is traditional for interfaces in other languages and seems simpler to implement. The other allows an interface and usage style that maps more closely to how physical busses actually operate.
These kinds of arguments are fun and are a great way to find limitations in either implementation but usually spinlock with neither side wanting to yield. We need @japaric to do the BDFL thing and pick a baby or put it up to a community vote so we can all work together to implement a solution to this problem.
I don't have a heck of a lot of time lately due to work and life commitments but I think both sides (slices vs iterators) need to polish up proposals from what's been said back and forth here to make it digestible for either Jorge or that vote. Then it's pistols at dawn!
Is November 9th a good date? That'll be a little under three weeks, and it's two months after I opened this guy.
Hey sorry I've been a bit overwhelmed atm / haven't had time to work on this further.
I think perhaps there's a related discussion to be had about where CS fits with our SPI trait, as well as what the goals for the hal are (#112).
FYI this is now supported in rust-i2cdev (docs)
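A sketch of what that looks like with rust-i2cdev; see the linked docs for the exact signatures:

use i2cdev::core::{I2CMessage, I2CTransfer};
use i2cdev::linux::{LinuxI2CDevice, LinuxI2CMessage};

let mut dev = LinuxI2CDevice::new("/dev/i2c-1", 0x20)?;
let mut read_buf = [0u8; 2];
let mut msgs = [
    LinuxI2CMessage::write(&[0x01]),      // e.g. select a register
    LinuxI2CMessage::read(&mut read_buf), // then read two bytes back
];
dev.transfer(&mut msgs)?; // one combined transaction, no stop in between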
I put together a PR (https://github.com/rust-embedded/embedded-hal/pull/178) with simple transactional I2C and SPI traits for discussion / review.
@eldruin I'm not totally clear on the need for the I2CMessage trait here; is there a reason we would need to promote something like this to the embedded-hal crate?
@ryankurte Sounds good. Off the top of my head I do not think we need the I2CMessage trait, but honestly I do not remember why I chose that solution for linux-embedded-hal right now. I would need to think about it again.
Does this still need to be open?
Indeed. We support I2C and SPI transactions now. Thank you to everybody involved for the fruitful discussion. Also thanks @Dominaezzz for noticing.
Both SPI and I2C have some concept of a transaction: with SPI you assert an enable pin before sending data, and I2C has start and end messages.

I'm finding that as I write and review more crates, the ergonomics of requiring single fixed-sized buffers for communication are becoming cumbersome or require a lot of code duplication for variably large receivers. An example of the latter is the required code duplication in the eeprom24x crate.
After some research, it looks like the buffer copy for SPI and I2C in Linux-land might be unavoidable because I2C_RDWR ioctl support isn't required. That means it would need to be wrapped somehow in the linux-embedded-hal crate.

Looking at Arduino, they do have the concept of protocol transactions. I'm not sure if I agree with the API entirely, but the concept is good. I'm sure Rust opens up some interesting design improvements there.
Can someone poke holes in this idea for me? I'd like some sort of solution to allocating these extra buffers on memory-constrained devices, especially when it leads to reading and writing different-sized data.
Here's a list of code that I've found where buffer copying was required but didn't need to be: