Closed: xgroleau closed this 9 months ago
NOTE: there's also #60
That was what caused me to want to explore. Though #60 seems to only update the dependencies that have async support; it did not add any async functions to the library itself. That's why I created a separate one.
Thanks for taking this on! This is a lot of work done already.
I think in principle this looks ok but on the higher level these are things I'd like addressed:
I'm wondering if it'd be possible to unify the implementations (e.g. async_set_pixels vs set_pixels) such that the name would be the same and only the method bodies would change with the feature. Essentially a hard xor: if async, we get only the async ones; if sync, we get the sync ones, but the signatures/names (apart from the asyncness parts) remain the same. This has always been a pain with sync vs async though.
That would be possible with a trait; that is essentially what I've done with the model, both have the write_pixels. It's very similar to how embedded-hal and embedded-hal-async do it.
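As a minimal sketch of the "hard xor" idea: the trait and method keep the same names, and only the asyncness flips with the feature. `Model`, `write_pixels`, and the `async` feature name here are simplified stand-ins for the real crate items, not the actual mipidsi API. (Note that at the time of this discussion, `async fn` in traits still required nightly.)

```rust
// Sketch of feature-xor unification: same trait name, same method name,
// only the asyncness differs. `Model`, `write_pixels`, and the `async`
// feature name are illustrative assumptions, not the real mipidsi API.
#[cfg(not(feature = "async"))]
pub trait Model {
    type Error;
    fn write_pixels(&mut self, pixels: &[u16]) -> Result<(), Self::Error>;
}

#[cfg(feature = "async")]
pub trait Model {
    type Error;
    async fn write_pixels(&mut self, pixels: &[u16]) -> Result<(), Self::Error>;
}
```

With this shape, downstream code that doesn't care about asyncness sees an identical surface in both builds, and only call sites gain or lose `.await`.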
Once unification from point 1 (if at all possible) is done, we could then possibly unify the model inits and other parts using a macro that generates sync vs async calls (e.g. dcs.write_command in both cases but .await? in case of async etc.)
That could be done with maybe-async or maybe-async-cfg. I remember needing to use maybe-async-cfg instead of maybe-async a while ago, but I don't remember exactly why. I could check.
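For illustration, here is roughly what such a macro would generate, hand-expanded from one annotated source function. The `Dcs`/`write_command` names mirror the discussion, but the bodies are placeholders and the `async` feature name is an assumption:

```rust
// Hand-expanded illustration of what a maybe-async style macro generates:
// two variants of the same function, differing only in `async`/`.await`.
// `Dcs` and `write_command` are placeholder stand-ins for mipidsi internals.
pub struct Dcs;

impl Dcs {
    #[cfg(not(feature = "async"))]
    pub fn write_command(&mut self, _cmd: u8) -> Result<(), ()> {
        Ok(()) // placeholder for the real bus write
    }

    #[cfg(feature = "async")]
    pub async fn write_command(&mut self, _cmd: u8) -> Result<(), ()> {
        Ok(()) // placeholder for the real async bus write
    }
}

// Generated sync variant: plain `?`.
#[cfg(not(feature = "async"))]
pub fn init(dcs: &mut Dcs) -> Result<(), ()> {
    dcs.write_command(0x01)?; // e.g. a reset command
    Ok(())
}

// Generated async variant: identical except for `async` and `.await`.
#[cfg(feature = "async")]
pub async fn init(dcs: &mut Dcs) -> Result<(), ()> {
    dcs.write_command(0x01).await?;
    Ok(())
}
```

The macro's value is that only the first form is written by hand; the tradeoff, as discussed below, is readability of the macro-annotated source.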
When it comes to the iterator vs raw, the main reason things are done the way they are currently is because many MCUs cannot fit the full framebuffer size in RAM. We could definitely think about how to make this work better for big chunks though.
I do agree about the iterator. It's also just very nice for converting the types using map. Although in the case I encountered using slint, I need to render the lines and then draw them, so I cannot use an iterator 😕.
Currently I have migrated to the embedded-hal 1 alpha and am using nightly features (for async traits). If those are feature gated, is that OK with you, or do you prefer only using stable features and libraries?
We can use nightly if it's locked behind the async feature considering async traits aren't exactly solved in stable yet.
I can try to unify the implementation using maybe-async when I have some time and see how it goes.
I've split the repo into a workspace with mipidsi-async being a separate crate. Please put any async work there.
Using a separate crate will prevent the use of maybe-async. Thus we would need to duplicate most of the codebase (for all display models). It is one way to do it, and fine if that is what is preferred, but from what I understood you wanted to avoid duplication.
I'd like to avoid duplication, but not at the cost of readability. I'll take a proper look at your PR, but at first glance the maybe-async approach seems convoluted to me.
Yes, I do agree that it's not easily readable, even more so for future new contributors. Then we just need to duplicate the code and add some async/await keywords. That's the approach most libraries take from what I've seen anyway, so maybe it's the way to go. We just need to watch out to make sure both APIs stay the same down the line.
Let's separate it then. The master branch has the async crate ready for changes, so let's just duplicate things there for now. We can try to refactor common pieces into a mipidsi-common crate if that seems like a good way forward later.
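The resulting workspace layout might look like this (a sketch only; the `mipidsi-common` member is speculative, as noted):

```toml
# Hypothetical top-level Cargo.toml for the workspace split.
[workspace]
members = ["mipidsi", "mipidsi-async"]
# If the later refactor happens:
# members = ["mipidsi", "mipidsi-async", "mipidsi-common"]
```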
@xgroleau Please, could you add instructions to MIGRATION.md on how to convert code from the 0.6 version to the new version with async?
Let's say, how to modify this code which works with 0.6:
mipidsi::Builder::ili9342c_rgb565(di)
.with_display_size(320, 240)
.with_orientation(mipidsi::Orientation::PortraitInverted(false))
.with_color_order(mipidsi::ColorOrder::Bgr)
.init(&mut delay, Some(reset))
This PR is not relevant anymore. I'll actually close this, see #72 instead.
Also, in #72 the code is based on 0.7.1, and there is no migration from 0.7.1 required since no previous code was touched. For 0.6 to 0.7, I don't know the exact migration required; maybe you can check out the release notes here to get the info you need.
Though if you want to use the async version, you need to make sure your HAL supports the 1.0 release candidate of the embedded-hal and embedded-hal-async traits, and use the async crate instead; otherwise, the API should be the same.
Exploratory work for async support. Based on the async draft done here. The goal of the PR is not to merge the changes, but more to discuss an async api and discuss the possible shortcomings.
I've tested and benchmarked a bit using the st7789v2 on an nrf52840 using embassy.
From my tests, mipidsi's use of a color iterator is quite limiting (even for sync); using a raw slice of u16 improved the rendering speed quite a lot. The iterator API should probably be avoided for async big writes, since it would wake the task very often and give significantly worse performance than the sync API: the cost of that many wakes is probably not negligible.
For the sync api, I tested using a slice instead of an iterator and drawing speed increased by about 1.25 to 1.5 times.
The iterator api allows a lot of flexibility and is nice to work with, but leaves a lot of performance on the table.
Providing a "raw" api would probably be interesting. For example, if I had enough memory and sent the full display to be redrawn, using the current async api from the PR with iterators would cause 900 wakes (display-interface-spi splits into 64-pixel packets for U16BEIter, and the st7789v2 has 240*240 pixels). I'm not sure if you were interested in an async api at all, but just wanted to share the information I've collected.
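The wake-count arithmetic can be spelled out as follows (the 64-pixel packet size and the 240x240 resolution come from the numbers above; the function names are purely illustrative):

```rust
// Wake-count math from the discussion: a full-frame redraw through the
// iterator path awaits once per 64-pixel packet, while a raw-slice path
// could hand the whole buffer to one DMA transfer and await once.
const WIDTH: usize = 240; // st7789v2 resolution
const HEIGHT: usize = 240;
const PACKET_PIXELS: usize = 64; // display-interface-spi U16BEIter packet size

fn iterator_awaits() -> usize {
    // 240 * 240 = 57600 pixels, in 64-pixel packets -> 900 awaits
    (WIDTH * HEIGHT).div_ceil(PACKET_PIXELS)
}

fn raw_slice_awaits() -> usize {
    1 // single DMA transfer of the whole frame, assuming it fits in RAM
}
```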
Also note that in the PR, only the changes for st7789 were done, but it would be pretty similar for the rest of the models.
@almindor let me know what you think!