Closed haukepetersen closed 5 years ago
Additionally see #7947. And I have to disagree that pm is not usable. I have it fully implemented for atmega and it works great.
Here's the discussion. Long story short: the shell will become completely incompatible with sleep modes — UART will prevent the system from going to sleep, the shell cannot be stopped, and UART cannot be disabled while the shell is running.
This seems to me rather a matter of implementation, fixable with something like shell_disable() or stdio_poweroff() or similar.
With the current code (without pm_layered) I can enable and disable LPM whenever I like. With the proposed changes I can't. Between can't and can, I prefer the latter.
This gets more interesting: if we are able to switch to any power mode at basically any time, this means that we have to harden all our implementations against/for that, adding quite some complexity, especially to periph and device drivers. One of the beauties of the pm system is that we are always in a deterministic state, never having to worry about being put into some unexpected sleep state by some other thread.
especially to periph and device drivers
well that's why there are layers, right? And without pm_layered, there is no power mode switching, except for the idle state, right?
yup
Ok, here's the problem: I don't see why enabling/disabling some peripheral should disable/enable low power modes.
E.g. if the UART is enabled and has an RX callback registered — it prevents the system from entering STOP mode.
1) Let's distinguish between an enabled peripheral and an active peripheral. If you have a running DMA transfer on a UART port, it is enabled and active, and the system must not sleep, or you lose data. Here we need pm_layered.
2) An enabled UART with a registered callback is not always an active UART. E.g. sys/shell is always running and the corresponding UART is always enabled — so sys/shell will block LPM forever. The UART can't be disabled while the shell is running — it would hang on the very first TX.
3) Multiple workarounds which are being proposed to fix the situation seem to add unnecessary complexity:
As for me, I don't see why UART and other similar enabled-not-active peripherals should block LPM at all.
1) The user always knows whether he needs UART or not, and when exactly he needs it. Let him decide whether to block LPM or not; there are a lot of situations when he doesn't need UART right now (shell on a sleepy device, wake-by-UART devices, etc.).
2) Low power modes should be blocked by peripheral drivers only when data may be lost because of the system going to SLEEP/STOP/whatever — e.g. DMA transfers and other interrupt-driven procedures.
3) All other cases, let the user control it.
continued from #8401:
That being said, the real reason this was first introduced was that in the original state the "low power modes" implemented for stm32l1 weren't only sleep modes but also low power run modes. These modes required switching to the MSI clock, which in turn required recalculating baudrate, timer config, etc.
This is a good point: the pm module is actually designed to handle a CPU's sleep modes, not really considering different run modes, as the concept so far only considers one static run mode (@kaspar030 right?). So maybe having some (potentially CPU-specific) functions for handling the run mode (e.g. different core clocks) in parallel to what the pm module is doing could be the way to go here?
I ended up cutting them because RIOT only defines 4 low power modes (3 discarding normal run) (why only 4 BTW??)
This is actually not true. You can define any number of power modes if needed. 4 is just the default.
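For reference, the mode count is a per-CPU compile-time constant; a CPU can expose more than four modes by overriding it in its own headers. A sketch (the path and the mode list in the comment are illustrative, not taken from any specific port):

```c
/* cpu/<yourcpu>/include/periph_cpu.h (sketch) */
#define PM_NUM_MODES    (6U)  /* e.g. standby, stop2, stop1, stop0, lprun, sleep */
```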
An enabled UART with a registered callback is not always an active UART. E.g. sys/shell is always running and the corresponding UART is always enabled — so sys/shell will block LPM forever. The UART can't be disabled while the shell is running — it would hang on the very first TX.
and you cannot RX if you disable it, except if you have interrupts and the UART is low-power capable.
additional functions to enable/disable/reinitialize the shell (and surely other functions, if they tend to use enabled-but-not-active peripherals) add complexity, which may be crucial in the case of time-critical functions; manually overriding an LPM mode set by some OS component undermines the whole concept of automatic LPM
true, not a fan of that. IMO: if you want to deep sleep, you can't use the shell/uart (at least not in RX mode!), or you have to enable/disable the uart every time you want to use it.
Low power modes should be blocked by peripheral drivers only when data may be lost because of the system going to SLEEP/STOP/whatever — e.g. DMA transfers and other interrupt-driven procedures.
true! or blocking SPI transfer (talking about atmega stuff here that I am using)
All other cases, let the user control it.
what are the other cases?
and you cannot RX if you disable it, except if you have interrupts and the UART is low-power capable.
This is my point. The actual (good, btw) design was thought to take into account most of general cases. But then you have a bunch of devices coming (because there was a need, a market) which embed a lot of new low power features. Thus, the general approach should be either extended or superseded.
true, not a fan of that. IMO: if you want to deep sleep, you can't use the shell/uart (at least not in RX mode!), or you have to enable/disable the uart every time you want to use it.
I tend to say enable/disable at will. Developers should know what they want, and when.
what are the other cases?
Other kinds of peripherals which support dynamic configuration to achieve low power modes.
IMO: if you want to deep sleep: you can't use the shell/uart
Why? I can combine it: our current firmware enables sleep 15 seconds after booting, so the user has a few moments to change some device settings if he needs to. Then the system goes to sleep (STOP actually), the UART goes to sleep automatically (the CPU stops its clock), but any time the system wakes up, the UART is up and running too, TX-ing log/debug/etc. messages.
So actually, to make it compatible with the proposed pm_layered, I need a function not to disable the shell or the UART, but to unregister the UART RX callback.
That seems absolutely counterintuitive to me; it's logic that people not familiar with RIOT internals will find impossible to comprehend.
what are the other cases?
Any case where going to sleep doesn't mean losing data. Should gpio_init_int block STANDBY mode just because GPIO interrupts don't work in STANDBY? I don't think so.
P.S. I'm talking about Cortex-Ms mostly, so it's usually:
The user always knows whether he needs UART or not, and when exactly he needs it. Let him decide whether to block LPM or not; there are a lot of situations when he doesn't need UART right now (shell on a sleepy device, wake-by-UART devices, etc.).
Who is "User"? A developer using periph/uart? A developer using shell (which might use periph/uart or rtt_stdio)? An application developer using coap, which uses 6lowpan over xbee, which expects packets to arrive unless the xbee is set to sleep?
When writing a specific application for a specific target hardware, "User" might know.
Low power modes should be blocked by peripheral drivers only when data may be lost because of the system going to SLEEP/STOP/whatever — e.g. DMA transfers and other interrupt-driven procedures.
I think this has to be extended: Low power modes should be blocked by peripheral drivers when a requested function would otherwise not work. Thus if an application asks to e.g., get uart input, that must be guaranteed. Power management is one level below.
All other cases, let the user control it.
Would something like a forced / exported "pm_set_mode()" suit your needs? How would you design that in a portable way?
Low power modes should be blocked by peripheral drivers when a requested function would otherwise not work
Should STANDBY mode on Cortexes be blocked by gpio_init_int?
Should STANDBY mode be blocked by any driver as it doesn't preserve GPIO state so external devices may or may not work properly if CPU enters STANDBY?
Should peripheral drivers on Cortexes check whether a specific peripheral is enabled or not in SLEEP mode, and if it is not, block SLEEP mode too? Should they check it every time the CPU goes to sleep?
What's the approximate cost (CPU cycles, microseconds) of manually checking and reconfiguring all those nice peripherals every time the CPU goes to sleep or wakes up?
P.S. Should xtimer_init block all sleep modes forever? Should pwm_init do the same?
Would something like a forced / exported "pm_set_mode()" suit your needs?
Actually no, because I'll have to check every time whether some new driver has its own opinion on when to disable LPM — on init only, or on every thirteenth minute of an hour too.
All I want is a system which doesn't get in my way if there's no disastrous consequences. Ok, save me from getting HardFault, I hate'em.
But taking control from the user on an embedded system is a really, really bad idea.
Should STANDBY mode on Cortexes be blocked by gpio_init_int?
What is this STANDBY mode of which Cortexes? Are the Cortex-M sleep modes even that standardized?
If the question is whether "gpio_set_int()" should block the sleep mode that it needs, yes it should.
Should STANDBY mode be blocked by any driver as it doesn't preserve GPIO state so external devices may or may not work properly if CPU enters STANDBY?
Should peripheral drivers on Cortexes check if specific peripheral is enabled or not in SLEEP mode, and if it is not, block SLEEP mode too?
I'm confused by what you mean with SLEEP and STANDBY modes.
The concept of pm requires periph drivers to block sleep modes that would prevent requested functions from working. Only at that level can this decision be made, as only periph code can know about the power modes on a specific target. Fiddling with power modes on a higher level makes code non-portable.
This does require higher-level APIs (e.g., a device driver using SPI) to explicitly enable or disable used peripherals (e.g., SPI or a gpio pin used as interrupt) when needed. It is very possible that some API is missing (like, disable a gpio int but not deconfigure it, ...). Still, I believe the general concept works out.
which embed a lot of new low power features.
but then it's a new class of drivers, which handle LPM / pm_layered differently. I don't see a problem within pm_layered here so far.
current firmware
what exactly are you talking about here? I am talking about general RIOT stuff, but this behavior with STOP is new to me.
I'd propose to generally look at the sleep modes and their impact. For atmega this is quite simple and can be managed by pm_layered. However for other MCUs, especially the cortex family, we need to reconsider a lot more.
Fiddling with power modes on a higher level makes code non-portable.
Absolutely agree here. IMO this should be totally transparent to and hidden from the user. If the user uses the shell -> no sleep. If the user doesn't have any shell and only does an i2c_read()/_write() once in a while, the node can go to sleep easily.
Problem here is xtimer obviously
Actually no, because I'll have to check every time whether some new driver has its own opinion on when to disable LPM — on init only, or on every thirteenth minute of an hour too.
I'm not talking about blocking any PM. I'm asking whether forcing a low power mode would suit your needs. If your argument is that you cannot know whether that would break some new driver - exactly that is what pm tries to prevent, in a portable way.
All I want is a system which doesn't get in my way if there's no disastrous consequences. Ok, save me from getting HardFault, I hate'em. But taking control from the user on an embedded system is a really, really bad idea.
I think you're trying to basically do manual power management, or you'd like to do it half-automatic. In your application, you don't care about shell's uart rx. Others might. How do we express that in code?
I understand that your specific use-case might have some trouble fitting in the pm concept. I'm very interested in fixing either your application, or pm, or both.
Are the Cortex-M sleep modes even that standardized?
Cortex-M itself has two standard modes, sleep (stops core clock) and deep sleep (stops everything). On specific CPUs there are nuances, I'm mostly referring to STM32 scheme as the discussion started with it.
Specifically, SLEEP mode on STM32 stops the core clock, but peripherals may (or may not) be clocked, depending on APBxLPENR register settings.
Should the driver check those registers every time the CPU is going to enter SLEEP? Because if it doesn't, the peripheral served by that driver may not work in SLEEP mode.
Besides that:
Is there anything left of LPM then? Counting that xtimer can't be fixed because on some CPUs you just don't have any other timer, let alone a high-resolution one?
In your application, you don't care about shell's uart rx. Others might.
Others should learn that UART goes to sleep if CPU goes to sleep. They should disable sleep if they need UART. There is no, I repeat no, way to work with MCU without learning some things about your MCU internals. Even Arduino requires that (sometimes).
I think disabling/enabling peripherals should be automatic only if higher-level (obscure) modules are using them. For example, if uart or spi is being used by some kind of sensor, disabling LPM should be left to the user. If it regards radios and you are using GNRC on top of it, some disabling should be automatic, because sometimes the thread will be blocked waiting for a response, neighbor discovery, etc. that a user may not have knowledge of, as it happens under the hood. Here it should prevent the MCU from going into deep-sleep modes that prevent GPIO ISRs or other important functions. But this should happen when information can be lost, and not while bl_rx. (In these cases low power run modes would be the ideal way to go.)
On other cases I think the user should decide when to block LPM and which ones. Going back to my original comment and @haukepetersen comment:
So maybe having some (potential CPU specific) functions for handling the run-mode (e.g. different core clocks) in parallel to what the pm module is doing could be the way to go here?
This would be very cpu_dependent but I think it's needed when dealing with more diverse LPM.
This is actually not true. You can define any number of power modes if needed. 4 is just the default.
My mistake then, I was just referring to the documentation which stated "CPUs define up to 4 power modes (from zero, the lowest power mode, to PM_NUM_MODES-1, the highest)", so I thought that was the standard we had to adapt to (for now at least).
Again, let's please keep concept and implementation issues separate.
anything that uses any UART blocks LPM
... anything that registers a callback for rx blocks the corresponding low power mode, if the uart needs that.
There could be a low-power clocked UART that allows waking up the CPU without losing bytes. Hardware flow control could trigger wakeup. Or maybe not, and UART RX requires the CPU to busy wait. All depending on the UART implementation and configuration.
Any application or driver of periph/uart would keep running, unchanged, on all of them. If the configuration or implementation doesn't allow deep sleep while UART RX is requested, then it uses more power. But driver developers don't have to deal with hardware specific power management.
sys/shell blocks LPM
on most platforms, because of the uart RX as described above.
anything that uses any high-frequency timer block LPM
Because if the corresponding power mode is not blocked, anything using the high-frequency timer just doesn't work. Unless the user is doing manual power management, the MCU might go to sleep any time.
xtimer blocks LPM
Can we agree that xtimer is a known problem? :)
pwm blocks LPM
It should only do so if the implementation requires that while it is being used.
I think disabling/enabling peripherals should be automatic only if higher-level (obscure) modules are using them. For example if uart or spi is being used by some kind of sensor, disabling LPM should be left to the user.
Again, who is the user?
Stuff gets layered. coap might use saul might use random_driver which might use i2c or spi depending on the target board. Whether something is "obscure" or "totally clear" depends very much on the application.
What are the other enabled but not active peripherals other than the UART? As I see it the only asynchronous periph driver right now is the UART, all others are completely controlled by the software.
These are the peripheral drivers that I could think of now, but only the UART is problematic in my opinion.
From my view there are two opposing ideologies here, both have their merits, but neither is the best in all situations. I think the key here is that we find something that works well in most situations instead of finding the optimal solution for a single problem.
So to summarize, the two opposing views are:
(current RIOT method used by pm_layered) let each subsystem request higher power modes when they need it, (pm_block/pm_unblock). Benefits of this method are:
Drawbacks are:
In the other method, the power management is controlled by the application, enabling/disabling low power is explicit and the application has full control of when it is possible to go to sleep.
Benefits of this method are:
Drawbacks:
Personally, I think that the best general purpose default is to use a method like pm_layered which works well for most applications, and only needs some small improvements to deal with the UART RX situation. Moving the control to the application puts more demands on the application developers and makes it difficult to do any significant power savings in a networked multi-tasking system with multiple separate applications running on a device, without sacrificing system reliability.
These are my personal views, please correct me if I have misinterpreted the positions.
as a reference regarding 'active' peripherals: #4758
anything that registers a callback for rx blocks the corresponding low power mode, if the uart needs that
Kaspar, come on, seriously. You want the user (a programmer of any level) to know absolutely nothing about CPU internal states and peripherals, and at the same time you require him to understand how power consumption depends — by several orders of magnitude — on a registered callback. It's bloody magic to everyone not familiar with RIOT internals.
anything using the high-frequency timer just doesn't work
Why doesn't xtimer_spin work? It does. We use it. It blocks LPM because it blocks switching to the idle thread, yes, but it works. I can't replace it with anything else because I don't have anything else on STM32 capable of measuring microsecond intervals.
And I can use any kind of HF timer while CPU is not sleeping. It may not sleep right now because I blocked LPM, because I'm running something in a long loop or just because timer is used to provide ticks for something I don't care about while sleeping, and CPU is not sleeping for some other reason not directly related.
Can we agree that xtimer is a known problem? :)
It's an unfixable problem. On some CPUs (including some STM32s) you don't have any timer which ticks during sleep and has intervals below 1 second.
Actually I thought a bit about xtimer (before today's discussion), and I think that xtimer_sleep should block LPM while other functions should not.
Yes, users (all and any species of them) should learn that xtimer is kind of strange on sleepy system. But that's the only way, besides putting a ban on sleepy systems.
BTW, here is a lifehack: if you want to debug LPM without any external hardware, check xtimer_now() periodically. It is the time the CPU spent running (plus-minus a bit).
What are the other enabled but not active peripherals other than the UART?
Timers, unfortunately.
xtimer_now() doesn't have a target, it is broken with LPM enabled and it can't be fixed.
You want the user (a programmer of any level) to know absolutely nothing about CPU internal states and peripherals, and at the same time you require him to understand how power consumption depends — by several orders of magnitude — on a registered callback. It's bloody magic to everyone not familiar with RIOT internals.
I'm thinking about scale, and failure cases.
With pm:
If a clueless C developer uses periph/uart and has no clue about the CPU internal states and peripherals, they will probably use more power than necessary by setting a UART callback but not using it, if the uart implementation then requires blocking of a low power mode.
An experienced C / RIOT developer can do the right thing - don't initialize something that is not needed, implicitly not using power for that.
With manual power management:
If a clueless C developer uses periph/uart and has no clue about the CPU internal states and peripherals, they will probably use more power than necessary by default.
An experienced C / RIOT developer cannot do the right thing, as it is CPU specific. edit ... and it might not be clear in advance on which platform a specific piece of code might run.
Thus, for a system like RIOT, something like pm is an absolute necessity.
That doesn't mean that we cannot allow overriding pm's decisions, or allow manual mode selection.
@olegart can you share your code?
@olegart
xtimer_now() doesn't have a target, it is broken with LPM enabled and it can't be fixed.
Let's just discuss the concepts here instead of the particular implementation of xtimer.
xtimer is a known issue, it needs better configuration, and implementation improvements to handle heterogeneous timer constellations. The reason why xtimer implicitly (xtimer never calls pm_block AFAIK) blocks all low power modes on most boards is that a high frequency timer is configured as the xtimer backend, and combined with the fact that xtimer always sets a long target if no other timer target is set, in order to be able to count past 0xffffffff ticks (which happens after 1h11m on 1 MHz timers).
xtimer is a known issue and there are multiple people working on improvements to it, and any contributions to it are welcome. If you need low power right now and don't mind reducing the accuracy of the xtimer targets you can reconfigure it to use a low power timer which does not call pm_block during timer_set (for example, Kinetis supports this when using the 32768 Hz timer instead of the high frequency timers). Better solutions are being worked on to allow high precision timing in combination with low power modes.
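The reconfiguration described above is done through the xtimer override macros in the board header (XTIMER_DEV, XTIMER_HZ and XTIMER_WIDTH are the real macro names; the path, timer index and width here are illustrative for a 16-bit 32768 Hz low-power timer):

```c
/* boards/<yourboard>/include/board.h (sketch) */
#define XTIMER_DEV      TIMER_DEV(1)  /* assumed index of the low-power timer */
#define XTIMER_HZ       (32768UL)     /* tick frequency of that timer */
#define XTIMER_WIDTH    (16)          /* counter width of that timer */
```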
@kaspar030 From MCU programmers' point of view «using more power by setting and not using callback» is meaningless. Why should I care about callbacks at all if I suppose that in sleep there's no UART working and so there will be no callbacks? I expect the peripheral to stop, not to prevent me from using sleep.
I really do not understand the logic behind the idea.
If I enabled LPM, I did it for a reason. Whatever it was — there was a reason, and I don't want the «OS knows better what you really need» behavior. It is a very strange concept indeed on a system where I can get all sorts of trouble, including but not limited to full system crash, using the most trivial functions from gpio.c.
Why at all should OS override my decisions? That is the real question. To keep me safe and sound? But almost any function provided by RIOT can be used to crash the whole system just by supplying wrong pointer/index/etc. to it. RIOT is not designed to be safe at all, not on any level, and for some functions — gpio.c for example — it was a deliberate decision made by maintainers. Even while GPIO_UNDEF seems to be a perfectly legitimate value, gpio_read(GPIO_UNDEF) has an undefined behavior, probably an immediate Hard Fault.
To provide some kind of «guaranteed functionality»? Why should UART be guaranteed and sleep not? Why at all and why here? RIOT is not designed for it; a simple «gpio_init_int(GPIO_PIN(PORT_A, 10), GPIO_IN_PU, GPIO_BOTH, &cb, NULL); gpio_irq_disable(GPIO_PIN(PORT_B, 10));» will surprise you quite a bit on STM32.
There was a nice simple implementation that really worked well: the idle thread puts the CPU to sleep if there's no flag (global variable) set to prevent it. Ok, replace the variable with something more robust in a multitasking environment.
I really do not understand what's wrong with it and why it should be replaced with «OS knows better» PM and a bunch of workarounds. I don't understand why everyone is talking about some «manual control» — we don't have manual control, we don't need it, idle system goes to sleep, that's it.
In my understanding, the only reason for OS to deliberately keep CPU from sleep is preventing a suicide — when stopped clock means data loss and undefined behavior.
In all other cases, where behavior is well defined, even if not properly understood by the user, the OS should not intervene. UART and xtimer are clearly such cases.
can you share your code?
Which code exactly? We have quite a lot of it here https://github.com/unwireddevices/RIOT/tree/loralan-public, we are using RIOT in commercially produced battery powered systems.
@gebart We do have xtimer, rtctimers with 1-second accuracy and rtctimers-millis with 5-ms accuracy. We use all of them (well, almost, in latest build rtctimers was completely replaced by rtctimers-millis, as we dropped MCUs that don't support it from production devices). We replaced xtimer wherever it was possible exactly because of power consumption, but there are places where we need microseconds precision.
In fact, if the next RIOT release has UART and xtimer blocking LPM, we will remove those blocks in our fork.
I suppose that in sleep there's no UART working
That is maybe a valid thing to suppose. You might as well suppose "when I set up UART for reading, I suppose the system does not go to sleep underneath, until I tell it I don't want UART RX anymore".
I guess both are valid assumptions, but unfortunately orthogonal.
idle thread puts CPU to sleep if there's no flag (global variable) set to prevent it.
This behavior is very simple to get (by disabling pm_layered, defining PROVIDES_PM_SET_LOWEST and providing a custom pm_set_lowest()).
You're very free to develop your applications that way.
But this was certainly not what RIOT's users expected.
On the contrary, most developers expect the system to automatically manage power. pm offers an (optional) infrastructure to do that. It will never be optimal (compared to hand-written, target specific and application-tuned power management), but hopefully it will be good enough that for 95% of the use cases it is just not worth the effort to not use it.
It is in early stages, and I guess that is mostly why we're discussing it right now - once most periph drivers, xtimer and the network stack actually make use of it, we'd be discussing fine tuning and whether dynamic CPU scaling is worth the effort.
I don't understand why everyone is talking about some «manual control» — we don't have manual control, we don't need it, idle system goes to sleep, that's it.
How and where do your applications decide to sleep (set lpm_prevent_sleep) or not?
when I set up UART for reading, I suppose the system does not go to sleep underneath, until I tell it I don't want UART RX anymore
Why should I suppose that, exactly? Can you show me a specific page of the Reference Programming Manual of any MCU or maybe Cortex-M core which states such behavior?
No. Because no MCU behaves like that. It is a concept pushed by RIOT, and RIOT only. And I don't understand why anyone needs it.
On the contrary, most developers expect the system to automatically manage power
No, they don't. No more than they expect the system to automatically initialize everything that is described in periph_conf.h (why do I have to explicitly call uart_init anyway?) or to have int main() and not only void loop() exposed to them (yes, and there are systems doing both things).
Developers expect embedded system to behave predictably and exactly as described in MCU's programming guide.
RIOT is going to break it.
How and where do your applications decide to sleep (set lpm_prevent_sleep) or not?
It sleeps whenever it enters idle thread, if something didn't set lpm_prevent_sleep.
And something is a very, very, very rare thing. I'm not even sure if we still have it anywhere in our code except startup — probably not. Our programmers are encouraged to avoid it because fiddling with sleep on a battery-powered device is a dangerous thing to do.
we'd be discussing fine tuning and whether dynamic CPU scaling is worth the effort
In other words, RIOT's PM will be unusable in real world applications for another year or two. Is it only me who thinks MBED is kind of miles ahead already?
No. Because no MCU behaves like that.
The atmega family does exactly that. If you go to sleep (and don't block some sleep modes) you will not receive any data; that's why you have to block the sleep modes if the uart is used for receiving.
I see your point that you can go to sleep while the uart is enabled, which is obviously not a use case, and that's why we are blocking it, yes :)
No, they don't.
Well, we do :-) I just want it to be as power effective as it can be. I don't want to know about all the low-power stuff. I just want a notice telling me: WARNING, you are using UART in RX mode, so your node will never sleep. Or: GREAT! Your node will use sleep modes. That's what my group would really like to see. And that's exactly how I implemented pm_layered for atmega_common / atmega1284p.
It sleeps whenever it enters idle thread, if something didn't set lpm_prevent_sleep.
I can't find such a function in RIOT. Where is it implemented? Is it the cortex-m stuff from the referenced pulls/issues?
In other words, RIOT's PM will be unusable in real world applications for another year or two.
I am using it, but we can't compare AVR with Cortex/ARM processors, I agree on that! For me (avr world) pm_layered works great.
@roberthartung Any MCU without a buffered LPUART cannot receive data while sleeping, because the UART is not clocked.
This is normal. This is expected behavior. Peripherals are not clocked while sleeping.
But an MCU that refuses to sleep because it has its UART clocked is a highly unexpected thing. AVR doesn't do that by itself. Cortex-M doesn't do it. 8051 doesn't do it. On any existing MCU the programmer has full control over sleep modes and peripheral clocks.
I see your point that you can go to sleep while the uart is enabled, which is obviously not a use case, and that's why we are blocking it, yes :)
Are you really supposed to block all things which you — you personally — considered «not a use case»? Seriously? Not a use case for whom? That nice Apple way of «our fine marketing engineers decided that you users don't need it»?
I can't even imagine the logic behind deliberately blocking users from using specific perfectly legitimate functionality.
What comes next? «We removed headphone jack UART support because UART is an outdated interface, our users don't need it, they should use I2C instead»? «We don't support UART modes except 8-N-1 because we have no use case for them»?
Oh, wait. The last one is not a joke, it's https://github.com/RIOT-OS/RIOT/pull/5899. So we had to implement it ourselves, because a f*cking lot of industrial automation uses different baudrates and modes over half-duplex channels.
Developers expect embedded system to behave predictably and exactly as described in MCU's programming guide.
RIOT is going to break it.
This statement compares Apples and Bananas... RIOT is not forcing any MCU to do anything that is not stated in any reference or programming manual (nor will it ever, as this behavior is fixed in silicon and hard to alter...), so RIOT does not break anything.
It's like working with any software library: there is an API description (reference manual for the MCU) that shows us all the functions we can use and describes their behavior. Any programmer can take this description and use the library in the way he/she sees fit. So same with using MCUs: RIOT tries to look at all the ways possible to use an MCU, figure out the most useful subset of these functions and make these accessible to the 'user' (RIOT developer on a higher layer). What we in the end do with an MCU is only limited by the things the MCU allows us to do... And here is where there is room for creative and novel concepts (-> implicit, function-centric pm). If we just did things everybody else is doing already, there would be no RIOT...
@roberthartung Any MCU without a buffered LPUART cannot receive data while sleeping, because the UART is not clocked.
Again, not true. One can configure e.g. the SiLabs or samr0 MCUs to do just that without using any buffered LPUART...
This is normal. This is expected behavior. Peripherals are not clocked while sleeping.
There is no such thing as 'normal'. Each vendor has their own style of designing clock domains and low power modes; some are strict (atmega), some are quite configurable (SiLabs' Geckos, Atmel's sam0s). So for an OS like RIOT it is all about having a general high-level concept in place that allows mapping all these different MCU designs onto a portable and easy-to-use API.
But an MCU that refuses to sleep because its UART is clocked is a highly unexpected thing. AVR doesn't do that by itself. Cortex-M doesn't do it. 8051 doesn't do it. On any existing MCU the programmer has full control over sleep modes and peripheral clocks.
Again, there is no highly unexpected thing! The behavior is clearly defined (though of course there is always room for improvement in the documentation part of town :-) ). It's actually the beauty of the pm concept that the system is always in a deterministic state, and one doesn't have to care about inconsistent state due to an unexpected change of power modes...
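For reference, the determinism described here comes from pm_layered's reference counting: each component blocks the modes it cannot tolerate while active and unblocks them when done, so the deepest reachable mode is always well defined. The sketch below is a self-contained, simplified model of that scheme, not RIOT's actual implementation (the real API lives in `pm_layered.h` as `pm_block()`/`pm_unblock()`; the convention that mode 0 is the deepest mode follows RIOT's):

```c
#include <assert.h>
#include <stdint.h>

#define PM_NUM_MODES 3  /* e.g. standby, stop, sleep; platform specific */

/* one usage counter per power mode; blocking mode i also rules out
 * all deeper modes (lower indices) */
static uint8_t blockers[PM_NUM_MODES];

static void pm_block(unsigned mode)   { blockers[mode]++; }
static void pm_unblock(unsigned mode) { blockers[mode]--; }

/* walk from the shallowest mode down and stop at the first blocked one;
 * the result is the deepest mode the system may enter right now
 * (PM_NUM_MODES would mean "stay in run mode") */
static unsigned pm_lowest_allowed(void)
{
    unsigned mode = PM_NUM_MODES;
    while (mode > 0 && blockers[mode - 1] == 0) {
        mode--;
    }
    return mode;
}
```

In this model a UART driver would call `pm_block(1)` before starting a transfer and `pm_unblock(1)` when the transfer completes, so the idle thread can never pick a mode that would stop the UART clock mid-transfer.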
The discussion is getting off track. @olegart Please open a separate issue for your UART API concerns if you have some changes that you want to introduce, or respond in #5899. To me it seems like that thread had a very constructive discussion around the API design of the UART.
Regarding this topic, I think this can quite easily be solved by introducing a new module as an alternative to the pm_layered module. The new module would do exactly what @olegart describes: check a yes/no variable whenever pm_set_lowest is called. Since the drivers use the conditional #if MODULE_PM_LAYERED, it should be straightforward to apply, and nothing should break. It will simply be an alternative to pm_layered and will let the user decide if they need more control, with the caveat that they need to take care not to stop a clock while something is using it.
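A minimal sketch of what such an alternative module could look like — a single application-controlled switch consulted by `pm_set_lowest()`. Everything here besides the `pm_set_lowest` name is hypothetical (`pm_enable`/`pm_disable` are illustrative names, and `pm_set()` is a stand-in that records the chosen mode instead of touching hardware):

```c
#include <assert.h>
#include <stdbool.h>

/* hypothetical module state: the application decides when sleeping
 * is allowed, instead of drivers blocking/unblocking modes */
static volatile bool pm_allowed = false;
static int last_mode = -1;          /* last mode "entered", for inspection */

void pm_enable(void)  { pm_allowed = true; }
void pm_disable(void) { pm_allowed = false; }

/* stand-in for the real mode switch; just records the mode */
static void pm_set(int mode) { last_mode = mode; }

/* called from the idle thread; only enters a low-power mode if the
 * application has explicitly allowed it */
void pm_set_lowest(void)
{
    if (pm_allowed) {
        pm_set(0);   /* enter the deepest mode the application chose */
    }
    /* otherwise: stay in run mode (idle loop / WFI) */
}
```

The trade-off gebart mentions is visible here: nothing stops the application from calling `pm_enable()` while a driver still needs its clock, so the burden of correctness moves entirely to the user.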
@gebart I have no issues with UART — we implemented all the functionality that has «no use case» a while ago. We just never felt a need to spend time on lengthy GitHub discussions about use cases.
Here I tried to explain that deliberately guarding users from things those users are too lazy to understand is a really bad idea.
Ok, RIOT maintainers have another opinion.
That's fine with me, we will continue to use our own fork. Besides, it's the only production-ready RIOT version for STM32 anyway — all the fancy new APIs and code shifting around modules still didn't change the fact that LPM, I2C, ADC and even GPIOs are all broken for these CPUs.
I agree with @gebart that we need several options instead of forcing all users into one because we think that's the best choice. I know we work on very flexible/usable APIs to make RIOT very friendly (as our motto says), but it's true that we restrict a little the freedom of those users who want to go a bit beyond the proposed solutions.
As Kaspar and Hauke said, there's actually no limitation on the use of RIOT features (or MCU features), but then there are NACKs on the PRs of the people who want to share their view, which stalls development towards better or extended versions of our APIs.
That's fine with me, we will continue to use our own fork. Besides, it's the only production-ready RIOT version for STM32 anyway — all the fancy new APIs and code shifting around modules still didn't change the fact that LPM, I2C, ADC and even GPIOs are all broken for these CPUs.
This is actually a very strong statement, which might be true for some, but not for others. What is clear here is that in the middle of the fight some good contributions might be lost, and with them the aim of an open source community. But that's also off topic (anyway, half of the conversation is), so we need to refocus, and hopefully @olegart along with @fjmolinas are still motivated to converge on something everyone (or at least most people) finds useful for RIOT regarding power management.
I've played with customizing power management. See #8445.
all the fancy new APIs and code shifting around modules still didn't change the fact that LPM, I2C, ADC and even GPIOs are all broken for these CPUs.
It is customary (and more efficient) to open separate issues for specific problems that have been pinpointed. @olegart you have mentioned a number of issues on your stm board (you listed I2C, ADC, GPIO...). Could you please open separate issues for these, describing each problem in more detail? It would help discussions towards solving these issues a great deal.
I think I'll make PRs, probably even today. We fixed all those problems, and it's pretty basic code, so it should not be a problem to transfer it to recent RIOT code (we are using 2017.10 now).
@olegart sounds great!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you want me to ignore this issue, please mark it with the "State: don't stale" label. Thank you for your contributions.
I see there's consensus among maintainers that the concept of pm_layered is OK, that implementation is a bit lacking, and that we agree it is easy enough to work around automatic pm (e.g., #8445). So I consider the original question answered.
Please re-open if you disagree.
Please re-open if you disagree.
You need to close it first to allow for that ;-)
Description
I created this issue to provide a central place for the similar/overlapping discussions started in #8401 and #7787, and to move the discussion about the actual pm concept in RIOT here, instead of having the general discussion in those specific, STM-related PRs.
I see the main goal of this issue in finding out where (or why) the actual pm concept is 'totally useless', and whether we need to re-think the whole thing, what needs to be fixed/added, or if this is just a matter of clarification.
So why is pm not usable, and which specific use cases does it prevent from working? Keep in mind to separate between missing implementations (no pm mode blocking in periph drivers, no disable/powerdown functions in higher-level modules like the shell) and broken concept(s).
References
#8401, #7787, #6802