Open hubertmis opened 4 weeks ago
An initial API proposal, if we extend pm_policy, would be something like:
struct pm_policy_resource {
	void (*request)(uint32_t value_us);
	void (*release)(void);
	void *data;
};
void pm_policy_resource_request_add(const struct pm_policy_resource *resource,
                                    struct pm_policy_latency_request *request,
                                    uint32_t value_us);
The API is essentially the same as pm_policy_, but any resource can be managed by implementing the APIs in struct pm_policy_resource.
Working backwards, pm_state could theoretically be abstracted as a struct pm_policy_resource as well, allowing us to have one way to deal with all resources. Not sure about that one though :)
@gmarull @JordanYates
What is expected from these resources? For example, being "active" to do something in a given time? In that case, what does that mean? Just ensuring that the device/resource is active? For devices that would basically mean doing something like pm_device_runtime_get(time - x), where x is the time required for the device to be able to do something in time?
I would summarize what we want as: fulfill this latency requirement until the requirement is lifted.
For example: a UART (the resource in this case) is resumed. It must be able to respond to uart_poll_out(). The UART can be configured internally into multiple power states to preserve power, even to a point where the device is potentially not even clocked, at the cost of latency to fulfill the API call. But how will the UART (and the SoC) know how low a power state the device can go into? pm_policy_resource_request_add() :) No request means preserve as much power as possible; a latency requirement of 10 us means "don't go into a power state where the latency to actually poll out is longer than 10 us".
Right, but that is the thing: the PM subsystem has no idea of device power states other than "suspended"/"resumed" and "on"/"off". So we would need a new API hooked into these resources. What I am wondering is that this basically starts to introduce the concept of multiple sleep states for a device, increasing the overall complexity. That said, the usage seems valid to me.
Is your enhancement proposal related to a problem? Please describe.
We have multiple sets of functions intended to manage IRQ handling latency:
For IRQs whose timing we know in advance: https://github.com/zephyrproject-rtos/zephyr/blob/1e83368d88222ae573fe5ff6ad1b7b8187cee554/include/zephyr/pm/policy.h#L161
For IRQs whose exact timing we don't know: https://github.com/zephyrproject-rtos/zephyr/blob/1e83368d88222ae573fe5ff6ad1b7b8187cee554/include/zephyr/pm/policy.h#L281
However, the IRQs are not the only events that require latency management. We can imagine multiple examples like applying clocks to peripherals configured to stream data through DMA (without CPU involvement) or keeping power domains enabled on the route of a timer synchronization signal between multiple power domains.
These scenarios are similar to IRQ handling. For some of them we know when the low latency must be applied (periodic synchronization signal), for others we don't (SPI slave configured to stream data through a cryptography module to a radio).
Describe the solution you'd like
I would like to extend the existing latency handling API with the concept of a "resource". The scope of the currently implemented solution would apply to the resource being the CPU (requesting low CPU latency immediately, registering future events requiring low CPU latency). But the implementation would also apply to other SoC- or even product-specific resources, like the peripheral clocks and power domains mentioned above.
It seems that at least parts of the existing implementation for the CPU latency handling can be reused for the latency handling needed for other resources.
Describe alternatives you've considered
Instead of extending the existing API, we could create new, similar sets of functions that would work on the resources.
One more option is to create feature-specific sets of functions, like "latency management for SPI slave", "latency management for radio", etc. However, that looks like unnecessary duplication: most probably the implementation would be similar or the same for multiple resources.