thesofproject / linux

Linux kernel source tree

[DNM][POC] asoc: sof: add a hard-coded array of loadable extensions #5156

Open lyakh opened 3 months ago

lyakh commented 3 months ago

The driver currently loads component drivers that are missing from the firmware base image but are present in the topology. It may also be necessary to load auxiliary loadable objects. Use a hard-coded array for that.

I'm not proposing to merge this approach, but we need something to load such auxiliary modules.
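To make the shape of the proposal concrete, here is a minimal standalone sketch of what such a hard-coded table might look like. The names (`sof_aux_ext`, `sof_aux_lookup`, the `"maths_fir"` entries) are hypothetical illustrations, not the actual identifiers from this PR; a real implementation would likely key on module UUIDs from the firmware manifest rather than strings.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical descriptor for one auxiliary loadable extension. */
struct sof_aux_ext {
	const char *name;      /* auxiliary object name, e.g. a maths library */
	const char *needed_by; /* component driver that depends on it */
};

/* Hypothetical hard-coded list, as the PR title suggests; a mergeable
 * solution would derive this from module metadata instead. */
static const struct sof_aux_ext aux_exts[] = {
	{ .name = "maths_fir", .needed_by = "eq_fir" },
	{ .name = "maths_iir", .needed_by = "eq_iir" },
};

/* Return the auxiliary extension a component driver needs, or NULL. */
static const struct sof_aux_ext *sof_aux_lookup(const char *comp)
{
	size_t i;

	for (i = 0; i < sizeof(aux_exts) / sizeof(aux_exts[0]); i++)
		if (!strcmp(aux_exts[i].needed_by, comp))
			return &aux_exts[i];
	return NULL;
}
```

The obvious drawback, and presumably why this is tagged [DNM], is that the table must be updated in the driver for every new auxiliary object, instead of the dependency being described by the topology or the module itself.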

ranj063 commented 3 months ago

> The driver currently loads component drivers that are missing from the firmware base image but are present in the topology. It may also be necessary to load auxiliary loadable objects. Use a hard-coded array for that.
>
> I'm not proposing to merge this approach, but we need something to load such auxiliary modules.

@lyakh what are these aux loadable objects exactly? Am I understanding correctly that you're talking about modules that depend on other modules? If so, why can't they also be specified in topology?

lyakh commented 3 months ago

Replying to @plbossart and @ranj063 - one day I should finalise https://github.com/thesofproject/sof-docs/pull/493 . In short: currently we can build Module Adapter instances as loadable LLEXT modules. With added dependency support we will also be able to dynamically load "auxiliary" code, e.g. maths libraries like FIR, needed for eq_fir and one or two other component drivers. If your topology contains such pipelines but they aren't in use ATM, that code can stay in IMR without being copied to SRAM.

plbossart commented 3 months ago

> if your topology contains such pipelines but they aren't in use ATM, that code can stay in IMR without being copied to SRAM.

What's the benefit? Can this memory be reused for the heap then?

There's also the added risk/benefit question. New degrees of freedom are great until they start raising new validation problems with dynamic allocation failing randomly at some point... Exhibit A was the introduction of dynamic pipelines.

Edit: an additional question is the increased latency to enable a new pipeline that wasn't used before. This would move some of the latency from the boot or resume time to the pipeline start proper. Not necessarily good in all cases.

lyakh commented 2 months ago

> > if your topology contains such pipelines but they aren't in use ATM, that code can stay in IMR without being copied to SRAM.
>
> What's the benefit? Can this memory be reused for the heap then?

Potentially, yes. ATM it can at least be reused for other modules. Additionally, unused SRAM banks can be powered down.

> There's also the added risk/benefit question. New degrees of freedom are great until they start raising new validation problems with dynamic allocation failing randomly at some point... Exhibit A was the introduction of dynamic pipelines.

Sure, in general more code and features usually mean higher maintenance costs.

> Edit: an additional question is the increased latency to enable a new pipeline that wasn't used before. This would move some of the latency from the boot or resume time to the pipeline start proper. Not necessarily good in all cases.

That's true as well. And dynamic pipelines themselves, as you mentioned, already add to this latency. Maybe we should consider adding a feature to lock some pipelines in SRAM. But maybe user-space should just keep them open in such cases.

plbossart commented 2 months ago

Seems like we need a whole new level of infrastructure across firmware and host driver to power-manage banks, lock modules, deal with dependencies, manage fragmentation, etc.

This isn't a simple addition to the list of features... And it raises the question: what happens if we don't support this capability?