google / ExoPlayer

This project is deprecated and stale. The latest ExoPlayer code is available in https://github.com/androidx/media
https://developer.android.com/media/media3/exoplayer
Apache License 2.0

Leverage Android Q Thermal API to request low bit rate stream as needed #6284

Open Ethan1983 opened 5 years ago

Ethan1983 commented 5 years ago

[REQUIRED] Use case description

Android devices that get warm can throttle the CPU/GPU, which can degrade streaming performance.

Proposed solution

Android Q provides a thermal API that notifies apps so they can fall back to a low-performance mode. ExoPlayer could use these callbacks to request a lower-quality stream when multiple tracks are available.
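As a plain-Java sketch of the callback pattern being proposed (an assumption for illustration: the real Android Q API is PowerManager.addThermalStatusListener, which delivers an int status to listeners; the dispatcher and interface below are stand-ins so the shape can be shown off-device, and none of these names are Android or ExoPlayer classes):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical stand-in for the platform side of the thermal API:
// listeners register and receive an int thermal status when it changes.
interface OnThermalStatusChangedListener {
  void onThermalStatusChanged(int status);
}

final class ThermalStatusDispatcher {
  private final List<OnThermalStatusChangedListener> listeners =
      new CopyOnWriteArrayList<>();

  void addThermalStatusListener(OnThermalStatusChangedListener listener) {
    listeners.add(listener);
  }

  /** Simulates the platform reporting a new thermal status to all listeners. */
  void reportStatus(int status) {
    for (OnThermalStatusChangedListener l : listeners) {
      l.onThermalStatusChanged(status);
    }
  }
}
```

On a real device, ExoPlayer would register with the actual PowerManager (API 29+) instead of a dispatcher like this.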

Alternatives considered

Apps could implement this logic themselves, but since virtually all apps would need it, an out-of-the-box solution in ExoPlayer would avoid duplicating the logic across applications.

AquilesCanta commented 5 years ago

@tonihei, mind having a look in the context of performance?

tonihei commented 5 years ago

Thanks for the feature request. We can probably look into this, but it's slightly unclear what the logic should be exactly. I have the feeling different apps may want different behavior depending on the available formats and the various thermal severity levels.

One very basic approach could be to listen to the thermal status in SimpleExoPlayer and force-set the forceLowestBitrate flag once we reach THERMAL_STATUS_SEVERE. This is very conservative and I guess no one would object to that.
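A minimal plain-Java sketch of this conservative rule (the class and method names are hypothetical, not ExoPlayer API; the integer constants mirror android.os.PowerManager's THERMAL_STATUS_* values):

```java
// Hypothetical helper modelling "force lowest bitrate at SEVERE".
// Constants mirror android.os.PowerManager.THERMAL_STATUS_*.
final class ThermalBitratePolicy {
  static final int THERMAL_STATUS_NONE = 0;
  static final int THERMAL_STATUS_LIGHT = 1;
  static final int THERMAL_STATUS_MODERATE = 2;
  static final int THERMAL_STATUS_SEVERE = 3;

  /** Returns the value to apply to the track selector's forceLowestBitrate flag. */
  static boolean forceLowestBitrate(int thermalStatus) {
    return thermalStatus >= THERMAL_STATUS_SEVERE;
  }
}
```

In a real integration, the result would be applied via DefaultTrackSelector.setParameters(buildUponParameters().setForceLowestBitrate(...)).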

Note that the API is currently only supported by Pixel devices running Q, so the practical effect is limited for now.

Ethan1983 commented 5 years ago

@tonihei A default handling that applications can override or disable with custom logic (if needed) would help. Yes, at the moment this only helps Pixel devices, but hopefully the API will see wider adoption by other vendors.

ojw28 commented 5 years ago

This is interesting :). When considering this, we should think about the pros/cons of using the thermal status directly vs more user-visible metrics like dropped frame count (and also the possibility of combining the two).

> One very basic approach could be to listen to the thermal status in SimpleExoPlayer and force-set the forceLowestBitrate flag once we reach THERMAL_STATUS_SEVERE. This is very conservative and I guess no one would object to that.

Could this approach end up repeatedly switching back and forth between the highest and lower resolutions (i.e. in the case where the highest resolution is sufficient to cause the device to heat up to THERMAL_STATUS_SEVERE, and the lowest resolution causes the device to cool down again)?

tonihei commented 5 years ago

> Could this approach end up repeatedly switching back and forth between the highest and lower resolutions (i.e. in the case where the highest resolution is sufficient to cause the device to heat up to THERMAL_STATUS_SEVERE, and the lowest resolution causes the device to cool down again)?

We should use some form of hysteresis to prevent the scenario you describe. For example, by only disabling the special mode once we drop back to THERMAL_STATUS_LIGHT or THERMAL_STATUS_MODERATE. I'm not sure how fast these states are supposed to change.
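The hysteresis idea can be sketched in a few lines of plain Java (a hypothetical gate, not ExoPlayer code; the thresholds here are one possible choice: enter mitigation at SEVERE, leave it only at LIGHT or below, so oscillation between adjacent levels does not toggle the track selection):

```java
// Hypothetical hysteresis gate for thermal-driven mitigation.
// Status ints mirror android.os.PowerManager.THERMAL_STATUS_*.
final class ThermalHysteresis {
  static final int ENTER_AT = 3; // THERMAL_STATUS_SEVERE
  static final int EXIT_AT = 1;  // THERMAL_STATUS_LIGHT

  private boolean mitigating;

  /** Feed the latest thermal status; returns whether mitigation is active. */
  boolean onThermalStatus(int status) {
    if (!mitigating && status >= ENTER_AT) {
      mitigating = true;
    } else if (mitigating && status <= EXIT_AT) {
      mitigating = false;
    }
    return mitigating;
  }
}
```

Note how a drop from SEVERE back to MODERATE keeps mitigation on, which is exactly what prevents the back-and-forth switching described above.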

Ethan1983 commented 5 years ago

For an API consumer, it would be great if the usage could be made similar to DefaultBandwidthMeter:

https://exoplayer.dev/doc/reference/com/google/android/exoplayer2/upstream/DefaultBandwidthMeter.html

weivincewang commented 5 years ago

> Could this approach end up repeatedly switching back and forth between the highest and lower resolutions (i.e. in the case where the highest resolution is sufficient to cause the device to heat up to THERMAL_STATUS_SEVERE, and the lowest resolution causes the device to cool down again)?
>
> We should use some form of hysteresis to prevent the scenario you describe. For example, by only disabling the special mode once we drop back to THERMAL_STATUS_LIGHT or THERMAL_STATUS_MODERATE. I'm not sure how fast these states are supposed to change.

The status reflects the device-level, temperature-based throttling caused by thermal stress (from power usage, external heat, or both). Device mitigation already has hysteresis to avoid dramatic jitter, and at low mitigation levels the thermal governor usually adjusts capacity dynamically instead of setting a hard floor. That said, it might be better to kick in mitigation at THERMAL_STATUS_MODERATE, while LIGHT could be used for preventive actions, e.g. estimating how quickly thermal stress builds up. And at deeper levels, e.g. CRITICAL, it might be meaningful to reduce everything to a minimum, since system capacity will be limited to the floor anyway.

tonihei commented 5 years ago

@weivincewang Thanks a lot for the input!

So the suggestion would be:

  • Switch to mitigation mode (low-resolution playback) when reaching MODERATE.
  • Turn off mitigation when the thermal status is back to LIGHT (because the platform is handling the hysteresis for us).
  • Optional: Stop playback when we reach SEVERE?

weivincewang commented 5 years ago

> @weivincewang Thanks a lot for the input!
>
> So the suggestion would be:
>
>   • Switch to mitigation mode (low-resolution playback) when reaching MODERATE.
>   • Turn off mitigation when the thermal status is back to LIGHT (because the platform is handling the hysteresis for us).
>   • Optional: Stop playback when we reach SEVERE?

Maybe we can go to the lowest resolution at SEVERE, and stop at CRITICAL or EMERGENCY?
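Combining the bullets above with this refinement, one possible status-to-action mapping looks like the following (a hypothetical sketch, not ExoPlayer code; the integer values mirror android.os.PowerManager.THERMAL_STATUS_*, and the exact thresholds are the point being debated in this thread):

```java
// Hypothetical mapping of thermal status to a player action:
// reduce quality at MODERATE, lowest rendition at SEVERE,
// stop playback at CRITICAL and above.
enum ThermalAction { NONE, REDUCE_QUALITY, LOWEST_QUALITY, STOP }

final class ThermalActionMapper {
  static ThermalAction actionFor(int thermalStatus) {
    if (thermalStatus >= 4) return ThermalAction.STOP;           // CRITICAL+
    if (thermalStatus == 3) return ThermalAction.LOWEST_QUALITY; // SEVERE
    if (thermalStatus == 2) return ThermalAction.REDUCE_QUALITY; // MODERATE
    return ThermalAction.NONE;                                   // NONE/LIGHT
  }
}
```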

tonihei commented 5 years ago

So back to the original proposal:

szaboa commented 4 years ago

Is this open for external contribution, or is someone already working on it? I find this really interesting and have started working on it, though I'm not sure I'm on the right track. You can find the commit here.

So, I've followed (or at least tried to follow) the approach used for the bandwidth meter and wired a thermal level meter into the TrackSelector, since the forceLowestBitrate flag lives there (DefaultTrackSelector). Is this the right direction? If so, I have a couple of questions:

  1. Where should we start listening to thermal status changes? Or is there a place from which we could periodically poll the meter?
  2. If I set the forceLowestBitrate flag to true with setParameters(buildUponParameters().setForceLowestBitrate(true)) from the thermal status change callback, the player does not seem to respect it. But it applies correctly if I change the initialValue. Any idea why this is?
  3. I believe listening to the thermal status shouldn't be enabled by default? So the "default" meter could be the dummy one, and if the user wants to react to thermal status changes, the DefaultThermalLevelMeter could be passed to the player?
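The meter-style wiring described above, patterned after BandwidthMeter, could look roughly like this (every name here is hypothetical; ThermalLevelMeter and DefaultThermalLevelMeter do not exist in ExoPlayer, and this is only a sketch of the dummy-default idea from question 3):

```java
// Hypothetical meter interface: a no-op default plus a listener-driven
// implementation that a TrackSelector could query.
interface ThermalLevelMeter {
  int getThermalStatus();

  /** No-op meter: always reports no thermal stress (status 0). */
  ThermalLevelMeter DUMMY = () -> 0;
}

final class DefaultThermalLevelMeter implements ThermalLevelMeter {
  private volatile int status; // updated from a thermal status callback

  /** Would be called from PowerManager.OnThermalStatusChangedListener. */
  void onThermalStatusChanged(int newStatus) {
    status = newStatus;
  }

  @Override
  public int getThermalStatus() {
    return status;
  }
}
```

The player would default to DUMMY and only react to thermal status when the caller explicitly passes in the listener-backed implementation.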
ojw28 commented 4 years ago

As alluded to earlier, I'm fairly skeptical of the kind of solution being proposed in https://github.com/google/ExoPlayer/issues/6284#issuecomment-530276736. A simple rule like switching to the lowest bitrate when reaching thermal status X feels like it won't give the best behaviour in all circumstances. In particular, if we're able to play a higher bitrate without any user-visible degradation in thermal status X, there's really no need for us to switch to a lower one, and doing so seems like it would be negative for the user.

In my opinion it makes a lot more sense to switch down based on user visible performance degradation (e.g. dropped frames or audio underruns), which may or may not occur as a result of thermal throttling and is what the user actually cares about. If we were to approach it this way, then I can see some ways in which the thermal API might be useful:

I'm aware this doesn't really answer any of your specific questions, but it does imply that we should really be implementing down-switching based on user visible performance degradation before we start looking at this.
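The degradation-first approach can be sketched as a simple windowed dropped-frame detector (a hypothetical illustration, not ExoPlayer code; the window size and threshold are arbitrary assumptions):

```java
// Hypothetical detector for user-visible degradation: advise a down-switch
// once the fraction of dropped frames in a fixed-size window crosses a
// threshold, regardless of whether the cause is thermal throttling.
final class DroppedFrameMonitor {
  private final int windowSize;
  private final double maxDropFraction;
  private int framesInWindow;
  private int droppedInWindow;

  DroppedFrameMonitor(int windowSize, double maxDropFraction) {
    this.windowSize = windowSize;
    this.maxDropFraction = maxDropFraction;
  }

  /** Report one frame (rendered or dropped); returns true if a down-switch is advised. */
  boolean onFrame(boolean dropped) {
    framesInWindow++;
    if (dropped) droppedInWindow++;
    if (framesInWindow < windowSize) return false;
    boolean switchDown = (double) droppedInWindow / framesInWindow > maxDropFraction;
    framesInWindow = 0; // start a new window
    droppedInWindow = 0;
    return switchDown;
  }
}
```

Under this scheme, the thermal API would only be a secondary input (e.g. to bias the threshold), while the primary trigger stays tied to what the user actually sees.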

weivincewang commented 4 years ago

Yes, that's doable if performance degradation can be detected reliably; it may need some filtering, like a PID controller.

> In particular, if we're able to play a higher bitrate without any user-visible degradation in thermal status X, there's really no need for us to switch to a lower one, and doing so seems like it would be negative for the user.

Providing the best UX is the app's top interest, which I fully understand; on the other hand, being a good citizen and helping the system recover from overheating is another consideration. (Though yes, thermal stress can result from something else, even an external heat source.) Anyway, we are trying to put proper CTS tests in place to make sure the throttling reporting and behavior are aligned.

szaboa commented 4 years ago

@ojw28 You are right, I see your point.

stevemayhew commented 2 years ago

@ojw28 and @tonihei, we are seeing this issue with 4K Widevine content at 60fps on puck- and dongle-style devices. These use AmLogic SoCs. Over-temperature triggers the CPU throttling, after which we see frame drops.

Our solution restricts the highest frame rate and/or resolution using the existing constraint-based track selection. Note that we are using the latest async render mode, which helps a lot (thanks @christosts!).

It would be better if it were more dynamic (in the adaptation rather than a full track selection), which would eliminate the buffering on switch.

stevemayhew commented 2 years ago

P.S. Are there CTS tests that cover correct implementation of the Android Q thermal APIs?

tonihei commented 2 years ago

As @ojw28 pointed out in the comment above, we should probably implement "down-switching based on user-visible performance degradation" first before using this signal. Using the number of (recent) frame drops as a signal allows switching down based on the actual playback degradation. So a stream that plays just fine on an overheated device despite throttling can continue to do so, whereas a stream that runs into frame drops should switch down. We haven't implemented this yet, but it seems generally preferable to using the more hypothetical signal of the Thermal API.

stevemayhew commented 2 years ago

Excellent, this is pretty much what we have done externally with the standard override track selection API. On the devices where we see this happen, it is the @.*** FPS that causes the stress; my hypothesis is that frame rate increases are what affect the CPU and contribute the most to over-temperature.

We'll keep you in the loop on how it goes in production, and we'll watch for and test any in-player solution you come up with.
