Open xpojosh opened 1 year ago
We've done a little more work that may provide some insight here. We ran DXVA checker while running the two different runtimes.
With Adobe 31, we can see that video decoding is in fact being performed by the GPU. With the Harman runtime, the GPU is never called for decoding, so it must be falling back to software (even though an "accelerated" event is reported by the video texture).
See attached DXVA screenshots.
Adobe Traces
Harman Traces
Okay thanks .. so it looks like I'm seeing the same thing here although I also see a "video processing" thing going on with ADL, just not a "video decoding". With AIR SDK 31 it has the decoding too, per your view there. I will see if we can find out what's changed then, may need to dig through some of the Adobe history.
AIR 31: works with hardware decode
AIR 32.0.0.89: works with hardware decode
AIR 32.0.0.116: doesn't play the video?
Netstream status: NetStream.Play.Start
onMetaData called. Video started. Total time = 634.5333333333333
Netstream status: NetStream.Play.Failed
Netstream status: NetStream.Play.Stop
AIR 50.2.1: plays but software decode
So I'm suspecting this is to do with the changes that Adobe originally made, and then we fixed to get some of the video working again, but maybe the routing/decisioning internally is still a bit screwed up...
thanks
Ah .. so it looks like the video decoder logic in AIR needs some initial processing of the H.264 stream, and then slices are passed to the DirectX hardware decoders to accelerate decoding and display. But that first bit of processing is something that was removed when Adobe pulled out the software codecs that required patent licensing.
I recall a few years ago we were hooking up a VDPAU decoder on an embedded Linux platform, and I think there were some similar challenges with that... let me talk with the guy who did that and see whether we can pull together some code from there.. or alternatively we can see whether there is more flexibility in the DirectX stuff to hook this in at an earlier point - I'd have hoped that this is possible..!
thanks
Andrew --
This is all good news! Thanks so much for the frequent updates. They are much appreciated and help us plan our strategy going forward.
Cheers!
Josh
FYI I've also just used that DXVA checker to see what happens with our new multimedia framework - still work in progress but currently it can output to a NativeWindow... and that does use the hardware video decoder component. Hopefully it will still do that when we add the custom presenter logic so that we can redirect the output to textures, stagevideo and also (via software copying of the buffers) a normal video object...
That's awesome news!
In case this helps (available/working from 50.2)
package {
    import air.media.FileSource;
    import air.media.MediaElement;
    import air.media.Pipeline;
    import air.media.States;
    import air.media.VideoOutput;

    import flash.display.Sprite;
    import flash.display.StageDisplayState;
    import flash.events.Event;
    import flash.filesystem.File;

    public class Main extends Sprite {
        public function Main() {
            stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;

            var fileSource : FileSource = new FileSource();
            fileSource.file = new File("c:\\adobe\\videos\\bbb4k.mp4");

            var viOp : VideoOutput = new VideoOutput(this.stage.nativeWindow);
            var pipeline : Pipeline = Pipeline.createPipeline(fileSource, viOp);
            pipeline.addEventListener(MediaElement.STATE_CHANGE, onPipelineStateChange);
        }

        private function onPipelineStateChange(e : Event) : void {
            var pipeline : Pipeline = e.target as Pipeline;
            trace("Pipeline state change -> " + pipeline.readyState);
            if (pipeline.readyState == States.READY) pipeline.play();
        }
    }
}
Andrew --
This is so exciting! We are living in the future.
Here's our confirmation that we are getting GPU video decoding with the new Pipeline class:
We'd also be interested if that VDPAU code could be revived for AIR on Linux, unless you are already pursuing another path there.
@ajwfrost
The new Pipeline looks very promising. I hope we can get documentation for it and run some experiments. On which platforms is it available right now? Only Windows, or all desktops?
Regarding hardware/software decoding, we also have the same issue right now with AIR 33+.
Possibly related issue: https://github.com/airsdk/Adobe-Runtime-Support/issues/155
NetStream::useHardwareDecoder seems to have been designed to control that, but it's not working now..
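For context, a minimal sketch of how that flag is used with the classic NetStream API (the file name is a placeholder, and per this thread the flag currently appears to have no effect on Harman builds):

```actionscript
// Sketch: requesting hardware decode via NetStream.useHardwareDecoder.
import flash.media.Video;
import flash.net.NetConnection;
import flash.net.NetStream;

var nc : NetConnection = new NetConnection();
nc.connect(null); // null for local/progressive playback

var ns : NetStream = new NetStream(nc);
ns.useHardwareDecoder = true; // request hardware-accelerated decoding (default is true)
ns.client = { onMetaData: function(info : Object) : void {
    trace("Duration: " + info.duration);
}};

var video : Video = new Video();
video.attachNetStream(ns);
addChild(video);
ns.play("video.mp4"); // placeholder path
```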
@itlancer it's still a bit early for a roll-out; the only thing that's hooked up so far is on Windows, audio outputs are working (but not to any internal audio mixing component, just to the available sound output devices) and video output is only available to a NativeWindow, we're looking to add support for the other options (Video, StageVideo, VideoTexture). Focusing first on the standard cases where someone has an external file/URL, or an embedded bytearray of an audio clip etc, and just wants to play it in a simple way...
Great to hear this is progressing. Thanks
I have my doubts about new classes. Is this really something that a developer needs to concern themselves with?
Also, does this work with RTMP connections?
It was partly because we needed a fresh start; the NetStream implementation is so complex (C++ code that started in pre-AS3 days and has had so many different mechanisms come and go) and the idea of a more pipeline-based API set is fairly common. But trying also to make the common use cases as straightforward to code as possible.
No reason not to support RTMP; I know they're also looking to adapt/improve the spec - see https://github.com/veovera/enhanced-rtmp. The goal we have here though is to allow other folk to also create components/filters that can be plugged in to the pipeline, such that the RTMP protocol could be handled by a separate/open-source component controlled by the AS3 application and displaying the video content within a Stage3D texture or whatever..
It's such a relief to see that improvements and fixes to video playback haven't been dropped. From my perspective, video performance and stability issues are the one thing I'm worried about in terms of future-proofing my (Windows) apps - too large for porting to a new platform to be feasible. A VideoTexture implementation of this new Pipeline, which simply plays from local files, is all I would need, personally speaking. Is this still on the agenda? And would it make HEVC decoding possible too?
Well, I'm getting flawless, smooth playback on a 4K video with Pipeline. The same video freezes only a second after playback starts using the current system and VideoTexture.
@ajwfrost
When is the new pipeline planned to be released? We need smooth 4K video playback, at least for Windows, with VideoTexture or Video.
This is such a big issue for me, I would purchase an additional Enterprise Tier membership if it would help push the issue up the priority list?
It's moving forwards, albeit slowly. A fair bit more complicated than we had first thought, to get this working consistently across different platforms/operating systems with a fairly low-level API...
So we had been thinking to also provide some higher-level utility mechanisms, to simplify the usage but also to provide an interim way of doing things (which allows us to use some simpler OS-specific mechanisms for these short-cut cases, whilst building up the ability to also provide the same things using the 'pipeline' behind the scenes).
Taking an example: audio playback on Android is something we have working via the pipeline classes but it's not quite fitting into our current pipeline APIs because of how the Android native audio mechanisms work. So we will need to be updating the APIs again (which is a reason we've not officially published them!). But there is no reason why we couldn't provide a simpler API for playing back of sound files, or of video files, which we would hope covers 80%+ of use cases. And with that approach I would hope we can then get something more usable and complete released a bit sooner...
@ajwfrost
When will the new pipeline be released for Windows? Right now there are a lot of performance issues with any type of video playback (VideoTexture, Video, StageVideo):
https://github.com/airsdk/Adobe-Runtime-Support/issues/2162
https://github.com/airsdk/Adobe-Runtime-Support/issues/1963
https://github.com/airsdk/Adobe-Runtime-Support/issues/646
https://github.com/airsdk/Adobe-Runtime-Support/issues/2163
Problem Description
Since moving from the Adobe version 31.0 SDK to Harman 50.1.1, we've seen severe degradation in Windows video performance. This first surfaced on a Windows 10 device that was no longer able to smoothly play 4K video.
Upon further investigation, it became evident that there was a big resource usage difference between the two runtimes:
Adobe 31.1: CPU utilization 2-4%, GPU utilization 46%
Harman 50.1: CPU utilization 70-85%, GPU utilization 65%
The primary test system was a Gen 11 Intel NUC with an i3-1115G4 @ 3.00GHz processor, integrated graphics, and 8GB RAM (dual 4GB sticks) -- however, I saw the same issues on a Ryzen embedded V1605B system with 8GB of RAM.
The primary class tested was a simple VideoTexture object; however, I saw the same performance issues when using StageVideo.
Steps to Reproduce
I've created the following basic test, which requires no additional frameworks, only a pre-positioned video file. I've added a simple logger class which is not displayed here, but it can be ignored.
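The original test code was attached as an image rather than included in this text; a hypothetical minimal VideoTexture test along those lines (not the reporter's actual code - the file name is a placeholder and the logger is omitted) might look like:

```actionscript
// Sketch: decode an H.264 file into a Stage3D VideoTexture and log the
// render state ("accelerated" indicates GPU decode, "software" a fallback).
package {
    import flash.display.Sprite;
    import flash.display3D.Context3D;
    import flash.display3D.textures.VideoTexture;
    import flash.events.Event;
    import flash.events.VideoTextureEvent;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    public class VideoTextureTest extends Sprite {
        private var ns : NetStream;
        private var texture : VideoTexture;

        public function VideoTextureTest() {
            stage.stage3Ds[0].addEventListener(Event.CONTEXT3D_CREATE, onContext);
            stage.stage3Ds[0].requestContext3D();
        }

        private function onContext(e : Event) : void {
            if (!Context3D.supportsVideoTexture) {
                trace("VideoTexture not supported on this device");
                return;
            }
            var context : Context3D = stage.stage3Ds[0].context3D;
            texture = context.createVideoTexture();
            texture.addEventListener(VideoTextureEvent.RENDER_STATE,
                function(ev : VideoTextureEvent) : void {
                    trace("Render state: " + ev.status);
                });
            texture.addEventListener(Event.TEXTURE_READY,
                function(ev : Event) : void {
                    // a decoded frame is available; a real test would draw it here
                });

            var nc : NetConnection = new NetConnection();
            nc.connect(null);
            ns = new NetStream(nc);
            ns.client = { onMetaData: function(o : Object) : void {} };
            texture.attachNetStream(ns);
            ns.play("bbb4k.mp4"); // placeholder path to a local 4K test file
        }
    }
}
```

Watching CPU/GPU utilization (or DXVA Checker) while this runs is what distinguishes hardware from software decode, regardless of what the render-state event reports.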
Known Workarounds
Unfortunately, our workaround has been to downgrade the runtime to 31.0; however, that doesn't seem like a sustainable approach, as that SDK is no longer available on the Internet. (The last Adobe runtime, 32.0, had a bug where it would not play desktop video (hosted on the local filesystem) on Windows.)