This is not viable because it's too slow on anything but NVIDIA.
Original comment by h.lepp...@gmail.com
on 17 Feb 2012 at 8:10
I'm not sure that's true. madVR does DXVA2 deinterlacing on ATI cards in
real time. Obviously it's not also decoding the video on the GPU, but does
that really make such a big difference, or is there some other reason it
would be slower in LAV Filters than in madVR?
Original comment by mikegros...@gmail.com
on 17 Feb 2012 at 3:06
madVR does not need to copy the image back from the GPU to system memory,
which is the slow part.
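For context, the copy-back being described looks roughly like the following
on the Direct3D 9 side. This is an illustrative sketch, not LAV's actual
code; the helper name and parameters are made up:

```cpp
// Illustrative sketch of the GPU-to-system-memory copy-back (not LAV's
// actual implementation). Locking a video-memory surface and reading it
// over the bus is the slow part being described here.
#include <windows.h>
#include <d3d9.h>
#include <cstring>

HRESULT CopyBackToSystemMemory(IDirect3DSurface9 *pSurface,
                               BYTE *pDst, int dstPitch, int rows)
{
    D3DLOCKED_RECT lr;
    // The lock stalls the GPU and maps uncached video memory for the CPU.
    HRESULT hr = pSurface->LockRect(&lr, NULL, D3DLOCK_READONLY);
    if (FAILED(hr))
        return hr;
    const BYTE *pSrc = (const BYTE *)lr.pBits;
    for (int y = 0; y < rows; y++) {
        // Every row is read back across the bus. Assumes dstPitch <=
        // lr.Pitch; for NV12, rows covers the luma plane plus the
        // interleaved chroma plane (height * 3 / 2).
        memcpy(pDst + y * dstPitch, pSrc + y * lr.Pitch, dstPitch);
    }
    pSurface->UnlockRect();
    return S_OK;
}
```

Reads from a locked video-memory surface are uncached and travel back over
the bus, which is why this step dominates the cost and can be dramatically
slower on some hardware.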
Original comment by h.lepp...@gmail.com
on 17 Feb 2012 at 3:29
That makes sense. What about the native DXVA2 support that's being worked
on, which as I understand it omits the copy-back?
Original comment by mikegros...@gmail.com
on 17 Feb 2012 at 6:37
Sure, but what's the point of adding deinterlacing there?
Native DXVA only works with EVR-based renderers to begin with, and in that
case the renderer will perform the same hardware-based deinterlacing itself.
Original comment by h.lepp...@gmail.com
on 18 Feb 2012 at 12:35
Ideally the deinterlacing would work when used in AviSynth for encoding.
That is to say, one could use the LAV video decoder in a graph, then use
DirectShowSource(graph) in a script and get hardware-accelerated
deinterlacing (and decoding, I suppose).
I've never seen an AviSynth filter that uses DXVA2 for any video processing,
so maybe it's not possible. I've never written a DirectShow filter either,
much less one that uses DXVA2, so I don't really know how feasible it is.
http://msdn.microsoft.com/en-us/library/windows/desktop/cc307941.aspx just
piqued my interest because of this paragraph:
"In DXVA 1, the software decoder must access the API through the video
renderer. There is no way to use the DXVA 1 API without calling into the video
renderer. This limitation has been removed with DXVA 2. Using DXVA 2, the host
decoder (or any application) can access the API directly, through the
IDirectXVideoDecoderService interface."
Of course that only mentions the VideoDecoderService, not the
VideoProcessorService. So if it won't work without a renderer, then I agree
there's not much point; just let the renderer deinterlace. If it could work
without the renderer, though, then I definitely think it would be worth
implementing.
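It looks like the processor service can be obtained the same way, though:
DXVA2CreateVideoService also accepts IID_IDirectXVideoProcessorService, with
no renderer in the loop. A minimal sketch, assuming an existing
IDirect3DDevice9 (the function name and the choice of the first enumerated
device are illustrative):

```cpp
// A minimal sketch of renderer-less DXVA2 video processing, assuming an
// existing IDirect3DDevice9. The interfaces come from dxva2api.h
// (link against dxva2.lib); error handling is abbreviated.
#include <windows.h>
#include <d3d9.h>
#include <dxva2api.h>

HRESULT CreateDeinterlacer(IDirect3DDevice9 *pDevice, UINT width,
                           UINT height, IDirectXVideoProcessor **ppProc)
{
    // Ask for the processor service directly -- no renderer involved.
    IDirectXVideoProcessorService *pService = NULL;
    HRESULT hr = DXVA2CreateVideoService(pDevice,
                                         IID_IDirectXVideoProcessorService,
                                         (void **)&pService);
    if (FAILED(hr))
        return hr;

    // Describe the interlaced NV12 input we want to deinterlace.
    DXVA2_VideoDesc desc = {};
    desc.SampleWidth  = width;
    desc.SampleHeight = height;
    desc.SampleFormat.SampleFormat = DXVA2_SampleFieldInterleavedEvenFirst;
    desc.Format = (D3DFORMAT)MAKEFOURCC('N', 'V', '1', '2');

    // Enumerate the deinterlacing devices the driver exposes for this
    // format.
    UINT count = 0;
    GUID *pGuids = NULL;
    hr = pService->GetVideoProcessorDeviceGuids(&desc, &count, &pGuids);
    if (SUCCEEDED(hr) && count > 0) {
        // Take the first advertised device; a real implementation would
        // inspect DXVA2_VideoProcessorCaps for the deinterlace technique.
        hr = pService->CreateVideoProcessor(pGuids[0], &desc,
                                            D3DFMT_X8R8G8B8, 0, ppProc);
    } else if (SUCCEEDED(hr)) {
        hr = E_FAIL;
    }
    CoTaskMemFree(pGuids);
    pService->Release();
    return hr;
}
```

Each deinterlaced frame would then come from
IDirectXVideoProcessor::VideoProcessBlt. For the AviSynth/encoding scenario
the output surface still has to be copied back to system memory, so the
copy-back cost discussed above applies either way.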
Original comment by mikegros...@gmail.com
on 18 Feb 2012 at 5:31
Of course it's possible, but as I said, it's just too slow on AMD.
Original comment by h.lepp...@gmail.com
on 18 Feb 2012 at 6:53
Well, real-time playback doesn't really matter for encoding. It just has to
be faster than software deinterlacing, and matching the results of hardware
deinterlacing in software can currently mean performance as bad as a few
fps. So the feature definitely still has some value even if it's slow.
Original comment by mikegros...@gmail.com
on 19 Feb 2012 at 4:03
With AMD it would be 3-5 fps, if even that fast.
For that, it's not worth doing.
Original comment by h.lepp...@gmail.com
on 19 Feb 2012 at 10:26
Original issue reported on code.google.com by mikegros...@gmail.com
on 17 Feb 2012 at 12:27