David-Wells99 opened this issue 5 years ago (status: Open)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. If you wish to continue this issue, then please create a new issue as we do not monitor closed issues.
Any news? Thanks.
I am a user of the Whirligig VR video player - the best VR player on Windows, in my opinion. Its developer said he uses this plugin to drive video playback. It works perfectly well in the usual single-GPU case, using the Media Foundation path.
If the user has a secondary GPU, it is currently ignored. Would it be technically feasible to adjust this plugin so that it tries all available decoding engines? (Not at the same time; rather, if one fails, it could try the next one.) I don't know whether Media Foundation would accommodate that.
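For what it's worth, here is a minimal sketch of what I imagine the fallback could look like on the native side, assuming the plugin creates its own D3D11 device for Media Foundation (the function below is hypothetical, not the plugin's actual code): enumerate the DXGI adapters and, if a video-capable device cannot be created on one adapter, try the next.

```cpp
// Hypothetical sketch, not the plugin's actual code: walk the DXGI adapters and
// return an IMFDXGIDeviceManager bound to the first adapter on which a
// video-capable D3D11 device can be created.
#include <windows.h>
#include <dxgi1_2.h>
#include <d3d11.h>
#include <mfapi.h>
#include <wrl/client.h>

#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "mfplat.lib")

using Microsoft::WRL::ComPtr;

HRESULT CreateDeviceManagerWithFallback(ComPtr<IMFDXGIDeviceManager>& manager)
{
    ComPtr<IDXGIFactory1> factory;
    HRESULT hr = CreateDXGIFactory1(IID_PPV_ARGS(&factory));
    if (FAILED(hr)) return hr;

    for (UINT i = 0; ; ++i)
    {
        ComPtr<IDXGIAdapter1> adapter;
        if (factory->EnumAdapters1(i, &adapter) == DXGI_ERROR_NOT_FOUND)
            break;  // no more adapters to try

        // Try to create a D3D11 device with video support on this adapter.
        ComPtr<ID3D11Device> device;
        hr = D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr,
                               D3D11_CREATE_DEVICE_VIDEO_SUPPORT,
                               nullptr, 0, D3D11_SDK_VERSION,
                               &device, nullptr, nullptr);
        if (FAILED(hr))
            continue;  // this adapter cannot decode; fall back to the next one

        // (Real code would also enable multithread protection on the device
        // before sharing it with Media Foundation.)
        UINT resetToken = 0;
        ComPtr<IMFDXGIDeviceManager> dxgiManager;
        hr = MFCreateDXGIDeviceManager(&resetToken, &dxgiManager);
        if (FAILED(hr)) continue;

        hr = dxgiManager->ResetDevice(device.Get(), resetToken);
        if (FAILED(hr)) continue;

        manager = dxgiManager;
        return S_OK;
    }
    return E_FAIL;  // no usable decoding adapter found
}
```

The resulting device manager could then, in principle, be handed to the source reader via the MF_SOURCE_READER_D3D_MANAGER attribute, so decoding would happen on whichever adapter was actually usable. Again, this is just a guess at how it might be approached; I don't know how the plugin sets up its devices internally.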
In some tests using DirectShow with LAV Filters, decoding on a secondary GPU seems to work, but performance was very poor for high-resolution videos (above roughly 5500 x 5500), while using that same GPU directly through Media Foundation performed flawlessly. For 2D videos, DirectShow and LAV Filters work fine, and even FFmpeg can encode and decode on the secondary GPU, leaving the main one free for other tasks.
The main benefit of implementing this change would be convenience: the user would not need to manually move the HDMI cables of the VR headset and monitor to the other GPU every time he wanted to watch something (and move them back once done). The target audience would mostly be users of AMD cards that are powerful for 3D rendering but have weak decoding engines, who pair them with a cheap NVIDIA card for decoding. That is probably not a very large group, so it would be totally understandable if this feature were passed over for more pressing ones.
I have also read talks about viewport tech, in which a video is broken down into multiple smaller videos and the player selects which of them to decode and display depending on the direction the user is facing. So maybe in the future, using multiple decoding engines in parallel could prove to be a very interesting option too.