emilyst closed this issue 1 year ago.
Reasonably sure this is a dupe of #7856, but this has more information. We're aware of this issue, but it's further complicated by the Intel GPU setting itself as both a high-performance and power-saving GPU in some cases, which causes our adapter selection logic to go wonky. There's definitely some room for improvement on our init checks, however.
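If anyone wants to see the wrinkle on their own machine, here is a rough standalone probe (not OBS code, just a sketch against the public DXGI API) that asks for the first adapter under each GPU preference and checks whether both answers resolve to the same LUID:

```cpp
// Standalone probe (not OBS code): ask DXGI for the first adapter under each
// GPU preference and report whether both answers are the same physical GPU.
// Build with: cl /EHsc gpu_pref_probe.cpp dxgi.lib
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cwchar>

using Microsoft::WRL::ComPtr;

static LUID first_adapter(IDXGIFactory6 *factory, DXGI_GPU_PREFERENCE pref)
{
	ComPtr<IDXGIAdapter1> adapter;
	DXGI_ADAPTER_DESC1 desc = {};
	if (SUCCEEDED(factory->EnumAdapterByGpuPreference(
			    0, pref, IID_PPV_ARGS(&adapter))))
		adapter->GetDesc1(&desc);
	wprintf(L"preference %d -> %ls\n", (int)pref, desc.Description);
	return desc.AdapterLuid;
}

int main()
{
	ComPtr<IDXGIFactory6> factory;
	if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
		return 1;

	LUID hp = first_adapter(factory.Get(), DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE);
	LUID mp = first_adapter(factory.Get(), DXGI_GPU_PREFERENCE_MINIMUM_POWER);

	// If both preferences resolve to the same LUID, selection logic keyed off
	// "high performance vs. power saving" cannot tell the GPUs apart.
	bool same = hp.HighPart == mp.HighPart && hp.LowPart == mp.LowPart;
	wprintf(L"same adapter for both preferences: %ls\n", same ? L"yes" : L"no");
	return 0;
}
```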
Some thoughts, while I’m here.
I was re-reading this just now and looked more closely at https://github.com/obsproject/obs-studio/commit/c83eaaa51c260c3844baaf1cb76de63e0f096cea. This is probably the thing that needs adjustment for my case, but I do not understand why choosing the iGPU there is preferable in any case. This is all a bit outside of my typical wheelhouse.
That commit also mentions, "The user can still choose the dGPU if they change the adapter index, but the adapter index will now be the second value instead of the first value." Presumably this refers to developers who are using libobs-d3d11 rather than end users who are using OBS Studio.
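(To check my own understanding, here is roughly what that looks like for an application embedding libobs, sketched from my reading of obs.h. The resolution and format values are placeholders of my own, and I have not verified this against OBS Studio itself.)

```cpp
// Sketch only: how a host application embedding libobs could ask for a
// specific adapter index when it (re)initializes video. Based on my reading
// of obs.h; it assumes obs_startup() has already succeeded.
#include <obs.h>

static bool reset_video_on_adapter(uint32_t adapter_index)
{
	struct obs_video_info ovi = {};
	ovi.graphics_module = "libobs-d3d11"; // D3D11 backend on Windows; the
	                                      // exact module name may need a
	                                      // platform suffix such as .dll
	ovi.fps_num = 60;
	ovi.fps_den = 1;
	ovi.base_width = 1920;    // placeholder canvas size
	ovi.base_height = 1080;
	ovi.output_width = 1920;  // placeholder output size
	ovi.output_height = 1080;
	ovi.output_format = VIDEO_FORMAT_NV12;
	ovi.adapter = adapter_index; // 0 = first adapter in enumeration order
	ovi.gpu_conversion = true;
	ovi.colorspace = VIDEO_CS_709;
	ovi.range = VIDEO_RANGE_PARTIAL;
	ovi.scale_type = OBS_SCALE_BICUBIC;

	return obs_reset_video(&ovi) == OBS_VIDEO_SUCCESS;
}
```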
The most basic remedy in my case would be to have some way for me to manually choose the GPU to use (i.e., to allow me to change the adapter index) as an end user, but I know that capability was removed from OBS in the past for a reason.
I am happy to retrieve any additional information you want. I’m a software engineer in my day job, so I am willing to go as far as patching and building OBS myself, if need be.
Oh, and one more thing. In my case, I am attempting to use a capture card, not display capture. That’s in the logs, but I thought I’d call it out explicitly. (I’m reading up on https://github.com/obsproject/obs-studio/pull/3686 for context.)
As a note, we're hoping to have at least some solution for this before 29 is fully released (not a guarantee; we're trying), but hardware availability has proven to be a bottleneck for testing.
> I was re-reading this just now and looked more closely at c83eaaa. This is probably the thing that needs adjustment for my case, but I do not understand why choosing the iGPU there is preferable in any case. This is all a bit outside of my typical wheelhouse.
We're aware. If you check the git author of that commit, you'll note it is from Intel.
> As a note, we're hoping to have at least some solution for this before 29 is fully released (not a guarantee; we're trying), but hardware availability has proven to be a bottleneck for testing.
Is this something I can help with? I’m happy to run tests or generate logs on my hardware. If you’re local to me, I could even lend you mine. (I am not sure if I am ready to donate $1,500 worth of hardware to this project, but otherwise, I’ll do what I can.)
> > I was re-reading this just now and looked more closely at c83eaaa. This is probably the thing that needs adjustment for my case, but I do not understand why choosing the iGPU there is preferable in any case. This is all a bit outside of my typical wheelhouse.
>
> We're aware. If you check the git author of that commit, you'll note it is from Intel.
Who is “we”, and what are you aware of?
(I did look at the author of the commit, but I see nothing in the author’s bio on this site, nor in the commit, nor in the pull request, that links them to Intel. I’ll take your word on it.)
In addition, @Fenrirthviti raised questions in the pull request about the utility of those changes, and those questions were never addressed, so I think it’s reasonable to wonder what happened.
"We" is the OBS maintainers.
Pretty much all the original QSV code was submitted by Intel, as a loose collaboration. We were in contact with them at the time; the team has since been shuffled around, and I don't believe the original developer is still working in the same department, or even at Intel, anymore.
There's a lot of legacy stuff that was "just working" so we never bothered to clean it up, which is how we ended up in the situation we were in. The tl;dr of the situation is that the Intel dGPUs at the time were so... unimpressive, we'll say, that we never imagined a situation where someone would want OBS to select one to do anything. That has very clearly changed, so we have some sins of the past to clean up.
I’m very grateful for this clarifying reply, @Fenrirthviti.
I’m just speculating, but I see a few ways forward, and they all seem equally unpleasant to me. Again, if I can be of any help, I’m happy to.
I’m very grateful for the work you did in #7987. Would you like any feedback from testing with my system?
Operating System Info
Windows 11
Other OS
No response
OBS Studio Version
29.0.0-beta2
OBS Studio Version (Other)
No response
OBS Studio Log URL
https://obsproject.com/logs/Z25AUmSJZYlbjgSk
OBS Studio Crash Log URL
No response
Expected Behavior
When choosing QSV AVC or HEVC to encode a video, I expect OBS to use the discrete GPU (Intel Arc A770) and to record/stream without issues.
Current Behavior
When choosing QSV AVC or HEVC to encode a video, OBS chooses to encode using the integrated GPU, which is less powerful. During recording, it drops the majority of frames and creates an unusable recording.
Steps to Reproduce
Anything else we should know?
I'm running OBS Studio on the NUC 12 Enthusiast Kit, which has both an integrated Intel Iris Xe GPU and a discrete Intel Arc A770 GPU. (This is possibly an unusual case.)
I have tried several remedies, including explicitly configuring Windows to use the discrete GPU in its graphics settings, as suggested in the Laptop Troubleshooting guide. None of it helped.
This issue seems to occur on both stable (version 28) and beta (version 29), but since the stable version does not supply the same encoders, I cannot be certain it is the same problem in both.
There is one successful workaround available, however. In the Device Manager, if I disable the Intel Iris Xe GPU entirely, OBS uses the discrete GPU without any issues.
In attempting to diagnose the problem, I found a set of lines in the encoder source code which seem to suggest that the choice to use the integrated GPU is intentionally hardcoded.
It appears that change was made in this commit. Earlier the same day, in the same pull request, this commit was made to hack the adapter order. It's possible that these two commits do not play well together, but that's a total guess on my part.
Whatever the design or intention, I believe that the code on line 93, as written, will always prefer the integrated Intel GPU because it has been re-ordered to come first, and because it also supports hardware AV1.
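To make my reading concrete, this is the shape of the selection logic I think we end up with. It is purely an illustration of my interpretation, not the actual obs-qsv11 source, and the struct and function names below are mine:

```cpp
// Illustration of my interpretation only, not the actual obs-qsv11 source.
// If the enumeration has already been reordered so the iGPU comes first, and
// the iGPU also reports AV1 support, this loop can never reach the Arc dGPU.
#include <cstddef>

struct adapter_caps {
	bool is_intel;
	bool supports_av1;
};

static int choose_qsv_adapter(const adapter_caps *adapters, size_t count)
{
	for (size_t i = 0; i < count; i++) {
		// First Intel adapter with AV1 wins; on my NUC 12 Enthusiast,
		// with the iGPU enumerated first, that is always the Iris Xe.
		if (adapters[i].is_intel && adapters[i].supports_av1)
			return (int)i;
	}
	return -1; // no suitable adapter found
}
```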