CoolJosh3k closed this issue 11 months ago
What error message popup does trying to initialize the device with WASAPI show?
Does the device have more than 2 audio channels?
"WASAPI error: Could not open audio stream!"
24bit, 48kHz. 5.1 channel audio.
https://github.com/Dn-Programming-Core-Management/Dn-FamiTracker/pull/143#issuecomment-1148515718
Issue: "WASAPI error: Could not open audio stream!" on 2012 MacBook Pro running Boot Camp Windows 7.
is this a fucking joke, it requires 4-channel audio output
I suppose at this point it might be a good idea to implement channel upmixing (filling the remaining channels with zero), so we can send audio output to devices which require more than 2 channels of audio. Not sure when this will be implemented.
So looking at ChatGPT-hallucinated code:
```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <iostream>

// Print the ID and mix-format channel count of a single render endpoint.
static void PrintDeviceChannels(IMMDevice* pDevice) {
    // Get the device ID string
    LPWSTR pwszID = NULL;
    HRESULT hr = pDevice->GetId(&pwszID);
    if (FAILED(hr)) {
        std::cerr << "Failed to get device ID string. HRESULT=" << hr << std::endl;
        return;
    }

    // Open the device
    IAudioClient* pAudioClient = NULL;
    hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pAudioClient);
    if (FAILED(hr)) {
        std::cerr << "Failed to activate audio client. HRESULT=" << hr << std::endl;
        CoTaskMemFree(pwszID);
        return;
    }

    // Get the shared-mode mix format; its nChannels field is the device's
    // actual output channel count.
    WAVEFORMATEX* pwfx = NULL;
    hr = pAudioClient->GetMixFormat(&pwfx);
    if (FAILED(hr)) {
        std::cerr << "Failed to get audio format. HRESULT=" << hr << std::endl;
    } else {
        std::wcout << L"  Device ID: " << pwszID << std::endl;
        std::cout << "  Channels: " << pwfx->nChannels << std::endl;
        CoTaskMemFree(pwfx);
    }

    // Release resources
    pAudioClient->Release();
    CoTaskMemFree(pwszID);
}

int main() {
    // Initialize COM first (missing from the original snippet)
    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
    if (FAILED(hr)) {
        std::cerr << "Failed to initialize COM. HRESULT=" << hr << std::endl;
        return -1;
    }

    // Create a COM object for the device enumerator
    IMMDeviceEnumerator* pEnumerator = NULL;
    hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                          __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);
    if (FAILED(hr)) {
        std::cerr << "Failed to create device enumerator. HRESULT=" << hr << std::endl;
        CoUninitialize();
        return -1;
    }

    // Get the default audio endpoint
    IMMDevice* pDevice = NULL;
    hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eMultimedia, &pDevice);
    if (FAILED(hr)) {
        std::cerr << "Failed to get default audio endpoint. HRESULT=" << hr << std::endl;
        pEnumerator->Release();
        CoUninitialize();
        return -1;
    }
    std::cout << "Default audio output: " << std::endl;
    PrintDeviceChannels(pDevice);
    pDevice->Release();

    // Enumerate all active render endpoints
    IMMDeviceCollection* pCollection = NULL;
    hr = pEnumerator->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE, &pCollection);
    if (FAILED(hr)) {
        std::cerr << "Failed to enumerate audio endpoints. HRESULT=" << hr << std::endl;
        pEnumerator->Release();
        CoUninitialize();
        return -1;
    }

    UINT nCount = 0;
    pCollection->GetCount(&nCount);
    std::cout << "Other available audio outputs: " << std::endl;
    for (UINT i = 0; i < nCount; i++) {
        IMMDevice* pEndpoint = NULL;
        if (FAILED(pCollection->Item(i, &pEndpoint))) {
            std::cerr << "Failed to get audio endpoint." << std::endl;
            continue;
        }
        PrintDeviceChannels(pEndpoint);
        pEndpoint->Release();
    }

    // Release resources
    pCollection->Release();
    pEnumerator->Release();
    CoUninitialize();
    return 0;
}
```
Looks like we get the proper output channel count from `IAudioClient::GetMixFormat()`. Interestingly we already call that in our code:

We have to change the code to use the mix format's channel count after opening 2 output channels (stereo) fails, or replace the hard-coded 2 channels altogether. Most of the remaining code can remain the same, except that `CSoundStream::WriteBuffer` will have to handle `m_outputChannels > 2`.
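The `WriteBuffer` change could be as simple as zero-filling the extra channels while interleaving. A minimal portable sketch of the idea (the `UpmixStereo` helper is hypothetical, not Dn-FT's actual code, and assumes interleaved int16 stereo input):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Upmix interleaved stereo samples to `outChannels` interleaved channels,
// filling every channel beyond the first two with silence (zero).
// Requires outChannels >= 2.
std::vector<int16_t> UpmixStereo(const std::vector<int16_t>& stereo,
                                 std::size_t outChannels) {
    std::size_t frames = stereo.size() / 2;
    std::vector<int16_t> out(frames * outChannels, 0);  // all channels start silent
    for (std::size_t f = 0; f < frames; f++) {
        out[f * outChannels + 0] = stereo[f * 2 + 0];  // front left
        out[f * outChannels + 1] = stereo[f * 2 + 1];  // front right
        // remaining channels (center, LFE, surrounds, ...) stay zero
    }
    return out;
}
```

In real code you would write directly into the backend's buffer rather than allocate a fresh vector per block, but the channel layout logic is the same.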
Though before merging, I should test the resulting code on my 2012 MacBook Pro with 4 speaker channels, to verify that outputting audio to the first 2 channels produces good-sounding audio (and that we don't need, e.g., every single app to include its own speaker crossover and output to the first 2 vs. last 2 channels).
Can you test my channel upmixing build (download .zip, built from https://github.com/nyanpasu64/Dn-FamiTracker/actions/runs/4582531440) and see if it works on your audio output?
Unfortunately I can't test on my MBP because it's currently loaded to Mac OS 10.15 instead of Windows 10 Boot Camp, and I don't want to take out the hard drive (not SSD! god hard drives are slow) to reimage the OS with my Windows 10 backup.
Personally not wanting to run this myself, especially without any easy way to quickly build from source files. I do really hope someone else can though, or that you end up with a situation in which you can personally test.
The main thing here is just having that DirectSound fallback. Maybe a new thread for what you're trying to fix?
The build linked above is an automated build by Github Actions. If you want to compile the code yourself, see this document.
We have deprecated the DirectSound backend due to its instability and latency. If you insist on using DirectSound, use Dn-FT versions 0.4.0.1 and lower.
It just kinda sucks to miss out on post-0.4.0.1 just because of this.
Isn't the whole point of 0CC and Dn to add new features without ever breaking compatibility or removing things?
Dn-FamiTracker is very nice to use and I'd hate to be stuck on 0.4.0.1. Plus one day I might forget the whole reason I needed 0.4.0.1 and exactly which version broke things. Knowing me, years later I'd end up installing 0CC on a new system because I will have forgotten why Dn won't work.
there is no legacy support for DirectSound
too bad
The WASAPI backend does not currently support devices with more than 2 channels of audio.
Can you test the build I linked? It's supposed to fix >2-channel audio devices, but I need you to verify it works before I merge it.
The source files are at https://github.com/nyanpasu64/Dn-FamiTracker/tree/multi-channel-wasapi, and the direct GitHub Actions build link is at https://github.com/nyanpasu64/Dn-FamiTracker/suites/11961191866/artifacts/627399382.
After many updates, the patch notes about dropping DirectSound won't be so obvious anymore.
I predict that at that point, many people who encounter the issue may feel like Dn-FamiTracker is a broken fork of FamiTracker. They might fall back to the original, or even 0CC, believing it is superior.
Even if the current issues with WASAPI are fixed, there might be issues in the future for which an option to fall back to the legacy DirectSound mode would be really useful to have.
I am not a programmer as such (yet), so I don't understand why it is so necessary to strip out DirectSound entirely, instead of just having some UI toggle to re-enable it.
I am hesitant to run that code because I do not understand it. I wish I did, because then I would. Going to need help from someone else for that: perhaps from a different issue thread?
> After many updates, the patch notes about dropping DirectSound won't be so obvious anymore.

The change from DirectSound to WASAPI has been highlighted under "Important changes" in the version 0.5.0.0 changelog.
If this issue is purely asking to restore DirectSound, I may have to close it. The >2 channel WASAPI fix may be opened as a separate thread/issue.
Well, the real issue is that this initial implementation of WASAPI does not currently work for all devices. Both of my monitors' speaker outputs are fine, but my dedicated sound card gives the generic error message. Or maybe someone (like myself?) could just change the title of this issue, if possible? I'm not super familiar with GitHub.
Perhaps it is a good idea to have the error message give more specific details on exactly what went wrong, such as whether it was due to having more than 2 channels? Hopefully there is a user-friendly way to give more details about exactly why it could not open an audio stream.
I guess a new thread could be opened that addresses issues with WASAPI specifically, rather than requesting the DirectSound system still be included as an option. The code that was previously supplied above should be included too, in the hopes that someone is available to test it.
On a side note though: why does DirectSound support need to be removed completely?
I maintained this repo for years and wrote the WASAPI backend to begin with; if you don't trust me, then stop using Dn-FT (and while you're at it, stop using Windows 10+ because it's all spyware).
The reason FT uses WASAPI is that, in all testing performed and on all platforms (Windows and especially Wine), it has lower latency and fewer stutters than DirectSound. I decided against writing my own multi-backend API because it would add work for minimal benefit. And since porting the audio backend, there have been refactors which changed the API between the audio backend and the tracker (https://github.com/Dn-Programming-Core-Management/Dn-FamiTracker/commits/main/Source/SoundInterface.h), and keeping the DirectSound backend and adapting it to these changes would be pointless busywork for minimal benefit.
We couldn't port to an existing multi-backend audio library without introducing an additional thread (increasing complexity and latency), because FamiTracker is written so the tracker pushes data to the audio backend (also contributing to deadlocks, see #154), but audio libraries expect to be in control of the audio thread's main loop and request audio from a callback function in fixed-size chunks. And writing a push-mode multi-backend audio API would further entrench bad API decisions and make it even more difficult (require changing more code) to port to a pull-mode API in the future.
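To illustrate why bridging push-mode to a pull-mode callback API costs an extra buffer (and hence latency): the adapter is essentially a FIFO between the two sides. A rough sketch (hypothetical class, not actual FamiTracker code; a real version would need a lock or lock-free ring buffer since the callback runs on a separate audio thread):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Bridges a push-mode producer (the tracker) to a pull-mode audio callback.
class PushPullAdapter {
public:
    // Tracker side: push whatever amount of audio is ready.
    void Push(const std::vector<int16_t>& samples) {
        fifo_.insert(fifo_.end(), samples.begin(), samples.end());
    }

    // Audio-callback side: the backend demands exactly `n` samples per call;
    // pad with silence on underrun instead of blocking the audio thread.
    std::vector<int16_t> PullChunk(std::size_t n) {
        std::vector<int16_t> chunk(n, 0);
        std::size_t avail = std::min(n, fifo_.size());
        std::copy(fifo_.begin(), fifo_.begin() + avail, chunk.begin());
        fifo_.erase(fifo_.begin(), fifo_.begin() + avail);
        return chunk;
    }

private:
    std::deque<int16_t> fifo_;
};
```

The FIFO must stay partially full to avoid underruns, and that standing buffer is exactly the added latency the comment above describes.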
> Perhaps it is a good idea to have the error message give more specific details on exactly what went wrong, such as whether it was due to having more than 2 channels? Hopefully there is a user-friendly way to give more details about exactly why it could not open an audio stream.
That idea would be worth implementing. WASAPI mostly just gives a generic error message when failing to open an audio format. If you wanted to implement that, it's possible to compare `pClosestMatch` with FamiTracker's internal format to figure out what the audio backend dislikes, but WASAPI backends like to lie about why the format is unsupported (eg. if you try to open a speaker in 1-channel int16 mode, WASAPI will tell you you need to open it in 2-channel float32 mode, even though 2-channel int16 mode works just fine). Additionally, the current `CSoundInterface::OpenFloatChannel` API inherited from DirectSound does not allow returning an error message beyond a single `nullptr` value, and would need to be changed too.
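The diffing step against the closest match could look roughly like this. A portable sketch (`FormatDesc` and `DescribeFormatMismatch` are hypothetical stand-ins for the relevant `WAVEFORMATEX` fields, not FamiTracker's or WASAPI's actual types, and as noted the closest match the backend reports can be misleading):

```cpp
#include <string>

// Simplified stand-in for the WAVEFORMATEX fields we care about.
struct FormatDesc {
    unsigned channels;
    unsigned sampleRate;
    unsigned bitsPerSample;
};

// Describe how the device's closest supported format differs from what we
// requested, so the error popup can say more than "could not open stream".
std::string DescribeFormatMismatch(const FormatDesc& requested,
                                   const FormatDesc& closest) {
    std::string msg;
    if (requested.channels != closest.channels)
        msg += "device wants " + std::to_string(closest.channels) +
               " channels, not " + std::to_string(requested.channels) + "; ";
    if (requested.sampleRate != closest.sampleRate)
        msg += "device wants " + std::to_string(closest.sampleRate) +
               " Hz, not " + std::to_string(requested.sampleRate) + " Hz; ";
    if (requested.bitsPerSample != closest.bitsPerSample)
        msg += "device wants " + std::to_string(closest.bitsPerSample) +
               "-bit samples, not " + std::to_string(requested.bitsPerSample) +
               "-bit; ";
    if (msg.empty())
        msg = "format mismatch not visible in channels/rate/bits";
    return msg;
}
```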
So far I've approached this error by trying to find out what triggers it, then adding support to prevent the message from showing up.
Hi, I've just found this app, which looks cool. I downloaded the latest release (v501_x64_Release) and got this error:
I checked this thread and tried your updated changes, and it seems the problem still persists.
There's no problem if I use my laptop's crappy speakers as the sound device,
but if I change it to my speakers, the problem comes back.
Are those audio devices stereo or multichannel such as 5.1?
@Bear39 can you test the build at https://github.com/Dn-Programming-Core-Management/Dn-FamiTracker/files/11129078/Dn-FamiTracker_Dn0.2.1-507-g76bc3b9_4582531440_x64_Release.zip and see if it properly outputs audio to the front speakers? If so I can rebase and merge this branch, and it will be fixed in the next release.
@CoolJosh3k for the Yeti X speaker, does that mean it's multichannel 7.1?
@nyanpasu64 I tried that build but I still get the WASAPI error. I also tried my other device, which is multichannel 5.1; it gives the same error and still can't play the demo file.
Hopefully you are in a scenario where you can help nyanpasu64 with testing, but in the meantime you can change to a stereo configuration, use an alternative device that is stereo, or revert to the older 0.4 version from before support for the DirectSound backend was dropped.
Figured out how to create a virtual multi-channel audio device:
And I found that it would crash when trying to open with more than 2 channels, even with `format.nChannels` increased. Why?

https://learn.microsoft.com/en-us/windows/win32/api/mmreg/ns-mmreg-waveformatex

> WAVEFORMATEX can unambiguously describe mono or stereo PCM formats for which the number of valid bits per sample is the same as the sample container size. To describe a PCM format with more than two channels requires WAVEFORMATEXTENSIBLE, which has a channel mask to specify the speaker configuration (that is, the mapping of channels to physical speaker positions).
fuck
Ah yes, if you can reproduce the issue by just emulating a multichannel audio device, that'd help a lot. :)
yep, it does work
TY @nyanpasu64
Before creating a PR (or merging?), I might still have to figure out how to handle mono audio outputs. If the input's format type is not `WAVE_FORMAT_EXTENSIBLE`, I don't know what `dwChannelMask` is correct. Or should I make the output `WAVE_FORMAT_IEEE_FLOAT` rather than `WAVE_FORMAT_EXTENSIBLE`, to avoid worrying about synthesizing a channel mask?
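One way to synthesize a `dwChannelMask` when the source format isn't `WAVE_FORMAT_EXTENSIBLE` is to pick the conventional `KSAUDIO_SPEAKER_*` layout for the channel count. A sketch of that mapping (the `SPEAKER_*` bit values are copied from `ksmedia.h` so it compiles standalone; on Windows you'd include the real header instead of redefining them):

```cpp
#include <cstdint>

// Speaker-position bits as defined in ksmedia.h (redefined here so the
// sketch builds without Windows headers).
constexpr uint32_t SPEAKER_FRONT_LEFT    = 0x1;
constexpr uint32_t SPEAKER_FRONT_RIGHT   = 0x2;
constexpr uint32_t SPEAKER_FRONT_CENTER  = 0x4;
constexpr uint32_t SPEAKER_LOW_FREQUENCY = 0x8;
constexpr uint32_t SPEAKER_BACK_LEFT     = 0x10;
constexpr uint32_t SPEAKER_BACK_RIGHT    = 0x20;
constexpr uint32_t SPEAKER_SIDE_LEFT     = 0x200;
constexpr uint32_t SPEAKER_SIDE_RIGHT    = 0x400;

// Pick a conventional dwChannelMask for a channel count, matching the
// KSAUDIO_SPEAKER_* layouts (mono, stereo, quad, 5.1, 7.1 surround).
// Returns 0 for counts with no conventional layout.
uint32_t DefaultChannelMask(unsigned channels) {
    switch (channels) {
        case 1: return SPEAKER_FRONT_CENTER;                      // KSAUDIO_SPEAKER_MONO
        case 2: return SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT;  // KSAUDIO_SPEAKER_STEREO
        case 4: return SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT |
                       SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT;    // KSAUDIO_SPEAKER_QUAD
        case 6: return SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT |
                       SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY |
                       SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT;    // KSAUDIO_SPEAKER_5POINT1
        case 8: return SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT |
                       SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY |
                       SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT |
                       SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT;    // KSAUDIO_SPEAKER_7POINT1_SURROUND
        default: return 0;
    }
}
```

Whether a given device actually accepts the conventional mask still has to be verified with `IsFormatSupported`, per the lying-backends caveat above.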
Fixed as of #226.
I noticed that the new 0.5 release with "DirectSound backend now replaced with WASAPI" no longer works for my Creative Sound Blaster X3.
I expect there will be other devices that also produce an error when trying to use WASAPI, that used to work just fine with DirectSound.
Even a simple toggle option to use DirectSound instead would really help.
Also, thanks a bunch for keeping FamiTracker alive! <3