sharpdx / SharpDX

SharpDX GitHub Repository
http://sharpdx.org
MIT License

XAudio2.Voice.FilterParameters Access Violation Issue #300

Closed. as00374 closed this issue 10 years ago.

as00374 commented 10 years ago

I am having some issues with XAudio2. I am attempting to implement single-voice, multiple-output playback using SourceVoice.SetOutputVoices by creating an array of VoiceSendDescriptors, but I get access violations when accessing the filter parameters of every voice, of any type, which makes the whole process fail. Can anyone tell me why this happens and how I can fix it? (Code below.) Any help is appreciated. Thanks:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.IO;
using System.Threading.Tasks;
using SharpDX;
using SharpDX.XAudio2;
using SharpDX.Multimedia;

namespace ConsoleApplication4
{
    class Program : ThreadStaticAttribute
    {
        public const float pan = -1;
        /// <summary>
        /// SharpDX XAudio2 sample. Plays wav/xwma/adpcm files from the disk.
        /// </summary>
        static void Main(string[] args)
        {
            XAudio2 search = new XAudio2();
            XAudio2[] xaudio2 = new XAudio2[3];
            MasteringVoice[] masteringVoice = new MasteringVoice[3];
            int outputsFound = 0;

            for (int i = 0; i < search.DeviceCount; i++)
            {
                if (search.GetDeviceDetails(i).DisplayName.Contains("1-2"))
                {
                    xaudio2[0] = new XAudio2();
                    masteringVoice[0] = new MasteringVoice(xaudio2[0], 2, 48000, i);
                    outputsFound++;
                }
                else if (search.GetDeviceDetails(i).DisplayName.Contains("3-4"))
                {
                    xaudio2[1] = new XAudio2();
                    masteringVoice[1] = new MasteringVoice(xaudio2[1], 2, 48000, i);
                    outputsFound++;
                }
                else if (search.GetDeviceDetails(i).DisplayName.Contains("5-6"))
                {
                    xaudio2[2] = new XAudio2();
                    masteringVoice[2] = new MasteringVoice(xaudio2[2], 2, 48000, i);
                    outputsFound++;
                }
            }
            if (outputsFound == 3)
            {
                // No pan
                PlaySoundFile(xaudio2, masteringVoice, "1) Playing a standard WAV file- no pan", "ergon.wav", 0);

                // Pan
                PlaySoundFile(xaudio2, masteringVoice, "2) Playing a XWMA file- Device 1 panned left, Device 2 panned right, Device 3 no panning", "ergon.xwma", 1);

                // Reverse pan
                PlaySoundFile(xaudio2, masteringVoice, "3) Playing an ADPCM file- Device 1 panned right, Device 2 panned left, Device 3 no panning", "ergon.adpcm.wav", 2);

                for (int i = 0; i < xaudio2.Count(); i++)
                {
                    masteringVoice[i].Dispose();
                    xaudio2[i].Dispose();
                }
            }
            else
            {
                Console.WriteLine("Devices missing. Check UA 1-2, UA 3-4 and UA 5-6");
                throw new Exception("Devices missing. Check UA 1-2, UA 3-4 and UA 5-6");
            }
        }

        /// <summary>
        /// Play a sound file. Supported format are Wav(pcm+adpcm) and XWMA
        /// </summary>
        /// <param name="device">The device.</param>
        /// <param name="text">Text to display</param>
        /// <param name="fileName">Name of the file.</param>
        [STAThread]
        static void PlaySoundFile(XAudio2[] device, MasteringVoice[] mVoices, string text, string fileName, int panning)
        {
            Console.WriteLine("{0} => {1} (Press esc to skip)", text, fileName);
            var stream = new SoundStream(File.OpenRead(fileName));
            var waveFormat = stream.Format;
            var buffer = new AudioBuffer
            {
                Stream = stream.ToDataStream(),
                AudioBytes = (int)stream.Length,
                Flags = BufferFlags.EndOfStream
            };
            stream.Close();
            if (device.Count() >= 3)
            {
                var sourceVoice = new SourceVoice(device[0], waveFormat, true);
                VoiceSendDescriptor[] sendDescriptors = new VoiceSendDescriptor[3];
                for (int i = 0; i < sendDescriptors.Count(); i++)
                {
                    sendDescriptors[i] = new VoiceSendDescriptor(mVoices[i]);
                }
                sourceVoice.SetOutputVoices(sendDescriptors);
                // Build a per-device output matrix when panning is requested
                if (panning != 0)
                {
                    DeviceDetails deviceDetails = device[0].GetDeviceDetails(0);
                    VoiceDetails voiceDetails = sourceVoice.VoiceDetails;
                    float[][] outputMatrix = new float[device.Count()][];
                    outputMatrix[0] = new float[deviceDetails.OutputFormat.Channels * voiceDetails.InputChannelCount];
                    outputMatrix[1] = new float[deviceDetails.OutputFormat.Channels * voiceDetails.InputChannelCount];
                    outputMatrix[2] = new float[deviceDetails.OutputFormat.Channels * voiceDetails.InputChannelCount];

                    for (int i = 0; i < device.Count(); i++)
                    {
                        for (int j = 0; j < deviceDetails.OutputFormat.Channels * voiceDetails.InputChannelCount; j++)
                            outputMatrix[i][j] = 0;
                    }
                    if (panning == 1)
                    {
                        outputMatrix[0][0] = 0.5f - pan / 2;
                        outputMatrix[0][1] = 0.5f - pan / 2;
                        outputMatrix[0][2] = 0.5f + pan / 2;
                        outputMatrix[0][3] = 0.5f + pan / 2;

                        outputMatrix[1][0] = 0.5f + pan / 2;
                        outputMatrix[1][1] = 0.5f + pan / 2;
                        outputMatrix[1][2] = 0.5f - pan / 2;
                        outputMatrix[1][3] = 0.5f - pan / 2;

                        // Device 3: straight-through routing (no panning)
                        outputMatrix[2][0] = 1;
                        outputMatrix[2][1] = 0;
                        outputMatrix[2][2] = 0;
                        outputMatrix[2][3] = 1;
                    }
                    else
                    {
                        outputMatrix[1][0] = 0.5f - pan / 2;
                        outputMatrix[1][1] = 0.5f - pan / 2;
                        outputMatrix[1][2] = 0.5f + pan / 2;
                        outputMatrix[1][3] = 0.5f + pan / 2;

                        outputMatrix[0][0] = 0.5f + pan / 2;
                        outputMatrix[0][1] = 0.5f + pan / 2;
                        outputMatrix[0][2] = 0.5f - pan / 2;
                        outputMatrix[0][3] = 0.5f - pan / 2;

                        // Device 3: straight-through routing (no panning)
                        outputMatrix[2][0] = 1;
                        outputMatrix[2][1] = 0;
                        outputMatrix[2][2] = 0;
                        outputMatrix[2][3] = 1;
                    }
                    sourceVoice.SetOutputMatrix(sendDescriptors[0].OutputVoice, 2, 2, outputMatrix[0]);
                    sourceVoice.SetOutputMatrix(sendDescriptors[1].OutputVoice, 2, 2, outputMatrix[1]);
                    sourceVoice.SetOutputMatrix(sendDescriptors[2].OutputVoice, 2, 2, outputMatrix[2]);

                }
                sourceVoice.BufferEnd += (context) => Console.WriteLine(" => event received: end of buffer");
                sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
                sourceVoice.Start();

                int count = 0;
                while (sourceVoice.State.BuffersQueued > 0 && !IsKeyPressed(ConsoleKey.Escape))
                {
                    if (count == 50)
                    {
                        Console.Write(".");
                        Console.Out.Flush();
                        count = 0;
                    }
                    Thread.Sleep(10);
                    count++;
                }
                Console.WriteLine();

                sourceVoice.DestroyVoice();
                sourceVoice.Dispose();
                buffer.Stream.Dispose();
            }
        }

        /// <summary>
        /// Determines whether the specified key is pressed.
        /// </summary>
        /// <param name="key">The key.</param>
        /// <returns>
        ///   <c>true</c> if the specified key is pressed; otherwise, <c>false</c>.
        /// </returns>
        public static bool IsKeyPressed(ConsoleKey key)
        {
            return Console.KeyAvailable && Console.ReadKey(true).Key == key;
        }
    }
}
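
For context: the XAudio2 documentation states that Get/SetFilterParameters fail unless the voice was created with the XAUDIO2_VOICE_USEFILTER flag (and per-send filters require XAUDIO2_SEND_USEFILTER on the send), which may be relevant to the access violations described above. Below is a minimal sketch of a filter-enabled voice and send, assuming the SharpDX build in use exposes these flags as VoiceFlags.UseFilter and VoiceSendFlags.UseFilter; exact constructor overloads may vary between SharpDX versions.

using SharpDX.Multimedia;
using SharpDX.XAudio2;

static class FilterEnabledVoice
{
    // Hedged sketch: flag names mirror the native XAUDIO2_VOICE_USEFILTER /
    // XAUDIO2_SEND_USEFILTER flags and may differ between SharpDX versions.
    public static SourceVoice Create(XAudio2 xaudio2, WaveFormat waveFormat, MasteringVoice masteringVoice)
    {
        // The voice must opt in to filtering at creation time...
        var sourceVoice = new SourceVoice(xaudio2, waveFormat, VoiceFlags.UseFilter, true);

        // ...and so must each send that should carry a per-send filter.
        sourceVoice.SetOutputVoices(new[] { new VoiceSendDescriptor(VoiceSendFlags.UseFilter, masteringVoice) });

        // Only now is it valid to read or write the filter parameters.
        sourceVoice.SetFilterParameters(new FilterParameters
        {
            Type = FilterType.LowPassFilter,
            Frequency = 1.0f, // XAudio2 normalized radian frequency (0..1)
            OneOverQ = 1.0f
        });

        return sourceVoice;
    }
}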
ArtiomCiumac commented 10 years ago

This is more likely an XAudio2 usage issue, so I am not sure you will get much help here. Try enabling native debugging; if you are on Windows 7, also try enabling XAudio2 debugging (a sketch follows below).

By the way, why did you put STAThread on the called method instead of on Main? That looks quite strange. And why does Program need ThreadStaticAttribute as a base class?
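
A minimal sketch of turning on the XAudio2 debug layer from SharpDX follows. It assumes the SharpDX build in use exposes XAudio2Flags.DebugEngine, ProcessorSpecifier.DefaultProcessor and XAudio2.SetDebugConfiguration; names and overloads may differ between versions, and the DebugEngine flag mainly applies to the older XAudio2 2.7 (Windows 7 era) runtime.

using System;
using SharpDX.XAudio2;

static class XAudio2Debugging
{
    // Hedged sketch: create an engine with the debug layer enabled and log
    // errors and warnings. The mask values mirror the native
    // XAUDIO2_LOG_ERRORS (0x0001) and XAUDIO2_LOG_WARNINGS (0x0002) constants.
    public static XAudio2 CreateDebugEngine()
    {
        var engine = new XAudio2(XAudio2Flags.DebugEngine, ProcessorSpecifier.DefaultProcessor);

        engine.SetDebugConfiguration(new DebugConfiguration
        {
            TraceMask = 0x0001 | 0x0002, // errors + warnings
            BreakMask = 0x0001           // break into the debugger on errors
        }, IntPtr.Zero);

        return engine;
    }
}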

ArtiomCiumac commented 10 years ago

@as00374, have you made any progress on this issue?

as00374 commented 10 years ago

Thanks for looking into this @ArtiomCiumac. I have switched from SharpDX to NAudio, as native ASIO support turned out to be the easiest solution to this issue. FWIW, I was using STAThread because the audio processing is real-time critical, so I don't want multiple threads accessing it. I wasn't running it on the main thread because that thread does Kinect skeletal tracking with real-time object tracking, and I didn't want image processing and latency-sensitive audio processing on the same thread.

Thanks again
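
The NAudio/ASIO route mentioned above can look roughly like the sketch below. It assumes NAudio's AsioOut and AudioFileReader classes; the driver name, file name and channel offset are placeholders.

using System.Threading;
using NAudio.Wave;

static class AsioPlayback
{
    // Hedged sketch: open a named ASIO driver and play a file starting at a
    // given output channel pair (ChannelOffset 0 = outputs 1-2, 2 = 3-4, ...).
    public static void Play(string driverName, string fileName, int channelOffset)
    {
        using (var reader = new AudioFileReader(fileName))
        using (var asioOut = new AsioOut(driverName))
        {
            asioOut.ChannelOffset = channelOffset;
            asioOut.Init(reader);
            asioOut.Play();

            while (asioOut.PlaybackState == PlaybackState.Playing)
                Thread.Sleep(100);
        }
    }
}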

ArtiomCiumac commented 10 years ago

I still don't understand whether there is a problem with SharpDX itself or with how it is used. Try starting from a sample and modifying it to suit your needs. Also, I am not sure you need the STA attribute: it is more relevant to UI frameworks like WinForms or WPF, and DirectX works fine in multithreaded environments.
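
A minimal single-engine version of that sample, stripped of the multi-device routing, might look like the sketch below. It only uses calls already present in the code above (SoundStream, AudioBuffer, SubmitSourceBuffer); the file name is a placeholder.

using System.IO;
using System.Threading;
using SharpDX.Multimedia;
using SharpDX.XAudio2;

static class MinimalPlayback
{
    // Hedged sketch of the stock pattern: one engine, one mastering voice,
    // one source voice, blocking until the submitted buffer finishes.
    public static void Play(string fileName)
    {
        using (var xaudio2 = new XAudio2())
        using (var masteringVoice = new MasteringVoice(xaudio2))
        using (var stream = new SoundStream(File.OpenRead(fileName)))
        {
            var buffer = new AudioBuffer
            {
                Stream = stream.ToDataStream(),
                AudioBytes = (int)stream.Length,
                Flags = BufferFlags.EndOfStream
            };

            var sourceVoice = new SourceVoice(xaudio2, stream.Format, true);
            sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
            sourceVoice.Start();

            while (sourceVoice.State.BuffersQueued > 0)
                Thread.Sleep(10);

            sourceVoice.DestroyVoice();
            sourceVoice.Dispose();
            buffer.Stream.Dispose();
        }
    }
}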

If you have time, have a look at SharpDX.Toolkit.Audio - it provides a simplified wrapper over the XAudio2 API.

I will close this issue for now - feel free to reopen it if you believe your usage is correct and there is an issue with SharpDX.

mattAtCSL commented 10 years ago

I deleted my post after I saw there was a different way of setting the effects, which I have now got working. I am not sure whether the other way should work; I was looking at C++ XAudio2 examples and trying to do the same thing.

I'll stick my test code back up if needed.