rainbowcreatures / FlashyWrappers

AIR / Flash video recording SDK

FWSoundMixer and NetStream #43

Closed ROBERT-MCDOWELL closed 6 years ago

ROBERT-MCDOWELL commented 6 years ago

Can FWSoundMixer create a track from multiple NetStreams? If yes, is there any example of how to use FWSoundMixer with FWVideoEncoder? Thanks.

ROBERT-MCDOWELL commented 6 years ago

OK, I tried to add audio into the FWEncoder with the addAudioData() method specified in the doc, but the compiler complains: "Call to a possibly undefined method addAudioData through a reference with static type com.rainbowcreatures.swf:FWVideoEncoder". Any clue?

ROBERT-MCDOWELL commented 6 years ago

I did a describeType() of FWEncoder and the addAudioData() method does not exist (maybe update the doc?). I also noticed these methods: setAudioRealtime(), setLogging(), addVideoFrame(), addSoundtrack(), addAudioFrame(), setRecordAudio().

I tried to use addAudioFrame() on enterFrame in conjunction with FWEncoder.capture(), but all the sound is recorded on one frame, so it sounds like extremely accelerated playback. Documentation on all the functions above would be very helpful, thanks.

ROBERT-MCDOWELL commented 6 years ago

I attached an mp4 video from Flash 28. I used capture() in non-realtime mode with addAudioFrame() on each frame (30 fps); the audio does not seem to respect the video PTS. video.zip

rainbowcreatures commented 6 years ago

Where methods are not documented in the PDF, it was on purpose. Those methods are either replaceable by higher-level calls or not needed for most use cases (addAudioFrame, for example, is wrapped into several higher-level API methods, both for capturing the microphone and for adding soundtracks).

I didn't want to confuse most users. However, they should all be documented in the API docs:

http://flashywrappers.com/asdoc/

The accelerated sound might also be a mono/stereo mismatch (you record one but supply the other; for stereo, every float is sent twice).
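To illustrate the stereo case, here is a minimal sketch of "every float is sent twice": a helper (monoToStereo is a hypothetical name, not part of the FW API) that expands a mono float buffer into an interleaved stereo one by duplicating each sample.

```actionscript
import flash.utils.ByteArray;
import flash.utils.Endian;

// Hypothetical helper: duplicate each mono float sample into
// left and right, producing an interleaved LRLR... stereo buffer.
function monoToStereo(mono:ByteArray):ByteArray {
    var stereo:ByteArray = new ByteArray();
    stereo.endian = Endian.LITTLE_ENDIAN;
    mono.position = 0;
    while (mono.bytesAvailable >= 4) {
        var sample:Number = mono.readFloat();
        stereo.writeFloat(sample); // left channel
        stereo.writeFloat(sample); // right channel
    }
    stereo.position = 0;
    return stereo;
}
```

If the encoder is set to stereo but receives mono data (or vice versa), it consumes the buffer at the wrong rate, which is one way to get the accelerated-sound symptom described above.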

I did test recording internet radio through NetStream and got it to work, but that was a long time ago with older versions of FW. I'm not sure if you purchased FW; if not, unfortunately I won't be able to get to this soon, as it's kind of a special case, most likely in a week or two.

Thanks for understanding.

ROBERT-MCDOWELL commented 6 years ago

My goal is just to grab the SoundMixer.computeSpectrum() ByteArray and add it to FWEncoder, but I don't know which function to use. At the very least, a clear example of how to add an audio ByteArray into FWEncoder would be appreciated. I personally don't make any profit from my code, so I avoid ruining my life buying every piece of code I use in my projects. Thank you.

rainbowcreatures commented 6 years ago

Try using addAudioFrame for that. Looking at computeSpectrum, though, I think you might need to do a bit of byte manipulation if you want stereo: it saves the left channel first, then the right (LLLLLLLLLLRRRRRRRRRR), while FW expects LRLRLRLRLR in stereo. For a quick test you might just use mono audio in FW and send it half of the computeSpectrum bytes (the left channel only, let's say). By the way, computeSpectrum will not give good audio quality; it was designed mostly for drawing the spectrum in a graph, not for replay as audio.
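As a sketch of that byte manipulation (relying on computeSpectrum writing 512 floats, 256 left followed by 256 right, and assuming FW is configured for stereo), the re-interleaving could look like this:

```actionscript
import flash.media.SoundMixer;
import flash.utils.ByteArray;
import flash.utils.Endian;

// Grab the current global sound output: 512 floats,
// left channel first (256 samples), then right (256 samples).
var spectrum:ByteArray = new ByteArray();
SoundMixer.computeSpectrum(spectrum, false);

// Re-interleave into the LRLRLR... order described above.
var interleaved:ByteArray = new ByteArray();
interleaved.endian = Endian.LITTLE_ENDIAN;
for (var i:int = 0; i < 256; i++) {
    spectrum.position = i * 4;           // i-th left sample
    interleaved.writeFloat(spectrum.readFloat());
    spectrum.position = (256 + i) * 4;   // i-th right sample
    interleaved.writeFloat(spectrum.readFloat());
}
interleaved.position = 0;
// addAudioFrame(interleaved); // assuming stereo is enabled in FW
```

Keep in mind the quality caveat above: computeSpectrum samples the output for visualization, so the result will not sound like a clean recording.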

I also haven't made any profit from FW in the past several months, so all of my support is free.

I can paste part of the code from inside FW which is used for microphone capture; it might be relevant as a working example of addAudioFrame usage (this is for mono float samples):


        // earlier in the code:
        samplesMic.endian = Endian.LITTLE_ENDIAN;

        // take the audio samples from the microphone and add them into myEncoder
        private function sndDataMic(event:SampleDataEvent):void {
            // ExternalInterface.call("console.log", "Microphone rate when not testing: " + microphone.rate);
            if (isRecording) {
                // copy all available mono float samples into the buffer
                while (event.data.bytesAvailable > 0) {
                    samplesMic.writeFloat(event.data.readFloat());
                }
                // flush to the encoder once enough samples have accumulated
                if (samplesMic.length >= 8192) {
                    samplesMic.position = 0;
                    addAudioFrame(samplesMic);
                    samplesMic.length = 0;
                }
            }
        }
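
For context, wiring such a handler up could look something like the following sketch (assuming a microphone and a samplesMic buffer exist as in the snippet above; this is standard Flash API usage, not FW internals):

```actionscript
import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.utils.ByteArray;
import flash.utils.Endian;

// Request the default microphone and a 44.1 kHz sample rate.
var microphone:Microphone = Microphone.getMicrophone();
microphone.rate = 44;

// Buffer that accumulates mono float samples between encoder flushes.
var samplesMic:ByteArray = new ByteArray();
samplesMic.endian = Endian.LITTLE_ENDIAN;

// sndDataMic fires whenever the microphone has new samples ready.
microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, sndDataMic);
```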

ROBERT-MCDOWELL commented 6 years ago

Thanks for the example. Yes, SoundMixer.computeSpectrum() is impossible to use for any sample-data grab and audio replay; I hit my head against the wall for days without success. I then submitted a feature request to Adobe to create this necessary feature, since "SoundMixer" means what it means, and a class without any channel to mix in stereo and play with audio data has no value at all. Please upvote https://tracker.adobe.com/#/view/FP-4198745 Also, I really wish for you and all of us who have been coding for years to get some decent revenues in a fair internet market soon...

rainbowcreatures commented 6 years ago

Upvoted, though I'm skeptical they will do something about it.