centricular / gstcefsrc

A simple gstreamer wrapper around Chromium Embedded Framework

Several audio sources not mixed #1

Closed thierrygayet closed 4 years ago

thierrygayet commented 4 years ago

Hi Mathieu,

I just found your project github module gstreamer for CEF and I really love your idea. Indeed, I work with embedded cef and the approach of linking it as a gstreamer module is really great! Well done !

So I tested it a lot and it works really well.

I modified it to point to a newer CEF version, because my URL is a website that receives audio and video sources over WebRTC, and prior CEF versions had a bug in the ICE protocol handling (STUN/TURN).

In addition, I observed an issue. Before testing your CEF module, I mixed the sound with the PulseAudio mixer, and it worked well when I had two separate sounds: they were properly added together. As you know, besides the on_paint() callback, CEF also provides a callback for the audio, but the latter remains unitary: two sounds from two different sources are not added together. Previously, when I overloaded the audio and video callbacks, I also performed an internal audio mix based on the different audio callbacks.

Would you consider modifying your module to manage the mix internally, in the GStreamer source module? Without this mix there is a constraint: only the first audio source is heard, and the second is not handled.

Sample of simple test page:

<html>
  <body style="background-color:aliceblue;">
    <embed type="video/mp4" src="Comptines_et_chansons_pour_enfants.mp4" autostart="true" width="600" height="720" volume="1"/>
    <embed type="video/webm" src="Volcano_Lava_Sample.webm" autostart="true" width="400" height="720" volume="0.3"/>
  </body>
</html>

Thank you in advance for your reply.

Thierry

MathieuDuponchelle commented 4 years ago

A priori, multiple audio sources should be exposed as multiple pads on the cef demuxer. You can then perform mixing at your discretion with eg audiomixer. Can you confirm you do get multiple audio pads in your case?
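For example, assuming two audio pads are exposed, a mixing pipeline could look like this sketch (the page URL is a placeholder, and the pad names audio_1/audio_2 are an assumption based on the pipelines shown elsewhere in this thread; they may differ in your build):

```shell
# Sketch only: mix two assumed cefdemux audio pads with audiomixer.
gst-launch-1.0 cefsrc url="file:///path/to/page.html" ! queue ! cefdemux name=d \
    d.video ! video/x-raw ! queue ! videoconvert ! autovideosink \
    d.audio_1 ! audio/x-raw ! queue ! audioconvert ! mix. \
    d.audio_2 ! audio/x-raw ! queue ! audioconvert ! mix. \
    audiomixer name=mix ! audioconvert ! autoaudiosink
```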

MathieuDuponchelle commented 4 years ago

Also for reference re pulse, that was the original design of the larger application I use this element in, as audio support hadn't yet been merged upstream at the time, but being able to switch to the new CEFAudioHandler made things decidedly simpler and easier to integrate.

Note that I wouldn't object to merging a higher-level bin that would take care of instantiating both the source and the demuxer, and perhaps expose a property to allow performing mixing of the audio streams, eg mix-audio-streams. One could make that bin implement the GstChildProxy interface as well, so that the user can also set the mix-matrix property if more control is desired over how to perform audio mixing.
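As a rough illustration of the application-level alternative (this is not part of the element today; the helper name, pad names and fixed stream count are all assumptions), a Python helper could assemble such a mixing pipeline description:

```python
def build_mixed_pipeline(url, n_audio_streams):
    """Assemble a gst-launch-style description that mixes all audio pads.

    Hypothetical helper: pad names (audio_1, audio_2, ...) follow the
    pipelines shown in this thread and may differ in your build.
    """
    parts = [
        'cefsrc url="%s" ! queue ! cefdemux name=d' % url,
        "d.video ! video/x-raw ! queue ! videoconvert ! autovideosink",
    ]
    # Route each known audio pad through a conversion branch into the mixer.
    for i in range(1, n_audio_streams + 1):
        parts.append(
            "d.audio_%d ! audio/x-raw ! queue ! audioconvert ! mix." % i)
    parts.append("audiomixer name=mix ! audioconvert ! autoaudiosink")
    return " ".join(parts)
```

The resulting string can be fed to gst-launch-1.0 or Gst.parse_launch(); a wrapper bin with a mix-audio-streams property would essentially do the same wiring internally.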

MathieuDuponchelle commented 4 years ago

ping @thierrygayet :)

thierrygayet commented 4 years ago

Hi Mathieu,

trace-8.log: https://drive.google.com/a/dazzl.tv/file/d/1f6FchVvo3jA-SE4eAp2-M3qyNOen1JKi/view?usp=drive_web

By the way, I need to preload the CEF dynamic library before any run:

export LD_PRELOAD=/overlay/cef-gstreamer/Release/libcef.so

For the tests, I include my HTML file (test-2-audio-sources.html), the two video samples used (Comptines_et_chansons_pour_enfants.mp4 and Volcano_Lava_Sample.webm) and the log file (trace-8.log) at debug level 8.

Comptines_et_chansons_pour_enfants.mp4: https://drive.google.com/a/dazzl.tv/file/d/11D5gLjrX8gXwPBKgfEAixEVM5zDf1bX/view?usp=drive_web

Volcano_Lava_Sample.webm: https://drive.google.com/a/dazzl.tv/file/d/1ft3gezmzkx9OV10monze6WU-bY5gAyaL/view?usp=drive_web

Here is my pipeline:

gst-launch-1.0 cefsrc url="file:///home/tgayet/Workspace/gstcefsrc-75.0.6+g90ecd35+chromium-75.0.3770.80-DAZZL/build/Release/test-2-audio-sources.htm" ! queue ! cefdemux name=d d.video ! video/x-raw ! queue ! videoconvert ! autovideosink d. ! audio/x-raw ! queue ! audioconvert ! autoaudiosink async-handling=true > ./trace-8.log 2>&1

Furthermore, I have also generated the dot files for debugging and converted them to PDF.

BR Thierry


MathieuDuponchelle commented 4 years ago

export LD_PRELOAD=/overlay/cef-gstreamer/Release/libcef.so

You need to do that when you haven't used the appropriate option when building CEF / Chromium yourself. The Spotify builds that are used by default are built with that option; I'm afraid I don't remember how to set it, but you can probably find this in the CEF forums.

Furthermore, I have also generated the dot files for debugging and converted them to PDF.

That should let you verify how many audio pads were exposed by cefdemux; you can upload the dot files here if you want me to take a look.

In general, if you want to use this element with an arbitrary number of audio streams, I would recommend either writing a simple application around it, or implementing a wrapper bin as I suggested in my earlier comment, I'll be happy to review / upstream it :)

MathieuDuponchelle commented 4 years ago

from https://magpcss.org/ceforum/viewtopic.php?f=6&t=14122:

The CEF binaries are built with tcmalloc by default. You need to either run with LD_PRELOAD=libcef.so or build CEF locally without tcmalloc. See https://bitbucket.org/chromiumembedded/cef/issues/1827 for additional details.

MathieuDuponchelle commented 4 years ago

@thierrygayet you also haven't attached your html file anywhere that I can see ;)

thierrygayet commented 4 years ago

Yes, I know, but .html is not a supported attachment extension on GitHub.

That's why I sent them by email directly.

Here is a sample of html file :

  <html>
  <body style="background-color:aliceblue;">
  <embed type="video/mp4"  src="Comptines_et_chansons_pour_enfants.mp4" autostart="true" width="600" height="720" volume="1"/>
  <embed type="video/webm" src="Volcano_Lava_Sample.webm"               autostart="true" width="400" height="720" volume="0.3"/>
  </body>
  </html>

BR Thierry

MathieuDuponchelle commented 4 years ago

OK, so with the spotify build, the video stream for "Comptines_et_chansons_pour_enfants.mp4" never starts, and "Volcano_Lava_Sample.webm" starts but pauses immediately, so I can't really reproduce your issue here. AFAICT their binaries are not necessarily built with the ffmpeg backend; I didn't look further.

Regarding playback of multiple audio streams however, when using wave files things seem to work as expected:

gst-launch-1.0 audiotestsrc num-buffers=3000 ! audio/x-raw, channels=2 ! wavenc ! filesink location=sine.wav
gst-launch-1.0 audiotestsrc num-buffers=3000 wave=white-noise ! audio/x-raw, channels=2 ! wavenc ! filesink location=snow.wav
<html>
  <body style="background-color:aliceblue;">
    <embed autostart="true" type="audio/wav" src="sine.wav" />
    <embed autostart="true" type="audio/wav" src="snow.wav" />
  </body>
</html>
gst-launch-1.0 -v cefsrc url="file:///home/meh/Downloads/foo.html" ! queue ! cefdemux name=d d.video ! video/x-raw ! queue ! videoconvert ! xvimagesink d.audio_1 ! audio/x-raw ! queue ! audioconvert ! pulsesink volume=1.0 async=false d.audio_2 ! audio/x-raw ! queue ! audioconvert ! pulsesink volume=1.0 async=false

Both streams are output to the audio sinks :)

thierrygayet commented 4 years ago

Hi Mathieu,

Yes, it is expected that you could not play both videos, because the CEF version from Spotify's server does not include any H.264 support!

On our side, CEF has been rebuilt to add this feature, which we need for WebRTC.

In any case, thank you for the pipeline. I had not tried to get the different audio sources separately as you did in your example.

I will try to do the audio mix within GStreamer, without having to go through PulseAudio, as I work in a server environment.

BR Thierry

MathieuDuponchelle commented 4 years ago

I will try to do the audio mix within GStreamer, without having to go through PulseAudio, as I work in a server environment.

+1 to this, if you manage to make your use case work that way it's definitely the nicer solution :)

Closing this as I think we've come to a conclusion here, please reopen if the original issue statement (can't mix several audio sources) still gives you problems :)

thierryg commented 4 years ago

Hi Mathieu,

I have a question about web pages that contain several different audio sources. The last tests showed that we get two audio pads (audio_0 and audio_1).

My question, to continue the discussion we had, is whether it would be possible to manage several pads dynamically, on the fly.

By dynamically, I mean that the proof-of-concept page could create different audio streams over time. So the question is whether a pipeline written in Python, for example, could dynamically handle an additional pad, or add an element, without disturbing the stream.

BR Thierry

MathieuDuponchelle commented 4 years ago

Yes, it is possible for an application written in any language to properly manage a dynamic GStreamer pipeline.
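To illustrate, here is a minimal sketch of the usual pattern (not code from this repository: the branch_description helper and the pad names are assumptions, and running main() requires PyGObject and GStreamer with gstcefsrc installed). Pads that appear at runtime are handled through the element's pad-added signal:

```python
def branch_description(pad_name):
    """Return a launch description for the branch matching a demuxer pad.

    Hypothetical helper; pad names (video, audio_N) follow this thread
    and may differ in your build.
    """
    if pad_name.startswith("audio"):
        return "queue ! audioconvert ! audioresample ! autoaudiosink"
    return "queue ! videoconvert ! autovideosink"

def main(url):
    # Sketch only: builds the source side, then links sink branches
    # on the fly as cefdemux exposes its pads.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch('cefsrc url="%s" ! queue ! cefdemux name=d' % url)
    demux = pipeline.get_by_name("d")

    def on_pad_added(element, pad):
        # Create a matching sink branch, add it, sync its state, link it.
        branch = Gst.parse_bin_from_description(
            branch_description(pad.get_name()), True)
        pipeline.add(branch)
        branch.sync_state_with_parent()
        pad.link(branch.get_static_pad("sink"))

    demux.connect("pad-added", on_pad_added)
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()
```

The same pattern also works for removing branches when a pad goes away, as long as state changes and unlinking are done carefully.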

thierryg commented 4 years ago

Indeed, that is what I saw on this website, which shows several examples as Python scripts: https://github.com/MaZderMind/dynamic-gstreamer-pipelines-cookbook.git