TNSTomas opened this issue 4 years ago:

We are looking to use Hubs for a performance event and have worked out a solution to stream the audio in, but everything is interpreted as mono once it is in Hubs. Does Hubs have any sort of feature that would allow stereo audio to be brought into the room? At the moment we are using Virtual Inputs to stream audio from Pro Tools into the space, but it always comes in as mono, and there is nothing in the documentation about switching to stereo sound. Any insight into this issue would be greatly appreciated. Thank you!
Stereo would be great. I'd go a step further and look at how to modify input sound source attributes:
@TNSTomas can you elaborate on how you are going about streaming audio in?
I've come up with several "tricks": use a USB audio input source (a $5-$15 dongle works), and either feed audio in from an iPhone/iPod into the mic in, or, if I'm running audio off the computer, loop audio from the speaker/headphone output jack back into the mic in. For the "sound source", log in a user from the PC/Mac and set that user's mic input source to the USB audio mic input. Either should work; however, some USB audio dongles have a very low mic gain and it might not be loud enough.
Let me know how you are getting audio streaming in, as I may have overlooked a better approach.
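If you'd rather pin the input from the browser side instead of the OS sound settings, a minimal sketch with the standard MediaDevices API should do it (the "USB Audio" label here is just a guess at what a dongle reports; check the `enumerateDevices` output for yours):

```typescript
// Pick the USB dongle out of the available capture devices by label.
// Labels are only populated after the user grants mic permission once.
async function captureFromUsbDongle(): Promise<MediaStream> {
  await navigator.mediaDevices.getUserMedia({ audio: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  const dongle = devices.find(
    (d) => d.kind === "audioinput" && d.label.includes("USB Audio")
  );
  if (!dongle) throw new Error("USB audio input not found");

  // Capture from that specific device rather than the default mic.
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: dongle.deviceId } },
  });
}
```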
Hey @truedat101
We're experimenting with a couple of software setups right now that run through Pro Tools and then into Hubs. We use JackTrip to stream audio from the different performers to our audio engineer's computer. He takes those streams and runs them through Pro Tools, where he syncs them up, adds reverb, etc. From Pro Tools we use a virtual audio cable to connect the output of Pro Tools to the mic input of an avatar within Hubs.
Our issue is that once the audio is brought into Hubs it's converted to mono instead of the stereo mix we have in Pro Tools. We're hoping to find a way to stream the stereo mix in.
We've experimented with the audio modes in Hubs, but they don't seem to actually change the audio to stereo. Because we've been streaming the audio through an avatar, there don't seem to be many options to change.
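From what I've read, the mono conversion usually comes from two WebRTC defaults rather than from Pro Tools: the browser's voice processing captures a single channel, and Opus gets negotiated without its stereo flags. A sketch of the usual workaround, assuming you can touch the capture constraints and the SDP (Hubs probably doesn't expose these hooks without forking the client):

```typescript
// 1. Capture the virtual cable with browser voice processing disabled
//    and two channels requested.
async function captureStereo(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    audio: {
      channelCount: 2,
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false,
    },
  });
}

// 2. Before the offer/answer goes out, flag Opus as stereo in the SDP
//    (in a client fork this would go wherever the adapter touches SDP).
function enableStereoOpus(sdp: string): string {
  const match = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/);
  if (!match) return sdp; // no Opus codec line found
  const pt = match[1];
  return sdp.replace(
    new RegExp(`a=fmtp:${pt} (.*)`),
    (_line, params) => `a=fmtp:${pt} ${params};stereo=1;sprop-stereo=1`
  );
}
```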
So yeah, the audio into an avatar is likely optimized for voice. Pre-pandemic, performance- and broadcast-oriented audio for VR use cases was probably a low priority. I am interested in a solution for this problem as well and will keep you posted on what I find. I will be looking through the code to understand the audio a bit more, since I want to change the behavior of the audio culling and potentially create speaker objects that would allow an audio tech to manage the sound in VR for a performance space. I need to learn more about how Hubs manages sound for voice, and possibly see how to turn off some of the voice optimizations in favor of a more performance/broadcast-oriented use case.
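Rough sketch of the kind of speaker object I have in mind, in plain three.js (which Hubs is built on); wiring an incoming MediaStream into Hubs' own component system would be the real work:

```typescript
import * as THREE from "three";

// A "speaker" an audio tech could place and tune, fed by an incoming
// MediaStream. The distance settings stand in for whatever Hubs'
// voice-oriented defaults and audio culling currently impose.
function createSpeaker(
  listener: THREE.AudioListener,
  stream: MediaStream
): THREE.PositionalAudio {
  const speaker = new THREE.PositionalAudio(listener);
  speaker.setMediaStreamSource(stream);
  speaker.setDistanceModel("linear"); // gentler falloff than the default
  speaker.setRefDistance(5);          // full volume within ~5 m
  speaker.setMaxDistance(50);         // audible across a venue-sized room
  speaker.setRolloffFactor(1);
  return speaker;
}
```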
Will it work if the performance is live-streamed on another site (YouTube/Twitch/...) and the website link is then dragged and dropped into the room? I have yet to try, but I'm really keen on putting up performances. Instead of Pro Tools, I use Ableton Live.
@namiemeow interesting, yes, I'm also an Ableton user. There might be some things to explore with the Web MIDI spec, but I think that would be related to a different issue.
Regarding the links, yes, this works for sure, although I've had some problems with YouTube videos as links into video objects.
The thing to consider is how music and voice conversation mix in a scene. One possibility is to disable the mics in a room if the performance audio is adversely affected.
@truedat101 thank you. will try the links. hopefully no issues with YouTube for me.
The YouTube issues happen, I believe, when the Mozilla Hubs server exceeds its allocated API limit for resolving videos with YouTube. Seeing as you can't predict when the API limits will be hit, your experience with YouTube is likely to be hit and miss.
@davegoopot I've worked around the "API limit" thing by sharing my desktop window of a browser session streaming YouTube. It works. The downside with user screen share (along with user camera share) is that other users can grab the video and toss it out of view. I haven't figured out whether there is a way to prevent that. Especially when people are new to VR, they tend to click all the controls.
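For reference, the tab/desktop share is just the standard getDisplayMedia call; whether an audio track actually comes along depends on the browser and OS (Chrome tab sharing with "Share tab audio" ticked is the reliable case):

```typescript
// Share a tab/window along with its audio. In Chrome, picking a tab
// and ticking "Share tab audio" yields an audio track next to the video.
async function shareTabWithAudio(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true,
  });
  if (stream.getAudioTracks().length === 0) {
    console.warn("No audio track; this browser/OS may not capture tab audio");
  }
  return stream;
}
```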
SAT-Metalab has some work on this, apparently: https://gitlab.com/sat-metalab/forks/hubs/-/commit/80af2fe9b09f2f41deb3f17da0fc3be6ce7ca056
We are trying to do a similar thing for a quadraphonic sound performance or installation. Is there a way to sync (pre-recorded or live) audio sources? It would be four mono channels coming in with synchronized starting points. For the live application we use Ableton Live, too.
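For the pre-recorded case, the Web Audio API can at least start all four channels against one clock so they stay sample-aligned; keeping live network streams in sync is a much harder problem. A rough sketch (the URLs are placeholders):

```typescript
// Start four mono files on the same AudioContext clock so they begin
// sample-aligned. URLs are placeholders for the four channel files.
async function playQuad(urls: string[]): Promise<void> {
  const ctx = new AudioContext();
  const buffers = await Promise.all(
    urls.map(async (url) => {
      const res = await fetch(url);
      return ctx.decodeAudioData(await res.arrayBuffer());
    })
  );
  const startAt = ctx.currentTime + 0.1; // small scheduling headroom
  buffers.forEach((buffer) => {
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination); // or per-channel panners for quad placement
    src.start(startAt);           // all four begin on the same tick
  });
}
```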
> We are looking to use Hubs for a performance event and have worked out a solution to stream the audio in
I'm curious how you achieved this. When I tested it, the latency was far too high for it to be usable for a band rehearsal.
I've done a live-streamed music performance with video and stereo audio in Hubs. Here's what you need to do:
Expect serious latency, like 10-60 seconds. Anyone who solves the problem of internet stream latency before venues reopen will be able to pay off all their student loan debts tenfold.
> SAT-Metalab has some work on this, apparently: https://gitlab.com/sat-metalab/forks/hubs/-/commit/80af2fe9b09f2f41deb3f17da0fc3be6ce7ca056
I applied the changes from this commit to my naf-dialog-adapter script, but it did not improve the music audio quality when streamed through Hubs.
For others looking for a music live stream solution: try Twitch. You can grab the audio-only stream URL from a Twitch stream using twitch-m3u8 and paste that into an Audio node in Spoke. Voila, live high quality audio in Hubs (with Twitch-like latency).
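A sketch of the lookup, going by my reading of the twitch-m3u8 README (treat the API shape as an assumption and double-check it against the package):

```typescript
import twitch from "twitch-m3u8";

// getStream resolves to a list of { quality, resolution, url } entries
// for a live channel; the audio-only variant has quality "audio_only".
// (Channel name is whatever stream you want; API shape per the README.)
async function getAudioOnlyUrl(channel: string): Promise<string> {
  const streams = await twitch.getStream(channel);
  const audio = streams.find((s: any) => s.quality === "audio_only");
  if (!audio) throw new Error("No audio-only stream found");
  return audio.url; // paste this into a Spoke Audio node
}
```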