Open DanielBUBU opened 2 months ago
your post is a bit vague but changing the behaviour of how a stream acts when piped is bad imo (& a breaking change). there are specific sites that can work right now if you create an audioresource from them and i think that's what you mean with the "web radio" stuff; this is probably the best option
Well, my idea is "using the AudioPlayer as a web radio server controller", not just playing a 24/7 stream through an AudioPlayer.
So it should work more like this:
queued sources (AudioResources) => single discord AudioPlayer <==Subs==> multiple VoiceConnections (and the web radio stream should be added here, working like a VoiceConnection)
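The "single discord AudioPlayer <==Subs==> multiple VoiceConnections" part of this diagram is what @discordjs/voice already supports; a minimal sketch of subscribing one player to several connections (the guilds list and the file path are placeholders, and this needs live Discord credentials to actually run):

```javascript
const { createAudioPlayer, createAudioResource, joinVoiceChannel } = require('@discordjs/voice');

// One player drives the same audio to every subscribed connection.
const player = createAudioPlayer();
player.play(createAudioResource('./track.ogg')); // placeholder source

// Placeholder: fill with one { guildId, channelId, adapterCreator } per guild.
const guilds = [];

for (const { guildId, channelId, adapterCreator } of guilds) {
  const connection = joinVoiceChannel({ guildId, channelId, adapterCreator });
  connection.subscribe(player); // every connection stays in sync with the player
}
```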
The framework above is already done and working, except for the web radio stream part; the example below works right now:
And I want my web radio stream server to carry audio synced from AP1 too.
I suppose a VoiceConnection works like a stream, so method 2 I mentioned fits this situation better. Besides, it should be fine to pipe one stream into multiple streams.
My bot already has the ability to play a 24/7 stream, btw, in case you misunderstood my thought.
The framework below is what I'd have to do if you don't add the feature. This is the WORST solution in my project, since it requires a lot of controllers to block async functions and to sync audio between the web radio and the single discord AudioPlayer:
sources (fs audio stream or something) => web radio => discord audio resource => single discord AudioPlayer <==Subs==> multiple VoiceConnections
what's wrong with piping your input to the site separately from the audioplayer (but at the same time)? does it have to go "through" the audioplayer?
Yes, it has to go through the audioplayer first, so my web radio site will have the same audio, in sync with discord.
In my best implementation plan, it should have nothing to do with AudioResource and AudioPlayer; only a modified VoiceConnection, or a VoiceConnection extension, is necessary.
What you've suggested for VoiceConnection doesn't make sense. It doesn't manage streams; that is the job of AudioPlayer.
Reading back, you said the ideal solution was
Output silence when player is not in AudioPlayerPlayingState instead of close/end the output stream
Have you tried setting the maxMissedFrames behaviour to infinity?
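For reference, maxMissedFrames is part of the player's behaviors object, fixed at creation time; a configuration sketch (Infinity here mirrors the suggestion above, and NoSubscriberBehavior.Play keeps the player running even with no subscribers):

```javascript
const { createAudioPlayer, NoSubscriberBehavior } = require('@discordjs/voice');

// Behaviours cannot be changed after the player is created.
const player = createAudioPlayer({
  behaviors: {
    noSubscriber: NoSubscriberBehavior.Play,   // keep playing with no subscribers
    maxMissedFrames: Number.POSITIVE_INFINITY, // never stop over missed frames
  },
});
```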
What you've suggested for VoiceConnection doesn't make sense. It doesn't manage streams, that is the job of AudioPlayer.
Oh I didn't know that, so the method2 is kinda useless now.
Reading back, you said the ideal solution was
Output silence when player is not in AudioPlayerPlayingState instead of close/end the output stream
Have you tried setting the maxMissedFrames behaviour to infinity?
ye, I did set maxMissedFrames to infinity, but it doesn't output silent audio data (no green circle around the icon) in the Idle, Paused, or Buffering state (not sure about the Buffering state because it's too short).
Besides, there's no way to get the data that is processed by AudioPlayer and decode it right now.
ye, I did set maxMissedFrames to infinity, but it doesn't output silence audio data (no green circle around icon)
Yeah, silence frames don't set a "green circle around icon," that would be counterintuitive to actually sending a voice stream. You can get the "data processed by AudioPlayer" by just using it before you pass it into the AudioPlayer. You can synchronise a different stream & the audioplayer by just running the same actions on them.
Besides, there's no way to get data that is processed by AudioPlayer and decode it right now.
Sorry but it doesn't make sense at all to get the output of what you're piping into AudioPlayer; it will literally be the same stream, discord does not send the audio data back so you will not know if it's actually received by the channel & if you want silence frames on your secondary stream you can manually push them.
If you remove the context of discord it also doesn't make sense. You shouldn't rely on one service to send two identical streams of data to two services, just send the stream to both places separately of context of the other.
I just did an experiment, and I found that the data event emitted by the stream (the stream I use to create an AudioResource) is in sync with the discord audio. Here is my working solution:
// Method of my music-processing class; fluentffmpeg is fluent-ffmpeg, and
// createAudioResource / StreamType come from @discordjs/voice.
wrapStreamToResauce(stream, BT = false) {
    try {
        let streamOpt;
        let audio_resauce;
        const ffmpeg_audio_stream_C = fluentffmpeg(stream);

        // BT is a resume position in milliseconds; seek the input if set.
        if (BT) {
            console.log("Set BT:" + Math.ceil(BT / 1000));
            ffmpeg_audio_stream_C.seekInput(Math.ceil(BT / 1000));
        }

        ffmpeg_audio_stream_C
            .toFormat('hls')
            .audioChannels(2)
            .audioFrequency(48000)
            .audioBitrate('1536k');

        ffmpeg_audio_stream_C.on("error", (error) => {
            this.handling_vc_err = true;
            console.log("ffmpegErr" + error);
            if (error.outputStreamError &&
                error.outputStreamError.code === "ERR_STREAM_PREMATURE_CLOSE") {
                this.clear_status(false, () => {
                    try {
                        //stream.destroy();
                    } catch (error) {
                        console.log(error);
                    }
                    this.playingErrorHandling(audio_resauce, error);
                });
                return;
            }
            this.playingErrorHandling(audio_resauce, error);
        });

        // Tee point: every chunk that feeds the AudioResource is also handed
        // to the web radio stream, which keeps both outputs in sync.
        streamOpt = ffmpeg_audio_stream_C.pipe();
        streamOpt.on("data", (chunk) => {
            this.webAudioStream._transform(chunk);
        });
        streamOpt.on("end", () => {
            console.log("streamEnd");
        });

        audio_resauce = createAudioResource(
            streamOpt, { inputType: StreamType.Arbitrary, silencePaddingFrames: 10 }
        );
        audio_resauce.metadata = this.queue[this.nowplaying];

        // Proxy the resource so every write to playbackDuration is reported
        // to the parent process (used to track playback position externally).
        return new Proxy(audio_resauce, {
            set: (target, key, value) => {
                target[key] = value;
                if (key === "playbackDuration" && process.send) {
                    process.send(value);
                }
                return true;
            }
        });
    } catch (error) {
        console.log("ERRwhenwarp");
        throw error;
    }
}
Yeah, silence frames don't set a "green circle around icon," that would be counterintuitive to actually sending a voice stream. You can get the "data processed by AudioPlayer" by just using it before you pass it into the AudioPlayer. You can synchronise a different stream & the audioplayer by just running the same actions on them.
Sorry but it doesn't make sense at all to get the output of what you're piping into AudioPlayer; it will literally be the same stream, discord does not send the audio data back so you will not know if it's actually received by the channel & if you want silence frames on your secondary stream you can manually push them.
This changed my thought a lot, TY.
This is what it looks like now:
2 VLC players and the bot in 2 guilds; all 4 audio outputs are almost in sync. The issue can be closed now ig.
Which application or package is this feature request for?
voice
Feature
I am trying to build a site that can play the same music as discord.
Here's the part of the code I wrote: a BufferTransformStream in my class for music processing. this.webAudioStream is a BufferingTransform object, and I want data to pass from an AudioPlayer object to this.webAudioStream (so the music syncs with discord).
Ideal solution or implementation
Output silence when the player is not in AudioPlayerPlayingState instead of closing/ending the output stream.
Method1 - Add a pipe function
Pipe to another stream, and the AudioPlayer object doesn't close when the stream it pipes into is closed.
Method2 - Pretend it's a connection
Init a VoiceConnection object using a stream, so it can be subscribed/unsubscribed just like a VoiceConnection.
Alternative solutions or implementations
No response
Other context
My target is to create a web radio that syncs with discord. It can be done if I pipe them like this:
But the annoying part is that I have to maintain more stuff, and the discord audio might become unstable because of the web radio part. In a discord bot project, that becomes a tradeoff between reliability and the new web radio feature for my bot.