Closed: DineshkumarKandasamy closed this issue 1 year ago.
Hey @DineshkumarKandasamy, you should not try to use AVAudioSession in a Broadcast Upload Application Extension. It is not functional there, because an Application Extension is not an actual Application, and using it may lead to unexpected behavior.
What happens when you run our sample app without any changes? https://github.com/aws-samples/amazon-ivs-broadcast-ios-sample/blob/main/ScreenCapture/SampleHandler.swift
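For reference, the linked SampleHandler boils down to roughly the shape below. This is a hedged sketch, not the sample verbatim: the IVSReplayKitBroadcastSession initializer and the start(with:streamKey:) signature are assumed from the 1.7.1 SDK docs, and "INGEST_URL" / "STREAM_KEY" are placeholders for your channel's actual values.

```swift
import ReplayKit
import AmazonIVSBroadcast

// Minimal Broadcast Upload Extension handler, modeled loosely on the sample.
// Note: no AVAudioSession configuration anywhere — extensions must not touch it.
class SampleHandler: RPBroadcastSampleHandler {
    private var session: IVSReplayKitBroadcastSession?

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        let config = IVSBroadcastConfiguration()
        session = try? IVSReplayKitBroadcastSession(videoConfiguration: config.video,
                                                    audioConfig: config.audio,
                                                    delegate: nil)
        // Placeholder endpoint and key; substitute your channel's values.
        if let url = URL(string: "rtmps://INGEST_URL") {
            try? session?.start(with: url, streamKey: "STREAM_KEY")
        }
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        // Forward each ReplayKit buffer to the matching built-in source.
        switch sampleBufferType {
        case .video:
            session?.systemImageSource.onSampleBuffer(sampleBuffer)
        case .audioApp:
            session?.systemAudioSource.onSampleBuffer(sampleBuffer)
        case .audioMic:
            session?.microphoneSource.onSampleBuffer(sampleBuffer)
        @unknown default:
            break
        }
    }

    override func broadcastFinished() {
        session?.stop()
    }
}
```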
Thanks @bclymer Could you please guide me on how to set the audio codec in the audio settings and the profile level in the video settings? Also, is it possible to merge two audio sources together and broadcast them?
Hi @bclymer @slawrence @hyandell Could you please share any suggestions? That would be very helpful. Thanks in advance.
Hi @DineshkumarKandasamy We don't support configuring the audio codec or the profile level. You cannot access the mixer with IVSReplayKitBroadcastSession. An alternative is to integrate ReplayKit using IVSBroadcastSession; however, you will probably experience crashes due to the ReplayKit memory limit.
https://aws.github.io/amazon-ivs-broadcast-docs/1.7.1/ios/Classes/IVSReplayKitBroadcastSession.html
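The IVSBroadcastSession alternative mentioned above can be sketched roughly as follows. This is a hedged sketch, assuming the createImageSource(withName:), createAudioSource(withName:), and attach(_:toSlotWithName:onComplete:) APIs from the broadcast SDK docs; the source names are illustrative, and the ReplayKit memory-limit caveat still applies.

```swift
import ReplayKit
import AmazonIVSBroadcast

// Sketch: custom sources on IVSBroadcastSession fed from ReplayKit buffers.
// The names "screen" and "appAudio" are made up, not required by the SDK.
func makeReplayKitBackedSession() throws -> (IVSBroadcastSession,
                                             IVSCustomImageSource,
                                             IVSCustomAudioSource) {
    let session = try IVSBroadcastSession(configuration: IVSBroadcastConfiguration(),
                                          descriptors: nil,
                                          delegate: nil)
    let imageSource = session.createImageSource(withName: "screen")
    let audioSource = session.createAudioSource(withName: "appAudio")
    session.attach(imageSource, toSlotWithName: nil, onComplete: nil)
    session.attach(audioSource, toSlotWithName: nil, onComplete: nil)
    return (session, imageSource, audioSource)
}

// Then, inside processSampleBuffer(_:with:), forward each buffer type:
//   .video    -> imageSource.onSampleBuffer(sampleBuffer)
//   .audioApp -> audioSource.onSampleBuffer(sampleBuffer)
```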
Hi @thmatuza Thanks for the feedback. I have checked both IVSReplayKitBroadcastSession and IVSBroadcastSession, but I am still facing the same audio issue. Is enabling both audioMic and audioApp the right approach? In the IVS sample project both are enabled, but in our case the audio issue still occurs. Please share your suggestions; that would be very useful for our live projects.
Hi @DineshkumarKandasamy So the IVS sample project works for you, right? What is the difference between the IVS sample project and your code with IVSReplayKitBroadcastSession? Could you share your IVSReplayKitBroadcastSession code within processSampleBuffer? (just the audioMic and audioApp case statements)
Hi @thmatuza Thanks for the reply. We are trying to broadcast a WebRTC call (two users connected on a call). I am using the code below in my project. Kindly check:
```swift
RPScreenRecorder.shared().startCapture(handler: { (sample, bufferType, error) in
    if CMSampleBufferDataIsReady(sample) {
        switch bufferType {
        case .video:
            self.session?.systemImageSource.onSampleBuffer(sample)
        case .audioMic:
            self.session?.microphoneSource.onSampleBuffer(sample)
        case .audioApp:
            self.session?.systemAudioSource.onSampleBuffer(sample)
        @unknown default:
            break
        }
    }
}) { (error) in
    debugPrint(error ?? "error")
}
```
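One optional hardening of the snippet above (my suggestion, not from the SDK docs): capture self weakly so the long-lived capture handler does not retain its owner, and log startCapture failures explicitly.

```swift
import ReplayKit

// Same dispatch as above, but with [weak self] to avoid a retain cycle
// and an explicit message when startCapture itself fails.
RPScreenRecorder.shared().startCapture(handler: { [weak self] (sample, bufferType, _) in
    guard let self = self, CMSampleBufferDataIsReady(sample) else { return }
    switch bufferType {
    case .video:
        self.session?.systemImageSource.onSampleBuffer(sample)
    case .audioMic:
        self.session?.microphoneSource.onSampleBuffer(sample)
    case .audioApp:
        self.session?.systemAudioSource.onSampleBuffer(sample)
    @unknown default:
        break
    }
}) { error in
    if let error = error {
        debugPrint("startCapture failed: \(error)")
    }
}
```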
Hi @thmatuza Could you please check the code and give your feedback?
It looks OK. However, I am not sure you can use ReplayKit for WebRTC call audio. Have you tried a pure screen recording to a file? If you cannot hear audio in the file, I don't think it will work with ReplayKit either. You may still want to consider other solutions.
Thanks for the feedback @thmatuza We have tested with a screen recording to a file, but the audio is not present in the file. Can we broadcast a WebRTC call using IVSBroadcastSession or any other solution? Kindly suggest.
Hi @thmatuza Could you please suggest an alternate approach or any other solution?
Hi @DineshkumarKandasamy We have released a new feature today: https://aws.amazon.com/blogs/media/add-multiple-hosts-to-live-streams-with-amazon-ivs/ I hope it helps you accomplish what you want.
Hi @thmatuza It's a really awesome feature. Thanks for sharing.
Just for better understanding, I am doing a POC with IVSReplayKitBroadcastSession and ReplayKit. Let me share my findings; I hope they will be useful for everyone. I tested with an iPhone 6S and an iPhone 12, connected to each other via a WebRTC call. Here are the scenarios I tested:
Scenario 1: The iPhone 6S (iOS 13) was the source stream device and the iPhone 12 was not streaming.
Result: Both audio and video quality were good. It worked as expected.
Scenario 2: The iPhone 12 was the source stream device and the iPhone 6S was not streaming.
Result: Video quality was good, but audio did not work.
Scenario 3: The iPhone 6S (iOS 15) was the source stream device and the iPhone 12 was not streaming.
Result: Video quality was good, but audio did not work.
Kindly check and share your suggestions. Thanks
Hi @DineshkumarKandasamy, that is interesting. If you do the same experiment with a pure screen recording to a file, what happens?
Hi @thmatuza Thanks for the reply. Let me share the scenario I tested: I connected both the iPhone 12 and the iPhone 6S via a WebRTC call, started the broadcast from the iPhone 12, enabled screen recording on the iPhone 12, and saved the video. Audio was not present in the saved file.
Hi @thmatuza Can you please give some suggestions on this?
If audio doesn't work in a screen recording, I don't know if there is anything we can do. Could you consider using the Stage feature? https://aws.amazon.com/blogs/media/add-multiple-hosts-to-live-streams-with-amazon-ivs/ With our Stage feature, you can do what you would otherwise do with WebRTC.
Closing out this issue due to inactivity. The final conclusion is that if a native screen recording (as saved to the Photos app) cannot capture audio from a WebRTC call, our SDK is not capable of doing it either.
Hi @zy-ivs
I have tried to broadcast a screen share using ReplayKit and IVS Broadcast. I can see the broadcast video, but I am not able to hear the audio. I have used the settings below for audio. Please suggest how to resolve the audio issue.