Open iamrototo opened 7 years ago
Did you get it working on chrome for iOS? I'm having the exact same problem, but i can't get chrome on iOS to ask for the microphone permission.
No, it seems that Chrome for iOS doesn't have getUserMedia yet... maybe we have to wait for an update. It does work on Chrome (PC); I forgot to specify the platform, sorry.
Not working yet in Chrome on iOS 11, and not working with the new WebRTC support in iOS 11 Safari either. Update: working in the Safari 11 preview (macOS only).
I'm also having the same issue on iOS 11; it's working with the Safari 11 preview on macOS.
Well, feeding the audiostream from the getUserMedia callback directly into an audio element works. Which suggests there is an issue while converting the raw bytes into wav.
Indeed. In my test I managed to get a MediaStream and attach it to an audio element, but I can't store its bytes. Everything I find online points either to MediaRecorder, which is not yet implemented in Safari 11, or to libraries like RecordRTC that don't support it yet either. For now this seems like a dead end; any clue how we can overcome it?
I dug into the code and noticed that in RecordRTC.js (line 2874), the function onAudioProcessDataAvailable is never triggered:
jsAudioNode.onaudioprocess = onAudioProcessDataAvailable;
It seems that on iOS 11 Safari, onaudioprocess is never fired at all... so we don't get the data and the buffer remains empty.
I found this interesting post: https://forums.developer.apple.com/thread/83854 but I don't understand how to proceed with it:
Actually, it appears the issue is that the user needs to explicitly activate audio output via a touch action. Even though this is for audio input, it still uses AudioContext, which requires explicit touch activation.
=> I tried using a simple button but got the same result.
<!DOCTYPE html>
<html>
<head>
  <script src="https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js"></script>
  <script src="https://cdn.webrtc-experiment.com/gumadapter.js"></script>
</head>
<body>
  <audio id="toto" controls></audio>
  <div id="result" style="max-width:500px; word-wrap: break-word;"></div>
  <button onclick="record()">RECORD</button>
  <script>
    function successCallback(audioStream) {
      // RecordRTC usage goes here
      var recordRTC = RecordRTC(audioStream, {recorderType: StereoAudioRecorder});
      recordRTC.startRecording();
      setTimeout(function() {
        recordRTC.stopRecording(function(audioURL) {
          var recordedBlob = recordRTC.getBlob();
          recordRTC.getDataURL(function(dataURL) {
            var audio = document.querySelector('audio');
            audio.src = dataURL;
            document.getElementById("result").innerHTML = dataURL;
          });
        });
      }, 1000);
    }

    function errorCallback(error) {
      // maybe another application is using the device
    }

    function record() {
      var mediaConstraints = { audio: true };
      navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
    }
  </script>
</body>
</html>
I don't have an Apple developer account yet. Can anyone here add a comment there? Maybe we can get a solution from them.
I'm using a button as well without success.
@iamrototo After some digging, your conclusion seems right: I tried to do a recording without this library (raw bytes) and couldn't get the onaudioprocess function to trigger on an iPad Pro.
I found this thread, "WebRTC not working in iOS 11 beta"; it looks like developers have been having issues with the WebRTC implementation since the beta.
Edit: More threads:
The latest iOS update (11.0.1) did not fix the problem =(
Same problem
It seems that the recorder feature is not even planned by Apple. https://github.com/muaz-khan/RecordRTC/issues/323#issuecomment-332275840
@iamrototo I have been struggling with the same issue for the last few days. In your example you are indeed triggering the initial call using a button. The problem is that getUserMedia returns a promise, and when the success handler is called, Apple sees this as a script rather than a user input and blocks the AudioContext from operating.
I have a working demo here, but it's really just a working demo at this stage and not a usable library.
The ultimate solution for the RecordRTC library is to initialise the AudioContext from the click handler, then use this instance later on once the audio stream has been captured. Hope that makes sense.
Regarding your answer @danielstorey: I migrated my example to initialise the audio stream when the page loads, and then on click initialise RecordRTC with the previously captured audio stream. Good news: it works =).
Here is the HTML sample:
<!DOCTYPE html>
<html>
<head>
  <script src="https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js"></script>
  <script src="https://cdn.webrtc-experiment.com/gumadapter.js"></script>
</head>
<body>
  <audio id="toto" controls></audio>
  <div id="result" style="max-width:500px; word-wrap: break-word;"></div>
  <button onclick="record()">RECORD</button>
  <script>
    var audioStream;

    function record() {
      // RecordRTC usage goes here
      var recordRTC = RecordRTC(audioStream, {recorderType: StereoAudioRecorder});
      recordRTC.startRecording();
      setTimeout(function() {
        recordRTC.stopRecording(function(audioURL) {
          var recordedBlob = recordRTC.getBlob();
          recordRTC.getDataURL(function(dataURL) {
            var audio = document.querySelector('audio');
            audio.src = dataURL;
            document.getElementById("result").innerHTML = dataURL;
          });
        });
      }, 1000);
    }

    function errorCallback(error) {
      // maybe another application is using the device
    }

    // Capture the stream at page load; record() reuses it on click.
    navigator.mediaDevices.getUserMedia({ audio: true }).then(
      function success(audioStreamCaptured) {
        audioStream = audioStreamCaptured;
      }
    ).catch(errorCallback);
  </script>
</body>
</html>
Of course this is a demo test; it would be better to patch the library directly as you suggested, but I'd like to thank you for your progress on resolving this issue =). Maybe @muaz-khan can look at it?
@danielstorey Nice find! Thanks! Using @iamrototo's solution worked for me!
Hi @iamrototo, not working on my iPhone 6 with iOS 11. Just after recording, when I press stop, there's an 'error' indication. Using: https://www.webrtc-experiment.com/RecordRTC/simple-demos/audio-recording.html
@cloone1 : Please try again using my HTML sample instead of the audio-recording.html and tell me if it works.
Has anyone tested the example code in https://www.webrtc-experiment.com/RecordRTC/simple-demos/audio-recording.html on the newest iOS (11.1)? As far as I can see, it doesn't work anymore. However, the example code in https://danielstorey.github.io/webrtc-audio-recording/ works the first time you record, but not subsequent times you try to record. Since I would like to have several recordings done on one page, this appears to be a blocker for me on iOS.
I made some examples with several APIs and different formats; I hope this helps someone: https://recordtest-1e799.firebaseapp.com
@GersonRosales Thanks for your input. However, when I test your example code on an iPhone with iOS 11.1, it doesn't work. It appears to record, but cannot play back the recording.
Secondly, your example code does several recordings, but does so by splitting it up into one recording per page. The layout I already have working on all other browsers than Safari, does several recordings on the same page. I would therefore like to change as little as possible in the existing design while also supporting Safari, but that is proving difficult. Currently, I can get one recording per page to work correctly on iOS 11.1, but not more than that.
@larschristensen I think you can use https://recordtest-1e799.firebaseapp.com/example_3.html and create an li element for each recording when you click stop, building up a list of recordings.
@GersonRosales Example 3 indeed seems able to correctly start and stop multiple recordings on iOS11.1, so this approach looks promising. Thanks for pointing me in this direction, I'll look more into it.
@iamrototo I can't get your example to work on the second recording, on Safari macOS... have you been able to?
@GersonRosales Example 3 worked for me too, thank you! I used little pieces of it in a button I open sourced it here, creds included!
I love to help! @nick-jonas, if it's not too much to ask, could you put my name in a tag, @GersonRosales, please.... =D
Here is the code for the example 3, just in case someone needs it https://github.com/GersonRosales/Record-Audios-and-Videos-with-getUserMedia
@GersonRosales Just wanted to let you know that I have now successfully integrated your example code into my existing page. Thanks a lot for your help with this.
@GersonRosales done!
None of these solutions appear to work for a Node + React app using RecordRTC. I've tried the basic HTML5 demo version that @iamrototo posted, but it results in the same no-data issue as before.
Are there any working examples of RecordRTC working for Safari iOS11 using the NPM version of RecordRTC?
@keyhole425 There is an issue with the current version of RecordRTC in terms of support for iOS. Have you tried this example from @GersonRosales?
@nick-jonas @GersonRosales -- https://recordtest-1e799.firebaseapp.com/example_3.html is the only example I have tried that has actually worked in Safari. For me, it worked on iOS Safari 11 but NOT in MacOS Safari 11 (where the recording contained only silence on playback). Still, I'm excited to finally see something working on iOS!
What I am ultimately looking to do is record audio then upload it to a server via: let reader = new window.FileReader(); reader.readAsDataURL(blob); // where "blob" is a new recording created in Safari
This approach works in all other browsers. Do either of you know offhand if the output produced by the method used in that working example is compatible with being passed into a FileReader?
I can test myself and report back but it would be good to know before I put in the work to rip RecordRTC out and put in that example code instead.
I haven't managed to get RecordRTC working at all in Safari 11 but I'm not sure if my use of reader.readAsDataURL() is the problem. I've tried all the workarounds mentioned in the other threads (specifying/not specifying StereoAudioRecorder, specifying mono, ensuring the process kicks off with a click event, messing with RecordRTC Chrome user agent).
One more thing to add is that I've verified that iOS simulator on Mac doesn't support WebRTC. In the example above, it throws the error "Invalid constraint". So if you want to test some code locally in mobile Safari, you can't use Desktop Safari or iOS simulator as a stand-in (since the former is different and the latter doesn't work at all) and you have to use a proxy like https://ngrok.com/
@DrLongGhost I just tested and it works for me on MacOS Safari 11.
Safari Version 11.0 (12604.1.38.1.7) MacOS Sierra (OSX 10.12.6)
Make sure you've allowed access, that you don't have tabs in any other browser requesting microphone access, and that your mic input is properly set (System Preferences > Sound > Input).
@DrLongGhost try to make a recording with headphones in MacOS Safari 11 https://gersonrosales.github.io/Record-Audios-and-Videos-with-getUserMedia/example_3.html
I was able to get desktop Safari working with that example by rebooting my laptop. Previously when it was not working, I definitely had everything set up properly in my System Preferences, since the example worked in all other browsers. Rebooting Safari did not fix things but rebooting my laptop did.
Desktop Safari now seems to honor my System Preferences input settings (as long as I reboot Safari in between changes) whereas previously it was stuck in a broken state. Now, I can even have a tab in Chrome authorized to use the microphone without it breaking Safari, so I'm not sure why it wasn't working previously.
I'm going to proceed with trying to integrate the approach in this example into my application (minus the mp3 encoding). Hopefully, it'll go smoothly. Thanks for the help.
@GersonRosales @nick-jonas -- I wanted to update you both on my (lack of) progress in the hope that either of you may be able to either help me out or at least confirm my findings.
No matter what I do I cannot get audio data recorded via WebRTC to submit to a backend server via XHR in Safari.
My original attempts used RecordRTC to get the audio then I'd build a FormData() and submit it. It worked in all browsers but failed in Safari.
Next, I started from Gerson's Example 3 and removed all the mp3 encoding and instead used this MediaRecorder polyfill (https://github.com/ai/audio-recorder-polyfill) to convert the audio buffers from the stream to a wav file. I then used the FileReader API to get a base64-encoded version of the wav file and uploaded that to my server. The result was the same as my original attempts with RecordRTC -- an example that works in all browsers except Safari where I end up uploading a wav file that contains only silence.
I then changed tactics and kept the Example 3 code as unchanged as possible and modified only one line. I changed "audioElement.src = URL.createObjectURL(encoder.finish())" TO:
let mp3File = new File(
[encoder.finish()], 'Recording.mp3', {type: 'audio/mp3', lastModified: Date.now()}
);
let formData = new FormData();
formData.append('title', 'Safari mp3');
formData.append('audio', mp3File);
doXhrPost(formData);
Once again I end up with an example that works flawlessly everywhere EXCEPT Safari. It seems like the "formData.append('audio', mp3File)" silently fails in Safari. This is difficult to verify since Safari apparently doesn't support FormData.get() or any of the other methods that let you see what is inside an instance.
This is what I think is happening... there are security restrictions around audio data created with WebRTC in Safari. These restrictions prevent the data from being sent in an XHR POST or appended to a FormData instance. Is this plausible? It's equally possible I'm just dealing with some combination of mistakes on my end and Safari bugs.
I'd really like a path forward so if you have any suggestions or can verify what I'm seeing, I'd greatly appreciate the help.
The only other thing I can think to do is to update my backend to handle the ByteArrays from the audio buffers and try POSTing those (and doing the wav conversion in Node). Or maybe try to clone the objects somehow to fool the security restrictions (assuming those restrictions are even real).
Hope you find something with iOS Safari. What pain ... ;)
@DrLongGhost Try looking at how the encoder.finish() method works:
Encoder.prototype.finish = function(mimeType) {
  var nBytes = lame_encode_flush(this.gfp, this.dstPtr, this.dstSz);
  this.mp3Buffers.push(new Uint8Array(this.dstBuf.subarray(0, nBytes)));
  var blob = new Blob(this.mp3Buffers, {type: mimeType || 'audio/mpeg'});
  this.cleanup();
  return blob;
};
Maybe you can split this code and build the blob on the other side.
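As a sketch of that suggestion, the helpers below assemble the Blob from the raw mp3 chunks (the Uint8Array entries in mp3Buffers) separately from finish(). These helpers are hypothetical, not part of the encoder:

```javascript
// Hypothetical helpers: concatenate the encoder's mp3 chunk list
// (an array of Uint8Array) and build the Blob in a separate step,
// so the Blob construction can be moved "to the other side" if needed.
function concatChunks(chunks) {
  var total = chunks.reduce(function (n, c) { return n + c.length; }, 0);
  var out = new Uint8Array(total);
  var offset = 0;
  chunks.forEach(function (c) {
    out.set(c, offset);
    offset += c.length;
  });
  return out;
}

function buildMp3Blob(chunks) {
  return new Blob([concatChunks(chunks)], { type: 'audio/mpeg' });
}
```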
@DrLongGhost Another idea could be to transform the result of encoder.finish() to base64 and then send it:
let reader = new window.FileReader();
reader.readAsDataURL(audioData); // audioData must be a Blob, not an array
reader.onloadend = function() {
  var audioBase64 = reader.result;
  let audioTurned = audioBase64.substr(audioBase64.indexOf(',') + 1);
};
and then convert back from base64 to a Blob.
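The base64-to-Blob direction can be sketched like this, assuming the "data:...;base64," prefix has already been stripped as in the snippet above:

```javascript
// Decode a bare base64 string (no "data:...;base64," prefix) back into a Blob.
function base64ToBlob(base64, mimeType) {
  var binary = atob(base64);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mimeType });
}
```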
Well, I finally had a bit of a breakthrough but my excitement is tempered by Safari's overall instability.
I was completely unable to upload audio via FormData() in Safari. BUT what finally worked was using reader.readAsDataURL() on the mp3 generated from the code in Example 3. That finally gave me valid audio I could upload and play back from the server.
The problem now is that Safari seems way too unstable for me to add in all this complexity to get a semi-working solution. Desktop Safari seems to work ok until it breaks for some unknown reason and then needs a laptop reboot to fix it. Mobile Safari is a little more predictable in that it will work exactly once before I need to do the "swipe up" app reboot of Safari to be able to make my second recording. The recording produced on my iPhone 6S was also choppy at the beginning and there are long pauses while it initializes all the WebRTC/mp3 stuff, so while it worked, it wasn't a great experience.
I now need to make the judgment call whether users will be more inconvenienced by seeing a "Not Supported" message when they go to record in Safari or have something that seems like it works but silently fails 30-50% of the time. It's a tough call. I think I may flag Safari support as "experimental" in my UI.
Thanks for all the help. I appreciate it a lot and I'll post again if I have any more new findings.
Example 3 works the first time, but it doesn't work after locking and unlocking the iPhone.
Safari Version 11.0 iOS 11.2.1
It works now on my iPhone 6... iOS 11.1.2, and on macOS 10.13.1 Safari. Good news. The first start has about 2-3 seconds clipped; after that it's OK (on iPhone).
Example 3 works on my iPhone 5S using iOS 11 Safari, which is great to see. However, I can still get into a state of capturing silent recordings after coming out of a screen lock and doing a couple of page reloads (not browser restarts). As with my original try at an iOS recorder, I believe in this mode onaudioprocess is still firing and the AudioContext state is still 'running', but I will need to instrument the Example 3 code more to confirm. Example 3 seems to be the best performer so far, but I assume most applications must function properly under all conditions, that is, following a power cycle of the device, a browser restart, a page refresh, or a phone unlock, which are the 4 conditions I've been using to "break" the code.
I was able to confirm that in this silent audio mode that onaudioprocess is firing as expected and the audioContext state is "running" so neither of those is the problem. The interesting thing is that the value of the variable 'volume' passed to drawAudio is =0 in this case, which is not the case when things are working normally. You can observe this visually as the meter does not flicker when you're in this failed state.
I am wondering now if this is some deeper problem with iOS 11 that cannot be fully circumvented with code gymnastics, meaning we have to wait for Apple to "fix it" (or not). Here's a loosely related thread, possibly a symptom of the same or a similar problem:
Is anyone else facing the phone unlock problem with Example 3 or have any suggestions what might be the cause? As another clue the small red mic icon normally shown in the upper left corner appears to be missing in this state, even though the microphone access has been granted by the user.
@shejazi Did you manage to find a solution to the silent audio problem? I'm having the exact same issue.
I tried to convert the blob created by encoder.finish() to base64, but I always get: Uncaught TypeError: Failed to execute 'readAsDataURL' on 'FileReader': parameter 1 is not of type 'Blob'
I opened a bug report with Apple for the iOS issue with audio input breaking when you sleep your phone, switch to another app, or simply randomly. I doubt they'll fix it anytime soon, but it's pretty clearly a bug on their end.
On the plus side, I may have a partial work around. Using @GersonRosales test page, I noticed a pattern in when the audio input will and will not break. The upshot is this -- if you open a new tab in Safari every time you want to make a new recording, the process seems to work reliably. The downsides of this approach are that it's aesthetically unappealing and you need to re-authorize mic access each time. But it seems like this is the only thing that will work because once a tab is "broken", it cannot be fixed by reloading or navigating away to a different page. It is poisoned and must be put down in a humane fashion.
Assuming mobile Safari allows you to programmatically open and close tabs automatically, I think this method will work for my purposes and make recording on iOS work reliably. I'm intending to experiment with it next week and see how well it works in practice.
The completed solution would work as follows:
1) A user clicks a link/button in the UI to begin recording
2) A new tab is automatically opened
3) The new tab requests access to the microphone
4) The user records their audio and clicks Stop
5) The tab closes automatically once the audio has been uploaded or processed or whatever you intend to do with it. The user now sees the original page.
6) If the user wants to make a second recording, they start again with Step 1
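A loose sketch of steps 2 and 5 is below. Whether mobile Safari actually permits this is exactly what remains to be tested; recorder.html and the message shape are made-up placeholders:

```javascript
// Step 2: open the recorder tab. Must be called from a click handler,
// otherwise Safari's popup blocker will intervene.
function openRecorderTab() {
  return window.open('/recorder.html', '_blank'); // hypothetical page
}

// Step 5, inside recorder.html: hand the result back and close the tab.
// Safari only allows window.close() on windows that were opened by script.
function finishRecording(uploadedUrl) {
  if (window.opener) {
    window.opener.postMessage(
      { type: 'recording-done', url: uploadedUrl }, // assumed message shape
      window.location.origin
    );
  }
  window.close();
}
```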
Hi @muaz-khan @iamrototo ,
It's not working on iPhone. Is it not compatible with iOS, or am I missing something?
Note: I've tried the code which @iamrototo provided above; it gives me a blank file.
Thanks!
Lots of information above. I believe there are multiple underlying issues. Per more recent comments, I also ran into the remaining problem that recordings on iOS/Safari after the first one resulted in empty/silent audio. I think I've found a reason and a workaround for that as well, without having to reload the page or launch new tabs.
I couldn't get getUserMedia to work for iOS/Safari on jsfiddle or codepen, so I'm using glitch instead:
This demo shows ability to record multiple recordings on ios/safari (and all other device/browsers) with immediately playback in browser. Tested on iPad Pro 9.7" running iOS 11.2.6.
In addition to the other caveats in earlier comments, the additional caveat is that the audio stream returned by getUserMedia cannot be reused for multiple recordings. A new one is needed for each recording. This seems to only be true for iOS/Safari. The result when reusing the audio stream is that everything appears to work, but the resulting audio is silent.
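That caveat can be folded into a small wrapper. This is a sketch using the RecordRTC calls from the samples above; recordOnce is a hypothetical helper, not part of any library:

```javascript
// Request a brand-new stream for every recording (required on iOS/Safari),
// and release its tracks afterwards so the next recording starts clean.
function recordOnce(durationMs, onBlob) {
  return navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var recorder = RecordRTC(stream, { recorderType: StereoAudioRecorder });
    recorder.startRecording();
    setTimeout(function () {
      recorder.stopRecording(function () {
        onBlob(recorder.getBlob());
        // Do NOT reuse this stream; stop it and request a fresh one next time.
        stream.getTracks().forEach(function (track) { track.stop(); });
      });
    }, durationMs);
  });
}
```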
@kaliatech I don't have an iPhone right now, so I'll test your demo in the morning.
I need record, play, pause and resume functionality in my web app, and also the ability to save the recording in mp3 format. Is that possible with the demo you provided?
@iamrototo - It should also work on iOS 11.3 beta 4. I have now verified it works on ipad/11.2.5, iphone/11.2.6 and ipad/11.3 beta 4. A common reason for the permission error is that the page is being accessed via HTTP instead of HTTPS. The getUserMedia only works over HTTPS.
@umairm638 - Your question is outside the scope of this GitHub issue. But yes, you can record/play/pause/resume. The native iOS recording format is WAV, but there are ways to convert to mp3.
Since iOS 11 and its support for getUserMedia, I have wanted to use the record feature in Safari on iPad, but it is not working at all.
I checked this issue (https://github.com/muaz-khan/RecordRTC/issues/275) and adapted my code to use StereoAudioRecorder. I get no error in the console, but I record nothing (the data is 44 bytes, i.e. just an empty WAV header), even though it works on Chrome (PC).
Here is my dumb test index. I try to record the user's voice for 1 second, then stop and get the data. For easy debugging I push the base64 into the HTML, and I have a quick HTML5 audio element to play back the sound.
Thanks for your help.