muaz-khan / RecordRTC

RecordRTC is a WebRTC JavaScript library for audio/video as well as screen activity recording. It supports Chrome, Firefox, Opera, Android, and Microsoft Edge. Platforms: Linux, Mac, and Windows.
https://www.webrtc-experiment.com/RecordRTC/
MIT License

Safari (iOS) not supported anymore #748

Open · jeroenwallaeys opened this issue 3 years ago

jeroenwallaeys commented 3 years ago

I've been using RecordRTC for a while now, and only recently has its iOS/Safari support started failing. The issue occurs on line 2142 of the RecordRTC.js file: the initialization of the MediaRecorder fails. Any ideas or workarounds?
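For what it's worth, the failure can be confirmed up front by feature-detecting the Media Recorder API before the recorder is constructed. A minimal sketch (the check itself is standard browser API, not part of RecordRTC):

if (typeof MediaRecorder === 'undefined' ||
    typeof MediaRecorder.isTypeSupported !== 'function') {
  // Safari builds without (or with a disabled) MediaRecorder land here;
  // pick a non-MediaRecorder module such as RecordRTC.StereoAudioRecorder instead.
  console.warn('Media Recorder API unavailable; falling back to StereoAudioRecorder');
}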

sca1235 commented 3 years ago

I have the same issue. I haven't found a workaround yet.

nnirror commented 3 years ago

I'm also having this issue. I can record audio and video with RecordRTC on my computer in Chrome and Firefox, but on iOS I can't record video or audio in either Chrome or Safari. Safari also doesn't work on my computer and logs this message in the console:

Your browser does not support the Media Recorder API. Please try other modules e.g. WhammyRecorder or StereoAudioRecorder.
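Following that message's own suggestion, one workaround for audio-only capture is to force RecordRTC onto StereoAudioRecorder so the native MediaRecorder is never touched. A minimal sketch, assuming mediaStream was already obtained from getUserMedia() inside a user gesture:

// Bypass the native MediaRecorder entirely by forcing the
// StereoAudioRecorder module (audio-only capture).
const recorder = new RecordRTC(mediaStream, {
  type: 'audio',
  mimeType: 'audio/wav',
  recorderType: RecordRTC.StereoAudioRecorder,
  numberOfAudioChannels: 1
});

recorder.startRecording();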

devrony commented 3 years ago

I was able to get my functionality working again. Review these items in your code.

  1. Only capture audio/video from a user-invoked event handler (i.e. a button click). You'll see in my code that I call attemptMediaCapture() from a button click event handler. You can no longer just start recording automatically (e.g. when the user starts speaking).
  2. Every time the record button is pressed, attempt to capture audio and create a new RecordRTC audioRecorder object.
  3. The last gotcha for me was using mediaStream.clone() when providing the mediaStream object to the hark constructor (which I use to detect when the user starts/stops speaking). In my case, I now use a button to start recording, plus some logic that auto-stops, transcribes, and translates once the user has stopped speaking for 2 seconds.

Hope this helps some.

  /**
   * Attempts to capture media from the user. Should ONLY be called from a user-triggered event (i.e. the record button's click event).
   */
  private attemptMediaCapture(callback?: (captureAvailable: boolean) => void) {

    // Reset. We'll need to recapture and re-create the audioRecorder and mediaStream each time this is called.
    this.isMediaCaptureReady = false;

    // IMPORTANT: Must destroy and re-create each time.
    // Do not just call audioRecorder.reset(). This will not work and
    // cause future audio blobs to be basically blank after starting again.
    if (this.audioRecorder) {

      this.audioRecorder.destroy();
      this.audioRecorder = null;
    }

    if (this.mediaStream) {

      // Stop all tracks so the previous capture is fully released.
      this.mediaStream.getTracks().forEach((track) => track.stop());
      this.mediaStream = null;
    }

    const isEdge = navigator.userAgent.indexOf('Edge') !== -1 && (!!navigator.msSaveOrOpenBlob || !!navigator.msSaveBlob);
    const isSafari = /^((?!chrome|android).)*safari/i.test(navigator.userAgent);

    navigator.mediaDevices.getUserMedia({
      audio: isEdge ? true : {
        echoCancellation: false
      }
    }).then((mediaStream) => {

      this.mediaStream = mediaStream;

      this.audioOptions = {
        type: 'audio',
        mimeType: "audio/wav",

        disableLogs: true,

        // This sampleRate should be the same in your server code
        sampleRate: 44100,

        // used by StereoAudioRecorder
        // legal values are in the range 22050 to 96000;
        // let us force 16 kHz recording:
        desiredSampRate: 16000,

        // MediaStreamRecorder, StereoAudioRecorder, WebAssemblyRecorder
        // CanvasRecorder, GifRecorder, WhammyRecorder
        recorderType: RecordRTC.StereoAudioRecorder,

        // Requires mono audio
        // Dialogflow / STT requires mono audio
        numberOfAudioChannels: 1,

        // get intervals based blobs value in milliseconds
        // only needed if using ondataavailable() callback
        //timeSlice: 1000, 

        // Only use if you want to track audio blob chunks manually
        //ondataavailable: (blob: Blob) => { },

        // only for audio track
        //audioBitsPerSecond: 128000
      };

      this.audioRecorder = new RecordRTC(this.mediaStream, this.audioOptions);

      // REF: https://github.com/otalk/hark
      // FUTURE: Tweak these options to support different needs (i.e. volume threshold, etc.)
      const speechOptions = {};

      // IMPORTANT!: MUST use mediaStream.clone(), else Safari will fail to record properly
      const speechEvents = hark(this.mediaStream.clone(), speechOptions);

      // Bind to speech events (i.e. sound detection).
      speechEvents.on('speaking', () => {

        this.logger.add("Speaking...");

        this.isSpeaking = true;

        this.processSpeaking();
      });

      speechEvents.on('stopped_speaking', () => {

        this.logger.add("Stopped Speaking...");

        this.isSpeaking = false;

        this.processStoppedSpeaking();
      });

      this.isMediaCaptureReady = true;

    }).catch((err) => {

      this.isMediaCaptureReady = false;

      this.logger.add("Unable to capture microphone", err)

    }).finally(() => {

      callback?.(this.isMediaCaptureReady);

    });
  }

  public onStartRecording() {

    // Must ALWAYS check if media is available. Only do from event handler.
    this.attemptMediaCapture((captureAvailable) => {

      if (captureAvailable) {

        try {

          this.audioRecorder.startRecording();

          this.isRecording = true;

          this.logger.add("Started recording");
        }
        catch (err) {

          this.isRecording = false;

          this.logger.add("Start recording error", err);
        }
      }
    });
  }
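
The post doesn't show the stop side. A hedged sketch of what processStoppedSpeaking() might look like, using RecordRTC's stopRecording()/getBlob() and a hypothetical handleAudioBlob() helper for the transcribe/translate step:

  // Hypothetical counterpart to onStartRecording(); the 2-second auto-stop
  // logic and handleAudioBlob() are illustrative, not from the original post.
  private processStoppedSpeaking() {

    if (!this.isRecording || !this.audioRecorder) {
      return;
    }

    // Wait ~2 seconds of silence before stopping, as described above.
    setTimeout(() => {

      // The user resumed speaking; keep recording.
      if (this.isSpeaking) {
        return;
      }

      this.audioRecorder.stopRecording(() => {

        const blob = this.audioRecorder.getBlob();

        this.isRecording = false;

        this.logger.add("Stopped recording");

        // Hypothetical handoff to transcription/translation.
        this.handleAudioBlob(blob);
      });
    }, 2000);
  }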