JamesBrill / react-speech-recognition

💬 Speech recognition for your React app
https://webspeechrecognition.com/
MIT License

How to use the useSpeechRecognition hook like local useState #74

Closed: erdenebayrd closed this 3 years ago

erdenebayrd commented 3 years ago

Hi, how do I use the transcript from `const { transcript, resetTranscript } = useSpeechRecognition();` like local state? I have several text fields that use useSpeechRecognition with `transcript` as their value, but when one text field's value changes, every component containing a text field changes as well.

petrkrejcik commented 3 years ago

I have the same issue. I've solved it by extracting the hook into a context that is used by all the inputs (a rough sketch is below). The downside of this approach is that all child components of this context are re-rendered whenever the transcript changes.

But I would prefer to use the useSpeechRecognition independently in each component.
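
Roughly like this, as a sketch (the TranscriptProvider / useSharedTranscript names are my own, purely illustrative, not part of the library):

import React, { createContext, useContext, ReactNode } from 'react';
import { useSpeechRecognition } from 'react-speech-recognition';

// Share a single hook instance through context instead of calling the
// hook once per input
const TranscriptContext = createContext<ReturnType<typeof useSpeechRecognition> | null>(null);

export function TranscriptProvider({ children }: { children: ReactNode }) {
  // The hook is called once here, so every consumer sees the same transcript
  const speech = useSpeechRecognition();
  return (
    <TranscriptContext.Provider value={speech}>
      {children}
    </TranscriptContext.Provider>
  );
}

// Each input reads the shared transcript from context
export const useSharedTranscript = () => useContext(TranscriptContext);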

JamesBrill commented 3 years ago

Hi @erdenebayrd and @petrkrejcik, I'm not sure I fully understand the problem scenario, so let me see if I can confirm my understanding: you have multiple text inputs, each calling `useSpeechRecognition`, and you want each input to receive its own local transcript rather than all of them updating together.

Bear in mind that there is only one microphone and only one transcript being produced at any one time - this transcript is effectively a global state. However, there are ways you can handle it differently at a local level:

* If you only want a given input to be updated by voice while it's focused, you can use the `transcribing` option to control when that component consumes the transcript:

import React, { useState } from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const Dictaphone = () => {
  const [transcribing, setTranscribing] = useState(false)
  const enableTranscribing = () => setTranscribing(true)
  const disableTranscribing = () => setTranscribing(false)
  // Only consume the global transcript while this input is focused
  const { transcript } = useSpeechRecognition({ transcribing })

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return null
  }

  return (
    <input
      type='text'
      value={transcript}
      onFocus={enableTranscribing}
      onBlur={disableTranscribing}
    />
  )
}

export default Dictaphone

* If you want the input to be editable both by keyboard input and voice, you'll probably need your own local state for the input value. If you set the value as `transcript`, the input will become uneditable by keyboard. While it's possible to support both input types in the same text input, I'm not sure if this is a great user experience - whenever the input gets updated by voice, the cursor will jump to the end of the input. That said, here's a quick sketch of how you _could_ support both voice and keyboard input (potentially buggy due to jumpy cursor):

import React, { useState, useEffect } from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const Dictaphone = () => {
  const [text, setText] = useState('')
  const { finalTranscript, resetTranscript } = useSpeechRecognition()

  useEffect(() => {
    if (finalTranscript !== '') {
      // User has finished speaking, so append the speech to the input.
      // The functional update avoids reading a stale `text` inside the effect
      setText(t => `${t} ${finalTranscript}`)
      // Create a fresh transcript to avoid the same transcript being appended multiple times
      resetTranscript()
    }
  }, [finalTranscript, resetTranscript])

  const onChange = (e) => {
    setText(e.target.value)
  }

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return null
  }

  return (
    <input type='text' value={text} onChange={onChange} />
  )
}

export default Dictaphone



I'm somewhat guessing your use case here, so perhaps you could provide some example code so that I can understand it better and give more helpful insights. Thanks!

sebastienbarre commented 1 year ago

I'd be curious to know how people solved this one. It was very puzzling to witness in my own app. Idiomatically, I expected useSpeechRecognition() to behave like useState(), meaning the values (i.e. the transcript) would be completely local to the component, not global to all components that call the hook. But it is true that there is only one microphone, so the listening state certainly would be global.

As for that first solution, I'm not sure how it can work: most UIs will have a separate button to activate recognition (say, next to the input field itself), and that button will steal the focus. That triggers the input's onBlur, which sets transcribing to false right before recognition even starts.

Thanks.
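
Edit: one workaround I can think of, sketched below under my own assumptions (this is not from the library docs), is to stop the mic button from stealing focus in the first place:

import React from 'react';

// Hypothetical MicButton: calling preventDefault on mousedown stops the
// button from taking focus, so the focused input's onBlur never fires and
// transcribing stays enabled while recognition starts
function MicButton({ onClick }: { onClick: () => void }) {
  return (
    <button onMouseDown={(e) => e.preventDefault()} onClick={onClick}>
      🎤
    </button>
  );
}

export default MicButton;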

rohankm commented 1 year ago

The above solutions are not working for me... Is there a better way to use it locally, like useState()?

alimehasin commented 1 year ago

This custom hook might help:

import { activeDictAtom } from '@/atoms';
import { useAtom } from 'jotai';
import { useCallback, useMemo } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

// activeDictAtom is a jotai atom (defined elsewhere in the app) holding the
// key of the field that currently owns the microphone
export default function useDictation(key: string) {
  const [activeDict, setActiveDict] = useAtom(activeDictAtom);

  const {
    transcript,
    listening: lstn,
    resetTranscript,
    browserSupportsSpeechRecognition,
  } = useSpeechRecognition({
    // Only consume the transcript while this field is the active one
    transcribing: activeDict === key,
  });

  // This field counts as "listening" only if the mic is on AND it is the active field
  const listening = useMemo(() => lstn && activeDict === key, [lstn, activeDict, key]);

  const start = useCallback(() => {
    setActiveDict(key);
    SpeechRecognition.startListening();
  }, [key, setActiveDict]);

  const stop = useCallback(() => {
    setActiveDict(key);
    SpeechRecognition.stopListening();
  }, [key, setActiveDict]);

  const toggle = useCallback(() => {
    if (listening) {
      stop();
    } else {
      start();
    }
  }, [listening, start, stop]);

  return {
    stop,
    start,
    toggle,
    listening,
    transcript,
    resetTranscript,
    browserSupportsSpeechRecognition,
  };
}
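
For completeness, a hypothetical usage example (the component and the 'notes' key are mine, purely illustrative):

import React from 'react';
import useDictation from './useDictation'; // path is an assumption

function NotesField() {
  // Each field passes a unique key; only the active field consumes the transcript
  const { transcript, listening, toggle, browserSupportsSpeechRecognition } =
    useDictation('notes');

  if (!browserSupportsSpeechRecognition) {
    return null;
  }

  return (
    <div>
      <textarea value={transcript} readOnly />
      <button onClick={toggle}>{listening ? 'Stop' : 'Start'} dictation</button>
    </div>
  );
}

export default NotesField;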