y-71 opened this issue 1 year ago
@y-71 you can start with your device and browser configuration and the framework you used. Post your code snippets, or put your code in a CodeSandbox or somewhere I can reproduce the bug.
Hi @chengsokdara,
I actually got it working and forgot to come back.
I had to copy all of the library's code into my codebase.
I used the code from the README:
import { useWhisper } from '@chengsokdara/use-whisper'

const App = () => {
  const {
    recording,
    speaking,
    transcribing,
    transcript,
    pauseRecording,
    startRecording,
    stopRecording,
  } = useWhisper({
    apiKey: process.env.OPENAI_API_TOKEN, // YOUR_OPEN_AI_TOKEN
  })

  return (
    <div>
      <p>Recording: {recording}</p>
      <p>Speaking: {speaking}</p>
      <p>Transcribing: {transcribing}</p>
      <p>Transcribed Text: {transcript.text}</p>
      <button onClick={() => startRecording()}>Start</button>
      <button onClick={() => pauseRecording()}>Pause</button>
      <button onClick={() => stopRecording()}>Stop</button>
    </div>
  )
}
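One caveat for anyone copying this into a Vite project (mine is, see package.json below): process.env is not populated in browser code, so process.env.OPENAI_API_TOKEN ends up undefined at runtime unless you define it yourself. The usual Vite approach is a VITE_-prefixed variable read through import.meta.env, roughly like this sketch (VITE_OPENAI_API_KEY is just an example name, and any key bundled into client code is visible to users):

import { useWhisper } from '@chengsokdara/use-whisper'

// Vite only exposes VITE_-prefixed variables from .env files to client code,
// and they are read through import.meta.env rather than process.env.
const App = () => {
  const { transcript, startRecording, stopRecording } = useWhisper({
    // Example variable name; define it in a local .env file.
    apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  })

  return (
    <div>
      <p>{transcript.text}</p>
      <button onClick={() => startRecording()}>Start</button>
      <button onClick={() => stopRecording()}>Stop</button>
    </div>
  )
}

export default App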
Here is my package.json:
{
  "name": "goethe-speakeasy.spa",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "start": "vite",
    "build": "tsc && vite build",
    "preview": "vite preview",
    "generateMyAppConfig": "node generateMyAppConfig.cjs",
    "prettier:write": "prettier --write src",
    "prettier:check": "prettier --check src"
  },
  "dependencies": {
    "@chengsokdara/react-hooks-async": "^0.0.2",
    "@chengsokdara/use-whisper": "^0.2.0",
    "@ffmpeg/ffmpeg": "^0.11.6",
    "@lessonnine/design-system.lib": "4.5.0",
    "@lessonnine/design-tokens.lib": "^4.2.0",
    "axios": "^1.3.4",
    "hark": "^1.2.3",
    "lamejs": "^1.2.1",
    "openai": "^3.2.1",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-router-dom": "^6.9.0",
    "recordrtc": "^5.6.2",
    "regenerator-runtime": "^0.13.11"
  },
  "devDependencies": {
    "@types/hark": "^1.2.2",
    "@types/node": "^18.15.5",
    "@types/react": "^18.0.28",
    "@types/react-dom": "^18.0.11",
    "@types/recordrtc": "^5.6.10",
    "@vitejs/plugin-react": "^3.1.0",
    "jsonfile": "^6.1.0",
    "prettier": "^2.8.5",
    "typescript": "^4.9.3",
    "vite": "^4.2.0"
  }
}
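For context on the workaround: most of the extra packages above (@chengsokdara/react-hooks-async, @ffmpeg/ffmpeg, hark, lamejs, recordrtc) are, as far as I can tell, the hook's own dependencies, listed directly because the hook's source now lives in my codebase instead of node_modules. The only change on the usage side is that the import points at a local path; a sketch, with a hypothetical path:

// Hypothetical local path after copying the hook's source into the app;
// the component itself stays identical to the README example above.
import { useWhisper } from './lib/use-whisper'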
I'm using the latest Chrome
Hi @chengsokdara!
I can't get it to work at all, and I have tried what @y-71 describes about adding the dependencies to the project.
When I press the "Start" button it detects that I am speaking (speaking -> true), but it does not record (recording -> false and transcribing -> false).
transcript.blob -> undefined, transcript.text -> undefined
I'm using: Mac M1, React 18.2.0, Vite, Chrome 111.0.5563.64 (official build) (arm64).
Code:
import { useWhisper } from '@chengsokdara/use-whisper';

const App = () => {
  const {
    recording,
    speaking,
    transcribing,
    transcript,
    pauseRecording,
    startRecording,
    stopRecording,
  } = useWhisper({
    apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  });

  return (
    <div>
      <p>Recording: {recording ? 'recording' : ''}</p>
      <p>Speaking: {speaking ? 'speaking' : ''}</p>
      <p>Transcribing: {transcribing ? 'transcribing' : ''}</p>
      <p>Transcribed Text: {transcript.text ?? 'nothing here'}</p>
      <button onClick={() => startRecording()}>Start</button>
      <button onClick={() => pauseRecording()}>Pause</button>
      <button onClick={() => stopRecording()}>Stop</button>
    </div>
  );
};

export default App;
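In case it helps narrow this down, registering global error handlers before mounting the app is a cheap way to see whether anything is being thrown and lost along the way; it only surfaces errors the hook does not swallow internally, but it costs nothing to try. A minimal sketch using only standard browser APIs:

// Log any promise rejection nothing else handles, so a silent failure at
// least leaves a trace in the console.
window.addEventListener('unhandledrejection', (event) => {
  console.error('Unhandled rejection:', event.reason)
})

// Synchronous errors outside promises show up here instead.
window.addEventListener('error', (event) => {
  console.error('Uncaught error:', event.error ?? event.message)
})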
Thanks in advance!
Facing the same issue.
I've tried to use it, but I wasn't able to get any information about why it wasn't working. I wanted to investigate, but couldn't because it failed silently.
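A cheap first check when it fails with no output at all: request a microphone stream directly with the standard MediaDevices API, outside the hook. If that fails, the problem is permissions or the device; if it succeeds, the failure is somewhere inside the recording/transcription pipeline. A minimal sketch:

// Standalone microphone sanity check, independent of use-whisper.
async function checkMicrophone(): Promise<void> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
    console.log(
      'Microphone OK:',
      stream.getAudioTracks().map((track) => track.label),
    )
    // Release the device again so it does not stay "in use".
    stream.getTracks().forEach((track) => track.stop())
  } catch (err) {
    // NotAllowedError / NotFoundError here would explain a hook that never records.
    console.error('getUserMedia failed:', err)
  }
}

checkMicrophone()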