Open SyedHaris94 opened 4 years ago
Hi,
Amazon Transcribe expects the audio to be sent continuously in realtime, the error you shared is returned when a connection to the transcribe client is opened, but no audio is sent.
How are you recording the audio from the user on the frontend? if it's using the microphone, then the recording is in realtime, the next question is how the audio chunks are being sent over the WebSocket. From the looks of it, once the socket is established, the client is sending a start event which is followed by the speech_to_text event. Can you share the way the audio chunks are being emitted from the front end?
Another thing to note is that you don't need to Throttle the stream if it's being sent in realtime from the microphone. You would have to throttle the stream if it's the complete audio sent at once.
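To illustrate the throttling point, here is a minimal Node.js sketch of what pacing a complete audio buffer could look like: split it into fixed-size chunks and emit them at roughly the rate a microphone would produce them. The helper names (`splitIntoChunks`, `paceChunks`) are illustrative only, not part of the aws-transcribe API.

```javascript
// Split a Buffer into fixed-size chunks.
function splitIntoChunks(buffer, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Emit one chunk every `intervalMs` milliseconds, roughly matching the
// rate a live microphone would produce them. `emit` is whatever sends a
// chunk to the server (e.g. a socket emit).
function paceChunks(buffer, chunkSize, intervalMs, emit) {
  const chunks = splitIntoChunks(buffer, chunkSize);
  let i = 0;
  const timer = setInterval(() => {
    if (i >= chunks.length) return clearInterval(timer);
    emit(chunks[i++]);
  }, intervalMs);
}
```

For example, 16 kHz mono 16-bit PCM is 32,000 bytes per second, so `chunkSize: 3200` with `intervalMs: 100` approximates realtime delivery. None of this is needed when the chunks already come from the microphone in realtime.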
Hi, here is my front-end code to send the data using WebSockets:
```js
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  ss(socket).emit("start");
  let vm = this;
  vm.stream_data = stream;
  this.recorder = new window.MediaRecorder(stream);
  this.recorder = RecordRTC(stream, {
    type: "audio",
    mimeType: "audio/webm",
    sampleRate: 44100,
    desiredSampRate: 16000,
    numberOfAudioChannels: 1,
    // 1)
    // get interval-based blobs
    // value in milliseconds
    // as you might not want to make detect calls every second
    timeSlice: 3500,
    // 2)
    // as soon as the stream is available
    ondataavailable: function(blob) {
      let params = {
        speech: true,
        sessionId: vm.session,
        langCode: vm.selected,
        query: stream
      };
      // 3)
      // making use of socket.io-stream for bi-directional
      // streaming, create a stream
      var stream = ss.createStream();
      vm.stream = stream;
      // stream directly to server
      // it will be temp. stored locally
      ss(socket).emit("speech_to_text", stream, {
        name: "stream.wav",
        size: blob.size,
        langCode: vm.selected
      });
      // pipe the audio blob to the read stream
      ss.createBlobReadStream(blob).pipe(stream);
    }
  });
});
```
With `timeSlice: 3500` it works, but if I set the timeSlice value lower than 3500, the backend does not receive the data and gives me this error: `Error: You have reached your limit of concurrent streams, 5. Try again later.`
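One likely cause, sketched here as an assumption rather than a confirmed diagnosis: the `ondataavailable` callback above calls `ss.createStream()` for every blob, so each timeSlice opens a fresh socket.io-stream (and presumably a fresh Transcribe session server-side). With a small timeSlice, new streams open faster than old ones close, which would run into Transcribe's limit of 5 concurrent streams. A sketch of creating one stream per recording session instead (helper name and structure are illustrative, not a drop-in fix):

```javascript
// Create ONE socket.io-stream per recording session and pipe every blob
// into it, instead of a new stream per ondataavailable call.
// `ss`, `socket`, and the event name mirror the snippet above.
function createSessionStream(ss, socket, meta) {
  const stream = ss.createStream(); // single stream for the whole session
  ss(socket).emit("speech_to_text", stream, meta);
  return {
    stream,
    sendBlob(blob) {
      // pipe each chunk into the SAME stream; end: false keeps it open
      ss.createBlobReadStream(blob).pipe(stream, { end: false });
    },
    end() {
      stream.end(); // close when recording stops
    },
  };
}
```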
Are you recording the entire audio on the frontend and then sending it all at once?
The ideal method of doing this would be to stream the audio as it becomes available, I'll try to extend the browser-demo example with server-side streaming over the weekend. In the meanwhile, if you want to play around with it, you can view this repo: https://github.com/qasim9872/aws-transcribe-browser-demo.
The current implementation streams the audio directly from the browser. To extend it, we would have to update this function to send the audio chunk over the socket: https://github.com/qasim9872/aws-transcribe-browser-demo/blob/master/src/js/components/streaming-manager.js#L81, and update the `connectToTranscribe` function in the same file to connect the socket.
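For reference, a minimal shape that updated function could take: forward each audio chunk over the socket instead of handling it locally. The event name `"binary-audio"` and the handler name are assumptions for illustration, not the demo's actual API.

```javascript
// Returns a chunk handler that forwards audio over the socket.
// socket.io can carry ArrayBuffer / Buffer payloads as binary frames.
function makeChunkHandler(socket) {
  return function onAudioChunk(chunk) {
    socket.emit("binary-audio", chunk);
  };
}
```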
Alternatively, if you can share a bit more of your code, maybe a minimum working example, I can try and get that to work.
Hi,
here's an example of sending the audio from a backend server.
https://github.com/qasim9872/aws-transcribe-browser-demo/tree/feature/streaming-from-server
Thanks
I have the same issue. When I send the stream from the web, it gives me an error.
Here is my code:
```js
const client = new AwsTranscribe({
  // if these aren't provided, they will be taken from the environment
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey,
})

let transcribeStream;
})
```
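The snippet above stops short of wiring the socket to Transcribe. One way the glue could look, sketched under the assumption that `transcribeStream` is a writable stream created per the streaming-from-server branch linked earlier: buffer incoming socket chunks until the transcribe stream is open, then flush and forward. `makeForwarder` is an illustrative helper, not part of aws-transcribe.

```javascript
// Buffer incoming socket chunks until a writable transcribe stream is
// attached, then flush the backlog and forward subsequent chunks directly.
function makeForwarder() {
  let target = null;
  const pending = [];
  return {
    // call once the transcribe stream is open
    attach(stream) {
      target = stream;
      while (pending.length) target.write(pending.shift());
    },
    // call for every audio chunk received over the socket
    push(chunk) {
      if (target) target.write(chunk);
      else pending.push(chunk);
    },
  };
}
```

On the server side, `push` would be the handler for the socket's audio event, and `attach(transcribeStream)` would be called from the transcribe stream's open handler; the exact `createStreamingClient` options should be taken from the aws-transcribe README.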