huanmeng opened this issue 11 years ago (status: Open)
It seems that your audio input/output devices are mismatched. Are you using a microphone/headphone, or the notebook's built-in speakers?
I'll test RecordRTC on XP soon.
I run it on a desktop, and the microphone works well. I'll try another microphone...
Hi, must I have Chrome Canary to use only the audio?
No. Audio recording works fine with all the latest Chrome releases, i.e. Canary, Dev, Beta, and >27 Stable.
Audio recording only fails when the input/output audio devices mismatch; for example, if you're using a microphone to record audio and separate speakers to listen to the recorded voice.
Audio is recorded in wav format, which should be well supported on your system.
If it still doesn't work for you, try changing the buffer length. See how the wav file is encoded.
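The buffer length mentioned above lives in the Web Audio ScriptProcessor node that captures PCM samples for the wav encoder. A minimal sketch, assuming your own getUserMedia setup supplies `audioContext` and `mediaStream` (the function name `createCaptureNode` is hypothetical, not RecordRTC's API):

```javascript
// Sketch: wire a MediaStream into a ScriptProcessor node so its raw
// Float32 PCM samples can be handed to a wav encoder. The buffer size
// must be a power of two; a larger value (e.g. 16384) is often more
// reliable on slower machines than the default.
function createCaptureNode(audioContext, mediaStream, bufferLength, onSamples) {
  var source = audioContext.createMediaStreamSource(mediaStream);
  // (bufferSize, inputChannels, outputChannels)
  var processor = audioContext.createScriptProcessor(bufferLength, 2, 2);
  processor.onaudioprocess = function (e) {
    // Float32 samples in [-1, 1]; a wav encoder scales these to 16-bit ints
    onSamples(e.inputBuffer.getChannelData(0));
  };
  source.connect(processor);
  processor.connect(audioContext.destination); // required by older Chrome builds
  return processor;
}
```

Trying 4096, 8192, and 16384 here is what "change buffer length" amounts to in practice.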
Yes, thanks, it works with the 27 beta version... :+1:
Hi!
I've been trying the RecordRTC example with Chrome Canary 28.0.1481.0 and Chrome 27.0.1453.56 beta-m, but it doesn't work with either of them.
I can hear my voice when pressing the "record audio" button, but the output.wav seems to be empty. I think it records a muted buffer.
I'm using headphones with an integrated microphone; the mic is recording at 44100 Hz.
Any idea what the problem might be? Thank you very much.
Hi, me again! I managed to make the demo work. I had to check that my headphones were also working at the same frequency as my mic.
thank you very much.
How can we change the speaker/microphone properties in Windows XP? I mean, how can we find the exact frequency? Windows XP only appears to say the quality is 'good', but I don't know where to see the exact frequency value.
thanks.
Please let us know for which Chrome version it works fine.
RecordRTC works fine on Chrome Version 28.0.1490.2 dev-m and Version 28.0.1496.2 canary. It worked fine on beta too.
Some hardware issues cause failures, especially when you try to use a notebook's built-in speakers. Try a headphone/microphone instead.
RecordRTC is using FileWriter (i.e. webkitRequestFileSystem) APIs to write recorded files to disk. These APIs fail in incognito mode.
Dear Muaz Khan,
First, thanks for your reply.
It's working on my latest Canary, and yesterday I made it work on Chrome 25.0.1364.84 (stable version), which requires the input audio and output speaker devices to have the same sample rate. But it is not working on the latest Chrome versions (working only on my laptop, not on my desktop, because on the laptop I keep the input and output sample rates at the same frequency). Also, I would like to know for which Chrome version all WebRTC experiments work fine.
Following are some of my questions regarding audio recording:
Can I write a program that initially takes the user's name in a hidden HTML element and, using JavaScript, broadcasts it to all users hitting the same URL using socket.io?
Thanks in advance. May I get your mobile number or email ID? Regards, Tushar Kadam.
I would like to know for which Chrome version all WebRTC experiments work fine.
It depends. Tab-sharing and screen-sharing experiments work only on Chrome Beta and Canary. Pre-recorded media streaming works only on Canary, because of its MediaSource API support. All other experiments work fine on all Chrome releases, on Firefox Nightly, and also on Chrome Beta for Android.
Can we manage the frequencies of the input and output devices at the JavaScript level?
I've not tried such a thing yet.
Can we record video with sound?
It is a challenge for us; I've started working on it.
Why is audio recording not working in the latest Chrome version?
I'm sure it is a Web Audio API-specific issue. However, we can't rule out device-incompatibility issues either.
Will audio recording work on a desktop if I somehow set the sampling rates of the input audio and output speaker devices to the same frequency?
Audio recording works fine on all notebooks manufactured by HP, Dell, and Toshiba. I've not tested desktops yet.
Some of your experiments are not using socket.io. Can I convert all your experiments to work with socket.io? (If yes, what changes do I need to make?)
You can use socket.io for signaling in each and every experiment. You just need to replace the code in the openSocket or openSignalingChannel methods.
My experiments' behavior is dynamic: channels and peers are created dynamically. That's why I recommend you use this socket.io implementation.
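A rough sketch of that replacement (the server URL, the room-name normalization, and the helper names `buildChannelName`/`useSocketIOSignaling` are assumptions for illustration, not part of any experiment):

```javascript
// Sketch: swap an experiment's signaling over to socket.io. Assumes a
// socket.io server that simply relays 'message' events within a room.
function buildChannelName(sessionId) {
  // normalize a user-provided id into a safe room/namespace name
  return String(sessionId).trim().replace(/\s+/g, '-').toLowerCase();
}

function useSocketIOSignaling(connection, io, serverURL) {
  connection.openSignalingChannel = function (config) {
    var channel = buildChannelName(config.channel || 'default-channel');
    var socket = io.connect(serverURL + '/' + channel);
    socket.on('message', config.onmessage);   // deliver remote signaling messages
    socket.send = function (data) {           // experiments call channel.send(...)
      socket.emit('message', data);
    };
    if (config.onopen) socket.on('connect', config.onopen);
    return socket;
  };
}
```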
Can I run all your experiments without an internet connection (on a local machine only)?
Maybe; however, I've not tested them that way.
Can I manage the bandwidth required for video conferencing? (For example, I want my whole web application to work on 150 kbps while video conferencing, so it uses less network bandwidth.)
You can use bandwidth constraints. Link
You can also modify the SDP for bandwidth; however, this feature is not implemented yet.
Can I write a program that initially takes the user's name in a hidden HTML element and, using JavaScript, broadcasts it to all users hitting the same URL using socket.io?
RTCMultiConnection uses the word session-id. DataChannel.js uses the word channel-name. All other experiments use a predefined default channel.
You can use users' names as the session-id, channel-name, or default channel, and it will work as you asked.
Is it possible to record audio from a video conference using HTML5 and JavaScript (without using Flash)?
This experiment records audio from both local and remote media streams, though that experiment is quite old.
You can record audio from a single MediaStream. If you meant recording the voices of all participants in a single file, this feature is not implemented in RecordRTC yet. RecordRTC is independent.
You can extract the audio stream from a video conference like this:
var audioTracks = mediaStream.getAudioTracks();
var audioStream = new webkitMediaStream(audioTracks); // audio-only stream
Is there a way to upload the video recorded with RecordRTC to a server directly? Thank you very much.
@zhuochun, you can get the Blob and POST it to the server using an XMLHttpRequest object.
blob = recorder.getBlob();
You can also get the temporarily created file and POST the file directly to the server:
tempFile = recorder.toURL();
You can use the FileReader API to read the file as an array buffer, text, or blob, and POST it to the server.
reader = new FileReader();
reader.readAsDataURL(tempFile);
reader.onload = function (event) {
dataURL = event.target.result;
};
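The XMLHttpRequest step mentioned above could look like the following sketch; the `/upload-audio` endpoint and the function name are assumptions about your own server, not part of RecordRTC:

```javascript
// Sketch: POST a recorded blob to a server as the raw request body.
// The server would read the wav bytes straight from the body.
function uploadBlob(blob, url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', url, true);
  xhr.onload = function () {
    onDone(xhr.status, xhr.responseText);
  };
  xhr.send(blob); // the blob itself becomes the request body
}
```

Usage would be something like `uploadBlob(recorder.getBlob(), '/upload-audio', callback)`, keeping in mind the note further down this thread about what getBlob actually returns.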
@muaz-khan Thank you very much.
I tried to POST the Blob. However, on the server side it cannot be recognized as a video file (details in my question on Stack Overflow). Do you have any hints on solving the problem?
Sorry to say that the getBlob method is not returning a real Blob object; it is just returning a DataURL.
You can get a real Blob from this line; then you can use the FileReader API to read the blob in the following formats:
readAsArrayBuffer
readAsBinaryString
readAsDataURL
readAsText
reader = new FileReader();
reader.readAsArrayBuffer(realBlobObject);
reader.onload = function (event) {
arrayBuffer = event.target.result;
};
@muaz-khan Thanks again. I solved the problem using the real Blob with a FormData POST.
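For reference, a FormData upload of that kind could be sketched as follows; the field name `audio-blob`, the `/upload` endpoint, and the function name are assumptions, not what the commenter necessarily used:

```javascript
// Sketch: send the recorded blob as a multipart/form-data file upload,
// so the server sees it as an ordinary uploaded file with a filename.
function uploadWithFormData(blob, fileName) {
  var formData = new FormData();
  formData.append('audio-blob', blob, fileName); // named file part
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload', true);
  xhr.send(formData); // browser sets the multipart boundary automatically
  return xhr;
}
```

Server-side, this arrives like any `<input type="file">` submission, which is why it is recognized where a raw-body POST was not.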
Hi
I am using the latest Chrome version, and when I try to record audio & video lasting 1:05, I get only a 33-second video, and the audio usually mutes after 10 seconds. However, when I record, let's say, 10 seconds of video+audio, it works fine.
Is it a common issue ?
Thanks
Hello Muaz Khan,
Thanks for the great script :) I have used this script for audio+video recording and it works fine; the only thing is that it's not recording voice on mobile, so can you please tell me how to record the voice on mobile as well?
Thanks
@muaz-khan How can I get the real wav Blob object? Your link goes to the main file. I have the problem that I try to convert the recordings to mp3 with the lamejs lib. For that I need an Int16Array. When I try to read the blob, I get no result.
// Set Upload data in hidden field
fileReader.onload = function () {
console.log(this.result);
var buffer = new Int16Array(this.result);
var mp3Blob = new Blob([encodeMP3(buffer)], {type: 'audio/mp3'});
};
fileReader.readAsArrayBuffer(blob);
Here this.result is empty. Can anyone tell me how to fix that problem? When I try to get the buffer directly it's empty, and the same with the DataView. So where could the problem be?
Here readAsBinaryString and readAsDataURL give me some output.
The Blob should be accessed inside the stopRecording callback:
recordRTC.stopRecording(function() {
var recordedBlob = recordRTC.getBlob();
// OR---- I Prefer this one
var recordedBlob = recordRTC.blob;
var reader = new FileReader();
reader.onload = function(event) {
var buffer = event.target.result;
};
reader.readAsArrayBuffer(recordedBlob);
});
Demo snippet: https://github.com/muaz-khan/RecordRTC#record-only-audio
That is not working; it is inside my stopRecording function, and I have copied your complete code to test. When I run that part, I get an empty ArrayBuffer {}. So that isn't working. It would be great if that worked; then this would solve my problems and I could convert the recordings very easily.
recordRTC.stopRecording(function() {
var recordedBlob = recordRTC.blob;
var reader = new FileReader();
reader.onload = function(event) {
var buffer = event.target.result;
console.log(buffer);
};
reader.readAsArrayBuffer(recordedBlob);
});
Is it a problem to use that code inside a jQuery document-ready handler?
recordStopBtn.click(function() {
I use a click handler to trigger the button.
The array buffer is NOT empty :) You should check the byteLength attribute:
console.debug('Byte Length', buffer.byteLength);
BTW, you may have missed checking these demos:
Especially this one:
OK, yes, you're right, sorry... I've been working on this problem for days now ^^ so I'm a bit tired... And yes, I have seen those examples. But I like the lamejs version much more; it's very easy, completely written in JS, and it works on Firefox. The quality is not good, but it's working. But I think I have found my problem.
But thanks for your help =).
Is it possible to get a mono blob? The problem is that lamejs needs mono data, or stereo data with separate left and right channels. Can you tell me how to solve that?
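One workable route, sketched here as a plain helper (not a RecordRTC feature): if the wav samples arrive as an interleaved stereo Int16Array (L R L R ...), they can be downmixed to mono by averaging the two channels before handing them to lamejs's mono encoder.

```javascript
// Sketch: downmix interleaved 16-bit stereo samples (L R L R ...) to mono
// by averaging each left/right pair.
function stereoToMono(interleaved) {
  var mono = new Int16Array(interleaved.length / 2);
  for (var i = 0; i < mono.length; i++) {
    var left = interleaved[2 * i];
    var right = interleaved[2 * i + 1];
    mono[i] = (left + right) >> 1; // average stays within Int16 range
  }
  return mono;
}
```

Alternatively, the interleaved array can be split into two separate left/right Int16Arrays for lamejs's stereo path; the averaging approach above is simply the smaller change.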
Do RecordRTC's browser requirements match those of getUserMedia? Or do you need more recent browser versions than those listed here: https://caniuse.com/#search=getusermedia
I can't get the voice data. I tried this demo on Chrome 27.0 dev; there is no Web Audio Input in about:flags. The video recording works well, but the wav file is silent, and the length of the file is right. After debugging, I found that in the onaudioprocess event, the input buffer array's data were all 0. Does this demo work well on WinXP?