Closed: ianholing closed this issue 1 year ago
I want to make a driving recorder app, do "object detect" and "video recording" at the same time, how can I solve it ?
In Android, I solve it using a platform channel and CameraX as a workaround, it is not the best option but I needed a solution
I have the same requirement. Is there any other plugin that can do this
I found a temporary solution: modify the plugin in two places.
1-1: Dart side: in startImageStream() and stopImageStream(), delete the check that throws when isRecordingVideo is true.
1-2: Dart side: in startVideoRecording(), delete the check that throws when isStreamingImages is true.
2-1: Java side: in Camera.java, find the startVideoRecording() method and replace
createCaptureSession(CameraDevice.TEMPLATE_RECORD, () -> mediaRecorder.start(), mediaRecorder.getSurface());
with
createCaptureSession(CameraDevice.TEMPLATE_RECORD, () -> mediaRecorder.start(), mediaRecorder.getSurface(), imageStreamReader.getSurface());
2-2: Java side: in Camera.java, find the startPreviewWithImageStream() method and replace
createCaptureSession(CameraDevice.TEMPLATE_RECORD, imageStreamReader.getSurface());
with
if (mediaRecorder != null) {
  createCaptureSession(CameraDevice.TEMPLATE_RECORD, imageStreamReader.getSurface(), mediaRecorder.getSurface());
} else {
  createCaptureSession(CameraDevice.TEMPLATE_RECORD, imageStreamReader.getSurface());
}
It works perfectly on Android !! happy
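The two Java changes above both come down to one rule: every output that is currently active must be registered as a surface on the same capture session. A schematic sketch of that selection logic (plain Java with placeholder strings standing in for the plugin's real Surface objects; the method name surfacesFor is made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureSessionSketch {
    // Collect the "surfaces" that should go into createCaptureSession(...)
    // depending on which features are active. Strings are placeholders for
    // the real mediaRecorder.getSurface() / imageStreamReader.getSurface().
    static List<String> surfacesFor(boolean recording, boolean streaming) {
        List<String> surfaces = new ArrayList<>();
        if (streaming) surfaces.add("imageStreamReaderSurface");
        if (recording) surfaces.add("mediaRecorderSurface");
        return surfaces;
    }

    public static void main(String[] args) {
        // Recording + streaming: both surfaces must be in the session,
        // otherwise one of the two outputs silently receives no frames.
        System.out.println(surfacesFor(true, true));
        System.out.println(surfacesFor(false, true));
    }
}
```

This mirrors the `if (mediaRecorder != null)` branch in step 2-2: the session is created with the recorder's surface only when a recorder actually exists.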
In addition, on iOS, calling startVideoRecording and stopVideoRecording while the image stream is running will occasionally crash, and the recorded video cannot be played.
The solution:
3-1: iOS side: modify the video recording method by changing
CVPixelBufferRef nextBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); CMTime nextSampleTime = CMTimeSubtract(_lastVideoSampleTime, _videoTimeOffset); [_videoAdaptor appendPixelBuffer:nextBuffer withPresentationTime:nextSampleTime];
to
if (_videoWriterInput.readyForMoreMediaData) { [_videoWriterInput appendSampleBuffer:sampleBuffer]; }
3-2: iOS side: add
_videoWriter.shouldOptimizeForNetworkUse = true;
before
[_videoWriter addInput:_videoWriterInput];
With these changes, the Flutter plugin can record video while the image stream is running.
Usage from Flutter:
just call startVideoRecording/stopVideoRecording and startImageStream/stopImageStream at the same time.
Great @hellozsh! I should check this out. The only side issue here is that you need to start recording and start the stream at the same time? Good enough for me anyway: you can stream images and restart the stream together with video whenever you need it. You should propose a PR with this; it is better than what they have now. Thank you!
Another thing you can do is have the image stream, preview and recording all at different sizes. For example, if you need a 320x240 image stream for ML inference but still want to record Full HD, you can add in Camera.java
private final Size streamSize;
then, in the Camera constructor where previewSize and captureSize are set, add
streamSize = computeBestPreviewSize(cameraName, ResolutionPreset.low);
and then in the open method (at line 216'ish) change
imageStreamReader =
ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), imageFormat, 2);
to
imageStreamReader =
ImageReader.newInstance(streamSize.getWidth(), streamSize.getHeight(), imageFormat, 2);
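To see why a separate streamSize pays off, compare per-frame buffer sizes for YUV_420_888 at the two resolutions mentioned above. This is back-of-the-envelope arithmetic, not a measurement of the plugin:

```java
public class StreamSizeSketch {
    // YUV420 stores 1.5 bytes per pixel: a full-resolution Y plane plus
    // quarter-resolution U and V planes.
    static long yuv420Bytes(int width, int height) {
        return (long) width * height * 3 / 2;
    }

    public static void main(String[] args) {
        long fullHd = yuv420Bytes(1920, 1080); // recording resolution
        long low    = yuv420Bytes(320, 240);   // ML stream resolution
        System.out.println("FullHD frame: " + fullHd + " bytes");
        System.out.println("Stream frame: " + low + " bytes");
        System.out.println("Ratio: " + (fullHd / low) + "x less data per frame");
    }
}
```

Every frame the ML side touches is roughly 27x smaller, which is usually the difference between inference keeping up with the camera and dropping frames.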
It is somewhat strange that one part of Google is pushing the ML angle very hard with Tensorflow and creating all sorts of abstractions on top of it native side. And yet the Flutter team does not even provide proper tools to take advantage of their own products.
Is there any plan to get this done in the official plugin in the future?
@Kypsis is it right to modify the Camera.java that is in the project folder at ios/.symlinks/plugins/camera/android/src/main/java/io/flutter/plugins/camera/Camera.java?
Downloads/flutter/.pub-cache/hosted/pub.dartlang.org/camera-0.9.4+5/android/src/main/java/io/flutter/plugins/camera/Camera.java:166: error: cannot find symbol streamSize = computeBestPreviewSize(cameraProperties.getCameraName(), cameraFeatures.getResolution().getRecordingProfile());
It causes the above error on compile.
@ianloic Were you able to find any solution for this problem? I am not able to record video when the image stream is started. The recorded video gets stored but its size is always zero. Is there any way to make the image stream run and record video at the same time?
I want to know how to record video while the image stream is running. I am using ML now; when using the image stream I also need to record video, but the camera cannot be used for both at the same time.
Hi, I'm glad to see this answer. I want to know how to deal with this on iOS? I didn't find the right way.
I also need to do the same thing: face detection and video recording. But the main issue right now is that I have added some face contours to show the bounding box, and those are not being tracked or traced along with the face after I made the above change to the Camera.java code. Any solution for it?
@np8698 I know it's not the easiest way, but I made final video frame by frame using the ffmpeg flutter wrapper, that's an option :shrug:
Hello, did you find a solution? I also need this functionality. Thanks
@ianholing Yes, I intend to do the same. Right now I have the list of images, which I obtained by converting the YUV images (or CameraImage) to RGB. Now, with ffmpeg, I am not able to understand how to get the path for these images and combine them. If possible, could you help me out?
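For the YUV-to-RGB conversion step mentioned here, a minimal per-pixel sketch using the common BT.601 coefficients. This is the textbook formula, not code from the camera plugin, and it ignores the plane strides and pixel strides a real CameraImage has; the sample values in main are made up:

```java
public class YuvToRgbSketch {
    // Convert one YUV pixel to RGB with BT.601 video-range coefficients,
    // the convention typically used for camera YUV420 frames.
    static int[] yuvToRgb(int y, int u, int v) {
        double yf = 1.164 * (y - 16);
        int r = clamp(yf + 1.596 * (v - 128));
        int g = clamp(yf - 0.392 * (u - 128) - 0.813 * (v - 128));
        int b = clamp(yf + 2.017 * (u - 128));
        return new int[] {r, g, b};
    }

    // Round and clamp the result into the valid 0..255 byte range.
    static int clamp(double x) {
        return (int) Math.max(0, Math.min(255, Math.round(x)));
    }

    public static void main(String[] args) {
        // A pixel with high V and low U should come out strongly red.
        int[] red = yuvToRgb(81, 90, 240);
        System.out.println(red[0] + "," + red[1] + "," + red[2]);
    }
}
```

Applying this over the Y, U and V planes of each frame gives the RGB images that can then be written to disk and stitched together.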
@ianholing can you please show the example for it
Of course. In my use case in the AvatART app, FPS can be low depending on the phone because we use a heavy custom model and GPU/CPU usage can be really intensive, so we decided to recreate all the frames from the actual recorded video at the usual framerate and without the AI processing; this way the output feels really smooth:
https://gist.github.com/ianholing/8d416f2bde6e4402c787731f655db19b
Hope it helps :) If anyone could even test the app and give me a review in the store I would really appreciate it; I am in the middle of the ASO process right now :P Marketing hat today.
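For reference, the frame-by-frame approach boils down to one ffmpeg invocation over numbered frames. A sketch that only assembles the argument list, as you would hand it to an ffmpeg wrapper plugin; the frame pattern, framerate and output name are assumptions, not taken from the gist above:

```java
import java.util.Arrays;
import java.util.List;

public class FfmpegArgsSketch {
    // Build the arguments for an ffmpeg call that turns numbered PNG
    // frames (frame_0001.png, frame_0002.png, ...) into an mp4 video.
    static List<String> buildArgs(int fps, String framePattern, String out) {
        return Arrays.asList(
            "-framerate", String.valueOf(fps), // input frame rate
            "-i", framePattern,                // numbered input frames
            "-c:v", "libx264",                 // encode to H.264
            "-pix_fmt", "yuv420p",             // widest player compatibility
            out);
    }

    public static void main(String[] args) {
        System.out.println(
            String.join(" ", buildArgs(30, "frame_%04d.png", "out.mp4")));
    }
}
```

In a Flutter app this string would be passed to an ffmpeg wrapper such as the one used in the gist; here it is only assembled and printed.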
@ianholing Does your solution work with all of that at once? Or do you process the video stream after the video is recorded already?
In fact, the answer to both is "yes"; let me explain. Yes, I process the video after it is already recorded: a plain video which I process frame by frame with the script in my last comment. But I also stream images in realtime and create a preview as completely separate work. This may seem to duplicate the work, but if you use your preview images to create the final video you will have two problems:
@ianholing Yeah, that's what I'm afraid of: if I make a video from the image stream, the audio would be unsynchronized in case of frame dropping.
I need to record a video and analyze it in real time while recording to detect some changes in it (I may lose some frames, that's not a problem); that's why I'm curious if you managed to do it somehow.
I also tried to record a video and, while recording, take pictures periodically using the camera and analyze those pictures. That seems to work on iOS, but crashes the app on Android.
Another attempt was to capture the CameraPreview widget every second and analyze that picture. I thought it was a good-enough solution, but it turns out Flutter does not allow taking pictures of PlatformViews using RepaintBoundary: https://github.com/flutter/flutter/issues/25306
A screenshot of the app is not a solution for me, sadly. I need to think more about how to solve it :D
🤷 I didn't fix it cross-platform, but for Android I went to the OS directly using platform channels and CameraX, and it works OK. I'm sure there will be something on the iOS side too if you want to go OS-level in order to record + process.
There are also other solutions at the beginning of this thread that use a modified version of the Flutter camera plugin; have you tried those?
Not yet, once I get to native code modifications, I will definitely try the solutions above.
This thread has been automatically locked since there has not been any recent activity after it was closed. If you are still experiencing a similar issue, please open a new bug, including the output of flutter doctor -v and a minimal reproduction of the issue.
Record video while Image Stream process is working
I'm pretty sure this is a common problem in a lot of applications and not a strange, isolated request. I am developing an application to create face filters, so I need to process each image to get the face keypoints, but if I do that there is no way to record a video. It is simply useless if I can't record it. I don't care whether the video is recorded with the processed images or not (I can do that later), but I need to have that video somewhere.
The thing here is that there is no workaround: if you want to record video and process images, you have to create a new plugin.
Proposal
I don't know how difficult it is or why, but it is definitely possible. There are applications in the market doing that and other Flutter plugins capable of that also like https://pub.dev/packages/rtmp_publisher or https://github.com/mtellect/CameraDeepAR
I don't think this should be an external package but a feature of the current camera plugin.