Closed sebaslogen closed 1 year ago
@cybrox is the one who added the pre-processor, so he might have more advice on the usage, but from what I understand it can totally be used for this. What I'm not sure about is the format of that file. You can try to just write each frame to a file and see how it goes from there. I was waiting to publish the pre-processor in case @cybrox had more PRs incoming lol
@jaumard Thanks! 😄 - I actually don't have any more at the moment.
@sebaslogen You could definitely use the pre-processor to write the MJPEG to disk. I think the limiting factor in this case is the write speed of the file system, or the memory constraints of your platform in case you want to cache the stream first.
The way `flutter_mjpeg` is written, it only sends a frame off for rendering once it has found a JPEG start-of-image and end-of-image sequence. This means that the bytes (`List<int>`) you get in the pre-processor are actually the raw bytes of a single JPEG image. If you were to write these to a `.jpg` file, it would be a complete, valid image file.
So you could use a pre-processor that stores and returns the sequence like this:
```dart
import 'package:flutter_mjpeg/flutter_mjpeg.dart';

class CaptureMjpegPreprocessor extends MjpegPreprocessor {
  @override
  List<int>? process(List<int> frame) {
    // _writeToFile(frame)
    return frame;
  }
}
```
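Since every frame handed to the pre-processor is a complete JPEG, a single snapshot can be dumped straight to a `.jpg` file. A minimal sketch (the output path and one-shot flag are my own assumptions, not from the original code):

```dart
import 'dart:io';

import 'package:flutter_mjpeg/flutter_mjpeg.dart';

/// Saves the first frame of the stream to disk, then passes
/// every frame through unchanged for rendering.
class SnapshotPreprocessor extends MjpegPreprocessor {
  bool _saved = false;

  @override
  List<int>? process(List<int> frame) {
    if (!_saved) {
      _saved = true;
      // Each frame is a full JPEG, so the raw bytes are a valid .jpg file.
      File('/tmp/snapshot.jpg').writeAsBytes(frame);
    }
    return frame;
  }
}
```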
I didn't think about this use-case before but when quickly going over it, I think there are two possibilities:
a) You implement some kind of divider in the pre-processor class, so that you only capture every n-th frame, or only capture a frame every n ms (this is basically what I did in the code in the original PR), then you write each of these to disk immediately. This will result in a relatively small video capture, but at a very low frame rate. (Or you can use this approach to only store a snapshot.)
b) If your stream has a determined length and stops after a while, you might be able to cache all the frames (e.g. add them to an array) and then write them to the file system later. This would probably work on desktop platforms, but not on mobile platforms, where memory limits are stricter; for example, iOS gives you 1.5 GB of memory at most. (You can also use this approach to just cache and store the first n frames of the stream, so you could store the first 2s for example.)
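Option (a) above could be sketched roughly like this; the class name, divider value, and constructor are all my own assumptions for illustration:

```dart
import 'dart:io';

import 'package:flutter_mjpeg/flutter_mjpeg.dart';

/// Appends every [divider]-th frame to [file]; all frames are
/// still returned so rendering is unaffected.
class DividerCapturePreprocessor extends MjpegPreprocessor {
  DividerCapturePreprocessor(this.file, {this.divider = 10});

  final File file;
  final int divider;
  int _count = 0;

  @override
  List<int>? process(List<int> frame) {
    if (_count++ % divider == 0) {
      file.writeAsBytes(frame, mode: FileMode.append);
    }
    return frame; // always render the frame
  }
}
```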
Whichever way you choose, the pre-processor would be the right place to implement this. However, I'm not sure if an app is performant enough to do the job.
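Option (b), caching in memory first, might look like the following sketch; the frame limit is an arbitrary assumption to bound memory use, and `dump` would be called by the app once the stream stops:

```dart
import 'dart:io';

import 'package:flutter_mjpeg/flutter_mjpeg.dart';

/// Buffers the first [maxFrames] frames in memory; call [dump]
/// after the stream has stopped to write them out in one go.
class CachingPreprocessor extends MjpegPreprocessor {
  CachingPreprocessor({this.maxFrames = 60});

  final int maxFrames;
  final List<List<int>> _frames = [];

  @override
  List<int>? process(List<int> frame) {
    if (_frames.length < maxFrames) _frames.add(frame);
    return frame;
  }

  Future<void> dump(File file) async {
    final sink = file.openWrite();
    _frames.forEach(sink.add);
    await sink.close();
    _frames.clear();
  }
}
```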
If performance is an issue, you can use an isolate to save frames to disk; that way the stream will not freeze on the UI side.
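One way to move the file I/O off the UI isolate is a short-lived background isolate per write. A sketch assuming Dart 2.19+ (where `Isolate.run` is available); the path handling is just an example:

```dart
import 'dart:io';
import 'dart:isolate';

/// Appends [frame] to the file at [path] on a background isolate,
/// so the UI isolate never blocks on disk I/O.
Future<void> saveFrameInBackground(List<int> frame, String path) {
  return Isolate.run(() {
    File(path).writeAsBytesSync(frame, mode: FileMode.append);
  });
}
```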
Thanks @cybrox :) I'll push a new version then ^^
preprocessor is available in 2.0.3 :)
Closing this @sebaslogen, feel free to share your findings when you give it a try!
Awesome! Thank you both for the quick and helpful answers! 😃
I got it working with 2.0.3 🎉 this is the code:
```dart
class MjpegWriter extends MjpegPreprocessor {
  Future<File> get _localFile async {
    final path = await getAppDocsPath();
    return File('$path/lastRecording.mjpg');
  }

  Future<File> saveFrame(List<int> frame) async {
    final file = await _localFile;
    // Write the boundary header, then the frame bytes
    List<int> head = utf8.encode(
        '$BOUNDARY_PART${frame.length}$BOUNDARY_DELTA_TIME$BOUNDARY_END');
    await file.writeAsBytes(head, mode: FileMode.append);
    return file.writeAsBytes(frame, mode: FileMode.append, flush: true);
  }

  @override
  List<int>? process(List<int> frame) {
    saveFrame(frame);
    return frame;
  }
}
```
I'm not sure if I have to write synchronously (with `writeAsBytesSync`), or if the async calls in the `process` function can run in parallel and cause a problem while writing to the file (overwriting, or not following the order of frames), but it just works for the proof of concept.
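One way to rule out interleaved or out-of-order appends without blocking the UI is to chain each write onto the previous one, so a frame only starts writing after the prior frame finished. This is my own sketch, not the original code:

```dart
import 'dart:io';

import 'package:flutter_mjpeg/flutter_mjpeg.dart';

class SerializedMjpegWriter extends MjpegPreprocessor {
  SerializedMjpegWriter(this.file);

  final File file;
  Future<void> _pending = Future.value();

  @override
  List<int>? process(List<int> frame) {
    // Chain onto the previous write so appends happen in frame order.
    _pending = _pending.then(
      (_) => file.writeAsBytes(frame, mode: FileMode.append, flush: true),
    );
    return frame;
  }
}
```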
FYI: From this I get a working `lastRecording.mjpg` file when the recording is stopped, and then I transform it into an mp4 using Ffmpeg-kit.
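For reference, the MJPEG-to-MP4 step with the `ffmpeg_kit_flutter` package might look roughly like this; the codec choice and paths are assumptions, so check the Ffmpeg-kit docs for the exact options you need:

```dart
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';

/// Re-encodes the MJPEG capture into an H.264 MP4.
Future<bool> convertToMp4(String mjpgPath, String mp4Path) async {
  final session =
      await FFmpegKit.execute('-i $mjpgPath -c:v libx264 $mp4Path');
  return ReturnCode.isSuccess(await session.getReturnCode());
}
```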
Hi @sebaslogen,
Thanks for the code. I have a question: where do the `BOUNDARY_PART`, `BOUNDARY_DELTA_TIME`, and `BOUNDARY_END` variables come from? Can you provide more context or source code to clarify the origin of these variables?
These come from the format of the source stream that you're using. For reference, these are the values I use:
```dart
const String BOUNDARY_PART = '\r\n\r\n--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: ';
const String BOUNDARY_DELTA_TIME = '\r\nDelta-time: 110';
const String BOUNDARY_END = '\r\n\r\n';
```
Thanks @sebaslogen.
Is it possible to save the data received in the stream to disk? Then I could add the "record video" feature to the app.
I see a new frame pre-processor was just added to the widget constructor, but that is not yet released, and I'm not 100% sure how to use it to write the MJPEG to disk.