dirkwhoffmann closed this issue 4 years ago
One way to do this is to pipe images to ffmpeg: https://stackoverflow.com/questions/51903888/is-it-possible-to-send-ffmpeg-images-by-using-pipe . I have used that in the past and just ship the ffmpeg executable next to my tool. As ffmpeg is very flexible and supports many output formats, this might be a good way to do it (without actually integrating some API).
I did some experiments with ffmpeg based on the instructions given in the stackoverflow thread.
I started by using popen to open ffmpeg inside the emulator core:
snprintf(command, sizeof(command), "/usr/local/bin/ffmpeg -y -f rawvideo -pix_fmt argb -s 800x600 -r 25 -i - -c:v libx264 -profile:v high444 -level:v 3 -b:v 2500 -an /tmp/out_vid.h264");
file = popen(command, "w");
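The write side of this pipe can be sketched as follows; texture, width, and height are placeholders for the emulator's frame buffer (not actual vAmiga identifiers), and 4 bytes per pixel matches the raw argb input format:

```c
#include <stdio.h>

/* Sketch: after each emulated frame, dump the raw texture bytes into
   the pipe opened with popen() above. 'texture', 'width' and 'height'
   are placeholders for the emulator's frame buffer. */
static int write_frame(FILE *pipe, const unsigned char *texture,
                       int width, int height)
{
    size_t bytes = (size_t)width * height * 4;  /* 4 bytes per pixel */
    if (fwrite(texture, 1, bytes, pipe) != bytes)
        return -1;                      /* FFmpeg died or pipe broke */
    fflush(pipe);
    return 0;
}
```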
First, I had profile:v set to baseline, but this produced a runtime error:
x264 [error]: baseline profile doesn't support 4:4:4
When using profile high444, it actually started recording:
frame= 54 fps= 52 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 80 fps= 52 q=69.0 size= 1kB time=00:00:00.30 bitrate= 35.2kbits/s speed=0.193x
frame= 105 fps= 51 q=69.0 size= 2kB time=00:00:00.80 bitrate= 20.5kbits/s speed=0.39x
frame= 131 fps= 51 q=69.0 size= 3kB time=00:00:01.32 bitrate= 17.0kbits/s speed=0.513x
frame= 157 fps= 51 q=69.0 size= 3kB time=00:00:01.84 bitrate= 15.5kbits/s speed=0.596x
frame= 182 fps= 51 q=69.0 size= 4kB time=00:00:02.34 bitrate= 14.6kbits/s speed=0.65x
frame= 208 fps= 51 q=69.0 size= 5kB time=00:00:02.86 bitrate= 14.1kbits/s speed=0.696x
But the result is not quite what I expected:
(The texture cutout is wrong because I just dumped some portion of the texture. But the image looks like it was processed with a blur filter a thousand times.)
Try outputting mp4 instead and skip the specific settings for libx264; it's likely some of them are bad (or the bitrate is very low).
Using mp4 and skipping libx264 produced a similar result. I think the main issue is the bit rate. Using these parameters
/usr/local/bin/ffmpeg -y -f rawvideo -pix_fmt argb -s 800x600 -r 50 -i - -profile:v high444 -level:v 3 -b:v 64k -an /tmp/out_vid.mp4
it looks better:
But what happened to the colors?
I would guess you don't have the colors in argb? (It may be an endian issue.) In some way, I think you feed the image in a format that ffmpeg doesn't expect.
What is strange, though... it "nearly" is ok... that is what makes me rule out the order of rgb... I mean, what should be blue is blue and not red... it's just not the correct blue
so this:
FLAGS NAME NB_COMPONENTS BITS_PER_PIXEL
IO... argb 4 32
~it says it's 32 bits per component if you use argb~ no, it's actually per pixel, I see now; still, I guess it's something like I wrote above
As you don't care about alpha (I guess?) you may want to try 0rgb or rgb0
or do you think that there is maybe a bit shift in the data?
I would guess you don't have the colors in argb ?
or do you think that there is maybe a bit shift in the data?
You are both right. When called with -pix_fmt rgba, the colors are correct.
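In hindsight, the byte-order explanation fits in a few lines of C. FFmpeg's packed format names (argb, rgba, ...) describe the byte order in memory, so if the texture actually stores R,G,B,A bytes, declaring the stream as argb shifts every component by one byte; that matches the "nearly right" colors seen above. The conversion below is only an illustration, not code from vAmiga:

```c
#include <stdint.h>

/* Reorder one pixel from RGBA byte order to ARGB byte order, in place.
   Telling FFmpeg the right -pix_fmt makes this conversion unnecessary;
   it merely shows what the one-byte shift looks like. */
static void rgba_to_argb(uint8_t *px)
{
    uint8_t a = px[3];
    px[3] = px[2];   /* B moves to the last byte */
    px[2] = px[1];   /* G */
    px[1] = px[0];   /* R */
    px[0] = a;       /* A moves to the front */
}
```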
Based on this simple experiment, I think it's pretty easy to add screen recording to vAmiga. Before I start to integrate this in a nice way, we should find answers to the following questions:
Here are the corresponding UAE configuration dialogs:
Later, we might want to integrate audio as well. I hope ffmpeg accepts raw audio data via stdin, too.
Interestingly, UAE talks to me in German in these two dialogs. Everything else is in English.
My suggestion is to keep it simple with a few options and have an advanced checkbox where the user can modify the command line passed to ffmpeg (and a button to reset to the default if the user screws up).
Here is a first proposal what the recorder window could look like:
Small question to make sure we're all on the same page. We need the built-in screen recorder to produce videos with a frame rate of 50 fps. Otherwise, we could simply use the macOS screen recording feature. Correct?
Yeah, the macOS screen recorder is actually pretty good, but for proper recordings you want to do it at the native fps of the machine, as that makes it look much better.
Here is a first proposal what the recorder window could look like:
Small question to make sure we're all on the same page. We need the built-in screen recorder to produce videos with a frame rate of 50 fps. Otherwise, we could simply use the macOS screen recording feature. Correct?
Well, 50 or 60, depending on whether you're emulating PAL or NTSC, I guess.
For me it's important to have very high quality and every frame perfectly captured, including audio in perfect sync.
When WinUAE does recording like this, I don't think it plays the audio; it just sends it to the capture process.
It's not really possible to do a really high-quality capture with the built-in screen recorder.
Well, 50 or 60, depending on whether you're emulating PAL or NTSC, I guess.
vAmiga doesn't support NTSC. This is the reason why I always talked about 50 fps.
For me it's important to have very high quality and every frame perfectly captured, including audio in perfect sync.
On the emulator side, I simply grab the texture after a frame has been drawn and pipe the raw data into FFmpeg via stdin. Now, we need to find out how FFmpeg can be told to encode the video with zero loss.
I think ffmpeg supports some lossless codecs, but with high enough bit-rate I don't think it's much of an issue
We definitely don't need it to be lossless; as long as we have a high bitrate option, it will be fine.
I'm still at the drawing board, trying to find the right place to plug in the new feature. Conceptually, screen recordings belong to the same category as screenshots and snapshots. Therefore, I think they should be configured at the same place which is the Preferences panel. Hence, my new plan is to move all screenshot, snapshot, and screen recording options to a new panel which could look like this:
To start (or stop) recording, there will be a new menu item in the Machine menu.
Here is an issue I don't know how to deal with. The current implementation takes care of the visible texture area. E.g., in my test run, the visible texture area is (156.0,26.0) - (908.0,313.0) which means that FFmpeg produces a video of size 752x287.
This is what I record (toolbar changes to red when screen capturing is active):
This is what I get:
The root cause is clear. VLC displays the video with an aspect ratio of 1:1, which is not the aspect ratio of pixels on an Amiga monitor. How shall we deal with this?
Well, I assume you double the lines or something like that when you actually use the texture in the emulator? You need to match your scaled output to what you want to see in the video output (you can send scaling values to ffmpeg).
you can send scaling values to ffmpeg
That worked. FFmpeg is awesome. With its trillion options, everything seems to be doable.
Having the scaling issue in mind, it might be useful to let the user select the aspect ratio. For normal videos, scaling is a must have. In case of @alpine9000, I think he would like to switch scaling off in order to analyze a recorded video on a per-pixel basis for debugging.
There is a good explanation here:
https://superuser.com/questions/907933/correct-aspect-ratio-without-re-encoding-video-file
Basically, there are two options: Setting the Sample Aspect Ratio (SAR) or the Display Aspect Ratio (DAR). SAR means rescaling the image whereas DAR means to set a meta-flag which is later evaluated by the video player.
I think it's best to use the DAR approach, because it keeps the raw pixel data unchanged. FFmpeg provides an -aspect flag which, however, doesn't work in combination with VLC. A solution (described in the article, too) is to embed an H.264 flag via -bsf:v \"h264_metadata=sample_aspect_ratio=2/4\". This turned out to work well.
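As a sanity check, the relationship between the two ratios can be written down in one line: the display aspect ratio follows from the stored frame size and the sample (pixel) aspect ratio. The numbers below reuse the 752x287 crop mentioned earlier; the SAR value is just an example:

```c
/* DAR = (width / height) * SAR. A SAR below 1 means each stored pixel
   should be displayed narrower than it is tall, shrinking the DAR. */
static double display_aspect(int w, int h, int sar_num, int sar_den)
{
    return ((double)w / h) * sar_num / sar_den;
}
```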
Yeah ffmpeg can do so much :) I pretty much always use it for my encoding needs
I am still at the drawing board, putting together requirements. Yesterday, I managed to produce videos VLC can play, but I wasn't able to convince QuickTime to open them. Since vAmiga is primarily a macOS app (at least for now), supporting QuickTime is a must-have, I think.
Unfortunately, QuickTime is pretty picky about the video format and rejects nearly everything that doesn't comply 100% with what Apple wants. Configuring FFmpeg to produce QuickTime-compatible output is anything but easy, but after crawling through dozens of pages on Stack Overflow, I was finally able to assemble a parameter set that worked:
ffmpeg \
    -f rawvideo \
    -pixel_format rgba \
    -s <WIDTH>x<HEIGHT> \
    -r 50 \
    -i - \
    -f mp4 \
    -vcodec libx264 \
    -pix_fmt yuv420p \
    -b:v <BITRATE> \
    -bsf:v "h264_metadata=sample_aspect_ratio=<ASPECT_RATIO>" \
    -y <OUTFILE>
The first block specifies the format of the input stream, which is pretty straightforward (it is determined by the internal texture format). The second block specifies the output stream.
Here is my plan. The following parameters will be customizable:

- <OUTFILE>
- <WIDTH>, <HEIGHT> (by selecting "Entire texture" or "Visible Texture" from a pop-up button)
- <BITRATE>
- <ASPECT_RATIO> (117:256 PAL, 1:2, 3:4, 1:1)

Please let me know if more parameters need to be made customizable.
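A hypothetical sketch of how these placeholders could be substituted into the command string; all function and parameter names are invented for illustration, and vAmiga's actual implementation may differ:

```c
#include <stdio.h>
#include <string.h>

/* Build the FFmpeg invocation from the user-selectable parameters.
   Returns the number of characters written (as snprintf does). */
static int build_command(char *buf, size_t len,
                         int width, int height, int bitrate_kbps,
                         const char *aspect, const char *outfile)
{
    return snprintf(buf, len,
        "ffmpeg -f rawvideo -pixel_format rgba -s %dx%d -r 50 -i - "
        "-f mp4 -vcodec libx264 -pix_fmt yuv420p -b:v %dk "
        "-bsf:v \"h264_metadata=sample_aspect_ratio=%s\" -y %s",
        width, height, bitrate_kbps, aspect, outfile);
}
```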
If you don't set libx264 as the video codec, does it work in QuickTime then?
If I remove -vcodec libx264 from the command line arguments, it still works.
But libx264 is still used, I think, because it continues to show up in the debug output:
...
[libx264 @ 0x106809a00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x106809a00] ref B L0: 51.1% 48.2% 0.7%
[libx264 @ 0x106809a00] ref B L1: 97.3% 2.7%
[libx264 @ 0x106809a00] kb/s:22.91
hum... I'm pretty sure I got this working in the past.
What's wrong with libx264?
I can't remember exactly what parameters I used (likely the defaults) for the movies on this page http://tbl.nu/2019/09/30/Effects1/ . They were all encoded with ffmpeg, and if I save them and play them locally in QuickTime (on my Mac), they work fine.
Video capturing works pretty well now (I think). The usage model is as follows:
There is a catch though. Because the emulator knows nothing about upscaled textures, it can only record the native Amiga texture (upscaling, if enabled, is all done on the GUI side). This is nearly impossible to change. On the positive side, the screen recorder gets along very well with the DMA debugger, because the DMA texture is superimposed onto the emulator texture before the final texture is handed over to the GUI.
Hence, from the next version on, we'll be able to flood YouTube with cool DMA debugger videos:
Next step: Implement audio recording (right now, only video is recorded).
Passing audio and video to FFmpeg simultaneously is more difficult than I thought. Up to now, I assumed that I could use stdin to feed in both streams like this: frame 1 video data, frame 1 audio data, frame 2 video data, frame 2 audio data. However, it doesn't seem to work that way. Here is a thread about this exact problem:
https://superuser.com/questions/876577/how-is-mpeg-interleaved
Seems like I have to replace stdin by two named pipes. One pipe for the video stream and another pipe for the audio stream.
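Creating the two FIFOs is straightforward with mkfifo(); the paths below are placeholders, not the ones vAmiga necessarily uses:

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <unistd.h>

/* Create one named pipe for FFmpeg to read from. A leftover FIFO from
   a previous run makes mkfifo() fail with EEXIST, which is harmless. */
static int create_pipe(const char *path)
{
    if (mkfifo(path, 0666) == -1 && errno != EEXIST)
        return -1;
    return 0;
}
```

Note that opening a FIFO for writing blocks until a reader (here: FFmpeg) opens the other end, which is exactly where the deadlock discussion below comes from.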
More difficulties…
This is what I have achieved:
Now it becomes difficult:
Now, the following happens (which leads to a deadlock):
I found out that I need to feed in approx. 98 video frames until FFmpeg opens the audio pipe.
The easiest way would be to tell FFmpeg to open both pipes immediately. This would eliminate all deadlocks.
Another thing I could try is to feed in video data via a pipe and audio data via stdin. stdin is non-blocking.
Any ideas?
Perhaps another way would be to put this on a separate thread; then, from the "main" vAmiga thread, you push frame and audio data up to that thread, and ffmpeg can sit and do its thing while the vAmiga "main" thread keeps pushing data. There are some issues with this. But perhaps that could work?
Not sure if separate threads help. If FFmpeg opens the pipes at different times (which is apparently the case), I always run into the problem that, for a certain period of time, vAmiga writes data into one pipe only (because the other pipe is still closed). This will bring video and audio out of sync. Maybe it's best to write video and audio data into some kind of "smart buffer" inside the screen recorder. The smart buffer will buffer everything until the pipe can be opened (which means FFmpeg has started reading from it). Once the pipe is open, all buffered data is dumped out, and from that point on, data is fed directly into the pipe.
I'll try tomorrow...
Well, that is what I meant by pushing data to a separate thread. The data will be a list of video frames and a list of audio frames. vAmiga will keep producing new audio and video frames for the ffmpeg thread to consume at the correct point once both pipes are open.
Update: I think I now fully understand how blocking and non-blocking pipes work. What we need is blocking pipes which in turn means that there is no way around a multi-threaded design. Therefore, the new code will utilize a classic multi-threaded producer-consumer scheme (which is in fact exactly what @emoon proposed). I already managed to assemble a prototype which is able to record video and audio simultaneously for the first time.
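A minimal sketch of such a producer-consumer queue, assuming pthreads and an integer payload standing in for real frame data (the actual vAmiga code is more elaborate):

```c
#include <pthread.h>
#include <string.h>

#define SLOTS 8

/* Bounded queue: the emulator thread pushes frames, a writer thread
   drains them and would hand the data to FFmpeg's blocking pipe. */
typedef struct {
    int             data[SLOTS];
    int             head, tail, count, done;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} FrameQueue;

static void fq_init(FrameQueue *q)
{
    memset(q, 0, sizeof *q);
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

static void fq_push(FrameQueue *q, int frame)       /* producer side */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == SLOTS)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->data[q->head] = frame;
    q->head = (q->head + 1) % SLOTS;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

static int fq_pop(FrameQueue *q, int *frame)  /* 0 when fully drained */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0 && !q->done)
        pthread_cond_wait(&q->not_empty, &q->lock);
    if (q->count == 0) { pthread_mutex_unlock(&q->lock); return 0; }
    *frame = q->data[q->tail];
    q->tail = (q->tail + 1) % SLOTS;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return 1;
}

static void fq_close(FrameQueue *q)       /* recording stopped */
{
    pthread_mutex_lock(&q->lock);
    q->done = 1;
    pthread_cond_broadcast(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

static void *drain(void *arg)             /* writer thread */
{
    FrameQueue *q = arg;
    long sum = 0;
    int f;
    while (fq_pop(q, &f))
        sum += f;                         /* stand-in for fwrite() */
    return (void *)sum;
}

/* Demo: push n dummy frames through the queue; returns their sum. */
static long run_demo(int n)
{
    FrameQueue q;
    pthread_t t;
    void *res;
    fq_init(&q);
    pthread_create(&t, NULL, drain, &q);
    for (int i = 1; i <= n; i++)
        fq_push(&q, i);
    fq_close(&q);
    pthread_join(&t, &res);
    return (long)res;
}
```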
There is still some work to do, especially for the audio part. The current implementation produces sound samples based on cycle counts, which means that the number of generated samples varies between frames. Furthermore, the sample frequency is currently determined by macOS. macOS usually requests a sample frequency of 48 kHz, but I'd like to use a sample frequency of 44.1 kHz in the videos, which means that the new code has to synthesize two audio streams: one stream for macOS (cycle-based) and another stream for FFmpeg which contains exactly 882 audio samples per frame and channel.
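The arithmetic behind the 882 figure is 44100 Hz / 50 fps = 882 samples per frame. A crude nearest-neighbor sketch of squeezing a variable, cycle-based sample count into exactly 882 output samples (the real implementation may well resample differently):

```c
#include <stddef.h>

#define OUT_PER_FRAME 882   /* 44100 Hz / 50 fps */

/* Map whatever Paula produced during one frame (a variable number of
   samples) onto exactly 882 output samples by nearest-neighbor pick. */
static void resample_frame(const float *in, size_t in_count,
                           float out[OUT_PER_FRAME])
{
    for (size_t i = 0; i < OUT_PER_FRAME; i++)
        out[i] = in[i * in_count / OUT_PER_FRAME];
}
```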
You can likely give ffmpeg 48k and tell it to output 44.1k if you wish, though (that way you don't need to convert it yourself).
To cope with the sample rate issue, I've reworked the audio backend. In the new design, Paula's four audio state machines write their output into four samplers (Sampler class), one sampler for each channel. Each sampler generates a separate audio stream with a fixed sample rate. The four streams are fed into a muxer (class Muxer) which takes four input streams (one for each channel) and outputs two stereo output streams. The screen recorder has its own muxer, which allows the emulator to generate two streams in parallel: one stream (48 kHz) is fed into the native audio device in macOS and the other stream (44.1 kHz) is fed into FFmpeg.
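The mixing stage could look roughly like this. The struct and function names are made up; only the idea (four mono inputs, per-channel volume and pan, stereo output) follows the description above. Pan values of 0.0 and 1.0 mean hard left and hard right; Paula's hardware routing puts channels 0 and 3 on the left and channels 1 and 2 on the right:

```c
/* Sketch of a four-channel-to-stereo mixer with per-channel volume
   and pan. Not vAmiga's actual Muxer class. */
typedef struct {
    float vol[4];
    float pan[4];   /* 0.0 = hard left, 1.0 = hard right */
} Muxer;

static void mux(const Muxer *m, const float in[4],
                float *left, float *right)
{
    *left = *right = 0.0f;
    for (int i = 0; i < 4; i++) {
        *left  += in[i] * m->vol[i] * (1.0f - m->pan[i]);
        *right += in[i] * m->vol[i] * m->pan[i];
    }
}
```

Setting all four pan values to 0.5 yields the mono mix discussed later in the thread.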
The code still needs some cleanup, but the main functionality is working. Here is the first DMA debugger video produced with vAmiga (disk 1 of the Black Lotus demo):
http://www.dirkwhoffmann.de/software/videos/eon.mp4
BTW, the multi-threaded approach with buffered pipes which I favored a couple of days ago has turned out to be very brittle. Using multiple pipes with FFmpeg doesn't really seem to be a good idea. The final code uses a simpler approach. It launches two FFmpeg instances, one for the video stream and one for the audio stream. Both streams are written to temporary files in the /tmp folder. When the screen recorder is stopped, FFmpeg is started a third time. This time, it is utilized to combine both streams into the final result. Because the final step requires no re-encoding, it runs extremely fast.
Plays very well! But the video quality is not very high ... it is quite blurry, especially with the DMA debugger ... why is that so?
why is that so?
It was recorded with a low bit rate of 1024k:
I need to adjust the list of selectable bit rates, but I don't know yet which bit rates are appropriate for a video of this kind. I didn't spend much time on creating videos in the past.
I don't know yet which bit rates are appropriate for a video of this kind
Me neither, but given the low quality, I would guess 2048k, or better, 4096k.
@emoon ?
There is something strange. This is from the debug log:
Starting video encoder with options:
/usr/local/bin/ffmpeg -nostdin -loglevel warning -f:v rawvideo -pixel_format rgba -r 50
-s:v 716x276
-i /tmp/videoPipe -f mp4 -pix_fmt yuv420p -b:v 1024k -bsf:v "h264_metadata=sample_aspect_ratio=768/1404"
-y /tmp/video.mp4
As you can see, image size is set to 716x276.
Now, when I look at the final video in the Finder, the resolution is smaller:
Don't know what's going on here...
Width reduction seems to happen in the merging phase:
The left file is the recorded video stream and the right file is the final video (created by combining the video stream with the audio stream):
Here are the command line parameters for the merging phase:
/usr/local/bin/ffmpeg -loglevel warning -i /tmp/video.mp4 -i /tmp/audio.mp4 -c:v copy -c:a copy -y /Users/hoff/Desktop/Untitled.mp4
Right now, I have no idea why FFmpeg is reducing the width of the video.
Something I thought about now when watching the TBL demo would be to set the stereo separation for the audio. Usually you record demos in mono because of the full (default) stereo separation, and many tunes (such as the one in Eon) are made for mono.
would be to set stereo separation for the audio.
vAmiga is already very flexible about that. The muxer supports volume scaling and panning for all four Paula channels. Since the screen recorder uses the same configuration parameters as the core emulator, the user is able to record videos in stereo or mono. It's also possible to record only a single Paula channel.
I just tried release v0.9.13, and capture works really well! At first, however, it crashed for me, and I had to enter 1:1 as the pixel ratio in the preferences, as the default was 0:0 and ffmpeg was exiting with an error message.
The video quality is really good, I wonder if we could have an option for even higher bitrates as a test to see if 2k is really enough for really high motion stuff ?
Maybe have the bitrate dropdown editable so the user can write a custom value if wanted?
At first however it crashed for me, ... as the default was 0:0
OK, this shouldn't happen. The emulator needs to check the integrity of these values because they come from the user default storage.
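A minimal sketch of such a sanity check, assuming the aspect ratio is stored as two integers (names hypothetical):

```c
/* Sanitize an aspect ratio read from the user defaults before it
   reaches FFmpeg; a non-positive component falls back to 1:1. */
typedef struct { int num, den; } Ratio;

static Ratio checked_aspect(Ratio r)
{
    if (r.num <= 0 || r.den <= 0) {
        r.num = 1;
        r.den = 1;
    }
    return r;
}
```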
Maybe have the bitrate dropdown editable so the user can write a custom value if wanted?
Sounds like the way to go.
In v0.9.13.1, I've replaced the NSPopUpButton for the video bit rate with an NSComboBox.
Furthermore, the emulator shouldn't crash any more if invalid aspect ratios are specified.
Suggested by @alpine9000: Add the ability to record videos frame-accurately with a rate of 50Hz (similar to what UAE does).
Steps to do: