cirquit / trdrop

trdrop - a raw video analysis program
MIT License

Divide export sequence into multiple folders #78

Open aj752 opened 3 years ago

aj752 commented 3 years ago

Hi,

This is a feature request, not a problem with the software. I'm just pointing out something that I would find very useful, though maybe others wouldn't. Let me explain:

When analysing a long video (even starting from 2 minutes or longer) of a 60 FPS recording, you get A LOT of files, which is logical. In my workflow I use "export as overlay", bring the images into Final Cut Pro, adjust the duration per image to one frame, and stitch them together. This works perfectly until you go to 5 minutes or more. Even for a decent machine (i7 7700K, 64 GB RAM, working from an SSD) it gets too heavy to handle. I can imagine other people with less capable systems struggling with this.

It would be awesome if, for example, you could say: Folder1 is my export. Do you want to split the sequence? Yes. Then in Folder1 you would get a subfolder for each minute of the video (or two, or a setting for X minutes per subfolder). This way you could stitch the sequence easily by importing per folder and creating multiple overlays, which you could later group together. You can obviously work around this by importing only the files you want to sequence, but it gets messy, and when you have 30K-40K+ files in a directory it is just not great to work with.
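For what it's worth, the requested split can be sketched as a small script. This is a minimal sketch, not part of trdrop; the zero-padded `frame_*.png` naming and the `part_*` subfolder names are my own assumptions:

```python
# Sketch: split a numbered PNG export into per-minute subfolders.
# Assumes filenames sort in frame order (e.g. frame_000001.png, ...).
import os
import shutil

def split_into_subfolders(export_dir, fps=60, minutes_per_folder=1):
    """Move each exported frame into part_000, part_001, ... subfolders."""
    frames_per_folder = fps * 60 * minutes_per_folder
    pngs = sorted(f for f in os.listdir(export_dir) if f.endswith(".png"))
    for i, name in enumerate(pngs):
        part = i // frames_per_folder  # which minute-bucket this frame falls in
        subdir = os.path.join(export_dir, f"part_{part:03d}")
        os.makedirs(subdir, exist_ok=True)
        shutil.move(os.path.join(export_dir, name), os.path.join(subdir, name))
```

Each subfolder can then be imported into the editor as its own sequence.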

Or maybe I should be doing this completely different and change my workflow? Any suggestions highly appreciated.

Hope this makes sense.

Thanks!

cirquit commented 3 years ago

So my take on this: doing the subfolder thing should be quite easy, but it's still a somewhat hacky workaround, as you would still have to do the grouping yourself afterward in Final Cut Pro.

It seems the only way to really resolve this would be to allow a video export. I've taken a fresh look at the ffmpeg licenses, and it should all be compatible and not cause me any problems as long as I don't sell my software, which I'm not planning to do anyway. I'm still more or less restricted in the container/encoding choices, but I hope this won't be an issue.

So my proposal would be the following: scrap the whole frame-by-frame image export (or leave it in, but not as the default), and export an h264-encoded mp4 instead. To allow overlay export, I'd add a color picker where you can choose the color that will replace the current alpha channel. Do you have any other encodings/containers you feel are better suited for this kind of video export? I'm not sure what is currently state of the art, but I can at least look into what is possible besides the usual h264 + mp4.
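The color-replacement idea amounts to alpha-blending each overlay frame over a solid background before encoding. A minimal numpy sketch of that blend (the function name and defaults are illustrative, not trdrop's actual code):

```python
# Sketch: replace the alpha channel by compositing over a chosen solid color.
import numpy as np

def flatten_alpha(rgba, background=(0, 255, 0)):
    """Alpha-blend an (H, W, 4) uint8 frame onto a solid background color,
    returning an (H, W, 3) uint8 frame ready for h264 encoding."""
    rgb = rgba[..., :3].astype(np.float32)
    alpha = rgba[..., 3:4].astype(np.float32) / 255.0  # 0.0 = transparent
    bg = np.array(background, dtype=np.float32)
    out = rgb * alpha + bg * (1.0 - alpha)
    return out.astype(np.uint8)
```

The editor can then key out the chosen background color again, green-screen style.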

This will bring some inflexibility, as you probably won't be able to change the resolution of the video anymore while it's exporting, along with other settings that could corrupt the resulting video file. So I will have to manually disable all of those, and it will probably take some time to find them all.

aj752 commented 3 years ago

You're completely right, it is very laborious work, but that is the only way I can get it to work. I realise other users will most probably use the CSV and create their own graph; FCP doesn't allow this, and it gets quite difficult after that (for me at least).

So just to give you an idea of my workflow with your software:

When I capture, I do it on two cards at the same time. One is 4K compressed (HEVC 200 Mbit, so the image quality is very decent), recording to SD card, and the 1080p60 device is connected in between (it does 4K passthrough) and hooked up to my iMac. When I have a scene I would like to analyse, I start and stop recording in OBS from the 1080p device (the 4K one records everything to SD without a break). When everything is ready, I only use the 1080p video to match the sequence on the 4K video (down to the frame). Then I put the overlay on top of the 4K video for the best-looking result from the analysis. Making the overlay from the PNGs is by far the most work in these kinds of videos for me, but it works well and it is very reliable.

Exporting the overlay as a video with an alpha channel would obviously be extremely (and I mean EXTREMELY) awesome, at least in my case. Because of this, I only use your software for short video fragments of 35-120 seconds. Making this change would turn that 35-120 second limit into endless (until storage is full) time of analysis to show, in my case at least :) Having the PNG sequence divided into subfolders would be the next best thing for workflow reasons.

The resolution of the overlay should not be a huge problem, I think, if you pick something like 1080p. In my case I only edit 4K videos, but I export the overlay in 1080p for speed and because reworking it is faster. After that I just scale it a bit, and it looks more than sharp enough on a 2160p export. The other way around, exporting 4K for lower-resolution video, would probably also work pretty well. The beauty of the overlay in this format is that when you edit a video you can easily scale it to your liking; this goes for Adobe and FCP. I even have presets and reuse the overlay three times in one video to change the layout. Example here: layout video

EDIT: I realise you mean changing the resolution WHILE it is exporting, not changing the resolution at all. I wasn't even aware that was possible. When I pick a resolution, that is the one I need; I've never changed it while exporting, or had the need to, to be honest.

So regarding your question: I'm on a Mac, so for me the ProRes 4444 codec does this (alpha channel) by default, but that is a closed ecosystem. I wasn't even aware that H264 would allow exporting with an alpha channel, to be honest. Lots of devices and tools work with HEVC/H265 nowadays; maybe that would be worth exploring? If you decide to implement this and you're not running macOS, or don't have the possibility to test it, I would be happy to test outputs within FCP to see whether alpha channels are recognised correctly and so on. I'm guessing quite a few people do their editing on a Mac too.

cirquit commented 3 years ago

What settings are you using to make OBS record the 1080p stream? I have a feeling this might already be compressed.

So I just checked, and there are encodings like VP9 that allow an alpha channel. My very naive solution would have been to just use h.264 (which to my knowledge does not support an alpha channel, see this hacky way) and let the user pick a color to replace the "alpha" channel with. Just the way a green screen works, but you can choose the color, as I already foresee issues with people picking green as their framerate color.
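For reference, the VP9-with-alpha route can be driven from the existing PNG export with ffmpeg. A hedged sketch that only builds the command line (the frame pattern and output path are illustrative; run it via `subprocess` if ffmpeg is installed):

```python
# Sketch: ffmpeg command to encode a PNG overlay sequence to VP9 with
# a real alpha plane (yuva420p) in a WebM container.
def vp9_alpha_command(pattern, fps, output):
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,             # e.g. "export/frame_%06d.png"
        "-c:v", "libvpx-vp9",      # VP9 encoder with alpha support
        "-pix_fmt", "yuva420p",    # the 'a' plane carries transparency
        output,                    # e.g. "overlay.webm"
    ]
```

Whether the resulting WebM imports cleanly into FCP is a separate question, as noted below regarding VP9 support on macOS.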

As for HEVC: it allows an alpha channel, but it seems that's a proprietary extension for Apple products only? I also very much doubt that OpenCV, the library I'm using, has implemented support for that yet. HEVC itself (h.265) seems to have been included by default for some time.

I will play around with it when I get to implementing this feature, but I think the color solution is probably the one that will work for most users. Creating a mask should be bread and butter for FCP, Premiere, or other video editing software.

How are you using trdrop on a Mac btw?

aj752 commented 3 years ago

I'm using the "Lossless Quality, Tremendously Large File Size" option in OBS. The HD60S+ does not have an encoder, so I'm really hoping you won't tell me it is wrong, haha (lots of time wasted :))

Yes, I see; the alpha channel thing may really just be an Apple thing for easy export. VP9 support is, I believe, limited in FCP (macOS), so I'm not sure how that would work out there. The color picker should indeed be a piece of cake for any video editing software, and it would probably give you the most compatibility among your users.

Because I mostly do short scenes, as mentioned before, I usually have multiple scenes to analyse. For trdrop purposes only :) I have 4-5 VMs running in VMware Fusion. One scene? Then I spin up one VM. Four or more scenes? I spin up all of them, and so on, to accelerate the workflow. The CPU is never fully utilised within a VM (at least not more than one core); I guess because of the serial nature of the video processing? I use a Thunderbolt SSD which I "share" with the VMs, so they export directly to the SSD, and from FCP I can just navigate to the directory for import. It took me a few hours to get everything in place, but it makes a great difference when actually analysing and editing more fluidly.

cirquit commented 3 years ago

On the OBS options: I will check out what happens there when I have time, but it will be useful to know for #77.

Otherwise, I feel like I should do a macOS version; that would remove most of the complexity from your workflow. I have access to a MacBook, so I'll look into cross-compiling when I get the time.

The analysis should actually be parallelized with OpenMP and should use the number of threads it deems necessary. Additional parallelization happens if you analyze multiple videos (one thread per video). I'll have to take a look into that as well.

Thanks for the explanation, let's hope I find some time before the next winter holidays to work on trdrop :cry:

fakedeltatime commented 3 years ago

I'm not exactly sure how Final Cut Pro works, or whether something invalidates my opinion on this, but I think the problem you're having comes primarily from Final Cut Pro itself, or from your import method in Final Cut Pro. This is not meant to invalidate the feature request.

I just exported a test analysis of a 7+ minute 1080p60 video as a .png overlay, and while the export does take long, importing into my software of choice, in this case the Olive video editor, takes no more than a second, and the same goes for adding that imported sequence to my timeline.

I'm using a Ryzen 3600 paired with 16 GB of RAM, on Linux, running off an HDD.

So if I understood correctly, you're importing them as a set of individual images, adjusting the duration of each to a single frame, and then stitching them together into a clip? Is there no way to import them all at once as a single image sequence, so that the stitched clip is created at import?

aj752 commented 3 years ago

In my opinion, nothing invalidates your opinion :) I agree with you: the problem is FCP and how it handles the image sequence, especially with a (much) larger number of files, whereas for you it just works.

FCP indeed doesn't support image sequence import natively; you can make it work, but it seems it is not meant for this job. You cannot import the sequence as a single clip; you need to group the images after importing them all (create a compound clip). None of the tutorials I've watched ever mentioned other Apple software for this, and I wasn't aware of any. Prompted by your reply, I've done multiple (new) searches trying to find a solution. What I found out is that you can do this with Apple Motion (Apple's version of Adobe After Effects). That was a bummer, because it is a paid program for this purpose only.

The other option I found is to do this with Apple Compressor. This is also paid software, but I already owned it; go figure how stupid I feel. I can import one image, tell it that it is an "image sequence", select the folder, and indeed it is imported in less than a second. I can then export as ProRes 4444 (alpha channel), which takes about 2x the duration of the sequence. This export I can easily import into FCP and use as an overlay. No more hacky stuff like cirquit mentioned :)
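For readers without Compressor, the same PNG-sequence-to-ProRes-4444 step should also be possible with ffmpeg's `prores_ks` encoder. A hedged sketch that only builds the command line (frame pattern and output path are illustrative):

```python
# Sketch: ffmpeg command to encode a PNG overlay sequence to ProRes 4444,
# which keeps the alpha channel and imports natively into FCP.
def prores4444_command(pattern, fps, output):
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,                  # e.g. "export/frame_%06d.png"
        "-c:v", "prores_ks",            # ffmpeg's ProRes encoder
        "-profile:v", "4444",           # the 4444 profile carries alpha
        "-pix_fmt", "yuva444p10le",
        output,                         # e.g. "overlay.mov"
    ]
```

This avoids any paid software, at the cost of installing ffmpeg.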

I'm so glad you guys replied and pointed some things out; it made me look at this differently and resolved the issue. Obviously, export to video would still be an awesome feature for trdrop, as it would remove the additional step of creating the clip (for all users), but the need for subfolders disappeared with these new insights. Really appreciate the replies!

aj752 commented 3 years ago


I uploaded a 1080p60 recording made with OBS (with the setting I mentioned before). Not sure if it will be helpful, but I thought it wouldn't hurt: OBS recording