ultramango / gear360pano

Simple script to create equirectangular panorama by stitching images from Samsung Gear 360
MIT License

Crash on long video processing #32

Closed evertvorster closed 6 years ago

evertvorster commented 7 years ago

Hi there. I routinely take clips that are many thousands of frames long. With the default setup, /tmp is used for the blended images, and on Linux, at least, this directory is mounted in memory. Once the memory filesystem runs out of space, the script fails.

I suggest putting the blended images in a subdirectory of the directory the source clip is in, just like the extracted frames. The chances are that the filesystem where the raw video is kept has a lot more space than the RAM of the system. This does not seem to impact the speed of the script at all.

I like that the intermediate frames from Nona are being put into RAM; it does speed things up a bit.

Kind regards, -Evert-

ultramango commented 7 years ago

Hi,

I can make it an option (it makes sense for "special" configurations).

I'd argue that the way it is done in the script is pretty standard. The script uses the system's designated command to get a temporary directory: mktemp (on Windows it should be the TEMP env variable). Normally any temporary files get cleaned up by the script, but in case of problems where files do not get deleted, the system should take care of removing them (at some point).
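A minimal sketch of that standard pattern (the `WORKDIR` name and the directory template are illustrative, not taken from the actual script):

```shell
#!/bin/sh
# Ask the system for a temporary directory via mktemp, honouring a
# TMPDIR override for machines whose /tmp is a small RAM filesystem.
WORKDIR=$(mktemp -d "${TMPDIR:-/tmp}/gear360pano.XXXXXX") || exit 1

# Remove the intermediates when the script exits, success or failure.
trap 'rm -rf "$WORKDIR"' EXIT

echo "intermediate frames go to: $WORKDIR"
```

Setting `TMPDIR` before invoking such a script is often enough to move the intermediates off the RAM filesystem without any code changes.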

BTW: I was looking at "live" frame processing but I couldn't find anything (simple) that could pipe the frames to an external command for processing.

L.

evertvorster commented 7 years ago

Hi there. I also looked into piping the output from nona into enblend. It can't be done on UNIX, as the pipe buffer is only in the kilobyte range.

You have the best solution of writing into memory and then immediately using it again. At most this uses a few megabytes of memory.

However, since it's quite easy to make very large video clips, they should not be expanded into memory.

If it were possible, another option would be to extract one frame, pause the extraction, process the frame, encode it, pause the encoding, and unpause the decoding for the next frame, repeating until the input is completed. I doubt ffmpeg supports such a thing, and it may be overly complicated and fragile.

Kind regards, Evert

ultramango commented 7 years ago

A bit off-topic: a small demo snippet on how to do video processing with an external command (a bit ugly in my opinion):

https://gist.github.com/ultramango/bfee661b59333ef3ffcb7a6824eb6565

Now, given a panorama script that reads from stdin and writes to stdout, this could save disk space in the case of long videos. Not sure about the speed penalty.
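A rough sketch of that stdin/stdout flow, assuming ffmpeg is installed. A synthetic testsrc clip stands in for the camera footage, and `cat` stands in for the hypothetical stitching command; file names are placeholders:

```shell
#!/bin/sh
# Stream processing: decode frames to a JPEG stream on stdout, run them
# through a filter command, and re-encode from stdin, so no frame
# directory is ever written to disk.
if ! command -v ffmpeg >/dev/null 2>&1; then
    echo "SKIP: ffmpeg not found"; exit 0
fi

# Self-contained input: a 1-second synthetic clip.
ffmpeg -loglevel error -y -f lavfi \
       -i "testsrc=duration=1:size=320x240:rate=10" in.mp4

ffmpeg -loglevel error -i in.mp4 -f image2pipe -vcodec mjpeg -q:v 2 - \
  | cat \
  | ffmpeg -loglevel error -y -f image2pipe -vcodec mjpeg -framerate 10 -i - \
           -c:v mpeg4 -pix_fmt yuv420p out.mp4
```

Replacing `cat` with a stitcher that consumes and emits concatenated JPEGs would give the disk-space saving described above, at the cost of keeping all three processes in lockstep.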

evertvorster commented 7 years ago

Hi there!

I am pretty happy with the workflow I have so far. I was able to boost the JPEG quality from ffmpeg with the -q 2 parameter (it just about doubles the output size of the .jpg files created in the first step). I was also able to change the re-encoding at the end to HEVC format, with a quality setting of 18. If you are interested in putting this into your script, I would be more than happy to give you a copy of the modified script to use as you please.
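The two tweaks described above might look roughly like this, assuming ffmpeg is installed; a synthetic testsrc clip stands in for the Gear 360 footage, and the frame pattern and file names are placeholders:

```shell
#!/bin/sh
if ! command -v ffmpeg >/dev/null 2>&1; then
    echo "SKIP: ffmpeg not found"; exit 0
fi
ffmpeg -loglevel error -y -f lavfi \
       -i "testsrc=duration=1:size=320x240:rate=10" clip.mp4

# 1) Extract frames at higher JPEG quality: -q:v 2 (lower is better;
#    roughly doubles the size of the .jpg files versus the default).
mkdir -p frames
ffmpeg -loglevel error -i clip.mp4 -q:v 2 frames/img%05d.jpg

# 2) Re-encode the (stitched) frames to HEVC at CRF 18, when libx265
#    is available in this ffmpeg build.
if ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libx265; then
    ffmpeg -loglevel error -y -framerate 10 -i frames/img%05d.jpg \
           -c:v libx265 -crf 18 -pix_fmt yuv420p out_hevc.mp4
fi
```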

However, I am now looking into the remap filter of ffmpeg. Here are a couple of interesting links:

https://trac.ffmpeg.org/wiki/RemapFilter#no1
https://stackoverflow.com/questions/37431516/remapping-multiple-mp4-videos-into-a-single-one-with-ffmpeg

It will take quite a bit of fine-tuning, and they have not solved the seam lines or vignetting, but that may just be another layer of filters. Hopefully, at the end, I will have a one-liner ffmpeg command that converts the Gear 360 footage to equirectangular with hardware acceleration.
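The mechanics of the remap filter can be sketched as follows, assuming an ffmpeg build with the remap and geq filters. Identity maps are generated here only to show the plumbing; a real fisheye-to-equirectangular map would come from the lens model:

```shell
#!/bin/sh
if ! command -v ffmpeg >/dev/null 2>&1; then
    echo "SKIP: ffmpeg not found"; exit 0
fi
W=64; H=64

# Build identity maps (each destination pixel reads from its own x/y)
# as 16-bit grayscale PGM files, the format the remap filter expects.
ffmpeg -loglevel error -y -f lavfi \
       -i "nullsrc=size=${W}x${H},format=gray16le,geq=lum='X'" \
       -frames:v 1 xmap.pgm
ffmpeg -loglevel error -y -f lavfi \
       -i "nullsrc=size=${W}x${H},format=gray16le,geq=lum='Y'" \
       -frames:v 1 ymap.pgm

# Apply the maps to a synthetic clip; with identity maps the output
# simply equals the input.
ffmpeg -loglevel error -y \
       -f lavfi -i "testsrc=duration=1:size=${W}x${H}:rate=10" \
       -i xmap.pgm -i ymap.pgm \
       -lavfi "format=pix_fmts=rgb24,remap" \
       -c:v mpeg4 -pix_fmt yuv420p remapped.mp4
```

Because the per-pixel lookup is computed once and then applied to every frame, this is where the large speed-up over per-frame stitching would come from.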

It does not do stabilization, and frankly, ActionDirector's stabilization harms more than it fixes. I have been using the panoramatools plugin in Blender to get my video super-stable; however, it's excruciatingly slow.

Happy coding! -Evert-


ultramango commented 6 years ago

You can now set the temporary directory for the video script, see the -t|--temp DIR option; this should fix the problem.
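The mechanics of such an option, as a simplified sketch (the real script's argument parsing may differ; variable names here are illustrative):

```shell
#!/bin/sh
# Let the user point intermediates at a bigger disk via -t/--temp,
# falling back to the system temp directory otherwise.
TEMPDIR=""
while [ $# -gt 0 ]; do
    case "$1" in
        -t|--temp) TEMPDIR="$2"; shift 2 ;;
        *) shift ;;
    esac
done

# No override given: ask the system via mktemp.
[ -n "$TEMPDIR" ] || TEMPDIR=$(mktemp -d)

echo "temporary files in: $TEMPDIR"
```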

The remap filter looks interesting; it would speed up the process a lot, provided the map is built in the first place. I guess there's some software to do the re-mapping; the only idea I have is to create an artificial 32-bit input image, stitch it, and then find which pixel (unique colour) went where.

BTW: I did some experiments with pipe processing; it's a bit slower than unpacking the video but saves a lot of disk space. Gists:

https://gist.github.com/ultramango/8365652860090f67c6c857c0a2d79704
https://gist.github.com/ultramango/86f9d40f249ce15e41217ec65d293939

Note: it takes a lot of trickery to do that.

ultramango commented 6 years ago

Here's a wiki entry about it: https://github.com/ultramango/gear360pano/wiki/Stream-Processing (I totally forgot about it).