Closed: christianbrodbeck closed this 9 years ago
I thought we already had something to do this? I've never used it, but something made this...
@mwaskom Yeah you can export individual frames (save_image_sequence), then compose them externally. What this PR could add is the option to save different views separately (there is a big warning against having different views in the same figure now) and automatically compose them in a movie.
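For reference, the existing route is roughly this (an untested sketch; the indices, paths, and data layer are illustrative, and I'm assuming save_image_sequence takes a list of time indices plus a filename pattern):

import subprocess
from surfer import Brain

brain = Brain('fsaverage', 'lh', 'inflated')
# ... add a data layer with a time axis, e.g. brain.add_data(...) ...
brain.save_image_sequence(range(10), '/tmp/frame%04d.png')  # one PNG per time index
# compose the frames externally, e.g. with FFmpeg:
subprocess.check_call(['ffmpeg', '-y', '-framerate', '10',
                       '-i', '/tmp/frame%04d.png', '/tmp/brain.mp4'])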
As far as I can see, an alternative way to achieve that would be to extend save_montage to grid-like montages, and then allow a montage in save_image_sequence. That might involve adding somewhat convoluted argument interpretation to the existing API, though.
Ah, OK.
I don't really do anything with an interesting time dimension, so best to let the MEG folks weigh in here.
Any chance of adding a test to keep our coverage high?
@agramfort I will definitely add tests too; I just wanted to first gauge interest in the separate class for tiling images into movies. If you think it's a good addition I will start working on tests and functionality for tiling different views.
I would make the class private if users are meant to just call save_movie
I didn't see any danger in a complex API.
I've added the views parameter to Brain.save_movie() to illustrate the tiling, using the data from the plot_meg_inverse_solution.py example. Haven't handled turning off labels yet:
brain.save_movie('~/Desktop/test', views=[['lat', 'fro'], ['med']])
As far as I can see the tiling functionality could also be achieved by extending .save_montage(), and it could be exposed by accepting nested lists as the order parameter (although I haven't actually tried it). FFmpeg could then be wrapped in a simpler way and the ImageTiler class could be avoided.
On the other hand the ImageTiler class might be more flexible (e.g. it could be used for making a movie of topographic sensor space data alongside a source estimates in combination with mne-python).
What do you prefer?
And in general, what do you think about a Brain.save_movie() method wrapping FFmpeg?
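To make that concrete, the nested-order variant might look something like this (hypothetical API; order currently takes a flat list of view names):

brain.save_montage('montage.png', order=[['lat', 'fro'], ['med']])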
I don't have particularly strong feelings either way. ffmpeg isn't a bad option, since it's available on most *nix platforms.
As far as I can see the tiling functionality could also be achieved by extending .save_montage(), and it could be exposed by accepting nested lists as the order parameter (although I haven't actually tried it). FFmpeg could then be wrapped in a simpler way and the ImageTiler class could be avoided.
I like the idea of nested lists to specify a montage; I find it really flexible. I would be +1 for adding it to save_montage. As you suggest, it would simplify the wrapping.
On the other hand the ImageTiler class might be more flexible (e.g. it could be used for making a movie of topographic sensor space data alongside a source estimates in combination with mne-python).
Good point...
I am neutral to be honest.
What would be the simplest way to create a movie including both the left and right hemisphere? One way I can think of that would keep using the montage function is to have a brain with hemi='both' and then alternately hide one hemisphere while adding the other to the montage. Is there a simpler way to do this?
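One workaround I could imagine that avoids touching the montage code, sketched here with Pillow (untested; assumes SUBJECTS_DIR is set up for 'fsaverage'): render each hemisphere in its own Brain, grab screenshots, and paste them side by side.

from PIL import Image
from surfer import Brain

imgs = []
for hemi in ('lh', 'rh'):
    b = Brain('fsaverage', hemi, 'inflated')  # one window per hemisphere
    imgs.append(Image.fromarray(b.screenshot()))  # RGB array -> PIL image
    b.close()

# paste the two screenshots side by side on a shared canvas
width = sum(im.size[0] for im in imgs)
height = max(im.size[1] for im in imgs)
canvas = Image.new('RGB', (width, height))
x = 0
for im in imgs:
    canvas.paste(im, (x, 0))
    x += im.size[0]
canvas.save('both_hemis.png')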
I was just looking into figuring this out myself, but it looks like it's in the midst of being added. Looking forward to it!
@mattjerhart maybe you can test this branch and give your feedback. If it works well for you, it will speed up the merge.
I added https://github.com/nipy/PySurfer/pull/89, which uses Brain.save_montage() for saving a movie.
@christianbrodbeck are you still interested in working on this? If so, it might be easiest / best to use MoviePy:
http://zulko.github.io/blog/2014/11/29/data-animations-with-python-and-moviepy/
WDYT?
Looks cool!!
Eric, how far have you guys gotten with Vispy over the last months? Is it ready for meshes and time courses? Do you have a demo by chance?
It is getting close. I plan on working on this and raw plotting in March :)
Yes, looks great, and I just pip installed it painlessly, so it could be added to the setup.py dependencies and we would not need to have users download anything externally, right? It looks pretty simple; should I update this PR? Only, if we are going to abandon Mayavi soon, is it still worth the effort? :)
Heh, yes, even if we go to vispy, that will just change how the screenshot images are generated before being put into moviepy; it shouldn't really affect the movie-making API. So if you're up for getting it working, please do :)
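Something as small as this might do it (untested sketch; the mapping from clip time to data time index is illustrative, and I'm assuming set_data_time_index is the right updater):

from moviepy.editor import VideoClip

def make_frame(t):
    # map clip time in seconds to a data time index -- illustrative only
    brain.set_data_time_index(int(t * 10))
    return brain.screenshot()

clip = VideoClip(make_frame, duration=2.0)
clip.write_videofile('brain.mp4', fps=24)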
Ok great, I will
Excellent. It might be worth checking to make sure @agramfort and @mwaskom agree before putting too much effort in, in case they have other good / alternative ideas...?
fMRI is temporally boring so I don't do much with movies -- I trust you MEG folk on this topic :)
Ok I'll hold off for their approval :)
pip installed without pain too! Let's do a quick POC to see how it flies...
Sounds like you're good to go @christianbrodbeck
Actually... I just read that MoviePy requires and internally calls ffmpeg (http://zulko.github.io/moviepy/install.html), and:
"And here are a few uses for which MoviePy is NOT the best solution: ..."
(http://zulko.github.io/moviepy/getting_started/quick_presentation.html)
So actually I don't think it will improve over this PR(?)
Hmm... I wonder about a few issues related to the two approaches (moviepy vs subprocess calls):
I'm not sure what the answers are to these questions, but I'd look at how MoviePy operates (development model, etc.) to see if it's worth hitching our horse to their wagon or not.
For 1 and 2, they're all ffmpeg parameters and should be easy to add to the wrapper (rough sketch below). For 3, since it is a simple subprocess wrapper, I don't think there is a big issue in testing, and they say themselves it's not meant for what we want to do... 4 might be for the future, at which point we could replace our wrapper, but it does not sound like it from the website.
Based on this I don't think that using moviepy would change this PR dramatically, or would it be the final nudge that gets it merged? :)
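Something along these lines is what I mean by the wrapper (names and defaults illustrative, not the PR's actual code):

import subprocess

def ffmpeg_movie(frame_pattern, out_fname, framerate=10, codec='mpeg4'):
    """Compose numbered frames (e.g. 'frame%04d.png') into a movie."""
    cmd = ['ffmpeg', '-y',                # overwrite output without asking
           '-framerate', str(framerate),  # input frame rate
           '-i', frame_pattern,
           '-c:v', codec,                 # video codec
           out_fname]
    subprocess.check_call(cmd)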
I'm okay with sticking with ffmpeg for now if it seems like the best approach to you. Should I look at this PR or #89, or something else? Looks like both PRs need a rebase in any case.
Closing this in favor of #89.
This PR would make it possible to export a movie for a brain object with a data layer that has a time axis (see the modified example). I've used FFmpeg for the conversion, but support for other converters could be added. If this is considered useful, it could be extended to include several views in the movie as tiles.
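For reference, the basic call would be along these lines (path illustrative; see the views example earlier in the thread for tiling):

brain.save_movie('~/Desktop/stc_movie.mov')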