nipy / PySurfer

Cortical neuroimaging visualization in Python
https://pysurfer.github.io/
BSD 3-Clause "New" or "Revised" License

WIP: save a movie using Brain.save_movie() #79

Closed christianbrodbeck closed 9 years ago

christianbrodbeck commented 10 years ago

This PR would make it possible to export a movie for a brain object with a data layer that has a time axis (see the modified example). I've used FFmpeg for conversion, but support for other converters could be added. If this is considered useful, it could be extended to include several views in the movie as tiles.
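Not the PR's actual code, but a minimal sketch of what wrapping FFmpeg for the frames-to-movie step could look like (the function name, options, and `dry_run` flag are assumptions; it presumes the frames were already written to disk, e.g. by `save_image_sequence`):

```python
import subprocess

def frames_to_movie(frame_pattern, out_fname, framerate=24, codec="mpeg4",
                    dry_run=False):
    """Convert a saved image sequence into a movie by calling FFmpeg.

    ``frame_pattern`` is a printf-style pattern such as ``'frame%04d.png'``,
    matching the numbered files written by the image-sequence export.
    """
    cmd = ["ffmpeg", "-y",                 # overwrite the output if it exists
           "-framerate", str(framerate),   # input frame rate
           "-i", frame_pattern,            # numbered input frames
           "-c:v", codec,                  # video codec
           out_fname]
    if dry_run:
        return cmd                         # just inspect the command
    subprocess.check_call(cmd)
    return cmd
```

Swapping in another converter would then only mean building a different command list.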

mwaskom commented 10 years ago

I thought we already had something to do this? I've never used it, but something must have made this.

christianbrodbeck commented 10 years ago

@mwaskom Yeah you can export individual frames (save_image_sequence), then compose them externally. What this PR could add is the option to save different views separately (there is a big warning against having different views in the same figure now) and automatically compose them in a movie.

christianbrodbeck commented 10 years ago

As far as I can see, an alternative way to achieve that would be to extend save_montage to grid-like montages, and then allow a montage in save_image_sequence. It might involve adding somewhat wild argument interpretation to the existing API, though.

mwaskom commented 10 years ago

Ah, OK.

I don't really do anything with an interesting time dimension, so best to let the MEG folks weigh in here.

agramfort commented 10 years ago

any chance to add a test to keep our high coverage?

christianbrodbeck commented 10 years ago

@agramfort I will definitely also add tests, I just wanted to first gauge interest in the separate class for tiling images into movies. If you think it's a good addition I will start working on tests and functionality for tiling different views.

agramfort commented 10 years ago

I would make the class private if users are meant to just call save_movie.

I don't see any danger in a complex API.

christianbrodbeck commented 10 years ago

I've added the views parameter to Brain.save_movie() to illustrate the tiling, using the data from the plot_meg_inverse_solution.py example. Haven't handled turning off labels yet:

brain.save_movie('~/Desktop/test', views=[['lat', 'fro'], ['med']])

[screenshot (2014-01-17): a movie frame with the lateral and frontal views tiled in the top row and the medial view below]

As far as I can see the tiling functionality could also be achieved by extending .save_montage() and it could be exposed through accepting nested lists as order parameter (although I haven't actually tried it). FFmpeg could then be wrapped in a simpler way and the ImageTiler class could be avoided.
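For illustration, the core of such an ImageTiler could be fairly small; this is a hedged sketch (not the PR's implementation) of laying out images according to a nested list like `[['lat', 'fro'], ['med']]`, assuming the images within each row share the same height:

```python
import numpy as np

def tile_images(rows):
    """Tile images (H x W x 3 uint8 arrays) according to a nested list.

    ``rows`` is a list of rows, each a list of images, e.g.
    ``[[lat, fro], [med]]``. Shorter rows are padded with white on the
    right so that every row has the same width before vertical stacking.
    """
    # concatenate the images of each row horizontally
    tiled_rows = [np.concatenate(row, axis=1) for row in rows]
    width = max(r.shape[1] for r in tiled_rows)
    padded = []
    for r in tiled_rows:
        pad = width - r.shape[1]
        if pad:
            fill = np.full((r.shape[0], pad, r.shape[2]), 255, dtype=r.dtype)
            r = np.concatenate([r, fill], axis=1)
        padded.append(r)
    # stack the equal-width rows vertically
    return np.concatenate(padded, axis=0)
```

The same layout convention could serve either a standalone tiler class or a nested-list `order` argument to `save_montage`.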

On the other hand the ImageTiler class might be more flexible (e.g. it could be used for making a movie of topographic sensor space data alongside a source estimates in combination with mne-python).

What do you prefer?

christianbrodbeck commented 10 years ago

And what do you think in general about a Brain.save_movie() method wrapping FFmpeg?

larsoner commented 10 years ago

I don't have particularly strong feelings either way. ffmpeg isn't a bad option, since it's available on most *nix platforms.

agramfort commented 10 years ago

As far as I can see the tiling functionality could also be achieved by extending .save_montage() and it could be exposed through accepting nested lists as order parameter (although I haven't actually tried it). FFmpeg could then be wrapped in a simpler way and the ImageTiler class could be avoided.

I like the idea of nested lists to specify a montage. I find it really flexible. I would be +1 for adding it to save_montage. As you suggest it would simplify wrapping.

On the other hand the ImageTiler class might be more flexible (e.g. it could be used for making a movie of topographic sensor space data alongside a source estimates in combination with mne-python).

Good point...

I am neutral to be honest.

christianbrodbeck commented 10 years ago

What would be the simplest way to create a movie including both left and right hemisphere? One way I could think of that would keep using the montage function is to have a brain with hemi='both' and then alternately hide one hemisphere while adding the other to the montage. Is there a simpler way to do this?

matt-erhart commented 10 years ago

I was just looking into figuring this out myself, but it looks like it's in the midst of being added. Looking forward to it!

agramfort commented 10 years ago

@mattjerhart maybe you can test this branch and give your feedback. If it works great for you it will speed up the merge.

christianbrodbeck commented 10 years ago

I added https://github.com/nipy/PySurfer/pull/89 which uses Brain.save_montage() for saving a movie.

larsoner commented 9 years ago

@christianbrodbeck are you still interested in working on this? If so, it might be easiest / best to use MoviePy:

http://zulko.github.io/blog/2014/11/29/data-animations-with-python-and-moviepy/

WDYT?

dengemann commented 9 years ago

Looks cool!!


dengemann commented 9 years ago

Eric, how far have you guys gotten with Vispy over the last months? Is it ready for meshes and time courses? Do you have a demo by chance?


larsoner commented 9 years ago

It is getting close. I plan on working on this and raw plotting in March :)

christianbrodbeck commented 9 years ago

Yes, looks great, and I just pip installed it painlessly, so it could be added to the setup.py dependencies and users would not need to download anything externally, right?

It looks pretty simple, should I update this PR? Though if we are going to abandon Mayavi soon, is it still worth the effort? :)

larsoner commented 9 years ago

Heh, yes, even if we go to vispy then that will just change how the screenshot images are generated to be put into moviepy, it shouldn't really affect the movie-making API. So if you're up for getting it working, please do :)

christianbrodbeck commented 9 years ago

Ok great, I will

larsoner commented 9 years ago

Excellent. It might be worth checking to make sure @agramfort and @mwaskom agree before putting too much effort in, in case they have other good / alternative ideas...?

mwaskom commented 9 years ago

fMRI is temporally boring so I don't do much with movies -- I trust you MEG folk on this topic :)

christianbrodbeck commented 9 years ago

Ok I'll hold off for their approval :)

agramfort commented 9 years ago

pip installed without pain too ! let's do a quick POC to see how it flies...

larsoner commented 9 years ago

Sounds like you're good to go @christianbrodbeck

christianbrodbeck commented 9 years ago

Actually... I just read that MoviePy requires and internally calls ffmpeg (http://zulko.github.io/moviepy/install.html), and

""" And here are a few uses for which MoviePy is NOT the best solution: ...

""" (http://zulko.github.io/moviepy/getting_started/quick_presentation.html)

So I don't actually think it would improve over this PR(?)

larsoner commented 9 years ago

Hmm... I wonder about a few issues related to the two approaches (moviepy vs subprocess calls):

  1. How easy is it to do things like set the time scale in your code vs moviepy? In other words, if I want 100 ms to take 1 sec to play, is custom code or moviepy easier?
  2. How about if I want to change my output size, output file type, etc.? Does moviepy make this easier or harder?
  3. How well tested will moviepy be on different platforms and use cases vs our custom code?
  4. Does moviepy plan to implement support for multiple encoding backends at some point?

I'm not sure what the answers are to these questions, but I'd look at how MoviePy operates (development model, etc.) to see if it's worth hitching our horse to their wagon or not.

christianbrodbeck commented 9 years ago

For 1 and 2, they're all ffmpeg parameters and should be easy to add to the wrapper. For 3, since it is a simple subprocess wrapper, I don't think testing is a big issue, and they say themselves it's not meant for what we want to do... 4 might come in the future, at which point we could replace our wrapper, but it does not sound like it from the website.
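For instance, the time-scale question (point 1 above) reduces to the same arithmetic whichever backend renders the frames; a hedged sketch (the function name and parameters are hypothetical, assuming one movie frame per time sample):

```python
def movie_framerate(tmin, tmax, n_times, time_dilation=4.0):
    """Output frame rate so playback is ``time_dilation`` times slower
    than real time.

    With ``time_dilation=10``, 100 ms of data plays back over 1 s of
    movie. Assumes one movie frame is rendered per time sample.
    """
    data_duration = tmax - tmin                    # seconds of data
    movie_duration = data_duration * time_dilation # seconds of playback
    return n_times / movie_duration                # frames per second
```

The resulting value would simply be passed to ffmpeg's `-framerate` option.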

Based on this I don't think that using moviepy would change this PR dramatically, or would it be the drop that brings the barrel to a merge? :)

larsoner commented 9 years ago

I'm okay with sticking with ffmpeg for now if it seems like the best approach to you. Should I look at this PR or #89, or something else? Looks like both PRs need a rebase in any case.

christianbrodbeck commented 9 years ago

#89 is probably better because it reuses the montage code instead of implementing separate code to combine images. I'll rebase and ping you there when it's ready.

christianbrodbeck commented 9 years ago

Closing this for #89