mne-tools / mne-gsoc2018-3d

Sandbox for GSoC 2018 3D Viz
BSD 3-Clause "New" or "Revised" License

Fixes #1. Simple brain mesh. #8

Closed: OlehKSS closed this 6 years ago

OlehKSS commented 6 years ago

I have several questions regarding this task:

  * Are there any patterns in the way MNE or other EEG/MEG data are typically stored?
  * Should I create a BrainMesh/Surface class for storing mesh data (e.g. vertices and faces)? Will it be useful later?

Another general question: what do you think about adding a GitHub project to this repository, and adding me as a collaborator, so I will be able to assign myself the issues I am working on and move them between columns according to their status (for example, 'To Do', 'In Progress', 'Done')?

I will add unit tests later.

larsoner commented 6 years ago

Are there any patterns in the way MNE or other EEG/MEG data are typically stored?

Check out the simplified ps.py gist I linked to before:

https://gist.github.com/larsoner/25fa656c6d6e0b02b56c40a571bfb77c

These are the standard data structures that MNE uses (and that your code will presumably need to use).

Should I create a BrainMesh/Surface class for storing mesh data (e.g. vertices and faces)? Will it be useful later?

No, let's stick with simple NumPy arrays for now, similar to how it's done in the gist.
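For reference, a minimal sketch of the plain-array approach (the surface path here is hypothetical; mne.read_surface is the standard reader and already returns bare NumPy arrays):

```python
import mne

# Hypothetical path to a FreeSurfer surface file; adjust to your subjects_dir.
surf_path = 'subjects/sample/surf/lh.inflated'

# mne.read_surface returns plain NumPy arrays, no custom classes:
# vertices has shape (n_vertices, 3) -- one x/y/z position per row;
# faces has shape (n_faces, 3) -- integer indices into vertices,
# one triangle per row.
vertices, faces = mne.read_surface(surf_path)
print(vertices.shape, faces.shape)
```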

What do you think about adding a GitHub project to this repository, and adding me as a collaborator?

I've given you admin permissions so you can tweak these things.

OlehKSS commented 6 years ago

@larsoner, after analyzing the gist that you've shared, I think I will need to add these:

  * camera scaling factor
  * light direction and color
  * shading
  * azimuth control for each mesh

I am checking whether these camera, light, and shading options are available in ipyvolume. Do you agree with the list above? Have I missed something?

Moreover, I can't find explicit documentation for the mesh.set_gl_state function. It takes a dictionary of options, but I can't find what those options are. Could you suggest anything?
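For reference, set_gl_state appears to be vispy's visuals API (not ipyvolume's); its keyword options map onto OpenGL state flags, plus a few named presets. A minimal sketch under that assumption, with a one-triangle stand-in mesh:

```python
import numpy as np
from vispy import scene

# A single triangle, just to have a mesh to attach GL state to.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]], dtype=np.float32)
faces = np.array([[0, 1, 2]], dtype=np.uint32)

canvas = scene.SceneCanvas(keys='interactive', show=False)
view = canvas.central_widget.add_view()
mesh = scene.visuals.Mesh(vertices=verts, faces=faces, parent=view.scene)

# 'translucent' is one of vispy's documented presets ('opaque',
# 'translucent', 'additive'); keyword arguments toggle individual GL
# capabilities such as depth testing and face culling.
mesh.set_gl_state('translucent', depth_test=True, cull_face=False)
```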

I haven't found anything else that should go into the read_brain_mesh function of the viz.py module. As far as I understand, the curvature data corresponds to the color of the mesh, so it will be used in #2. I also don't think I need to manipulate the data for now.
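A minimal sketch of how the curvature could later drive the mesh color in #2 (the path is hypothetical; read_morph_data is nibabel's FreeSurfer reader, and the binarization trick is a common convention in brain viewers, not something prescribed here):

```python
import numpy as np
import nibabel as nib

# Hypothetical curvature path; one scalar value per vertex of the
# matching surface file.
curv = nib.freesurfer.read_morph_data('subjects/sample/surf/lh.curv')

# Common convention: binarize the curvature so sulci and gyri get
# two different shades of gray.
gray = np.where(curv > 0, 0.33, 0.66)
```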

OlehKSS commented 6 years ago

Well, there are no options available in ipyvolume for camera scaling factor, light direction and color, or shading. Azimuth can be changed for the whole scene (or figure), but not for separate meshes.

larsoner commented 6 years ago

Well, there are no options available in ipyvolume for camera scaling factor, light direction and color, or shading.

That's okay, we will probably need to add these later. For now we can just live with the defaults, whatever they are.

Azimuth can be changed for the whole scene (or figure), but not for separate meshes.

That's okay; I think we want them to move together. Eventually we will want, e.g., separate left- and right-hemisphere plots, but for now plotting just the left, just the right, or both together (moving as one) is a good start.
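A minimal sketch of that start, assuming hypothetical surface paths: both hemispheres go into one ipyvolume figure, so the scene-level camera moves them together:

```python
import ipyvolume as ipv
import mne

ipv.figure()
for hemi in ('lh', 'rh'):
    # Hypothetical paths; adjust to your subjects_dir.
    verts, faces = mne.read_surface('subjects/sample/surf/%s.inflated' % hemi)
    ipv.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces)

# Azimuth/elevation apply to the whole figure, not to individual meshes.
ipv.view(azimuth=90, elevation=10)
ipv.show()
```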

I haven't found anything else that should go into the read_brain_mesh function of the viz.py module. As far as I understand, the curvature data corresponds to the color of the mesh, so it will be used in #2.

What is the plan for the read_brain_mesh function? What are the inputs and outputs supposed to be? It seems like everything you need to read the mesh components is already in the gist ready to be translated to a script / function.
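For concreteness, one possible shape for it (a sketch only; the inputs and outputs here are a suggestion, not the PR's final design):

```python
import mne

def read_brain_mesh(surface_path):
    """Read a FreeSurfer surface file into plain NumPy arrays.

    Returns vertices with shape (n_vertices, 3) and faces with shape
    (n_faces, 3), where each face row holds integer indices into vertices.
    """
    vertices, faces = mne.read_surface(surface_path)
    return vertices, faces
```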

OlehKSS commented 6 years ago

I have updated the code according to the examples provided. You can see the brain plot using the notebook in the examples folder. What do you think?

larsoner commented 6 years ago

Writing down the steps I used with Ubuntu system Python, for reference:

  1. Cloned this repo
  2. Checked out OlehKSS/olehkss/1 (however you like)
  3. Installed maartenbreddels/ipyvolume according to the dev install docs (if you don't have it already)
  4. Ran jupyter nbextension enable --py --sys-prefix widgetsnbextension
  5. Opened an issue (https://github.com/maartenbreddels/ipyvolume/pull/132/files) because it did not work :)
  6. Got feedback that got it working (see the issue).
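A quick smoke test for the widget setup (random data, using ipyvolume's own quickscatter helper): if the extension is wired up correctly, an interactive 3D scatter appears instead of an empty output cell:

```python
import numpy as np
import ipyvolume as ipv

x, y, z = np.random.normal(size=(3, 1000))
ipv.quickscatter(x, y, z, size=1, marker='sphere')
```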

And now I get this in the Jupyter notebook:

[screenshot from 2018-05-17 14-14-15: the brain mesh rendered in the notebook]

So once we clean up the code a bit and make it PEP8-compliant, +1 for merge

larsoner commented 6 years ago

Looks good, let's keep going, @OlehKSS!