napari / napari-core

BSD 3-Clause "New" or "Revised" License

Idea: use IPython kernel for execution #22

Open jni opened 5 years ago

jni commented 5 years ago

I made a tiny proof-of-concept for how I would like the GUI to work: a dictionary (or perhaps more elaborate object, we can think about it) maps buttons / menu items / keyboard shortcuts to functions. These functions, together with the workspace (in the example it's just globals() but probably we want to change this), can be used to generate code as we run things.

Now, in the example, I'm using exec and print to handle both the execution and the logging. This is good enough in that the same code is run as is written, which is exactly what we want. However, how about the following: we launch an IPython kernel to run things. For every GUI interaction, we generate the corresponding code, send it to the kernel, and execute it there.
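A minimal sketch of the proof-of-concept described above (all names here are hypothetical, not napari API): a registry maps GUI actions to functions, and each interaction both prints and execs the same generated line, so the logged script exactly matches what ran.

```python
# Hypothetical sketch: GUI action name -> (function name, input name, result name).
# The workspace dict stands in for globals() (or a richer workspace object).
workspace = {"image": [3, 1, 2]}

actions = {
    "Edit > Sort": ("sorted", "image", "image_sorted"),
}

def run_action(name, namespace):
    func, arg, result = actions[name]
    code = f"{result} = {func}({arg})"
    print(code)                 # log the exact code...
    exec(code, {}, namespace)   # ...and execute that same code in the workspace
    return code
```

Swapping the `exec` call for a send-to-kernel call is the change this issue proposes; the generated code string stays identical either way.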

Key advantages of this approach:

Basically in this vision, Napari would be an imaging-focused IDE.

Key challenges with the approach:

jni commented 5 years ago

@kne42 when you're back from holidays, please comment on this? =)

kne42 commented 5 years ago

@jni I like this idea a lot! Especially since a lot of machinery is done for us in an interface that most people already know how to use. This has the bonus of allowing us to convert workflows into Jupyter Notebooks and vice versa.

I don't have much experience dealing with the internals of IPython kernels, but I'd imagine they're pretty easily modifiable to be compatible with the IPython plugin system. This opens up the possibility of directly implementing the object-tracking/layer system we were talking about without much hacking (effectively by creating our own IPython plugin).

However, I'm not quite sure what challenges you are talking about with regards to non-trivial code generation. Could you please give some examples?

jni commented 5 years ago

I'm hoping @stefanv can describe some of the issues they ran into. =) @stefanv do you have a dead branch in your git history somewhere that might be informative? =)

I presume once you start doing for-loops, function definitions, and conditional logic, it's not that easy to get nice-looking code automatically. But honestly I haven't tried anything beyond my tiny demo above.

jni commented 5 years ago

But @kne42 part of the purpose of this is that I want to dramatically simplify #13 before merging. Specifically, I don't want a complicated set of different function classes. Rather, every plugin is one or more Python functions annotated with types; we have a workspace, which includes everything in the global scope at a given time; and we provide a way to see anything in the workspace, either in an existing viewer window or a new one. (And within each viewer we decide on the order/transparency/visibility of each workspace item in that stack.)
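The "every plugin is a typed Python function" idea could be sketched like this (helper names are hypothetical): the GUI would be derived from a function's type annotations rather than from a hierarchy of function classes.

```python
import inspect
from typing import get_type_hints

def gaussian_blur(image: list, sigma: float = 1.0) -> list:
    """A stand-in plugin; the viewer would inspect its signature to build a dialog."""
    return [v * sigma for v in image]  # placeholder computation

def describe_plugin(func):
    """Derive GUI-relevant metadata from the annotations alone."""
    hints = get_type_hints(func)
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "params": {p: hints.get(p) for p in sig.parameters},
        "returns": hints.get("return"),
    }
```

Anything returning `list` (or, in practice, an array type) could then automatically become a new workspace item offered to the viewer.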

kne42 commented 5 years ago

@jni Ah, I see. How do you think we should go about organizing the callbacks, then? Special function groups would typically be called in special places / in special ways (e.g. io plugins will be accessible via File > Open...).

royerloic commented 5 years ago

I agree with @kne42: we need somehow to have different classes of plugins, because they get integrated in different ways... Not much of an escape from this. I think the current plan is very simple: function decorators that vary with the plugin type. It will be very straightforward for those that implement plugins...
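One way "function decorators that vary with the plugin type" could look (all names hypothetical): a single registry keyed by plugin kind, so io plugins can be wired to File > Open while filters land in a processing menu, without distinct function classes.

```python
# Hypothetical plugin registry keyed by kind ("io", "filter", ...).
PLUGIN_REGISTRY = {}

def napari_plugin(kind):
    """Decorator factory: register func under the given plugin kind."""
    def decorator(func):
        PLUGIN_REGISTRY.setdefault(kind, []).append(func)
        return func
    return decorator

@napari_plugin("io")
def read_tiff(path: str):
    ...  # would return image data

@napari_plugin("filter")
def gaussian(image, sigma: float = 1.0):
    ...  # would return a filtered image
```

The plugin body stays a plain typed function either way; only the registration kind varies.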

stefanv commented 5 years ago

The challenge is in cleanly mapping the UI to function calls. How do you do that? When one option is selected, others may change, etc. For a color image you may want to execute over multiple layers, or first convert to Lab. So, how do you store all these potential code paths cleanly?

Traits used to be one such avenue; perhaps Traitlets allows you to do the same now?
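The reactive option linkage Traits offered ("when one option is selected, others may change") can be sketched with a plain observer pattern (names hypothetical); Traitlets' `HasTraits`/`observe` gives the same idea with less code.

```python
class Option:
    """A value that notifies observers when it changes."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for cb in self._observers:
            cb(old, new)

# Example linkage: choosing "color" mode enables a dependent Lab-conversion option.
mode = Option("grayscale")
lab_enabled = Option(False)
mode.observe(lambda old, new: setattr(lab_enabled, "value", new == "color"))
```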

W.r.t. the IPython kernel execution, how will you extract results from the kernel rapidly enough for real-time display of larger datasets? Having the GUI and computing process in the same memory allows direct, rapid access.

royerloic commented 5 years ago

On 4. Sep 2018, at 14:08, Stefan van der Walt notifications@github.com wrote:

The challenge is in cleanly mapping the UI to function calls. How do you do that? When one option is selected, others may change, etc. For a color image you may want to execute over multiple layers, or first convert to Lab. So, how do you store all these potential code paths cleanly?

I would keep these as separate operations: applying blur to a multi-channel image applies it to all channels. If you want to convert to Lab you have to do it explicitly…

After much thought, realistically, I don’t think that we can always have a 1-1 mapping between scikit-image or numpy functions and the menu entries… There will be occasional impedance mismatches that need to be adjusted. But that’s ok.

Traits used to be one such an avenue, and perhaps Traitlets allows you to do the same now?

Never heard of this.

W.r.t. the IPython kernel execution, how will you extract results from the kernel rapidly enough for real-time display of larger datasets? Having the GUI and computing process in the same memory allows direct, rapid access.

Execution speed is an issue, running on an ipython kernel should be optional not mandatory.


jni commented 5 years ago

The challenge is in cleanly mapping the UI to function calls.

After much thought, realistically, I don’t think that we can always have a 1-1 mapping between scikit-image or numpy functions and the menu entries…

No. The idea is to have a 1-1 mapping between menu items and some functions, be they scikit-image, numpy, or "naparilib" functions that compose many external functions.

When one option is selected, others may change, etc. For a color image you may want to execute over multiple layers, or first convert to Lab. So, how do you store all these potential code paths cleanly?

There should be no dependency of output on some state set by another menu item. For example, I hate that in Fiji "regionprops", you first have to go set options in Analyze > Set Measurements, then you can do Analyze > Analyze Particles. This is just insane design and I want to do everything possible to avoid it in Napari.

So, to answer your question, either converting to Lab is available as a keyword argument (and thus a pop-up option) of the processing operation, or the user needs to do it themselves explicitly via scikit-image > color > rgb2lab.
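The "options as keyword arguments, not hidden state" point can be sketched like this (names hypothetical): every option the dialog shows is just a kwarg of the function, so the generated code carries the full configuration and nothing depends on a separate settings menu.

```python
def analyze_particles(labels, *, min_area: int = 0, measure: tuple = ("area",)):
    """All options live in the signature; nothing is set elsewhere first."""
    return list(measure)  # placeholder computation

def generate_call(func, target, **options):
    """Render the exact call the GUI executes, options included."""
    kwargs = ", ".join(f"{k}={v!r}" for k, v in options.items())
    args = f"{target}, {kwargs}" if kwargs else target
    return f"{func.__name__}({args})"
```

Replaying the logged script then reproduces the result with no reliance on viewer state, which is exactly what the Set Measurements pattern breaks.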

Not much of an escape from this. I think the current plan is very simple: function decorations that vary with the plugin type. It will be very straight forward for those that implement plugins...

Can you enumerate the plugin types you are thinking of? My list:

how will you extract results from the kernel rapidly enough for real-time display of larger datasets?

Yeah, this is certainly a concern. I wonder if there is a way to have a kernel running as a thread within the same process...? Jupyter has to deal with this stuff, though, and you can get some pretty impressive performance with e.g. IPyVolume. But @royerloic has optimal performance as a top-level goal, so unless we can run the kernel in-process, that idea is out. My point about having the written-out code exactly match the executed code stands, though: we can replace ipykernel calls with exec calls.
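That last point, that the kernel and exec backends are interchangeable behind the same code-generation front end, can be sketched as an executor interface (names hypothetical; the kernel variant is left unimplemented here):

```python
class InProcessExecutor:
    """Runs generated code with exec(); fast, shares memory with the GUI."""
    def __init__(self):
        self.namespace = {}

    def run(self, code: str) -> None:
        exec(code, self.namespace)

class KernelExecutor:
    """Would send the same code string to an IPython kernel instead
    (e.g. via jupyter_client); results must then be shipped back,
    which is the performance concern raised above."""
    def run(self, code: str) -> None:
        raise NotImplementedError("sketch only")
```

The GUI only ever emits code strings, so swapping one executor for the other changes where the code runs, not what code is logged.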