MRtrix3 / mrtrix3

MRtrix3 provides a set of tools to perform various advanced diffusion MRI analyses, including constrained spherical deconvolution (CSD), probabilistic tractography, track-density imaging, and apparent fibre density
http://www.mrtrix.org
Mozilla Public License 2.0

MRView: refactor code to separate OpenGL handling from Qt #1437

Open jdtournier opened 6 years ago

jdtournier commented 6 years ago

Just documenting this as #1403 prompted me to write down these ideas...

The current design of mrview is quite restrictive in many respects, which complicates things like offscreen rendering and scripting. The problem is two-fold:

  1. storage of rendering parameters in Qt widgets, rather than in a separate state object independent of Qt. This makes it near-impossible to set up an offscreen target, since the entire GUI front-end also needs to be initialised.

  2. update of parameters within the draw() call. This means that correct handling of command-line parameters requires a GL update before the next option / command is processed, to guarantee that operations happen in the correct order. This makes it impossible for mrview to process all options before rendering, needlessly re-rendering a potentially complex scene multiple times in the process.

So I would like to (eventually) refactor the code to cleanly separate the state of the viewer, the OpenGL handling, and the Qt GUI parts. In particular, I would like to be able to manipulate the state of the render independently of the OpenGL handling, so that we can set up the view before any GL updates. I would also like to have all the code responsible for setting up the GUI widgets cleanly separated from the OpenGL handling. This would allow offscreen rendering to be implemented (potentially outside of Qt altogether), provide improved command-line / script handling, and hopefully also simplify maintenance (cleaner, more modular code).
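To make the second point concrete, a minimal sketch of the intended "process all options first, render once" flow might look like this. None of these names exist in MRtrix3; `ViewState`, `apply_options_then_render` and the fields are purely illustrative of the idea that option handlers mutate a plain state object and the (single) render is deferred:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical ViewState: holds rendering parameters outside any Qt widget.
// Mutations only mark the scene dirty; no GL call happens here.
struct ViewState {
  float zoom = 1.0f;
  int plane = 0;
  bool dirty = false;

  void set_zoom(float z) { zoom = z; dirty = true; }
  void set_plane(int p)  { plane = p; dirty = true; }
};

// Process every command-line option first, then render exactly once.
// 'render' stands in for the GL pass; it is only invoked if anything changed.
int apply_options_then_render(ViewState& state,
    const std::vector<std::function<void(ViewState&)>>& options,
    const std::function<void(const ViewState&)>& render) {
  for (const auto& opt : options)
    opt(state);            // mutate state only; no GL context required yet
  int renders = 0;
  if (state.dirty) {
    render(state);         // single deferred render once all options are applied
    state.dirty = false;
    ++renders;
  }
  return renders;
}
```

This is the inverse of the current behaviour, where each option may force a GL update before the next one is processed.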

jdtournier commented 6 years ago

This is a response to this post:

@LeeReidCSIRO: I'm impressed (and pleasantly surprised) that you're interested in getting stuck into this. However, I have a feeling there's a slight misunderstanding as to how things would work. You suggest 3 tiers:

  1. data itself, with signalling properties
  2. properties of display of data, also with signalling properties
  3. display itself, listening to signals from other tiers

That's not really what I have in mind. I'll try to clarify.

I'm also envisaging 3 orthogonal components, but not those listed above:

  1. Data and the associated state of the display. There would be no or minimal signalling - essentially all it needs to do is be able to request the scene be rendered again if anything has changed (very hard to do partial updates in OpenGL, you tend to redraw the whole scene). The important thing is that the data and its state can be modified independently of the other components, and the other components can be updated in response when notified of such change - but they don't have to. This means we can set up the scene (e.g. process all command-line options) before we even get an OpenGL context or initialise the GUI (important for off-screen rendering, for example). It would also be responsible for processing UI events such as mouse button presses, etc - but only in as far as it would provide suitable callbacks to handle the events, which (3) would then call as appropriate.

  2. OpenGL renderer. This would render the data according to the state in (1). Importantly, this would contain only OpenGL commands, with no dependence on the GUI toolkit (i.e. Qt). This means we could ditch Qt altogether if we decide it's easier to perform offscreen rendering with e.g. glut. It would require a suitable OpenGL context be available and current by the time it performs the render (this would be the responsibility of the GUI toolkit), but would not be responsible for managing events, window resizes, button clicks, etc.

  3. GUI toolkit (currently Qt). This would provide a suitable OpenGL context (whether on- or off-screen), present a UI around it, ensure it is in sync with the state in (1), pass mouse and keyboard events in the format expected by (1), etc. Note that this wouldn't even have to be a GUI at all - it could be a script interpreter for example. The main thing is it would allow manipulation of (1) and provide a context for (2) to operate.
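The three components above could be sketched as follows. This is a toy model, not MRtrix3 code: the class names are made up, and a string stands in for an actual GL frame. The point is the ownership and direction of the dependencies: (1) knows nothing about GL or Qt, (2) only reads (1), and (3) wires events into (1) and triggers (2):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Layer 1 (hypothetical): data + display state. Knows nothing about Qt or GL;
// it only exposes a "please redraw" notification and event-handling callbacks.
struct SceneState {
  int focus_slice = 0;
  std::function<void()> on_change;   // set by the front end; may be empty

  void set_focus_slice(int s) {
    focus_slice = s;
    if (on_change) on_change();      // request a redraw, but don't perform it
  }
  // Event semantics live here; the front end merely forwards raw events.
  void handle_scroll(int delta) { set_focus_slice(focus_slice + delta); }
};

// Layer 2 (hypothetical): renderer. Only reads state; in the real design this
// would issue OpenGL commands against a context made current by layer 3.
struct Renderer {
  std::string render(const SceneState& s) {
    return "slice:" + std::to_string(s.focus_slice);  // stand-in for GL draws
  }
};

// Layer 3 (hypothetical): front end (Qt window, offscreen target, or script).
// Owns the context, wires events into layer 1, and invokes layer 2.
struct Frontend {
  SceneState& state;
  Renderer& renderer;
  std::string last_frame;

  Frontend(SceneState& s, Renderer& r) : state(s), renderer(r) {
    state.on_change = [this] { last_frame = renderer.render(state); };
  }
  void scroll_event(int delta) { state.handle_scroll(delta); }  // Qt event → (1)
};
```

Note that `SceneState` can be driven before any `Frontend` exists, which is exactly what offscreen rendering and command-line processing would rely on.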

The way I see this being implemented would require some very deep changes to the way the code is currently organised. Ideally, we'd probably have 3 folders, one for each of the layers above, and each mode / tool would have its own code within each of those. We could also have (1) & (2) merged together, so long as they remain logically separate (i.e. you can drive (1) without (2) having been initialised), but I reckon it would just be cleaner to keep them separate...

This is a huge refactor, and I wouldn't take it on lightly... We'd need new classes to represent the various parameters in (1) (in line with #1438, amongst other things), a system to keep track of whether a GL update is required that can be queried when necessary, a system to allow (1) to specify how it expects to be presented in the UI that (3) would then use to set itself up, new classes in (3) that would 'know about' the different types of parameter classes in (1) and display appropriate widgets for them, etc. It would require a lot of thought upfront to get the overall design vaguely right before we can start migrating the functionality over. Not trivial...
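One way the "layer (1) describes its parameters, layer (3) chooses widgets" idea could work is a declarative parameter description. Again, a sketch only, all names hypothetical: layer 1 says *what* it exposes (name, type, range), and a front end maps each type to a presentation, here reduced to widget-kind strings standing in for real Qt widget construction:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>
#include <vector>

// Hypothetical parameter descriptions owned by layer 1. The GUI layer never
// stores the values; it only builds a presentation from these descriptions.
struct FloatParam { std::string name; float value, min, max; };
struct BoolParam  { std::string name; bool value; };
using Param = std::variant<FloatParam, BoolParam>;

// Toy "GUI" mapping each parameter type to a widget kind - a stand-in for
// Qt code that would build a slider/spin box or checkbox from the same data.
std::map<std::string, std::string>
build_widgets(const std::vector<Param>& params) {
  std::map<std::string, std::string> widgets;
  for (const auto& p : params) {
    if (std::holds_alternative<FloatParam>(p))
      widgets[std::get<FloatParam>(p).name] = "slider";
    else
      widgets[std::get<BoolParam>(p).name] = "checkbox";
  }
  return widgets;
}
```

A script front end could walk the same descriptions to generate command names instead of widgets, which is what makes the separation pay off.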

It would be great if you're interested in getting involved! Let me know if you want to dive in, we'd need to coordinate pretty tightly on this - I can be a bit of a control freak, but the last thing I want is to discourage others from contributing by constantly criticising their code and/or design decisions. But this shouldn't happen if we make sure we're on the same page from the outset. :+1:

LeeReid1 commented 6 years ago

Hi @jdtournier, Just a clarification: Qt currently supplies classes which are not GUI-related. Are you looking to still keep the toolkit in the general sense, or try to remove these classes in all code except your third layer?

I ask because of the statement: "this means we could ditch Qt altogether". If you're looking to do so, it might influence how I tackle #1403 and will certainly change my understanding of how big this job is. e.g. things like signals/slots are a non-standard, Qt-specific construct and would need to be replaced throughout the program.

jdtournier commented 6 years ago

@LeeReidCSIRO: sorry for the radio silence - too many things happening all at the same time...

Just a clarification: Qt currently supplies classes which are not GUI-related. Are you looking to still keep the toolkit in the general sense, or try to remove these classes in all code except your third layer?

Ideally, we'd remove all Qt-related code from these layers. But I completely get your point regarding how that might influence aspects that inherently relate to signal/slot handling, so we'll need a sensible, pragmatic approach here. It should be relatively simple to code up a regular UDP listener (independent of Qt), but not that trivial to have it integrated into the Qt event loop. On the other hand, #1403 is very much a GUI feature - you wouldn't need it for offscreen rendering or any other reason I can think of. In which case it makes sense for it to be part of the GUI layer. However, we can definitely write this such that the handling of the event itself is firmly in the first layer, with a callback to process incoming sync events, and a function to send out an update when necessary. This would leave the handling of the UDP broadcast & listen in the GUI layer, which is arguably where it belongs. How does that sound?
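A minimal sketch of that split, under the assumptions above (all names invented): layer 1 owns the *meaning* of a sync event (apply an incoming focus value, emit one when ours changes), while the GUI layer owns the transport - real UDP broadcast/listen in the actual design, modelled here as an in-process bus so the sketch stays self-contained:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Layer 1 (hypothetical): sync semantics only. No sockets, no Qt.
struct SyncedState {
  int focus = 0;
  std::function<void(int)> broadcast;  // installed by the GUI layer; may be empty

  void set_focus(int f, bool from_network = false) {
    if (focus == f) return;            // avoid echo loops between viewers
    focus = f;
    if (!from_network && broadcast) broadcast(focus);
  }
  // Callback the GUI layer invokes for each received sync payload.
  void on_sync_received(int f) { set_focus(f, /*from_network=*/true); }
};

// Stand-in for the GUI layer's UDP plumbing: delivers each broadcast
// to every other registered viewer.
struct Bus {
  std::vector<SyncedState*> peers;
  void attach(SyncedState& s) {
    peers.push_back(&s);
    s.broadcast = [this, &s](int f) {
      for (auto* p : peers)
        if (p != &s) p->on_sync_received(f);
    };
  }
};
```

Swapping `Bus` for real UDP sockets (or dropping it entirely for offscreen use) would leave `SyncedState` untouched, which is the point of keeping the event handling in the first layer.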