NicolasRannou opened this issue 7 years ago
Hey @NicolasRannou! 😀
1 - I think having the annotation options as a dropdown text menu somewhere near the play controls, like on YouTube, is a good idea. I don't think you'd want to label it "CC" or use a CC icon though... I'm not sure what icon would be best, but I can noodle on it.
Another idea is to display annotations by default in, say, the upper left of the image, and have the control for changing their style or hiding them right next to the annotations. This is more contextual, because the control is where the annotations appear.
Doing both would likely be fine.
2 - It makes a lot of sense to have play before the timeline, because timelines start at the left and play makes them go right. Directionally it's a good pattern to follow. +1
3 - Have you ever played with Adobe Lightroom? If not, take a look at this around 4:30 - https://youtu.be/bN2jqsJgbBs - or if you want to play with something similar yourself, try the open source app darktable (what I use - darktable.org). I don't know if it's the right model to emulate, since there are a number of differences, but the core interactions (find images, manipulate images, view image metadata, compare images) seem so similar that maybe there's something we could learn.
A lot of these types of tools have a model where there is a right-hand tools/panels dock, typically using an accordion UI pattern, and each docked panel is collapsible. Histograms by default tend to be the upper-right panel. In some apps - and this is a nice affordance - you can hit Tab (or use a menu item if you don't know the shortcut) to show/hide the panel dock. That gives you the full screen real estate to examine images, but when you're ready to use the tools, like the histogram, you can pull them back in easily.
Something to think about, anyway. I think a lot of apps settle on a right-hand dock rather than a dock along the top of the screen for graphs because, with widescreen hardware and landscape orientation, it doesn't impede on the real estate of the images / main viewing area as much. These tools tend to have a thin bar along the top for menus / icons / some controls, but not really things like graphs, etc.
Does applying the different color schemes to the image change the histogram? How does the histogram impact the volume rendering? I'm familiar with working with histograms for photo processing, but this context is new to me!
Sounds good!
1- I'll give the dropdown menu a shot - FYI, we currently use icons from: https://www.webcomponents.org/element/PolymerElements/iron-icons/demo/demo/index.html
2- 👍
3- I'm with you on that - I actually had the dock on the right-hand side in the first iterations. My idea was to have viewer/image-specific actions in the right dock, and a thin toolbar at the top for more general actions, such as choosing the layout of the page and maybe collaboration features -
Old GIF:
4- Color schemes do not impact the histogram; however, the histogram helps the user tweak the window/level to highlight regions of interest in an image:
Given (from 3D Slicer):
You can see there is a "bump" in the middle of the histogram; that is actually the data of interest. If we modify the window/level (could be sliders, shift + mouse move, etc.), the dynamic range of the image changes and highlights the region of interest:
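For reference, window/level is just a linear remapping of raw intensities onto the display range; here is a minimal sketch (the function name and the 8-bit output range are my assumptions, not the viewer's actual code):

```javascript
// Sketch of a window/level remap: intensities inside
// [center - width/2, center + width/2] are stretched linearly across the
// 8-bit display range; everything outside is clamped to black or white.
// (Illustrative only - not the actual viewer implementation.)
function applyWindowLevel(intensity, windowWidth, windowCenter) {
  const lower = windowCenter - windowWidth / 2;
  const upper = windowCenter + windowWidth / 2;
  if (intensity <= lower) return 0;    // below the window: black
  if (intensity >= upper) return 255;  // above the window: white
  return Math.round(((intensity - lower) / windowWidth) * 255);
}
```

Narrowing the window around the histogram "bump" is exactly what makes that region take up the whole display range.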
In the context of volume rendering, histograms help us design custom transfer functions for opacity and color. For instance, we may want to set the background noise to transparent, the bones to opaque, and some soft tissues in between.
In general, the user directly designs their transfer function on top of the histogram. But maybe we should not worry about that just yet - it may not be needed in this project, and it's a pretty advanced feature!
See how different opacity transfer functions affect the rendering. In this case we do not use the histogram, as the opacity transfer functions are presets, but you can get a feel for how important it is to design the right transfer function.
Slow demo: https://fnndsc.github.io/ami/#vr_singlepass
Highpass opacity transfer function:
Bandpass opacity transfer function:
Linear opacity transfer function:
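As a rough sketch of what those three preset shapes do (the thresholds and values here are illustrative, not the actual presets), each one maps a normalized intensity in [0, 1] to an opacity in [0, 1]:

```javascript
// Illustrative opacity transfer functions (not the actual viewer presets):
const opacityTF = {
  // Highpass: only intensities above a threshold are visible (e.g. bone).
  highpass: (t, threshold = 0.6) => (t >= threshold ? 1 : 0),
  // Bandpass: only a band of intensities is visible (e.g. soft tissue).
  bandpass: (t, lo = 0.3, hi = 0.6) => (t >= lo && t <= hi ? 1 : 0),
  // Linear: opacity grows with intensity, so background noise fades out.
  linear: (t) => Math.min(Math.max(t, 0), 1),
};
```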
1 - av:subtitles looks like a good one? Alternatively, one of the icons: picture-in-picture?
Yes, definitely.
@mairin we hope to have a version for you to test around soon!
Here is a screenshot of the latest version:
The panels do not all work well together yet (it looks messy), but there are some improvements.
Top toolbar: viewer-specific actions, such as defining the layout or making the viewer "fullscreen" (https://developer.mozilla.org/en-US/docs/Web/API/Fullscreen_API)
Left panel: images to be visualized (should those go under the viewer instead of in a left panel?)
Right panel: Information/actions specific to the "selected" image. (An image is selected when it has the blue border)
Center: the viewers
Another issue I am not quite sure how to approach is how it should look when, for instance, no image is selected: hide the panel? Special content telling the user what to do?
Thanks!
@NicolasRannou sorry for the slow response; I've blocked off a few hours today to look at the latest version in depth and to think about some of the details you gave me above. Expect some more ideas / suggestions by the end of today Boston time. :)
@mairin no worries!
Enclosed an updated slightly different screenshot!
Thanks!
(Theater mode basically means getting rid of the "GEXplorer - Logout" main toolbar and making the rest of the viewer (left, center and right panels) full screen.)
I love the theater mode idea - it's really similar to how GIMP hides all panels so you can focus just on the image canvas (the shortcut for that is Tab)... other image viewers / processing tools do this too, but I'm not 100% sure of the conventions; I will look into it!
@NicolasRannou I played around with this a lot today; this is totally just playing around and some ideas. Sometimes it's easier to come up with ideas/solutions and identify issues by just playing around with a design visually. I have a list of questions too; I'll post them in a bit. The main thing I tried with this is splitting out image acquisition from image viewing/exploring. Not sure how you feel about that. I also put the controls in a pane-specific sidebar panel rather than a screen-wide sidebar panel. Not sure if this makes sense, particularly if more than one image is being manipulated at once (as in one of your animated GIFs of an earlier version). But even if it's not useful, it's at least good brain food, right?
OK here's the questions I had:
1) Is there a specific screen resolution targeted here? Should this be responsive for different platforms (tablet, smartphone, laptop) or would it not realistically be used across those?
2) I moved the play controls to the bottom of each image, thinking that web video players commonly have them at the bottom. I was also thinking they could appear on hover, like on YouTube, etc., so there is less visual clutter overall from the play controls. I'm not sure if things like playing in a loop or flagging specific frames would be useful?
3) I didn't mock up the "Library" / image acquisition piece for finding images in PACS, but I noticed the search icon - how does that work? Does clicking on it open up a search field or a browse dialog? (I don't remember from the demo you did.)
4) I wasn't sure about a lot of the metadata displayed for the images. Some of it I looked up and found some details about, but I don't have the practical knowledge of what it really means. What I found:
5) What does the 'X' icon in the play controls bar do?
Hope these make sense!
Hey @mairin, really like the design!
It looks pretty good to have the "thumbnails" at the bottom -
I'm wondering if the "hideable" docks may be too small? Just thinking out loud.
I do not have any strong opinion about splitting data "retrieval" and "visualization"; I just want it to be easy for the user to figure out. :) Your current mockup may actually work just fine! (With the tabs at the top, it looks clear.)
One thing to keep in mind is that we actually have 2 types of dataset: the "regular" images the user retrieves, and the associated "normals" for each image. In theory, each regular image can have up to 4 related "normals". The user should then be able to view the "regular" and the related "normal" at the same time. Do you see what I mean?
I think what could work is an extra button or layer or whatever on top of the thumbnails to let the user easily access the related "normals" of a given dataset (similar to the icon which identifies panes).
About your questions:
specific screen resolution?
play controls at the bottom / youtube like overlay
find search images from the PACS
Once the user hits "QUERY PACS", we replace the content of the modal with all the matching data available on the server (PACS): (old screenshot)
We can order/filter each column -
At this point the user can select the data they are actually interested in, then click "next" to actually download the selected data and fill the thumbnails bar.
metadata
Yes, X is not filled in - it should display the dimensions of the image.
There are standard fields in the DICOM datasets, e.g. PatientName, PatientAge (the age of the patient when the scan was done), PatientID, PatientDOB, Study Name, Study ID, etc. There are a lot of fields that may or may not be used. The most common patient/study fields are usually set.
Yes, it is the frame number when you press "play". When looking at a dataset from a given direction, it tells us how many "frames" the data is sliced into in that direction.
5- The X icon removes the data being visualized from the viewer, putting it back into its initial empty state.
Hope it mostly makes sense :)
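For reference, the standard fields mentioned above have fixed (group,element) tags in the DICOM standard. A small sketch (the lookup helper and fallback behavior are hypothetical; the tag numbers themselves are the standard ones):

```javascript
// A few standard DICOM tags, keyed by their (group,element) identifiers.
const DICOM_TAGS = {
  x00100010: 'PatientName',
  x00100020: 'PatientID',
  x00100030: 'PatientBirthDate',
  x00101010: 'PatientAge',       // age at the time of the scan
  x00081030: 'StudyDescription',
  x00200010: 'StudyID',
};

// Hypothetical helper: read a field from a parsed dataset, falling back to a
// placeholder when an optional field was not set by the scanner.
function getField(dataset, tag, fallback = 'N/A') {
  const name = DICOM_TAGS[tag];
  const value = name !== undefined ? dataset[name] : undefined;
  return value !== undefined ? value : fallback;
}
```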
(Sorry for the accidental close, I dropped my keyboard o_O)
Docks vs. Sidebar, etc.
Yeah, I agree - the hideable docks are too small. The idea needs more work. The principle I'm working with there is that the closer the controls are to the on-screen area being manipulated, the more intuitive they are going to be. The thing about the sidebar idea: in the applications that have a right sidebar, the sidebar is in a single-document window, so it's really clear that the controls in the sidebar apply to the one visible main image (e.g. darktable, Lightroom, Inkscape, etc.). When playing around with the screen design, I worried a bit that the association between the sidebar controls and the panes being manipulated wouldn't be clear. (Maybe I'm overly worried about this, though?) A few ideas to pull those controls closer to the image they affect / account for this potential issue:
Have a separate flow for manipulating image controls. Maybe there's a button per image that you click, and it takes you to a separate tab/screen with just the one image of focus and a full right sidebar of controls. The disadvantage here is that you cannot then control multiple images at once with the same controls, or view other images while working with one. So maybe this is a no-go because of that.
Maybe keep the window-wide right sidebar, but have it inactive by default, so you have to explicitly push images to it (since you're doing this explicitly, the confusion about what it's linked to is somewhat alleviated). We could introduce a concept of dragging images into the sidebar (maybe a little strip or stage at the very top where thumbnails show; you could drag off images you don't want affected, too). Maybe they'd get highlighted as selected as well; then you could manipulate multiple images at once if needed, and you'd also maintain the context of the full set of image panes on screen.
Regular images v. normals
One thing to keep in mind is that we actually have 2 types of dataset: the "regular" images the user retrieves, and the associated "normals" for each image. In theory, each regular image can have up to 4 related "normals". The user should then be able to view the "regular" and related "normal" at the same time. Do you see what I mean?
I think I understand the concept of regular and normals - regular is the images from the patient of interest, and the normals are reference / 'standard' images for comparison, correct?
Where I'm a little confused is how the user is going to view them in relationship to each other. There are a lot of variables at play here - is the image subject regular or normal, what frame is it on, what angle is it from, what kind of WW/WL/LUT (or other controls) have been set - so in thinking about how the user would want to view them at the same time, I have a few questions -
Search interface
These screenshots are super-helpful, thank you!
I think what I'm going to do next is try to mock up what the 'library' tab would look like, then play around more with the sidebar / controls ideas.
Just a quick post of some library tab ideas... work in progress for sure.
that looks very interesting!
couple of notes:
the sorting columns currently also have "filtering" capabilities - all the column titles ("Patient Name", "PatientID", etc.) are actually input fields where a user can just type some text to filter the results based on the input.
one limitation is that at this point we do not have access to preview images of the data in the library; we only have access to a JSON description of the data available on the PACS.
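The per-column filtering could be sketched like this (names are illustrative): keep only the rows whose value contains every active filter's text, case-insensitively.

```javascript
// Keep only the rows matched by every active column filter (substring,
// case-insensitive). Filters come from the text typed into column headers.
function filterRows(rows, filters) {
  return rows.filter((row) =>
    Object.entries(filters).every(([column, text]) =>
      String(row[column] || '').toLowerCase().includes(text.toLowerCase())
    )
  );
}
```

An empty filter object leaves all rows visible, which matches the default state of the table.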
Workflow:
the PACS settings would be a modal, right?
how would you go from the "selection" to the "explore" tab? A simple button at the bottom ("Retrieve selection", "View selection", ?)? Pushing the button would instruct the backend to actually retrieve the data from the PACS and redirect the user to the "explore" tab?
I thought I had answered your previous message - :/
Docks vs. Sidebar, etc.
Yes, that is the toughest thing for me - I would tend toward 1.a. or 1.b.
Maybe something like in your mocks, but with a dropdown menu at the top, or vertical tabs that only show one element at a time?
Regular images v normals
Yes you are right about the concept -
Is the main way we expect users to interact with the regulars wrt the normals to have the regular in one pane and the normal in another?
Yes
Is there ever a case where they'd want them superimposed over one another / directly overlaid? Are regular images always compared to normals from the same orientation/angle, or is there a case for comparing, say, a transverse regular image to a sagittal normal?
Maybe at some point but not yet
I am assuming the normal images (which are typically a set with multiple frames, right?) vs. the regular images - maybe the user has a specific frame of each they want to compare? Or do they want to keep one on a single frame and walk through the frames of the other, looking for areas of similarity vs. difference?
I'm not quite sure I understand.
I wonder if it would be useful to explicitly mark/demarcate the normals vs. the regular images... I'm wondering, if the user gets interrupted or whatnot while looking at the images, whether they might lose their place and get momentarily confused/disoriented about which was the normal and which was the patient?
Yes I think it would be useful to know which images are related, between the viewers and also in the thumbnails!
@mairin I iterated on your design of the search toolbar for the library and came up with the following:
The only difference is that I replaced the search fields with a dropdown, as a user typically just performs a search on a single field.
This approach would also let us provide a custom "search input" (text input, date picker, etc.) depending on the search field!
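A sketch of how that field-to-widget mapping could work (the field names and widget types here are assumptions, not the actual implementation):

```javascript
// Pick the search widget to render based on the selected DICOM field;
// anything not special-cased falls back to a plain text input.
function inputTypeFor(field) {
  const widgetByField = {
    StudyDate: 'date',    // date picker
    PatientDOB: 'date',   // date picker
    Modality: 'select',   // dropdown of known modalities
  };
  return widgetByField[field] || 'text';
}
```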
Does that make sense?
Thanks!
@NicolasRannou (thanks for your patience, I was out unexpectedly last week) I like your approach with the dropdown search! I think it's a good way to go!
Hey @mairin no worries - hope all is fine!
@NicolasRannou I'm coming back to this after a few weeks, sorry for the long delay, everything is fine now :)
Some questions and discussion points :)
So the steps you outlined are as follows:
I am wondering, how does step 3 work? E.g., is it OK to make multiple staggered retrievals of data in step 3? I'm thinking that seeing the images could really help a user confirm that a selected image is the correct one, and save them going back and forth between the library tab and the explore tab. E.g., the user workflow without any kind of previews in the library would work like this:
So I'm wondering if maybe we should have buttons at the bottom of the selection pane of the library - one button that says 'Preview' and another that says 'Explore' -
Offering the 'Preview' button might save the user the hassle of the rinse-and-repeat if they're digging for something they can't quite remember / find based on the text metadata alone. My main reservation: is this rinse-and-repeat actually going to be a problem in practice? I don't have experience with how studies with this sort of rich 4D data are typically retrieved and read, so I'm guessing this loop will happen based on more general user behavior around working with image data. It could be that the thumbnail doesn't actually help all that much anyway, if it isn't the right spot in the timeline or the right angle to trigger recognition for the user.
Not sure, what do you think?
So how would the user move from 'selection' to 'explore'? That's a good question. My initial thinking when mocking this up was to base it on the behavior of Lightroom / darktable / etc., since we're following that model generally - and the way those work, there is no button to move between the tabs. Whatever is selected in the library tab would be retrieved when the user clicked on the explore tab.
In thinking through the thumbnailing issue above, though, I do see that having an 'explore selected studies' button at the bottom right, inside the selection pane in the library, might be a nice explicit hint as to what should happen next. Such a button would be an extra handle rather than the only way to do this - I think that if you've selected some images in the pane and then click on the explore tab, it should do exactly the same thing as hitting the 'explore' button in the selection pane: retrieve the full study from the PACS server.
I was definitely thinking of the PACS settings being a modal! I can start mocking up what that could look like next, if it'd be helpful!
I may post again with more thoughts as I start trying to reorient myself here, reviewing the files and open threads above. Also note - in case it wasn't obvious - all of the mockups I've posted here are in the mockup.svg file in the repo. I use Inkscape to create the file, but it should be viewable in other SVG editors as well.
(I'm going to start breaking out design work items here as separate tickets but let's keep this open as the main collab thread.)
Glad to hear you have sorted everything out!
I think the preview "button" is a great idea; let me check how well it plays with the current workflow / API interface we have!
Right now the user can freely navigate between tabs; however, we only retrieve data from the PACS server if the user explicitly hits the "retrieve" button in the library.
Let me post some screenshots of the latest version - the main change is that we now have a new layout with 6 viewers (got to create a proper icon for that). The next main challenge is to properly hint to the user whether viewers are "linked" or "selected", and whether a viewer holds normative or real "patient" data:
My idea was the following: 1- Linked viewers display a "link icon" (see the 2 bottom-left viewers). 2/3- Play with the colors of the text and borders to convey that information?
Some quick mockups:
In this last mockup, the idea was to have the left, bottom, and right borders in blue if selected. The top border would be colored depending on whether it is "normative" or "patient" data.
Maybe coloring the text is too much, and 2 borders (1 for selected + 1 for normative/patient) may also be too much?
And another screenshot from the library:
Latest version as of today:
I had a couple of ideas about making which image is selected clear, and showing that images are linked. I was thinking about real world analogues that might provide some interesting ideas for handling the problems, or good brain food for better ideas.
For the selected image, I had this idea about the spotlight on a stage - the selected image is brightened, and the others slightly dimmed. There are cons to this, of course - it changes the appearance of the non-selected images, and we wouldn't want it to impact the radiologists' reading of the images. So maybe it's a no-go.
For showing that images are linked together - seeing the 3x2 grid of six made me think of film strips... say 3 images are linked; they could be displayed in the same row with a frame around them, so they look like different frames of the same film strip... we could even simulate the spinning of the film by having the center image largest and the two on the sides slightly smaller. Images that aren't linked wouldn't have the frame around them... Maybe, again, too kooky an idea, but silly ideas can be good brain food for good ones :)
I'm going to poke around with the screenshots you sent and see if I can come up with any interesting potential solutions.
Do we assume that once an image is opened for viewing in Ra it has been registered, or is there a chance an unregistered image would appear in the viewer, @NicolasRannou?
Hi @mairin -- @NicolasRannou probably won't get to see this until his time tomorrow.
This viewer, aka RaV, has no concept of image context. Not sure if that's what you are asking or not - but RaV basically shows whatever you throw at it. There is no processing or registration step. For example, in the image above, it's not implied or assumed that the images in the top row are registered to those in the bottom row.
@rudolphpienaar Ah, OK, thanks! Is it possible to view registered images in RaV? (I'm assuming if so, the user would have to know all of the metadata about which image set is registered to which other image set, etc., and manually drag out the ones in an order that makes sense to them? Or launch it from the ChRIS UI from a feed that outputs registered images?)
Yes! I suspect you're thinking of the same things we are. The viewers currently are just dumb terminals, and the context/meaning of what to display, etc., is all contextual to a plugin. So the viewers will provide some imaging ability, but if you want to see registered images, you have to run an imaging plugin, and then use the viewer to show the outputs of the registration plugin.
So right now @NicolasRannou is working on a fork/variation of RaV (code named RaVIO -- RAdiology Viewer Imaging Overlay) that is basically RaV but more generic and a better candidate for the default ChRIS viewer.
RaVIO will be "fed" some input directory containing images. It then generates thumbnails on the left, and the semantics are the same as RaV: you can select from a set of grid layouts, then drag/drop images from the thumbnails into cells in the grid. The "IO" part of the workflow is that you will also be able to drag/drop one image on top of another image already dropped in the same cell. In other words, each grid cell can potentially contain two volumes. In that case, the user can toggle the alpha/transparency of the top image to let the lower image bleed through. There is also a slider effect.
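The alpha toggle described above amounts to standard "over" compositing of the top volume onto the base; a minimal sketch for one grayscale sample (illustrative, not the actual RaVIO code):

```javascript
// Blend one grayscale sample of an overlay volume onto the base volume.
// alpha = 0 shows only the base; alpha = 1 shows only the overlay.
function blendSample(base, overlay, alpha) {
  return Math.round(overlay * alpha + base * (1 - alpha));
}
```

Toggling the overlay is then just switching alpha between 0 and some user-chosen value; the slider effect instead varies which columns come from which volume.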
From a UX perspective, I don't know how best to convey these semantics to a user, but that's the idea at least.
Anyway, I'm sure @NicolasRannou can provide more info when he's online again.
@mairin - quick idea I had about iterating on the current layout -
1- Add annotations under a "cc" dropdown menu, very much like on YouTube.
2- Move the "play" button to the left of the slider, YouTube-like -
3- Add a histogram at the top that displays the intensity distribution of the current dataset. Maybe not critical, but it could be useful, especially if we want to add some 3D volume rendering.
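Computing that intensity histogram is straightforward; a minimal sketch (the bin count and names here are arbitrary, not from the viewer):

```javascript
// Bucket the dataset's intensities into numBins equal-width bins over
// [min, max]; the resulting counts are what the histogram panel would plot.
function computeHistogram(intensities, numBins, min, max) {
  const bins = new Array(numBins).fill(0);
  const range = max - min || 1; // avoid division by zero on flat data
  for (const v of intensities) {
    const i = Math.min(numBins - 1, Math.floor(((v - min) / range) * numBins));
    bins[i] += 1;
  }
  return bins;
}
```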
Old:
New: