iamleeg opened 5 years ago
Two points to note:
The openslide library (https://openslide.org/api/openslide_8h.html) has a function, openslide_read_region, to copy a rectangular segment of the image at a given level to a buffer.

Yes, I imagined steps 1-2 to be done elsewhere. The view only needs to do part 3.
Having said that, are we able to use openslide to interpret every format of slide we want to display? If so, then the tiling stuff may be unnecessary, and the view can request a rectangle which represents the current region/level. It would be slightly larger than the visible region to make scrolling a bit more responsive.
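The "slightly larger than the visible region" idea is just geometry: pad the visible rectangle, then clamp to the image bounds. A minimal sketch (the names here are illustrative, not from any existing class):

```cpp
#include <algorithm>
#include <cstdint>

struct Rect { int64_t x, y, w, h; };

// Expand the visible rectangle by a margin on each side, then clamp it to
// the image bounds, so a small scroll in any direction already has pixels
// available before a new read is needed.
Rect paddedRequest(const Rect& visible, int64_t margin,
                   int64_t imageW, int64_t imageH) {
    int64_t x0 = std::max<int64_t>(0, visible.x - margin);
    int64_t y0 = std::max<int64_t>(0, visible.y - margin);
    int64_t x1 = std::min(imageW, visible.x + visible.w + margin);
    int64_t y1 = std::min(imageH, visible.y + visible.h + margin);
    return {x0, y0, x1 - x0, y1 - y0};
}
```

The margin is a tuning knob: bigger means smoother scrolling but larger reads.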
My understanding is that yes, openslide will be able to read every slide format we will need. I will confirm this with Joshua.
As far as I can see, openslide covers all of the bases: the images we're actively working with in Oxford are all either .ndpi or .svs, but I've heard rumours that there are some floating around in the other file formats listed on the main page of openslide. If it's possible to use openslide to read any of those, then I'd say that'd be perfect.
@martinjrobins @fcooper8472 we'll need a consistent interface between the "view" and the "model". It looks like OpenSlide has openslide_read_region, which takes x, y, w, h, and level and writes data to a buffer. It's possible to make a QImage from such a buffer, which can be displayed. The view will be able to track where it's looking, i.e. x, y, w, h.
So this almost matches up, apart from the level thing. Can we infer that from the scale? It's effectively correlated to how much the image has been downsampled from the sensor output, isn't it?
(Specifically, the QImage reading format QImage::Format_ARGB32_Premultiplied is the same format that openslide emits.)
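For anyone unfamiliar with that format: "premultiplied" means each colour channel is already scaled by the alpha before being packed into the 32-bit word. A tiny illustration of packing one such pixel (plain arithmetic, not OpenSlide or Qt code; the rounding convention here is my own choice for the sketch):

```cpp
#include <cstdint>

// Pack one ARGB32-premultiplied pixel: each colour channel is multiplied by
// alpha/255 before being stored in the native-endian 32-bit word, which is
// the layout shared by QImage::Format_ARGB32_Premultiplied and the buffer
// openslide_read_region fills.
uint32_t packPremultiplied(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    auto mul = [a](uint8_t c) -> uint32_t {
        return (uint32_t(c) * a + 127) / 255;  // scale channel by alpha, rounded
    };
    return (uint32_t(a) << 24) | (mul(r) << 16) | (mul(g) << 8) | mul(b);
}
```

The practical upshot is that the buffer can be handed to QImage without any per-pixel conversion.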
Again, @JABull1066 can confirm this but I think the amount of zoom at each level would be proportional to the amount the image has been downsampled from level 0. Not sure if this is useful, but I know there is a function in openslide that provides you with the level most closely associated with the given downsample factor.
Yes, I think the zoom is effectively the same as downsampling from level 0. I believe the different levels are actually different images, literally taken using more or less powerful lenses (although not 100% sure on that). I think automatically choosing a level based on the required zoom is just a matter of working out the lowest-resolution stored level that is still sufficiently high-res... it sounds like the openslide function @martinjrobins mentioned should do this.
From the OpenSlide 2013 (J Pathol Inform) paper:
A digital slide is represented as an ordered list of pyramid levels; level 0 is the highest resolution level and each subsequent level is a downsampled version of the previous level. In general, no image scaling is performed by the library; the only levels available through the API are those actually stored in the slide file. The centerpiece is the read_region() function [...] Additional calls exist to determine the downsampling factor for each level (relative to level 0) and to determine the next largest level for an arbitrary downsample factor.
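The "next largest level for an arbitrary downsample factor" behaviour the paper describes can be sketched without the library. Given the per-level downsample factors (level 0 first, factor 1.0, ascending), pick the highest-resolution level whose downsample does not exceed the requested factor. This mimics the intent of openslide's level-selection call; the exact tie-breaking in the real library may differ, and the data below is made up:

```cpp
#include <cstddef>
#include <vector>

// downsamples: per-level factors relative to level 0, ascending, starting
// at 1.0. Returns the last level whose downsample is <= the wanted factor,
// i.e. the cheapest stored level that is still at least as sharp as needed.
size_t bestLevelForDownsample(const std::vector<double>& downsamples,
                              double wanted) {
    size_t best = 0;
    for (size_t i = 0; i < downsamples.size(); ++i) {
        if (downsamples[i] <= wanted)
            best = i;
        else
            break;
    }
    return best;
}
```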
You probably saw that, but there is a demo basic slide viewer available on the openslide website (openslide.org/demo) as well as several downloadable slide files we can play with to get a feel.
Ooh, thanks, I didn't see the paper. I'll read that.
Based on this, I am expecting the view to make a call like imageForRegion(x, y, w, h) and for SlideData to work out what level to use to extract the bits for that image. Otherwise we have to leak information into the view about what levels exist in the slide image. Does that sound OK?
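A sketch of that split, with the level choice hidden inside the model. Everything below is hypothetical scaffolding, not existing project code; a real imageForRegion(x, y, w, h) would go on to call openslide_read_region with the chosen level and wrap the resulting buffer in a QImage:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical model-side class: the view asks for a region at a given
// downsample, and only SlideData knows which stored levels exist.
class SlideData {
public:
    explicit SlideData(std::vector<double> downsamples)
        : downsamples_(std::move(downsamples)) {}

    // Internal level choice; the view never sees level numbers. The real
    // implementation would follow this with a read into a pixel buffer.
    size_t levelFor(double downsample) const {
        size_t best = 0;
        for (size_t i = 0; i < downsamples_.size(); ++i)
            if (downsamples_[i] <= downsample)
                best = i;
        return best;
    }

private:
    std::vector<double> downsamples_;  // per-level factors relative to level 0
};
```

This keeps the interface the comment proposes: the view deals only in x, y, w, h (plus its zoom), and level numbers never leak out of SlideData.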
We need a class that can display a large image, with panning and zooming. The typical way to do this is to:
See, for example, CATiledLayer from Core Animation. But Qt doesn't have one of those. A sample project shows zooming and panning around an image, which is a good starting point.