SyneRBI / SIRF

Main repository for the CCP SyneRBI software
http://www.ccpsynerbi.ac.uk

Proposal for Image Geometry #198

Closed · ashgillman closed this 3 years ago

ashgillman commented 6 years ago

See https://github.com/CCPPETMR/SIRF/wiki/Proposals-for-new-features#image-geometry for full info

Purpose

The aim of this project is to make SIRF "aware" of image geometry. In essence, this means that a SIRF image can identify the position of any given voxel in DICOM LPS space. More broadly, this will enable features such as resampling between modalities; see the wiki page linked above for the full list.

I had a chat with @evgueni-ovtchinnikov and we have a proposal for implementing resampling.

It might look something like the following pseudocode:

class ImageData : aContainer {
    // ImageData is abstract also
    virtual Data get_data();                                // get raw voxel data
    virtual void set_data(data);                            // set raw voxel data
    virtual ImageData clone();                              // copy image
    virtual GeomInfo get_patient_coord_geometrical_info();  // get geometrical info (assume voxelised)
#ifdef BUILD_NIFTYREG
    NiftyImageData to_nifty_image();                        // convert into Richard's format
    void fill_from_nifty_image(image) {                     // check this.geomInfo and nifty.geomInfo, resample, fill
        this_geom = this.get_patient_coord_geometrical_info()
        other_geom = image.get_patient_coord_geometrical_info()
        data = nifty::resample(image.get_data(), other_geom, this_geom)
        fill(data)
    }
#endif
#ifdef BUILD_ITK
    ItkImageData to_itk_image();           // Not for now, but demonstrating future opportunity
    void fill_from_itk_image(image);       // resample, fill
#endif
}

class PETImageData : ImageData {...}
class MRImageData : ImageData {...}

// maybe?
#ifdef BUILD_NIFTYREG
class NiftyImageData : ImageData {...}
#endif

Then, in Python this can be used like:

pet_image = pet_reconstructor(pet_acq_data)
mr_image = mr_reconstructor(mr_acq_data)
pet_prior = pet_image.clone()
pet_prior.fill_from_nifty_image(mr_image.to_nifty_image())
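
For concreteness, here is a minimal Python sketch of the GeomInfo object the pseudocode assumes. The accessor names mirror the C++ ones used in the conversion code further down (offset, size, spacing, direction); the class itself is illustrative, not a final API:

import numpy as np

class VoxelisedGeometricalInfo3D:
    # Illustrative only: the geometry of a voxelised image.
    def __init__(self, offset, size, spacing, direction):
        self.offset = np.asarray(offset, dtype=float)        # LPS position of the first voxel centre (mm)
        self.size = tuple(size)                              # number of voxels per dimension
        self.spacing = np.asarray(spacing, dtype=float)      # voxel sizes (mm)
        self.direction = np.asarray(direction, dtype=float)  # 3x3 direction-cosine matrix

    def calculate_index_to_physical_point_matrix(self):
        # 4x4 affine mapping a (0-based) voxel index to an LPS point in mm.
        tm = np.eye(4)
        tm[:3, :3] = self.direction * self.spacing  # scale each column by its spacing
        tm[:3, 3] = self.offset
        return tm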

@rijobro @KrisThielemans

ashgillman commented 6 years ago

Depends on #193

rijobro commented 6 years ago

I've had a play over the weekend, and I can convert from a PETImageData to what you refer to as NiftyImageData with the following code:

void SIRFImageData::set_pet_image_data(const PETImageData &image_sirf_pet)
{
    _image_sirf_pet = make_shared<PETImageData>(image_sirf_pet);
    typedef VoxelisedGeometricalInfo3D Info;
    Info info = image_sirf_pet.get_patient_coord_geometrical_info();
    Info::Offset          offset  = info.get_offset();
    Info::Size            size    = info.get_size();
    Info::Spacing         spacing = info.get_spacing();
    Info::DirectionMatrix dmat    = info.get_direction();
    Info::TransformMatrix tm      = info.calculate_index_to_physical_point_matrix();

    _image_nifti = make_shared<nifti_image>();
    _image_nifti->dim[0]=_image_nifti->ndim=3;
    // Size
    _image_nifti->dim[1]=_image_nifti->nx=size[0];
    _image_nifti->dim[2]=_image_nifti->ny=size[1];
    _image_nifti->dim[3]=_image_nifti->nz=size[2];
    _image_nifti->dim[4]=_image_nifti->nt=1;
    _image_nifti->dim[5]=_image_nifti->nu=1;
    _image_nifti->dim[6]=_image_nifti->nv=1;
    _image_nifti->dim[7]=_image_nifti->nw=1;
    _image_nifti->nvox=_image_nifti->nx*_image_nifti->ny*_image_nifti->nz*_image_nifti->nt*_image_nifti->nu;
    // Spacing (extra dimensions are 0 by default)
    _image_nifti->pixdim[1]=_image_nifti->dx=spacing[0];
    _image_nifti->pixdim[2]=_image_nifti->dy=spacing[1];
    _image_nifti->pixdim[3]=_image_nifti->dz=spacing[2];
    // Data types
    _image_nifti->datatype = DT_FLOAT32;
    _image_nifti->nbyper = sizeof(float);
    _image_nifti->swapsize = sizeof(float);
    _image_nifti->intent_code = NIFTI_INTENT_NONE;
    _image_nifti->xyz_units=2; // distances in mm
    _image_nifti->nifti_type=1;
    _image_nifti->byteorder=1;
    _image_nifti->scl_inter=0.F;
    _image_nifti->scl_slope=1.F;
    _image_nifti->iname_offset=352;
    // Set the transformation matrix information
    _image_nifti->qoffset_x=offset[0];
    _image_nifti->qoffset_y=offset[1];
    _image_nifti->qoffset_z=offset[2];
    _image_nifti->quatern_d=1;
    _image_nifti->qfac=-1;
    _image_nifti->qform_code=1;
    for (int i=0;i<4;++i)
        for (int j=0;j<4;++j)
            _image_nifti->qto_xyz.m[i][j]=tm[i][j];
    // Allocate the data
    _image_nifti->data = (void *)calloc(_image_nifti->nvox, _image_nifti->nbyper);
    // Copy the data
    float *data = static_cast<float*>(_image_nifti->data);
    image_sirf_pet.get_data(data);
}

rijobro commented 6 years ago

If I open a NIfTI image with mSTIR.ImageData, the metadata should be identical to that of the image created by this process. However, they are slightly different, specifically in these fields:

    _image_nifti->qto_xyz.m[0][3]
    _image_nifti->qto_xyz.m[1][3]
    _image_nifti->qto_xyz.m[2][2]
    _image_nifti->qoffset_x
    _image_nifti->qoffset_y

I'm trying to debug this now.
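
One way to chase this down, as a sketch: save both versions to NIfTI and diff the affines with an independent reader such as nibabel (the filenames here are placeholders):

import nibabel as nib
import numpy as np

a = nib.load('original.nii').affine        # the file as it is on disk
b = nib.load('via_conversion.nii').affine  # the same image after the conversion round trip
print(np.round(a - b, 6))                  # non-zero entries locate the disagreement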

rijobro commented 6 years ago

Lastly, I'm still hampered by the fact that my code can't interact with mSTIR etc. in MATLAB and Python. As a short-term workaround, I've wrapped PETImageData (mSTIR) into SIRFReg. The end goal would probably be the inverse (wrapping SIRFReg in the same way that the rest of SIRF is wrapped). Anyway, this means that I'm currently testing in MATLAB with:

pet=mSIRFReg.PETImageData('pet_image.nii');
nifti=mSIRFReg.SIRFImageData();
nifti.set_pet_image_data(pet);

which will then do the conversion.

ashgillman commented 6 years ago

Alt interface (@rijobro):

pet_image = pet_reconstructor(pet_acq_data)
mr_image = mr_reconstructor(mr_acq_data)
mr_resampled = mr_image.clone()
mr_resampled.resample(pet_image, transformation_matrix='identity')

pet_prior = pet_image.clone()
pet_prior.fill(mr_resampled)

pet_prior.fill(mr_image)  # Error: Image spaces are different

KrisThielemans commented 6 years ago

@ashgillman, could you clarify the use of transformation_matrix here?

mr_resampled.resample(pet_image, transformation_matrix='identity')

Also, I wouldn't put the resample function "in" the ImageData class. I know you are thinking about this for supporting non-voxelised images at some point, but you'd want to be able to use different resamplers (Nifty, ITK, STIR, Gadgetron, ...), and we can't stuff all of them into ImageData. I rather think the resampler has to follow the same format as Reconstructor etc., as per doc/UserGuide.md:

recon.set_input(acquisition_data); 
recon.setup(image_data); 
recon.process(); 
output_image_data=recon.get_output(); 

with additional overloads being discussed in #194.
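
To make the analogy concrete, resampling in that style might look as follows. Every name here is hypothetical; this is a sketch of the calling convention, not an existing SIRF API:

resampler = Resampler()                  # e.g. a NiftyReg- or ITK-backed implementation
resampler.set_input(mr_image)            # image to be resampled
resampler.set_reference(pet_image)       # defines the target geometry
resampler.set_transform(transformation)  # optional motion/registration transform
resampler.process()
mr_in_pet_space = resampler.get_output()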

ashgillman commented 6 years ago

The reason for the transform is that resamplers generally accept a transform when resampling. This would be useful, for example, if there was motion.

So maybe:

pet_image = sirf.STIR.ImageData(...)
ute_image = sirf.nifty.ImageData(...)
mprage_image = sirf.nifty.ImageData(...)

motion_transform = sirf.nifty.register(moving=mprage_image, fixed=ute_image)
resampler = sirf.nifty.<something>
mprage_in_pet_resampled = sirf.nifty.resample(
    image=mprage_image, reference=pet_image,
    transform=motion_transform, resampler=resampler)

mprage_prior = pet_image.clone()
mprage_prior.fill(mprage_in_pet_resampled)
DANAJK commented 6 years ago

Comment on the Wiki:

The DICOM origin is not specified. A sensible origin would be the point in the patient that was landmarked. In scanner space, this is normally isocentre+bedshift.

There is a helpful discussion about spatial referencing in a MATLAB blog: https://blogs.mathworks.com/steve/2013/08/28/introduction-to-spatial-referencing/. Their approach makes it explicit whether a coordinate refers to the corner edge or the centre of the corner pixel, and how subscripts relate to real coordinates. Unfortunately, their spatial referencing object does not allow for orientated axes.

Whether 'y' increases as you go up or down can appear confusing (in MATLAB the image origin is top left and the row index increases as you go down). It can be tempting to start flipping arrays to cope with this, but it's not necessary and leads to all manner of confusion. If you just use DICOM-like spatial referencing (IPP and IOP), it's all fine. It's then more intuitive to find 3D coordinates of image voxels using vector addition from IPP than using a 4×4 transform matrix applied to image coordinates.

If you try to place a 2D image in a 3D image volume, just ensure it is right-handed (i.e. the through-slice direction is the cross-product of the vector along a row with the vector down a column).

The key to all this is the link between array subscripts and image coordinates - we may need separate MATLAB and Python functions?
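
As a small illustration of the right-handedness check above (a sketch; iop holds the six ImageOrientationPatient values):

import numpy as np

# ImageOrientationPatient: the first three values are a unit vector along a row,
# the last three a unit vector down a column (both in LPS).
iop = np.array([1, 0, 0, 0, 1, 0], dtype=float)  # example axial orientation
row_dir, col_dir = iop[:3], iop[3:]

# Through-slice direction for a right-handed placement of a 2D slice in 3D:
slice_dir = np.cross(row_dir, col_dir)
print(slice_dir)  # [0. 0. 1.]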

ashgillman commented 6 years ago

Thanks David.

Good to know re: the origin. I tried to use the terminology "vendor-specified origin" in the wiki documentation to avoid ambiguity. I guess I don't really know what the definition of this is, though; I have been internally going by "where (0, 0, 0) mm is on the vendor-reconstruction DICOMs", but this obviously isn't a rigorous definition.

I agree we should be explicit about where exactly within the voxel its position refers to. I think we should choose the centre, as the most common and least ambiguous choice.

Regarding Python vs. MATLAB, yes you are right, there will have to be different wrapping code here. Python uses the same internal memory layout and indexing scheme as C and C++ (0-based, row-major, array[slice_no-1][column_no-1][row_no-1]).

https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray

Data in new ndarrays is in the row-major (C) order, unless otherwise specified, but, for example, basic array slicing often produces views in a different scheme.

MATLAB uses 1-based indexing and is column-major (array[row_no][column_no][slice_no]).

We should be able to query, in each language, for a given index, what the corresponding LPS coordinate is. I believe the C++ implementation should follow 0-based, row-major conventions, since this is the de facto standard for the language. The Python wrapper can then simply expose the output, but the MATLAB wrapper will have to do a little calculation to convert first.
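
A sketch of what that query could look like, reusing the 4x4 index-to-physical matrix from the GeomInfo sketch earlier (the MATLAB subscript conversion is the wrapper's job):

import numpy as np

def index_to_lps(index, index_to_physical):
    # index: 0-based, row-major (slice, column, row) tuple, as in C/C++/Python.
    # index_to_physical: 4x4 matrix from calculate_index_to_physical_point_matrix().
    homogeneous = np.append(np.asarray(index, dtype=float), 1.0)
    return (index_to_physical @ homogeneous)[:3]

# A MATLAB wrapper would first convert its 1-based, column-major subscripts,
# e.g. (row_no, column_no, slice_no) -> (slice_no-1, column_no-1, row_no-1),
# then call the same underlying routine.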

ashgillman commented 6 years ago

I assume where (0, 0, 0) mm is on the DICOMs is where the laser lines up... And I assume that once the radiographer hits go, the point where the laser lines up moves to the isocentre of the image. Is this the case? Can we verify this on our test scans?

DANAJK commented 6 years ago

DICOM doesn’t specify an origin.

A point in the patient that is clearly defined is the landmarking point, so it would make sense to choose that as an origin.

Yes, on first entry the landmark point should go to the isocentre. I’m not sure how well-controlled this movement is. It might be that the travel is ‘approximate’ but they have a more accurate actual measurement and take this into account. After planning on a localiser scan, the bed may move to keep the anatomy of interest close to isocentre.

David

ashgillman commented 6 years ago

Sorry, when I said "(0, 0, 0) on the DICOMs" I meant the DICOM image format origin, not the DICOM standard origin (which isn't defined).

DICOM format images do (surely?) have an origin. So although DICOM the standard doesn't specify where an origin should be, DICOM as an image format does, and each scanner surely must specify where its origin is (though I guess this can be changed).

Yes?

KrisThielemans commented 6 years ago

Although DICOM the standard doesn't specify where an origin should be, DICOM as an image format does and each scanner surely must specify where its origin is

Obviously, every scanner has a specific algorithm to decide its origin, which will depend on landmarks, conventions, software version, but hopefully not the weather. I'm afraid we will have to find out what that algorithm is, hopefully just by using numbers in the headers. If it's not obvious, we can ask the manufacturers.

DANAJK commented 6 years ago

If your task is to compute the coordinate in Patient LPS space of any pixel in a DICOM image, then you do not need to specify an image origin explicitly – you do not even need an image space. With ImageOrientationPatient (IOP), ImagePositionPatient (IPP) and the PixelSpacing, everything is in Patient space and you can just step along from the top-left pixel, computing the coordinates of an image pixel in the Patient coordinate system.

For example, if you wanted the coordinate of the top-right pixel centre in Patient space, it would be:

TRP = IPP + PixelSpacing(2) * (width - 1) * IOP(1:3)

where I've used 1-based indexing, width is the number of pixels wide for the image, PixelSpacing is in mm and IOP(1:3) is a unit vector pointing along a row.

If instead of using vector addition, you wish to use a transformation matrix, then you will need to make an image coordinate system in order to be able to multiply the matrix by a homogeneous coordinate. You can put the origin of these image coordinates where you like, provided you ensure that IPP maps to the correct image coordinate. In the default MATLAB image coordinate system, IPP maps to (1,1).
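
For instance, a sketch of such a matrix for a single 2D slice, built so that the 1-based MATLAB image coordinate (1, 1) maps to IPP (the function name is hypothetical; ps is PixelSpacing as (row spacing, column spacing)):

import numpy as np

def dicom_2d_affine(ipp, iop, ps):
    row_dir = np.asarray(iop[:3], dtype=float)   # unit vector along a row
    col_dir = np.asarray(iop[3:], dtype=float)   # unit vector down a column
    m = np.eye(4)
    m[:3, 0] = row_dir * ps[1]                   # one column step
    m[:3, 1] = col_dir * ps[0]                   # one row step
    m[:3, 3] = np.asarray(ipp, dtype=float) - m[:3, 0] - m[:3, 1]  # so (1,1) -> IPP
    return m

# LPS centre of the pixel at 1-based (row r, column c):
#   lps = (dicom_2d_affine(ipp, iop, ps) @ np.array([c, r, 0, 1.0]))[:3]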

The origin of the Patient coordinate system is not defined in DICOM, but it will not change from image to image until the patient is re-landmarked. All DICOM images with the same FrameOfReferenceUID should have a consistent Patient origin and can thus be resampled, fused, displayed together etc.

David

KrisThielemans commented 6 years ago

If your task is to compute the coordinate in Patient LPS space of any pixel in a DICOM image, then you do not need to specify an image origin explicitly – you do not even need an image space.

@DANAJK, to me this is terminology. IPP is obviously defined w.r.t. some (scanner/software defined) origin. We don't care what it is, but unfortunately need to know how to compute IPP if we've got some raw data in our hands (which contains some info on bed position etc).

I of course agree with your equations, and if you write all of them down (for an arbitrary voxel index in 3D), it'll boil down to a 3×3 matrix multiplication plus IPP, or a 4×4 matrix for homogeneous coordinates. We can choose to do these calculations internally with whatever is the most convenient/fast method, and we could return all possible representations easily. @ashgillman's GeometricalInfo already provides a few of them. (I'm not sure what you mean by "you will need to make an image coordinate system"; the maths is the same.)

You make an excellent point about FrameOfReferenceUID. We should make this part of GeometricalInfo: as soon as FrameOfReferenceUID changes, we have to stop. Of course, pretty much only DICOM has this field, so we'll have to allow for the case of an "unknown FrameOfReferenceUID".
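
A minimal sketch of that check (the field name and helper are hypothetical; an unknown UID is treated as compatible, as suggested above):

def frames_of_reference_match(geom_a, geom_b):
    # Refuse to combine images whose FrameOfReferenceUID is known and different;
    # treat an unknown UID (e.g. a non-DICOM source) as compatible.
    uid_a = getattr(geom_a, 'frame_of_reference_uid', None)
    uid_b = getattr(geom_b, 'frame_of_reference_uid', None)
    if uid_a is None or uid_b is None:
        return True
    return uid_a == uid_b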

ashgillman commented 6 years ago

To avoid confusion (if that is possible...) I have updated the wiki to use consistent terminology.

ashgillman commented 6 years ago

Discussion re: frame of reference can go in the PR, here: https://github.com/CCPPETMR/SIRF/pull/193#issuecomment-413881509

DANAJK commented 6 years ago

I suspect that in ISMRMRD it will be the vector from the frame-of-reference origin to the centre of the reconstructed slab, with all bed moves and offsets applied during acquisition already taken care of. Here 'slab' means 'slice' in 2D imaging and 'volume' in 3D imaging. This all remains to be checked. There is also the possibility of confusion over where the centre is, because the MR image is the result of a Fourier transform, and when the array length is even, the 'middle' is at the pixel with index (N/2)+1 in 1-based units.
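
That even-length centre convention is easy to check numerically with numpy's FFT helpers (N = 4 is just an example):

import numpy as np

N = 4
freqs = np.fft.fftshift(np.fft.fftfreq(N))
print(freqs)                         # [-0.5  -0.25  0.    0.25]
centre = int(np.argmax(freqs == 0))  # 0-based index of the zero frequency
print(centre + 1)                    # 3, i.e. (N/2)+1 in 1-based units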

KrisThielemans commented 5 years ago

@rijobro, @ashgillman this is probably done? Possibly we still need to check the Gadgetron mapping to DICOM, and we definitely still need to fix the STIR mapping to DICOM, but the latter doesn't seem to be a SIRF issue.

ashgillman commented 5 years ago

I believe so. Overall, there is still plenty to do, but the generic info class has been decided on and added.