dcmjs-org / dcmjs

Javascript implementation of DICOM manipulation
https://dcmjs.netlify.com/
MIT License

Normalization makes writing adapters fiddly for multi-volume series #70

Open JamesAPetts opened 5 years ago

JamesAPetts commented 5 years ago

(Related to #4)

Some scanners produce multi-volume series, in which the instance number increases linearly but the series contains multiple volumes, e.g. with different b-values or multiple phases.

Normalizers in dcmjs currently sort by ImagePositionPatient, so if a series contains two sub-volumes of single-frame images, Normalizer.normalizeToDataset returns a multiframe dataset in which the two volumes are interleaved.

This makes writing adapters difficult and awkward if the source library orders by instance number. Currently the Cornerstone segmentation adapter will produce a segmentation with an incorrect PerFrameFunctionalGroupsSequence for such multi-volume images.
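To illustrate the interleaving problem and one way out of it, here is a hedged sketch (not dcmjs code) that partitions single-frame datasets into sub-volumes by a discriminating key before sorting each one by position. The grouping key is an assumption — it would be a phase, b-value, or acquisition attribute depending on the data — and the sort uses only the z component of ImagePositionPatient as a simplification; real code would project onto the slice normal derived from ImageOrientationPatient.

```javascript
// Hypothetical helper: split a mixed series into sub-volumes, then sort
// each sub-volume along the slice direction, instead of sorting the whole
// series by position (which interleaves the volumes).
function groupIntoVolumes(datasets, keyFn) {
  const volumes = new Map();
  for (const ds of datasets) {
    const key = keyFn(ds);
    if (!volumes.has(key)) volumes.set(key, []);
    volumes.get(key).push(ds);
  }
  // Simplification: sort by the z component of ImagePositionPatient.
  for (const vol of volumes.values()) {
    vol.sort((a, b) => a.ImagePositionPatient[2] - b.ImagePositionPatient[2]);
  }
  return [...volumes.values()];
}

// Example: two phases that arrive interleaved by instance order.
// "phase" is a stand-in for whatever attribute distinguishes the volumes.
const series = [
  { phase: 1, ImagePositionPatient: [0, 0, 10] },
  { phase: 2, ImagePositionPatient: [0, 0, 0] },
  { phase: 1, ImagePositionPatient: [0, 0, 0] },
  { phase: 2, ImagePositionPatient: [0, 0, 10] },
];
const vols = groupIntoVolumes(series, (ds) => ds.phase);
// vols now holds two coherent sub-volumes, each sorted by position.
```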

There are a few solutions I can think of off the top of my head, each with different drawbacks.

As you can see, all approaches are rather unsatisfactory. Thoughts?

JamesAPetts commented 5 years ago

Perhaps the best approach would be to not try to sort the given datasets at all in general, but just concatenate them?

pieper commented 5 years ago

What kind of scans do you have? The idea of the Normalizers is that you have acquisition-specific subclasses that are currently selected by SOPClassUID.

But those are broad categories, so if you want to sort certain types of acquisitions in different ways we could come up with a plug-in model whereby different Normalizers could examine the datasets and offer options about how to load them. This is how Slicer's DICOMPlugins work and it's turned out to be a pretty reasonable way to manage the complexity.
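The plug-in model described above could be sketched roughly as follows. This is illustrative only — the names (registerNormalizer, examine, selectNormalizer) and the confidence-score convention are assumptions, not dcmjs or Slicer API:

```javascript
// Hypothetical plug-in registry: each normalizer examines the datasets
// and reports a confidence score; zero means it cannot handle them.
const normalizerPlugins = [];

function registerNormalizer(plugin) {
  normalizerPlugins.push(plugin);
}

function selectNormalizer(datasets) {
  // Pick the plugin reporting the highest nonzero confidence.
  let best = null;
  for (const plugin of normalizerPlugins) {
    const confidence = plugin.examine(datasets);
    if (confidence > 0 && (!best || confidence > best.confidence)) {
      best = { plugin, confidence };
    }
  }
  return best ? best.plugin : null;
}

// Example plugins: a generic CT handler, and a multi-phase specialist
// that claims the data more strongly when it detects several phases.
// ("phase" is a stand-in for a real discriminating attribute.)
registerNormalizer({
  name: "GenericCT",
  examine: (ds) =>
    ds.every((d) => d.SOPClassUID === "1.2.840.10008.5.1.4.1.1.2") ? 0.3 : 0,
});
registerNormalizer({
  name: "MultiPhaseCT",
  examine: (ds) => (new Set(ds.map((d) => d.phase)).size > 1 ? 0.8 : 0),
});

const chosen = selectNormalizer([
  { SOPClassUID: "1.2.840.10008.5.1.4.1.1.2", phase: 1 },
  { SOPClassUID: "1.2.840.10008.5.1.4.1.1.2", phase: 2 },
]);
```

The specialist outbids the generic handler here, which is the point of the model: broad SOPClassUID routing stays as the fallback while acquisition-specific plugins can take over when they recognize the data.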

JamesAPetts commented 5 years ago

What kind of scans do you have?

I'll see if I can share this data. It's just CT Image Storage, with two phases in one series, stacked one after another by instance number.

pieper commented 5 years ago

Yes, that should really be normalized to a legacy converted CT image with correct dimension encoding.
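"Correct dimension encoding" here would mean the multiframe object declares both a stack axis and a temporal axis via the Dimension Index Sequence, so readers know frames vary over two dimensions rather than one. A hedged sketch of what that sequence might look like as a plain object — the tag values follow PS3.3 (Multi-frame Dimension Module), but the function and its output shape are illustrative, not what dcmjs emits:

```javascript
// Hypothetical builder for a two-axis Dimension Index Sequence:
// one index for in-stack position, one for temporal position (phase).
function buildDimensionIndexSequence(dimensionOrganizationUID) {
  return [
    {
      DimensionOrganizationUID: dimensionOrganizationUID,
      DimensionIndexPointer: "00209057", // InStackPositionNumber
      FunctionalGroupPointer: "00209111", // FrameContentSequence
      DimensionDescriptionLabel: "In-stack position",
    },
    {
      DimensionOrganizationUID: dimensionOrganizationUID,
      DimensionIndexPointer: "00209128", // TemporalPositionIndex
      FunctionalGroupPointer: "00209111", // FrameContentSequence
      DimensionDescriptionLabel: "Temporal position (phase)",
    },
  ];
}

const seq = buildDimensionIndexSequence("1.2.3.4"); // UID is a placeholder
```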

To address Andrey's point, maybe we could come up with samples that represent the essential structure of the data but with any identifiers stripped (including image so it's a true safe harbor).


fedorov commented 4 years ago

Do we want to, or even have the resources to, maintain private field handling in dcmjs to try to work out what to do?

I can't see how you can get around parsing private attributes if you want to have happy imaging researchers. I have limited experience, but I think it might be OK to ignore those private fields to meet the needs of radiologists; however, I would think OHIF et al. want to address various quantitative analysis use cases, which often cannot be addressed unless private attributes are parsed or normalized into standard attributes. Where that normalization functionality fits, that I don't know...
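As a concrete example of "normalized into standard attributes," a sketch of mapping a vendor private tag onto its standard equivalent. The private tag used here (0019,100C, which on some Siemens MR systems carries the diffusion b-value) is an example only; real code must verify the private creator before trusting the element, and the flat-object dataset shape is an assumption:

```javascript
// Hypothetical normalizer step: prefer the standard DiffusionBValue
// (0018,9087); fall back to a known private location; warn otherwise.
function normalizePrivateBValue(dataset) {
  if (dataset.DiffusionBValue !== undefined) {
    return dataset; // standard attribute already present
  }
  const privateValue = dataset["0019100C"]; // vendor-specific b-value (example)
  if (privateValue !== undefined) {
    return { ...dataset, DiffusionBValue: Number(privateValue) };
  }
  console.warn("No b-value found in standard or known private attributes");
  return dataset;
}

const normalized = normalizePrivateBValue({ "0019100C": "1000" });
// normalized.DiffusionBValue is now populated from the private element.
```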

pieper commented 4 years ago

The same problem will exist in any of our code intended to work with real-world DICOM data, where the scanner vendors embed important information in non-standard locations. The only solution I know of is to hard-code workarounds for known cases and issue stern warnings when we see data that is not in a known format. Ideally we'll be able to build an open-source knowledge base with example data across dcmjs, highdicom, and other software so that we handle as many cases as possible consistently.
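The "known workarounds plus stern warnings" idea could look roughly like this. Everything here is a sketch — the matcher contents and the idea of keying on AcquisitionNumber are assumptions for illustration:

```javascript
// Hypothetical table of recognized data layouts. Each entry knows how to
// detect its variant; anything unrecognized triggers a loud warning.
const knownVariants = [
  {
    name: "multi-phase CT, phases stacked by InstanceNumber",
    matches: (ds) =>
      ds.some((d) => d.Modality === "CT") &&
      new Set(ds.map((d) => d.AcquisitionNumber)).size > 1,
  },
];

function identifyVariant(datasets) {
  const variant = knownVariants.find((v) => v.matches(datasets));
  if (!variant) {
    console.warn(
      "Unrecognized data layout; normalization may interleave volumes"
    );
    return null;
  }
  return variant.name;
}

const result = identifyVariant([
  { Modality: "CT", AcquisitionNumber: 1 },
  { Modality: "CT", AcquisitionNumber: 2 },
]);
```

Growing that table from a shared knowledge base of example data, as suggested above, is what would keep dcmjs and highdicom handling the same variants consistently.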

Where that normalization functionality fits, that I don't know...

Within dcmjs that would be in specialized normalizer subclasses that recognize particular format variants and map them to a normalized form. This is not implemented yet, but it would be similar to what Slicer's DICOMPlugins do, except creating normalized DICOM instances instead of MRML.