mb3152 opened this issue 1 year ago
Planned for the next release, but no schedule yet.
I recommend installing in a virtual environment to avoid problems with older dependencies.
Thanks so much. We want to include it in a larger package; we can just use it separately for now though. Thanks!
Ah, that makes sense.
FYI - I also hope to bump the SimpleITK version for the next release. However, I've noticed in testing to date that there have been slight changes to the transform code in ITK/SimpleITK, meaning that the results change. Hence the decision to wait for a bigger set of upgrades.
Thanks Richard! Is there much development to do for just the pydicom upgrade changes?
I also think I am having an issue with the environment install, because numpy is requiring a newer python.
conda create --name KarawunEnv python=3.9 karawun
Package python conflicts for:
karawun -> numpy[version='>=1.13.0'] -> python[version='>=3.10,<3.11.0a0|>=3.10,<3.11.0a0|>=3.11,<3.12.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.11,<3.12.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0',build=*_cpython]
python=3.9
karawun -> python[version='>=3.6,<=3.9']
Also having issues with the source method:
(base) maxbertolero@maxs-mbp-2 karawun % conda create --name KarawunEnv --file https://github.com/DevelopmentalImagingMCRI/karawun/raw/master/requirements.txt
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- simpleitk[version='>=1.2,<=2.0.2']
Current channels:
- https://repo.anaconda.com/pkgs/main/osx-arm64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/osx-arm64
- https://repo.anaconda.com/pkgs/r/noarch
- https://conda.anaconda.org/conda-forge/osx-arm64
- https://conda.anaconda.org/conda-forge/noarch
- https://conda.anaconda.org/anaconda/osx-arm64
- https://conda.anaconda.org/anaconda/noarch
- https://conda.anaconda.org/SimpleITK/osx-arm64
- https://conda.anaconda.org/SimpleITK/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Looks like that is due to old packages not being available on M1/M2 architectures.
I'm not going to have a chance to look at this in any detail for a while.
If you're happy to play around you can clone the repo and go through the testing procedure (see "Verifying your installation" at https://developmentalimagingmcri.github.io/karawun/installation.html) after changing the versions in requirements and see how close the tests come to passing. Let me know how you go.
If you do attempt this, I suggest using this branch: https://github.com/DevelopmentalImagingMCRI/karawun/tree/MFFVersioneer, which has some updates relating to multiframe dicoms as templates.
Oh that makes sense! Actually, I'll make a little Linux-based docker container. Want me to share it / the dockerfile when I'm done?
Sounds good - please attach the dockerfile to this thread.
Still working on the container, but I'm having an issue: when we load the converted data in Brainlab, both the T1 and the tract object load correctly, but we can't overlay them. Is there something we have to edit in the DICOM tags to make sure Brainlab knows it can overlay the tract object on the image? Thanks in advance.
If you convert multiple MR modality nifti files along with one or more tck files, then by default Brainlab will only allow the tck files to be overlaid on the first of the nifti files listed in the conversion command. In order to overlay on the others you need to accept the registration. That happens, if memory serves, in the image fusion module in Brainlab. All nifti files converted by a single importTractography command are assumed to be aligned and are assigned to a common dicom frame of reference. However, Brainlab requires that the coregistration is reviewed and accepted by the user.
I finally got access to a Brainlab system; any idea what is happening here?
It looks like the seg dcm has no 'Location', 'Window', 'PatientPosition', 'Matrix' information. Is that the issue?
This is very strange. I have never seen it before.
The seg dcm doesn't have those tags, if memory serves, because it is assigned as derived from another dicom (by default the first in the nifti argument list) and therefore inherits them.
Can you provide the command line you are using?
Do the T1 and streamlines align correctly in mrview?
The only remaining potential explanation is issues with multiframe dicoms as your template dicom. We made a quick hack to get those to behave, but the work isn't complete. That is the version in the development branch I mentioned.
Hmm, it's not even rendering in mrview. I am using the tck from DSI Studio; figured it was fine. Any idea what is going on? I can also ask Frank at DSI Studio. Here are the T1 and the tck:
https://www.dropbox.com/scl/fo/hzjiwlpvxlntuo58442hd/h?dl=0&rlkey=g4yej6outkvu9mw7wdzu0y6ti
Have you also got the DWI it was created on, or at least the FA map derived from the same DWI?
If you switch to volume render mode in mrview you can see the track outside the FOV of the T1 - exactly as you saw in Brainlab.
I'm guessing that the T1 and DWI aren't coregistered.
That's super helpful, thanks. They are, but we've had issues with this out of DSI Studio before. I am assuming it's writing out the tck files with the wrong voxel coordinates. I'll ask Frank at DSI Studio about this.
There may be some nibabel tools for this now. See this old mrtrix thread https://community.mrtrix.org/t/conversion-of-streamlines-from-trk-to-tck/806/9
Hey, we figured this out, just had to apply the affine correctly to get the voxel coordinates correct. Working on the container now, I can send it over soon.
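For anyone who hits the same problem: the affine fix described above amounts to a homogeneous-coordinate multiply. A minimal numpy sketch (the 2 mm affine here is a made-up example, not the actual data's):

```python
import numpy as np

def voxel_to_world(affine, vox_coords):
    """Map Nx3 voxel indices to world (scanner) coordinates
    using a 4x4 nifti affine."""
    vox = np.asarray(vox_coords, dtype=float)
    # Append a column of ones so the translation column of the
    # affine is applied too (homogeneous coordinates).
    homog = np.hstack([vox, np.ones((vox.shape[0], 1))])
    return (affine @ homog.T).T[:, :3]

# Example: 2 mm isotropic voxels with a -90 mm origin offset.
affine = np.array([[2.0, 0.0, 0.0, -90.0],
                   [0.0, 2.0, 0.0, -90.0],
                   [0.0, 0.0, 2.0, -90.0],
                   [0.0, 0.0, 0.0, 1.0]])
world = voxel_to_world(affine, [[0, 0, 0], [45, 45, 45]])
# voxel (45, 45, 45) lands at world (0, 0, 0) with this affine
```

Streamline files such as tck store points in world coordinates, so points exported in voxel space need exactly this transform before they will line up with the T1.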
By the way, have you figured out how to, in general, convert nifti to DICOM and have it load as an overlay? No issues with your outputs, we just can't get our other maps to overlay on the T1. I assume there is some DICOM tag we are missing or something. Thanks!
Are you able to overlay the streamlines on the other maps?
I'll have a play around.
Yes, we can overlay the streamlines totally fine on the T1. It's just the dicoms we have written with pydicom (e.g. tract density images) that we can't get to overlay. Totally not your responsibility to figure this out; I was just curious if you knew offhand how to get dicoms to overlay on a T1 in general. Thank you again for your time!
My best guess without seeing the data is to try the following (apologies if this is what you're already trying, but it sounds like it might not be). Collect all your images, i.e. T1 + TDI, as co-registered nifti files.
Convert them with karawun, along with the tract and label images, in the same command. Including them in the same command will add the frame of reference tags that Brainlab interprets as indicating that the images are aligned (it will also require that you accept the frame of reference in the image fusion module). I would assume that convincing Brainlab that the data is in the same space is required before overlaying is allowed.
importTractography --dicom-template path/to/a/dicom --nifti T1.nii.gz tdi.nii.gz --tract-files left_cst.tck right_cst.tck -o path/to/output/folder
This form of the command, with multiple nifti files, assumes that those files are aligned and writes the frame of reference information into the dicoms.
Not sure what the steps in brainlab for overlaying are, but I'll see if I can find out.
That is insanely helpful, that makes a ton of sense. Thank you!
How do you choose the frame of reference tag to put in the DICOM?
The frame of reference is a DICOM UID. In karawun I create one per import process - see here. The ID gets passed to all the sub-functions creating the different sorts of dicom.
Note that the main import function also accepts a frame of reference UID, but you can't pass it in from the command line - it was for when I was experimenting with importing data in parts, but I figured it was too complicated to be generally useful.
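For illustration, a unique DICOM UID can be generated from a UUID under the "2.25." root that the DICOM standard defines for UUID-derived UIDs (PS3.5 Annex B). pydicom's pydicom.uid.generate_uid() is the usual way to do this; the stdlib-only sketch below just shows the shape of such an identifier:

```python
import uuid

def new_dicom_uid():
    """Generate a unique DICOM UID using the UUID-derived '2.25.' root
    (DICOM PS3.5 Annex B). A 128-bit UUID rendered as a decimal integer
    is at most 39 digits, so the result stays under DICOM's 64-character
    UID limit."""
    return "2.25." + str(uuid.uuid4().int)

frame_of_reference_uid = new_dicom_uid()
print(frame_of_reference_uid)
```

Every dicom written in one import run would then carry the same FrameOfReferenceUID value, which is what tells the viewer the series share a space.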
Perfect. Lots of great DICOM code in there! Thank you.
Also, I am trying to containerize this, and I am stuck at the SimpleITK install.
Here is the Dockerfile. I have tried a few ways to get SimpleITK installed, but this is the most basic.
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
apt-utils \
build-essential \
ninja-build \
git \
python3.7 \
python3-pip \
python3-dev \
python3-venv \
cmake \
&& apt-get clean
RUN pip3 install scikit-build cmake
RUN pip3 install karawun
It looks like an issue with getting ITK from GitHub, but I can't seem to fix that. Any ideas? error.txt
Update, this works on our Azure node, not on my Mac.
(base) maxbertolero@compute01:~/brainmapping$ docker run --rm -it --entrypoint /bin/bash karawun
root@d140b93547e9:/# ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
root@d140b93547e9:/# importTractography -h
usage: importTractography [-h] -d DICOM_TEMPLATE -o OUTPUT_DIR -n NIFTI [NIFTI ...] [-t TRACT_FILES [TRACT_FILES ...]] [-l LABEL_FILES [LABEL_FILES ...]] [-v]
A tool for creating Brainlab compatible dicom data from mrtrix tract files and nifti images. All image and tck files are assumed to be aligned and will be placed into a dicom frame of reference with the first
nifti image as the base for the reference. The alignment must be accepted in brainlab before objects can be viewed on all images.
optional arguments:
-h, --help show this help message and exit
-d DICOM_TEMPLATE, --dicom-template DICOM_TEMPLATE
Dicom file to supply tags, ideally from one of the dicoms converted to nifti
-o OUTPUT_DIR, --output-dir OUTPUT_DIR
Output directory for generated dicom
-n NIFTI [NIFTI ...], --nifti NIFTI [NIFTI ...]
One or more input nifti files
-t TRACT_FILES [TRACT_FILES ...], --tract-files TRACT_FILES [TRACT_FILES ...]
One or more mrtrix streamline files
-l LABEL_FILES [LABEL_FILES ...], --label-files LABEL_FILES [LABEL_FILES ...]
One or more nifti label image files
-v, --verbose verbose errors, python traceback
It is hard to be certain, but I think it is failing when cloning ITK. It is building SimpleITK, which requires ITK, as there aren't aarch64 binaries for the version you need.
Yup, that was the issue. I think the git repo was down; I had a cached version and it built fine. I'm going to write the entry point and you can use that as a template if you ever want to containerize this. Thanks again!!
I've done some checking today. Our brainlab system is in a funny state at the moment, so we get a mix of old and new stuff.
It seems important to go into the "image fusion" module and accept the frame of reference. Once you've done that tracts will appear on all images included in the --nifti part of the conversion.
Overlays don't appear to be possible in the simple viewer, but the cranial planning tools can display them. The old icon was a kind of double triangle - I'm not sure if it has changed - it allows overlays to be switched on and off, and there's a "spyglass" mode to allow you to see through the top overlay.
Yes, this is what we have seen when we demo this with BrainLab users. It only works if you go to the image fusion module.
Also, I am now getting an error I didn't get before; it seems related to where you do the rescaling?
root@59b942f33d3c:/# importTractography --dicom-template dicoms/sub-NDAR#####_ses-baselineYear1Arm1_dicom000001.dcm \
> --nifti nifti/sub-NDAR#####_desc-preproc_T1w.nii.gz --tract-files \
> tck/sub-NDAR#####_ses-baselineYear1Arm1_space-T1w_desc-preproc_gqi.Corticospinal_Tract_L.tck \
> -o out/
Exception handler: OSError - With tag (0028, 0106) got exception: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
for data_element:
(0028, 0106) Smallest Image Pixel Value US: -51
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pydicom/filewriter.py", line 230, in write_numbers
value.append # works only if list, not if string or number
AttributeError: 'int' object has no attribute 'append'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pydicom/filewriter.py", line 232, in write_numbers
fp.write(pack(format_string, value))
struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pydicom/tag.py", line 30, in tag_in_exception
yield
File "/usr/local/lib/python3.8/dist-packages/pydicom/filewriter.py", line 555, in write_dataset
write_data_element(fp, dataset.get_item(tag), dataset_encoding)
File "/usr/local/lib/python3.8/dist-packages/pydicom/filewriter.py", line 482, in write_data_element
writer_function(buffer, data_element, writer_param)
File "/usr/local/lib/python3.8/dist-packages/pydicom/filewriter.py", line 237, in write_numbers
raise IOError(
OSError: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
for data_element:
(0028, 0106) Smallest Image Pixel Value US: -51
Fixing this problem is another one for my upgrade list.
The input nifti files have to be scaled such that pixel values fit in an unsigned short type - i.e. not negative. I need to include more informative errors about this. The problem, from memory, is that I'm writing MR modality dicoms and they have that constraint on pixel type.
Okay cool, so if I make sure all nifti values are positive, that should fix it?
Yep - I think so.
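For anyone hitting the same (0028, 0106) error, a minimal sketch of that rescaling with numpy follows. The full-range stretch used here is an illustrative choice, not karawun's own behaviour; in practice you would load and save the nifti with something like nibabel and may prefer simple clipping:

```python
import numpy as np

def to_unsigned_short(data):
    """Shift and scale an image array into the uint16 range [0, 65535]
    so the resulting dicoms satisfy the unsigned-short pixel constraint
    of MR modality dicom. This is only a sketch: the full-range stretch
    is one possible policy, and it discards the original intensity scale."""
    data = np.asarray(data, dtype=np.float64)
    data = data - data.min()          # remove negative values
    rng = data.max()
    if rng > 0:
        data = data / rng * 65535.0   # stretch to the full ushort range
    return np.round(data).astype(np.uint16)

img = np.array([[-51.0, 0.0],
                [100.0, 200.0]])      # -51 is the value from the traceback
out = to_unsigned_short(img)
# out.min() == 0 and out.max() == 65535; dtype is uint16
```

A plain shift (data - data.min()) without the stretch also satisfies the constraint, provided the shifted maximum still fits in 16 bits.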
Hello,
I saw in this thread that there is no way to control the StudyUID or FrameOfReferenceUID from the command line interface. Could that feature be added to the next release? Or, have an option/flag to use the original DICOM's UIDs instead of automatically generating new ones?
Sorry to take so long to reply to this - it fell off my to do list.
I think the best answer to this sort of question is to suggest that if you need this level of control then using the karawun package as a programming library is probably the best way forward. I experimented with adding new dicom files to an existing FrameOfReferenceUID, and it is possible but fiddly, so I didn't make the option available via the command line. It is generally simpler to regenerate the entire collection, and doesn't take particularly long.
Reusing existing StudyUIDs sounds potentially dangerous and doesn't add anything to the workflow we've been using - perhaps you can suggest a use case? My thinking has been that the files involved in the workup should be considered as a new study to keep things simple. Our workflows often involve images acquired in multiple sessions (e.g. different MRI sessions, PET sessions etc) and making the derived dicoms match one of these studyUIDs doesn't offer a benefit that I can see.
Any chance you can update the requirements to a newer pydicom?
error: pydicom 2.3.1 is installed but pydicom==1.4.2 is required by {'karawun'}