arcadelab / deepdrr

Code for "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures". https://arxiv.org/abs/1803.08606
GNU General Public License v3.0

The generated DRR is too bright #72

Closed LexTran closed 2 years ago

LexTran commented 2 years ago

Hi! I'm using deepdrr to generate a DRR from .nii.gz files, and I found that the result is too bright, as the image below shows: (image: example_projector_26) Do you have any idea how I can fix this problem?

mathiasunberath commented 2 years ago

When inspecting your image, you will note that you have an area (top left corner) where X-rays pass mostly through air, and other areas, like the center, where X-rays pass through highly attenuating tissue. This creates a very high dynamic range (very low and very high attenuation), but you can only ever display 256 gray values (8-bit). Please review this: https://en.wikipedia.org/wiki/Dynamic_range

LexTran commented 2 years ago

This creates a very high dynamic range (very low and very high attenuation), but you can only ever display 256 gray values (8-bit).

Thanks for your reply! But I'm still confused: how can I fix this? I noticed that in your work the results are much better; which part of the code should I look at?

mathiasunberath commented 2 years ago

The information you are looking for is in your image; it depends on how you visualize it, i.e., how you map the float32 range into 256 gray tones. Take a look at this or related material: https://www.youtube.com/watch?v=A5aqQwipju8&ab_channel=simply_rad. It is about CT, but the problem of dynamic range is the same.

LexTran commented 2 years ago

The information you are looking for is in your image; it depends on how you visualize it, i.e., how you map the float32 range into 256 gray tones. Take a look at this or related material: https://www.youtube.com/watch?v=A5aqQwipju8&ab_channel=simply_rad. It is about CT, but the problem of dynamic range is the same.

I‘ll give it a try, thank you so much!

ECNUACRush commented 1 year ago

Hi @mathiasunberath @LexTran @benjamindkilleen, I also met this problem on my pelvic dataset (which is identical to your Colab demo; pelvic1k is my lab's work :) ). But when I changed the z parameter in the example_projector.py file, I found it also helps (the output becomes clearer and a little darker). So I want to know whether I should look into this range question (window width and level; I have already read some material about that) or try other parameters? There are so many parameters, such as spectrum, photon_count, scatter_num, and others. So far I have only tried different x, y, z values and compared the output to real X-rays. I can make it work manually, but it's impossible to find suitable x, y, z and other parameters for every CT file in my dataset (over 1000 slices). I'm new to this area and confused by all these files. I have already read your MICCAI paper about DeepDRR, but I don't really know how to fix my problem based on it. Sincerely looking forward to your reply, thanks!

ECNUACRush commented 1 year ago

I selected x, y, z as (0, 0, -175) to generate two DRR images (first I used nnunet to keep only the bone area), as follows: '0001' (image)

And the second picture is obviously too bright. I opened it in ITK-SNAP to inspect some of its attributes. (image)

mathiasunberath commented 1 year ago

Playing around with x, y, z should only change the position (not even the orientation) of your anatomy (i.e., the CT scan) relative to the X-ray camera source position. This has nothing to do with contrast in the resulting image, other than through the visualization pipeline. I explained this above and will briefly explain it again: because a human can only see 256 shades of gray (8-bit), images of higher dynamic range (such as the ones generated with DeepDRR, 32-bit) need to be scaled somehow to fit the 8-bit range for display. You can change this mapping from a very broad 32-bit dynamic range to a much narrower 8-bit one using window/level (you can google more information on this). Your first step should thus be to inspect your images with a tool that allows you to adjust window and level. We use ImageJ/Fiji as a lightweight tool for this and other quick image manipulation tasks.

Should the above confirm that the image indeed does not have the content you want, then your next step would be to take a look at the 3D material segmentations to see whether something is going wrong there.
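The window/level mapping described in the comment above can be sketched in a few lines of numpy. This is not deepdrr or ImageJ code; the function name and toy values are illustrative, but they show why a tight window recovers contrast that naive min-max scaling destroys:

```python
import numpy as np

def window_level(img, window, level):
    """Map a float image to 8-bit using a window (width) / level (center) pair.

    Values below level - window/2 become 0, values above level + window/2
    become 255, and the range in between is scaled linearly.
    """
    lo = level - window / 2.0
    hi = level + window / 2.0
    out = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

# A toy float32 "DRR" with a huge dynamic range: mostly tissue-like values,
# plus a near-zero air pixel and a dense bone pixel that stretch the range.
drr = np.full((4, 4), 5.0, dtype=np.float32)
drr[0, 0] = 0.001   # air corner
drr[3, 3] = 9.0     # dense bone

naive = ((drr - drr.min()) / (drr.max() - drr.min()) * 255).astype(np.uint8)
tight = window_level(drr, window=4.0, level=5.0)  # window centered on tissue
```

With the tight window, the air pixel saturates to 0 and the bone pixel to 255, while the tissue values spread across the middle of the 8-bit range instead of being compressed by the outliers.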

ECNUACRush commented 1 year ago

Thank you! I will have a try now.

ECNUACRush commented 1 year ago

I have used ImageJ to inspect my images. My generated DRR images (.png) are already in the 0-255 range: (image) So I understood you to mean adjusting the CTs to 0-255, which I did with ITK-SNAP. After fixing many bugs, I get the following result: (image) But what I want is an X-ray-style result like this: https://user-images.githubusercontent.com/49430264/200182242-ffadd4be-3475-4952-bf32-7738eaa8f8ad.png So I am a little confused.

ECNUACRush commented 1 year ago

@mathiasunberath Maybe my earlier explanation was not very clear. This is your Colab demo: I only changed the z range from (-500, 50) to (-200, 0) and used my own data (the same as you used on Colab, "dataset6_CLINIC_0001_data.nii.gz". I got it from my lab; they have already produced good segmentations for this dataset, so I directly saved the pelvic bone area, without other regions such as soft tissue, as my data.) (image) When z is at -200, the result is too bright and doesn't look like an X-ray. (image) When projected at -50, the result looks more plausible. That is the kind of DRR I want to get.

benjamindkilleen commented 1 year ago

@ECNUACRush The reason is that in your second image the field of view falls slightly outside the volume, so you have some pixels with no attenuation. By default, the Projector maps to the min and max of your image to give you something in [0, 1]. As a quick fix, you can set intensity_upper_bound to clip the image before the negative log transform. A reasonable value is on the order of 1 to 20.
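The clip-then-negative-log step described above can be illustrated with plain numpy. This is a hedged sketch, not deepdrr's actual implementation; the function name, the eps constant, and the toy intensities are made up, but the `upper_bound` argument plays the role of the Projector's `intensity_upper_bound` as described in this thread:

```python
import numpy as np

def neg_log(intensity, upper_bound=None, eps=1e-6):
    """Sketch of clipping intensities before the negative log transform.

    Clipping very bright (no-attenuation / air) pixels before the log keeps
    them from stretching the min-max normalization that follows.
    """
    i = np.asarray(intensity, dtype=np.float32)
    if upper_bound is not None:
        i = np.minimum(i, upper_bound)
    out = -np.log(np.maximum(i, eps) / i.max())
    # the thread says the Projector min-max scales into [0, 1] by default
    return (out - out.min()) / (out.max() - out.min() + eps)

# one air pixel (100), one tissue pixel (1), one bone pixel (0.5):
clipped = neg_log([100.0, 1.0, 0.5], upper_bound=2.0)  # air no longer dominates
unclipped = neg_log([100.0, 1.0, 0.5])                 # air pixel eats the range
```

Without clipping, the air pixel pins the normalization and the tissue and bone values end up squeezed together near the top of [0, 1]; with clipping, tissue and bone spread over the full range.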

ECNUACRush commented 1 year ago

@benjamindkilleen I appreciate your reply, thanks! But I have tried setting intensity_upper_bound to 1, 2, 3, ..., 10, 13, 15, 18, 20; the best result still comes from projecting z at around (-100, 0), and I don't see much change compared to my earlier version (without changing intensity_upper_bound). I also noticed the note in the code that says "A good value is 40 keV / photon. Defaults to None.". So I changed photon_count from 100k to 10k and intensity_upper_bound to 4 (I also tried 1 and 10), but I still get a similar result. The problem is that for this dataset (103 CTs) I set x = 0, y = 0, z = -170 by observation, with other parameters such as photon_count and intensity_upper_bound at their defaults. I get a good result on the majority of the dataset, except for some CTs such as picture 2 above, which bother me a lot. It means I must change z manually to generate satisfactory results for that small part of the data.

ECNUACRush commented 1 year ago

My lab has done good work (such as DecGAN, published in TMI, and RSGAN in MIA and MICCAI) building on your excellent DeepDRR. Thanks for your work and for kindly replying to me! By the way, I asked them about these parameter questions; the author of DecGAN told me to use a spectrum from 90 kV to 120 kV, but he didn't know why. So what I want to ask is: if I want to generate DRRs that look more like real X-rays, can you give me some tips for tuning those parameters? I just couldn't learn this from the original paper. I will also read previous issues carefully to find answers if possible.

benjamindkilleen commented 1 year ago

@ECNUACRush I'm glad you've found it useful!

I am currently projecting through pelvic1k data and not experiencing this issue. How are you instantiating your Volume? It looks like you might be projecting through the segmentation. Try loading the CT and segmentation separately with something like Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array))

Our documentation needs to be updated on this point.
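The suggestion above is to load the CT densities and pass the segmentation as a boolean mask in the materials dict. A hedged numpy sketch of preparing such a mask (the label value 1 = bone is an assumption; in practice the label map would come from your segmentation tool, loaded e.g. with nibabel; the final deepdrr call is shown as a comment for context only):

```python
import numpy as np

# Toy label-map segmentation standing in for an nnunet output volume.
labels = np.zeros((4, 4, 4), dtype=np.int16)
labels[1:3, 1:3, 1:3] = 1        # pretend this cube is bone (label 1 assumed)

# The materials dict expects a boolean mask with the same shape as the CT.
seg_array = labels == 1

# Then, as suggested in this thread (not executed here):
# volume = Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array))
```

The key point is that the CT volume itself still supplies the densities; the mask only tells the projector which voxels to treat as bone.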

mathiasunberath commented 1 year ago

Well, the thing I don't understand is why changing "z", which should be a 3D translation, changes the appearance of your images so dramatically. The first thing to find out is this: is the image BEFORE the log transform OK? If yes, then it's an issue related to possibly very low/high values in your intensity image that appear/disappear when you re-center your image, and choosing a different I_0 for normalization will solve the issue (we have been working on a "detector class" that takes care of this, but it's not there yet). If no, then it's possibly related to the 3D segmentation/density processing, and inspecting your segmentations and HU values first would be a good idea. There are clipping planes in 3D, so if you have an error outside of them, you may only see it in some cases.
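The normalization point above can be made concrete: a handful of extreme pixels can pin the min/max and wash out everything else, which is why a hand-chosen I_0 (or a robust percentile) helps. A toy numpy illustration, not deepdrr code:

```python
import numpy as np

# 1000 tissue-like pixels plus one stray very bright (air) pixel.
img = np.full(1000, 2.0, dtype=np.float32)
img[0] = 2000.0

# Naive min-max: the one outlier pins the max, so every tissue pixel
# lands at the very bottom of the output range (appears black).
minmax = (img - img.min()) / (img.max() - img.min())

# A robust upper reference (here the 99.5th percentile, acting like a
# hand-picked I_0) ignores the outlier, so tissue uses the full range.
robust_hi = np.percentile(img, 99.5)
robust = np.clip(img / robust_hi, 0.0, 1.0)
```

Whether the fix belongs before or after the log transform depends on where the extreme values appear, which is exactly the "inspect the image BEFORE the log transform" check suggested above.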

ECNUACRush commented 1 year ago

@mathiasunberath @benjamindkilleen Thanks to both of you! I missed the materials argument of Volume.from_nifti; I think that could be the reason. Just as @mathiasunberath says, it also bothers me a lot. When I test with the same picture (with bone and soft tissue), changing "z" does not change the result dramatically. I will validate this guess soon after my group meeting and report back when I finish! Thanks for sharing with me, again! Hope you have a nice day!

ECNUACRush commented 1 year ago

BTW, I guess the issue owner ran into the same problem, because his picture looks like mine (bone only). @LexTran

ECNUACRush commented 1 year ago

Thanks to you all! I tried two methods. First, mapping 0-255 => 125-255, which effectively reduces brightness and looks more like an X-ray; the result is as follows: (image) Previously I misunderstood Prof. @mathiasunberath; I found the DRR result was already 8-bit (0-255), so I didn't restrict it further. Now it works very well, but the lower bound (which I set to 125) is empirical and may not suit all data.

Then I tried to fix it by adding Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array)) as @benjamindkilleen said. It also worked! Comparison result: (image) But it also has a problem: the DRR seems more blurred. Maybe I should try adjusting other parameters?

mathiasunberath commented 1 year ago

So, I don't think your problem is solved, because if this is indeed the same CT, there is a mirroring from the first to the second case (see the location of the fractures). This may indicate that the reason one does not work is that you're somehow not reading it in correctly.

In your first case, you also see ring artifacts in your higher-contrast left image. This suggests something is incorrect with the volume or segmentation. My guess is that your CT has valid values only in a circular ROI, because only that region has valid data completeness; outside it, arbitrary HU values may have been written, and depending on where your source is, they can contribute strongly to your attenuation. Please verify this by aggressively window-leveling your CT, looking for non-vacuum/air values in the peripheral regions.

ECNUACRush commented 1 year ago

@mathiasunberath I'm sorry, I didn't express myself clearly. The two cases are not the same CT; the two rows represent two separate CTs, so there is no mirroring. I think you are right about the ring artifacts, and I will check my CTs carefully. Although pelvic1k (https://github.com/MIRACLE-Center/CTPelvic1K) was collected and tested successfully, it certainly still has some problems when projected to DRR images, such as grating artifacts (we think they may come from the patient couch or detector and are considering removing them now), blurred images, and ring artifacts. I found it is not easy to project all the data (1184 CTs across 7 sub-datasets) at once, due to the inter- and intra-dataset gaps. I will continue working on good DRRs for these CTs as a necessary part of my later work. Thank you for your patient replies all along! I will report back if there is any progress.

ECNUACRush commented 1 year ago

@ECNUACRush I'm glad you've found it useful!

I am currently projecting through pelvic1k data and not experiencing this issue. How are you instantiating your Volume? It looks like you might be projecting through the segmentation. Try loading the CT and segmentation separately with something like Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array))

Our documentation needs to be updated on this point.

Sorry to bother you again. Last month I generated many images with deepdrr and trained my decomposition network well, which gave me some good results. Now I have one more question: can I project only the bone area of a CT volume? I used Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array)) as @benjamindkilleen said and got the following results: (image) (image)

Note: the first picture is the whole CT volume, while the second is only the bone area. But as you may see, this bone looks "unreal"; when I checked the values in ImageJ, I found that the range of values in the bone region is small. So I'm a little confused about what to do to generate the usual bone appearance (it should look something like the following):

(image)

I got this picture in two steps: 1. use nnunet to segment the bone out of the original CT; 2. project the bone area obtained in step 1.

Looking forward to your reply, Thanks.

ECNUACRush commented 1 year ago

A supplement: you can ignore the other bones in my pictures (leg and spine), because I only focus on the pelvis. The third image was generated by me earlier; at that time I did not consider the spine, but we have now done a good job of segmenting the pelvis and spine in the 3D CT volume.

mathiasunberath commented 1 year ago

My understanding is that you currently only project the segmentation mask, but NOT the CT density values. Consequently, you get something rather "simple-looking", lacking detail. Make sure that you project through the CT densities. One not very elegant way of doing this would be to set the absorption of all other materials to 0.
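The preprocessing suggested above can be sketched in numpy: keep the CT's real HU values inside the bone mask and replace everything else with air, so the projector still integrates real bone densities instead of a binary mask. This is a hedged sketch with toy data; -1000 HU (air) is used here, while a later reply in this thread suggests -2000, and both the mask and the HU offsets are illustrative:

```python
import numpy as np

# Toy CT: soft-tissue-like HU values, with a bone cube bumped up by +800 HU.
ct = np.random.default_rng(0).normal(40.0, 10.0, size=(4, 4, 4)).astype(np.float32)
bone_mask = np.zeros(ct.shape, dtype=bool)
bone_mask[1:3, 1:3, 1:3] = True
ct[bone_mask] += 800.0

# Keep real densities inside the mask, set everything else to air.
bone_only = np.where(bone_mask, ct, -1000.0).astype(np.float32)
```

Projecting through `bone_only` then preserves the density variation within the bone, which is what gives the DRR its detailed, "real" bone texture.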

ECNUACRush commented 1 year ago

@mathiasunberath Thanks for your reply! I think you are right: the second picture was generated from a mask. The third picture was generated with the method you mentioned above (setting the other materials to 0). But I found this method may cause problems in some cases: (image)

When checked in ImageJ, you can see it has a weird, uneven range of gray values.

mathiasunberath commented 1 year ago

Well, so there are multiple things here. 1) ImageJ shows that this is an RGB image, which means that all of your dynamic range has already been compressed to 8 bits per channel. We had previously talked about this extensively, suggesting that this is not desirable because it gives you a lot of problems, especially if you have a few pixels with super-low values (see the top row in your image) and lots of other pixels with very high values (see bone etc.). This contributes to your issue. 2) You may have selected a very soft spectrum (like 90 kV), so what you are seeing here is likely, to some extent, beam hardening (because bone has very high absorption at low X-ray energies). Together with 1), this should explain what you are seeing.

ECNUACRush commented 1 year ago

Well, so there are multiple things here.

1. ImageJ shows that this is an RGB image, which means that all of your dynamic range has already been compressed to 8 bits per channel. We had previously talked about this extensively, suggesting that this is not desirable because it gives you a lot of problems, especially if you have a few pixels with super-low values (see the top row in your image) and lots of other pixels with very high values (see bone etc.). This contributes to your issue.

2. You may have selected a very soft spectrum (like 90 kV), so what you are seeing here is likely, to some extent, beam hardening (because bone has very high absorption at low X-ray energies). Together with 1), this should explain what you are seeing.

Thanks for your reply, prof.

  1. The images generated with deepdrr are all 8-bit gray images, so the reason you see the image displayed as RGB is actually that I converted it with ImageJ; it doesn't look any different visually. Just to confirm this, I used deepdrr to project a few more sets of data, and they are all 8-bit.
  2. You are right, I used the default 90 kV spectrum, but as @benjamindkilleen said, I previously tried other options such as 120 kV and changed intensity_upper_bound too. I still got similar images.
  3. So my current solution is to ignore those pictures (around 20% of all data). This works, but I wish there were a better or smarter solution to this problem. As you said, maybe I can try to dismiss those super-low values (top row in my image) with a simple Python program before using deepdrr?

mathiasunberath commented 1 year ago

1) They are 8-bit because you save them as such; DeepDRR does NOT produce 8-bit projections. 2) This will make a difference, but you can decide that it's negligible for your purposes. 3) I think we have provided many pointers that may impact what you are doing here, including how to deal with the black area (again, 8-bit is not your friend), so until you try those solutions I don't think we have further pointers.

ECNUACRush commented 1 year ago

Inspired by you: for question 1, I had simply been feeding in the CT volume and had not carefully studied the channels of the generated images. I think I now understand which experiments to do, and I will continue trying. Thank you for your patient advice.

fanyigao commented 1 year ago

Thanks to you all! I tried two methods. First, mapping 0-255 => 125-255, which effectively reduces brightness and looks more like an X-ray; the result is as follows: (image) Previously I misunderstood the Prof.; I found the DRR result was already 8-bit (0-255), so I didn't restrict it further. Now it works very well, but the lower bound (which I set to 125) is empirical and may not suit all data.

Then I tried to fix it by adding Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array)) as mentioned. It also worked! Comparison result: (image) But it also has a problem: the DRR seems more blurred. Maybe I should try adjusting other parameters?

Hey, what do you mean by limiting the range to 125-255? I also get very bright images with the minimal deepdrr example. How did you get your later DRRs with clearer contrast?

ECNUACRush commented 1 year ago

Thanks to you all! I tried two methods. First, mapping 0-255 => 125-255, which effectively reduces brightness and looks more like an X-ray; the result is as follows: (image) Previously I misunderstood the Prof.; I found the DRR result was already 8-bit (0-255), so I didn't restrict it further. Now it works very well, but the lower bound (which I set to 125) is empirical and may not suit all data. Then I tried to fix it by adding Volume.from_nifti("path/to/ct.nii.gz", materials=dict(bone=seg_array)) as mentioned. It also worked! Comparison result: (image) But it also has a problem: the DRR seems more blurred. Maybe I should try adjusting other parameters?

Hey, what do you mean by limiting the range to 125-255? I also get very bright images with the minimal deepdrr example. How did you get your later DRRs with clearer contrast?

If you adjust the image's dynamic range in ImageJ, you will see the difference. If you want to project only the bone, set the non-bone region to -2000 instead of 0, and it works.