eigenvivek / DiffPose

[CVPR 2024] Intraoperative 2D/3D registration via differentiable X-ray rendering
http://vivekg.dev/DiffPose/
MIT License

Do you have tips to make my model work sufficiently? #42

Open Nick19111996 opened 1 month ago

Nick19111996 commented 1 month ago

Hi Vivek, thanks again for sharing your work. I am trying to use your workflow for lateral pelvic X-rays, but I am running into some issues. The model works about 50% of the time when I use a random lateral DiffDRR as input (see GIF 1). I trained the model with a batch size of 2 images (due to memory limitations) and used random offsets with a standard deviation of 40 for the translations and 0.1 for the rotations. When I use a real lateral X-ray as input, the model never finds the correct position of the X-ray source (see GIF 2). We don't know the actual position of the X-ray source, but in GIF 3 you can see that the synthetic X-ray looks different from the actual X-ray (the GIF switches between the synthetic and the real image).
Do you have any tips to make my model work sufficiently? Would adjusting the learning rate help, or should I preprocess the real X-ray image more carefully? Our lateral X-rays are of low quality because we use a mobile C-arm (Figure 1). For preprocessing we currently only use `img = exposure.equalize_adapthist(img / np.max(img))`. Do you think further preprocessing is required? If so, do you have any tips?

Your help is deeply appreciated. Thanks in advance.

Kind regards,

Nick

GIF 1: Applying the model to a randomly generated synthetic X-ray

GIF 2: Applying the model to a real X-ray (the reference X-ray source is not the actual position of the X-ray source, just a lateral X-ray)

GIF 3: Synthetic and real X-ray (click the GIF for better resolution)


Figure 1: Preprocessed lateral X-ray

eigenvivek commented 1 month ago

Hi @Nick19111996 cool images! The first thing that jumps out is the circular crop on your fluoroscopy image... I've never tested DiffDRR with an image like that, so perhaps the issue lies there. It looks like the region outside your image is white (1). Could you try setting it to black (0)? That way, those pixels won't contribute to the loss.
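A minimal NumPy sketch of what I mean, assuming a square single-channel image whose field of view is an inscribed circle (the `margin` parameter is just an illustrative knob):

```python
import numpy as np

def mask_circular_fov(img: np.ndarray, margin: int = 0) -> np.ndarray:
    """Set pixels outside the inscribed circular field of view to 0 (black),
    so the cropped border doesn't contribute to the image loss."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2          # image center
    radius = min(h, w) / 2 - margin            # inscribed circle radius
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.where(inside, img, 0.0)
```

In practice the true center and radius of the fluoroscopy circle may need to be estimated from the image rather than assumed.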

eigenvivek commented 1 month ago

Hi @Nick19111996 closing for now, feel free to reopen if you're still facing any issues!

Nick19111996 commented 2 weeks ago

Hi Vivek,

Thank you for your suggestions.

I tried making the white pixels black, but unfortunately this did not resolve the issue. I then created some intraoperative images without the circular crop, but that didn't work either. I think the intraoperative X-ray still looks quite different from the DiffDRR image, so in addition to histogram equalization I also tried histogram matching (see Figure 1).

Figure 1: Histogram matching
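For reference, the histogram matching step can be sketched in plain NumPy (equivalent in spirit to `skimage.exposure.match_histograms` for a single-channel image): each source intensity is mapped to the reference intensity with the same cumulative frequency.

```python
import numpy as np

def match_histograms_1ch(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the intensity distribution of `source` to that of `reference`."""
    # Unique source intensities, their positions, and their counts
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Map each source quantile to the reference intensity at the same quantile
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```

Here the DiffDRR rendering would be the `reference` and the real X-ray the `source` (or vice versa, depending on which domain the model was trained on).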

Unfortunately, this approach also didn't resolve the issue. I then noticed that the model is very sensitive to the black pixels at the borders of the image. To confirm this, I manually adjusted a DiffDRR image by adding a black upper-left corner and used it as input for the model. Figure 2 shows the input and output images: the model tries to align with the black corner.

Figure 2: Result of adding a black shape to the input image

Because of this, I thought it might help to make the model focus on the center of the image rather than the borders. To test this, I applied data augmentation with black triangles in the corners, black edges, and circular/oval shapes (to simulate random gas in the intestines). See Figure 3 for some examples of the augmentation.

Figure 3: Data augmentation with black triangles, black borders, and oval/circular shapes
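Roughly, the augmentation looks like this. This is a simplified sketch: the shape sizes and placement ranges are assumptions, and the real augmentation also varies which corners are blacked out and adds black edges.

```python
import numpy as np

def augment_black_shapes(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Black out a corner triangle and a random ellipse in a grayscale image
    in [0, 1], mimicking collimator edges and intestinal gas."""
    out = img.copy()
    h, w = out.shape
    yy, xx = np.ogrid[:h, :w]
    # Black triangle in the upper-left corner with a random leg length
    leg = rng.integers(h // 8, h // 4)
    out[(yy + xx) < leg] = 0.0
    # Random black ellipse somewhere in the image
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    ry, rx = rng.integers(h // 16, h // 8), rng.integers(w // 16, w // 8)
    out[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0] = 0.0
    return out
```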

This approach worked on the manually adjusted DiffDRR data (Figure 4), but the issue persists with real C-arm images taken during surgery. Using images without histogram matching resulted in a slightly better loss, but overall the model still clearly fails to find the correct pose (see Figure 5).

Do you have any other suggestions that could help resolve this?

Your help is greatly appreciated.

Kind regards,

Nick

Figure 4: Result of adding a black shape to the input image after training with data augmentation

Figure 5: Result of applying the model (trained with data augmentation) to a real X-ray image