abrilcf / mednerf

MIT License
139 stars 30 forks

How to render img #5

Closed gpqls closed 1 year ago

gpqls commented 1 year ago

In paper, you said "During training we use the whole set of 72 DRRs (a fifth of all views within a full 360-degree vertical rotation) per patient and let the model render the rest."

  1. I'm curious about the rendering of the rest here.

    ```python
    if ((it+1) % config['training']['video_every']) == 0:
        N_samples = 4
        zvid = zdist.sample((N_samples,))
        basename = os.path.join(outdir, '{}{:06d}_'.format(
            os.path.basename(config['expname']), it))
        evaluator.make_video(basename, zvid, render_poses, as_gif=False)
    ```

Is this the right code? I think it's not rendering the rest; it's just generating videos from latent codes sampled from zdist by the generator. If there is code that renders the rest, please point me to it.

  2. We want to test the model on data that was not used for training. I only used lung images 2 to 20 from the dataset you provided, and I want to test using lung image 1. Maybe I can use render_xray_G.py, but I don't understand this code.

The reason I don't understand the code well is probably zdist and zvid. I think these two are just variables that are constantly updated by simple generators unrelated to the actual images.

abrilcf commented 1 year ago

Hi,

  1. The rendering is done by calling the forward method of the generator; please refer to this line. For that, you pass z, which consists of both the shape and appearance codes sampled from a Gaussian distribution (here's the definition). You also need the rays (refer to this). Instead of 72, you can increase the number of poses.
  2. I'm not sure I understand what you're trying to do. The video part is performed at the end, after getting the rgbs from the generator. You can also refer to the GRAF paper (if you haven't done so); we used GRAF as our base code.
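The sampling step described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the latent dimensions, the `zdist` object, and the comment about `forward(z, rays)` are assumptions based on GRAF-style generators.

```python
import torch

# Hypothetical latent sizes; in GRAF-style generators z concatenates a
# shape code and an appearance code, each drawn from a Gaussian.
shape_dim, app_dim = 128, 128
z_dim = shape_dim + app_dim

# Gaussian prior over the full latent code (stand-in for the repo's zdist).
zdist = torch.distributions.Normal(torch.zeros(z_dim), torch.ones(z_dim))

# Sample one code; the real generator's forward(z, rays) would then render
# a DRR for whatever camera pose the rays were cast from.
z = zdist.sample((1,))
print(z.shape)  # torch.Size([1, 256])
```

Rendering more poses then just means casting rays from more camera positions and reusing the same z.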
gpqls commented 1 year ago

But isn't this https://github.com/abrilcf/mednerf/blob/a980b43d0f882e06c40c82f20be76bd57e57e66c/graf-main/render_xray_G.py#L187 just rendering with zdist? That's just the part where you put z into the generator; what I want to know is where the rest of the rendering happens. Where can I see the results of rendering the remaining 4/5 of the views? [image] And I want to get this CT projection result.

And in https://github.com/abrilcf/mednerf/blob/a980b43d0f882e06c40c82f20be76bd57e57e66c/graf-main/render_xray_G.py#L143 there is no part that takes a real x-ray image and renders it; only the loss is calculated. Is that what I should understand?

abrilcf commented 1 year ago

Hi, yes, the rendering is done with z and the generator only. The idea is to deform a random volumetric field by further optimizing the generator, so that the deformed field better resembles the given x-ray image as measured by PSNR. For knee there is only one test example, but four for chest.
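A toy version of that fitting idea can be sketched as below. Everything here is a stand-in: a tiny linear "generator" and a random target image instead of a real x-ray DRR; none of these names come from render_xray_G.py. The point is only the loop structure: jointly optimize the latent code and the generator so the rendered output approaches the target, which drives MSE down and PSNR up.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "generator": maps a 32-d latent code to a flattened 16x16 image.
gen = torch.nn.Sequential(torch.nn.Linear(32, 256), torch.nn.Sigmoid())
target = torch.rand(1, 256)                  # stand-in for the target x-ray
z = torch.randn(1, 32, requires_grad=True)   # random latent to be deformed

opt = torch.optim.Adam([z] + list(gen.parameters()), lr=1e-2)
init_mse = F.mse_loss(gen(z), target).item()

# Optimize latent code and generator weights together so the rendered
# image matches the target; PSNR = -10*log10(MSE) rises as MSE falls.
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(gen(z), target)
    loss.backward()
    opt.step()

final_mse = F.mse_loss(gen(z), target).item()
```

After the loop, `final_mse` is well below `init_mse`, i.e. the "rendered" image has moved toward the given target.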

BianFeiHu commented 1 year ago

> Hi, yes, the rendering is done with z and the generator only. The idea is to deform a random volumetric field by further optimizing the generator, so that the deformed field better resembles the given x-ray image as measured by PSNR. For knee there is only one test example, but four for chest.

Hi, where can I get the test example?

abrilcf commented 1 year ago

Hi, it is the first instance from the knee dataset.

BianFeiHu commented 1 year ago

> Hi, it is the first instance from the knee dataset.

Thanks for your reply. Do you mean the 72 DRR images 01_xray00**.png in knee_xrays? But are they used in the training process? I don't know whether we need a train/test split when using a GAN.

abrilcf commented 1 year ago

Yes, as mentioned in a different issue, we left out the first instance from the knee dataset and the first four instances from the chest dataset for testing. But we don't use all views, only the first view (01_xray0000.png).

BianFeiHu commented 1 year ago

> Yes, as mentioned in a different issue, we left out the first instance from the knee dataset and the first four instances from the chest dataset for testing. But we don't use all views, only the first view (01_xray0000.png).

I see, thanks a lot.

gpqls commented 1 year ago

Is the validation you are talking about reported as a score rather than as an image? As I said above, I want to get this image result. [image]

And where is the rest of the rendering? The z you mentioned is just a distribution; I mean the "render the rest" from the paper. The paper clearly states that a fifth of the views are used for training and that the rest are rendered: "During training we use the whole set of 72 DRRs (a fifth of all views within a full 360-degree vertical rotation) per patient and let the model render the rest."