Closed: djx99 closed this issue 1 year ago
Hi, we trained pixelNeRF from scratch with our data. For that, we modified the dataloader, especially the camera part. You need the camera matrices to train pixelNeRF. Since we generated the DRRs using Plastimatch, we referred to their projection geometry and projection matrix documentation to generate the camera matrices.
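For anyone following along, here is a minimal sketch of what such camera matrices can look like for a circular DRR scan. All geometry values (`sad`, `sid`, `pixel_spacing`) and the rotation-about-z convention are illustrative assumptions, not taken from this repository or from Plastimatch's actual output:

```python
import numpy as np

# Illustrative DRR-style geometry (not values from this thread)
sad = 1000.0          # source-to-isocenter distance (mm)
sid = 1500.0          # source-to-detector distance (mm)
pixel_spacing = 1.0   # detector pixel size (mm/px)

# Pinhole focal length in pixels for the DRR "camera"
focal = sid / pixel_spacing

# Intrinsics with the principal point at the image-center origin
# (cx = cy = 0, as discussed later in this thread)
K = np.array([[focal, 0.0,   0.0],
              [0.0,   focal, 0.0],
              [0.0,   0.0,   1.0]])

def pose_for_angle(theta_deg, sad):
    """Camera-to-world pose for an X-ray source rotating about the z
    axis and looking at the isocenter (a common circular-scan setup)."""
    t = np.deg2rad(theta_deg)
    cam_pos = np.array([sad * np.cos(t), sad * np.sin(t), 0.0])
    # Orthonormal frame: the viewing direction points at the isocenter
    forward = -cam_pos / np.linalg.norm(cam_pos)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1] = right, up
    pose[:3, 2], pose[:3, 3] = forward, cam_pos
    return pose
```

The dataloader would then hand `K` and one `pose_for_angle(...)` matrix per projection to pixelNeRF in place of the poses it normally reads for natural images.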
Thank you for your reply. Do you still have the specific X-ray data, camera parameters, and loading code? I haven't done this before, so I'd appreciate it if you could share them.
Thank you~ I see, and I'll try it. I've been using NeRF directly for CT reconstruction recently, but the results are not ideal. The first image is the reconstruction after 200k iterations, and the second is the ground truth. Your article argues that using NeRF or pixelNeRF directly is not effective. Could you use NeRF as a comparison algorithm? Can it produce at least a rough result, or does NeRF fail to give clean reconstructions at all?
The objective of our work was to apply neural radiance fields to medical images, but there are differences compared to "natural" images: color, structure, camera system, data availability, etc. So our contribution is an attempt to bridge that gap by using the most suitable approach (at the time) for our task. One of my collaborators got very good results with pixelNeRF, but using the whole CT volume; you can take a look at the script. In a way, I think you could call that a "general result"? However, we couldn't get it to work with few projections.
I'm very sorry for interrupting so many times. Do you have the email address of this collaborator? I would like to ask about the pixelNeRF CT results. He only uploaded the configuration file, without comments or run commands, so I don't know how to use it. Alternatively, could you tell me how to run pixelNeRF on CT data? Thank you very much.
Also, how do you get the values of focal, near, and far? How are they computed? Do these values affect the results?
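The thread doesn't spell these out, but one common convention for DRR-style data is sketched below. All distances are assumed example values, not numbers from this repository:

```python
# Hedged sketch of one convention for focal / near / far on DRR data;
# the distances below are illustrative assumptions.
sid = 1500.0          # source-to-detector distance (mm)
sad = 1000.0          # source-to-isocenter distance (mm)
pixel_spacing = 1.0   # detector pixel size (mm/px)
radius = 150.0        # bounding-sphere radius of the scanned anatomy (mm)

# Pinhole focal length in pixels: the detector sits `sid` mm from the source
focal = sid / pixel_spacing

# Sample rays only where they can hit the object: the source is `sad` mm
# from the isocenter, so the object lies within sad ± radius along a ray
near = sad - radius   # 850.0 mm
far = sad + radius    # 1150.0 mm
```

And yes, these values do matter: overly loose near/far bounds spend ray samples in empty space, which typically degrades reconstruction quality, so tightening them around the object usually helps.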
> Hi, we trained pixelNeRF from scratch with our data. For that, we modified the dataloader, especially the camera part. You need the camera matrices to train pixelNeRF. Since we generated the DRRs using Plastimatch, we referred to their projection geometry and projection matrix documentation to generate the camera matrices.

From this, it seems that the parameters cx and cy are 0 for the knee and chest images. Is that really the case, or am I missing something?
That is correct, both are zero. Basically, the intrinsic parameters are fixed across all chest instances, and likewise across all knee instances.
I'm curious how you ran your pixelNeRF comparison experiment. Did you directly replace the natural or rendered images in NeRF with the X-ray images, or is there another way?