hjahan58 opened 1 year ago
The code supports both CPU and GPU execution. In fact, it is recommended to run the code on a GPU (preferably with >12 GB of CUDA memory for the default scanner template) for faster execution. If you are running into a CUDA out-of-memory error with a Quadro GPU, you can reduce the dimensions of the reconstructed image in your template_*.py file in the lib/forward_model directory by changing recon_params['image_dims'] = (512, 512, 350) to smaller values. Alternatively, you can reduce the resolution of the sinogram instead by lowering recon_params['n_views'] or g['det_col_count'] and g['det_row_count'].
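A minimal sketch of the kind of edits described above, for a template_*.py file in lib/forward_model. The parameter keys are the ones named in this thread; the reduced values are illustrative assumptions, not tested recommendations:

```python
# Illustrative memory-reduction edits for a template_*.py file.
# The smaller values below are examples only.

recon_params = {}

# Default reconstructed-volume size (x, y, z voxels) from the template:
recon_params['image_dims'] = (512, 512, 350)

# Option 1: shrink the reconstructed volume. Roughly halving each
# dimension cuts per-volume memory by about 8x:
recon_params['image_dims'] = (256, 256, 176)

# Option 2: reduce sinogram resolution instead. 'n_views' is the number
# of projection angles; g holds the detector geometry parameters.
recon_params['n_views'] = 360      # hypothetical reduced value
g = {}
g['det_col_count'] = 512           # fewer detector columns
g['det_row_count'] = 256           # fewer detector rows
```

Shrinking the volume reduces memory on the reconstruction side, while reducing n_views or the detector counts shrinks the sinogram and the projection operators instead; which option hurts image quality less depends on the scanner template in use.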
Hi dear Dr Ankitvm, I changed the dimensions you suggested, but I'm still getting the out-of-memory error. If I want to run the code on the CPU, do I need to create a new environment other than the one you introduced in the GitHub repository? Do I have to make changes to the algorithm code to run it on the CPU?
The code can be run on the CPU with the same environment; however, this is not recommended when you are using a 3D scanner geometry. The underlying library that creates the scanner objects is ASTRA, which on the CPU will either run into a memory error or take a very long time to execute. The code should work fine with 2D scanner configurations, however.
My graphics card is a Quadro RTX 4000, which does not have more than 8 GB of RAM, and when running the scripts I encounter a PyTorch CUDA out-of-memory error.
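As a rough sanity check on why 8 GB can be tight, one can estimate the size of a single float32 copy of the reconstruction volume (a back-of-envelope sketch only; actual usage is several times higher, since ASTRA and PyTorch also allocate sinograms, intermediate buffers, and gradients):

```python
import math

def volume_mb(dims, bytes_per_voxel=4):
    """Memory for one float32 volume of the given dimensions, in MB."""
    return math.prod(dims) * bytes_per_voxel / 1024**2

# Default template volume vs. a hypothetical halved one:
full = volume_mb((512, 512, 350))   # -> 350.0 MB per float32 copy
half = volume_mb((256, 256, 176))   # -> 44.0 MB per float32 copy
print(f"full: {full:.0f} MB, half: {half:.0f} MB")
```

Even at ~350 MB per copy, a handful of working copies plus projection data and autograd state can exhaust an 8 GB card, which is why shrinking the volume or the sinogram is the practical fix here.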