LLNL / LEAP

A comprehensive library of 3D transmission Computed Tomography (CT) algorithms with a Python API, fully integrated with PyTorch
https://leapct.readthedocs.io
MIT License

encountered an error while running the demo #9

Closed. JoosenLi closed this issue 5 months ago.

JoosenLi commented 6 months ago

Merry Christmas! After installation, I encountered an error while running /LEAP/demo_leaptorch/test_fproject_and_FBP.py. How should I handle it?

```
Traceback (most recent call last):
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/pydevconsole.py", line 364, in runcode
    coro = func()
  File "", line 1, in <module>
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/data4/liqiaoxin/code/LEAP/demo_leaptorch/test_fproject_and_FBP.py", line 25, in <module>
    from leaptorch import Projector
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leaptorch.py", line 13, in <module>
    lct = tomographicModels()
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leapctype.py", line 78, in __init__
    self.libprojectors = cdll.LoadLibrary(os.path.join(current_dir, "../build/lib/libleap.so"))
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/ctypes/__init__.py", line 460, in LoadLibrary
    return self._dlltype(name)
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/../build/lib/libleap.so: cannot open shared object file: No such file or directory
```

kylechampley commented 6 months ago

Really sorry about all the issues you've been experiencing. The code has changed a lot over the past few months, and our changes kept breaking other things. We think these issues are now resolved.

Please pull the newest version of the code and try again. Note that the syntax of the demos has changed and we deleted all of the example data; now you just run a script and it will simulate the input data for you.

JoosenLi commented 5 months ago

First of all, dear developer, please accept my sincere gratitude. After downloading the latest version of LEAP and installing it with pip install -v ., I encountered the following error. I don't understand how to solve it, especially the line "CMake Error in src/CMakeLists.txt: Unknown CUDA architecture specifier 'major'." Can you tell me how to fix it?

```
Using pip 23.3.2 from /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/pip (python 3.9)
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Processing /data4/liqiaoxin/code/LEAP-main
  Running command pip subprocess to install build dependencies
  Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
  Collecting setuptools>=40.8.0
    Downloading http://mirrors.aliyun.com/pypi/packages/55/3a/5121b58b578a598b269537e09a316ad2a94fdd561a2c6eb75cd68578cc6b/setuptools-69.0.3-py3-none-any.whl (819 kB)
  Collecting wheel
    Using cached http://mirrors.aliyun.com/pypi/packages/c7/c3/55076fc728723ef927521abaa1955213d094933dc36d4a2008d5101e1af5/wheel-0.42.0-py3-none-any.whl (65 kB)
  Installing collected packages: wheel, setuptools
  Successfully installed setuptools-69.0.3 wheel-0.42.0
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  -- The C compiler identification is GNU 11.4.0
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- The CUDA compiler identification is NVIDIA 12.0.76
  -- Detecting CUDA compiler ABI info
  -- Detecting CUDA compiler ABI info - done
  -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
  -- Detecting CUDA compile features
  -- Detecting CUDA compile features - done
  -- Looking for pthread.h
  -- Looking for pthread.h - found
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Found CUDA: /usr/local/cuda (found suitable version "12.0", minimum required is "11.7")
  -- Found OpenMP_C: -fopenmp (found version "4.5")
  -- Found OpenMP_CXX: -fopenmp (found version "4.5")
  -- Found OpenMP: TRUE (found version "4.5")
  -- Configuring done
  CMake Error in src/CMakeLists.txt:
    Unknown CUDA architecture specifier "major".

  -- Generating done
  CMake Generate step failed.  Build files cannot be regenerated correctly.
  /tmp/pip-build-env-1mrw73a5/overlay/lib/python3.9/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options
  !!

        ********************************************************************************
        Usage of dash-separated 'description-file' will not be supported in future
        versions. Please use the underscore name 'description_file' instead.

        By 2024-Sep-26, you need to update your project and remove deprecated calls
        or your builds will no longer be supported.

        See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
        ********************************************************************************

  !!
    opt = self.warn_dash_deprecation(opt, section)
  running egg_info
  writing src/leapct.egg-info/PKG-INFO
  writing dependency_links to src/leapct.egg-info/dependency_links.txt
  writing requirements to src/leapct.egg-info/requires.txt
  writing top-level names to src/leapct.egg-info/top_level.txt
  reading manifest file 'src/leapct.egg-info/SOURCES.txt'
  writing manifest file 'src/leapct.egg-info/SOURCES.txt'
  Getting requirements to build wheel ... done
  Running command Preparing metadata (pyproject.toml)
  [identical CMake configure output, the same "Unknown CUDA architecture specifier" error, and the same setuptools warning as above]
  running dist_info
  creating /tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info
  writing /tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/PKG-INFO
  writing dependency_links to /tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/dependency_links.txt
  writing requirements to /tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/requires.txt
  writing top-level names to /tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/top_level.txt
  writing manifest file '/tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/SOURCES.txt'
  reading manifest file '/tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/SOURCES.txt'
  writing manifest file '/tmp/pip-modern-metadata-vobiu9_5/leapct.egg-info/SOURCES.txt'
  creating '/tmp/pip-modern-metadata-vobiu9_5/leapct-1.0.dist-info'
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy in /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages (from leapct==1.0) (1.26.3)
Requirement already satisfied: torch in /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages (from leapct==1.0) (2.1.2)
[further "Requirement already satisfied" lines for the torch dependencies: filelock, typing-extensions, sympy, networkx, jinja2, fsspec, the nvidia-*-cu12 12.1 packages, triton 2.1.0, nvidia-nvjitlink-cu12, MarkupSafe, mpmath]
Building wheels for collected packages: leapct
  Running command Building wheel for leapct (pyproject.toml)
  [identical CMake configure output, the same "Unknown CUDA architecture specifier" error, and the same setuptools warning as above]
  running bdist_wheel
  running build
  running build_py
  copying src/leaptorch.py -> build/lib
  copying src/leapctype.py -> build/lib
  installing to build/bdist.linux-x86_64/wheel
  running install
  running install_lib
  creating build/bdist.linux-x86_64
  creating build/bdist.linux-x86_64/wheel
  copying build/lib/leaptorch.py -> build/bdist.linux-x86_64/wheel
  copying build/lib/leapctype.py -> build/bdist.linux-x86_64/wheel
  running install_egg_info
  running egg_info
  writing src/leapct.egg-info/PKG-INFO
  writing dependency_links to src/leapct.egg-info/dependency_links.txt
  writing requirements to src/leapct.egg-info/requires.txt
  writing top-level names to src/leapct.egg-info/top_level.txt
  reading manifest file 'src/leapct.egg-info/SOURCES.txt'
  writing manifest file 'src/leapct.egg-info/SOURCES.txt'
  Copying src/leapct.egg-info to build/bdist.linux-x86_64/wheel/leapct-1.0-py3.9.egg-info
  running install_scripts
  creating build/bdist.linux-x86_64/wheel/leapct-1.0.dist-info/WHEEL
  creating '/tmp/pip-wheel-x7gj8bll/.tmp-jgdx75vo/leapct-1.0-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it
  adding 'leapctype.py'
  adding 'leaptorch.py'
  adding 'leapct-1.0.dist-info/METADATA'
  adding 'leapct-1.0.dist-info/WHEEL'
  adding 'leapct-1.0.dist-info/top_level.txt'
  adding 'leapct-1.0.dist-info/RECORD'
  removing build/bdist.linux-x86_64/wheel
  Building wheel for leapct (pyproject.toml) ... done
  Created wheel for leapct: filename=leapct-1.0-py3-none-any.whl size=27449 sha256=71867d70746352ba2558f364ff7b5f5fda4cffcaddfef3a81841f221fae831b0
  Stored in directory: /data4/liqiaoxin/.cache/pip/wheels/7d/1d/94/4df1ba5b11c10424179467f3cfcc967f1ccd00c93433c90412
Successfully built leapct
Installing collected packages: leapct
  Attempting uninstall: leapct
    Found existing installation: leapct 1.0
    Uninstalling leapct-1.0:
      Removing file or directory /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/__pycache__/leapctype.cpython-39.pyc
      Removing file or directory /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/__pycache__/leaptorch.cpython-39.pyc
      Removing file or directory /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leapct-1.0.dist-info/
      Removing file or directory /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leapctype.py
      Removing file or directory /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leaptorch.py
      Successfully uninstalled leapct-1.0
Successfully installed leapct-1.0
```

kylechampley commented 5 months ago

OK, try this: open src/CMakeLists.txt and comment out line 129 by adding a #, then remove the # on line 131.

JoosenLi commented 5 months ago

> OK, try this: open src/CMakeLists.txt and comment out line 129 by adding a #, then remove the # on line 131.

Thank you for your reply; your suggestion seems to have worked. However, I have a small issue. When I run the test_project_and_FBP.py demo, it seems the latest version no longer includes a sample_data folder, so when the demo saves its data there is a small error saying the path does not exist. It can be resolved by simply creating a sample_data folder first.
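Something like this before running the demo is enough (the folder is created wherever the demo is run from):

```python
import os

os.makedirs("sample_data", exist_ok=True)  # create the output folder the demo writes into
```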

kylechampley commented 5 months ago

I appreciate the feedback. I added that directory back in. Let me know if you run into any other issues.

JoosenLi commented 5 months ago

Hello, I have some questions about the usage logic of LEAP. Are proj.leapct.set_default_volume() and proj.allocate_batch_data() necessary, or are they just for pre-allocating space? I noticed that vol_data and proj_data seem to be attributes of the Projector; do I need to load my image into proj before using it? Do the data f_th and g_th in proj(f_th) and proj.fbp(g_th) need to be put into vol_data and proj_data before each call? I think these questions may be related to your INTRO.md document, but it has not been updated recently; is its content still valid? I look forward to your answer.

kylechampley commented 5 months ago

You asked a lot of questions in this post. Some of your questions I don't quite understand, so I'll answer a few of them now and maybe you and I will better understand each other as we move through this.

First of all, if you want to use the NN solvers, you must allocate the space for the batch data. You don't necessarily have to call allocate_batch_data(), but if you don't, you'll have to set these member variables yourself. If you try running without allocating this space, it will return an error. Basically, any time you want to do tomography, you'll have to allocate space for the projection data and the reconstruction (volume) data. Pre-allocating these arrays means you don't have to allocate space on every iteration.

Next, let's talk about the set_default_volume() function. This function is just for convenience. It does not allocate any memory; all it does is tell LEAP how to define the reconstruction volume. It defines the "default volume", which is the volume that fills the field of view of your data and uses the native voxel sizes. You can also do this yourself with the set_volume(...) command, where you tell LEAP the number of voxels in each dimension, the voxel sizes (mm), and whether you want the volume shifted from the origin. We do recommend you use the set_default_volume() command, as it is usually what most people want and the code runs most efficiently, but you are free to define the volume however you want. You do need to define the volume parameters before you allocate memory for it.
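In code, the two options look roughly like this (the numbers are placeholders for your own geometry, and the optional offset arguments of set_volume are left at their defaults):

```python
# Option 1: let LEAP define the volume that fills the field of view
# at the native voxel size (recommended)
proj.leapct.set_default_volume()

# Option 2: define it yourself: number of voxels in x, y, z and voxel sizes in mm
proj.leapct.set_volume(256, 256, 1, 1.0, 1.0)

# Either way, define the volume before allocating the batch data
proj.allocate_batch_data()
```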

I'll answer the remainder of your questions after you reply to this post.

I'd also like to mention that all of this is for doing reconstruction with neural networks. If you just want to do tomographic reconstruction, you should go directly through the tomographicModels class in leapctype.py.
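A minimal sketch of that direct route, using only the geometry-setup calls that appear elsewhere in this thread (see leapctype.py itself for the projection/reconstruction entry points):

```python
from leapctype import tomographicModels

leapct = tomographicModels()

# Same parallel-beam geometry as in the demos above
numAngles, numRows, numCols, pixelSize = 1000, 1, 384, 1.0
leapct.set_parallelbeam(numAngles, numRows, numCols, pixelSize, pixelSize,
                        0.5*(numRows - 1), 0.5*(numCols - 1),
                        leapct.setAngleArray(numAngles, 180.0))
leapct.set_default_volume()

# From here, call the class's own projection/FBP routines directly on your
# arrays; no Projector (torch) wrapper is needed for plain reconstruction.
```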

JoosenLi commented 5 months ago

Thank you very much for your detailed answer; it has been very helpful for understanding and using LEAP! I also plan to use LEAP to replace the algorithm I wrote myself, and to acknowledge and cite it when publishing papers.

Here are some questions I have about using the LEAP package. Should proj be regarded as a projection/reconstruction operator, or as a class that holds the data being operated on? If proj is an operator, it confuses me that it has vol_data and proj_data attributes for storing the reconstruction and projection data, since those are also what proj() and proj.fbp() take as input. If proj is an object that holds the data, then sino = proj(img) and img = proj.fbp(sino) look like operator calls; in a neural network the output img is different each time, so does that mean a different proj object each time? That also confuses me. In other words, one proj object contains both the projection and reconstruction functions and the projection and volume data, which makes it hard for me to tell which parts are necessary and which are optional, and therefore how to use it as a layer in a neural network. (Your INTRO.md provides a usage example, but I'm not sure it is still applicable after the recent changes to the LEAP package.)

I tried to use LEAP for forward projection (FP) and FBP of a (256, 256) image, but I got unexpected results. Can you tell me where I went wrong?

```python
import torch
import matplotlib.pyplot as plt
from leaptorch import Projector

# 'image' is my 256 x 256 test image, loaded elsewhere
proj = Projector(forward_project=True, use_static=True, use_gpu=True,
                 gpu_device=torch.device('cuda:0'), batch_size=1)
numCols = 384
numAngles = 1000
pixelSize = 1
numRows = 1
proj.leapct.set_parallelbeam(numAngles, numRows, numCols, pixelSize, pixelSize,
                             0.5*(numRows - 1), 0.5*(numCols - 1),
                             proj.leapct.setAngleArray(numAngles, 180.0))
proj.leapct.set_default_volume()
proj.allocate_batch_data()

f_th = torch.from_numpy(image).unsqueeze(0).unsqueeze(0).to(torch.device('cuda'))
sino_slice3 = proj(f_th)
FBP_CT3 = proj.fbp(sino_slice3)[0, 0, :, :].cpu().numpy()

plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(image, cmap='gray')
plt.colorbar()
plt.axis('off')
plt.title('truth')
plt.subplot(1, 2, 2)
plt.imshow(FBP_CT3, cmap='gray')
plt.colorbar()
plt.axis('off')
plt.title('recon')
plt.show()
```

kylechampley commented 5 months ago

First I'll address the issue with the brain CT reconstruction. You are getting a bogus answer because your reconstruction volume is 384 x 384 (the default volume for numCols = 384), but you provided an image of size 256 x 256. Note that set_default_volume() sets the size of your reconstruction, so since you wanted 256 x 256, you should have done something like this instead: proj.leapct.set_volume(256, 256, 1, pixelSize, pixelSize)

kylechampley commented 5 months ago

The Projector class should be viewed as a torch.nn.Module that performs forward and back projections.

You shouldn't have to worry about proj_data and vol_data. They are just internal data arrays used for the calculations; you do not need to fill them with any values, and they should be viewed as private member variables. You provide the inputs and the Projector class will generate the outputs. Yes, the values of the data (either projections or volume data) are constantly changing; what is static are the parameters that specify the CT geometry and CT volume, including the data dimension sizes.
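So, roughly speaking, you use one proj object like any other differentiable torch layer. A toy sketch (tensor names here are made up, and the geometry plus the 256 x 256 volume from the snippets above are assumed to be set up already):

```python
import torch

# proj: a configured Projector (geometry, volume, allocate_batch_data)
f_true = torch.rand(1, 1, 256, 256, device='cuda')   # some ground-truth volume
g_meas = proj(f_true)                                 # simulated measurements

f = torch.zeros(1, 1, 256, 256, device='cuda', requires_grad=True)
loss = ((proj(f) - g_meas) ** 2).mean()               # data-fidelity loss
loss.backward()                                       # gradients flow back through the projector
```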

Did you see the demo scripts here? https://github.com/LLNL/LEAP/tree/main/demo_leaptorch

You should especially look at this one: https://github.com/LLNL/LEAP/blob/main/demo_leaptorch/test_recon_NN.py

as it provides a nice usage example.

JoosenLi commented 5 months ago

Even if I change numCols to 256, or change proj.leapct.set_default_volume() to proj.leapct.set_volume(256, 256, 1, pixelSize, pixelSize), the resulting image is still incorrect (the sino_slice3 obtained from proj(f_th) is also wrong; it is all inf or nan). When I set numCols to 256, sino_slice3.max() after proj(f_th) is inf, and when I set numCols to 384, sino_slice3.min() after proj(f_th) is nan. I am puzzled as to why this error occurs.

kylechampley commented 5 months ago

I don't have your input image, so I made one that is just a square. The following code works for me:

```python
import torch
import matplotlib.pyplot as plt
import numpy as np
from leaptorch import Projector

proj = Projector(forward_project=True, use_static=True, use_gpu=True,
                 gpu_device=torch.device('cuda:0'), batch_size=1)
numCols = 384
numAngles = 1000
pixelSize = 1
numRows = 1
proj.leapct.set_parallelbeam(numAngles, numRows, numCols, pixelSize, pixelSize,
                             0.5*(numRows - 1), 0.5*(numCols - 1),
                             proj.leapct.setAngleArray(numAngles, 180.0))

proj.leapct.set_default_volume()
proj.leapct.set_volume(256, 256, 1, pixelSize, pixelSize)
proj.allocate_batch_data()

image = np.zeros((256, 256), dtype=np.float32)
image[image.shape[0]//2-10:image.shape[0]//2+10, image.shape[1]//2-10:image.shape[1]//2+10] = 1.0

f_th = torch.from_numpy(image).unsqueeze(0).unsqueeze(0).to(torch.device('cuda'))
sino_slice3 = proj(f_th)
FBP_CT3 = proj.fbp(sino_slice3)[0, 0, :, :].cpu().numpy()

plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(image, cmap='gray')
plt.colorbar()
plt.axis('off')
plt.title('truth')
plt.subplot(1, 2, 2)
plt.imshow(FBP_CT3, cmap='gray')
plt.colorbar()
plt.axis('off')
plt.title('recon')
plt.show()
```

JoosenLi commented 5 months ago

I found out what the problem was. There was an error when my input image was 64-bit (float64); converting it to 32-bit floats solved the problem. Thank you so much!
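For reference, the fix was just a dtype conversion before building the tensor, something like:

```python
import numpy as np

image = image.astype(np.float32)   # the projector seems to expect 32-bit float input
f_th = torch.from_numpy(image).unsqueeze(0).unsqueeze(0).to(torch.device('cuda'))
```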

kylechampley commented 5 months ago

Sure, no problem. One more thing. I'm not sure you actually want to specify the voxel size as 1 mm. Doing this would mean that 128 columns of your projection data would never see anything. Do you want to make your voxels bigger so that they fill the field of view? If so, you could do this: proj.leapct.set_volume(256,256,1,pixelSize*384.0/256.0,pixelSize)

JoosenLi commented 5 months ago

"You are right, thank you for the reminder, this improvement has helped me a lot!"