Code for the paper SINR: Deconvolving Circular SAS Images Using Implicit Neural Representations, available at https://arxiv.org/abs/2204.10428.
Set up the conda environment:

```
conda update -n base conda
conda env create -f environ.yml
conda activate CSAS_INR_Deconv
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```
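As an optional sanity check (assuming the CSAS_INR_Deconv environment is active), you can confirm that PyTorch was installed with CUDA support:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```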
Visit https://pytorch.org/ for more details on installing PyTorch.

The example_sim_deconv_pipeline/deconvolve_simulated_scene.py script creates two sets of simulated CSAS measurements, reconstructs the measurements using DAS, and then deconvolves the images using our INR approach and the baselines discussed in the paper. The simulation and geometry parameters are set by editing the example_sim_deconv_pipeline/system_parameters.ini and example_sim_deconv_pipeline/simulation.ini files, and the deconvolution parameters are set in example_sim_deconv_pipeline/deconv.ini. All .ini files are commented with instructions for use. The outputs of the deconvolution methods are saved in the deconv_dir directory.
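Assuming the script is run from the repository root with the conda environment active, a typical invocation might be:

```
python example_sim_deconv_pipeline/deconvolve_simulated_scene.py
```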
Running the pipeline will generate a results figure in each deconv_dir/image* directory showing the deconvolution results of all the methods run on the created dataset. Additionally, one can find bar plots displaying deconvolution metrics for each method.
The script airsas_deconv_pipeline/reconstruct_scene.py will reconstruct an AirSAS scene, compute the PSF, and then deconvolve the scene using the methods specified in the associated deconv.ini file. For example, one can use the INR to deconvolve the AirSAS small feature cutout scene shown in the paper figure.
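Assuming the command is run from the repository root with the conda environment active, a typical invocation might be:

```
python airsas_deconv_pipeline/reconstruct_scene.py
```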
Running airsas_deconv_pipeline/reconstruct_scene.py will populate the directory airsas_deconv_pipeline\20k_scene with INR deconvolution results for the 20k small feature DAS scene. Running the script should yield a DAS reconstruction, saved at airsas_deconv_pipeline\20k_scene\scene_abs.png, and an INR deconvolution at epoch 100, saved at airsas_deconv_pipeline\20k_scene\image0\INR\deconv_img_100.png.