IMSY-DKFZ / simpa

The Simulation and Image Processing for Photonics and Acoustics (SIMPA) toolkit.
https://simpa.readthedocs.io/en/main/

Sensor mask problem in kWave #408

Open jeremieglt opened 21 hours ago

jeremieglt commented 21 hours ago

Describe the bug

I am simulating a very simple example with kWave as the only module in my pipeline, using an artificial image as input (see the device visualisation of the geometry below): I want to image three thin vessels in a normal background with the MSOT Acuity Echo, but I get a problem related to the "sensor mask" in the simulate_2D Matlab code (see the log below).

[Images: simpa_device_position_after, Screenshot 2024-12-03 112115]

Specify a priority (low, medium, high): Medium

Current Behavior

The problem seems to be somewhat related to the pitch of the sensor: when I set the pitch to 0.5 mm (not the default for the MSOT Acuity, but the default for the CurvedArrayDetectionGeometry), the simulation runs, but I only get values from the first element(s) on both ends of the curvilinear array (see the output below; for the moment I am only imaging one point, see the code below, but that does not matter for the problem). When I set it to 0.34 mm (the default value for the MSOT Acuity), I get this mask problem.

[Image: 3_points_at_top_kwave]

I looked into the simulate_2D.m script, and it seems that, in the sensor definition part, the pitch used is 0.5:

% add elements to the array
for ind = 1:num_elements
  x = elem_pos(1, ind);
  y = elem_pos(2, ind);
  alpha = angles(1, ind);
  x = x + 0.5 * (element_width*sin(alpha)); % HERE
  y = y + 0.5 * (element_width*cos(alpha)); % HERE
  karray.addRectElement([x, y], element_width, 0.00001, [angles(1, ind)]);
end

So I tried to modify it to 0.34 mm, but I still get the same problems, so I guess the problem is either not only there or not there at all.
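
To make the pitch question more concrete, here is a small sketch (my own illustration with a made-up helper name, not SIMPA or kWave code) of how I assume the element centres of a curved array follow from the pitch and the radius, with the pitch taken as the arc length between neighbouring elements:

import numpy as np

def curved_array_element_positions(pitch_mm, radius_mm, num_elements):
    """Assumed element centres of a curved array: equally spaced along the
    arc, with an arc length of pitch_mm between neighbouring elements."""
    # angular step that corresponds to one pitch on a circle of this radius
    delta_angle = pitch_mm / radius_mm
    # angles placed symmetrically around the array axis
    angles = (np.arange(num_elements) - (num_elements - 1) / 2) * delta_angle
    # element centres relative to the circle centre (the focus)
    return np.stack([radius_mm * np.sin(angles), radius_mm * np.cos(angles)])

# with the MSOT Acuity default pitch of 0.34 mm, the 256 elements cover an arc
# of 256 * 0.34 = 87 mm, i.e. roughly 125 degrees on a 40 mm radius; with a
# pitch of 0.5 mm the arc is 128 mm, i.e. roughly 183 degrees
positions_default_pitch = curved_array_element_positions(0.34, 40, 256)
positions_large_pitch = curved_array_element_positions(0.5, 40, 256)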

Expected behavior

No more mask error.

Additional context

My function:

def run_kwave_forward_3_points(spacing: float | int = 0.2, path_manager=None):
    """
    Runs the forward simulation with SIMPA reduced to a kWave simulation on a simple image
    (3 points at the top of the image): skipping the MCX part to ensure that the images taken
    as input by SIMPA's and MSOT rec's forward models are identical.

    The goal of this simulation is to compare the output sinogram with the one
    obtained using MSOT rec's forward model.
    """

    if path_manager is None:
        path_manager = sp.PathManager()

    # Seed the numpy random configuration prior to creating the global_settings file in
    # order to ensure that the same volume is generated with the same random seed every time.
    np.random.seed(RANDOM_SEED)

    # Initialize global settings and prepare for simulation pipeline including
    # volume creation and optical forward simulation.
    general_settings = {
        Tags.RANDOM_SEED: RANDOM_SEED,
        Tags.VOLUME_NAME: VOLUME_NAME,
        Tags.SPACING_MM: spacing,
        Tags.SIMULATION_PATH: path_manager.get_hdf5_file_save_path(),
        Tags.SIMPA_OUTPUT_FILE_PATH: path_manager.get_hdf5_file_save_path() + "/" + VOLUME_NAME + ".hdf5",
        Tags.DIM_VOLUME_X_MM: VOLUME_TRANSDUCER_DIM_IN_MM,
        Tags.DIM_VOLUME_Y_MM: VOLUME_PLANAR_DIM_IN_MM,
        Tags.DIM_VOLUME_Z_MM: VOLUME_HEIGHT_IN_MM,
        Tags.WAVELENGTHS: WAVELENGTHS,
        Tags.GPU: True,
        Tags.DO_FILE_COMPRESSION: True,
        Tags.US_GEL: False,
        Tags.IGNORE_QA_ASSERTIONS: True # ignoring a negative pressure problem
    }

    settings = sp.Settings(general_settings)

    settings.set_acoustic_settings({
        Tags.DATA_FIELD_SPEED_OF_SOUND: 1540,
        Tags.DATA_FIELD_ALPHA_COEFF: 0.01,
        Tags.DATA_FIELD_DENSITY: 1000,
        Tags.ACOUSTIC_SIMULATION_3D: False,
        Tags.ACOUSTIC_MODEL_BINARY_PATH: path_manager.get_matlab_binary_path(),
        Tags.KWAVE_PROPERTY_ALPHA_POWER: 0.00,
        Tags.KWAVE_PROPERTY_SENSOR_RECORD: "p",
        Tags.KWAVE_PROPERTY_PMLInside: False,
        Tags.KWAVE_PROPERTY_PMLSize: [31, 32],
        Tags.KWAVE_PROPERTY_PMLAlpha: 1.5,
        Tags.KWAVE_PROPERTY_PlotPML: False,
        Tags.RECORDMOVIE: False,
        Tags.MOVIENAME: "visualization_log",
        Tags.ACOUSTIC_LOG_SCALE: True,
        Tags.SENSOR_SAMPLING_RATE_MHZ: 40
    })

    settings.set_volume_creation_settings({
        Tags.SIMULATE_DEFORMED_LAYERS: True,
        Tags.STRUCTURES: create_vessel_tissue() # whatever
    })

    settings.set_optical_settings({
        Tags.OPTICAL_MODEL_NUMBER_PHOTONS: 1e7, # 5e7
        Tags.OPTICAL_MODEL_BINARY_PATH: path_manager.get_mcx_binary_path(),
        Tags.OPTICAL_MODEL: Tags.OPTICAL_MODEL_MCX,
        Tags.LASER_PULSE_ENERGY_IN_MILLIJOULE: 50,
        Tags.MCX_ASSUMED_ANISOTROPY: 0.9
    })

    # Get device for simulation
    device_position = np.array([VOLUME_TRANSDUCER_DIM_IN_MM/2,
                                VOLUME_PLANAR_DIM_IN_MM/2,
                                0])
    field_of_view_extent = [-35, 35, 0, 0, 0, 50]
    device = sp.MSOTAcuityEcho()
    device.set_detection_geometry(
        sp.CurvedArrayDetectionGeometry(pitch_mm=0.5,
                                        radius_mm=40,
                                        number_detector_elements=256,
                                        detector_element_width_mm=0.24,
                                        detector_element_length_mm=13,
                                        center_frequency_hz=3.96e6,
                                        bandwidth_percent=55,
                                        sampling_frequency_mhz=40,
                                        angular_origin_offset=np.pi,
                                        device_position_mm=device_position,
                                        field_of_view_extent_mm=field_of_view_extent))
    device.add_illumination_geometry(sp.SlitIlluminationGeometry())

    # Run the first part of the pipeline artificially, to make sure that the hdf5 file is created
    # so that we can add data fields to it later
    pipeline = [
        sp.ModelBasedAdapter(settings),
        sp.MCXAdapter(settings)
        ]
    sp.simulate(pipeline, settings, device)

    # Dimensions of the image
    number_of_pixels_x = int(VOLUME_TRANSDUCER_DIM_IN_MM / spacing)
    number_of_pixels_y = int(VOLUME_PLANAR_DIM_IN_MM / spacing)
    number_of_pixels_z = int(VOLUME_HEIGHT_IN_MM / spacing)

    # Positioning the points
    position_x_point_1 = int(number_of_pixels_x / 2)
    position_z_point_1 = int(10 / spacing)
    # position_x_point_2 = int(number_of_pixels_x / 3)
    # position_z_point_2 = int(15 / spacing)
    # position_x_point_3 = int(2 * number_of_pixels_x / 3)
    # position_z_point_3 = int(12.5 / spacing)

    p0_array = np.zeros((number_of_pixels_x, number_of_pixels_y, number_of_pixels_z))
    p0_array[position_x_point_1, :, position_z_point_1] = np.ones(number_of_pixels_y)
    # p0_array[position_x_point_2, :, position_z_point_2] = np.ones(number_of_pixels_y)
    # p0_array[position_x_point_3, :, position_z_point_3] = np.ones(number_of_pixels_y)
    p0_array = np.array(p0_array, dtype='float64')

    # Saving our new array as the modified initial pressure
    save_data_field(data=p0_array,
                    file_path=str(path_manager.get_hdf5_file_save_path() + "/" + VOLUME_NAME + ".hdf5"),
                    data_field=Tags.DATA_FIELD_INITIAL_PRESSURE, 
                    wavelength=WAVELENGTHS[0])

    # Command to continue the simulation without creating a new HDF5 file
    settings[Tags.CONTINUE_SIMULATION] = True

    # Running kWave
    sp.KWaveAdapter(settings).run(device)

    # Loading and storing the obtained sinogram
    p0_sinogram = sp.load_data_field(str(path_manager.get_hdf5_file_save_path() + "/" + VOLUME_NAME + ".hdf5"),
                                  Tags.DATA_FIELD_TIME_SERIES_DATA, WAVELENGTHS[0])
    save_path = './data/msot_rec_test_data/' + '3_points_at_top' + '_kwave.npy'
    np.save(save_path, p0_sinogram)

    # Plotting the result
    plot_npy(save_path)

Thanks in advance for your help!

kdreher commented 21 hours ago

The problem is that you're defining the detector position at z-position=0. The origin of the CurvedArrayDetectionGeometry is the focus, so you're setting the focus inside the very first voxel of the volume. Since the array has a radius of 40 mm, this puts all the sensor elements above the volume, i.e. outside of the grid that is simulated. You either have to define your volume differently so that it is big enough for your device to fit into it, e.g. by adding 40 mm to your z-dim and then setting the device position at z=40, or you use the update_settings_for_use_of_model_based_volume_creator method of the MSOTAcuityEcho to increase the volume after the fact, like in the paper experiment repo.
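
A rough sketch of both options (untested, with the constant and settings names taken from your snippet):

import numpy as np
import simpa as sp
from simpa import Tags

# Option 1: enlarge the volume and place the device focus 40 mm deep, so the
# 40 mm radius of the curved array stays inside the simulated grid
# (then pass device_position to the detection geometry as before)
general_settings[Tags.DIM_VOLUME_Z_MM] = VOLUME_HEIGHT_IN_MM + 40
device_position = np.array([VOLUME_TRANSDUCER_DIM_IN_MM / 2,
                            VOLUME_PLANAR_DIM_IN_MM / 2,
                            40])

# Option 2: keep the device at z=0 and let the device adjust the volume
# creation settings after the fact
device = sp.MSOTAcuityEcho(device_position_mm=np.array([VOLUME_TRANSDUCER_DIM_IN_MM / 2,
                                                        VOLUME_PLANAR_DIM_IN_MM / 2,
                                                        0]))
device.update_settings_for_use_of_model_based_volume_creator(settings)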

jeremieglt commented 20 hours ago

OK, thank you for your answer; that was a big misunderstanding on my side regarding the device positioning.