rsachetto / MonoAlg3D_C

Extensible Monodomain simulator
MIT License

Incorrect Stimulus Loaded #67

Open PeihengLu opened 7 months ago

PeihengLu commented 7 months ago

Hi! Sorry for another question!

When executing the batch script, all iterations other than the first only contain the last stimulus:

Stimulus name: stim_plain
[stim] configuration:
[stim] library = ./shared_libs/libdefault_stimuli.so
[stim] main function = stim_if_x_less_than
[stim] start = 1801.000000
[stim] duration = 2.0
[stim] period = 450
[stim] current = -50.0
[stim] x_limit = 2000.0

Only the first iteration managed to load the stimulus with the correct start time:

Stimulus name: stim_plain
[stim] configuration:
[stim] library = ./shared_libs/libdefault_stimuli.so
[stim] main function = stim_if_x_less_than
[stim] start = 1.0
[stim] duration = 2.0
[stim] period = 450
[stim] current = -50.0
[stim] x_limit = 2000.0

The configuration file (cable.ini) is as follows:

[main]
num_threads=4
dt_pde=0.04     ; delta time for the PDE (monodomain equation)
simulation_time=2000.0
abort_on_no_activity=false
use_adaptivity=false

[update_monodomain]
main_function=update_monodomain_default

[save_result]
print_rate=100
mesh_print_rate=100
mesh_format=ensight
output_dir=./outputs/cable

init_function=init_save_with_activation_times
end_function=end_save_with_activation_times
main_function=save_with_activation_times
time_threshold=0.0
apd_threshold=-65.0
save_visible_mask=false
remove_older_simulation=true
binary=false

[assembly_matrix]
init_function=set_initial_conditions_fvm
sigma_x=0.00022     ; conductivities in every direction, in S/mm
sigma_y=0.00022
sigma_z=0.00022
library_file=shared_libs/libdefault_matrix_assembly.so
main_function=homogeneous_sigma_assembly_matrix

[linear_system_solver]
tolerance=1e-16
use_preconditioner=no
max_iterations=500
library_file=shared_libs/libdefault_linear_system_solver.so
use_gpu=yes
main_function=conjugate_gradient
init_function=init_conjugate_gradient
end_function=end_conjugate_gradient

[domain]
name=1D mesh
start_dx=400.0  ; the mesh is discretised as cubes (all dimensions are the same size) and the cube size is 400 um (0.4 mm)
start_dy=400.0  
start_dz=400.0
cable_length=120000.0 ; 120000 um, which is 12 cm
main_function=initialize_grid_with_cable_mesh

[ode_solver]
adaptive=false
dt=0.02     ; delta time for the ODE (cell model)
use_gpu=no
gpu_id=0
library_file= ./shared_libs/libToRORd_fkatp_mixed_endo_mid_epi.so

[stim_plain]
start = 1.0
duration = 2.0
period = 450
current = -50.0
x_limit = 2000.0 ; this means that the 10 first cells were paced by us
main_function=stim_if_x_less_than

[extra_data]
INa_Multiplier=0.2
main_function=set_extra_data_mixed_torord_fkatp_epi_mid_endo
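
For reference, with the [main] and [stim_plain] values above (start = 1.0, period = 450, simulation_time = 2000.0), a periodic stimulus that simply re-fires every period ms would be applied at 1, 451, 901, 1351 and 1801 ms. The sketch below is only an illustration of that arithmetic (it is not MonoAlg3D code); note that the last firing time, 1801.0, is exactly the incorrect start value reported in the logs above.

/* Illustration only (not MonoAlg3D code): list the firing times implied by
 * the [stim_plain] parameters above, assuming the stimulus simply re-fires
 * every `period` ms until simulation_time is reached. */
#include <stdio.h>

int main(void) {
    const double start = 1.0;              /* [stim_plain] start */
    const double period = 450.0;           /* [stim_plain] period */
    const double simulation_time = 2000.0; /* [main] simulation_time */

    /* Prints 1.0, 451.0, 901.0, 1351.0, 1801.0; the last value matches the
     * wrong "start" seen in every batch iteration after the first. */
    for (double t = start; t < simulation_time; t += period) {
        printf("stimulus fires at t = %.1f ms\n", t);
    }
    return 0;
}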

The batch configuration file:

[batch]
initial_config=../cable/cable.ini
output_folder=batch_simulations
num_simulations_per_parameter_change=1

[modify]
;section|parameter=(range or list)|start|end|increment (including start and end)
extra_data|INa_Multiplier=range|0.25|1.0|0.05
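
The range specification above should expand into one simulation per INa_Multiplier value. Below is a minimal sketch of that expansion, assuming inclusive endpoints as the inline comment states (it is not the MonoAlg3D_batch code, and the run list later in the thread stops at 0.95, so the tool's handling of the final value may differ):

/* Hypothetical sketch: expand
 * extra_data|INa_Multiplier=range|0.25|1.0|0.05 into per-run values. */
#include <stdio.h>

int main(void) {
    const double start = 0.25, end = 1.0, increment = 0.05;

    int run = 1;
    /* A small epsilon keeps the endpoint from being lost to round-off. */
    for (double v = start; v <= end + 1e-9; v += increment, ++run) {
        printf("run %d: INa_Multiplier = %.2f\n", run, v);
    }
    return 0;
}
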
PeihengLu commented 7 months ago

I can fix this issue by adding an extra line in the batch configuration file:

stim_plain|start=list|1.0

However, it would still be interesting to see why the start time is being changed. The logged value of 1801.0 is exactly 1.0 + 4 × 450, i.e. the last stimulus time of the previous 2000 ms run, which suggests the stimulus state from one run is carried over into the next.

rsachetto commented 7 months ago

I tested this and it worked correctly on my machine:

cable_run_1_INa_Multiplier_0.250000/outputlog.txt:[stim] start = 1.0
cable_run_2_INa_Multiplier_0.300000/outputlog.txt:[stim] start = 1.0
cable_run_3_INa_Multiplier_0.350000/outputlog.txt:[stim] start = 1.0
cable_run_4_INa_Multiplier_0.400000/outputlog.txt:[stim] start = 1.0
cable_run_5_INa_Multiplier_0.450000/outputlog.txt:[stim] start = 1.0
cable_run_6_INa_Multiplier_0.500000/outputlog.txt:[stim] start = 1.0
cable_run_7_INa_Multiplier_0.550000/outputlog.txt:[stim] start = 1.0
cable_run_8_INa_Multiplier_0.600000/outputlog.txt:[stim] start = 1.0
cable_run_9_INa_Multiplier_0.650000/outputlog.txt:[stim] start = 1.0
cable_run_10_INa_Multiplier_0.700000/outputlog.txt:[stim] start = 1.0
cable_run_11_INa_Multiplier_0.750000/outputlog.txt:[stim] start = 1.0
cable_run_12_INa_Multiplier_0.800000/outputlog.txt:[stim] start = 1.0
cable_run_13_INa_Multiplier_0.850000/outputlog.txt:[stim] start = 1.0
cable_run_14_INa_Multiplier_0.900000/outputlog.txt:[stim] start = 1.0
cable_run_15_INa_Multiplier_0.950000/outputlog.txt:[stim] start = 1.0

rsachetto commented 7 months ago

How many MPI processes are you using? I just found a bug where the simulation crashes if a process doesn't have any work to do. I will fix it.

PeihengLu commented 7 months ago

I am not actually using MPI; I am only using Monoalg3D_batch to make testing different multipliers easier. Maybe that is why this is broken for me.

PeihengLu commented 7 months ago

Testing with mpirun right now.

rsachetto commented 7 months ago

It worked here without mpirun:

cable_run_1_INa_Multiplier_0.250000/outputlog.txt:[stim] start = 1.0
cable_run_2_INa_Multiplier_0.300000/outputlog.txt:[stim] start = 1.0
cable_run_3_INa_Multiplier_0.350000/outputlog.txt:[stim] start = 1.0
cable_run_4_INa_Multiplier_0.400000/outputlog.txt:[stim] start = 1.0
cable_run_5_INa_Multiplier_0.450000/outputlog.txt:[stim] start = 1.0
cable_run_6_INa_Multiplier_0.500000/outputlog.txt:[stim] start = 1.0
cable_run_7_INa_Multiplier_0.550000/outputlog.txt:[stim] start = 1.0
cable_run_8_INa_Multiplier_0.600000/outputlog.txt:[stim] start = 1.0
cable_run_9_INa_Multiplier_0.650000/outputlog.txt:[stim] start = 1.0
cable_run_10_INa_Multiplier_0.700000/outputlog.txt:[stim] start = 1.0
cable_run_11_INa_Multiplier_0.750000/outputlog.txt:[stim] start = 1.0
cable_run_12_INa_Multiplier_0.800000/outputlog.txt:[stim] start = 1.0
cable_run_13_INa_Multiplier_0.850000/outputlog.txt:[stim] start = 1.0
cable_run_14_INa_Multiplier_0.900000/outputlog.txt:[stim] start = 1.0
cable_run_15_INa_Multiplier_0.950000/outputlog.txt:[stim] start = 1.0

rsachetto commented 7 months ago

Could you try a clean build? make clean; make

rsachetto commented 7 months ago

It should work with only one process. I will keep looking. Let me know if you find this bug again.

PeihengLu commented 7 months ago

It still didn't work for me on a clean build. I'll try again on the uni lab machine as soon as I can. It's very likely just a strange problem on my side.

Thank you very much for your help so far btw!

rsachetto commented 7 months ago

Thanks for reporting. Maybe it is a bug in how I manage some global state in the batch simulations. I've just found a small bug when saving the mesh using the Ensight format.
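
A hypothetical illustration of that global-state hypothesis (this is not MonoAlg3D's actual code): if the periodic stimulus advances its own start field by period each time it fires, the field ends a 2000 ms run at 1801.0, and unless it is reset (or the config is re-parsed) before the next batch iteration, every later run inherits that value.

/* Sketch only, under the assumption described above. */
#include <stdio.h>

struct stim_config {
    double start;            /* mutated during the run in this sketch */
    double period;
    double configured_start; /* value as parsed from the .ini file */
};

/* Mimic a run: advance `start` to each new beat while one still fits
 * inside the simulation window (ends at 1801.0 for a 2000 ms run). */
static void run_simulation(struct stim_config *stim, double simulation_time) {
    while (stim->start + stim->period < simulation_time) {
        stim->start += stim->period;
    }
}

/* The kind of reset the symptom suggests is missing between batch runs. */
static void reset_stim(struct stim_config *stim) {
    stim->start = stim->configured_start;
}

int main(void) {
    struct stim_config stim = { .start = 1.0, .period = 450.0,
                                .configured_start = 1.0 };

    for (int run = 1; run <= 3; ++run) {
        reset_stim(&stim); /* without this, runs 2+ start at 1801.0 */
        printf("run %d: stim start = %.1f\n", run, stim.start);
        run_simulation(&stim, 2000.0);
    }
    return 0;
}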