Open lschm opened 2 months ago

Hi,
your FAQ states that the first and last N frames of each time series are removed during inference. However, when I closely compare the raw and processed data, it looks like the first N+1 frames are actually removed and only the last N-1.
Best wishes, Lena

Hi Lena,
Could you tell me which version (GUI or command line) of our code you used? Knowing the parameters you used would also help us track down the problem.
Best, Minho

Hi,
I used the command line; here is the call I used for training the model (for the parameters):
python -m src.train --exp_name xx --noisy_data xx --is_folder --results_dir xx --patch_size 61 22 22 --bs_size 3 3 --bp --n_epochs 40 --logging_interval_batch 5000
Best, Lena

It seems like the --bp mode was the problem. I changed test.py on GitHub. Could you download the updated test.py and set bp_mode to True?

I had already modified test.py to
model = SUPPORT(in_channels=61, mid_channels=[16, 32, 64, 128, 256], bp=True, depth=5, blind_conv_channels=64, one_by_one_channels=[32, 16], last_layer_channels=[64, 32, 16], bs_size=bs_size).cuda()
so, as far as I can tell, the change you made to the code has no effect.

Oh, I see. Could you tell me how you checked that the first N+1 frames were removed and the last N-1?

I just compared signal onsets and movements between the raw and denoised data (images). I didn't do any comprehensive testing with synthetic data, but the offset was consistent for two different models, trained on different data and analyzing different data.
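If it helps to quantify the offset beyond comparing signal onsets by eye, here is a minimal sketch of one way to estimate it. The helper name `frames_trimmed`, and the assumption that both stacks load as NumPy arrays of shape (T, H, W), are mine, not part of SUPPORT: it cross-correlates the per-frame mean-intensity traces of the raw and denoised stacks to count how many frames were dropped at each end.

```python
import numpy as np

def frames_trimmed(raw: np.ndarray, denoised: np.ndarray) -> tuple[int, int]:
    """Estimate how many frames were dropped at the start and end.

    raw, denoised: stacks of shape (T, H, W); the denoised stack is
    assumed to be a contiguous temporal crop of the raw one.
    """
    r = raw.mean(axis=(1, 2))        # per-frame mean intensity, raw
    d = denoised.mean(axis=(1, 2))   # per-frame mean intensity, denoised
    # Normalize so the match score is driven by trace shape, not scale.
    r = (r - r.mean()) / r.std()
    d = (d - d.mean()) / d.std()
    n_missing = len(r) - len(d)      # total number of frames removed
    # Slide the short trace along the long one; the best-matching
    # start position tells us how many frames were cut at the front.
    scores = [np.dot(r[s:s + len(d)], d) for s in range(n_missing + 1)]
    start = int(np.argmax(scores))   # frames removed at the beginning
    end = n_missing - start          # frames removed at the end
    return start, end
```

If inference removes N frames symmetrically, this should return (N, N); the behavior described above would instead show up as (N+1, N-1).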