charlesnchr / ML-SIM

Desktop app, web app and source code for training, testing and using ML-SIM
http://ML-SIM.com

PSF #4

Open liuxinxjtu opened 2 years ago

liuxinxjtu commented 2 years ago

Hi, I'm very interested in your work. Could you share the process you use to simulate the PSF? Also, after training with simulated data, do you use real data to fine-tune the model and then verify it?

charlesnchr commented 2 years ago

Hi @liuxinxjtu, thanks for your interest.

The PSF is by default approximated. See the function PsfOtf(w,scale) in "MLSIM_datagen/SIMulator_functions.py".
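As a rough illustration of what such an approximation can look like (this is a generic sketch of a diffraction-limited incoherent OTF and its PSF, not the actual body of `PsfOtf` in the repo; the `w`/`scale` parameter meanings here are assumptions):

```python
import numpy as np

def approx_psf_otf(w, scale):
    """Sketch: ideal incoherent OTF of a circular pupil and its PSF.

    w     -- side length of the square image in pixels
    scale -- cutoff frequency as a fraction of the grid's frequency range
              (assumed meaning; the repo's parameters may differ)
    """
    # Radial spatial-frequency grid, normalised so the cutoff sits at kr = 1
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * 2.0
    kx, ky = np.meshgrid(fx, fx)
    kr = np.sqrt(kx**2 + ky**2) / scale

    # Ideal incoherent OTF = autocorrelation of a circular pupil ("chat" function)
    otf = np.zeros((w, w))
    inside = kr < 1
    otf[inside] = (2 / np.pi) * (
        np.arccos(kr[inside]) - kr[inside] * np.sqrt(1 - kr[inside] ** 2)
    )

    # The PSF is the inverse Fourier transform of the OTF
    psf = np.real(np.fft.ifft2(np.fft.ifftshift(otf)))
    psf = np.fft.fftshift(psf)  # put the PSF peak at the image centre
    return psf, otf
```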

If you wanted to fine-tune the model using a PSF more closely resembling your system, you could replace this function call and load an external file.
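A minimal sketch of that replacement, assuming you have a measured PSF (e.g. averaged bead images) saved as a 2-D `.npy` array already matching the patch size (the file format and function name here are hypothetical):

```python
import numpy as np

def load_measured_psf(path):
    """Load a measured 2-D PSF from a .npy file and derive its OTF.

    Hypothetical drop-in for the built-in approximation: normalise the
    PSF to unit sum so the derived OTF has unit DC gain, then take the
    magnitude of its Fourier transform as the OTF.
    """
    psf = np.load(path).astype(np.float64)
    psf /= psf.sum()  # unit DC gain
    otf = np.fft.fftshift(np.abs(np.fft.fft2(np.fft.ifftshift(psf))))
    return psf, otf
```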

Whether to use real data to fine-tune the model comes down to whether you have good reconstruction output from other methods to use as targets. If you have a low signal-to-noise ratio and you train with e.g. FairSIM as target, you will end up reproducing a lot of the artefacts that FairSIM generates.

liuxinxjtu commented 2 years ago

Thanks. Regarding the parameter opt.nrep in your SIM data generation code: is its purpose to generate multiple sets of images with different orientations and phases, in order to demonstrate the network's robustness to orientations and angles?

Quma233 commented 2 years ago

@liuxinxjtu It seems that this setting augments the dataset and enables the network to learn a many-to-one mapping (to the GT). My guess is that opt.nrep should be >= 2 for the network to learn this mapping.

charlesnchr commented 2 years ago

@Quma233 is correct that opt.nrep is intended for data augmentation. The robustness of the network comes from the large parameter space used for data synthesis. Covering that space requires a large source image set or, alternatively, opt.nrep >= 2. For fine-tuning you can probably ignore this option.
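To make the role of opt.nrep concrete, here is a hedged sketch of how such an option could multiply a small source set during data synthesis: each ground-truth image is reused nrep times, each time with freshly sampled SIM parameters. The parameter names (`theta0`, `phases`) are illustrative, not the repo's actual ones:

```python
import numpy as np

def synthesize_sim_samples(images, nrep, rng=None):
    """Sketch: reuse each source image nrep times with random SIM parameters.

    images -- list of ground-truth images
    nrep   -- number of parameter draws per source image (opt.nrep)
    Returns a list of (image, orientation, phases) tuples, so the dataset
    grows by a factor of nrep and the network sees a many-to-one mapping.
    """
    rng = rng or np.random.default_rng()
    samples = []
    for img in images:
        for _ in range(nrep):
            theta0 = rng.uniform(0, np.pi)          # random base orientation
            phases = rng.uniform(0, 2 * np.pi, 3)   # random phase offsets
            samples.append((img, theta0, phases))
    return samples
```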

liuxinxjtu commented 1 year ago

How many nanometers is the pixel size mentioned in the paper?