iandobbie opened this issue 4 years ago
I suggest we have a switch on the experiment interface to interleave channels. Then the experiment can have a list of lights and generate an action table with camera triggers and a round-robin list of light triggers, with Z moves at the relevant places, and the data stored in a file with the relevant channels/Z set.
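For illustration, a minimal sketch of what generating such a table might look like, assuming a hypothetical `table.addAction(time, handler, argument)` interface; the handler objects and parameter names here are placeholders, not cockpit's actual API:

```python
# Sketch only: round-robin light triggers on a single camera, with a Z
# move before each slice. All handlers and timings are illustrative.
def build_interleaved_table(table, zstage, camera, lights, z_positions,
                            exposure_ms, readout_ms, z_settle_ms):
    t = 0.0
    for z in z_positions:
        table.addAction(t, zstage, z)         # move to the next Z slice
        t += z_settle_ms                      # let the stage settle
        for light in lights:                  # round robin over channels
            table.addAction(t, light, True)   # light on
            table.addAction(t, camera, True)  # trigger an exposure
            t += exposure_ms
            table.addAction(t, light, False)  # light off
            t += readout_ms                   # leave time for camera readout
    return table
```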
I have started to implement this in the 1cam-multi-channel branch. So far it generates light pulses of the correct size.
Outstanding issues
I'm sure there will be other issues.
I have dealt with part of point 2: I don't take a second image until after the maximum camera exposure time, so we can leave the camera exposure fixed and it should cope with the readout time. I have noticed a couple of additional factors that need to be taken care of.
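In other words, with a fixed camera exposure the earliest safe trigger for the next image is bounded by the longest channel exposure plus the readout; a one-line sketch of that rule (the function name is illustrative):

```python
# Sketch: the next trigger must wait for the longest channel exposure
# plus the camera readout, since the exposure setting stays fixed.
def next_trigger_time(t_now_ms, exposure_times_ms, readout_ms):
    return t_now_ms + max(exposure_times_ms) + readout_ms
```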
The latest version of my branch appears to do all the action table stuff. I can't actually run an experiment here, so I will test on some real hardware and then also do the proper image-saving fixes.
I am now working on the metadata for the file. The emission wavelengths need to be encoded somewhere, which currently doesn't exist, and then this info needs to be passed to the datasaver. Possibly the multi-bandpass filters could return the next band at a longer wavelength than the excitation wavelength, although this ignores large Stokes shift dyes.
The latest push has metadata, but it is not quite correct yet. There are two outstanding issues as far as I can see:
1) The excitation wavelength is always the longest one; it needs to be the excitation wavelength used for that image, so it needs to be passed from the experiment module to the data saver.
2) There is no current mapping between the excitation wavelength and the emission band of a multi-bandpass em filter. This needs to be in a config file, I think.
Sorted the excitation wavelength; I just need to map the emission wavelength somewhere and then pass that.
Current suggestion is that we define it in the camera config.
The mapping is defined in a camera section of the depot file as:

```
em-map: 488 : 525 561 : 580
```

This means that 488 excitation is mapped to 525 emission, and 561 ex is mapped to 580 em, in the interleaved multi-channel mode.
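For concreteness, a minimal sketch of turning that value into an excitation-to-emission lookup (this mirrors the simple parsing mentioned below, not necessarily the code in the branch):

```python
# Sketch: parse "488 : 525 561 : 580" into {488: 525, 561: 580};
# consecutive numbers pair up as excitation then emission.
def parse_em_map(value):
    numbers = [int(tok) for tok in value.replace(':', ' ').split()]
    return dict(zip(numbers[::2], numbers[1::2]))

em_map = parse_em_map("488 : 525 561 : 580")
assert em_map == {488: 525, 561: 580}
```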
This appears to work on simulated devices but crashed on the Zaber today. I think this might be a Ximea camera config issue; I will confirm tomorrow.
This works on the Zaber, with the digital Z-stack edits (#691) to get the Z stack to work on the Zaber. I will test on Danny's system on Friday and pull if it works. The complete edits are isolated in
https://github.com/iandobbie/cockpit/commits/interleave-multichan
Pulled and tested on Danny's Aurox system so I am happy it works and is generalisable.
One issue is that the parsing of the em-map config is not robust and should probably be improved. I will open a separate issue around this.
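For reference, a more defensive version might look something like this sketch (the separate issue can settle the real behaviour):

```python
# Sketch of a more robust em-map parse: reject odd-length or
# non-numeric input instead of silently producing a bad mapping.
def parse_em_map(value):
    tokens = value.replace(':', ' ').split()
    if len(tokens) % 2 != 0:
        raise ValueError("em-map needs excitation/emission pairs: %r" % value)
    try:
        numbers = [int(tok) for tok in tokens]
    except ValueError:
        raise ValueError("em-map entries must be integers: %r" % value)
    return dict(zip(numbers[::2], numbers[1::2]))
```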
I see that this adds an "Interleave all channels on one Camera" checkbox to all experiments, and the base Experiment class now takes a new interleave parameter. However, it seems that most of the actual logic is only implemented on the zStack experiment. Is that correct? If so, should this not either be done in a central place, or maybe the UI changed so that the checkbox is only available for the z stack experiment?
I am trying to merge this. I can move the GUI control onto the zStack experiment. I see no reason why other experiments might not want to use this, but most of the useful experiments are Z stacks underneath anyway, e.g. a plain time lapse.
Having looked into this, there is no customization in the generic Z stack, so this would add considerable complication for little benefit IMHO.
I propose to pull my 1cam-multi-channel branch into master to implement this functionality. One issue is that it changes the default configs, as it adds the interleave experiment parameter, so the config needs to be reset once the software is upgraded.
Hi @iandobbie, is this interleaved version handling a mixed mode? That is, two cameras, one of which has to take two channels. I have a use case for this: I have two cameras separated by a long pass at some 550 nm. I need to take GFP-green on one camera and, using a dual band, red and far-red on the other one.
I haven't thought about this use case, so I suspect not. I don't think it would be hard to add, but it would need some thought and some testing. The critical thing is to match lasers with cameras and emission wavelengths. If you are willing to have the images taken sequentially, then I think all you would need is some logic to ensure the right camera is triggered with the relevant light source. If you want the two cameras to trigger simultaneously for one image and then just one camera for the second image set, then it would need more thought and code logic.
I would need sequential acquisition in order to minimize crosstalk; at least on my equipment it is not worth doing simultaneous acquisition... What might be interesting is to factor in the acquisition order. You might want to interleave the cameras, that is red_on_cam2, then green_on_cam1, then farRed_on_cam2, the idea being to optimize the timing by giving cam2 time to read out its image while an image is being taken on cam1. I will dive into this if the application comes to need it.
I agree interleaving the cameras seems like an obvious win. I don't think much additional work would be needed, just some way to map the illumination and emission colours to a specific camera. I already have a list which maps excitation to emission for use with multiband filter sets. I seem to remember there was a way to configure the acquisition order too, but I could be misremembering. If I get a chance I'll have a look.
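To make the idea concrete, here is a sketch of Julio's ordering; the channel dictionaries and camera names are illustrative, not existing cockpit config:

```python
from collections import defaultdict

# Illustrative channels: excitation, emission, and an assigned camera.
channels = [
    {"name": "green",  "ex": 488, "em": 525, "camera": "cam1"},
    {"name": "red",    "ex": 561, "em": 580, "camera": "cam2"},
    {"name": "farRed", "ex": 640, "em": 700, "camera": "cam2"},
]

def interleave_cameras(channels):
    """Greedy ordering: take the next channel from the camera with the
    most work left, avoiding the camera used for the previous exposure,
    so one camera can read out while the other exposes."""
    queues = defaultdict(list)
    for c in channels:
        queues[c["camera"]].append(c)
    ordered, last = [], None
    while any(queues.values()):
        cams = sorted((cam for cam, q in queues.items() if q),
                      key=lambda cam: (cam == last, -len(queues[cam])))
        ordered.append(queues[cams[0]].pop(0))
        last = cams[0]
    return ordered

# -> red (cam2), green (cam1), farRed (cam2), matching the order above.
print([c["name"] for c in interleave_cameras(channels)])
```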
This code works and has been used on the Zaber, and I believe on the Aurox system, so it seems silly to let it atrophy. I am keen to merge this into the main branch. I guess I need Tom or Julio to demonstrate that the code doesn't break an existing multi-camera system.
We could then work on Julio's suggestion above to allow the mixed mode, e.g. 3 channels across 2 cameras.
I pulled it up to date with master and the only issue was some documentation edits which obviously got applied differently in the two branches.
The Zaber system has multiple LEDs, so to get multi-channel images we would use a single multi-band filter cube and flash individual LEDs for each acquisition. This needs an experiment which will multiplex multiple channels on a single camera: exposure one uses LED1, take an image on the camera, exposure two uses LED2, then move Z, exposure three is LED1 again, etc...