Tyelab / bruker_control

Repository for code running the lab's multi-photon imaging experiments at the Bruker Ultima Investigator scope. Unites Arduino, Prairie View, and GenTL machine cameras in Python.
Mozilla Public License 2.0

Configurability: Frame Averaging During Acquisition #106

Open jmdelahanty opened 2 years ago

jmdelahanty commented 2 years ago

At the moment, we have the microscope collect data as quickly as possible. In the Ultima Investigator's case, this happens to be quite close to 30FPS.

The end result is that our data, while certainly usable and yielding reasonable traces, is quite noisy.

This issue is being created based off of two things:

  1. Multiple professors/PhDs have advised me that our frame rate of 30Hz is quite high and probably unnecessary.
  2. The system should generally allow users to choose the frame rate they want, since it's something people would want to change/customize according to their fluorescent indicator.

30 FPS Might Be Unnecessary

The reasons for performing scans at the fastest possible rate with our imaging include, from my memory:

An additional convenience of the 30FPS rate is that the camera recording the subject's face is also recording fast enough to capture much of the motion of the face. Note that the camera takes an image each time the microscope does, as it's triggered by the microscope's TTL start-of-frame output triggers.

Potential Drawbacks of Continuing this Behavior

There are multiple potential drawbacks to continuing this practice of fastest possible scan acquisitions.

Faster scanning speeds means noisier data

In resonant scanning, multi-photon scopes capture data through the rapid movement of mirrors that direct the laser light path to precise points in space with microsecond timing. Each pixel in an image is acquired according to the dwell time specified in Prairie View's software: the dwell time determines how long the laser remains at each point in the field of view. As the laser stimulates each point, photons are emitted from the genetically encoded calcium indicators. These photons strike the Photo-Multiplier Tube's (PMT) Gallium Arsenide Phosphide (GaAsP) surface, which is highly sensitive to light. This triggers a cascade down the PMT in which the electrons excited by the incoming photons are amplified exponentially as they are drawn down the tube by the high voltage supplied to its surfaces. As these electrons are pulled through the tube they generate a current, which is then sampled by the Data Acquisition Card (DAQ). The software then turns the measurements from the card into a pixel intensity within the uint16 (16-bit) range of values.

A consequence of moving as fast as possible through these points in the field of view (FOV) is that the system samples few photons from each point in space. As noted [here](), several kinds of noise are introduced in these types of imaging sessions. If you sweep through each point in space very quickly, the ratio of signal to noise will be smaller; in other words, sampling each position only briefly collects less real signal (biological fluorescence).

Although temporally smoothing the data via a running average can help reduce the relative noise, and it does produce clearer-looking images, multiple people have told me it can be better practice to simply slow down the effective frame rate and acquire better SNR at experimental runtime.
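The square-root scaling behind this argument is easy to demonstrate: photon counts are approximately Poisson-distributed, so averaging N frames improves SNR by roughly sqrt(N). A toy numpy sketch (simulated data, not scope code; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 20.0
# Simulated photon counts: 1000 frames of a 32x32 patch
frames = rng.poisson(mean_photons, size=(1000, 32, 32)).astype(float)

# SNR of raw frames: mean over standard deviation of pixel values
raw_snr = frames.mean() / frames.std()

# Average pairs of frames -- what 2-frame averaging (30Hz -> 15Hz effective) does
pairs = frames.reshape(500, 2, 32, 32).mean(axis=1)
avg_snr = pairs.mean() / pairs.std()

print(raw_snr, avg_snr, avg_snr / raw_snr)  # ratio comes out close to sqrt(2) ~ 1.41
```

For Poisson counts the raw SNR is about sqrt(mean_photons), so the only way to raise it at acquisition time is to collect more photons per point, whether by averaging, longer dwell, or more laser power.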

One scientist I briefly messaged online, Dr. Masayuki Sakamoto from Kyoto University, had this to say:

I agree that your imaging data is a little bit noisy. I found many horizontal lines, and I think you should modify the resonant scan settings... I am also imaging with GRIN lenses ... [but] at 15Hz even with a resonant scan... [The] slower the image is taken, the brighter the image and the better the signal-to-noise ratio. For calcium imaging, we consider 15Hz to be fast enough. The horizontal lines of 30Hz imaging are probably caused by bi-directional scanning.

Another scientist I messaged, Dr. Nuné Martiros (who, interestingly, seemed to know Kay and Romy!), had this to say:

I would [definitely] recommend doing a slower frame rate, maybe 10Hz, which would allow you to gather more photons from each pixel. Even with gcamp7f, the signal itself will not be anywhere that fast, so imaging with that high of a frequency is not really giving you better temporal resolution.

There are other filtering steps that some programs/people have suggested to perform on the data as well but that can be part of a different issue somewhere else.

Faster scanning means bigger data

Simply put, the more samples you take, the more datapoints you have! The more datapoints you have, the more data you have to stuff into a file! At the moment, a single-channel raw binary from Bruker at 30Hz comes out to approximately 75GB. If we ran the scope at 15Hz we would cut that in half!

Currently an approximately 25 minute recording at 30Hz yields about 45k tiff images totaling about 22GB. We could end up with higher quality imaging that only has 11GB total for a given subject's recording day! The savings long term could be quite high if we modify this parameter and just collect what we need for imaging calcium indicators.

If we were to use Python and Dask to go to H5/Zarr with lossless compression, those file sizes could be reduced to just 7-8GB per recording! Note that 2-photon data doesn't compress very well because it's inherently noisy; there is shot noise that's always present.
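As a sanity check on those numbers, a back-of-envelope estimate assuming uncompressed single-channel uint16 frames at 512x512 (frame size and channel count are assumptions, not measured):

```python
# Rough storage estimate for a 25-minute recording at two frame rates.
bytes_per_frame = 512 * 512 * 2  # uint16 = 2 bytes per pixel
minutes = 25
for hz in (30, 15):
    n_frames = hz * minutes * 60
    gb = n_frames * bytes_per_frame / 1e9
    print(f"{hz} Hz: {n_frames} frames, ~{gb:.1f} GB raw")
```

Under these assumptions, 30Hz gives 45,000 frames and roughly 23-24GB, close to the ~45k tiffs / ~22GB figure quoted above, and halving the effective rate halves the total.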

Configurability of Scope Frame Rate

This should be configurable inside the configuration file. Something as simple as:

"frame_rate" : 15

in the configuration .json file could be how it's stored and implemented. To make this succeed, should we choose to do it, we would take that frame rate from the file and then:
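A minimal sketch of reading that key, assuming the "frame_rate" entry above (the function name and the divide-evenly rule are my assumptions, not repo code):

```python
import json

SCOPE_RATE_HZ = 30  # the resonant scanner's fixed acquisition rate

def frames_to_average(config_path: str) -> int:
    """Read the requested frame rate from the experiment's .json config
    and translate it into a frame-averaging count, since the resonant
    scanner itself always runs at ~30Hz."""
    with open(config_path) as f:
        config = json.load(f)
    requested = config["frame_rate"]  # e.g. 15
    if requested <= 0 or SCOPE_RATE_HZ % requested != 0:
        raise ValueError("frame_rate must evenly divide 30 Hz")
    return SCOPE_RATE_HZ // requested  # e.g. 2 for an effective 15 FPS
```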

mannk2 commented 2 years ago

Hi there Jeremy. I have a few points to make regarding your thoughts on this.

Yes, people do tend to default to the 'image as fast as possible and deal with it later' method. I'm even one of those people. In the case where generating too much data is burdensome, I totally get what you mean. Imaging at 15Hz is probably fine for calcium imaging. Certainly even 10Hz is good enough depending on what you're doing. With that in mind, there are a few points to be made regarding the system (and other systems as well, as the principles should be the same on other scopes).

1) Resonant scanning speed cannot be changed. This is a result of the fact that the resonant scanner runs at a fixed frequency. The only thing one can do to alter that is to change the number of lines you scan (adding more takes longer, for example). Therefore when you're imaging at 30Hz in resonant mode, you cannot actually tell the scan head to slow down. You could, for example, take a single frame (~33ms), and blank the laser for 33ms, and then collect another frame. This would skip a frame, giving you 15Hz imaging, but would still end up with no change in noise. You can, however, simply use frame averaging. In this case, the system still scans at 30Hz, but averages frames before dumping them. This will reduce your noise and also your file size, and seems to be what you want to do.

2) Reducing your X spatial resolution while in resonant mode (looks like you do 512x512?) can also reduce noise by changing the dwell time per pixel. As the scan head sweeps the sample, the PMT is sampled as many times as possible per pixel. By reducing X resolution, the scan head will sweep at the same speed of course, but you will average across more PMT samples per pixel, giving you better SNR (the 'enable multisampling' feature, which should be on, will report how many times a pixel will sample the PMT in resonant mode). This is a great way to reduce the size of your files and increase your SNR, as you're collecting fewer pixels. However, as with the temporal resolution, use the spatial resolution that your experiment requires. Maybe you don't actually need 512 X pixels, just as you don't need 30 frames per second. It'll help with both issues here. This is also a post-hoc option, as you can reduce the spatial resolution of already-collected data by averaging neighboring pixels and achieve the same outcome.

3) Slowing imaging rates in galvo mode is indeed possible, as the galvo scan heads can be controlled in this manner. You can select a dwell time that corresponds to a desired frame rate, but it's going to be on the slow side. Reducing scan speed here is similar to a frame average in resonant mode, except that you're doing pixel averaging. I'm not sure what method Dr. Sakamoto is using, but the effect of reduced scan speed on a Bruker system is simply to increase the number of samples taken from the PMT due to the longer dwell time of the laser on each pixel. This will not make the sample brighter, but it will make it less noisy. In situations where you have a very dim sample and want to increase the brightness, you can certainly sum pixels post-hoc. This doesn't get around the issue of generating more data than desired, but perhaps early in the pipeline every n frames can be summed and then turned into a downsampled t-series, just like with frame averaging. This is more of a niche application though, for dim samples.
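For reference, the post-hoc versions of the two operations described above, averaging every n consecutive frames and averaging neighboring X pixels, can be sketched in numpy (helper names are mine, not repo code):

```python
import numpy as np

def average_frames(stack: np.ndarray, n: int) -> np.ndarray:
    """Average every n consecutive frames of a (frames, y, x) stack,
    mimicking on-scope frame averaging; trailing frames that don't
    fill a complete group are dropped."""
    usable = (stack.shape[0] // n) * n
    return stack[:usable].reshape(-1, n, *stack.shape[1:]).mean(axis=1)

def downsample_x(frame: np.ndarray, factor: int) -> np.ndarray:
    """Average neighboring pixels along X (e.g. 512 -> 256 with factor=2),
    the post-hoc analogue of collecting fewer X pixels on the scope."""
    y, x = frame.shape
    return frame.reshape(y, x // factor, factor).mean(axis=2)
```

Running 30Hz data through `average_frames(stack, 2)` yields an effective 15Hz series with half the frames, which makes it easy to compare against data averaged on the scope before committing to a setting.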

Please let me know if this helps at all. I really do appreciate the care with which you put together these posts.

-Kevin

jmdelahanty commented 2 years ago

Hey Kevin! Thank you so much for this thorough reply about this!

Resonant scanning speed cannot be changed. This is a result of the fact that the resonant scanner runs at a fixed frequency.

That reminds me of what Michael and Steve told me! They said that it runs at a fixed 8kHz frequency if I remember correctly... Thank you for the clarification.

You can, however, simply use frame averaging. In this case, the system still scans at 30Hz, but averages frames before dumping them. This will reduce your noise and also your file size, and seems to be what you want to do.

I like this idea! It's along the lines of what we'd like to do. The data isn't (yet) too large for us to hold onto/use, but there are certain things that will run faster if it has to churn through less data! I talked with Austin yesterday about this and he said he plans to continue recording at 30Hz and will likely continue not to perform frame averaging before putting the data into suite2p.

reducing your X spatial resolution while in resonant mode (looks like you do 512x512?) can also reduce noise by changing the dwell time per pixel... Maybe you don't actually need 512 X pixels just as you don't need 30 frames per second. It'll help with both issues here.

Honestly never occurred to me! It might be worth trying out this modification and see how images come out. I think trying it first by modifying data we have would be interesting. It can't be too hard to rescale things to that resolution right? May as well find out!

In situations where you have a very dim sample and want to increase the brightness, you can certainly sum pixels post-hoc. This doesn't get around the issue of generating more data than desired, but perhaps early in the pipeline every n frames can be summed and then turned into a downsampled t-series, just like with frame averaging.

This is something that Deryn does for samples with very low SNR/very sparse recordings (something like 20 neurons only seen in the FOV!). Something that we've discussed here in the lab so far is how we can document when we decide to do frame averaging like that. Is there an average brightness or something that we should use as a cutoff? We don't want to just say, "Well in these experiments we did averaging because it seemed like it helps." Ideally we could describe numerical cutoffs of some kind through empirical testing of things.

mannk2 commented 2 years ago

Awesome, I'm glad to help. I also like the forum; it's hopefully a good place for people to ask questions and talk about options. I admit I've never used GitHub that way before.

Oh, also a gentle nudge to sign that document. I can also send it directly to Kay, but professors are often hard to get to respond to things like that.

cheers!

jmdelahanty commented 2 years ago

I actually have the discussions tab enabled here in case stuff like this ever gets brought up! We can move it there if you'd like. I think you're the first person other than me in the whole world that has commented here!

I sent along the signed form to your Bruker email I think, I can resend if it got lost! It went out just after 1PM Pacific.

jmdelahanty commented 2 years ago

Hey @mannk2 ! Interesting thread re framerates that might be of interest to you as well. Let me know what you think!

jmdelahanty commented 1 year ago

I talked to members of Bruker's team at SfN, and Jimmy Fong told me that all we need to do is use a setstate command to have frame averaging perform an average every 2 frames to get toward the 15 FPS framerate. Adding a prairieview_utils function to do this should be pretty easy, but if we were to do it, it would harm how we gather video data of the mouse's face.
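A sketch of what such a helper's logic might look like. Only the string-building is shown (no connection to Prairie View), and the exact setstate command syntax and state key for frame averaging must come from Bruker's documentation; "rasterAveragingCount" below is a hypothetical placeholder:

```python
# Sketch of a prairieview_utils-style helper; the "-SetState" key name
# "rasterAveragingCount" is a hypothetical placeholder, NOT a confirmed
# Prairie Link command -- check Bruker's docs for the real syntax.
SCOPE_RATE_HZ = 30  # fixed resonant-scan acquisition rate

def frame_average_command(target_hz: int) -> str:
    """Build a script command string asking Prairie View to average
    every (30 // target_hz) frames, e.g. every 2 frames for ~15 FPS."""
    if target_hz <= 0 or SCOPE_RATE_HZ % target_hz != 0:
        raise ValueError("target rate must evenly divide 30 Hz")
    n = SCOPE_RATE_HZ // target_hz
    return f"-SetState rasterAveragingCount {n}"  # key name is assumed
```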

EDIT 1/11/23:

I don't think it actually would harm the way video data is collected with the current setup, since the frame triggers are output every time given what Kevin Mann, PhD taught us in this thread above. I also just tested in Prairie View with the oscilloscope, and the framerate is exactly the same even when doing averaging. It's all software averaging, so it has no effect on when the images are actually taken. Cool to validate.

jmdelahanty commented 1 year ago

From #137's Kameron Clayton, PhD, here are his thoughts on framerates to use:

In terms of frame rate, we like to collect data at 30 Hz. I prefer not to average online so as to future-proof my data; you never know what a new ROI segmentation analysis pipeline will require to work well, so I will always err on the side of recording too much data. In principle, you could perform downsampling all you want offline. It would be interesting to compare outputs of Suite2P/CAIMAN on the same data with different effective sampling rates.

That said, you are totally correct in your intuition. 2 Hz is probably sufficient to capture all calcium related events, when people do multiplane imaging using a piezo (e.g. Stringer et al. Science), they end up with this sort of frame rate across all 5-10 planes they image.

jmdelahanty commented 1 year ago

From Dr. Ryoma Hattori in the Komiyama Lab at UCSD last year:

We typically use the maximum frame rate for the FOV size we use, and it is usually ~30Hz for most of our imaging experiments. We extract the 30Hz fluorescence data from ROIs, then deconvolve them before analyses. For most neural activity analyses, we usually temporally average some data points to suppress noise in the data. I agree with Assaf that SNR will not improve by simply decreasing the frame rate, at least with our Thorlabs microscopes. Even if you can improve SNR somehow by simply decreasing the frame rate with your microscope, noisier images at a higher frame rate would always contain more information. If SNR is the issue, you just need to downsample images by temporal averaging after image acquisition for visualization (e.g. a nice movie of calcium signals). If you can increase laser power, increase the power. Some people image at a very low rate (e.g. 2.5-3Hz for GCaMP6s in some papers from the Harris/Carandini Lab), but that is because they image many planes. Low rate data may be fine for most data analyses, but, for example, the accuracy of deconvolution algorithms can suffer at lower rates. Our collaborator developed a new deconvolution algorithm (unpublished) that infers exact spike timing from low rate calcium signals, and the temporal accuracy of spike timing estimation suffers a lot with lower frame rate data. However, most people do not need exact spike timing from calcium, and I think you probably do not need to worry too much about deconvolution accuracy as long as the frame rate is >10Hz.

We use ScanImage software for image acquisition, which has a function to perform rolling averaging for visualization on its GUI during imaging. We typically do 20-frame averaging for the visualization, but the saved data is raw 30Hz data. We also do averaging with a large window size (e.g. 50-frame moving averaging) after motion correction so that we can inspect motion correction quality easily. For registration and ROI detection, we use raw frames. I would not recommend averaging before motion correction because the frames used for the averaging are not registered. Also, I think ROI detection performance would not improve by averaging if you are using suite2p's algorithm. As for bit depth, I would not recommend 8-bit; its range is probably too narrow.
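The rolling averaging for visualization that Dr. Hattori describes, where the raw frames stay untouched on disk, can be sketched post-hoc in numpy (helper name is mine; window size is the user's choice, e.g. 20 frames):

```python
import numpy as np

def rolling_average(stack: np.ndarray, window: int) -> np.ndarray:
    """Moving average along the frame axis of a (frames, y, x) stack,
    for visualization only. Output has stack.shape[0] - window + 1
    frames; each output frame is the mean of `window` raw frames."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda trace: np.convolve(trace, kernel, mode="valid"), 0, stack
    )
```

Unlike block averaging, this keeps the frame rate (minus edge frames) and only smooths the display, so it is safe to apply after motion correction for inspection without touching the data used for registration or ROI detection.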

jmdelahanty commented 1 year ago

Renamed the issue since you can't actually change the framerate of the scope, but you can do software averaging during acquisition, which people might want to do.

jmdelahanty commented 1 year ago

I asked about max speed, what it really means, and when it'll actually matter for recordings; here's what Kevin taught me over a couple of short communications. It would be relevant for multiplane imaging if we had a piezo for moving between planes. I'm not sure if this influences things when using an ETL, which the new scope could use:

This sets up to collect the data essentially uninterrupted. It sounds like there is only a marginal difference when imaging frames, as there really isn't any significant amount of time for the y galvo to reset for each frame. The speed gain comes from volume imaging, where it won't wait for the z device to fully settle and will just take a frame while in motion.

This results in a slightly tilted frame in z, depending on how far you are moving/driving the z device.