peng-lab / BaSiCPy

MIT License

General questions about model results and preprocessing of data #146

Closed adrtsc closed 8 months ago

adrtsc commented 8 months ago

Hi everyone,

It's great to see a Python implementation of this approach! I have a few general questions about how to apply the models and preprocess the data, which you can hopefully help me with.

We collect large amounts of imaging data and we would like to integrate BaSiCPy into our workflow for illumination correction and background subtraction. The images we collect are typically fluorescent microscopy images of cultured cells with different fluorescent stainings. The background (empty space) in these images is typically around 100 - 110 gray values. Our empirical darkfield images acquired with this microscope and camera are very flat and also usually in the range of 100 - 110 gray values. What we would like to achieve is to get rid of:

  1. uneven illumination
  2. background/camera offset (background pixels should have values around 0 after processing)

I've played around a bit with BaSiCPy to correct illumination artifacts on these images. So far, I pass an unprocessed image stack of shape (100, 2160, 2560) (i.e. 100 images of shape (2160, 2560)) as input. I then follow the example notebooks and do this:

from basicpy import BaSiC

# estimate flatfield, darkfield and per-image baseline from the stack
basic = BaSiC(get_darkfield=True, smoothness_flatfield=1)
basic.fit(image_stack)
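
For context, this is roughly what I see after fitting (attribute names taken from the example notebooks):

print(basic.flatfield.shape)  # (2160, 2560) multiplicative shading profile
print(basic.darkfield.shape)  # (2160, 2560) additive profile; all zeros in my case
print(basic.baseline)         # per-image baseline estimates, sitting at around 100 - 110 here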

So far so good, and everything works. I now get the flatfield, darkfield and baseline estimates. However, here I start to have a few questions:

  1. The darkfield estimates I get are always arrays filled with zeros, whereas our empirically acquired darkfield images sit at around 100 - 110 gray values (a level that seems to be captured most closely by the "baseline" value). Is this expected, or should I tune some parameters?
  2. When calling basic.transform(image), I understand that the following operation is carried out: (img - darkfield) / flatfield. This means that background pixels still sit at an intensity of around 100 in our case, while my expected result would be background values close to 0. What would be the correct way to achieve this?
    1. Should the baseline value just be subtracted, as in (img - darkfield) / flatfield - baseline (see the sketch below)?
    2. Should the camera offset be subtracted from the images before even fitting the model?
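
To make options 1 and 2 concrete, here is the kind of thing I have in mind, continuing from the snippet above (just a sketch; the names offset and basic_v2 are only for illustration, and whether the baseline is meant to be used this way is exactly what I am unsure about):

# option 1: standard correction, then subtract the per-image baseline
corrected = basic.transform(image_stack)               # my understanding: (img - darkfield) / flatfield
corrected = corrected - basic.baseline[:, None, None]  # assumes one baseline value per input image

# option 2: subtract the constant camera offset (~100 gray values) before fitting
offset = 100
basic_v2 = BaSiC(get_darkfield=True, smoothness_flatfield=1)
basic_v2.fit(image_stack.astype(float) - offset)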

I would be happy to hear your insights on how to approach this correctly. Thanks!

tying84 commented 8 months ago

Dear Adrian,

Thank you for using BaSiCPy. Your issue is a bit complicated, and I need to check your images. Could you send me an email at my main address:

@.***

We could set up a meeting next week to discuss your images and how to apply BaSiCPy correctly.

Best,

Tingying
