Closed: Eddymorphling closed this issue 4 months ago.
Forgot to add - I also see that a lot of points are being flagged as outside the atlas.
2024-02-20 10:25:24 AM - INFO - MainProcess transform.py:70 - Ignoring point: [2731, 252, 1546] as it falls outside the atlas.
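For context, here is a minimal sketch of the kind of bounds check that produces this log message. The helper name is hypothetical (the actual brainglobe transform code may differ), and the allen_mouse_10um volume shape used below is approximate:

```python
import logging

import numpy as np

logging.basicConfig(level=logging.INFO)


def filter_points_in_atlas(points, atlas_shape):
    """Keep only points that fall inside the atlas volume.

    A sketch of the bounds check behind the "Ignoring point ... as it
    falls outside the atlas" log line; not the real implementation.
    """
    kept = []
    for point in points:
        if all(0 <= coord < size for coord, size in zip(point, atlas_shape)):
            kept.append(point)
        else:
            logging.info(
                "Ignoring point: %s as it falls outside the atlas.", point
            )
    return np.array(kept)


# allen_mouse_10um is roughly (1320, 800, 1140) voxels
atlas_shape = (1320, 800, 1140)
points = [[100, 200, 300], [2731, 252, 1546]]  # second point is out of bounds
print(filter_points_in_atlas(points, atlas_shape))
```

A point like `[2731, 252, 1546]` fails the check on its first coordinate alone, which is why a poor registration (or an overflow bug producing bogus coordinates) makes so many points vanish from the summary.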
Have you checked the registration accuracy? This often happens if the registration results are poor.
The registration looks good. It's strange that the "cells" labelled in napari are not being considered in the final summary.csv. Just an FYI - I think I also noticed this recently after updating my PyPI packages for brainglobe-workflows and brainreg to the latest versions.
Which atlas are you using? I added this feature recently and I may (probably) have overlooked something.
Thanks @adamltyson. I am using allen_mouse_10um.
This issue only happens in a few datasets, not all. Any advice? Thank you.
Hey @Eddymorphling - would you be able to provide the cellfinder output directories for one dataset where this happens, and one where it doesn't, please? I'd be happy to investigate further.
Yes, of course. Here it is - example.zip. Thank you.
Just to add, I noticed in the brainglobe-workflows log files for the "no-cells" dataset that almost all cells were ignored as falling outside the atlas. But my atlas registration looks good, to be honest. Do let me know if you would like me to share this log file.
Sorry to bother you again; I thought I'd try setting up a new environment and re-running the analysis. Everything worked out well, but when I run brainmapper -h, I get the error below.
(brainglobe_v2) [ivm@vn2013is10 ~]$ brainmapper -h
Traceback (most recent call last):
File "/home/ivm/conda/envs/brainglobe_v2/bin/brainmapper", line 5, in <module>
from brainglobe_workflows.brainmapper.main import main
File "/home/ivm/conda/envs/brainglobe_v2/lib/python3.10/site-packages/brainglobe_workflows/brainmapper/main.py", line 15, in <module>
import bg_space as bgs
Looks like the bg-space renaming has not been propagated to one of the scripts. I had to pip install bg-space to get past the error. Also, not sure if this is related to the issue mentioned above?
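The underlying problem is that the bg-space package was renamed (to brainglobe-space) while a workflow script still imports the old module name. A rename-tolerant import can be sketched with a small helper; `import_first_available` is a hypothetical name, not part of any brainglobe package:

```python
import importlib


def import_first_available(*names):
    """Import and return the first module in `names` that exists.

    Useful when a package has been renamed (e.g. bg_space ->
    brainglobe_space) and code must tolerate either name during the
    transition.
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names} could be imported")


# In a workflow script this pattern could replace `import bg_space as bgs`:
# bgs = import_first_available("brainglobe_space", "bg_space")
```

This is only a stopgap for user environments; the proper fix is updating the import in the package itself, as the separate issue below tracks.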
Hey! Yes, having a look at the log file would be useful. Visualising the points in napari makes me think this is connected to https://github.com/brainglobe/cellfinder/issues/383
Presumably there are no obvious imaging artefacts that would explain cellfinder detecting lots of cell candidates in a cuboid shape, ventrally and anterior to the brain?
Looks like the bg-space renaming has not been propagated to one of the scripts. I had to pip install bg-space to get past the error. Also, not sure if this is related to the issue mentioned above?
I wouldn't think so, but thanks for reporting. I've opened a separate issue to try and reproduce this.
Here is the log file for the "no-cells" folder - no-cells-logs.zip
Those points in the cuboids are false artefacts from the sample holder that was used to mount the brain onto the lightsheet. But if you think this could pose an issue, I will use --start-plane and --end-plane to avoid these false artefacts.
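In terms of the image stack, restricting detection to a plane range is just slicing along the z axis. A toy sketch of what the two options effectively do (illustrative only; the exact semantics of the brainmapper options may differ):

```python
import numpy as np


def crop_planes(stack, start_plane, end_plane):
    """Keep only planes start_plane..end_plane-1 of a z-stack.

    This drops planes that contain non-brain signal such as the
    sample-holder artefact described above.
    """
    return stack[start_plane:end_plane]


# Hypothetical 3000-plane stack with artefacts in the first 200 planes
stack = np.zeros((3000, 16, 16), dtype=np.uint16)
cropped = crop_planes(stack, start_plane=200, end_plane=3000)
print(cropped.shape)  # (2800, 16, 16)
```

Cropping like this reduces the number of spurious candidates, which also keeps the total cell count lower (relevant to the overflow discussion below).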
Visualising the points in napari makes me think this is connected to brainglobe/cellfinder#383
Thanks for the tip @alessandrofelder. Just read this, so maybe I am detecting >65000 cells and that explains it? I guess I will have to use --start-plane and --end-plane to keep it below 65000 cells for now. Is there a planned update to fix this issue? I guess, as you mention in the PR, using an unsigned 32-bit integer for cell detection would be a fix :)
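The overflow hypothesis is easy to demonstrate: casting cell IDs above 65535 to an unsigned 16-bit type wraps them around, so later cells collide with earlier IDs (or with 0, a typical background label):

```python
import numpy as np

n_cells = 70_000  # more cells than a uint16 can label
ids = np.arange(n_cells, dtype=np.uint64)

labels_16 = ids.astype(np.uint16)  # wraps modulo 65536
labels_32 = ids.astype(np.uint32)  # every ID stays distinct

print(labels_16.max())   # 65535 - no ID above this survives the cast
print(labels_16[65536])  # 0 - cell 65536 wrapped to the background value
print(labels_32[65536])  # 65536 - uint32 keeps it intact
```

This is a generic NumPy demonstration of the wrap-around, not the actual cellfinder detection code, but it shows why switching the detection dtype to uint32 (as in the branch mentioned below) resolves the missing-cells symptom.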
Just read this, so maybe I am detecting cells >65000 and that explains it?
Yeah, I suspect so. If you wouldn't mind testing this hypothesis, you could pip install git+https://github.com/brainglobe/cellfinder@switch-detection-to-uint32 in your conda environment and then run brainmapper again?
I am hoping to make a new release including this fix in the next few days.
Sure, can do, thank you. I can only report back in around 24 hours, as it takes a day to run the cell detection on the cluster :)
That would be great, thank you.
Thanks for the help, this has been a very productive thread. Made my day!
Hi @alessandrofelder, reporting halfway through the run. The log file still seems to ignore a lot of points. Here is a snippet:
2024-02-22 11:17:57 AM - INFO - MainProcess transform.py:70 - Ignoring point: [2411, 482, 1453] as it falls outside the atlas.
2024-02-22 11:17:57 AM - INFO - MainProcess transform.py:70 - Ignoring point: [2411, 482, 1423] as it falls outside the atlas.
I did not run the registration again, but just re-ran the cell detection/classification/analysis after installing from the branch you suggested.
Ah apologies. I spoke too soon. Results look good after the first dataset. I am now testing it on my other datasets that did not output the cell counts properly. Will keep you posted!
Results look good after the first dataset.
:crossed_fingers: thanks - keep us posted
Reporting back in. Everything works like a charm now! Thank you :)
Just out of curiosity, was anything updated in the recent releases? My runs are way faster now, taking only around 3-4 hours (registration + detection) compared to 9-10 hours in the past.
I spoke to @alessandrofelder and he thinks this speed up is also due to the same bug fix.
The best of both worlds, this is great!
Hi All, I have been using brainglobe-workflows for a while now with a custom trained model. I noticed a small issue recently. After running the entire workflow, I see that many cells are detected as "cells" when I open the output folder in napari (screenshot below). But when I look into the "analysis" folder that contains the summary.csv file, I find very few cell counts there (screenshot below). A lot of them are missing or not included. Is there a reason why this happens? My understanding is that all "cells" that I see in the napari plugin are automatically included when assigned a brain region in the summary.csv file. This issue only happens in a few datasets, not all. Any advice? Thank you.