SZ-qing opened this issue 1 year ago
The model we used for the exhaustive automated labelling stage is available here
The commands for using it are under swin_base_2_class/g2_4
in swin_det_old.md
Thanks. I still want to confirm the order of annotation and the corresponding functions in the label tool:

In the label tool:
    raw image ---> Create RecBox ---> set ROIs
    for roi in 17 ROIs:
        for frame in the ROI's 127 frames:
            Create RecBox ---> Create Mask
Swin exhaustive automated labeling:
    on 11 ROIs:
        use the command under swin_base_2_class/g2_4
    result: 31 ROIs
In the label tool:
    expert labels ipsc or diff in the last frame
    ---> Track All Objects
My understanding is that the label tool first makes 17 ROIs and then labels the cells along with their corresponding location information. Swin is then used to expand the ROIs based on the previously labeled cells. Finally, the expert uses the label tool to label the cells in the last frame and backtracks the cells from the last frame to the first.
What is the difference between the Track All Objects function in the label tool and the propagate_by_tracking command listed under all_frames_roi in mask_ipsc.md (https://github.com/abhineet123/ipsc_prediction/blob/master/ipsc_labelling_tool/cmd/mask_ipsc.md)?
I very much look forward to your reply, as I am trying to understand the article.
Your understanding of the labeling pipeline is correct.
The Track All Objects function tracks each labeled object in the current frame into future frames independently, i.e. one single-object tracker is used for each object.
These trackers do not consider existing objects in the future frames when tracking the objects in the current frame.
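The "one independent tracker per object" behaviour described above might be sketched roughly like this (the class and function names here are illustrative, not taken from the labelling tool):

```python
class SingleObjectTracker:
    """Toy stand-in for one per-object tracker: it only knows its own target."""

    def __init__(self, box):
        self.box = box

    def update(self, frame):
        # A real tracker would search `frame` for its target near self.box;
        # this placeholder simply repeats the previous box.
        return self.box


def track_all_objects(current_boxes, future_frames):
    # One independent tracker per labeled object. The trackers never see each
    # other's output, so existing objects in future frames are ignored.
    trackers = [SingleObjectTracker(b) for b in current_boxes]
    tracks = {i: [t.box] for i, t in enumerate(trackers)}
    for frame in future_frames:
        for i, t in enumerate(trackers):
            tracks[i].append(t.update(frame))
    return tracks


# Two labeled objects tracked independently through two future frames:
tracks = track_all_objects([(10, 10, 30, 30), (50, 50, 80, 90)], [None, None])
```

The key point the sketch illustrates is structural: each tracker only ever consults its own history, never the other objects in the frame.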
propagate_by_tracking works similarly to a multi-object tracker and uses object association to track objects backwards in time.
It assigns IDs and categories to all the existing (uncategorized) objects in each frame by matching them to the objects in the next frame.
It also detects potential cell fusion and division events and asks the user to verify these.
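A minimal sketch of this backward association step, assuming IoU-based matching (the function names, threshold, and matching rule here are illustrative, not the tool's actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0


def associate_backwards(frame_t_boxes, frame_next, iou_thresh=0.3):
    """Assign IDs/categories to uncategorized boxes in frame t by matching
    them to already-labeled objects in frame t+1.

    frame_next: list of (box, obj_id, category) tuples from the later frame.
    """
    labelled = []
    for box in frame_t_boxes:
        best = max(frame_next, key=lambda o: iou(box, o[0]), default=None)
        if best and iou(box, best[0]) >= iou_thresh:
            labelled.append((box, best[1], best[2]))  # inherit ID + category
        else:
            # No confident match: in the real tool this is where potential
            # fusion/division events would be flagged for the user to verify.
            labelled.append((box, None, None))
    return labelled
```

One-to-many matches between consecutive frames are what would indicate a fusion or division event in the real pipeline; the sketch only shows the simple one-to-one inheritance case.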
Using the command line: python3 tools/test.py config=configs/swin/ipsc_2_class_g2_4.py checkpoint=work_dirs/ipsc_2_class_g2_4/epoch_1000.pth eval=bbox,segm test_name=reorg_roi
I want to identify the new ROIs from the raw image again based on the trained SWIN model. But the error message is as follows:
FileNotFoundError: IPSC2Class: [Errno 2] No such file or directory: '/ipsc_prediction/data/ipsc/well3/reco_test_swin/reorg_roi.json'
In configs/base/datasets/ipsc_2_class_g2_4.py:
dataset_type = 'IPSC2Class'
data_root = '/ipsc_prediction/data/ipsc/well3/all_frames_roi/raw_images/'
all_frames_roi_data_root = '/ipsc_prediction/data/ipsc/well3/all_test_swin/'
reorg_roi_data_root = '/ipsc_prediction/data/ipsc/well3/reco_test_swin/'
Does it still need the newly recognized ROI file here?
My understanding was that, after training the Swin detector, the new ROIs could be identified from the original image.
Thank you very much for your help.
Sorry, reorg_roi.json represents a provisional dataset containing the new ROIs we had created in addition to those in all_frames_roi.json.
Some of the ROIs in this new dataset turned out to have too much visual damage to be useful and so we removed these from our final dataset.
Essentially: all_frames_roi + reorg_roi - bad ROIs = ext_reorg_roi, which is the final dataset on which all the models were trained.
These are mentioned in the "Exhaustive Automated Labeling" section of the paper: all_frames_roi contains the "11 of the 17 ROIs from the previous stage", while reorg_roi has 29 ROIs, of which 9 were bad and the remaining "20 new ones" were used to create the total of 31 ROIs in ext_reorg_roi.
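The ROI bookkeeping above can be double-checked with simple set arithmetic (the ROI names here are placeholders; only the counts come from the thread):

```python
# Placeholder ROI names; only the counts are taken from the explanation above.
all_frames_roi = {f"old_{i}" for i in range(11)}  # 11 of the 17 earlier ROIs
reorg_roi = {f"new_{i}" for i in range(29)}       # 29 newly created ROIs
bad_rois = {f"new_{i}" for i in range(9)}         # 9 with too much visual damage

# The final dataset: all_frames_roi + reorg_roi - bad ROIs
ext_reorg_roi = (all_frames_roi | reorg_roi) - bad_rois
print(len(ext_reorg_roi))  # 11 + 20 = 31 ROIs
```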
The reorg_roi.json file is attached here, though, as stated above, it contains some bad ROIs that we have not made publicly available. Let me know if you would like to have a look at these too.
You can just replace reorg_roi.json with ext_reorg_roi.json (also attached) in the config file to run on the full dataset.
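For reference, that substitution in configs/base/datasets/ipsc_2_class_g2_4.py might look roughly like the following. The data-root assignments are copied from the config shown earlier in the thread, but the ann_file key is hypothetical; the actual variable name in the config may differ:

```python
dataset_type = 'IPSC2Class'
data_root = '/ipsc_prediction/data/ipsc/well3/all_frames_roi/raw_images/'
all_frames_roi_data_root = '/ipsc_prediction/data/ipsc/well3/all_test_swin/'
reorg_roi_data_root = '/ipsc_prediction/data/ipsc/well3/reco_test_swin/'

# Point the test annotations at the full dataset instead of reorg_roi.json.
# 'ann_file' is an assumed key name; adapt it to whatever the config uses.
ann_file = reorg_roi_data_root + 'ext_reorg_roi.json'
```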
Thanks, but what I want to ask is how to get new ROIs.
I want to use the trained Swin model you provided to get new ROIs from the raw image. Could the command python3 tools/test.py config=configs/swin/ipsc_2_class_g2_4.py checkpoint=work_dirs/ipsc_2_class_g2_4/epoch_1000.pth eval=bbox,segm test_name=reorg_roi do this?
In configs/base/datasets/ipsc_2_class_g2_4.py:
dataset_type = 'IPSC2Class'
data_root = '/ipsc_prediction/data/ipsc/well3/all_frames_roi/raw_images/'
all_frames_roi_data_root = '/ipsc_prediction/data/ipsc/well3/all_test_swin/'
reorg_roi_data_root = '/ipsc_prediction/data/ipsc/well3/reco_test_swin/'
Which of these settings need attention?
The model trained in the Exhaustive Automated Labeling stage does not generate new ROIs but instead detects all the cells in the new ROIs that we created manually.
These detected cells are not classified into good and bad because this model was trained on class-less labels from the Selective Uncategorized Semi-Automated Labeling stage.
A human expert manually labels all the detected cells in the last frame of each of the 31 new ROIs into good and bad and these labels are then propagated backwards in time.
That means the Exhaustive Automated Labeling stage just detects cells and doesn't create ROIs; the new ROIs are then delineated manually around the identified cells. I misinterpreted the Exhaustive Automated Labeling section of the article: "A Swin transformer instance segmentation model (Liu et al., 2021c) was first trained on the annotations from the previous stage. Next, ROI sequences spanning all 127 frames were created. These were designed to cover as much of the circular well area containing cells as possible while minimizing overlap between different ROIs." I originally thought it was the Swin model that detects cells and automatically delineates ROIs. Thank you.
Hi @abhineet123, I have the same question about how the new ROIs are created. What criteria do you use when creating ROIs manually? In the Exhaustive Automated Labeling part of the paper I saw this statement: "These were designed to cover as much of the circular well area containing cells as possible while minimizing the overlap between different ROIs". In the supplementary, fig. 2 shows 31 ROIs, and there are many overlaps between different ROIs. Perhaps that overlap augments the training images? Looking forward to your reply!
The ROIs were chosen manually by an expert after examining the complete video. There are indeed many overlaps between the ROIs but not as many as this single image suggests.
The cells grow and shrink and move about as the video progresses and each ROI provides a sufficiently distinct perspective on these changes from all other ROIs.
This includes variations in scale, for example, when a smaller ROI is contained within a larger one and this, as you suggest, does have an augmentation effect on the training set.
I forgot that a cell is a living thing that morphs and moves. This resolves my doubts, thank you very much!
Hello, in the Exhaustive Automated Labeling part of your paper you used a Swin transformer instance segmentation model, which added 20 ROIs. Is the code and model for this part available? I haven't seen it made public yet.