Follow-up! The installation worked; it had to do with the Python version.
I have another question. I am training BoxSnake on my own custom dataset, which contains microscopic data. The training runs smoothly at the start, with the segmentation AP rising alongside the detection AP, but after a while the segmentation AP starts to decrease while the detection AP keeps rising.
Are there any parameters in the config file related to POLYGON_HEAD or BOX_SUP that I have to change, other than NUM_CLASSES etc.?
I will be awaiting your response.
Thank you again :)
Hi, can you tell me the size of your microscopic dataset? And can you visualize some cases?
Thank you very much for your reply,
The dataset I am using is the LIVECell dataset: https://github.com/sartorius-research/LIVECell
Below you can find the AP decline case I am talking about:
Evaluation results for testall at 1000, 1500, 2000, and 2500 iterations (d2.evaluation.testing, copypaste format):

Iter  Task  AP       AP50     AP75     APs      APm      APl
1000  bbox  31.5135  65.8799  27.6709  32.3391  32.9443  32.1784
1000  segm  27.3450  59.6818  22.6698  26.1375  27.3899  34.2320
1500  bbox  35.1612  70.8197  32.3604  37.6642  32.4191  32.3361
1500  segm  25.8201  64.4165  14.7160  24.7313  25.6564  33.5879
2000  bbox  37.6725  73.6499  35.7813  39.2702  38.2316  39.5760
2000  segm  21.3821  61.3990   7.6428  18.1947  23.4008  35.2326
2500  bbox  37.3242  74.8505  34.4125  39.5199  36.0486  39.0514
2500  segm  16.4391  56.2552   3.4211  12.2411  20.9670  33.4366
The segmentation AP falls off sharply after this.
Below is the config file I am using; I made changes related to the LIVECell dataset:
BASE: "../Base-BoxSnake-RCNN-FPN.yaml" OUTPUT_DIR: "/raid/nabeelk/nabeelk/nabeelk/output_boxsnake/" MODEL: WEIGHTS: "/home/nabeelk/BoxSnake-master/configs/COCO-InstanceSegmentation/BoxSnake_RCNN/boxsnake_rcnn_R_50_FPN_coco_1x.pth" MASK_ON: True ROI_MASK_HEAD: NAME: "PolygonHead" POOLER_TYPE: "" POLYGON_HEAD: IN_FEATURES: ["p2", "p3", "p4", "p5"] PRED_WITHIN_BOX: False POLY_NUM_PTS: 64 CLS_AGNOSTIC_MASK: True PREPOOL: False UPSAMPLING: False FPN: NORM: "SyncBN" ANCHOR_GENERATOR: SIZES: [[8], [16], [32], [64], [128]] # One size for each in feature map ASPECT_RATIOS: [[0.5, 1.0, 2.0, 3.0, 4.0]] # Three aspect ratios (same for all in feature maps)
DIFFRAS: RESOLUTIONS: (64, 64, 64, 64, 64, 64, 64, 64) USE_RASTERIZED_GT: False INV_SMOOTHNESS_SCHED: (0.1,) RESNETS: DEPTH: 50 ROI_HEADS: NUM_CLASSES: 1 BATCH_SIZE_PER_IMAGE: 512 PROPOSAL_ONLY_GT: False BOX_SUP: ENABLE: True LOSS_POINTS_PROJ: True LOSS_POINTS_PROJ_WEIGHT: 1.0 LOSS_LOCAL_PAIRWISE: True LOSS_PAIRWISE_WARMUP_ITER: 10000 LOCAL_PAIRWISE_KERNEL_SIZE: 3 LOCAL_PAIRWISE_DILATION: 2 LOSS_LOCAL_PAIRWISE_WEIGHT: 0.5 LOSS_GLOBAL_PAIRWISE: True LOSS_GLOBAL_PAIRWISE_WEIGHT: 0.03 CROP_PREDICTS: True CROP_SIZE: 64 MASK_PADDING_SIZE: 4
RPN: IN_FEATURES: ["p2", "p3", "p4", "p5", "p6"] BATCH_SIZE_PER_IMAGE: 256 POST_NMS_TOPK_TEST: 3000 POST_NMS_TOPK_TRAIN: 3000 PRE_NMS_TOPK_TEST: 6000 PRE_NMS_TOPK_TRAIN: 12000 RETINANET: NUM_CLASSES: 1 TOPK_CANDIDATES_TEST: 3000 PIXEL_MEAN: [128, 128, 128] PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
OPTIMIZER: "ADAM"
BASE_LR: 1e-4
WEIGHT_DECAY: 0.1
WEIGHT_DECAY_NORM: 0.0
STEPS: (17000, 18000)
MAX_ITER: 20000
IMS_PER_BATCH: 16
CHECKPOINT_PERIOD: 500
CLIP_GRADIENTS:
ENABLED: False
DATASETS:
TRAIN: ("trainall",)
TEST: ("testall",)
INPUT:
MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
VIS_PERIOD: 0
TEST:
DETECTIONS_PER_IMAGE: 3000
EVAL_PERIOD: 500
DATALOADER:
NUM_WORKERS: 12
[04/02 18:41:15] detectron2 INFO: Running with full config:
CFG_FILE_STR: BoxSnake_RCNN/allconfig.yaml
COMMENT: NONE
CUDNN_BENCHMARK: false
DATALOADER:
  ASPECT_RATIO_GROUPING: true
  FILTER_EMPTY_ANNOTATIONS: true
  NUM_WORKERS: 12
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: []
  PROPOSAL_FILES_TRAIN: []
  TEST:
Please let me know what else can be done. Thank you again :)
You can refer to this config to revise your config in terms of ANCHOR_GENERATOR:
ANCHOR_GENERATOR:
  SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each in feature map
  ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all in feature maps)
This will better fit the object sizes in your dataset.
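If you want to double-check against LIVECell itself, here is a rough sketch in plain Python over the COCO-format annotation json; the file path and percentile choices are placeholders of mine, not part of BoxSnake. It reports the box-scale and aspect-ratio distribution so you can confirm the anchors cover it:

import json
import math

# Placeholder path: point this at your LIVECell COCO-format annotation file.
ANN_FILE = "path/to/livecell_coco_train.json"

with open(ANN_FILE) as f:
    coco = json.load(f)

scales, ratios = [], []
for ann in coco["annotations"]:
    _, _, w, h = ann["bbox"]  # COCO bbox format is [x, y, width, height]
    if w > 0 and h > 0:
        scales.append(math.sqrt(w * h))  # comparable to ANCHOR_GENERATOR.SIZES (pixels)
        ratios.append(h / w)             # detectron2 anchors use height/width ratios

scales.sort()
ratios.sort()

def pct(vals, q):
    # Simple percentile by index; fine for a quick sanity check.
    return vals[int(q * (len(vals) - 1))]

print("box scale p5/p50/p95:", [round(pct(scales, q), 1) for q in (0.05, 0.5, 0.95)])
print("h/w ratio p5/p50/p95:", [round(pct(ratios, q), 2) for q in (0.05, 0.5, 0.95)])
# If most scales fall roughly between 4 and 127 px, anchor SIZES such as
# [[4], [9], [17], [31], [64], [127]] should cover them.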
In addition, the default parameters are set for a common environment. For cell segmentation, the RGB color may influence the model's performance, and especially in the weakly supervised setting the model is prone to overfitting. So you can adjust the PAIRWISE.SIGMA value on this line to find the best sigma for the pairwise loss on your dataset.
I have tried changing the local pairwise sigma to 0.5, 1.0, 1.5, 2.4, 2.5, 2.6, 3.0, and 3.5, but unfortunately it doesn't make much of a difference.
Are there any other parameters I can play around with? Thank you
Can you visualize some samples?
python demo/demo.py \
--config-file configs/COCO-InstanceSegmentation/BoxSnake_RCNN/boxsnake_rcnn_R_50_FPN_1x.yaml \
--input demo/demo.jpg \
--output ${/your/visualized/dir} \
--confidence-threshold 0.5 \
--opts MODEL.WEIGHTS ${your/checkpoints/boxsnake_rcnn_R_50_FPN_coco_1x.pth}
Here are some results from the model before it starts to overfit.
And this is a result after the model overfits:
You can see that the colors of the objects are similar to the background, so you need to reduce the sigma of the local pairwise loss, since our method mainly relies on the RGB color feature. Can you try sigma=0.1 or sigma=0.01?
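For intuition only: assuming an exponential colour-similarity kernel of the form exp(-d / sigma), which may not be BoxSnake's exact pairwise formulation, a tiny numeric sketch shows why a smaller sigma helps when object and background colours are close:

import numpy as np

# Toy illustration only. It assumes a similarity kernel exp(-d / sigma) between
# neighbouring pixels; BoxSnake's exact pairwise loss may differ, but the
# intuition about sigma is the same.
color_dist = np.array([0.02, 0.05, 0.10, 0.20])  # small colour differences, as in low-contrast cells

for sigma in (2.0, 0.5, 0.1, 0.01):
    similarity = np.exp(-color_dist / sigma)
    print(f"sigma={sigma:<4} -> similarity={np.round(similarity, 3)}")

# With sigma=2.0 every neighbouring pair looks "similar" (~1.0), so the pairwise
# term tends to pull background pixels into the mask; with sigma<=0.1 only
# nearly identical colours count as the same region.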
In addition, I suggest using some image augmentation methods to improve the contrast between objects and the background.
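As a hedged sketch of one way to wire that in with detectron2's built-in transforms: BoxSnake's own train_net.py may use a custom trainer and mapper for box supervision, in which case the same T.RandomContrast / T.RandomBrightness transforms would go into that mapper's augmentation list instead; the intensity ranges below are placeholders to tune.

from detectron2.data import DatasetMapper, build_detection_train_loader
from detectron2.data import transforms as T
from detectron2.engine import DefaultTrainer


class ContrastAugTrainer(DefaultTrainer):
    """DefaultTrainer variant with extra photometric augmentation for low-contrast cells."""

    @classmethod
    def build_train_loader(cls, cfg):
        augs = [
            T.ResizeShortestEdge(cfg.INPUT.MIN_SIZE_TRAIN, cfg.INPUT.MAX_SIZE_TRAIN, "choice"),
            T.RandomFlip(),
            # Placeholder ranges; tune them for your images.
            T.RandomContrast(0.8, 1.4),
            T.RandomBrightness(0.8, 1.4),
        ]
        mapper = DatasetMapper(cfg, is_train=True, augmentations=augs)
        return build_detection_train_loader(cfg, mapper=mapper)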
Hello, I'm running into the same issues during the installation. Could you specify which Python version worked for you?
Thank you!
We rely on python=3.8. You can prepare the env by:
conda create --name boxsnake python=3.8 -y
conda activate boxsnake
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
git clone https://github.com/Yangr116/BoxSnake.git
cd BoxSnake
pip install -r requirements.txt
bash scripts/auto_build.sh
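A quick way to confirm the environment is usable after these steps (this check is just a suggestion of mine, not part of the official instructions):

# Run inside the "boxsnake" env to verify the core packages import correctly.
import torch
import detectron2

print("torch:", torch.__version__, "CUDA:", torch.version.cuda, "available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)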
Hi,
First of all, thank you very much for providing such a nice platform.
I am running into some trouble with the installation and getting errors like:
ERROR: Could not find a version that satisfies the requirement MultiScaleDeformableAttention==1.0 (from -r requirements.txt (line 36)) (from versions: none)
ERROR: No matching distribution found for MultiScaleDeformableAttention==1.0 (from -r requirements.txt (line 36))
And some more errors like these. Can you please let me know why that is? Is it related to the Python version? I have already tried Python 3.9 and 3.10.
I will wait for your response. Thank you and kind regards,