Vincent630 opened this issue 2 years ago
Hi @Vincent630, thanks for your interest in SparseInst and I'd like to fix it! It seems that the visualization results are very bad, which is being explored in #42. I'm solving it now and might offer you some suggestions soon.
thank you so much, please let me know if you make any progress.
i can also offer some bad-case samples so you know what i mean. of course i have some tentative thoughts: is it possible that the heatmap is what causes this? sparseinst uses a coarse heatmap which may include features from other instances, and that may affect the instance boundaries and the separation between instances. anyway, i think sparseinst is a very elegant and intuitive solution, and i hope it gets widely adopted in instance segmentation.
@Vincent630, are the evaluation metrics normal, e.g., 69.3 AP, and the inference speed?
yes, that looks fine. i have also evaluated checkpoints from multiple iterations and compared their results with the final model from the default config, but i still find that almost all evaluation results contain many bad cases.
these are the evaluation metrics from the test script; they seem fine to me. my environment is a Tesla T4.
by the way, i have two questions which are off-topic: 1) when i change the config "IMS_PER_BATCH", for example to 16, the full 270000 iterations take 6 days on two GPUs, but when i set it to 32, the same number of iterations takes about double the time (6 days..). i don't know what that means...... i expected the time to go down when maximizing the batch size. 2) when training finishes, we get two kinds of files, "instances_predictions.pth" and "model_final.pth" (the iteration checkpoint). my question is: how do i use "instances_predictions.pth", and what's the difference between instances_predictions and model_final? (i used this "instances_predictions" file for visualization, but the official demo.py script reported some "key error" stuff.)
Hi @Vincent630, we adopt 64 images per batch on 8 GPUs (11 GB memory, 8 images per GPU) for COCO training. Reducing batch size might affect the performance a bit, e.g., -0.8 AP after reducing the batch size from 64 to 32. For your task, you can reduce the batch size to fit your GPU and reduce the iterations (e.g., 180k) for faster training.
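For reference, the relevant solver keys in a detectron2-style YAML config look roughly like this (a sketch with illustrative values, not the repo's exact config; the BASE_LR and STEPS values here are hypothetical, scaled by the usual linear scaling rule when halving the batch size):

```yaml
SOLVER:
  IMS_PER_BATCH: 32        # halved from the 64 used for COCO training
  BASE_LR: 0.000025        # hypothetical: halved along with the batch size
  MAX_ITER: 180000         # shorter schedule suggested above
  STEPS: (120000, 160000)  # hypothetical LR decay milestones, scaled to fit MAX_ITER
```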
For the second question, `model_final.pth` is the final training state, including the model weights and the scheduler and optimizer state dicts. `instances_predictions.pth` contains the raw predictions on the val set. For visualization, you need to load the pre-trained weights, i.e., `model_final.pth`.
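A minimal sketch of the difference between the two files (the checkpoint keys and prediction fields below are illustrative, not the exact SparseInst/detectron2 layout): `model_final.pth` is a dict of training state, while `instances_predictions.pth` is just a list of per-image prediction records dumped by the evaluator, so feeding it to demo.py's weight loader fails with a KeyError.

```python
import torch

# A checkpoint like model_final.pth: a dict of state dicts plus metadata.
checkpoint = {
    "model": {"backbone.weight": torch.zeros(3, 3)},  # model weights
    "optimizer": {},   # optimizer state dict
    "scheduler": {},   # LR scheduler state dict
    "iteration": 270000,
}
torch.save(checkpoint, "model_final_sketch.pth")

# A predictions dump like instances_predictions.pth: a plain list of
# per-image results, with no "model" key and no weights inside.
predictions = [{"image_id": 1, "instances": []}]
torch.save(predictions, "instances_predictions_sketch.pth")

ckpt = torch.load("model_final_sketch.pth")
preds = torch.load("instances_predictions_sketch.pth")
print("model" in ckpt)         # a checkpoint carries weights
print(type(preds).__name__)    # a predictions dump is just a list
```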
i am not sure i follow your answer that "instances_predictions is raw predictions". i thought it should contain the right predictions, including mask and instance info, if it can be used for inference; otherwise, what is this "instance prediction" file made for???
thanks for your brilliant work. because of its efficient inference and elegant framework (no post-processing, and the IAM design), i am very interested in sparseinst. but i got bad performance on my own dataset with sparseinst. the visualization results are very bad (compared with SOLO and YOLACT); it can hardly produce one good result even on the train set. if it is not too much bother, please give me some help, and i will appreciate it so much. (if possible, i can provide a small dataset of about 500 images.) train set: 5000+ images, and i followed all the default settings of sparseinst.
[07/01 08:58:52 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[07/01 08:58:52 d2.evaluation.testing]: copypaste: Task: segm
[07/01 08:58:52 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[07/01 08:58:52 d2.evaluation.testing]: copypaste: 69.2574,82.1681,74.2320,0.2904,5.6352,86.3270
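The "copypaste" lines above pair a CSV header with its values, so they can be turned into a dict with a few lines (values taken from the log above; note how low APs and APm are compared with the overall AP, which matches the "many bad cases" observation for small objects):

```python
# Parse the d2.evaluation.testing "copypaste" CSV pair from the log above.
header = "AP,AP50,AP75,APs,APm,APl"
values = "69.2574,82.1681,74.2320,0.2904,5.6352,86.3270"
metrics = dict(zip(header.split(","), map(float, values.split(","))))
print(metrics["AP"])   # overall mask AP
print(metrics["APs"])  # AP on small objects, nearly zero here
```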