nakul3112 opened this issue 4 years ago
Please suggest what could be missing. I would be glad to get this issue solved as soon as possible, so that I can use the repository for panoptic segmentation of my custom dataset.
Hi @nakul3112, you can also refer to this.
Here are a few more queries:
1) Is your repository a completely new one, with its own Docker setup?
2) Will I be able to do panoptic segmentation for my dataset, which has 7 classes? I am not sure whether I will need ground-truth data for this panoptic task.
3) Please guide me on this. I was previously using RetinaNet with a ResNet-50 FPN backbone, but I haven't gotten the desired results yet. I would be glad to use this panoptic segmentation to see how clearly it can classify my 7 classes.
Hope to get a response at your earliest convenience.
Also, how do I train the model on my custom dataset? Any lead on this would be useful.
The Docker image is basically used to set up the environment you are working in. I installed the required libraries separately. Can you elaborate more on the 7 classes? Are these 7 classes for instance or semantic segmentation? For training you need to develop your own data pipeline; currently this repository only provides inference.
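For reference, a data pipeline for semantic labels usually boils down to a small PyTorch Dataset that pairs each image with its pixel-wise label map. Below is only a minimal sketch, assuming a hypothetical `images/` and `labels/` directory layout where each label PNG stores per-pixel class IDs; this repository's actual expected input format may differ.

```python
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class SemanticSegDataset(Dataset):
    """Minimal image / pixel-wise label loader (sketch; paths and formats are assumptions)."""

    def __init__(self, image_dir, label_dir, transform=None):
        self.image_dir = image_dir
        self.label_dir = label_dir
        self.transform = transform
        self.filenames = sorted(os.listdir(image_dir))

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        name = self.filenames[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        # Assumes the label PNG shares the image's filename and stores class IDs per pixel.
        label = Image.open(os.path.join(self.label_dir, name))
        if self.transform is not None:
            image = self.transform(image)
        label = torch.as_tensor(np.array(label), dtype=torch.long)
        return image, label
```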
Those 7 classes are for semantic segmentation.
So for developing the training pipeline, once we have training images and their ground-truth labels, what kind of model is supposed to be used/trained in order to use this repository for inference?
This repository is trained on the Cityscapes dataset. So, if your 7 classes are among ['road', 'sidewalk', 'building', 'wall', 'fence', 'pole', 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', 'bicycle', 'void'], then you don't need to train the model again. Just null out all the other classes. Training from scratch needs a lot of resources like GPUs, and there is no guarantee of good results, so try to use the pretrained models.
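As an illustration of "nulling" the other classes, here is a minimal sketch. It assumes you already have a per-pixel semantic prediction with Cityscapes train IDs as a NumPy array, and simply maps every class you don't care about to a single ignore/void value; the `KEEP` set and the `IGNORE_ID` value are assumptions made for the example, not something this repository defines.

```python
import numpy as np

# Cityscapes classes in the order used above (train IDs 0..18; 'void' handled via IGNORE_ID).
CITYSCAPES_CLASSES = ['road', 'sidewalk', 'building', 'wall', 'fence', 'pole',
                      'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky',
                      'person', 'rider', 'car', 'truck', 'bus', 'train',
                      'motorcycle', 'bicycle']

# Hypothetical: the 7 classes you want to keep (must be a subset of the list above).
KEEP = {'road', 'sidewalk', 'vegetation', 'terrain', 'sky', 'person', 'car'}
IGNORE_ID = 255  # value used for the "nulled" classes

keep_ids = {i for i, name in enumerate(CITYSCAPES_CLASSES) if name in KEEP}


def null_other_classes(semantic_pred):
    """Replace every class ID not in keep_ids with IGNORE_ID (sketch only)."""
    out = semantic_pred.copy()
    mask = ~np.isin(out, list(keep_ids))
    out[mask] = IGNORE_ID
    return out
```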
@pushkar-khetrapal Where can I find those pretrained models, so that I can train them further on my custom dataset and classes?
@pushkar-khetrapal Also, I read that for training, the images need pixel-wise annotations. Is it possible to expedite the process of producing training images in the annotation format that panoptic segmentation requires?
@nakul3112 Panoptic segmentation is made up of semantic + instance segmentation, so there will be more than one head. You can use the Cityscapes dataset. It's easy, but you need a lot of resources to train a panoptic model. Every repository provides trained weights; you can use those weights instead of training the model from scratch. You can eliminate the classes you don't need.
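To illustrate eliminating unneeded classes on the instance/detection side (which is where the bounding boxes come from), here is a minimal sketch. It assumes the detection output comes as parallel tensors of boxes, class labels, and scores, which is a common PyTorch convention rather than necessarily this repository's exact output format, and `WANTED_LABELS` is a hypothetical set of class IDs you want to keep.

```python
import torch

# Hypothetical set of class IDs to keep (e.g. the IDs corresponding to your 7 classes).
WANTED_LABELS = {0, 1, 8, 9, 10, 11, 13}


def filter_detections(boxes, labels, scores, score_threshold=0.5):
    """Keep only detections whose class is wanted and whose score passes the threshold.

    Assumed layout: boxes (N, 4), labels (N,), scores (N,).
    """
    wanted = torch.tensor(sorted(WANTED_LABELS), device=labels.device)
    keep = torch.isin(labels, wanted) & (scores >= score_threshold)
    return boxes[keep], labels[keep], scores[keep]
```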
Actually, all of my dataset's classes are related to vegetation and flowers. Could I still use the pretrained weights?
@pushkar-khetrapal Here is the thing: basically, I want to use the pretrained model/backbone from this repository and use it to classify my 7 classes with bounding boxes. What would be the next steps to achieve this?
Hi, I recently tried cloning the repository and running make docker-build. After that, when I ran the command for the test sample, make docker-run-test-sample, I got the following error:
File "scripts/demo.py", line 14, in <module>
    from realtime_panoptic.utils.visualization import visualize_segmentation_image, visualize_detection_image
File "/workspace/panoptic/realtime_panoptic/utils/visualization.py", line 3, in <module>
    import cv2
File "/usr/local/lib/python3.6/dist-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
make: *** [Makefile:34: docker-run-test-sample] Error 1
Enter the Docker container in interactive mode, then run pip install opencv-python. This resolved the error for me.
@jinensetpal Could you please elaborate on this, such as the steps and commands you used? Sorry, I'm new to Docker. Thanks in advance!
@Congdinh1801 Interactive mode in Docker essentially gives you access to the shell of the environment. Once the container is running, you can enter interactive mode. To access interactive mode, run:
docker ps -a                       # list container names
docker run $NAME                   # run the docker container
docker exec -it $CONTAINER_ID sh   # enter interactive mode
From there, fulfill the unmet dependency using pip install opencv-python. Then you can exit the interactive shell and run make docker-run-test-sample as before, and it should compile and run the program without error.