-
`python tools/train.py configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k.py --options model.pretrained=checkpoints/swin_tiny_patch4_window7_224.pth model.backbone.use_checkpoint=True -…
-
The link returns a 404.
Here is where the link sends you:
https://github.com/impiga/SwinTransformer-object-detection/blob/main/docs/get_started.md.
-
Hi,
Thanks for your work.
I have a question about this project.
Do we need to change RGB, PIXEL_MEAN, and PIXEL_STD in the configuration to keep consistency with the original Swin Transformer?
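A minimal sketch of what that consistency check looks like: the original Swin Transformer classification code normalizes RGB images with the standard ImageNet mean/std in the [0, 1] range, so a detectron2-style PIXEL_MEAN / PIXEL_STD (which operate on raw 0-255 pixels) should just be those values scaled by 255. The exact config keys depend on your downstream framework; this only illustrates the arithmetic.

```python
# Assumption: the original Swin classification code uses the standard ImageNet
# normalization mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225) on RGB
# images in [0, 1]. For a pipeline that normalizes raw 0-255 RGB pixels
# (e.g. detectron2-style PIXEL_MEAN / PIXEL_STD), scale by 255.
IMAGENET_MEAN_01 = (0.485, 0.456, 0.406)   # RGB order, [0, 1] range
IMAGENET_STD_01 = (0.229, 0.224, 0.225)

pixel_mean_255 = [round(m * 255, 3) for m in IMAGENET_MEAN_01]  # [123.675, 116.28, 103.53]
pixel_std_255 = [round(s * 255, 3) for s in IMAGENET_STD_01]    # [58.395, 57.12, 57.375]

print("PIXEL_MEAN:", pixel_mean_255)
print("PIXEL_STD: ", pixel_std_255)
# If the pipeline keeps BGR order instead of RGB, the three values above
# would need to be reversed to stay consistent.
```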
-
Hi, everyone:
| Backbone | Method | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs | config | log | model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-
Hi guys, I have some questions about the results in object detection.
I trained the model on a custom COCO dataset, and when I run inference,
it shows weird results like this:
![스크린샷 2…
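A minimal sanity-check sketch for this kind of issue, assuming an mmdetection-based Swin model (mmdet 2.x API); the config path, checkpoint path, and image below are placeholders for your own setup. Mismatched class names or a too-low score threshold are common causes of odd-looking detections on custom datasets.

```python
# Sketch: run single-image inference and inspect the class list and scores
# with mmdetection's high-level API (assumes mmdet 2.x; paths are placeholders).
from mmdet.apis import init_detector, inference_detector, show_result_pyplot

config_file = 'configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py'
checkpoint_file = 'work_dirs/my_custom_run/latest.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')

# The class names carried by the checkpoint/config must match the custom
# dataset; seeing the 80 default COCO classes here is a common cause of
# "weird" predictions after training on custom data.
print(model.CLASSES)

result = inference_detector(model, 'demo/my_test_image.jpg')
# A higher score threshold filters out the low-confidence noise that can
# otherwise flood the visualization.
show_result_pyplot(model, 'demo/my_test_image.jpg', result, score_thr=0.5)
```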
-
When will MS COCO be supported? Thanks.
-
I trained Swin Transformer for Cityscapes instance segmentation from a Swin-Base checkpoint pretrained on ImageNet-22K.
During training, the evaluations look completely normal.
E.g.:
```
2021-06-29 00:57:35,334 - mmdet - INF…
-
My output image is always identical to the input image. Is it possible to have one working example with the demo?
I tried these:
`python demo.py --config-file ../configs/coco/panoptic-segmentatio…
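For reference, here is a rough sketch of what the demo should boil down to in a detectron2-style project. When the output equals the input, the usual suspects are that no trained weights were supplied (so the model predicts nothing useful) or that nothing cleared the confidence threshold. Paths below are placeholders, and MaskFormer/Mask2Former-style configs also need the repo's own add-config helper before `merge_from_file` (the helper's name varies by project).

```python
# Sketch: reproduce the demo's panoptic inference directly with detectron2,
# so it is easy to see whether weights were actually loaded. Paths are
# placeholders; project-specific config hooks may be required.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

cfg = get_cfg()
# e.g. add_maskformer2_config(cfg) here for Mask2Former-based configs
cfg.merge_from_file("../configs/coco/panoptic-segmentation/your_config.yaml")
cfg.MODEL.WEIGHTS = "checkpoints/model_final.pkl"  # without real weights, outputs are garbage

predictor = DefaultPredictor(cfg)
image = cv2.imread("input.jpg")                    # BGR, as loaded by OpenCV
outputs = predictor(image)

# For a panoptic model the prediction lives under "panoptic_seg".
panoptic_seg, segments_info = outputs["panoptic_seg"]
vis = Visualizer(image[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TEST[0]))
out = vis.draw_panoptic_seg(panoptic_seg.to("cpu"), segments_info)
cv2.imwrite("output.jpg", out.get_image()[:, :, ::-1])
```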
-
Hi,
Just looking for some advice on how to use the current implementation of Swin Transformer in an FPN-based detector model. Does the current implementation work out of the box, or some modificati…