-
## ❓ How to do something using detectron2
Currently, DensePose reads in single images and infers dense annotations one image at a time. This is very slow and quite wasteful. Does DensePose have the ability to read in bat…
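Until batched inference is supported out of the box, one workaround is to group inputs yourself: detectron2 models, when called directly rather than through `DefaultPredictor`, accept a list of per-image input dicts in a single forward pass. A minimal chunking helper in pure Python (the batch size and the commented-out model call are illustrative, not part of DensePose's API):

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield successive lists of up to batch_size items from iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Usage sketch: feed each chunk to the model in one forward pass.
# image_dicts is assumed to be a list of {"image": tensor, "height": h, "width": w} dicts.
# for batch in batched(image_dicts, 8):
#     outputs = model(batch)  # detectron2 models accept a list of per-image dicts
```

The chunk size should be tuned to GPU memory; the helper itself is model-agnostic.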
-
### 🐛 Describe the bug
I have successfully built PyTorch from source for my legacy hardware on Windows 10. I'm building torchvision for CUDA 10.2, sm_30, with the following env variables: USE_CUDA=1 U…
-
Hi @zhengziqiang! This is great work!
I tried to set up the environment and run the CoralSCOP model. There were a few installation issues initially, which I managed to resolve and got it working. I w…
-
I successfully trained and ran inference on YouTube-VIS 2019 and 2021, but it fails on OVIS. There is no error during the OVIS training process, but an error is reported when the last .pth file is generated and re…
-
### 🐛 Describe the bug
It looks odd to be asking for 20209.02 GiB of memory.
```bash
python benchmarks/dynamo/torchbench.py \
--accuracy --no-translation-validation --inference --bfloat16 \
…
```
-
Thank you so much for updating `inference_single_image`.
However, the following error occurred when running the script:
`[02/21 14:08:53 fvcore.common.checkpoint]: [Checkpointer] Loading from weights/ma…
-
Hi, I'm trying to load the Mask R-CNN model from the model zoo and evaluate it on the COCO dataset, but the result is much lower than expected. The code I'm using is:
```
from detectron2.evaluation im…
```
-
We keep a wishlist of examples that **may** appear in v0.2 or a later release. Any contributions are welcome.
- [ ] Distributed Inference: OPT-175B
- [ ] Wav2Vec
- [ ] Hubert
- [ ] Detectron2: FCOS / …
-
> wait for official support
# Reference
- [ ] [detectron2.export.export_onnx_model](https://detectron2.readthedocs.io/modules/export.html?highlight=onnx#detectron2.export.export_onnx_model)
- [ ]…
-
Hi, I came across the same problem as yours.
My ONNX model is not exactly like yours; I exported it directly from detectron2. My TensorRT inference results are also all zeros, while my onnxruntime inference is OK.
…
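One way to localize this kind of discrepancy is to compare the two runtimes' outputs tensor by tensor before digging into the engine build. A small numpy helper, as a sketch (the function name and tolerance are illustrative, not part of TensorRT's or onnxruntime's API):

```python
import numpy as np

def compare_outputs(trt_out, ort_out, atol=1e-3):
    """Flag all-zero tensors and shape/value mismatches between two runtime outputs."""
    trt_out = np.asarray(trt_out)
    ort_out = np.asarray(ort_out)
    return {
        # An all-zero tensor usually points at a binding or preprocessing problem,
        # not a numerical-precision difference between the runtimes.
        "trt_all_zero": not np.any(trt_out),
        "ort_all_zero": not np.any(ort_out),
        "match": trt_out.shape == ort_out.shape
                 and np.allclose(trt_out, ort_out, atol=atol),
    }

# Usage sketch: run the same input through both runtimes, then compare.
# report = compare_outputs(trt_result, ort_result)
```

If `trt_all_zero` is true while the onnxruntime output is not, the input bindings or the NCHW layout of the TensorRT engine are the first things to check.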