-
# mIoU for the Cityscapes val dataset
| repository | net name | note | mIoU |
| :--------: | :------: | :--: | :--: |
| deeplab | deeplab_base…
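mIoU figures like the ones in the table above are typically computed from a per-class confusion matrix over all val pixels. A minimal NumPy sketch (the function name, the `ignore_label=255` convention, and averaging only over classes that appear are assumptions, not taken from any listed repository):

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_label=255):
    # Build a num_classes x num_classes confusion matrix over valid pixels.
    mask = gt != ignore_label
    cm = np.bincount(
        num_classes * gt[mask].astype(np.int64) + pred[mask].astype(np.int64),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    inter = np.diag(cm)                          # true positives per class
    union = cm.sum(0) + cm.sum(1) - inter        # pred + gt - intersection
    iou = inter / np.maximum(union, 1)
    valid = union > 0                            # skip classes absent from both
    return float(iou[valid].mean())
```

In practice the confusion matrix is accumulated image by image over the whole val set before taking the per-class average.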
-
If anyone has found a way to do this, could you please give me a hint? I guess there's a simpler way than writing a script to convert the RGB segmentation output into these IDs (that could get complicated, si…
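For reference, converting RGB segmentation output back to label IDs can be done with an exact color lookup. A hedged NumPy sketch — the three palette entries are from the standard Cityscapes color map, but `rgb_to_ids` and the `unknown=255` fallback are illustrative choices; the full official mapping lives in `cityscapesscripts.helpers.labels`:

```python
import numpy as np

# Partial Cityscapes color -> label ID mapping (illustrative subset).
PALETTE = {
    (128, 64, 128): 7,   # road
    (244, 35, 232): 8,   # sidewalk
    (70, 70, 70): 11,    # building
}

def rgb_to_ids(rgb, palette=PALETTE, unknown=255):
    # Pack each RGB triple into one integer, then match against the palette.
    flat = rgb.astype(np.int64)
    code = flat[..., 0] * 65536 + flat[..., 1] * 256 + flat[..., 2]
    out = np.full(code.shape, unknown, dtype=np.uint8)
    for (r, g, b), v in palette.items():
        out[code == r * 65536 + g * 256 + b] = v
    return out
```

Packing the colors into integers avoids a per-pixel Python loop, so this stays fast even on full-resolution 2048x1024 Cityscapes frames.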
-
I can download the Cityscapes dataset, but there's no model. The models directory is empty; furthermore, the Cityscapes website has no downloadable pre-trained nets, just data. I opened the checksum.md fi…
-
Thanks to the author for the code sharing.
Training the original YOLOv4 with the author's code achieves 38.0% mAP for 8 classes on cityscapes->foggy cityscapes, which is significantly different from…
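For context on how such mAP numbers arise, per-class AP is the area under the precision-recall curve, and mAP averages it over classes. A minimal all-point-interpolation sketch (VOC2010-style; `average_precision` is an illustrative helper, not this repository's evaluation code):

```python
import numpy as np

def average_precision(recall, precision):
    # Pad the curve, then make the precision envelope monotonically decreasing.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate precision over the recall steps.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

Note that implementations differ (11-point vs. all-point interpolation, IoU threshold choice), which alone can shift mAP by a point or two between codebases.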
-
Hi, I noticed you updated your evaluate.py. But I think your **previous** implementation is right, since if you use the evaluation code from https://github.com/mcordts/cityscapesScripts, you can get a …
-
Nice work! But I am confused about some hyper-parameters.
-
Hi, thanks for the contribution!
Using stylegan2-ada, is it possible to generate consecutive frames of a video? (such as sequences in the KITTI/Cityscapes dataset)
I am not talking about style tra…
ffabi updated 3 years ago
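Not an answer about StyleGAN2-ADA specifically, but smooth frame-like sequences are commonly produced by interpolating between latent codes and generating one image per step. A hedged spherical-interpolation (slerp) sketch, assuming unit-scale latent vectors `z0`/`z1` as a generator would take:

```python
import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation between two latent vectors; t in [0, 1].
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)
```

Feeding `slerp(z0, z1, i / n)` for `i = 0..n` through the generator gives a smooth transition, though without an explicit temporal model there is no guarantee the frames obey scene dynamics like those in KITTI/Cityscapes sequences.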
-
## Environment
1. OS: Ubuntu 18.04
2. MegEngine version: 1.5.0
3. Python version: 3.7.0
## Steps to reproduce
1. Compute the loss
2. Backpropagate the loss
3. Extract the loss value with item
## Please provide the key code snippets to help track down the issue
## Please provide the full logs and error messages
```shell
Traceback (most recent call last):…
-
@walzimmer
Hi, thank you for your great work.
I want to write data-preparation code for the Intersection dataset.
I succeeded in splitting the Intersection dataset into train and val sets as described in the README in tum-tra…
-
While trying to reproduce the results of the paper "Context Encoders: Feature Learning by Inpainting", using the Torch code available in GitHub repositories with adversarial loss, I got the following …
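For reference, the Context Encoders paper trains the generator with a masked L2 reconstruction loss plus an adversarial term, weighted roughly 0.999/0.001. A hedged NumPy sketch of the combined objective — the function and argument names are illustrative, and `d_of_pred` stands in for the discriminator's output on the generated region:

```python
import numpy as np

def joint_loss(pred, target, mask, d_of_pred, lam_rec=0.999, lam_adv=0.001):
    # Masked L2 reconstruction over the inpainted region (mask == 1).
    rec = np.mean(mask * (pred - target) ** 2)
    # Non-saturating adversarial term: -log D(G(x)), eps for stability.
    adv = -np.mean(np.log(d_of_pred + 1e-8))
    return lam_rec * rec + lam_adv * adv
```

Because the adversarial weight is so small, instabilities in reproduction runs usually come from the discriminator/generator balance rather than the reconstruction term itself.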