-
![cartoon image falsely detected as a face](https://user-images.githubusercontent.com/15827053/43888492-639852fa-9bf4-11e8-87fc-6b4a9a7c5d2c.jpg)
The MTCNN provided by this repo has an especially high false-detection rate on cartoon images, but the original MTCNN does not. What changes were made to MTCNN here? With the same parameters, the original MTCNN already rejects this image at the RNet stage.
-
At the moment I don't know which models need to be used together (e.g. PNet, RNet, ONet) and which can be used on their own. I'm also unclear on what each model corresponds to: how many landmark points it detects, whether it is faster or slower, and whether its accuracy is higher or lower than before. Many thanks.
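For context, PNet, RNet and ONet are normally used together as a cascade: each stage filters the candidate boxes the previous stage produced. The sketch below illustrates only that control flow; the stage functions are hypothetical stand-ins for the real networks, not this repo's API.

```python
# Minimal sketch of a detection cascade: each stage keeps only the
# candidate boxes whose score meets that stage's threshold.

def run_cascade(candidates, stages, thresholds):
    """Pass candidates through each stage in order."""
    for stage, thresh in zip(stages, thresholds):
        candidates = [box for box in candidates if stage(box) >= thresh]
        if not candidates:          # nothing survived; stop early
            break
    return candidates

# Toy "networks" that just score an (x, y, w, h, quality) tuple.
pnet = lambda box: box[4]           # coarse proposal score
rnet = lambda box: box[4] * 0.9     # refinement, slightly stricter
onet = lambda box: box[4] * 0.8     # final stage, strictest

boxes = [(0, 0, 10, 10, 0.95), (5, 5, 8, 8, 0.5)]
kept = run_cascade(boxes, [pnet, rnet, onet], [0.6, 0.7, 0.7])
```

Running the stages independently is possible but unusual: PNet alone produces many coarse proposals, and RNet/ONet expect already-cropped candidates as input.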
-
I tried to run your code, but it stays in epoch one (it never advances to epoch two) even though the iteration count keeps increasing. I pass several argparse options when running it:
`!python main.py --experiment_name=test --dropout=0.15 -…
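One common cause of this symptom (a guess, not a diagnosis of this repo's code) is that the global iteration counter advances every batch while the epoch counter is never incremented once the dataset has been consumed. A minimal sketch of the expected bookkeeping:

```python
# Hypothetical epoch/iteration bookkeeping: the iteration counter
# increases every batch; the epoch counter should advance once per
# full pass over the dataset.

num_samples = 100
batch_size = 25
batches_per_epoch = num_samples // batch_size   # 4 batches per epoch

epoch, global_iter = 1, 0
log = []
for _ in range(8):                  # process 8 batches total
    global_iter += 1
    if global_iter % batches_per_epoch == 0:
        log.append((epoch, global_iter))
        epoch += 1                  # without this line, training
                                    # reports "epoch 1" forever
```

If the dataset is an infinite/repeating iterator, the epoch boundary has to be computed explicitly like this, since the loop itself never ends.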
-
`slope_raster()` works only with LINESTRING geometries.
Sometimes we don't want to break a street network at all the intersections (e.g. bridges, tunnels).
`network = st_cast(network, "LINESTRING", do_split=…
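The snippet above is R (`sf::st_cast`). The idea of merging connected segments without splitting at mere crossings can be illustrated in plain Python (a sketch, not this package's implementation): segments that share an endpoint are chained together, while segments that only cross, such as a bridge over a street, stay separate.

```python
# Sketch of "merge, don't split": greedily chain segments whose
# endpoints coincide into one polyline; crossing-but-not-touching
# segments (bridges, tunnels) remain separate lines.

def merge_segments(segments):
    """Chain segments that share an endpoint."""
    chains = []
    for seg in segments:
        seg = list(seg)
        for chain in chains:
            if chain[-1] == seg[0]:     # append to the chain's tail
                chain.extend(seg[1:])
                break
            if seg[-1] == chain[0]:     # prepend to the chain's head
                chain[:0] = seg[:-1]
                break
        else:
            chains.append(seg)          # starts a new chain
    return chains

segments = [[(0, 0), (1, 1)], [(1, 1), (2, 2)], [(0, 2), (2, 0)]]
chains = merge_segments(segments)
```

Here the first two segments share the endpoint (1, 1) and merge into one polyline; the third passes through (1, 1) without having it as an endpoint, so it is kept as its own line.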
-
1. What are the recommended thresholds for nets in your framework?
parser.add_argument('--thresh', dest='thresh', help='list of thresh for pnet, rnet, onet', nargs="+",default=[0.6, 0.7, 0.7], type=f…
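For reference, this is how a `nargs="+"` float flag like the one quoted above behaves: with no flag given, the default list is used; when the flag is supplied, one or more values are collected into a list. This is standard `argparse` behaviour, not anything specific to this repo.

```python
import argparse

# Reproduce the --thresh flag from the question: nargs="+" collects
# one or more floats into a list, falling back to the default.
parser = argparse.ArgumentParser()
parser.add_argument('--thresh', dest='thresh',
                    help='list of thresholds for PNet, RNet, ONet',
                    nargs='+', default=[0.6, 0.7, 0.7], type=float)

args = parser.parse_args([])                              # defaults
custom = parser.parse_args(['--thresh', '0.5', '0.6', '0.7'])
```

So overriding the thresholds from the command line is just `--thresh 0.5 0.6 0.7`.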
-
The error message changed, so the test is failing.
-
It would make sense to do this after we add support for the Realm object notifications.
-
Hello, when generating the O_Net training data, why are there many neg and part samples but zero pos samples?
-
Is the required sample ratio the same for all three stages?
For PNet, neg:pos:part is ratio = [3, 1, 1].
Do RNet and ONet also need 3:1:1, or do we simply use however many samples gen_hard_example generates?
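For illustration, enforcing a neg:pos:part = 3:1:1 ratio usually means subsampling the (much larger) neg and part pools relative to the number of positives. This is a hedged sketch of that balancing step; whether this repo applies the same ratio for RNet/ONet is exactly the open question above.

```python
import random

# Sketch: cap neg and part sample counts at fixed multiples of the
# positive count, so the final mix approaches neg:pos:part = 3:1:1.

def balance(pos, neg, part, ratio=(3, 1, 1), seed=0):
    rng = random.Random(seed)                  # reproducible draw
    n_pos = len(pos)
    n_neg = min(len(neg), ratio[0] * n_pos)    # at most 3x positives
    n_part = min(len(part), ratio[2] * n_pos)  # at most 1x positives
    return rng.sample(neg, n_neg), pos, rng.sample(part, n_part)

neg = list(range(100))      # stand-in sample lists
pos = list(range(10))
part = list(range(50))
neg_s, pos_s, part_s = balance(pos, neg, part)
```

With 10 positives this keeps 30 negatives and 10 part samples, i.e. a 3:1:1 mix.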
-
Hello, thank you very much for open-sourcing the MTCNN training code! I have always been curious about how MTCNN's training data is constructed, so I mainly read your prepare_data code. While reading gen_hard_example.py, I have 3 questions:
1. The purpose of this code is to prepare data for RNet or ONet, so it should feed the input data through PNet (or PNet-RNet) to get predicted bboxes, then apply an IoU test to obtain the three labels pos, neg and part…
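The IoU-based labelling step described in question 1 is commonly implemented with the cut-offs from the MTCNN paper: IoU ≥ 0.65 → pos, 0.4 ≤ IoU < 0.65 → part, IoU < 0.3 → neg, with boxes in between discarded. Whether gen_hard_example.py uses exactly these cut-offs is an assumption; the sketch below only illustrates the convention.

```python
# Sketch of IoU-based labelling for hard-example generation.
# Thresholds follow the common MTCNN convention (an assumption
# about this repo): >=0.65 pos, 0.4-0.65 part, <0.3 neg.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def label(pred, gt):
    v = iou(pred, gt)
    if v >= 0.65:
        return 'pos'
    if v >= 0.4:
        return 'part'
    if v < 0.3:
        return 'neg'
    return None                 # ambiguous box, discarded

gt = (0, 0, 10, 10)
labels = [label(p, gt) for p in
          [(0, 0, 10, 10), (0, 0, 10, 5), (20, 20, 30, 30)]]
```

Under these cut-offs, zero pos samples (as in the earlier question) simply means no predicted box overlapped a ground-truth box with IoU ≥ 0.65.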