-
@AlexeyAB Hi,
First of all thanks for your great work!
I was curious about the binarizing of the weights as well as the BIT1-XNOR inference. It seems that the weights used are binarized weights, …
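To be clear about what I mean by binarized weights: I'm assuming the usual XNOR-Net-style scheme, where each filter is reduced to its sign pattern plus a single per-filter scale alpha = mean(|w|). A rough sketch of that idea (my own illustration, not necessarily this repo's code):

```c
#include <math.h>

/* XNOR-Net-style weight binarization sketch: a filter of n weights is
   replaced by its sign pattern and one scale alpha = mean(|w|).
   Illustration of the general technique only, not this repo's code. */
void binarize_filter(const float *w, int n, float *w_bin, float *alpha)
{
    float sum_abs = 0.f;
    for (int i = 0; i < n; ++i) sum_abs += fabsf(w[i]);
    *alpha = sum_abs / n;                            /* per-filter scale */
    for (int i = 0; i < n; ++i)
        w_bin[i] = (w[i] >= 0) ? *alpha : -*alpha;   /* +alpha or -alpha */
}
```

Is this roughly what the repo does, or is the scheme different?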
-
How far off is the accuracy when BIT1-XNOR is used for training and inference?
-
Hi, @AlexeyAB
I'd like to know more about how the INT8 version is implemented.
Is it based on one or more papers?
Could you give related links for reference?
Thanks
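For reference, what I have in mind is the common symmetric per-layer scheme (roughly what TensorRT-style calibration produces), sketched below; this is only my assumption about what an "INT8 version" usually means, not a claim about how darknet does it:

```c
#include <math.h>
#include <stdint.h>

/* Symmetric per-layer INT8 quantization sketch: q = round(x * 127 / max|x|).
   Shown only to illustrate the usual scheme; the actual repo may differ. */
float int8_scale(const float *x, int n)
{
    float max_abs = 0.f;
    for (int i = 0; i < n; ++i)
        if (fabsf(x[i]) > max_abs) max_abs = fabsf(x[i]);
    return (max_abs > 0.f) ? 127.f / max_abs : 1.f;
}

void quantize_int8(const float *x, int n, float scale, int8_t *q)
{
    for (int i = 0; i < n; ++i) {
        float v = roundf(x[i] * scale);
        if (v > 127.f)  v = 127.f;       /* clamp to the int8 range */
        if (v < -127.f) v = -127.f;
        q[i] = (int8_t)v;
    }
}
```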
-
Hi!
Are there plans to add a low-precision inference mode, like many other neural network frameworks out there?
It would be really helpful for embedded applications where we have very limited memory…
-
I have a lot of computers that use the Intel GPU integrated on the motherboard. How can I run darknet on them?
Is there any software support or alternative tool for using them, so that it is possible to run darknet on most compu…
-
Hey @AlexeyAB.
I am training your darknet repo with the tiny-yolov2 and the full yolov2 cfg files with XNOR = 1 on the COCO and Pascal VOC datasets. I have completed 10,000 iterations on both models with …
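A note on what I expect xnor=1 to give at inference time: the bit-packed XNOR/popcount dot product from the XNOR-Net paper, roughly like the toy sketch below (assuming both operands are sign-packed into 64-bit words with identical padding bits, and a GCC/Clang popcount builtin; the real kernels obviously also handle scales, tails, etc.):

```c
#include <stdint.h>

/* Toy XNOR/popcount dot product over bit-packed sign vectors.
   For +/-1 vectors, dot = matches - mismatches = nbits - 2 * popcount(a ^ b).
   Assumes padding bits beyond nbits are identical in a and b. */
int xnor_dot(const uint64_t *a, const uint64_t *b, int nwords, int nbits)
{
    int mismatches = 0;
    for (int i = 0; i < nwords; ++i)
        mismatches += __builtin_popcountll(a[i] ^ b[i]);  /* XOR counts sign mismatches */
    return nbits - 2 * mismatches;
}
```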
-
@AlexeyAB Do you live and breathe Yolov3?
-
Asking here because issue tracking is disabled in the other project: I'd like to use [yolo2_light](https://github.com/AlexeyAB/yolo2_light) for a robotics research project because it's more light-weig…
-
I saw gplhegde's fork of darknet, which makes it easy to truncate weights to 16 or 8 bits. My question is whether one can run these weights on the original darknet without upscaling them to float first? I would like to …
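My current understanding, which may be wrong, is that such a fork stores the weights in 8/16-bit form mainly to shrink the weight file, and the stock float conv/GEMM paths would still need them expanded back to float on load, roughly like this sketch (my own illustration, not gplhegde's actual code; the 16-bit case is analogous with int16_t and a wider clamp):

```c
#include <math.h>
#include <stdint.h>

/* Illustration only: store a weight as int8 plus a scale, expand it back to
   float on load, since the stock float GEMM/conv paths cannot consume int8. */
int8_t truncate_weight(float w, float scale)      /* quantize for storage */
{
    float q = roundf(w / scale);
    if (q > 127.f)  q = 127.f;
    if (q < -128.f) q = -128.f;
    return (int8_t)q;
}

float expand_weight(int8_t q, float scale)        /* dequantize on load */
{
    return (float)q * scale;
}
```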