Please don't cross-post issues - this issue really belongs on the nncase GitHub, since aXeleRate just uses their conversion utility. https://github.com/kendryte/nncase/issues/164#issue-681621415
From my own experience, I only encountered this problem once - the workaround I had to use to avoid it was to train from scratch (and not use the ImageNet weights); a config sketch follows below.
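If you want to try that workaround in aXeleRate, the relevant piece is the `weights` section of config.json (the full file is shown further down in this thread). My assumption here, based on the maintainer's description of the workaround, is that an empty `backend` string skips loading the pretrained ImageNet weights:

```json
{
    "weights" : {
        "full": "",
        "backend": ""
    }
}
```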
In the docs for nncase you can find the following parameters for the converter: https://github.com/kendryte/nncase/blob/master/docs/USAGE_EN.md

- `--dump-weights-range` shows the weight range for each layer - on some layers you'll see quite a difference; that is the large divergence the converter is complaining about.
- `--weights-quantize-threshold` is the threshold that controls whether an op is quantized or not according to its weights range; the default is 32.000000. You can increase it to accommodate the weight divergence in your model. However, when I tried it, it resulted in a bad model.
Both of these options are available in nncase 0.2.0-beta4. You'll need to perform the conversion manually; see the sketch below.
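A minimal sketch of what the manual conversion could look like, assuming your model has already been exported to TFLite and you have a folder of calibration images. The file names, dataset path, and threshold value are placeholders; the two quantization flags are the ones documented in the USAGE_EN.md linked above:

```sh
# Step 1: dump per-layer weight ranges to see which layers diverge
# (model.tflite, model.kmodel and ./calibration_images are placeholder paths)
ncc compile model.tflite model.kmodel -i tflite -t k210 \
    --dataset ./calibration_images --dump-weights-range

# Step 2: retry with a raised threshold (default is 32.0) so diverging
# layers are still quantized instead of falling back to float conv2d
ncc compile model.tflite model.kmodel -i tflite -t k210 \
    --dataset ./calibration_images --weights-quantize-threshold 64.0
```

Note that raising the threshold trades quantization quality for avoiding the float fallback; as mentioned above, in my case the resulting model performed poorly, so training from scratch may be the safer fix.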
Please move the discussion to the nncase issues. Closing now.
Describe the bug
When converting to kmodel, it reports "fallback to float conv2d due to weights divergence". How should I fix this?
Environment
Here is the config.json:

```json
{
    "model" : {
        "type": "Detector",
        "architecture": "MobileNet7_5",
        "input_size": [224, 224],
        "anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "labels": ["person"],
        "coord_scale" : 1.0,
        "class_scale" : 1.0,
        "object_scale" : 5.0,
        "no_object_scale" : 1.0
    },
    "weights" : {
        "full": "",
        "backend": "imagenet"
    },
    "train" : {
        "actual_epoch": 100,
        "train_image_folder": "/media/storage/pool/1/",
        "train_annot_folder": "/media/storage/pool/1/labelimg/detector/annotation",
        "train_times": 2,
        "valid_times": 2,
        "valid_metric": "mAP",
        "valid_image_folder": "",
        "valid_annot_folder": "",
        "batch_size": 4,
        "learning_rate": 1e-4,
        "saved_folder": "detector",
        "first_trainable_layer": "",
        "augumentation": true,
        "is_only_detect" : false
    },
    "converter" : {
        "type": ["k210"]
    }
}
```