openvinotoolkit / nncf

Neural Network Compression Framework for enhanced OpenVINO™ inference
Apache License 2.0

PermissionError: [Errno 13] Permission denied: 'C:\\Users\\liumi\\AppData\\Local\\Temp\\tmph9zrjz62' #1344

Closed: liumingzhu6060 closed this issue 1 year ago

liumingzhu6060 commented 2 years ago

When I use PTQ of NNCF, a problem occurs:

      File "D:\Software\anaconda3\envs\NNCF\lib\site-packages\onnx\__init__.py", line 40, in _save_bytes
        with open(cast(str, f), 'wb') as writable:
    PermissionError: [Errno 13] Permission denied: 'C:\Users\liumi\AppData\Local\Temp\tmph9zrjz62'
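For context, the path in the error message is a temporary file created while the model is serialized. The sketch below is my assumption about the root cause, not something confirmed from the traceback alone: on Windows, a tempfile.NamedTemporaryFile that is still open cannot be reopened by name, so any save call that reopens the path fails with Errno 13. The usual portable pattern is to create the file with delete=False, close the handle, reuse the path, and remove the file manually.

    # Minimal sketch (assumption, not from the thread) of the Windows temp-file pitfall.
    import os
    import tempfile

    import onnx
    from onnx import helper

    # A trivial, empty ONNX model used purely for demonstration.
    model = helper.make_model(helper.make_graph([], "empty", [], []))

    # Fails on Windows while the handle is still open:
    # with tempfile.NamedTemporaryFile() as tmp:
    #     onnx.save(model, tmp.name)  # PermissionError: [Errno 13]

    # Windows-safe pattern: close the handle first, then reopen the path by name.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp_path = tmp.name
    onnx.save(model, tmp_path)
    reloaded = onnx.load(tmp_path)
    os.remove(tmp_path)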

My code is:

    import onnx

    from nncf.experimental.post_training.compression_builder import CompressionBuilder
    from nncf.experimental.post_training.algorithms.quantization import PostTrainingQuantization
    from nncf.experimental.post_training.algorithms.quantization import PostTrainingQuantizationParameters
    from nncf.common.utils.logger import logger as nncf_logger
    from nncf.experimental.post_training.api import dataset as ptq_api_dataset
    from nncf.experimental.onnx.tensor import ONNXNNCFTensor
    from utils.datasets import LoadImagesAndLabels


    class YoloV5Dataset(ptq_api_dataset.Dataset):
        def __init__(self, path, batch_size, shuffle):
            super().__init__(batch_size, shuffle)
            self.load_images = LoadImagesAndLabels(path)

        def __getitem__(self, item):
            img, _, _, _ = self.load_images[item]
            # Input should be in [0, 1].
            img = (1 / 255.) * img
            return {"images": ONNXNNCFTensor(img.numpy())}

        def __len__(self):
            return len(self.load_images)


    dataset = YoloV5Dataset("mydata_one_button\\Images", 1, True)

    original_model = onnx.load("./weights/phone.onnx")
    num_init_samples = 100

    # We'll ignore the detector head so it is not quantized.
    ignored_scopes = [
        # Head branch 1
        "Mul_217",
        "Add_219",
        "Mul_221",
        "Mul_223",
        "Mul_227",
        # Head branch 2
        "Mul_251",
        "Add_253",
        "Mul_255",
        "Mul_257",
        "Mul_261",
        # Head branch 3
        "Mul_285",
        "Add_287",
        "Mul_289",
        "Mul_291",
        "Mul_295",
        # "Conv_287",
        # "Conv_293",
        # "Conv_225",
    ]
    output_model_path = "./weights/phone-quantized.onnx"

    # Step 1: Create a pipeline of compression algorithms.
    builder = CompressionBuilder()

    # Step 2: Create the quantization algorithm and add it to the builder.
    quantization_parameters = PostTrainingQuantizationParameters(
        number_samples=num_init_samples,
        ignored_scopes=ignored_scopes
    )
    quantization = PostTrainingQuantization(quantization_parameters)
    builder.add_algorithm(quantization)

    # Step 4: Execute the pipeline.
    nncf_logger.info("Post-Training Quantization has just started!")
    quantized_model = builder.apply(original_model, dataset)

    # Step 5: Save the quantized model.
    onnx.save(quantized_model, output_model_path)
    nncf_logger.info("The quantized model is saved to {}".format(output_model_path))

    onnx.checker.check_model(output_model_path)

AlexKoff88 commented 2 years ago

@liumingzhu6060, thanks for your interest in this feature. Frankly, we didn't validate it on Windows, since this is an experimental feature. But it is definitely worth considering in the future, given that this targets ONNX Runtime. For now, you can try Linux or WSL, which is easy to use on Windows. BTW, this can also be a YOLOv5 problem rather than an NNCF one. @kshpv, please take a look as well.
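If quantization does complete on Linux or WSL, a quick sanity check of the saved model under ONNX Runtime could look like the sketch below. This is hypothetical and not from the thread: it assumes onnxruntime is installed and that the exported YOLOv5 model takes a single 1x3x640x640 float32 input; adjust the shape to match your export.

    # Hypothetical check that the quantized model loads and runs in ONNX Runtime.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("./weights/phone-quantized.onnx",
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name

    # Assumed input shape for a 640x640 YOLOv5 export; change it to match your model.
    dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
    outputs = session.run(None, {input_name: dummy})
    print([o.shape for o in outputs])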

kshpv commented 2 years ago

Hello @liumingzhu6060!

Thanks for the question. To get more details: which version of NNCF are you using? I recommend trying the latest develop branch, because this issue seems to be resolved by this PR: https://github.com/openvinotoolkit/nncf/pull/1233
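For reference, a quick way to check the installed version is the standard __version__ attribute; the pip command below is one possible way to get the develop branch, shown as an example rather than an official instruction.

    # Print the installed NNCF version to decide whether an update is needed.
    import nncf
    print(nncf.__version__)

    # One way to install the latest develop branch (run in a shell, not in Python):
    #   pip install git+https://github.com/openvinotoolkit/nncf.git@develop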