Hui-zju opened 5 months ago
I have this problem, too.
Hi, sorry for the late reply. Is this issue still there? Could you update the dependencies from the latest requirements.txt and try again?
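For reference, a typical way to refresh the pinned dependencies (a sketch, assuming you are inside the project's virtual environment and requirements.txt sits at the repository root):

```shell
# Upgrade pip itself first, then reinstall the project's pinned dependencies.
# The venv name and file location are assumptions; adjust to your setup.
python -m pip install --upgrade pip
python -m pip install --upgrade -r requirements.txt
```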
I have solved this problem; some of the errors can be ignored.
I tried to run the model conversion on Windows and hit different errors in two attempts. The full logs are below; how can I fix these problems?
First:

```
(openvino_env) C:\Users\dell\chatglm3.openvino>python convert.py --model_id F:/LLM/chatglm3-6b-modelscope/chatglm3-6b --precision int4 --output F:/chatglm3-6b-ov
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvino
====Exporting IR=====
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|███████████████████| 7/7 [06:13<00:00, 53.39s/it]
Setting eos_token is not supported, use the default one.
Setting pad_token is not supported, use the default one.
Setting unk_token is not supported, use the default one.
Setting eos_token is not supported, use the default one.
Setting pad_token is not supported, use the default one.
Setting unk_token is not supported, use the default one.
Using the export variant default. Available variants are:
`_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
  warnings.warn(
C:\Users\dell\openvino_env\lib\site-packages\optimum\exporters\openvino\model_patcher.py:198: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if past_length:
WARNING:nncf:Weight compression expects a single reduction axis, but 2 given. Weight shape: (8192, 32, 2), reduction axes: (1, 2), node name: module.transformer/aten::index/Gather. The node won't be quantized.
Searching for Mixed-Precision Configuration ━━━━━━━━━━ 100% 112/112 • 0:03:55 • 0:00:00
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) | % ratio-defining parameters       |
|              |                           | (layers)                          |
+==============+===========================+===================================+
| 8            | 28% (31 / 114)            | 21% (29 / 112)                    |
+--------------+---------------------------+-----------------------------------+
| 4            | 72% (83 / 114)            | 79% (83 / 112)                    |
+--------------+---------------------------+-----------------------------------+
Applying Weight Compression ━━━━━━━━━━ 100% 114/114 • 0:04:48 • 0:00:00
Exception ignored in: <finalize object at 0x285f445c720; dead>
Traceback (most recent call last):
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\weakref.py", line 591, in __call__
    return info.func(*info.args, **(info.kwargs or {}))
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 859, in _cleanup
    cls._rmtree(name, ignore_errors=ignore_errors)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 855, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 750, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 620, in _rmtree_unsafe
    onerror(os.unlink, fullname, sys.exc_info())
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 846, in onerror
    cls._rmtree(path, ignore_errors=ignore_errors)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 855, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 750, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 601, in _rmtree_unsafe
    onerror(os.scandir, path, sys.exc_info())
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 598, in _rmtree_unsafe
    with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] 目录名称无效。 (The directory name is invalid.): 'C:\Users\dell\AppData\Local\Temp\tmpl2bqxzug\openvino_model.bin'
Configuration saved in F:\chatglm3-6b-ov\openvino_config.json
====Exporting tokenizer=====
WARNING:transformers_modules.chatglm3-6b.tokenization_chatglm:Setting eos_token is not supported, use the default one.
WARNING:transformers_modules.chatglm3-6b.tokenization_chatglm:Setting pad_token is not supported, use the default one.
WARNING:transformers_modules.chatglm3-6b.tokenization_chatglm:Setting unk_token is not supported, use the default one.
```

Second:

```
(openvino_env) C:\Users\dell\chatglm3.openvino>python convert.py --model_id F:\LLM\chatglm3-6b-modelscope\chatglm3-6b --output F:\chatglm3-6b-OV
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvino
====Exporting IR=====
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|███████████████████| 7/7 [06:38<00:00, 56.91s/it]
Setting eos_token is not supported, use the default one.
Setting pad_token is not supported, use the default one.
Setting unk_token is not supported, use the default one.
Setting eos_token is not supported, use the default one.
Setting pad_token is not supported, use the default one.
Setting unk_token is not supported, use the default one.
Using the export variant default. Available variants are:
`_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
  warnings.warn(
C:\Users\dell\openvino_env\lib\site-packages\optimum\exporters\openvino\model_patcher.py:198: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if past_length:
Traceback (most recent call last):
  File "C:\Users\dell\chatglm3.openvino\convert.py", line 53, in
```