thousrm / universal_NPU-CNN_accelerator

Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
MIT License

An error occurs when running the py file. It is probably a library version problem! #51

Open Anas-liu opened 10 months ago

Anas-liu commented 10 months ago

Thanks for the great work! But when I try to reproduce it, I get an error when running the py file. See the error report below. The error occurs at the load_model statement, and it is probably caused by an incompatible library version. Could you give more detailed setup instructions? Thanks again!

D:\conda_data\envs\pytorch_1_12\python.exe E:/code/topic/universal_NPU-CNN_accelerator/generate_par.py
D:\conda_data\envs\pytorch_1_12\lib\site-packages\keras\saving\saved_model\load.py:115: RuntimeWarning: Unexpected end-group tag: Not all data was converted
  metadata.ParseFromString(file_content)
2024-01-05 11:09:57.531010: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-01-05 11:09:57.976125: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1656 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
Traceback (most recent call last):
  File "E:\code\topic\universal_NPU-CNN_accelerator\generate_par.py", line 9, in <module>
    model = load_model('mymodel')
  File "D:\conda_data\envs\pytorch_1_12\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "D:\conda_data\envs\pytorch_1_12\lib\site-packages\tensorflow\python\saved_model\load.py", line 915, in load_partial
    raise FileNotFoundError(
FileNotFoundError: Op type not registered 'DisableCopyOnRead' in binary running on ANAS. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed. You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.

Process finished with exit code 1
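For reference, this is a minimal check of which TensorFlow and Keras versions the failing script actually sees in this conda environment (plain standard imports, nothing specific to this repository):

import tensorflow as tf
import keras

# Print the versions that generate_par.py runs against; if they are older
# than the versions used to save 'mymodel', ops such as 'DisableCopyOnRead'
# are not registered and load_model fails as shown above.
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)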

thousrm commented 10 months ago

Thanks for your interest. The version of Keras that I use is 2.13.1.

I think there have been some updates to the load_model function. I will review and edit this later.
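Until then, a workaround sketch: install a TensorFlow release that matches the Keras version above and load the model from that environment. The exact pin is an assumption (TensorFlow 2.13.x is the release line that ships Keras 2.13.1), and the import below is ordinary Keras usage rather than the exact line in generate_par.py:

# Workaround sketch, assuming TensorFlow 2.13.x (bundles Keras 2.13.1):
#   pip install "tensorflow==2.13.*"
from keras.models import load_model

# 'mymodel' is the SavedModel referenced by generate_par.py; with a matching
# TensorFlow/Keras build the 'DisableCopyOnRead' op should be registered and
# the load should succeed.
model = load_model('mymodel')
model.summary()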

I apologise for the lack of recent updates. I've been very busy lately and haven't had time to update.