nfbalbontin opened 3 years ago
@nfbalbontin One of the reasons could be that the model you are trying to load is too big for the GPU to handle. Have you checked the GPU utilization? Also, make sure that you are exporting the model library path in your workspace. A similar problem can be found here #1152
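One quick way to rule out an up-front allocation failure is to enable GPU memory growth before loading the model, so TensorFlow allocates memory on demand instead of all at once. A minimal sketch, assuming TF 2.x (this snippet is illustrative, not taken from the thread):

```python
import tensorflow as tf

# Enable on-demand GPU memory allocation; helps distinguish
# "model too big for the GPU" from other failures.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print("Num GPUs Available:", len(gpus))
```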
Hi! I am exporting the library to my actual workspace. I checked if the GPU is available with:
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Giving me back:
Num GPUs Available: 1
After that, while running the process, I simultaneously checked the GPU with nvidia-smi -l 1, which showed me:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:1E.0 Off | 0 |
| N/A 43C P0 41W / 300W | 309MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 11107 C python 307MiB |
+-----------------------------------------------------------------------------+
So apparently that isn't the problem.
@nfbalbontin Interesting.
Can you add this line to your code before loading the model and see what happens?
tf.keras.backend.set_learning_phase(0)
Thanks again for the reply. I added the line before generating the frozen graph (I assumed that's what you meant by "before loading the model"). Still, I get the same error as before. This is the line that I added before the first step shown above:
tf.keras.backend.set_learning_phase(0)
/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:435: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
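As the deprecation warning itself suggests, the non-deprecated equivalent of `set_learning_phase(0)` is to pass `training=False` when calling the model. A minimal sketch (the model here is a hypothetical stand-in, not the one from this issue):

```python
import tensorflow as tf

# Hypothetical toy model; the point is the training=False argument.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,)),
    tf.keras.layers.Dropout(0.5),
])

x = tf.ones((1, 8))
y = model(x, training=False)  # inference mode: dropout is disabled
```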
Prerequisites
Please answer the following questions for yourself before submitting an issue.
1. The entire URL of the file you are using
https://github.com/tensorflow/models/blob/master/research/object_detection/inference/infer_detections.py
2. Describe the bug
I've been trying to create an inference detection graph from a frozen graph that I already generated. But each time I run the module
infer_detections.py
, I get the following error: ValueError: Input 1 of node StatefulPartitionedCall was passed float from stem_conv2d/kernel:0 incompatible with expected resource.
3. Steps to reproduce
For creating the frozen_graph.pb I run the following steps:
1. I obtain the output_node_names:
2. I generate the frozen_graph.pb:
3. I run the infer_detections.py module:
4. Expected behavior
Obtain the inference detection graph
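For reference, a TF2-style freezing step can be sketched with `convert_variables_to_constants_v2`, which folds resource variables into constants; errors mentioning "incompatible with expected resource" are commonly associated with a graph whose variables were not fully converted. This is a sketch with a hypothetical toy model, not the model or exact commands from this issue:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Hypothetical toy model standing in for the real one.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# Trace the model into a concrete function.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 4], tf.float32))

# Fold the resource variables into constants.
frozen = convert_variables_to_constants_v2(concrete)

# Serialize the frozen GraphDef.
tf.io.write_graph(frozen.graph.as_graph_def(), ".", "frozen_graph.pb",
                  as_text=False)
```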
5. Additional context
6. System information