mroelandts opened this issue 6 years ago
I had a similar issue. I was using tensorflow 1.5. I downgraded to 1.4.1 and now it works.
Yeah, that's bound to the TensorFlow version. The reason is the version the model was exported with; there are incompatibilities between 1.5 and 1.4.
@MatthiasRoelandts does the error still appear?
@GustavZ I have now installed tensorflow 1.7 on the Jetson TX2 but face the same issue:
```
Loading label map
Building Graph
2018-03-29 01:16:27.067350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-03-29 01:16:27.067445: E tensorflow/core/common_runtime/direct_session.cc:167] Internal: CUDA runtime implicit initialization on GPU:0 failed. Status: unknown error
Traceback (most recent call last):
  File "object_detection.py", line 301, in <module>
    main()
  File "object_detection.py", line 297, in main
    detection(graph, category, score, expand)
  File "object_detection.py", line 180, in detection
    with tf.Session(graph=detection_graph,config=config) as sess:
  File "/home/nvidia/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1509, in __init__
    super(Session, self).__init__(target, graph, config=config)
  File "/home/nvidia/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 638, in __init__
    self._session = tf_session.TF_NewDeprecatedSession(opts, status)
  File "/home/nvidia/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
```
Can you suggest what to do?
@imxboards I also sometimes face this error when I switch tensorflow versions. It is definitely caused by TensorFlow's internal changes and incompatibilities. Try using TF-1.4.
I myself am also not able to run it with TF-1.7.
I'm experiencing this exact same issue. I had my code up and running on the Jetson TX2 with tensorflow 1.7, but now, after powering down and traveling for a few days, it's giving me this error. I can run it without the GPU using `export CUDA_VISIBLE_DEVICES=''`, however this defeats the purpose. Anybody have any solutions? I could try going to an older version of TF, however all the Jetson builds/wheel files I can find are for Python 2.7, whereas my entire project is written in Python 3.5. Any help would be greatly appreciated!
@ibeckermayer - it is often surprisingly easy to get a Python program running under both Python 3.5 and 2.7. It only took me 30 minutes to convert my 3.5 program - I only had issues where I was using datetime routines to generate UTC time.
Hi @GustavZ,
I searched about multiple session problem. here: https://devtalk.nvidia.com/default/topic/1035884/jetson-tx2/cuda-error-creating-more-than-one-session-using-tensorflow/post/5265161/#5265161
We need to add gpu_options to the tf.Session() that is created first. In v1.0 that is in object_detection.py; in v2.0 it is in rod/model.py.
```python
import tensorflow as tf

def load_frozenmodel():
    ...
    input_graph = tf.Graph()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = allow_memory_growth
    with tf.Session(graph=input_graph, config=config):
        ...
```
@naisy which problem should this solve?
This solves the problem in environments where a second session cannot be created:
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
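To illustrate the idea (a minimal sketch assuming TF 1.x; the graphs here are placeholders, not the repo's actual models): the first session is created with `gpu_options.allow_growth` set, after which a second session can be opened on the same GPU.

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory incrementally

# First session: created with the gpu_options above.
first_sess = tf.Session(graph=tf.Graph(), config=config)

# Second session: without allow_growth on the first session, creating this one
# can fail with "InternalError: Failed to create session."
second_sess = tf.Session(graph=tf.Graph(), config=config)
```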
I am still getting this issue on version 2.0.0
Limiting the GPU memory solves this issue for me:
```python
import tensorflow as tf

MEMORY_LIMIT = 1024
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=MEMORY_LIMIT)])
    except RuntimeError as e:
        print(e)
```
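As a related alternative (not from this comment, just a sketch assuming TF 2.x): instead of a hard memory cap you can enable memory growth, the TF2 counterpart of the `allow_growth` option mentioned earlier, so TensorFlow only grabs GPU memory as it needs it.

```python
import tensorflow as tf

# Enable incremental GPU memory allocation instead of a fixed cap.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    try:
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        # Memory growth must be set before any GPUs have been initialized.
        print(e)
```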
I am using tensorflow 2.0.0 from C++ code and am facing the same issue. Any solutions, guys?
I'm using the latest version of tensorflow (2.3.0) with python 3.6.10 and cuda 10.1 and am facing the same issue as well on Ubuntu 18.04. `export CUDA_VISIBLE_DEVICES=0` or `export CUDA_VISIBLE_DEVICES=''` help to run the code, but not consistently. I'm new and don't actually know what these exports are doing exactly. This is the error I get:

`RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid`
@ctsams9 I am having a same situation as yours. Did you find any solution so far ?
@Horcrux1 No luck so far. I'm planning to post some of the results I get after running some tests here and on stackoverflow soon.
Just to clarify @ctsams9's exports: both of them set the variable CUDA_VISIBLE_DEVICES, which is useful if you only want cuda/tensorflow to see and work with a specific GPU (0 means the first GPU). If the variable is set to '', then tensorflow will only use the CPU for calculations (there is a short sketch of this below).
In my case it runs fine without making my only GPU (0) visible, but I need to figure out what the "device kernel image is invalid" error means before I can proceed with my projects.
By the way, I am getting the same error on Arch after upgrading tensorflow from pip.
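For reference, the same effect can be achieved from inside a script; a minimal sketch (the variable must be set before TensorFlow is imported and initializes CUDA):

```python
import os

# '' hides all GPUs (CPU-only); '0' exposes only the first GPU.
os.environ['CUDA_VISIBLE_DEVICES'] = ''

import tensorflow as tf
# With GPUs hidden, this prints an empty list.
print(tf.config.experimental.list_physical_devices('GPU'))
```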
> Limiting the GPU memory solves this issue for me.
I tried to limit TF GPU memory earlier but had no success, sadly :(
Okay, downgrading tensorflow to version 2.2 did it:
pip install --force-reinstall tensorflow-gpu==2.2
Also, if you have ever used `pip install` with `--ignore-installed` to install tensorflow versions or dependencies, consider removing them first.
What's strange for me is that it takes some minutes to initialize tensorflow with GPU. I don't think it's normal :S
Hello GustavZ, I ran into some problems running your code on the Jetson TX2. At first there were no problems at all, but after a few days I keep receiving this error. Full terminal log:
If I first run the program without splitting the model and afterwards again with the split turned on, it all works fine! But after I reboot, the problem arises again... Do you have any idea?