ramtinz opened this issue 2 years ago
Hi, how much GPU memory does your server have? The example model I provided should run on a GPU with 24 GB of memory.
It's 40 GB, and here is the output log from running the job. The two GPUs are used only for DeepMicrobes and not for any other processes.
$ predict_DeepMicrobes.sh -i SRR5935743.tfrec -b 8192 -l species -p 8 -m /home/user/DM/weights_species -o SRR5935743
Prediction started ...
2021-11-01 12:05:53.848810: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-11-01 12:05:54.066837: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: NVIDIA A100-PCIE-40GB major: 8 minor: 0 memoryClockRate(GHz): 1.41
pciBusID: 0000:41:00.0
totalMemory: 39.59GiB freeMemory: 39.18GiB
2021-11-01 12:05:54.158532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 1 with properties:
name: NVIDIA A100-PCIE-40GB major: 8 minor: 0 memoryClockRate(GHz): 1.41
pciBusID: 0000:a1:00.0
totalMemory: 39.59GiB freeMemory: 39.18GiB
2021-11-01 12:05:54.160654: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0, 1
2021-11-01 12:05:56.184375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-11-01 12:05:56.184473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0 1
2021-11-01 12:05:56.184492: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N Y
2021-11-01 12:05:56.184502: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y N
2021-11-01 12:05:56.184736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/device:GPU:0 with 38043 MB memory) -> physical GPU (device: 0, name: NVIDIA A100-PCIE-40GB, pci bus id: 0000:41:00.0, compute capability: 8.0)
2021-11-01 12:05:56.428940: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/device:GPU:1 with 38043 MB memory) -> physical GPU (device: 1, name: NVIDIA A100-PCIE-40GB, pci bus id: 0000:a1:00.0, compute capability: 8.0)
I1101 12:05:56.663399 140319671247104 tf_logging.py:115] Using default config.
I1101 12:05:56.663751 140319671247104 tf_logging.py:115] Using config: {'_model_dir': '/home/user/DM/weights_species', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f9e576f6710>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
I1101 12:05:56.712367 140319671247104 tf_logging.py:115] Calling model_fn.
W1101 12:05:56.932611 140319671247104 tf_logging.py:125] From /home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py:417: calling reverse_sequence (from tensorflow.python.ops.array_ops) with seq_dim is deprecated and will be removed in a future version.
Instructions for updating:
seq_dim is deprecated, use seq_axis instead
W1101 12:05:56.933309 140319671247104 tf_logging.py:125] From /home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:432: calling reverse_sequence (from tensorflow.python.ops.array_ops) with batch_dim is deprecated and will be removed in a future version.
Instructions for updating:
batch_dim is deprecated, use batch_axis instead
I1101 12:05:57.088968 140319671247104 tf_logging.py:115] Done calling model_fn.
I1101 12:05:57.151509 140319671247104 tf_logging.py:115] Graph was finalized.
2021-11-01 12:05:57.151751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0, 1
2021-11-01 12:05:57.151844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-11-01 12:05:57.151853: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0 1
2021-11-01 12:05:57.151859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N Y
2021-11-01 12:05:57.151867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y N
2021-11-01 12:05:57.152064: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 38043 MB memory) -> physical GPU (device: 0, name: NVIDIA A100-PCIE-40GB, pci bus id: 0000:41:00.0, compute capability: 8.0)
2021-11-01 12:05:57.152418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 38043 MB memory) -> physical GPU (device: 1, name: NVIDIA A100-PCIE-40GB, pci bus id: 0000:a1:00.0, compute capability: 8.0)
I1101 12:05:57.184179 140319671247104 tf_logging.py:115] Restoring parameters from /home/user/DM/weights_species/model.ckpt-0
I1101 12:06:00.660186 140319671247104 tf_logging.py:115] Running local_init_op.
I1101 12:06:00.668687 140319671247104 tf_logging.py:115] Done running local_init_op.
2021-11-01 12:06:01.933267: E tensorflow/stream_executor/cuda/cuda_blas.cc:647] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
2021-11-01 12:06:01.933449: E tensorflow/stream_executor/cuda/cuda_blas.cc:647] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(8192, 400), b.shape=(400, 900), m=8192, n=900, k=400
[[Node: token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/concat, token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/MatMul/Enter)]]
[[Node: ArgMax/_157 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_455_ArgMax", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/DM/DeepMicrobes.py", line 365, in <module>
absl_app.run(main)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/absl/app.py", line 278, in run
_run_main(main, args)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/absl/app.py", line 239, in _run_main
sys.exit(main(argv))
File "/home/user/DM/DeepMicrobes.py", line 348, in main
flags.FLAGS.translate)
File "/home/user/DM/models/format_prediction.py", line 88, in paired_report
batch_prob = next(prediction_generator)['probabilities']
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 551, in predict
preds_evaluated = mon_sess.run(predictions)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 577, in run
run_metadata=run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1053, in run
run_metadata=run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1144, in run
raise six.reraise(*original_exc_info)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/six.py", line 719, in reraise
raise value
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1129, in run
return self._sess.run(*args, **kwargs)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1201, in run
run_metadata=run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 981, in run
return self._sess.run(*args, **kwargs)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(8192, 400), b.shape=(400, 900), m=8192, n=900, k=400
[[Node: token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/concat, token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/MatMul/Enter)]]
[[Node: ArgMax/_157 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_455_ArgMax", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'token_lstm/bidirectional_rnn/bw/bw/while/bw/coupled_input_forget_gate_lstm_cell/MatMul', defined at:
File "/home/user/DM/DeepMicrobes.py", line 365, in <module>
absl_app.run(main)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/absl/app.py", line 278, in run
_run_main(main, args)
File "/home/user/anaconda3/envs/DM/lib/python3.6/site-packages/absl/app.py", line 239, in _run_main
sys.exit(main(argv))
I have not seen this error before. Here are a few things you could try.
Good luck!
None of them worked. One hint is that GPU utilization stays at zero the whole time, even though the process is loaded and GPU memory is in use according to the nvidia-smi output.
This is a log from nvidia-smi dmon -i 0 -s mu -d 1 -o TD while I ran the prediction command on the example file:
#Date Time gpu fb bar1 sm mem enc dec
#YYYYMMDD HH:MM:SS Idx MB MB % % % %
20211112 13:21:29 0 466 4 0 0 0 0
20211112 13:21:30 0 468 4 0 0 0 0
20211112 13:21:31 0 468 4 0 0 0 0
20211112 13:21:32 0 468 4 0 0 0 0
20211112 13:21:33 0 468 4 0 0 0 0
20211112 13:21:34 0 468 4 0 0 0 0
20211112 13:21:35 0 470 4 0 0 0 0
20211112 13:21:36 0 470 4 0 0 0 0
20211112 13:21:37 0 470 4 0 0 0 0
20211112 13:21:38 0 470 4 0 0 0 0
20211112 13:21:39 0 470 4 0 0 0 0
20211112 13:21:40 0 472 4 0 0 0 0
20211112 13:21:41 0 472 4 0 0 0 0
20211112 13:21:42 0 472 4 0 0 0 0
20211112 13:21:43 0 472 4 0 0 0 0
20211112 13:21:44 0 472 4 0 0 0 0
20211112 13:21:45 0 472 4 0 0 0 0
20211112 13:21:46 0 472 4 0 0 0 0
20211112 13:21:47 0 472 4 0 0 0 0
20211112 13:21:48 0 472 4 0 0 0 0
20211112 13:21:49 0 474 4 0 0 0 0
20211112 13:21:50 0 474 4 0 0 0 0
20211112 13:21:51 0 474 4 0 0 0 0
20211112 13:21:52 0 474 4 0 0 0 0
20211112 13:21:53 0 474 4 0 0 0 0
20211112 13:21:55 0 476 4 0 0 0 0
20211112 13:21:56 0 476 4 0 0 0 0
20211112 13:21:57 0 476 4 0 0 0 0
20211112 13:21:58 0 476 4 0 0 0 0
20211112 13:21:59 0 476 4 0 0 0 0
20211112 13:22:00 0 476 4 0 0 0 0
20211112 13:22:01 0 478 4 0 0 0 0
20211112 13:22:02 0 478 4 0 0 0 0
20211112 13:22:03 0 478 4 0 0 0 0
20211112 13:22:04 0 478 4 0 0 0 0
20211112 13:22:05 0 478 4 0 0 0 0
20211112 13:22:06 0 478 4 0 0 0 0
20211112 13:22:07 0 480 4 0 0 0 0
20211112 13:22:08 0 480 4 0 0 0 0
20211112 13:22:09 0 480 4 0 0 0 0
20211112 13:22:10 0 480 4 0 0 0 0
20211112 13:22:11 0 480 4 0 0 0 0
20211112 13:22:12 0 480 4 0 0 0 0
20211112 13:22:13 0 480 4 0 0 0 0
#Date Time gpu fb bar1 sm mem enc dec
#YYYYMMDD HH:MM:SS Idx MB MB % % % %
20211112 13:22:14 0 482 4 0 0 0 0
20211112 13:22:15 0 482 4 0 0 0 0
20211112 13:22:16 0 482 4 0 0 0 0
20211112 13:22:17 0 482 4 0 0 0 0
20211112 13:22:18 0 482 4 0 0 0 0
20211112 13:22:19 0 482 4 0 0 0 0
20211112 13:22:20 0 484 4 0 0 0 0
20211112 13:22:21 0 484 4 0 0 0 0
20211112 13:22:22 0 484 4 0 0 0 0
20211112 13:22:23 0 486 4 0 0 0 0
20211112 13:22:24 0 486 4 0 0 0 0
20211112 13:22:25 0 486 4 0 0 0 0
20211112 13:22:26 0 486 4 0 0 0 0
20211112 13:22:27 0 486 4 0 0 0 0
20211112 13:22:28 0 488 4 0 0 0 0
20211112 13:22:29 0 488 4 0 0 0 0
20211112 13:22:30 0 488 4 0 0 0 0
20211112 13:22:31 0 488 4 0 0 0 0
20211112 13:22:32 0 488 4 0 0 0 0
20211112 13:22:33 0 488 4 0 0 0 0
20211112 13:22:34 0 488 4 0 0 0 0
20211112 13:22:35 0 488 4 0 0 0 0
20211112 13:22:36 0 490 4 0 0 0 0
20211112 13:22:37 0 38534 4 0 0 0 0
20211112 13:22:38 0 38534 4 0 0 0 0
20211112 13:22:39 0 38534 4 0 0 0 0
20211112 13:22:40 0 38542 4 0 0 0 0
20211112 13:22:41 0 38542 4 17 3 0 0
20211112 13:22:42 0 38542 4 0 0 0 0
20211112 13:22:43 0 38542 4 0 0 0 0
20211112 13:22:44 0 38542 4 0 0 0 0
20211112 13:22:45 0 38542 4 0 0 0 0
20211112 13:22:46 0 38542 4 0 0 0 0
20211112 13:22:47 0 38542 4 0 0 0 0
20211112 13:22:48 0 38542 4 0 0 0 0
20211112 13:22:49 0 38542 4 0 0 0 0
20211112 13:22:50 0 38542 4 0 0 0 0
20211112 13:22:51 0 38544 4 0 0 0 0
20211112 13:22:52 0 38546 4 0 0 0 0
20211112 13:22:53 0 38546 4 0 0 0 0
20211112 13:22:54 0 38546 4 0 0 0 0
20211112 13:22:55 0 38546 4 0 0 0 0
20211112 13:22:56 0 38546 4 0 0 0 0
20211112 13:22:57 0 38546 4 0 0 0 0
#Date Time gpu fb bar1 sm mem enc dec
#YYYYMMDD HH:MM:SS Idx MB MB % % % %
20211112 13:22:58 0 38546 4 0 0 0 0
20211112 13:22:59 0 38546 4 0 0 0 0
20211112 13:23:00 0 38546 4 0 0 0 0
20211112 13:23:01 0 38548 4 0 0 0 0
20211112 13:23:02 0 38548 4 0 0 0 0
20211112 13:23:03 0 38550 4 0 0 0 0
20211112 13:23:04 0 38550 4 0 0 0 0
20211112 13:23:05 0 38550 4 0 0 0 0
20211112 13:23:06 0 38550 4 0 0 0 0
20211112 13:23:07 0 38550 4 0 0 0 0
20211112 13:23:08 0 38552 4 0 0 0 0
20211112 13:23:09 0 38552 4 0 0 0 0
20211112 13:23:10 0 38552 4 0 0 0 0
20211112 13:23:11 0 38552 4 0 0 0 0
20211112 13:23:12 0 38552 4 0 0 0 0
20211112 13:23:13 0 38552 4 0 0 0 0
20211112 13:23:14 0 38554 4 0 0 0 0
20211112 13:23:15 0 38554 4 0 0 0 0
20211112 13:23:16 0 38554 4 0 0 0 0
20211112 13:23:17 0 38554 4 0 0 0 0
20211112 13:23:18 0 38554 4 0 0 0 0
20211112 13:23:19 0 38556 4 0 0 0 0
20211112 13:23:20 0 38556 4 0 0 0 0
20211112 13:23:21 0 38556 4 0 0 0 0
20211112 13:23:22 0 38556 4 0 0 0 0
20211112 13:23:23 0 38556 4 0 0 0 0
20211112 13:23:24 0 38556 4 0 0 0 0
20211112 13:23:25 0 38556 4 0 0 0 0
20211112 13:23:26 0 38558 4 0 0 0 0
20211112 13:23:27 0 38558 4 0 0 0 0
20211112 13:23:28 0 38558 4 0 0 0 0
20211112 13:23:29 0 38558 4 0 0 0 0
20211112 13:23:30 0 38558 4 0 0 0 0
20211112 13:23:31 0 38560 4 0 0 0 0
20211112 13:23:33 0 38560 4 0 0 0 0
20211112 13:23:34 0 38560 4 0 0 0 0
20211112 13:23:35 0 38560 4 0 0 0 0
20211112 13:23:36 0 38560 4 0 0 0 0
20211112 13:23:37 0 38560 4 0 0 0 0
20211112 13:23:38 0 38562 4 0 0 0 0
20211112 13:23:39 0 38562 4 0 0 0 0
20211112 13:23:40 0 38562 4 0 0 0 0
20211112 13:23:41 0 38562 4 0 0 0 0
20211112 13:23:42 0 38562 4 0 0 0 0
#Date Time gpu fb bar1 sm mem enc dec
#YYYYMMDD HH:MM:SS Idx MB MB % % % %
20211112 13:23:43 0 38562 4 0 0 0 0
20211112 13:23:44 0 38562 4 0 0 0 0
20211112 13:23:45 0 38562 4 0 0 0 0
20211112 13:23:46 0 38562 4 0 0 0 0
20211112 13:23:47 0 38564 4 0 0 0 0
20211112 13:23:48 0 38564 4 0 0 0 0
20211112 13:23:49 0 38564 4 0 0 0 0
20211112 13:23:50 0 38564 4 0 0 0 0
20211112 13:23:51 0 38566 4 0 0 0 0
20211112 13:23:52 0 38566 4 0 0 0 0
20211112 13:23:53 0 38566 4 0 0 0 0
20211112 13:23:54 0 38566 4 0 0 0 0
20211112 13:23:55 0 38566 4 0 0 0 0
20211112 13:23:56 0 38566 4 0 0 0 0
20211112 13:23:57 0 38566 4 0 0 0 0
20211112 13:23:58 0 38568 4 0 0 0 0
20211112 13:23:59 0 38568 4 0 0 0 0
20211112 13:24:00 0 38568 4 0 0 0 0
20211112 13:24:01 0 38568 4 0 0 0 0
20211112 13:24:02 0 38570 4 0 0 0 0
20211112 13:24:03 0 38570 4 0 0 0 0
20211112 13:24:04 0 38570 4 0 0 0 0
20211112 13:24:05 0 38570 4 0 0 0 0
20211112 13:24:06 0 38570 4 0 0 0 0
20211112 13:24:07 0 38570 4 0 0 0 0
20211112 13:24:08 0 38572 4 0 0 0 0
20211112 13:24:09 0 38572 4 0 0 0 0
20211112 13:24:10 0 38572 4 0 0 0 0
20211112 13:24:11 0 38572 4 0 0 0 0
20211112 13:24:12 0 38574 4 0 0 0 0
20211112 13:24:13 0 38574 4 0 0 0 0
20211112 13:24:14 0 38576 4 0 0 0 0
20211112 13:24:15 0 38568 4 0 0 0 0
20211112 13:24:16 0 0 1 100 3 0 0
20211112 13:24:17 0 0 1 0 0 0 0
20211112 13:24:18 0 0 1 0 0 0 0
20211112 13:24:19 0 0 1 0 0 0 0
20211112 13:24:20 0 0 1 0 0 0 0
20211112 13:24:21 0 0 1 0 0 0 0
20211112 13:24:22 0 0 1 0 0 0 0
20211112 13:24:23 0 0 1 0 0 0 0
20211112 13:24:24 0 0 1 0 0 0 0
20211112 13:24:25 0 0 1 0 0 0 0
20211112 13:24:26 0 0 1 0 0 0 0
#Date Time gpu fb bar1 sm mem enc dec
#YYYYMMDD HH:MM:SS Idx MB MB % % % %
20211112 13:24:27 0 0 1 0 0 0 0
20211112 13:24:28 0 0 1 0 0 0 0
20211112 13:24:29 0 0 1 0 0 0 0
20211112 13:24:30 0 0 1 0 0 0 0
20211112 13:24:31 0 0 1 0 0 0 0
20211112 13:24:32 0 0 1 0 0 0 0
20211112 13:24:33 0 0 1 0 0 0 0
20211112 13:24:34 0 0 1 0 0 0 0
20211112 13:24:35 0 0 1 0 0 0 0
20211112 13:24:36 0 0 1 0 0 0 0
20211112 13:24:37 0 0 1 0 0 0 0
20211112 13:24:38 0 0 1 0 0 0 0
20211112 13:24:39 0 0 1 0 0 0 0
It seems that memory is not the key problem. You could try running some other simple TensorFlow scripts in the same environment to check whether work can actually be placed on your GPU.
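For example, something as simple as the following rough sketch (assuming the same DM conda environment and TensorFlow 1.x) should drive a cuBLAS matrix multiply on the GPU and show nonzero sm utilization in nvidia-smi dmon:

# Minimal TF 1.x GPU sanity check (sketch): a large matmul exercises the
# same cuBLAS SGEMM path that fails in DeepMicrobes.
import tensorflow as tf

with tf.device('/device:GPU:0'):
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.reduce_sum(tf.matmul(a, b))   # forces an SGEMM on the GPU

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))                   # should print a finite number

If this fails with the same CUBLAS_STATUS_EXECUTION_FAILED error, the problem is in the environment (TensorFlow/CUDA/driver combination) rather than in DeepMicrobes itself.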
Hi, I cannot run the example prediction task on a server running Red Hat 8.4. When I run it I get this error:
E tensorflow/stream_executor/cuda/cuda_blas.cc:647] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
I googled it; it seems to be related to a memory issue, and some people have suggested (e.g. here) adding a few lines inside the Python code to avoid it.
I inserted the code in DeepMicrobes.py just after import tensorflow as tf, but it didn't solve the problem. I also inserted it in embed_lstm_attention.py and input_pipeline.py without success. In addition, I tried code to limit GPU memory usage so that cuBLAS could run, but that didn't help either (a commonly suggested snippet of this kind is sketched below). Another possibility is that the OS is incompatible with this version of TensorFlow (1.9.0) or its cudatoolkit, or that I should install the cudatoolkit from another conda channel that includes patches for this issue.
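For reference, the kind of snippet people usually suggest looks roughly like this (a sketch only; the exact lines I tried may have differed, and passing it through a RunConfig is just one way to reach an Estimator-based script like DeepMicrobes.py):

# Commonly suggested TF 1.x memory configuration (illustrative sketch).
import tensorflow as tf

gpu_options = tf.GPUOptions(allow_growth=True)   # allocate GPU memory on demand
# Alternatively, cap the fraction of GPU memory TensorFlow may claim:
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8)
session_config = tf.ConfigProto(gpu_options=gpu_options)

# For a plain session:
#   sess = tf.Session(config=session_config)
# For an Estimator-based script, the same options can be passed via RunConfig:
run_config = tf.estimator.RunConfig(session_config=session_config)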
Have you seen such a problem before, and do you have any idea how to solve it? Many thanks in advance for your help.