yiranran / Audio-driven-TalkingFace-HeadPose

Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (arXiv 2020) and "Predicting Personalized Head Movement From Short Video and Speech Signal" (TMM 2022)
https://ieeexplore.ieee.org/document/9894719

Running Colab Demo; FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth' #64

Open shreyanshdas00 opened 3 years ago

shreyanshdas00 commented 3 years ago

Hi

While running the provided Colab demo notebook, I encounter the following error in the Finetune GAN step: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth'

Below is the whole output message:

```
Elapsed time is 19.812 seconds.
----------------- Options ---------------
Nw: 3
alpha: 0.3
attention: 1
batch_size: 1
beta1: 0.5
checkpoints_dir: ./checkpoints
continue_train: True [default: False]
crop_size: 256
dataroot: 31_bmold_win3 [default: None]
dataset_mode: aligned_feature_multi
direction: AtoB
display_env: memory_seq_31 [default: main]
display_freq: 400
display_id: 0
display_ncols: 4
display_port: 8097
display_server: http://localhost
display_winsize: 256
do_saturate_mask: False
epoch: 0 [default: latest]
epoch_count: 1
gan_mode: vanilla
gpu_ids: 0
iden_feat_dim: 512
iden_feat_dir: arcface/iden_feat/
iden_thres: 0.98
init_gain: 0.02
init_type: normal
input_nc: 3
isTrain: True [default: None]
lambda_L1: 100.0
lambda_mask: 2.0 [default: 0.1]
lambda_mask_smooth: 1e-05
load_iter: 0 [default: 0]
load_size: 286
lr: 0.0001 [default: 0.0002]
lr_decay_iters: 50
lr_policy: linear
max_dataset_size: inf
mem_size: 30000
model: memory_seq [default: cycle_gan]
n_layers_D: 3
name: memory_seq_p2p/31 [default: experiment_name]
ndf: 64
netD: basic
netG: unetac_adain_256
ngf: 64
niter: 60 [default: 100]
niter_decay: 0 [default: 100]
no_dropout: False
no_flip: False
no_html: False
norm: batch
num_threads: 4
output_nc: 3
phase: train
pool_size: 0
preprocess: resize_and_crop
print_freq: 100
resizemethod: lanczos
save_by_iter: False
save_epoch_freq: 5
save_latest_freq: 5000
serial_batches: False
spatial_feat_dim: 512
suffix:
top_k: 256
update_html_freq: 1000
verbose: False
----------------- End -------------------
dataset [AlignedFeatureMultiDataset] was created
The number of training images = 298
initialize network with normal
initialize network with normal
model [MemorySeqModel] was created
loading the model from ./checkpoints/memory_seq_p2p/0_net_G.pth
loading the model from ./checkpoints/memory_seq_p2p/0_net_D.pth
loading the model from ./checkpoints/memory_seq_p2p/0_net_mem.pth
---------- Networks initialized -------------
[Network G] Total number of parameters : 259.056 M
[Network D] Total number of parameters : 2.775 M
[Network mem] Total number of parameters : 11.952 M

create web directory ./checkpoints/memory_seq_p2p/31/web...
----------------- Options ---------------
Nw: 3
alpha: 0.3
aspect_ratio: 1.0
attention: 1
batch_size: 1
blinkframeid: 41
checkpoints_dir: ./checkpoints
crop_size: 256
dataroot: 31_bmold_win3 [default: None]
dataset_mode: aligned_feature_multi
direction: AtoB
display_winsize: 256
do_saturate_mask: False
epoch: 60 [default: latest]
eval: False
gpu_ids: 0
iden_feat_dim: 512
iden_feat_dir: arcface/iden_feat/
iden_thres: 0.98
imagefolder: images60 [default: images]
init_gain: 0.02
init_type: normal
input_nc: 3
isTrain: False [default: None]
load_iter: 0 [default: 0]
load_size: 256
max_dataset_size: inf
mem_size: 30000
model: memory_seq [default: test]
n: 26
n_layers_D: 3
name: memory_seq_p2p/31 [default: experiment_name]
ndf: 64
netD: basic
netG: unetac_adain_256
ngf: 64
no_dropout: False
no_flip: False
norm: batch
ntest: inf
num_test: 200 [default: 50]
num_threads: 4
output_nc: 3
phase: test
preprocess: resize_and_crop
resizemethod: lanczos
results_dir: ./results/
serial_batches: False
spatial_feat_dim: 512
suffix:
test_batch_list:
test_use_gt: 0
top_k: 256
verbose: False
----------------- End -------------------
dataset [AlignedFeatureMultiDataset] was created
initialize network with normal
model [MemorySeqModel] was created
loading the model from ./checkpoints/memory_seq_p2p/60_net_G.pth
19_news/31 31_bmold_win3
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
Traceback (most recent call last):
  File "test_batch.py", line 1, in <module>
    import face_model
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 11, in <module>
    import mxnet as mx
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/base.py", line 213, in <module>
    _LIB = _load_lib()
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/base.py", line 204, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/usr/local/envs/myenv/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudart.so.8.0: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "test_batch.py", line 1, in <module>
    import face_model
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 11, in <module>
    import mxnet as mx
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/base.py", line 213, in <module>
    _LIB = _load_lib()
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/mxnet/base.py", line 204, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/usr/local/envs/myenv/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudart.so.8.0: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "train.py", line 45, in <module>
    for i, data in enumerate(dataset):  # inner loop within one epoch
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/data/__init__.py", line 90, in __iter__
    for i, data in enumerate(self.dataloader):
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/data/aligned_feature_multi_dataset.py", line 94, in __getitem__
    B_feat = np.load(os.path.join(self.opt.iden_feat_dir,ss[-3],ss[-2],ss[-1][:-4]+'.npy'))
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/numpy/lib/npyio.py", line 422, in load
    fid = open(os_fspath(file), "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'arcface/iden_feat/19_news/31/frame187.npy'

Traceback (most recent call last):
  File "test.py", line 47, in <module>
    model.setup(opt)               # regular setup: load and print networks; create schedulers
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/models/base_model.py", line 89, in setup
    self.load_networks(load_suffix)
  File "/content/Audio-driven-TalkingFace-HeadPose/render-to-video/models/base_model.py", line 202, in load_networks
    state_dict = torch.load(load_path, map_location=str(self.device))
  File "/usr/local/envs/myenv/lib/python3.6/site-packages/torch/serialization.py", line 381, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth'
```

arongsamuel commented 3 years ago

Have you checked if your input video is 25fps?
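For reference, a quick way to check the frame rate and re-encode if needed (assuming ffmpeg/ffprobe are available; `input.mp4` is just a placeholder for your video):

```bash
# Print the video stream's frame rate, e.g. "25/1"
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 input.mp4

# Re-encode to 25 fps if it is not already
ffmpeg -i input.mp4 -r 25 input_25fps.mp4
```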

zerzerzerz commented 2 years ago

> While running the provided Colab demo notebook, I encounter the following error in the Finetune GAN step: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth'
>
> (same full output as in the issue description above)

Could you please tell me how to run this Colab demo? Directly running `pip install -r requirements_colab.txt` caused many package version conflicts. Thanks.
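One way to limit such conflicts is to install into a fresh environment rather than the base one. A minimal sketch, assuming conda is available; the environment name is arbitrary and this is not the repo's official recipe:

```bash
# Create and activate an isolated Python 3.6 environment
conda create -n talkingface python=3.6 -y
conda activate talkingface     # "source activate talkingface" on older conda

# Install the Colab requirements inside that environment
pip install -r requirements_colab.txt
```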

arongsamuel commented 2 years ago

@zerzerzerz this is mainly due to torchvision 0.4.0. Change it to torchvision==0.3.0.
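A minimal sketch of that change, assuming pip manages the environment (torchvision 0.3.0 was built against torch 1.1.0, so the torch pin may need to match):

```bash
pip uninstall -y torchvision
pip install torch==1.1.0 torchvision==0.3.0
```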

zerzerzerz commented 2 years ago

> @zerzerzerz this is mainly due to torchvision 0.4.0. Change it to torchvision==0.3.0.

Thanks a lot and I'll try it.

pegahs1993 commented 2 years ago

I have the same problem, but it was not solved by changing the version of torchvision. Did you find a solution? Thanks in advance.

Ethanoool commented 2 years ago

Don't specify the version of mxnet-cu101
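If it helps, a sketch of what "don't pin the version" can look like in practice; the `-cu101` suffix is an assumption and has to match the CUDA version of your runtime:

```bash
# Remove the pinned builds from the original requirements
pip uninstall -y mxnet mxnet-cu80

# Install an mxnet build for your CUDA version without pinning a release
pip install mxnet-cu101
```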

muxiddin19 commented 2 years ago

> Don't specify the version of mxnet-cu101

Hi! I have the same issue: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth'. What do you mean by "don't specify the version of mxnet-cu101"? The requirements.txt file contains the lines mxnet==1.5.1.post0 and mxnet-cu80==1.5.0, which seem related to your suggestion. Could you please clarify your solution?

Ethanoool commented 2 years ago

You should not look at this error: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/memory_seq_p2p/60_net_G.pth'

Instead, you should look at the first error above it. The file './checkpoints/memory_seq_p2p/60_net_G.pth' doesn't exist because the steps before it failed. In my experience, these errors happen because the libraries and their CUDA builds (in my case torch and torchvision) don't match the GPU and driver you were given. You need to find library versions that fit the GPU Google Colab allocated to you; be aware that Colab may give you a different GPU depending on resource availability.
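A few commands that show what Colab (or your own server) actually provides before you pick library builds; the last one assumes torch is already installed:

```bash
# GPU model and the driver's supported CUDA version
nvidia-smi

# CUDA toolkit version, if a toolkit is installed
nvcc --version

# What the installed torch build was compiled for, and whether it sees the GPU
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```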

If you have any more questions, feel free to drop me a direct email.

muxiddin19 commented 2 years ago

I have this issue. I tried creating several different virtual environments, but the results are the same. What would you suggest? Please explain in detail, as I am not very good at coding.

(tf) root@6352b96a0a4d:/workspace/Audio-driven-TalkingFace-HeadPose# cd Deep3DFaceReconstruction/; CUDA_VISIBLE_DEVICES=0,1 python demo_19news.py ../Data/32 ../Data/32 32
img_list len: 400
WARNING: Logging before flag parsing goes to stderr.
W0129 09:45:32.370422 139995203756224 deprecation_wrapper.py:119] From demo_19news.py:57: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0129 09:45:32.371711 139995203756224 deprecation_wrapper.py:119] From demo_19news.py:18: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0129 09:45:32.371872 139995203756224 deprecation_wrapper.py:119] From demo_19news.py:19: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

W0129 09:45:33.175683 139995203756224 deprecation.py:323] From /workspace/Audio-driven-TalkingFace-HeadPose/Deep3DFaceReconstruction/tf_mesh_renderer/mesh_renderer/mesh_renderer.py:163: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where W0129 09:45:33.180481 139995203756224 deprecation_wrapper.py:119] From demo_19news.py:72: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-01-29 09:45:33.181554: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1 2022-01-29 09:45:33.409504: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:07:00.0 2022-01-29 09:45:33.622904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties: name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:08:00.0 2022-01-29 09:45:33.623139: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.623297: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.623461: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.623605: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.623748: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.623890: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2022-01-29 09:45:33.627646: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7 2022-01-29 09:45:33.627667: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices... 2022-01-29 09:45:33.627994: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2022-01-29 09:45:34.392173: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55bbadaafad0 executing computations on platform CUDA. 
Devices: 2022-01-29 09:45:34.392221: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla V100-DGXS-32GB, Compute Capability 7.0 2022-01-29 09:45:34.392238: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (1): Tesla V100-DGXS-32GB, Compute Capability 7.0 2022-01-29 09:45:34.415562: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2198575000 Hz 2022-01-29 09:45:34.419273: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55bbb02d5000 executing computations on platform Host. Devices: 2022-01-29 09:45:34.419300: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): , 2022-01-29 09:45:34.419376: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix: 2022-01-29 09:45:34.419391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]
reconstructing... 2022-01-29 09:45:34.836177: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile. Total n: 400 Time: 108.32936334609985

(screenshot attached)

Ethanoool commented 2 years ago

The result is fine. Unlike errors, warnings and logging info can be ignored.

muxiddin19 commented 2 years ago

Thanks for your quick reply! In the next step I face this one:

(tf) root@6352b96a0a4d:/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video# python train_19news_1.py 32 0
19_news/32 32_bmold_win3
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
warning: the 'bwboundaries' function belongs to the image package from Octave Forge which you have installed but not loaded. To load the package, run 'pkg load image' from the Octave prompt.

Please read http://www.octave.org/missing.html to learn how you can contribute missing functionality. error: 'bwboundaries' undefined near line 18 column 12 loading models/model-r100-ii/model 0 [10:12:44] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.2.0. Attempting to upgrade... [10:12:44] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded! Traceback (most recent call last): File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1623, in simple_bind ctypes.byref(exe_handle))) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [10:12:44] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f1d539ce5cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f1d562b27a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f1d562b67fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f1d562b8f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f1d55a3d2b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f1d55ae249c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f1d55af59d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f1d55afd9a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > 
const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor*, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f1d55b0bd59]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "test_batch.py", line 25, in model = face_model.FaceModel(args) File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 51, in init self.model = get_model(ctx, image_size, args.model, 'fc1') File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 38, in get_model model.bind(data_shapes=[('data', (1, 3, image_size[0], image_size[1]))]) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/module.py", line 429, in bind state_names=self._state_names) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 279, in init self.bind_exec(data_shapes, label_shapes, shared_group) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 375, in bind_exec shared_group)) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 662, in _bind_ith_exec shared_buffer=shared_data_arrays, *input_shapes) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1629, in simple_bind raise RuntimeError(error_msg) RuntimeError: simple_bind error. Arguments: data: (1, 3, 112, 112) [10:12:44] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f1d539ce5cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f1d562b27a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f1d562b67fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f1d562b8f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f1d55a3d2b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f1d55ae249c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f1d55af59d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f1d55afd9a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, 
mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f1d55b0bd59]

loading models/model-r100-ii/model 0 [10:12:46] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.2.0. Attempting to upgrade... [10:12:46] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded! Traceback (most recent call last): File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1623, in simple_bind ctypes.byref(exe_handle))) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [10:12:46] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7fd9f61395cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7fd9f8a1d7a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7fd9f8a217fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7fd9f8a23f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7fd9f81a82b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7fd9f824d49c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7fd9f82609d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7fd9f82689a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, 
std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor*, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7fd9f8276d59]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "test_batch.py", line 25, in model = face_model.FaceModel(args) File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 51, in init self.model = get_model(ctx, image_size, args.model, 'fc1') File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 38, in get_model model.bind(data_shapes=[('data', (1, 3, image_size[0], image_size[1]))]) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/module.py", line 429, in bind state_names=self._state_names) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 279, in init self.bind_exec(data_shapes, label_shapes, shared_group) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 375, in bind_exec shared_group)) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 662, in _bind_ith_exec shared_buffer=shared_data_arrays, *input_shapes) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1629, in simple_bind raise RuntimeError(error_msg) RuntimeError: simple_bind error. Arguments: data: (1, 3, 112, 112) [10:12:46] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7fd9f61395cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7fd9f8a1d7a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7fd9f8a217fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7fd9f8a23f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7fd9f81a82b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7fd9f824d49c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7fd9f82609d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7fd9f82689a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, 
mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7fd9f8276d59]

Traceback (most recent call last): File "train.py", line 22, in from options.train_options import TrainOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/train_options.py", line 1, in from .base_options import BaseOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/base_options.py", line 6, in import data File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/init.py", line 15, in from data.base_dataset import BaseDataset File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/base_dataset.py", line 9, in import torchvision.transforms as transforms File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/init.py", line 1, in from torchvision import models File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/init.py", line 11, in from . import detection File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/init.py", line 1, in from .faster_rcnn import File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in from torchvision.ops import misc as misc_nn_ops File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/init.py", line 1, in from .boxes import nms, box_iou File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in from torchvision import _C ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory Traceback (most recent call last): File "test.py", line 30, in from options.test_options import TestOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/test_options.py", line 1, in from .base_options import BaseOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/base_options.py", line 6, in import data File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/init.py", line 15, in from data.base_dataset import BaseDataset File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/base_dataset.py", line 9, in import torchvision.transforms as transforms File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/init.py", line 1, in from torchvision import models File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/init.py", line 11, in from . import detection File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/init.py", line 1, in from .faster_rcnn import File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in from torchvision.ops import misc as misc_nn_ops File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/init.py", line 1, in from .boxes import nms, box_iou File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in from torchvision import _C ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory (tf) root@6352b96a0a4d:/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video# python train_19news_1.py 32 01 19_news/32 32_bmold_win3 octave: X11 DISPLAY environment variable not set octave: disabling GUI features warning: the 'bwboundaries' function belongs to the image package from Octave Forge which you have installed but not loaded. To load the package, run 'pkg load image' from the Octave prompt.

Please read http://www.octave.org/missing.html to learn how you can contribute missing functionality. error: 'bwboundaries' undefined near line 18 column 12 loading models/model-r100-ii/model 0 [10:15:13] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.2.0. Attempting to upgrade... [10:15:13] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded! Traceback (most recent call last): File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1623, in simple_bind ctypes.byref(exe_handle))) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [10:15:13] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f49da0ac5cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f49dc9907a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f49dc9947fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f49dc996f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f49dc11b2b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f49dc1c049c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f49dc1d39d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f49dc1db9a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > 
const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor*, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f49dc1e9d59]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "test_batch.py", line 25, in model = face_model.FaceModel(args) File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 51, in init self.model = get_model(ctx, image_size, args.model, 'fc1') File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 38, in get_model model.bind(data_shapes=[('data', (1, 3, image_size[0], image_size[1]))]) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/module.py", line 429, in bind state_names=self._state_names) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 279, in init self.bind_exec(data_shapes, label_shapes, shared_group) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 375, in bind_exec shared_group)) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 662, in _bind_ith_exec shared_buffer=shared_data_arrays, *input_shapes) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1629, in simple_bind raise RuntimeError(error_msg) RuntimeError: simple_bind error. Arguments: data: (1, 3, 112, 112) [10:15:13] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f49da0ac5cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f49dc9907a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f49dc9947fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f49dc996f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f49dc11b2b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f49dc1c049c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f49dc1d39d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f49dc1db9a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, 
mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f49dc1e9d59]

loading models/model-r100-ii/model 0
[10:15:15] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.2.0. Attempting to upgrade...
[10:15:15] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
Traceback (most recent call last):
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1623, in simple_bind
    ctypes.byref(exe_handle)))
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [10:15:15] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage
Stack trace:
  [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f5e270ab5cb]
  [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f5e2998f7a5]
  [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f5e299937fd]
  [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f5e29995f12]
  [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f5e2911a2b0]
  [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f5e291bf49c]
  [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f5e291d29d1]
  [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f5e291da9a0]
  [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor*, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f5e291e8d59]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "test_batch.py", line 25, in model = face_model.FaceModel(args) File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 51, in init self.model = get_model(ctx, image_size, args.model, 'fc1') File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/arcface/face_model.py", line 38, in get_model model.bind(data_shapes=[('data', (1, 3, image_size[0], image_size[1]))]) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/module.py", line 429, in bind state_names=self._state_names) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 279, in init self.bind_exec(data_shapes, label_shapes, shared_group) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 375, in bind_exec shared_group)) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/module/executor_group.py", line 662, in _bind_ith_exec shared_buffer=shared_data_arrays, *input_shapes) File "/opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1629, in simple_bind raise RuntimeError(error_msg) RuntimeError: simple_bind error. Arguments: data: (1, 3, 112, 112) [10:15:15] src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage Stack trace: [bt] (0) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2795cb) [0x7f5e270ab5cb] [bt] (1) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b5d7a5) [0x7f5e2998f7a5] [bt] (2) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b617fd) [0x7f5e299937fd] [bt] (3) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2b63f12) [0x7f5e29995f12] [bt] (4) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::NDArray::NDArray(mxnet::TShape const&, mxnet::Context, bool, int)+0x5d0) [0x7f5e2911a2b0] [bt] (5) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, mxnet::TShape const&, mxnet::Context const&, int)+0x5c) [0x7f5e291bf49c] [bt] (6) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::string const&, mxnet::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, bool)+0x3a1) [0x7f5e291d29d1] [bt] (7) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<mxnet::TShape, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, mxnet::Executor const, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >)+0xb10) [0x7f5e291da9a0] [bt] (8) /opt/conda/envs/tf/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::string, 
mxnet::Context, std::less, std::allocator<std::pair<std::string const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::vector<mxnet::Context, std::allocator > const&, std::unordered_map<std::string, mxnet::TShape, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::TShape> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::unordered_map<std::string, int, std::hash, std::equal_to, std::allocator<std::pair<std::string const, int> > > const&, std::vector<mxnet::OpReqType, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::vector<mxnet::NDArray, std::allocator >, std::unordered_map<std::string, mxnet::NDArray, std::hash, std::equal_to, std::allocator<std::pair<std::string const, mxnet::NDArray> > >, mxnet::Executor, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0x6a9) [0x7f5e291e8d59]

Traceback (most recent call last): File "train.py", line 22, in from options.train_options import TrainOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/train_options.py", line 1, in from .base_options import BaseOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/base_options.py", line 6, in import data File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/init.py", line 15, in from data.base_dataset import BaseDataset File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/base_dataset.py", line 9, in import torchvision.transforms as transforms File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/init.py", line 1, in from torchvision import models File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/init.py", line 11, in from . import detection File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/init.py", line 1, in from .faster_rcnn import File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in from torchvision.ops import misc as misc_nn_ops File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/init.py", line 1, in from .boxes import nms, box_iou File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in from torchvision import _C ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory Traceback (most recent call last): File "test.py", line 30, in from options.test_options import TestOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/test_options.py", line 1, in from .base_options import BaseOptions File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/options/base_options.py", line 6, in import data File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/init.py", line 15, in from data.base_dataset import BaseDataset File "/workspace/Audio-driven-TalkingFace-HeadPose/render-to-video/data/base_dataset.py", line 9, in import torchvision.transforms as transforms File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/init.py", line 1, in from torchvision import models File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/init.py", line 11, in from . import detection File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/init.py", line 1, in from .faster_rcnn import File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in from torchvision.ops import misc as misc_nn_ops File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/init.py", line 1, in from .boxes import nms, box_iou File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in from torchvision import _C ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory

zerzerzerz commented 2 years ago

I installed PyTorch 1.2 and deleted POT from Colab_requirements.txt. I also replaced every "python" and "python3.7" with "python3.6" in the commands and in the .py files. After that the Colab demo ran successfully for me.
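A rough sketch of those steps as Colab cells (prefix each command with `!` in the notebook). The torch/torchvision pairing and the sed patterns below are assumptions, not an exact record of what I ran:

```bash
# PyTorch 1.2 with the torchvision release that pairs with it (assumed: 0.4.0)
pip install torch==1.2.0 torchvision==0.4.0

# Drop the POT entry before installing the remaining requirements
sed -i '/POT/d' Colab_requirements.txt
pip install -r Colab_requirements.txt

# Point hard-coded "python3.7" invocations at python3.6. Bare "python" calls in
# the notebook commands and .py files need the same treatment, done by hand or
# with a more targeted pattern (a blanket s/python/python3.6/ would also mangle
# "python3.7" and paths that contain "python").
grep -rl 'python3\.7' . | xargs sed -i 's/python3\.7/python3.6/g'
```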

Ethanoool commented 2 years ago

ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory

From this you can see that the file is missing. As I mentioned before, this happens because the installed torch and torchvision builds don't match the CUDA runtime on the machine. Try installing compatible versions of those libraries. I can't give you the exact versions, because the GPU (and CUDA version) Google Colab assigns differs from session to session.
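As a concrete but hedged starting point (the version numbers below are an assumption, not the one true pairing; pick whatever matches the CUDA runtime your session reports):

```bash
# See which CUDA runtime this session actually provides
nvidia-smi
nvcc --version

# Example pairing only: the default torch 1.2.0 / torchvision 0.4.0 wheels are
# CUDA 10.0 builds, so they do not depend on libcudart.so.9.0.
pip install torch==1.2.0 torchvision==0.4.0
```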

muxiddin19 commented 2 years ago

(tf) root@6352b96a0a4d:/workspace/Audio-driven-TalkingFace-HeadPose# cd Audio/code/; python test_personalized2.py

03Fsi1831 32 01
python atcnet_test1.py --device_ids 1 --model_name ../model/atcnet_pose0_con3/32/atcnet_lstm_99.pth --pose 1 --relativeframe 0 --sample_dir ../results/atcnet_pose0_con3/32/03Fsi1831_99 --in_file ../audio/03Fsi1831.wav
Traceback (most recent call last):
  File "atcnet_test1.py", line 12, in <module>
    import torchvision
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/__init__.py", line 1, in <module>
    from torchvision import models
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/__init__.py", line 11, in <module>
    from . import detection
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/__init__.py", line 1, in <module>
    from .faster_rcnn import *
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in <module>
    from torchvision.ops import misc as misc_nn_ops
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/__init__.py", line 1, in <module>
    from .boxes import nms, box_iou
  File "/opt/conda/envs/tf/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in <module>
    from torchvision import _C
ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory
19_news/32 03Fsi1831 0 atcnet_pose0_con3/32/03Fsi1831_99
../results/chosenbg/03Fsi1831_19_news/32_atcnet_pose0_con3_32_03Fsi1831_99/reassign
[]
Traceback (most recent call last):
  File "test_personalized2.py", line 133, in <module>
    bgdir = dreassign2('19_news/'+person, audiobasen, start, audiomodel, num=num, tran=pingyi)
  File "test_personalized2.py", line 73, in dreassign2
    os.mkdir(folder_to_process+'/reassign')
FileNotFoundError: [Errno 2] No such file or directory: '../results/atcnet_pose0_con3/32/03Fsi1831_99/reassign'
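The FileNotFoundError at the end looks like a downstream symptom rather than the root problem: atcnet_test1.py exits on the torchvision ImportError, so ../results/atcnet_pose0_con3/32/03Fsi1831_99 is never created and the later os.mkdir in test_personalized2.py has nothing to work in. A quick hedged check of the root cause, run inside the same tf env:

```bash
# Reproduce the import failure in isolation
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"

# List the CUDA runtimes actually present in the container; if libcudart.so.9.0
# is not listed, the installed torchvision wheel was built for a CUDA version
# that is not there, and a matching torch/torchvision pair needs to be installed.
ldconfig -p | grep libcudart
```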

pfeducode commented 1 year ago

[screenshot attached]