yerfor / GeneFace

GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code
MIT License

CUDA_VISIBLE_DEVICES=0 data_gen/nerf/process_data.sh $VIDEO_ID (missing files and issues) #201

Open wnqw opened 9 months ago

wnqw commented 9 months ago

[INFO] ===== extract audio from data/raw/videos/May.mp4 to data/processed/videos/May/aud.wav ===== ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0) configuration: --prefix=/tmp/build/80754af9/ffmpeg_1587154242452/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264 libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 5.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'data/raw/videos/May.mp4': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 creation_time : 2021-11-09T10:09:46.000000Z Duration: 00:04:02.97, start: 0.000000, bitrate: 3324 kb/s Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 512x512 [SAR 1:1 DAR 1:1], 3004 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : ?Mainconcept Video Media Handler encoder : AVC Coding Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : #Mainconcept MP4 Sound Media Handler File 'data/processed/videos/May/aud.wav' already exists. Overwrite ? [y/N] y Stream mapping: Stream #0:1 -> #0:0 (aac (native) -> pcm_s16le (native)) Press [q] to stop, [?] for help Output #0, wav, to 'data/processed/videos/May/aud.wav': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 ISFT : Lavf58.29.100 Stream #0:0(eng): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, stereo, s16, 512 kb/s (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : #Mainconcept MP4 Sound Media Handler encoder : Lavc58.54.100 pcm_s16le size= 15183kB time=00:04:02.92 bitrate= 512.0kbits/s speed= 231x
video:0kB audio:15183kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000502% [INFO] ===== extracted audio ===== [INFO] ===== extract audio labels for data/processed/videos/May/aud.wav ===== [INFO] ===== start extract esperanto ===== [INFO] ===== extract images from data/raw/videos/May.mp4 to data/processed/videos/May/ori_imgs ===== ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0) configuration: --prefix=/tmp/build/80754af9/ffmpeg_1587154242452/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264 libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 5.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'data/raw/videos/May.mp4': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 creation_time : 2021-11-09T10:09:46.000000Z Duration: 00:04:02.97, start: 0.000000, bitrate: 3324 kb/s Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 512x512 [SAR 1:1 DAR 1:1], 3004 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : ?Mainconcept Video Media Handler encoder : AVC Coding Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : #Mainconcept MP4 Sound Media Handler Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native)) Press [q] to stop, [?] for help [swscaler @ 0x558032eeea80] deprecated pixel format used, make sure you did set range correctly Output #0, image2, to 'data/processed/videos/May/ori_imgs/%d.jpg': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 encoder : Lavf58.29.100 Stream #0:0(eng): Video: mjpeg, yuvj420p(pc), 512x512 [SAR 1:1 DAR 1:1], q=1-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default) Metadata: creation_time : 2021-11-09T10:09:46.000000Z handler_name : ?Mainconcept Video Media Handler encoder : Lavc58.54.100 mjpeg Side data: cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1 ALSA lib pcm_dsnoop.c:641:(snd_pcm_dsnoop_open) unable to open slaved=8.81x
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave [WARN] audio has 2 channels, only use the first. [INFO] loaded audio stream data/processed/videos/May/aud.wav: (3886763,) [INFO] loading ASR model cpierse/wav2vec2-large-xlsr-53-esperanto... /home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/transformers/configuration_utils.py:380: UserWarning: Passing gradient_checkpointing to a config initialization is deprecated and will be removed in v5 Transformers. Using model.gradient_checkpointing_enable() instead, or if you are using the Trainer API, pass gradient_checkpointing=True in your TrainingArguments. warnings.warn( Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. frame= 6073 fps=213 q=1.0 Lsize=N/A time=00:04:02.92 bitrate=N/A speed=8.53x
video:287215kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown [INFO] ===== extracted images ===== [INFO] ===== extract face landmarks from data/processed/videos/May/ori_imgs ===== [START] uniuje vizaĉanĝu de fl eĉo morispaas st antl ugehedt di opu ĉiu ni ĉhis ĝkam ant de ŝi evis aj kun sizuol tuanti sevtiin hasen stuo ajbli dajs ofu ĉiu ne ĉiezo kurajĉu da n eva fubuihevn eid amomentas ĉisiĵon an seŝ aŭ seŭ on niu du reĉon adi fto aĝis sekstii mu s la gia juvaŭ ĉi itu da ĉein ĝ desis li ea pisto tmekis hap on ajn a ud refuenĝo bla as ĉiunuvestiva jsi fertoajns an laŭov koosted no sevi bonŝeedz saj impunĝu viru oveŭ ŝi ten zajmuaj prajna ju ĉiuj sed azui feis liop ĉu neceza herbas aŭ ŝe dintro esc dam mpe ŝins kan prinos ĉi gata ui omon da sia buĉo nuĉis sronga da nati s tudeej voj omon te kamtu diris fe ra ce efri bon hasĉ ams de sukcii poj obon sne ŝundris sai fan sekio fraĉj udun ant kv ar ĉoron li sambe ŝun s ju najĉas se zobujoneŭ longa a da fefti ĉiuj p senĝo vaŭĝed li iv antofori aj psendu vaŭĉi ĝu neajn po buan k urei ĉiiunienof pipolundne ŝins udopra ud hisĝuj anto broi t fiuĉa se bonaj seĉ umanon gez sia iŝinĉejpo lenj oro pde ŝia etubi vui zaĉen mojn de nove ĉi raante r gedo roits dio noĝasbo deu sin faŭĉit li iv por f aevlui sen gupa s neniskanĉoj efko s drefremdo m la itpeazomfa d deveŝinsina a kan ĉoj pri neŭzue prosp urin anftaŭ izua not daizu ken isli baadaŭ lon hojm senta ĉ uron duo gradsku fajndas k io ĝoob antdeu suka nod en ŝo r dereso hima kan tribuecuel antdaŭispo hilm eĉ ĝas no t desisti gi ebiniĝ pol da an diis paroj a stas haŭurpi buba k si ki ujn la beto dieloĵ helmpa ornm i buakinpipo dvisaŭt objeĉululi gu naj cirpoĉan n en muĉboj a o junaeti de na sidisumŝipe vdis kurajtne jiŝon junaice ĝe ni op ĉiu neĉesa ĝa eŭpon ĉe o na pipo anĝj nojtis pajda princ ipo dris eŭnlio tal enĝen hor buir k da ĉu dit amen io fiuĉ a aftro ĝis u juna ĝidro ap ipol heva ĉiiv iĉ kraittengs s tro apraŝis juni el ovnei ŝins inglan s kotland ue osdnovan a lend tva junino pipo fron s po s deimstua mfo siespis naciistu ĉamo ĉe s skuustr o hospros anto pavo truagiu ni enov k omiuno ĉeis an fables erkoz zi ĝis ĝast peg gkloibolevanc ertu fajno jea ĝis l upers mufengs e todis av ĝi mapilgie ju sdonĉo fres ĝo bo pajo f est hajm en majpiz iĝe ejo ĉurin sta skulo ka ŭ ofte juniversĉ i ol juviĉoja afto laivĉajno haruerk vi stings laivs majo sĉems ado fing sot pajndas ĉiu evolui a asof anta asti keem pei ĝej koks tiu selĉaĝi li ĉakenfumezl aosĝie puŝeĝ mi a famo junaiĉe d — anthaf amo lin komen donat uĉ ĉ divacas i ha v gaŭrnop ĉiune ĝi iĝu deamen strueitladt ĉu brinn is konĉuite gata a asnevo de fu sead o ĉ hu evo juao e evol u lev a l polo ĉeeksiko nu mi ensissoa ĉi buakf o jilu norĝaste pri veleĝi fien scuasmi lokehe tu jer voko ĉunzi and jiu nocei leĉmi buŝ iu ĝo famle apisvo l pros prs ant hapi ni ujea [END] [INFO] save all feats for training purpose... 
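Not part of the original log, but since ffmpeg wrote a stereo file and the pipeline later warns that only the first channel is used, here is a small sketch (standard library only, using the path from this run) to confirm what actually landed in aud.wav:

```python
# Sanity-check the extracted wav: the log above reports 16000 Hz stereo,
# and the ASR/deepspeech steps then keep only the first channel.
import wave

with wave.open("data/processed/videos/May/aud.wav", "rb") as w:
    print("channels   :", w.getnchannels())                       # expect 2 per the log
    print("sample rate:", w.getframerate())                       # expect 16000
    print("duration   :", w.getnframes() / w.getframerate(), "s")
```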
[INFO] saved logits to data/processed/videos/May/aud_esperanto.npy
0%| | 0/6073 [00:00&lt;?, ?it/s]
[INFO] ===== extracted esperanto =====
[INFO] ===== extract deepspeech =====
/home/qw/.tensorflow/models/deepspeech-0_1_0-b90017e8.pb
2023-09-19 21:56:00.038803: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:00.051234: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:00.051555: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:00.052084: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-19 21:56:00.068962: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:00.069319: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:00.069523: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:02.933376: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:02.933656: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:02.937330: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-09-19 21:56:02.937553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7908 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:41:00.0, compute capability: 8.9
tring to extract deepspeech from audio file: data/processed/videos/May/aud.wav
The target is: data/processed/videos/May/aud_deepspeech.npy
/home/qw/proj/GeneFace/data_util/deepspeech_features/deepspeech_features.py:50: UserWarning: Audio has multiple channels, the first channel is used
  warnings.warn(
0%| | 1/6073 [00:21&lt;35:36:01, 21.11s/it]
Traceback (most recent call last):
  File "/home/qw/proj/GeneFace/data_util/process.py", line 438, in &lt;module&gt;
    extract_landmarks(ori_imgs_dir)
  File "/home/qw/proj/GeneFace/data_util/process.py", line 60, in extract_landmarks
    preds = fa.get_landmarks(input)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/face_alignment/api.py", line 113, in get_landmarks
    return self.get_landmarks_from_image(image_or_path, detected_faces, return_bboxes, return_landmark_score)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/face_alignment/api.py", line 168, in get_landmarks_from_image
    out = self.face_alignment_net(inp).detach()
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

nvrtc compilation failed:

    #define NAN __int_as_float(0x7fffffff)
    #define POS_INFINITY __int_as_float(0x7f800000)
    #define NEG_INFINITY __int_as_float(0xff800000)

    template<typename T>
    __device__ T maximum(T a, T b) { return isnan(a) ? a : (a > b ? a : b); }

    template<typename T>
    __device__ T minimum(T a, T b) { return isnan(a) ? a : (a < b ? a : b); }

    extern "C" __global__ void fused_cat_cat(float* tinput0_42, float* tinput0_46, float* tout3_67,
                                             float* tinput0_60, float* tinput0_52, float* tout3_71,
                                             float* aten_cat_1, float* aten_cat) {
      {
        if (blockIdx.x < 512ll ? 1 : 0) {
          aten_cat[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] =
            ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 1024ll < 192ll ? 1 : 0)
              ? ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 1024ll < 128ll ? 1 : 0)
                  ? __ldg(tinput0_60 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))
                  : __ldg(tinput0_52 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 131072ll))
              : __ldg(tout3_71 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 196608ll));
        }
        aten_cat_1[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] =
          ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 4096ll < 192ll ? 1 : 0)
            ? ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 4096ll < 128ll ? 1 : 0)
                ? __ldg(tinput0_42 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))
                : __ldg(tinput0_46 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 524288ll))
            : __ldg(tout3_67 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 786432ll));
      }
    }

[INFO] ===== perform face tracking =====
[INFO] ===== extract semantics from data/processed/videos/May/ori_imgs to data/processed/videos/May/parsing =====
2023-09-19 21:56:31.266322: I tensorflow/stream_executor/cuda/cuda_blas.cc:1760] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
The deepspeech extracted successfully, saved at: data/processed/videos/May/aud_deepspeech.npy
The shape is: (6073, 16, 29)
[INFO] ===== extracted deepspeech =====
[INFO] ===== extracted all audio labels =====
processed parsing 100 processed parsing 200 processed parsing 300 processed parsing 400 processed parsing 500
600 loss_lan= 1.848300576210022 mean_xy_trans= -2.8656184673309326
processed parsing 600 processed parsing 700 processed parsing 800 processed parsing 900 processed parsing 1000
700 loss_lan= 1.8529926538467407 mean_xy_trans= -3.393925905227661
processed parsing 1100 processed parsing 1200 processed parsing 1300 processed parsing 1400 processed parsing 1500
800 loss_lan= 1.837084174156189 mean_xy_trans= -3.858917474746704
processed parsing 1600 processed parsing 1700 processed parsing 1800 processed parsing 1900 processed parsing 2000 processed parsing 2100
900 loss_lan= 1.8540384769439697 mean_xy_trans= -4.361268997192383
processed parsing 2200 processed parsing 2300 processed parsing 2400 processed parsing 2500 processed parsing 2600
1000 loss_lan= 1.8525820970535278 mean_xy_trans= -4.837498664855957
processed parsing 2700 processed parsing 2800 processed parsing 2900 processed parsing 3000 processed parsing 3100 processed parsing 3200
1100 loss_lan= 1.8890591859817505 mean_xy_trans= -5.419653415679932
processed parsing 3300 processed parsing 3400 processed parsing 3500 processed parsing 3600 processed parsing 3700
1200 loss_lan= 1.8725024461746216 mean_xy_trans= -5.8194451332092285
processed parsing 3800 processed parsing 3900 processed parsing 4000 processed parsing 4100 processed parsing 4200
1300 loss_lan= 1.871009111404419 mean_xy_trans= -6.298783779144287
processed parsing 4300 processed parsing 4400 processed parsing 4500 processed parsing 4600 processed parsing 4700
1400 loss_lan= 1.8714978694915771 mean_xy_trans= -6.7916669845581055
processed parsing 4800 processed parsing 4900 processed parsing 5000 processed parsing 5100 processed parsing 5200
1500 loss_lan= 1.8839856386184692 mean_xy_trans= -7.355538845062256
processed parsing 5300 processed parsing 5400 processed parsing 5500 processed parsing 5600 processed parsing 5700
1600 loss_lan= 1.899548053741455 mean_xy_trans= -7.9502482414245605
find best focal 800
processed parsing 5800 processed parsing 5900 processed parsing 6000
[INFO] ===== extracted semantics =====
trained on focal= 800 best_loss_lan= 1.884203314781189 mean_xy_trans= -4.012221336364746
Traceback (most recent call last):
  File "/home/qw/proj/GeneFace/data_util/face_tracking/face_tracker.py", line 226, in &lt;module&gt;
    sel_ids = np.arange(0, num_frames, int(num_frames/batch_size))[:batch_size]
ZeroDivisionError: division by zero
[INFO] ===== finished face tracking =====
[INFO] ===== extract background image from data/processed/videos/May/ori_imgs =====
100%|██████████| 304/304 [07:52&lt;00:00, 1.55s/it]
[INFO] ===== extracted background image =====
[INFO] ===== extract head images for data/processed/videos/May =====
100%|██████████| 6073/6073 [06:53&lt;00:00, 14.68it/s]
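About the ZeroDivisionError above: np.arange raises exactly that error when its step is zero, and the step here is int(num_frames/batch_size), so it goes to zero whenever fewer tracked frames than batch_size are available, most likely because the earlier landmark-extraction step crashed. A hedged sketch of a guard; the variable names are taken from the traceback, and the surrounding face_tracker.py code is assumed, not checked:

```python
# Sketch of a guard for the line that crashes in face_tracking/face_tracker.py.
import numpy as np

def select_frames(num_frames: int, batch_size: int) -> np.ndarray:
    """Pick up to batch_size evenly spaced frame indices."""
    if num_frames < batch_size:
        # int(num_frames / batch_size) would be 0 here, and np.arange with a
        # zero step raises "ZeroDivisionError: division by zero".
        raise RuntimeError(
            f"Only {num_frames} tracked frames but batch_size={batch_size}; "
            "check that the landmark-extraction step actually produced its "
            "per-frame landmark files before running the tracker."
        )
    step = num_frames // batch_size
    return np.arange(0, num_frames, step)[:batch_size]
```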
[INFO] ===== extracted head images =====
[INFO] ===== extract torso and gt images for data/processed/videos/May =====
100%|██████████| 6073/6073 [23:49&lt;00:00, 4.25it/s]
[INFO] ===== extracted torso and gt images =====
[INFO] ===== save transforms =====
Traceback (most recent call last):
  File "/home/qw/proj/GeneFace/data_util/process.py", line 446, in &lt;module&gt;
    save_transforms(processed_dir, ori_imgs_dir)
  File "/home/qw/proj/GeneFace/data_util/process.py", line 294, in save_transforms
    params_dict = torch.load(os.path.join(base_dir, 'track_params.pt'))
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/serialization.py", line 231, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/serialization.py", line 212, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'data/processed/videos/May/track_params.pt'
Loading the Wav2Vec2 Processor...
Loading the HuBERT Model...
Hubert extracted at data/processed/videos/May/aud_hubert.npy
Mel and F0 extracted at data/processed/videos/May/aud_mel_f0.npy
loading the model from deep_3drecon/checkpoints/facerecon/epoch_20.pth
loading video ...
extracting 2D facial landmarks ...: 0%| | 1/6073 [00:18&lt;31:03:52, 18.42s/it]
WARNING: Caught errors when fa.get_landmarks, maybe No face detected at frame 6073 in data/raw/videos/May.mp4!
extracting 2D facial landmarks ...: 0%| | 1/6073 [00:20&lt;33:44:48, 20.01s/it]
Traceback (most recent call last):
  File "/home/qw/proj/GeneFace/data_gen/nerf/extract_3dmm.py", line 56, in process_video
    lm68 = fa.get_landmarks(frames[i])[0]  # detect the face in the image and get the landmark points, shape=[68,2]
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/face_alignment/api.py", line 113, in get_landmarks
    return self.get_landmarks_from_image(image_or_path, detected_faces, return_bboxes, return_landmark_score)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/face_alignment/api.py", line 168, in get_landmarks_from_image
    out = self.face_alignment_net(inp).detach()
  File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

nvrtc compilation failed:

    #define NAN __int_as_float(0x7fffffff)
    #define POS_INFINITY __int_as_float(0x7f800000)
    #define NEG_INFINITY __int_as_float(0xff800000)

    template<typename T>
    __device__ T maximum(T a, T b) { return isnan(a) ? a : (a > b ? a : b); }

    template<typename T>
    __device__ T minimum(T a, T b) { return isnan(a) ? a : (a < b ? a : b); }

    extern "C" __global__ void fused_cat_cat(float* tinput0_42, float* tinput0_46, float* tout3_67,
                                             float* tinput0_60, float* tinput0_52, float* tout3_71,
                                             float* aten_cat_1, float* aten_cat) {
      {
        if (blockIdx.x < 512ll ? 1 : 0) {
          aten_cat[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] =
            ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 1024ll < 192ll ? 1 : 0)
              ? ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 1024ll < 128ll ? 1 : 0)
                  ? __ldg(tinput0_60 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))
                  : __ldg(tinput0_52 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 131072ll))
              : __ldg(tout3_71 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 196608ll));
        }
        aten_cat_1[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] =
          ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 4096ll < 192ll ? 1 : 0)
            ? ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 4096ll < 128ll ? 1 : 0)
                ? __ldg(tinput0_42 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))
                : __ldg(tinput0_46 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 524288ll))
            : __ldg(tout3_67 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 786432ll));
      }
    }

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/home/qw/proj/GeneFace/data_gen/nerf/extract_3dmm.py", line 112, in process_video(video_fname, out_fname, skip_tmp=False) File "/home/qw/proj/GeneFace/data_gen/nerf/extract_3dmm.py", line 59, in process_video raise ValueError("") ValueError | Unknow hparams: [] args.config: egs/datasets/videos/May/lm3d_radnerf.yaml | Hparams chains: ['egs/egs_bases/radnerf/base.yaml', 'egs/egs_bases/radnerf/lm3d_radnerf.yaml', 'egs/datasets/videos/May/lm3d_radnerf.yaml'] | Hparams: accumulate_grad_batches: 1, ambient_out_dim: 2, amp: True, base_config: ['egs/egs_bases/radnerf/lm3d_radnerf.yaml'], binary_data_dir: data/binary/videos, bound: 1, camera_offset: [0, 0, 0], camera_scale: 4.0, clip_grad_norm: 0, clip_grad_value: 0, cond_out_dim: 64, cond_type: idexp_lm3d_normalized, cond_win_size: 1, cuda_ray: True, debug: False, density_thresh: 10, density_thresh_torso: 0.01, desired_resolution: 2048, dt_gamma: 0.00390625, eval_max_batches: 100, exp_name: , far: 0.9, finetune_lips: True, finetune_lips_start_iter: 200000, geo_feat_dim: 128, grid_interpolation_type: linear, grid_size: 128, grid_type: tiledgrid, gui_fovy: 21.24, gui_h: 512, gui_max_spp: 1, gui_radius: 3.35, gui_w: 512, hidden_dim_ambient: 128, hidden_dim_color: 128, hidden_dim_sigma: 128, individual_embedding_dim: 4, individual_embedding_num: 13000, infer: False, infer_audio_source_name: , infer_bg_img_fname: , infer_c2w_name: , infer_cond_name: , infer_lm3d_clamp_std: 2.5, infer_lm3d_lle_percent: 0.0, infer_lm3d_smooth_sigma: 0.0, infer_out_video_name: , infer_scale_factor: 1.0, infer_smo_std: 0.0, infer_smooth_camera_path: True, infer_smooth_camera_path_kernel_size: 7, lambda_ambient: 0.1, lambda_lpips_loss: 0.01, lambda_weights_entropy: 0.0001, load_ckpt: , load_imgs_to_memory: False, log2_hashmap_size: 16, lr: 0.0005, max_ray_batch: 4096, max_steps: 16, max_updates: 250000, min_near: 0.05, n_rays: 65536, near: 0.3, num_ckpt_keep: 1, num_layers_ambient: 3, num_layers_color: 2, num_layers_sigma: 3, num_sanity_val_steps: 2, num_steps: 16, num_valid_plots: 5, optimizer_adam_beta1: 0.9, optimizer_adam_beta2: 0.999, print_nan_grads: False, processed_data_dir: data/processed/videos, raw_data_dir: data/raw/videos, resume_from_checkpoint: 0, save_best: True, save_codes: ['tasks', 'modules', 'egs'], save_gt: True, scheduler: exponential, seed: 9999, smo_win_size: 5, smooth_lips: False, task_cls: tasks.radnerfs.radnerf.RADNeRFTask, tb_log_interval: 100, torso_head_aware: False, torso_individual_embedding_dim: 8, torso_shrink: 0.8, update_extra_interval: 16, upsample_steps: 0, use_window_cond: True, val_check_interval: 2000, valid_infer_interval: 10000, valid_monitor_key: val_loss, valid_monitor_mode: min, validate: False, video_id: May, warmup_updates: 0, weight_decay: 0, with_att: True, work_dir: , loading deepspeech ... loading Esperanto ... loading hubert ... loading Mel and F0 ... loading 3dmm coeff ... 
Traceback (most recent call last): File "/home/qw/proj/GeneFace/data_gen/nerf/binarizer.py", line 277, in binarizer.parse(hparams['video_id']) File "/home/qw/proj/GeneFace/data_gen/nerf/binarizer.py", line 267, in parse ret = load_processed_data(processed_dir) File "/home/qw/proj/GeneFace/data_gen/nerf/binarizer.py", line 98, in load_processed_data coeff_dict = np.load(coeff_npy_name, allow_pickle=True).tolist() File "/home/qw/anaconda3/envs/geneface/lib/python3.9/site-packages/numpy/lib/npyio.py", line 416, in load fid = stack.enter_context(open(os_fspath(file), "rb")) FileNotFoundError: [Errno 2] No such file or directory: 'data/processed/videos/May/vid_coeff.npy'

stevetatum commented 9 months ago

Hello, did you manage to solve the problem?

zqdsdfg commented 1 month ago

Hello, have you solved this problem? I ran into the same error and don't know how to fix it.

harshsaini88 commented 3 weeks ago

Hello, I'm also facing the same problem.