Doubiiu / CodeTalker

[CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
MIT License

render failed #76

Closed. hrWong closed this issue 4 months ago

hrWong commented 4 months ago

Thank you for your outstanding work! I get an error when running sh scripts/demo.sh vocaset:

Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

=> loading checkpoint 'vocaset/vocaset_stage2.pth.tar'
=> loaded checkpoint 'vocaset/vocaset_stage2.pth.tar'
Generating facial animation for demo/wav/man.wav...
Save facial animation in demo/npy/man/condition_FaceTalk_170725_00137_TA_subject_FaceTalk_170809_00138_TA.npy
rendering:  man
pyrender: Failed rendering frame
pyrender: Failed rendering frame
pyrender: Failed rendering frame
pyrender: Failed rendering frame
pyrender: Failed rendering frame
pyrender: Failed rendering frame
pyrender: Failed rendering frame
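
My guess is that the render loop in main/demo.py wraps each frame in a broad try/except, so the real exception never surfaces and only this message is printed. A minimal, self-contained sketch of that pattern (hypothetical names, not the repo's exact code):

    # Hypothetical illustration of the pattern I suspect is in main/demo.py:
    # a bare try/except around per-frame rendering discards the real exception.
    def render_frame():
        # stand-in for the pyrender call that actually fails
        raise ImportError("cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa'")

    for _ in range(3):
        try:
            render_frame()
        except Exception:
            # the underlying ImportError is swallowed here
            print("pyrender: Failed rendering frame")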

Then I commented out the try/except block, and the error is reported as follows:

Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.weight', 'lm_head.bias']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
=> loading checkpoint 'vocaset/vocaset_stage2.pth.tar'
=> loaded checkpoint 'vocaset/vocaset_stage2.pth.tar'
Generating facial animation for demo/wav/man.wav...
Save facial animation in demo/npy/man/condition_FaceTalk_170725_00137_TA_subject_FaceTalk_170809_00138_TA.npy
rendering:  man
Traceback (most recent call last):
  File "main/demo.py", line 219, in <module>
    main()
  File "main/demo.py", line 129, in main
    test(model, cfg.demo_wav_path, save_folder, condition, subject)
  File "main/demo.py", line 199, in test
    pred_img = render_mesh_helper(cfg,render_mesh, center)
  File "main/demo.py", line 100, in render_mesh_helper
    r = pyrender.OffscreenRenderer(viewport_width=frustum['width'], viewport_height=frustum['height'])
  File "/root/miniconda3/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/root/miniconda3/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
    self._platform.init_context()
  File "/root/miniconda3/lib/python3.8/site-packages/pyrender/platforms/osmesa.py", line 19, in init_context
    from OpenGL.osmesa import (
ImportError: cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa' (/root/miniconda3/lib/python3.8/site-packages/OpenGL/osmesa/__init__.py)

How do I fix this error?
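
For reference, here is a minimal check that isolates the OSMesa backend from CodeTalker itself (assuming the demo runs headless with PYOPENGL_PLATFORM=osmesa, as the traceback suggests, and that the system OSMesa library, e.g. libosmesa6-dev, is installed):

    # Minimal OSMesa/pyrender check, independent of CodeTalker.
    import os
    os.environ["PYOPENGL_PLATFORM"] = "osmesa"  # same backend the traceback goes through

    import pyrender  # import after setting the platform variable

    # Raises the same ImportError when the installed PyOpenGL lacks OSMesaCreateContextAttribs.
    r = pyrender.OffscreenRenderer(viewport_width=800, viewport_height=800)
    print("OffscreenRenderer created OK")
    r.delete()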

hrWong commented 4 months ago

I fixed it by running pip install pyopengl==3.1.4
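
A quick way to verify the upgrade took effect (a sanity-check sketch, assuming PyOpenGL 3.1.4 provides the symbol the traceback complained about):

    # Sanity check after "pip install pyopengl==3.1.4": the missing name
    # from the traceback should now be importable.
    import importlib.metadata
    from OpenGL.osmesa import OSMesaCreateContextAttribs

    print("PyOpenGL", importlib.metadata.version("PyOpenGL"))
    print(OSMesaCreateContextAttribs)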