mayuelala / FollowYourEmoji

[Siggraph Asia 2024] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation"

Running in Windows #6

Closed zhenyuanzhou closed 4 months ago

zhenyuanzhou commented 4 months ago

(f:\py310ai) F:\FollowYourEmoji>python -m torch.distributed.run --nnodes 1 --master_addr localhost --master_port 12345 --node_rank 0 --nproc_per_node 1 infer.py --config ./configs/infer.yaml --model_path F:/FollowYourEmoji/pretrained --input_path F:/FollowYourEmoji/inference_temple --lmk_path ./data/mplmks --output_path ./data/out --model_step 30

[2024-07-18 21:27:15,095] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
init model
Traceback (most recent call last):
  File "f:\py310ai\lib\site-packages\huggingface_hub\utils\_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "f:\py310ai\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/sd-vae-ft-mse/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "F:\FollowYourEmoji\diffusers\configuration_utils.py", line 371, in load_config config_file = hf_hub_download( File "f:\py310ai\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn return fn(*args, *kwargs) File "f:\py310ai\lib\site-packages\huggingface_hub\file_download.py", line 1403, in hf_hub_download raise head_call_error File "f:\py310ai\lib\site-packages\huggingface_hub\file_download.py", line 1261, in hf_hub_download metadata = get_hf_file_metadata( File "f:\py310ai\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn return fn(args, **kwargs) File "f:\py310ai\lib\site-packages\huggingface_hub\file_download.py", line 1667, in get_hf_file_metadata r = _request_wrapper( File "f:\py310ai\lib\site-packages\huggingface_hub\file_download.py", line 385, in _request_wrapper response = _request_wrapper( File "f:\py310ai\lib\site-packages\huggingface_hub\file_download.py", line 409, in _request_wrapper hf_raise_for_status(response) File "f:\py310ai\lib\site-packages\huggingface_hub\utils_errors.py", line 352, in hf_raise_for_status raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6699183d-5edbef3d30948f6f22bf9224;23040edb-95fb-42ea-85fa-396650e0e3ed)

Repository Not Found for url: https://huggingface.co/sd-vae-ft-mse/resolve/main/config.json. Please make sure you specified the correct repo_id and repo_type. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "F:\FollowYourEmoji\infer.py", line 228, in main(args, config) File "F:\FollowYourEmoji\infer.py", line 125, in main vae = AutoencoderKL.from_pretrained(config.vae_model_path).to(dtype=weight_dtype, device="cuda") File "F:\FollowYourEmoji\diffusers\models\modeling_utils.py", line 712, in from_pretrained config, unused_kwargs, commit_hash = cls.load_config( File "F:\FollowYourEmoji\diffusers\configuration_utils.py", line 385, in load_config raise EnvironmentError( OSError: sd-vae-ft-mse is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login. [2024-07-18 21:27:25,157] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 51688) of binary: f:\py310ai\python.exe Traceback (most recent call last): File "f:\py310ai\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "f:\py310ai\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "f:\py310ai\lib\site-packages\torch\distributed\run.py", line 810, in main() File "f:\py310ai\lib\site-packages\torch\distributed\elastic\multiprocessing\errors__init__.py", line 346, in wrapper return f(*args, **kwargs) File "f:\py310ai\lib\site-packages\torch\distributed\run.py", line 806, in main run(args) File "f:\py310ai\lib\site-packages\torch\distributed\run.py", line 797, in run elastic_launch( File "f:\py310ai\lib\site-packages\torch\distributed\launcher\api.py", line 134, in call return launch_agent(self._config, self._entrypoint, list(args)) File "f:\py310ai\lib\site-packages\torch\distributed\launcher\api.py", line 264, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

infer.py FAILED

Failures:

------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time       : 2024-07-18_21:27:25
  host       : ZZY
  rank       : 0 (local_rank: 0)
  exitcode   : 1 (pid: 51688)
  error_file :
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
mayuelala commented 4 months ago

Thanks for your attention. I have invited you to the group for communication. You can download the model and put it in your local folder.
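
For readers hitting the same error: the traceback shows that config.vae_model_path is set to the bare name sd-vae-ft-mse, which is neither a local folder nor a full Hugging Face repo id, so AutoencoderKL.from_pretrained falls back to the Hub and fails with a 401 / RepositoryNotFoundError. A minimal sketch of one way to work around this, assuming the intended checkpoint is the public stabilityai/sd-vae-ft-mse repo and that the local directory below is your own choice:

# Sketch only: pre-download the VAE so from_pretrained() sees a local folder.
# "stabilityai/sd-vae-ft-mse" and the target path are assumptions, not part of
# the FollowYourEmoji repo itself.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="stabilityai/sd-vae-ft-mse",
    local_dir="F:/FollowYourEmoji/pretrained/sd-vae-ft-mse",
)
print("VAE downloaded to:", local_dir)

Then point vae_model_path in ./configs/infer.yaml at that local folder (or at the full repo id stabilityai/sd-vae-ft-mse) instead of the bare sd-vae-ft-mse, and rerun the command above.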