primepake / wav2lip_288x288

MIT License
560 stars 143 forks

color_syncnet_train.py error #60

Closed ltbjwzttzz closed 10 months ago

ltbjwzttzz commented 1 year ago

Command: `python color_syncnet_train.py --data_root ./data/preprocessed_root/original_data --checkpoint_dir ./lipsync_expert`

Error:

```
C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\cuda\Loss.cu:103: block: [0,0,0], thread: [36,0,0] Assertion `input_val >= zero && input_val <= one` failed.
0it [00:07, ?it/s]
Traceback (most recent call last):
  File "E:\AI\wav2lip_288x288\color_syncnet_train.py", line 276, in <module>
    train(device, model, train_data_loader, test_data_loader, optimizer,
  File "E:\AI\wav2lip_288x288\color_syncnet_train.py", line 162, in train
    loss.backward()
  File "C:\AIr\anaconda3\envs\wav2lip_288\lib\site-packages\torch\_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "C:\AIr\anaconda3\envs\wav2lip_288\lib\site-packages\torch\autograd\__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
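For context: the device-side assertion `input_val >= zero && input_val <= one` in `Loss.cu` is `nn.BCELoss` rejecting an input probability outside `[0, 1]`. A minimal sketch reproducing the same check on CPU (where PyTorch raises a readable error instead of the opaque CUDA assert):

```python
import torch
import torch.nn as nn

# nn.BCELoss requires every input to lie in [0, 1].
# On CPU this raises a clear RuntimeError; on CUDA the same check
# surfaces as "device-side assert triggered", as in the traceback above.
criterion = nn.BCELoss()
bad_input = torch.tensor([-0.2, 0.5])   # -0.2 is out of range
target = torch.tensor([0.0, 1.0])

try:
    criterion(bad_input, target)
except RuntimeError as e:
    print("BCELoss rejected out-of-range input:", e)
```

Running the failing script with `CUDA_LAUNCH_BLOCKING=1` (as the error message suggests), or briefly on CPU, is an easy way to confirm this is the assertion being hit.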

Canyao45 commented 1 year ago

Change it to ReLU.

A-ka-duo-jiu-o commented 1 year ago

@Canyao45 How exactly should it be changed?

cookabc commented 1 year ago

@Canyao45 How exactly should it be changed?

You may try changing the activation here: https://github.com/primepake/wav2lip_288x288/blob/b351d584523707c055225444b4883d40409071a4/models/conv2.py#L12
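A sketch of why the activation choice matters (this is an illustration, not the repo's exact code): SyncNet-style training scores the audio and video embeddings with cosine similarity and feeds that score directly into `nn.BCELoss`. If the network's final activation can output negative values, the cosine similarity can be negative and BCELoss asserts; with ReLU, both embeddings are nonnegative, so the similarity stays in `[0, 1]`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
relu = nn.ReLU()

# Stand-ins for the audio/video embeddings produced by the conv stacks.
# ReLU clamps them to >= 0, so their cosine similarity lies in [0, 1].
a = relu(torch.randn(4, 512))
v = relu(torch.randn(4, 512))

d = F.cosine_similarity(a, v)          # per-pair score in [0, 1]
loss = nn.BCELoss()(d, torch.ones(4))  # valid BCELoss input, no assert
print(loss.item())
```

With an activation that allows negative outputs (e.g. PReLU or LeakyReLU), `d` can drop below zero and the training step fails with exactly the device-side assert reported above.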

shengzewen commented 1 year ago

@cookabc It works!!! Thank you so much.