deepfakes / faceswap-playground

User dedicated repo for the faceswap project

DetectedFace is not subscriptable #273

Closed GISGuy closed 5 years ago

GISGuy commented 5 years ago

Same error message as issue #242

There is no crash log. I tried running "Train" from the faceswap UI as well as running it from the Anaconda command line.

Here is the full log of the run:

```
(faceswap) C:\Users\User\faceswap>python faceswap.py convert -i C:/Users/User/Desktop/FS/Video_Test/Test2.mp4 -o C:/Users/User/Desktop/FS/Output -l 0.6 -f C:/Users/User/Desktop/FS/Test.jpeg -m C:/Users/User/Desktop/FS/Model -t original -c masked -M facehull -b 5.0 -e 0.0 -g 1 -sh none -L INFO
04/07/2019 00:37:17 INFO     Log level set to: INFO
04/07/2019 00:37:22 INFO     Input Video: C:\Users\User\Desktop\FS\Video_Test\Test2.mp4
04/07/2019 00:37:22 INFO     Filter: ['C:/Users/User/Desktop/FS/Test.jpeg']
04/07/2019 00:37:23 INFO     Adding post processing item: Face Filter
04/07/2019 00:37:23 WARNING  No Alignments file found. Extracting on the fly.
04/07/2019 00:37:23 WARNING  NB: This will use the inferior dlib-hog for extraction and dlib pose predictor for landmarks. It is recommended to perfom Extract first for superior results
04/07/2019 00:37:23 INFO     Loading Detect from Dlib_Hog plugin...
04/07/2019 00:37:23 INFO     Loading config: 'C:\Users\User\faceswap\config\extract.ini'
04/07/2019 00:37:23 INFO     Loading Align from Dlib plugin...
04/07/2019 00:37:23 INFO     Initializing Dlib-HOG Detector...
04/07/2019 00:37:23 INFO     Initialized Dlib-HOG Detector...
04/07/2019 00:37:23 INFO     Initializing Dlib Pose Predictor...
04/07/2019 00:37:25 INFO     Initialized Dlib Pose Predictor.
04/07/2019 00:37:25 INFO     Loading Model from Original plugin...
Using TensorFlow backend.
04/07/2019 00:37:28 INFO     Loading config: 'C:\Users\User\faceswap\config\train.ini'
04/07/2019 00:37:28 INFO     Using configuration saved in state file
04/07/2019 00:37:30 INFO     Loaded model from disk: 'C:\Users\User\Desktop\FS\Model'
04/07/2019 00:37:30 INFO     Loading Convert from Masked plugin...
  0%|          | 41/62916 [00:22<9:25:27, 1.85it/s]
Exception in thread Thread-5:
Traceback (most recent call last):
  File "C:\Users\User\MiniConda3\envs\faceswap\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Users\User\faceswap\lib\multithreading.py", line 463, in run
    for item in self.generator:
  File "C:\Users\User\faceswap\scripts\convert.py", line 124, in prepare_images
    {"detected_faces": detected_faces})
  File "C:\Users\User\faceswap\scripts\fsmedia.py", line 283, in do_actions
    action.process(output_item)
  File "C:\Users\User\faceswap\scripts\fsmedia.py", line 420, in process
    if not self.filter.check(detected_face["face"]):
TypeError: 'DetectedFace' object is not subscriptable
```
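The TypeError at the bottom of that traceback is generic Python behavior: square-bracket subscripting only works on objects whose class defines `__getitem__`. A minimal sketch reproducing the message (the `DetectedFace` stand-in below is illustrative, not faceswap's actual class):

```python
class DetectedFace:
    """Stand-in attribute-holder; defines no __getitem__ (illustrative only)."""
    def __init__(self, face):
        self.face = face

detected_face = DetectedFace(face="pixel data")

print(detected_face.face)    # attribute access works

try:
    detected_face["face"]    # dict-style access fails
except TypeError as err:
    print(err)               # 'DetectedFace' object is not subscriptable
```

In other words, the filter code in fsmedia.py is indexing the object like a dict where it should be reading an attribute (or the class would need to implement `__getitem__`).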

Operating system and version: Windows 10
Python version: 3.6.8
Faceswap method: GPU

Detailed system info:

```
============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         bd08b9e Merge pull request #691 from kvrooman/histogram-fix
gpu_cuda:            10.1
gpu_cudnn:           7.5.0
gpu_devices:         GPU_0: Quadro K4100M
gpu_driver:          418.96
gpu_vram:            GPU_0: 4096MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.15063-SP0
os_release:          10
py_command:          C:\Users\User\faceswap/faceswap.py gui
py_conda_version:    conda 4.5.12
py_implementation:   CPython
py_version:          3.6.8
py_virtual_env:      True
sys_cores:           8
sys_processor:       Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
sys_ram:             Total: 32673MB, Available: 27230MB, Used: 5442MB, Free: 27230MB
```

=============== Pip Packages ===============

```
absl-py==0.7.0
astor==0.7.1
certifi==2019.3.9
Click==7.0
cloudpickle==0.8.0
cmake==3.13.3
cycler==0.10.0
cytoolz==0.9.0.1
dask==1.1.4
decorator==4.4.0
dlib==19.16.99
face-recognition==1.2.3
face-recognition-models==0.3.0
ffmpy==0.2.2
gast==0.2.2
grpcio==1.16.1
h5py==2.9.0
imageio==2.5.0
Keras==2.2.4
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
kiwisolver==1.0.1
Markdown==3.0.1
matplotlib==2.2.2
mkl-fft==1.0.10
mkl-random==1.0.2
mock==2.0.0
networkx==2.2
numpy==1.15.4
nvidia-ml-py3==7.352.0
olefile==0.46
opencv-python==4.0.0.21
pathlib==1.0.1
pbr==5.1.3
Pillow==5.4.1
protobuf==3.6.1
psutil==5.6.1
pyparsing==2.3.1
pyreadline==2.1
python-dateutil==2.8.0
pytz==2018.9
PyWavelets==1.0.2
PyYAML==5.1
scikit-image==0.14.2
scikit-learn==0.20.3
scipy==1.2.1
six==1.12.0
tensorboard==1.12.2
tensorflow==1.12.0
tensorflow-estimator==1.13.0
termcolor==1.1.0
toolz==0.9.0
toposort==1.5
tornado==6.0.2
tqdm==4.31.1
Werkzeug==0.14.1
wincertstore==0.2
```

============== Conda Packages ==============

packages in environment at C:\Users\User\MiniConda3\envs\faceswap:

Name Version Build Channel
_tflow_select 2.1.0 gpu
absl-py 0.7.0 py36_0
astor 0.7.1 py36_0
blas 1.0 mkl
ca-certificates 2019.1.23 0
certifi 2019.3.9 py36_0
Click 7.0
cloudpickle 0.8.0 py36_0
cmake 3.13.3
cudatoolkit 9.0 1
cudnn 7.3.1 cuda9.0_0
cycler 0.10.0 py36h009560c_0
cytoolz 0.9.0.1 py36hfa6e2cd_1
dask-core 1.1.4 py36_1
decorator 4.4.0 py36_1
dlib 19.16.99
face-recognition 1.2.3
face-recognition-models 0.3.0
ffmpeg 4.1.1 h6538335_0 conda-forge
ffmpy 0.2.2
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py36_0
grpcio 1.16.1 py36h351948d_1
h5py 2.9.0 py36h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
imageio 2.5.0 py36_0
intel-openmp 2019.3 203
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.7 py_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.0.9 py_0
kiwisolver 1.0.1 py36h6538335_0
libmklml 2019.0.3 0
libpng 1.6.36 h2a8f88b_0
libprotobuf 3.6.1 h7bd577a_0
libtiff 4.0.10 hb898794_2
markdown 3.0.1 py36_0
matplotlib 2.2.2 py36had4c4a9_2
mkl 2019.3 203
mkl_fft 1.0.10 py36h14836fe_0
mkl_random 1.0.2 py36h343c172_0
mock 2.0.0 py36h9086845_0
networkx 2.2 py36_1
numpy 1.15.4 py36h19fb1c0_0
numpy-base 1.15.4 py36hc3f5095_0
nvidia-ml-py3 7.352.0
olefile 0.46 py36_0
opencv-python 4.0.0.21
openssl 1.1.1b he774522_1
pathlib 1.0.1 py36_1
pbr 5.1.3 py_0
pillow 5.4.1 py36hdc69c19_0
pip 19.0.3 py36_0
protobuf 3.6.1 py36h33f27b4_0
psutil 5.6.1 py36he774522_0
pyparsing 2.3.1 py36_0
pyqt 5.9.2 py36h6538335_2
pyreadline 2.1 py36_1
python 3.6.8 h9f7ef89_7
python-dateutil 2.8.0 py36_0
pytz 2018.9 py36_0
pywavelets 1.0.2 py36h8c2d366_0
pyyaml 5.1 py36he774522_0
qt 5.9.7 vc14h73c81de_0
scikit-image 0.14.2 py36ha925a31_0
scikit-learn 0.20.3 py36h343c172_0
scipy 1.2.1 py36h29ff71c_0
setuptools 40.8.0 py36_0
sip 4.19.8 py36h6538335_0
six 1.12.0 py36_0
sqlite 3.27.2 he774522_0
tensorboard 1.12.2 py36h33f27b4_0
tensorflow 1.12.0 gpu_py36ha5f9131_0
tensorflow-base 1.12.0 gpu_py36h6e53903_0
tensorflow-estimator 1.13.0 py_0
tensorflow-gpu 1.12.0 h0d30ee6_0
termcolor 1.1.0 py36_1
tk 8.6.8 hfa6e2cd_0
toolz 0.9.0 py36_0
toposort 1.5
tornado 6.0.2 py36he774522_0
tqdm 4.31.1 py36_1
vc 14.1 h0510ff6_4
vs2015_runtime 14.15.26706 h3a45250_0
werkzeug 0.14.1 py36_0
wheel 0.33.1 py36_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0

I'd appreciate any clues. Thanks in advance!

Kirin-kun commented 5 years ago

Why are you extracting on the fly? And with the HOG extractor to boot, which is the worst one. And are you trying to convert before training, or after? It's not clear.

Just try to do it step by step.
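For reference, the step-by-step workflow would look roughly like this (the paths are placeholders, and the flags should be double-checked against `python faceswap.py <command> -h` for your version):

```shell
# 1) Extract faces from each source (a video file or a directory of frames)
python faceswap.py extract -i src_A.mp4 -o faces_A
python faceswap.py extract -i src_B.mp4 -o faces_B

# 2) Manually clean faces_A / faces_B: delete false positives and unwanted people

# 3) Train on the two cleaned facesets
python faceswap.py train -A faces_A -B faces_B -m model_dir

# 4) Convert with the trained model, pointing at the cleaned faces
python faceswap.py convert -i src_B.mp4 -o converted -m model_dir
```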

Kirin-kun commented 5 years ago

The thing with converting "on the fly" is that when a face has screwed-up landmarks, it will still try to convert it. You should extract first and clean your datasets of any strange-looking detections that aren't faces. Then, when converting, point the converter at the directory containing only the faces you want converted.

Kirin-kun commented 5 years ago

And wait... you have CUDA 10 but TensorFlow 1.12? Since I doubt you built it from source, this is not going to work anyway.

The more I read, the more I think you don't know what you are doing.

GISGuy commented 5 years ago

Hello Kirin, thanks for your reply. I did already train the model. Afterward, in the "Convert" section, the "Input Dir" field's hint says:

Input Dir - Input directory or video. Either a directory containing the image files you wish to process or path to a video file.

In that case, I assumed it would support a video file directly (which is the so-called "on the fly" mode?). So I should extract the frames and then the faces first, then convert, and then merge all the output frames back into a video. Is that how it works? Thanks again!

Kirin-kun commented 5 years ago

Yes, you'd better extract all the frames from your video, and then the faces into a directory, with faceswap extract.

I see that your video has 62k frames, which is obviously a big file. If you are just starting with faceswap, begin with a smaller clip, something like 1 or 2 minutes, which will already produce a few thousand frames. Then extract the faces from those frames and clean the faceset of any false positives (or faces of other people you don't want).

Then train, convert the frames, and remerge them into a video, re-adding the sound afterward.
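The split and remerge steps can be done with ffmpeg; a rough sketch (the filenames and frame rate are placeholders, so match them to your clip):

```shell
# Split the clip into numbered PNG frames
ffmpeg -i clip.mp4 frames/%06d.png

# ... run faceswap extract / train / convert on those frames ...

# Remerge the converted frames, copying the audio track from the original clip
ffmpeg -framerate 25 -i converted/%06d.png -i clip.mp4 \
    -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -c:a copy output.mp4
```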

Kirin-kun commented 5 years ago

Also, try dropping the filter option. The crash has something to do with it.

torzdf commented 5 years ago

Just to clear up some stuff here:

1) Uninstall your system-wide Cuda if you don't use it. Conda's Tensorflow installs its own local Cuda, which is a lot easier and more reliable, but it can sometimes clash with your system-wide install.

2) You can convert without running extract, but it isn't recommended. As pointed out, it will use the HOG detector, which is vastly inferior.

3) @Kirin-kun is correct. Start with small clips. However, you can extract faces straight from a video by feeding it into extract.
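On point 1, a quick way to confirm which Cuda the conda env actually bundles (assuming a standard conda setup, run inside the activated faceswap env):

```shell
# Show the toolkit/cuDNN versions installed in the active env
conda list cudatoolkit
conda list cudnn
```

In the conda list above these show as cudatoolkit 9.0 and cudnn 7.3.1, which is what conda's tensorflow-gpu 1.12 build uses, independent of the system-wide Cuda 10.1 reported in the system info.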

GISGuy commented 5 years ago

Well, it turned out that some of the command's optional parameters are not well supported. The following command works:

```
python faceswap.py convert -i C:/Users/User/Desktop/FS/Video_Test/Test2.mp4 -o C:/Users/User/Desktop/FS/Output
```