Closed Rafiur closed 3 years ago
Hey @Rafiur ,
thanks for passing by! It looks like you're trying to install "conda-forge" as a package, but it is actually a channel for downloading packages. To replicate our environment, just clone our repo and then run the following from a bash shell:
$ conda env create -f environment.yml
$ conda activate icpr2020
On what kind of dataset do you want to train? Please be aware that this is a repository for replicating the results of a scientific paper, so the pipeline is tailored to the two datasets we used (i.e., Deep Fake Detection Challenge and FaceForensics++). Best,
Edoardo
The error still occurs when I run "conda env create -f environment.yml" in the Anaconda prompt.
Yes, I have seen that the repository is tailored to two datasets, but I was hoping to learn the preprocessing procedure for my own dataset and how to carry it out.
Hey @Rafiur, I just managed to recreate the environment from scratch under Ubuntu 20.10, so I think the error is related to Windows-specific packages managed by Anaconda. I don't have a solution for you; try updating/reinstalling Anaconda and hope for an improvement. In general, I have seen many people here using Windows without this problem, so my guess is that something is broken in your Anaconda configuration. Maybe @CrohnEngineer has some Windows-specific hints to share.
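In case it helps, a typical sequence for refreshing a possibly broken Anaconda setup (standard conda commands; run them from the Anaconda prompt) would be something like:

```shell
# Update the conda tool itself in the base environment
conda update -n base conda
# Remove cached package tarballs and indexes that may be corrupted
conda clean --all
# Retry creating the environment from the repo's spec
conda env create -f environment.yml
```

If this still fails, a full reinstall of Anaconda is the blunter fallback.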
Regarding training with your own dataset: we don't cover this in our scripts. If you are really motivated to do so, the steps to follow are: 1) index your videos by modifying one of the index scripts, index_dfdc.py or index_ffpp.py; 2) extract the faces by modifying extract_faces.py;
then it should be straightforward to use one of the training scripts and test accordingly.
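To give an idea of step 1, here is a minimal sketch of what a custom index script could look like. The column names and folder layout ("fake"/"real" subfolders) are illustrative assumptions, not taken from index_dfdc.py; adapt them to whatever extract_faces.py expects in your copy of the repo.

```python
# Hypothetical index script sketch: one row per video, label from folder name.
from pathlib import Path

import pandas as pd


def index_my_dataset(root: str) -> pd.DataFrame:
    """Walk `root` and build a DataFrame with one row per .mp4 video."""
    records = []
    for path in sorted(Path(root).rglob("*.mp4")):
        records.append({
            # Path relative to the dataset root, as the repo scripts tend to use
            "path": str(path.relative_to(root)),
            # Assumption: the parent folder name encodes the class
            # ("fake" -> 1, anything else -> 0); change to match your layout.
            "label": int(path.parent.name == "fake"),
        })
    return pd.DataFrame(records)
```

You would then save the DataFrame (e.g. with `df.to_pickle(...)`) and point the face-extraction step at it.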
Yes, it seems it was an issue on my end, as I did the same on another Windows device and no error occurred. I will try to follow the given steps. Thank you for the valuable feedback.
While creating the environment this error occurs and rolls back the creation of all the other libraries. How do I get past this? Also, if I wanted to train on my own dataset, what is the pipeline?