Closed · HumoristReoccupy closed this issue 11 months ago
Hello,
Thank you for your question. The class images should be put in the data
folder. Sorry for not being clear in the readme; I will update it when I have time. I guess the original script just uses wandb by default, but disabling it is fine. Please let me know if you have any further questions.
That was a fast reply. Thanks.
I made a separate \data\ subfolder in the classification_data_dir and moved the class subfolders into it, and that seems to do the trick.
I originally tried this but ran into an error, so I wasn't sure at the time whether this was what I had to do. Now that I know this was the intended organization, here is the error I was getting:
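For anyone hitting the same layout problem, the rearrangement above can be sketched in a few lines of Python. This is a hypothetical helper, not part of the repo; the folder names other than "data" are made-up examples.

```python
from pathlib import Path
import tempfile

def move_classes_under_data(root: Path) -> Path:
    """Move every class subfolder of `root` into a root/data/ subfolder."""
    data = root / "data"
    data.mkdir(exist_ok=True)
    for sub in list(root.iterdir()):
        if sub.is_dir() and sub.name != "data":
            sub.rename(data / sub.name)  # e.g. root/char_a -> root/data/char_a
    return data

# Demo with a throwaway directory and two example class folders
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "classification_data_dir"
    (root / "character_a").mkdir(parents=True)
    (root / "character_b").mkdir()
    data = move_classes_under_data(root)
    print(sorted(p.name for p in data.iterdir()))  # ['character_a', 'character_b']
```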
File "C:\Users\usernmae\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\_backends.py", line 513, in is_appropriate_type
return self.K.is_tensor(tensor) and self.K.is_keras_tensor(tensor)
AttributeError: module 'keras.backend' has no attribute 'is_tensor'. Did you mean: '_to_tensor'?
The error was coming from the einops-0.3.0 package installed from the classifier_training\requirements.txt file. Checking einops' GitHub showed that it was a known issue, and updating to the latest version, einops-0.6.0, resolves it.
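A quick sanity check along these lines could catch the stale pin before training starts. The parsing helper below is a hypothetical sketch, not repo code; 0.6.0 is simply the first version that worked for me.

```python
# Compare the installed einops version against the first release that
# avoided the `keras.backend has no attribute 'is_tensor'` error for me.
from importlib import metadata

def parse_version(ver: str) -> tuple:
    """Turn a dotted version string like '0.6.0' into (0, 6, 0)."""
    return tuple(int(p) for p in ver.split(".")[:3] if p.isdigit())

def einops_is_fixed(minimum=(0, 6, 0)) -> bool:
    """True if einops is installed and at least `minimum`."""
    try:
        return parse_version(metadata.version("einops")) >= minimum
    except metadata.PackageNotFoundError:
        return False

print(parse_version("0.3.0") < (0, 6, 0))  # True: the pinned version predates the fix
```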
Before confirming, I went ahead and made sure I had no further issues: I was able to train the new vision model successfully and successfully ran the folder arrangement step on the classification data it generated.
The only other question I have is how to organize characters that do not play nice with the face detection, whether because of uncommon facial structures or because hoods or blindfolds hide the face in certain shots. This affects both the training data for the classifying script and the frames being classified, since those images get flagged with "0 faces detected" and skipped.
Glad to hear that this is working for you now. Dependency issues are always tricky, and you are right: I am also using einops-0.6.0. I will keep these in mind when I update the repo.
Concerning the face detection part: I acknowledge this is the current bottleneck, and I think it would be the bottleneck for any similar workflow. I also had trouble detecting some characters when I trained my models. With the current models we can only fix this manually, but hopefully we can get a better detection model (probably full body + head) in the near future. (I will not have time for that myself, and I don't know whether I can persuade a friend to do it.)
I see. So for manual fixes, the approach would be to move images into their respective folders if they were listed as unknown and run the metadata-correcting script, as suggested for dealing with characters' backsides, and any character not tagged from the beginning due to these limitations is SoL until improvements are found?
For now I can work with this, as this pipeline already helps my workflow a lot. Thanks again for your help.
Also, I don't know if it's just me, but if anyone has problems with the frame-extraction ffmpeg script, just remove the quote (') characters around the -i and filter parameters; otherwise ffmpeg can't find your path.
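An alternative fix for the same quoting problem: build the ffmpeg command as an argument list and pass it to subprocess.run, so no shell quoting is involved at all and paths with spaces work untouched. This is a sketch, not the repo's script; the file names and fps value are made-up examples.

```python
import subprocess

def build_extract_cmd(video_path: str, out_pattern: str, fps: int = 1) -> list:
    """Return the argv list for extracting frames at `fps` frames/second."""
    return [
        "ffmpeg",
        "-i", video_path,            # the path is one argv entry: no quotes needed
        "-filter:v", f"fps={fps}",   # the filter string likewise needs no quoting
        out_pattern,
    ]

cmd = build_extract_cmd("my anime episode.mkv", "frames/%06d.png")
print(cmd[1:3])  # ['-i', 'my anime episode.mkv']
# To actually run it: subprocess.run(cmd, check=True)
```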
The new version should work with back shots as well when --crop_with_face
is not specified. The way the ffmpeg command is called has also been modified, so I think I can close the issue now.
I've been trying to set up this pipeline and have run into this output when running the train.py script for the Character Classification Training section.
(pretrain.ckpt is the danbooruFaces_L_16_image...False_lastEpoch.ckpt file suggested; the name was just too long when writing this up.) (I am running this on Windows 10.)
I've gone back and made sure I have all the required dependencies installed. One of the odd things I noticed is that the script is adding a \data\ subfolder to the path that should not be there. Removing the image and recompiling labels.csv just made the script stop at a different image in a different class, so I left only one class, and it errored on every single image until there were too few and it raised the error asking for more data images.
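A debugging sketch for this kind of path mismatch: list every image path referenced by labels.csv that does not exist on disk, so a stray \data\ segment shows up immediately. This is hypothetical; the column name "path" is an assumption and the real CSV layout may differ.

```python
import csv
from pathlib import Path

def missing_image_paths(labels_csv: str, column: str = "path") -> list:
    """Return all paths in the CSV's `column` that are missing on disk."""
    with open(labels_csv, newline="") as f:
        return [row[column] for row in csv.DictReader(f)
                if not Path(row[column]).exists()]
```

Running this on the generated labels.csv would show whether the paths on disk and the paths the script expects disagree by exactly one folder level.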
On a side note, is a wandb API key required? It was originally asking for an api_key, and while I did sign up and get one, the result didn't change, so I just ran
wandb disabled
during troubleshooting so it would stop appearing, and re-enabled it when testing.
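Another way to keep wandb quiet without an account is to set its mode via the environment before the training script imports wandb. WANDB_MODE is a documented wandb environment variable, and "disabled" turns all logging calls into no-ops; setting it in os.environ is just one way to do it (exporting it in the shell works too).

```python
import os

# Must be set before `import wandb` runs anywhere in the script.
os.environ["WANDB_MODE"] = "disabled"
print(os.environ["WANDB_MODE"])  # disabled
```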