hcmlab / GANonymization

A GAN-based Face Anonymization Framework for Preserving Emotional Expressions
https://hcmlab.github.io/GANonymization/
MIT License

Training results #2

Closed PedroKBrant closed 7 months ago

PedroKBrant commented 9 months ago

Hello @FHellmann, thank you for making your code available. I tried to train the network myself but got an awkward result. The three images are: the original face from CelebA, the anonymization using your available model, and the model I trained (epoch 107), respectively. Any idea on what I did wrong would be greatly appreciated :)

Results ![1_resized](https://github.com/hcmlab/GANonymization/assets/34447224/740cb15d-3bc6-461b-9531-45e2d1939a3a) ![1_anon_baseline](https://github.com/hcmlab/GANonymization/assets/34447224/fd85c0bb-479e-4f79-a220-8bd604fe39c0) ![1_anon](https://github.com/hcmlab/GANonymization/assets/34447224/1bcbde6e-e858-4c95-8ab3-4db8f51b78a4)

Oddly, the sample outputs logged during training look fine.

Output ![107-616998](https://github.com/hcmlab/GANonymization/assets/34447224/08102c46-6acd-415a-a94f-a7a3d29ec932)

My conda env uses CUDA 12.2, Python 3.8, and torch 2.1.2. I'm running Ubuntu 22.04 LTS on an RTX 3090 with 32 GB of RAM.

I have followed the instructions in the README and prepared the dataset with this command:

python main.py preprocess --input_path ../../../../../media/voxar/datasets/pkb/celeba_splitted/ --img_size 512 --test_size 0.1 --output_dir ../../../../../media/voxar/datasets/pkb/celeba_splitted

def preprocess(input_path: str, img_size: int = 512, align: bool = True, test_size: float = 0.1, shuffle: bool = True,
               output_dir: str = None, num_workers: int = 8):
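For anyone unsure what the test_size and shuffle arguments above do: the split holds out a fraction of the preprocessed images for testing. A minimal stdlib sketch of that behavior (the helper name and fixed seed are hypothetical, not the repo's actual implementation):

```python
import random

def split_files(files, test_size=0.1, shuffle=True, seed=42):
    """Hold out a fraction of files for testing.

    Hypothetical sketch mirroring the preprocess() signature above;
    the real implementation in the repo may differ.
    """
    files = list(files)
    if shuffle:
        random.Random(seed).shuffle(files)
    n_test = int(len(files) * test_size)
    return files[n_test:], files[:n_test]

train, test = split_files([f"{i:06d}.jpg" for i in range(100)])
print(len(train), len(test))  # 90 10
```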

Training used these hyperparameters:

python main.py train_pix2pix --data_dir lib/datasets/ --log_dir logs/ --models_dir baseline/ --output_dir results/ --dataset_name celeba_splitted/FaceSegmentation/

def train_pix2pix(data_dir: str, log_dir: str, models_dir: str, output_dir: str, dataset_name: str, epoch: int = 0,
                  n_epochs: int = 200, batch_size: int = 128, lr: float = 0.0002, b1: float = 0.5, b2: float = 0.999,
                  n_cpu: int = 16, img_size: int = 256, checkpoint_interval: int = 2800, device: int = 0):

I have also changed detect_anomaly to False:

trainer = pytorch_lightning.Trainer(deterministic=True,
                                        accelerator="gpu" if device >= 0 else "cpu",
                                        devices=[device] if device >= 0 else None,
                                        callbacks=callbacks,
                                        logger=tb_logger,
                                        max_epochs=num_epoch,
                                        val_check_interval=checkpoint_interval,
                                        log_every_n_steps=1,
                                        detect_anomaly=False)
FHellmann commented 8 months ago

Hi @PedroKBrant, I just merged some bug fixes. Furthermore, under the test folder, you can find a test_main.py with examples of how to use the code. If those don't help solve your issue, let me know.

PedroKBrant commented 8 months ago

Thank you for the attention. I will pull the bug fixes and try running that file first.

PedroKBrant commented 8 months ago

@FHellmann, just a quick question: when I run the train_pix2pix script, should the data_dir point to the FaceSegmentation folder?

FHellmann commented 8 months ago

You should train it on the folder (probably "FacialLandmarks478") containing the images of a face together with the corresponding mesh, named like this, for example: mesh_0-segmentation_0-crop_0-000001
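In case it helps others with the same question: going by the example filename above, a face crop and its mesh appear to share a trailing image index. A hypothetical stdlib sketch of that grouping (the second filename pattern is invented for illustration; only the mesh_... example comes from this thread):

```python
import os
import re

def pair_training_files(filenames):
    """Group preprocessing outputs by their shared trailing image index.

    Hypothetical helper: assumes files belonging to the same source
    image end with the same number, as in the example filename above.
    """
    pairs = {}
    for name in filenames:
        stem = os.path.splitext(name)[0]
        m = re.search(r"(\d+)$", stem)
        if m:
            pairs.setdefault(m.group(1), []).append(name)
    return pairs

files = [
    "mesh_0-segmentation_0-crop_0-000001.png",  # from the thread
    "img_0-crop_0-000001.png",                  # invented for illustration
]
print(pair_training_files(files))
```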

PedroKBrant commented 8 months ago

It is now working, thank you