HamadYA / GhostFaceNets

This repository contains the official implementation of GhostFaceNets, State-Of-The-Art lightweight face recognition models.
https://ieeexplore.ieee.org/document/10098610
MIT License

Custom Training #51

Open krishnan5 opened 5 months ago

krishnan5 commented 5 months ago

Thank you so much for this face recognition module!

I have a custom training dataset of my own, with 10 people each in a separate folder, as you described in the training dataset format.

I tried Basic Training on that dataset, but it throws an error: "ZeroDivisionError: division by zero".

This is what I tried: https://github.com/HamadYA/GhostFaceNets?tab=readme-ov-file#basic-training

Here is my code. All I did was copy-paste it and change the dataset and .bin paths to the ones you provided!

from tensorflow import keras
import losses, train, GhostFaceNets

data_path = '/content/drive/MyDrive/GhostFaceNets-main/datasets_112x112_folders'
eval_paths = ['/content/drive/MyDrive/GhostFaceNets-main/lfw.bin', '/content/drive/MyDrive/GhostFaceNets-main/cfp_fp.bin', '/content/drive/MyDrive/GhostFaceNets-main/agedb_30.bin']

basic_model = GhostFaceNets.buildin_models("ghostnetv1", dropout=0, emb_shape=512, output_layer='GDC', bn_momentum=0.9, bn_epsilon=1e-5)
basic_model = GhostFaceNets.add_l2_regularizer_2_model(basic_model, weight_decay=5e-4, apply_to_batch_normal=False)
basic_model = GhostFaceNets.replace_ReLU_with_PReLU(basic_model)

tt = train.Train(data_path, eval_paths=eval_paths,
    save_path='ghostnetv1_w1.3_s2.h5', basic_model=basic_model, model=None,
    lr_base=0.1, lr_decay=0.5, lr_decay_steps=45, lr_min=1e-5,
    batch_size=128, random_status=0, eval_freq=1, output_weight_decay=1)

optimizer = keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
sch = [
    {"loss": losses.ArcfaceLoss(scale=32), "epoch": 1, "optimizer": optimizer},
    {"loss": losses.ArcfaceLoss(scale=64), "epoch": 50},
]
tt.train(sch, 0)

I'm lost on what to do here; please guide me through it.

Should I use the pre-trained model to go further? How am I supposed to use the .bin files, and should I train and create my own .bin files for the training?
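
Edit: for anyone else hitting this, one possible cause (my assumption; I have not traced it in train.py) is that steps per epoch comes from an integer division like total_images // batch_size, which is zero when a small dataset has fewer images than batch_size=128. A quick sanity check:

import os

data_path = '/content/drive/MyDrive/GhostFaceNets-main/datasets_112x112_folders'
# Count images per identity folder and in total.
counts = {p: len(os.listdir(os.path.join(data_path, p))) for p in sorted(os.listdir(data_path))}
total = sum(counts.values())
print(counts)
print(f"total images: {total}, steps per epoch at batch_size=128: {total // 128}")
# If the last number is 0, lower batch_size below the total image count
# (or add more images per person) and retry.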

Thanks in advance!

[screenshot of the error attached]

12194916 commented 1 month ago

Hello,

I was also using my custom dataset. Where did you get the eval_paths = ['datasets/faces_emore/lfw.bin', 'datasets/faces_emore/cfp_fp.bin', 'datasets/faces_emore/agedb_30.bin'] for a custom dataset? Can they be created using prepare_data.py?

And did you solve the issue you posted? I am having that issue as well.
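
For what it's worth: as far as I can tell, the lfw.bin / cfp_fp.bin / agedb_30.bin files ship inside the downloaded InsightFace training bundles such as faces_emore rather than being generated by prepare_data.py, so for a custom eval set you have to build one yourself. The expected layout (which I would confirm against this repo's evals code) is a pickled (bins, issame_list) tuple; you can peek at an official one like this:

import pickle

# Inspect an official eval bin to see the layout a custom one must match.
with open('datasets/faces_emore/lfw.bin', 'rb') as f:
    bins, issame_list = pickle.load(f, encoding='bytes')
# InsightFace-style bins: encoded image bytes, two consecutive entries per
# verification pair, plus one boolean per pair.
print(len(bins), len(issame_list), type(bins[0]))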

ntckim commented 1 month ago

I am having trouble with my custom training as well. What did you do to create the .lst, as well as the .idx and .rec files, for the custom dataset?
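
Not the original poster, but for what it's worth: the .lst/.idx/.rec triple is MXNet RecordIO, the format the original InsightFace datasets ship in. Note that this repo's Basic Training example points data_path at plain image folders, so the .rec files may not be required for training at all. If you do want to pack one, a .lst is just a tab-separated text file (index, label, relative path) that MXNet's im2rec tool can then convert. A rough sketch for a folder-per-identity layout (paths are placeholders, and InsightFace's own packing scripts may expect a slightly different list format):

import os

root = 'datasets_112x112_folders'  # placeholder path to your dataset
# Write an MXNet-style .lst: "index<TAB>label<TAB>relative_path" per image.
with open('train.lst', 'w') as f:
    idx = 0
    for label, person in enumerate(sorted(os.listdir(root))):
        person_dir = os.path.join(root, person)
        for name in sorted(os.listdir(person_dir)):
            f.write(f"{idx}\t{label}\t{os.path.join(person, name)}\n")
            idx += 1

# Then, with MXNet installed, something like this in a shell packs the
# .idx/.rec pair next to the list file:
#   python -m mxnet.tools.im2rec train datasets_112x112_folders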

12194916 commented 1 month ago

The same problem here as well. I do not have a solution yet; I will let you know once I find one. I simply created a .bin file for my custom dataset and used it for eval_paths, but it did not work.

Below is my attempt at creating a .bin based on the dataset description in the README, but I do not think it is correct:

import os  # needed for os.listdir / os.path.join below (missing in the original paste)
import pickle
from itertools import combinations

import numpy as np
from PIL import Image

def load_and_pair_images(folder, label, cross_folder=None):
    """ Load images, pair them within the folder or cross folders, and assign labels. """
    images = []
    labels = []

    # List of files in the folder
    file_list = os.listdir(folder)
    if cross_folder:
        # Cross folder pairing
        cross_file_list = os.listdir(cross_folder)
        for file1 in file_list:
            for file2 in cross_file_list:
                img1 = Image.open(os.path.join(folder, file1))
                img2 = Image.open(os.path.join(cross_folder, file2))
                img1 = img1.resize((160, 160))  # Resize to uniform size (note: this repo's eval sets appear to use 112x112 aligned faces)
                img2 = img2.resize((160, 160))
                images.append((np.array(img1), np.array(img2)))
                labels.append(label)
    else:
        # Intra-folder pairing
        for (file1, file2) in combinations(file_list, 2):
            img1 = Image.open(os.path.join(folder, file1))
            img2 = Image.open(os.path.join(folder, file2))
            img1 = img1.resize((160, 160))  # Resize to uniform size
            img2 = img2.resize((160, 160))
            images.append((np.array(img1), np.array(img2)))
            labels.append(label)

    return images, labels

def create_eval_bin(path, output_file):
    """ Create a .bin file for evaluation """
    same_images_0, same_labels_0 = load_and_pair_images(os.path.join(path, '0'), True)
    same_images_1, same_labels_1 = load_and_pair_images(os.path.join(path, '1'), True)
    diff_images, diff_labels = load_and_pair_images(os.path.join(path, '0'), False, cross_folder=os.path.join(path, '1'))

    # Combine data
    images = np.array(same_images_0 + same_images_1 + diff_images, dtype=object)
    labels = np.array(same_labels_0 + same_labels_1 + diff_labels, dtype=bool)

    # Shuffle data (optional)
    idx = np.random.permutation(len(labels))
    images = images[idx]
    labels = labels[idx]

    # Serialize using pickle
    with open(output_file, 'wb') as f:
        pickle.dump((images, labels), f)

# Example usage
create_eval_bin('datasets/friends', 'datasets/friends_eval.bin')
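
Update: I now think the mismatch is the on-disk format. Matching the (bins, issame_list) layout noted above, encoded image bytes, two per pair, one bool per pair, 112x112 aligned faces, rather than pairs of raw numpy arrays, here is a sketch that writes it (the pairing logic and paths are placeholders; confirm against this repo's evals code before relying on it):

import os
import pickle
from io import BytesIO
from itertools import combinations

from PIL import Image

def encode_112(path):
    """Return JPEG-encoded bytes of a 112x112 copy of an image."""
    buf = BytesIO()
    Image.open(path).convert('RGB').resize((112, 112)).save(buf, format='JPEG')
    return buf.getvalue()

def write_insightface_bin(root, output_file):
    """Pickle (bins, issame_list): two encoded images per pair, one bool per pair."""
    bins, issame_list = [], []
    people = sorted(os.listdir(root))
    # Positive pairs: every combination of two images of the same identity.
    for person in people:
        folder = os.path.join(root, person)
        files = [os.path.join(folder, f) for f in sorted(os.listdir(folder))]
        for f1, f2 in combinations(files, 2):
            bins += [encode_112(f1), encode_112(f2)]
            issame_list.append(True)
    # Negative pairs: one image from each distinct pair of identities (kept small here).
    for p1, p2 in combinations(people, 2):
        f1 = os.path.join(root, p1, sorted(os.listdir(os.path.join(root, p1)))[0])
        f2 = os.path.join(root, p2, sorted(os.listdir(os.path.join(root, p2)))[0])
        bins += [encode_112(f1), encode_112(f2)]
        issame_list.append(False)
    with open(output_file, 'wb') as f:
        pickle.dump((bins, issame_list), f)

write_insightface_bin('datasets/friends', 'datasets/friends_eval.bin')
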
ntckim commented 1 month ago

Hi, just following up to see if you were able to figure it out.