Manoj-2702 / PoseSync-Flower-Federated-Learning-for-Yoga-Poses

Demo showcasing Federated Learning with Flower for yoga pose classification, enabling collaborative training across distributed datasets while preserving data privacy.
MIT License

How to change the model to a GAN and the dataset to CIFAR-10 or a custom dataset #1

Open ibad321 opened 2 weeks ago

ibad321 commented 2 weeks ago

How can we change the model and the dataset?

Manoj-2702 commented 2 weeks ago

@ibad321 Can you please describe the question properly? I didn't understand you.

ibad321 commented 2 weeks ago

@Manoj-2702 I recently checked out your project where you used the Flower framework with a CNN model on the Yoga dataset—really impressive work! I'm interested in modifying this project by using a different dataset, like CIFAR-10, and implementing a different deep learning model, such as a GAN. Could you provide some guidance on how I might go about making these changes? If possible, could you share your WhatsApp number for more direct communication?

Thanks a lot for your help!

ibad321 commented 2 weeks ago


@Manoj-2702 I ran the project. The model is actually overfitting: it reaches 100% training accuracy on client 2 but only around 60-70% validation accuracy, and testing accuracy is also around 60-70%. One more thing: the Streamlit app you created does not predict the actual pose; it only shows the placeholder "Class Name 7" and the same yoga name every time.

Manoj-2702 commented 2 weeks ago

@ibad321 Yes, the model is overfitting due to data scarcity. We trained it on a very small set of yoga pose images.
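
As an aside, a common mitigation when more data cannot be collected is on-the-fly augmentation. A minimal sketch, assuming TF 2.6+ Keras preprocessing layers; the transform values and the augment_batch helper are illustrative, not the project's actual settings:

import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),  # small rotations keep the pose recognizable
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])

def augment_batch(images, labels):
    # training=True activates the random transforms
    return augment(images, training=True), labels

# train_ds = train_ds.map(augment_batch)  # applied per batch before fitting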

Manoj-2702 commented 2 weeks ago

@ibad321 In your clients, you define the model you want to train, so each client should contain the architecture of the model being trained. Make sure the architecture stays identical across all clients (see the sketch below).

In the Streamlit app, I have used a placeholder for now, but there is a commented-out line for predicting the poses as well, so please go through the code once more.
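
For illustration, a minimal sketch of what such a client could look like with a different model (a small CIFAR-10 CNN; the names build_model and CifarClient and the hyperparameters are hypothetical). The key point is that every client must construct the identical architecture, or the aggregated weight lists will not line up:

import flwr as fl
import tensorflow as tf

def build_model():
    # Every client must build this exact same architecture,
    # otherwise FedAvg cannot average the weight arrays.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

class CifarClient(fl.client.NumPyClient):
    def __init__(self, model, x_train, y_train, x_test, y_test):
        self.model = model
        self.x_train, self.y_train = x_train, y_train
        self.x_test, self.y_test = x_test, y_test

    def get_parameters(self, config=None):
        return self.model.get_weights()

    def fit(self, parameters, config):
        self.model.set_weights(parameters)
        self.model.fit(self.x_train, self.y_train, epochs=1, batch_size=32)
        return self.model.get_weights(), len(self.x_train), {}

    def evaluate(self, parameters, config):
        self.model.set_weights(parameters)
        loss, acc = self.model.evaluate(self.x_test, self.y_test)
        return loss, len(self.x_test), {"accuracy": acc}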

ibad321 commented 2 weeks ago


@Manoj-2702 Thanks for the response. I am looking for a GAN implementation on any dataset through Flower, and there is no GitHub repository for that. With due respect, I request that you add a new branch implementing a GAN model on a dataset such as CIFAR-10. Thanks!

Manoj-2702 commented 2 weeks ago

@ibad321 Yeah sure, we can do that. I will create a branch for that. You can start contributing.

ibad321 commented 2 weeks ago


@Manoj-2702 Thanks for the response. I am new to federated learning: I have run a CNN on the CIFAR-10 dataset, but I have failed to train a GAN model. I will provide the code, so I kindly request that you look at it; your help would be appreciated. Thanks!

Manoj-2702 commented 2 weeks ago

@ibad321 You can provide me the code. I will try my best to integrate it. But due to my college timelines, it may take some time.

ibad321 commented 2 weeks ago


@Manoj-2702 Thanks for the response. I think the code itself is fine; it may be a Flower version issue. Please have a look; I think you can fix it easily.

client.py

import flwr as fl
import tensorflow as tf
import argparse
from keras.datasets.mnist import load_data
from numpy import expand_dims
from layers import create_model, generator_optimizer, discriminator_optimizer, generator_loss, discriminator_loss, seed, generate_and_save_images

BATCH_SIZE = 256
noise_dim = 100

def main() -> None:
    # Parse command line argument `partition`
    parser = argparse.ArgumentParser(description="Flower")
    parser.add_argument("--partition", type=int, choices=range(0, 2), required=True)
    args = parser.parse_args()

    # load_partition already adds the channel dimension and normalizes,
    # so repeating expand_dims here would produce a bogus 5-D tensor.
    x_train, x_test = load_partition(args.partition)

    model = create_model()

    client = GanClient(model, x_train, x_test)
    fl.client.start_numpy_client(server_address="[::]:8080", client=client)

def load_partition(idx: int):
    (x_train, y_train), (x_test, y_test) = load_data()
    x_train = expand_dims(x_train, axis=-1)
    x_train = x_train.astype('float32')
    x_train = x_train / 255.0

    x_test = expand_dims(x_test, axis=-1)
    x_test = x_test.astype('float32')
    x_test = x_test / 255.0
    return x_train[idx * 30000 : (idx + 1) * 30000], x_test[idx * 5000 : (idx + 1) * 5000]

class GanClient(fl.client.NumPyClient):
    def __init__(self, model, x_train, x_test):
        self.model = model
        self.x_train = x_train
        self.x_test = x_test
        # Batch the tensors once so fit/evaluate can iterate over mini-batches
        self.x_train_ds = tf.data.Dataset.from_tensor_slices(x_train).batch(BATCH_SIZE)
        self.x_test_ds = tf.data.Dataset.from_tensor_slices(x_test).batch(BATCH_SIZE)

    def get_parameters(self, config=None):  # newer Flower versions pass a config argument
        return self.model.get_weights()

    def fit(self, parameters, config):
        self.model.set_weights(parameters)
        generator = self.model.layers[0]
        discriminator = self.model.layers[1]
        for i, images in enumerate(self.x_train_ds):
            # Match the noise batch to the (possibly smaller) final image batch
            noise = tf.random.normal([tf.shape(images)[0], noise_dim])
            with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
                generated_images = generator(noise, training=True)

                real_output = discriminator(images, training=True)
                fake_output = discriminator(generated_images, training=True)

                gen_loss = generator_loss(fake_output)
                disc_loss = discriminator_loss(real_output, fake_output)

            gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
            gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

            generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
            discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
            print('%d d=%.3f, g=%.3f' % (i + 1, disc_loss, gen_loss))
        return self.model.get_weights(), len(self.x_train), {}

    def evaluate(self, parameters, config):
        # Apply the aggregated weights before evaluating; this was missing
        # and would otherwise evaluate stale local weights.
        self.model.set_weights(parameters)
        generator = self.model.layers[0]
        discriminator = self.model.layers[1]
        noise = tf.random.normal([BATCH_SIZE, noise_dim])
        generated_images = generator(noise, training=False)
        real_output = discriminator(self.x_test, training=False)
        fake_output = discriminator(generated_images, training=False)
        loss = discriminator_loss(real_output, fake_output)
        generate_and_save_images(generator, seed)
        # Avoid shadowing the numpy module name with a local variable
        loss_value = float(loss.numpy())
        return loss_value, len(self.x_test), {}

if __name__ == "__main__":
    main()
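
Switching the loader above from MNIST to CIFAR-10, as asked in this issue, mainly means changing the dataset import and the image shape: CIFAR-10 images are 32x32x3 rather than 28x28x1, so the generator and discriminator in layers.py would need matching shape changes. A sketch of just the loader swap, with load_partition_cifar as a hypothetical name:

from tensorflow.keras.datasets.cifar10 import load_data as load_cifar10

def load_partition_cifar(idx: int):
    # CIFAR-10 images already have a channel axis, so no expand_dims is needed
    (x_train, _), (x_test, _) = load_cifar10()
    x_train = x_train.astype('float32') / 255.0
    x_test = x_test.astype('float32') / 255.0
    # 50,000 train / 10,000 test images split into two partitions
    return x_train[idx * 25000 : (idx + 1) * 25000], x_test[idx * 5000 : (idx + 1) * 5000]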

server.py

import flwr as fl
import numpy as np
from typing import List, Optional, Tuple

class SaveModelStrategy(fl.server.strategy.FedAvg):
    def aggregate_fit(
        self,
        rnd: int,
        results: List[Tuple[fl.server.client_proxy.ClientProxy, fl.common.FitRes]],
        failures: List[BaseException],
    ) -> Optional[fl.common.Weights]:
        aggregated_weights = super().aggregate_fit(rnd, results, failures)
        if aggregated_weights is not None:
            # Save aggregated_weights
            print(f"Saving round {rnd} aggregated_weights...")
            np.savez(f"round-{rnd}-weights.npz", *aggregated_weights)
        return aggregated_weights

# Create strategy and run server
strategy = SaveModelStrategy(
    fraction_fit=0.1,  # Sample 10% of available clients for the next round
    min_fit_clients=2,  # Minimum number of clients to be sampled for the next round
    min_eval_clients=2,
    min_available_clients=2,
)
fl.server.start_server(config={"num_rounds": 20}, strategy=strategy)
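
Note that the snippet above follows the pre-1.0 Flower API. If a Flower 1.x release is installed (which may be the version issue mentioned above), the equivalent call would look roughly like this: min_eval_clients was renamed min_evaluate_clients, and the config dict became a ServerConfig object. The SaveModelStrategy override would also need updating to the 1.x aggregate_fit signature.

strategy = SaveModelStrategy(
    fraction_fit=0.1,
    min_fit_clients=2,
    min_evaluate_clients=2,
    min_available_clients=2,
)
fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=20),
    strategy=strategy,
)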

layers.py (GAN architecture; named to match the `from layers import ...` line in client.py)

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
import matplotlib.pyplot as plt

def define_discriminator():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                                     input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model

def define_generator():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))

    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

def create_model():
    model = Sequential()
    model.add(define_generator())
    model.add(define_discriminator())
    return model

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

def generate_and_save_images(model, test_input):
    predictions = model(test_input, training=False)

    fig = plt.figure(figsize=(4, 4))

    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')

    plt.savefig('image.png')
    plt.close(fig)  # close instead of plt.show() so headless clients don't block

# Shared objects imported by client.py; defined at module level so they are created once
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
seed = tf.random.normal([16, 100])
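
To run the three files above together, the usual pattern (given the argparse setup in client.py) would be one server process plus one client per partition, for example:

# Terminal 1: start the Flower server
python server.py

# Terminals 2 and 3: one client per MNIST partition
python client.py --partition 0
python client.py --partition 1
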
ibad321 commented 2 weeks ago

@Manoj-2702 Thank you so much for accepting my request. I think you can debug this code and add a new branch with the GAN model implementation.