tamiratGit / FedELM


Paper plan #13

Open · akusok opened 9 months ago

akusok commented 9 months ago

The paper goes approximately like this:

  1. Introduction
  2. Experimental setup for federated learning in the paper
  3. Present our implementation of federated ELM
  4. New contributions of the paper

akusok commented 9 months ago

In federated methods, we never see the data (X, Y) directly. Instead, we use (H'H, H'Y) for training. This works because both H'H and H'Y are sums over samples, so the per-user chunks add up exactly to the full-data matrices.

All code for each user lives in a separate function (or a separate code file). Only that code can read the data (X, Y) for that user. Users share only the processed data (H'H, H'Y).

  1. Generate weights, biases, etc., and share them with all users.
  2. Get the data chunks (H'H, H'Y) from each client.
  3. Send all the data chunks (H'H, H'Y) to every client.
  4. Finish training the ELM from (H'H, H'Y) and evaluate client performance. Finishing training is very easy: add up all the H'H together and all the H'Y together, then B = solve(H'H_sum, H'Y_sum).

Code example:

import numpy as np

def user_1_function(W, B):
    X1, Y1 = load_data()           # raw data only exists inside this function
    H1 = np.tanh(X1.dot(W) + B)    # hidden layer output; tanh is one common choice
    HH = H1.T.dot(H1)              # H'H, safe to share
    HY = H1.T.dot(Y1)              # H'Y, safe to share
    return HH, HY

W, B = generate_weights_and_biases()
user1_data = user_1_function(W, B)
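
For the server side, a minimal sketch of steps 1, 2, and 4, assuming hypothetical clients user_2_function and user_3_function with the same signature (step 3 would send the summed matrices back to each client):

import numpy as np

# hypothetical clients, each hiding its own (X, Y) behind a function
clients = [user_1_function, user_2_function, user_3_function]

W, B = generate_weights_and_biases()           # step 1: shared random projection
chunks = [client(W, B) for client in clients]  # step 2: collect (H'H, H'Y) chunks

# step 4: sum the chunks and solve for the output weights
HH_sum = sum(HH for HH, HY in chunks)
HY_sum = sum(HY for HH, HY in chunks)
beta = np.linalg.solve(HH_sum, HY_sum)         # "B = solve(...)" in step 4 above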
akusok commented 9 months ago
(image attachment)
tamiratGit commented 9 months ago

Hi Anton. Sorry, I am confused now. Let's say the server calls "user_1_function(W, B)", which returns (H'H, H'Y). Can the server call "user_1_function(W, B)" with an aggregate of H'H, H'Y (from C1, C2, C3)? I couldn't figure out how the FL server manages the process.

akusok commented 9 months ago

You should have another function for that, like "compute_accuracy(HH, HY)".
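
A minimal sketch of such a function, assuming classification with one-hot targets, and assuming W and B are passed in alongside the aggregates since the function needs them to project its local data (load_data is the same hypothetical local loader as above):

import numpy as np

def compute_accuracy(W, B, HH, HY):
    beta = np.linalg.solve(HH, HY)     # finish training from aggregated (H'H, H'Y)
    X1, Y1 = load_data()               # this user's data never leaves the function
    H1 = np.tanh(X1.dot(W) + B)        # same projection as during training
    Yh = H1.dot(beta)                  # predictions
    return np.mean(Yh.argmax(axis=1) == Y1.argmax(axis=1))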

akusok commented 9 months ago

Or you can make a user an object that stores W, bias, HH_own, HY_own, [HH of other users], and [HY of other users]. Then give that object functions like "get_data(W, bias) -> HH_own, HY_own", "add_other_users_data(HH, HY)", and "compute_accuracy()". You can create any data structures and objects that you need; don't limit yourself to what we discussed.
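
A minimal sketch of such a user object, under the same assumptions as above (tanh activation, one-hot classification targets, hypothetical load_data); the method names follow the ones suggested here:

import numpy as np

class User:
    def __init__(self):
        self.X, self.Y = load_data()   # raw data stays private inside the object
        self.HH_others, self.HY_others = [], []

    def get_data(self, W, bias):
        # compute and store this user's own (H'H, H'Y) to share with others
        self.W, self.bias = W, bias
        H = np.tanh(self.X.dot(W) + bias)
        self.HH_own = H.T.dot(H)
        self.HY_own = H.T.dot(self.Y)
        return self.HH_own, self.HY_own

    def add_other_users_data(self, HH, HY):
        self.HH_others.append(HH)
        self.HY_others.append(HY)

    def compute_accuracy(self):
        # finish training from own + other users' chunks, then evaluate locally
        HH = self.HH_own + sum(self.HH_others)
        HY = self.HY_own + sum(self.HY_others)
        beta = np.linalg.solve(HH, HY)
        H = np.tanh(self.X.dot(self.W) + self.bias)
        Yh = H.dot(beta)
        return np.mean(Yh.argmax(axis=1) == self.Y.argmax(axis=1))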