Open lynnprosper opened 5 years ago
Hi, I ran into this too. Have you solved it?
Yes, you're right. I never reproduced the accuracy as reported. With more epochs and data augmentation I reached 60+, but that is still low.
A similar issue is reported in another repo: https://github.com/AshwinRJ/Federated-Learning-PyTorch/issues/2
Thanks a lot. I also used some parts of your code. It's very clear and useful.
Congratulations on this nice code.
Cutting down args.num_users may work.
Thanks for your code. I have a question regarding the following lines:
```python
num_shards, num_imgs = 200, 300
idx_shard = [i for i in range(num_shards)]
dict_users = {i: np.array([], dtype='int64') for i in range(num_users)}
idxs = np.arange(num_shards*num_imgs)
labels = dataset.train_labels.numpy()

# sort labels
idxs_labels = np.vstack((idxs, labels))
idxs_labels = idxs_labels[:, idxs_labels[1, :].argsort()]
idxs = idxs_labels[0, :]

# divide and assign
for i in range(num_users):
    rand_set = set(np.random.choice(idx_shard, 2, replace=False))
    idx_shard = list(set(idx_shard) - rand_set)
    for rand in rand_set:
        dict_users[i] = np.concatenate(
            (dict_users[i], idxs[rand*num_imgs:(rand+1)*num_imgs]), axis=0)
```
Are you fixing the number of images per user at 600 (2 shards × 300 images) in this part? So it only works when we have 100 clients?
@Minoo-Hsn Yes, but you can change it via --num_users.
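For what it's worth, the shard-based non-IID partition discussed above can be sketched as a self-contained function (the synthetic `labels` array here is a stand-in for the real MNIST train labels; with the defaults, 100 users each get 2 shards of 300 images, i.e. 600 images per user):

```python
import numpy as np

def shard_partition(labels, num_users=100, num_shards=200, num_imgs=300):
    """Sort sample indices by label, split them into shards, and give
    each user 2 randomly chosen shards (so each user sees few classes)."""
    idxs = np.arange(num_shards * num_imgs)
    # sorting by label makes each shard (nearly) single-class
    idxs = idxs[np.argsort(labels[idxs], kind='stable')]
    idx_shard = list(range(num_shards))
    dict_users = {}
    for i in range(num_users):
        rand_set = set(np.random.choice(idx_shard, 2, replace=False))
        idx_shard = list(set(idx_shard) - rand_set)
        dict_users[i] = np.concatenate(
            [idxs[r * num_imgs:(r + 1) * num_imgs] for r in rand_set])
    return dict_users

labels = np.random.randint(0, 10, 60000)  # stand-in for dataset.train_labels
users = shard_partition(labels)
print(len(users), len(users[0]))  # → 100 600
```

Every shard is assigned exactly once, so the 100 users partition all 60,000 indices without overlap.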
Hi, shaoxiong~ I've read your code, it's nice, but I still cannot figure out this line in your README.md: "The scripts will be slow without the implementation of parallel computing." Does that mean we readers have to implement the parallel computing ourselves? Thank you~
@Sprinter1999 yes
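For anyone attempting this: one way to parallelize the sequential per-client loop is to dispatch local updates to a worker pool and then average the results. A minimal sketch with `concurrent.futures` (the `local_update` below is a dummy placeholder, not the repo's actual local-training class; real GPU training would need processes pinned to devices rather than threads):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_update(global_w, seed):
    """Stand-in for one client's local training: perturbs the global
    weights. Replace the body with real local SGD on the client's data."""
    rng = np.random.default_rng(seed)
    return global_w + 0.01 * rng.standard_normal(global_w.shape)

def fed_avg_round(global_w, num_clients=10, workers=4):
    """Run the selected clients' local updates in parallel, then average."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(local_update, global_w, s)
                   for s in range(num_clients)]
        locals_w = [f.result() for f in futures]
    return np.mean(locals_w, axis=0)  # FedAvg aggregation step

w = fed_avg_round(np.zeros(5))
print(w.shape)  # → (5,)
```

Threads only help here when the local step releases the GIL (as PyTorch ops do); for CPU-bound pure-Python training you would want processes instead.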
Dear author, first of all, thank you for your code. I have run it, but the result is not satisfying.

Result: Training accuracy: 43.00, Testing accuracy: 43.00

My cmd:

```shell
python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid
```

Looking forward to your reply. Best wishes~
Me too, low accuracy!
Increasing the number of local epochs may work; obviously, the running time will also increase.

My cmd:

```shell
python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid --local_ep 10
```

Result: Training accuracy: 50.45, Testing accuracy: 48.43
However, blindly increasing the number of local epochs may harm accuracy while costing more running time. When I changed local_ep from 10 to 15 or 20, the accuracy was even lower.
Your experimental results make sense. In the non-IID scenario, too much local training harms the generalization of FedAvg's global model.