Open qwertyuiop-s opened 3 years ago
- You can create any network with existing layers (PaddleFL/python/paddle_fl/mpc/layers/). If you want to develop new layers/operators, pls refer to PaddlePaddle tutorial: https://www.paddlepaddle.org.cn/tutorials/projectdetail/456143.
I cannot calculate accuracy now? Is there no accuracy op in MPC yet?
"accuracy op" is not supported yet. You can decrypt predicted results and evaluate accuracy in plaintext.
- Yes. In this case, the client will send parameters 10 times each epoch.
Okay, I have a question again: when I increase inner_step, the training time does not change much, even though there are fewer interactions between the server and the nodes. Why is that?
It mainly depends on the model you use.
Firstly, different models have different ratios of training computation to network transmission.
Secondly, the interaction between server and worker is actually parameter synchronization, so different models transmit different amounts of data in each iteration.
In my experience, parameter synchronization does not take much time, so increasing inner_step will not affect the training time much.
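The reasoning above can be sketched as a back-of-the-envelope cost model: total epoch time is local compute every step plus one synchronization every `inner_step` steps, so when sync cost is small relative to compute, `inner_step` barely matters. All numbers here are illustrative, not measured PaddleFL figures:

```python
def epoch_time(steps_per_epoch, compute_per_step, sync_cost, inner_step):
    """Estimate one epoch's wall time (seconds): local compute on every
    step, plus one parameter synchronization every inner_step steps."""
    syncs = steps_per_epoch // inner_step
    return steps_per_epoch * compute_per_step + syncs * sync_cost

# Illustrative: 100 steps/epoch, 50 ms compute per step, 20 ms per sync
frequent_sync = epoch_time(100, 0.050, 0.020, inner_step=10)   # 10 syncs
rare_sync = epoch_time(100, 0.050, 0.020, inner_step=100)      # 1 sync
```

With these (made-up) numbers, going from 10 syncs to 1 sync per epoch saves only 0.18 s out of roughly 5.2 s, which matches the observation that training time barely changes.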
Okay, but I have another strange thing. Imagine I have a dataset of size 6000 and batch = 100, so I have 60 steps per epoch.
Sometimes, when I set inner_step so that parameters are synchronized every epoch (inner_step=60), I get worse accuracy and loss metrics on the training data than when I set inner_step=120 (which, as I understand it, means synchronizing every 2 epochs). And the training process is not really stable.
What could it be? Overfitting? (I can send a log screenshot later for details.)
Inner Step in paddle_fl module. Default: inner_step = 10.
Does this variable mean that nodes will send parameters to the server every 10 steps? For example, with 10,000 lines in the dataset, batch = 100, and inner_step = 10: 10,000 / 100 = 100 steps per epoch, and 100 / 10 = 10 times the nodes will send parameters during 1 epoch. Is that right?
Also, how can I create a NN in MPC with the operators available now? I only see Linear Regression, Logistic Regression, and CNN. Are there any examples of a simple NN, for example with 2 hidden layers?
Thanks!
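The arithmetic in the question above can be checked directly, assuming inner_step counts local steps between synchronizations (as the answers in this thread describe):

```python
# Illustrative numbers from the question above
dataset_size = 10_000
batch_size = 100
inner_step = 10

# 10,000 / 100 = 100 local training steps per epoch
steps_per_epoch = dataset_size // batch_size

# 100 / 10 = 10 parameter synchronizations per epoch
syncs_per_epoch = steps_per_epoch // inner_step
```

This confirms the reading: with these settings each node sends its parameters to the server 10 times per epoch.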