knowledgedefinednetworking / a-deep-rl-approach-for-sdn-routing-optimization

A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization

Cannot get the paper result #3

Open softmicro929 opened 5 years ago

softmicro929 commented 5 years ago

Has anyone managed to reproduce the Fig. 2 result from the paper? My model doesn't converge.

etleader commented 5 years ago

> Has anyone managed to reproduce the Fig. 2 result from the paper? My model doesn't converge.

I have the same question. What is your specific situation? When I train the model the reward barely changes, and when I test on the TMs it looks as if the training never learned anything.

softmicro929 commented 5 years ago

> I have the same question. What is your specific situation? When I train the model the reward barely changes, and when I test on the TMs it looks as if the training never learned anything.

Yes, it learns nothing. Also, you should fix the TM (keep it constant) when testing after the training stage.

etleader commented 5 years ago

> Yes, it learns nothing. Also, you should fix the TM (keep it constant) when testing after the training stage.

Sorry to bother you, but I don't really understand what you mean. How do I fix the TMs?

softmicro929 commented 5 years ago

> Sorry to bother you, but I don't really understand what you mean. How do I fix the TMs?

The author's code doesn't do any testing, so you have to write the test code yourself to reproduce the figure from the paper.
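For anyone stuck at the same point, here is a minimal sketch of what such a test loop might look like. It assumes a trained agent exposing an `act(state)` method and an environment exposing a `step(action, tm)` method; these names and signatures are placeholders, not the repo's actual API, so adapt them to whatever the training script actually uses.

```python
import numpy as np

def evaluate_fixed_tm(agent, env, tm, n_runs=50):
    """Evaluate a trained agent on one fixed traffic matrix (TM).

    The TM is chosen once and reused for every run, so the only thing
    that varies is the agent's routing decision (no fresh random TM
    per step, unlike during training).
    """
    rewards = []
    state = tm.flatten()                     # the fixed TM is the observation
    for _ in range(n_runs):
        action = agent.act(state)            # placeholder: trained policy, no exploration
        _, reward, _ = env.step(action, tm)  # placeholder signature, adapt to Environment.py
        rewards.append(reward)
    return float(np.mean(rewards))

# usage sketch (all names are placeholders):
# score = evaluate_fixed_tm(trained_agent, Environment(), test_tms[0])
```

As far as I understand, the figures in the paper compare the learned policy against baseline routing on the same TMs, so evaluating on a fixed TM is what makes that comparison meaningful.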

Lui-Chiho commented 5 years ago

> Has anyone managed to reproduce the Fig. 2 result from the paper? My model doesn't converge.

Sorry to bother you! Have you gotten the Fig. 1 result? I still can't understand how to use the TMs mentioned in the paper to train this DRL agent. Can you explain the whole training process? In the given code I did not find any correlation between the previous state and the new state; it seems they are all randomly generated using np.random.

wqhcug commented 5 years ago

> Sorry to bother you! Have you gotten the Fig. 1 result? I still can't understand how to use the TMs mentioned in the paper to train this DRL agent. Can you explain the whole training process? In the given code I did not find any correlation between the previous state and the new state; it seems they are all randomly generated using np.random.

Excuse me, I have a similar question. I can't understand why the state (TM) and the new_state (TM) are randomly generated in the step function in Environment.py. It doesn't seem to follow the logic of DRL.
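To make the point concrete, the pattern being described is roughly the following, a paraphrase of what the comments report, not the repo's literal code (topology size and reward formula are placeholders):

```python
import numpy as np

N_NODES = 14  # placeholder topology size

def compute_reward(action, tm):
    """Placeholder for whatever Environment.py actually computes from the
    action (link weights) and the traffic matrix."""
    return float(-np.mean(tm) * np.mean(action))

def step(action):
    """Paraphrase of the reported behaviour: the next traffic matrix is
    drawn at random, independently of the current state and the action."""
    new_state = np.random.uniform(0.0, 1.0, size=(N_NODES, N_NODES))
    reward = compute_reward(action, new_state)
    return new_state, reward
```

If consecutive states really are drawn independently like this, each decision is effectively a one-shot (contextual-bandit style) problem rather than a sequential MDP, which matches what several people in this thread are observing.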

FaisalNaeem1990 commented 5 years ago

Has anyone gotten the same result as in the paper? The model is not converging.

CZMG commented 5 years ago

I have the same question. I don't understand why the old state and the new state are randomly generated in Environment.py.

FaisalNaeem1990 commented 5 years ago

Did you run the whole simulation or not?

wqhcug commented 5 years ago

> I have the same question. I don't understand why the old state and the new state are randomly generated in Environment.py. Did you run the whole simulation or not?

Excuse me, I did run the whole simulation. In my understanding, the STATE in reinforcement learning is usually changed by the ACTION, but in the code for this paper (Environment.py) the NEW STATE and the OLD STATE are randomly generated, which does not seem to follow the logic of reinforcement learning. Can anyone resolve my confusion? Thank you very much.
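For contrast, and purely as an illustrative assumption (none of these names, sizes, or formulas come from the repo), a step function following the usual RL contract would derive the next observation from the current state and the chosen action, e.g.:

```python
import numpy as np

N_NODES = 14   # placeholder topology size
N_LINKS = 42   # placeholder number of directed links

def route(tm, link_weights):
    """Placeholder routing model: spread the total demand over links in
    inverse proportion to their weights. A real implementation would run
    shortest-path (or the repo's own) routing over the topology."""
    inv = 1.0 / np.maximum(link_weights, 1e-6)
    return (inv / inv.sum()) * tm.sum()

def step(state_tm, action_link_weights):
    """Sketch of an action-dependent transition: the action (link weights)
    determines the routing, the routing determines link utilisation, and
    the next observation is built from that outcome rather than sampled
    at random."""
    utilisation = route(state_tm, action_link_weights)
    reward = float(-np.max(utilisation))  # e.g. penalise the most loaded link
    next_state = np.concatenate([state_tm.flatten(), utilisation])
    return next_state, reward
```

Even if the traffic demand itself is exogenous, the part of the observation that the action influences (utilisation, residual capacity, delay) is what would give the agent something sequential to learn from.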

ljh14 commented 4 years ago

> Excuse me, I did run the whole simulation. In my understanding, the STATE in reinforcement learning is usually changed by the ACTION, but in the code for this paper (Environment.py) the NEW STATE and the OLD STATE are randomly generated, which does not seem to follow the logic of reinforcement learning. Can anyone resolve my confusion? Thank you very much.

I've also run into this question. I think the author needs to give some explanation; it disobeys the basic logic of reinforcement learning. @gissimo

slblbwl commented 1 year ago

Hello, could you please tell me how to run the whole simulation? Roughly what are the steps? Thank you very much!