baturaysaglam / RIS-MISO-Deep-Reinforcement-Learning

Joint Transmit Beamforming and Phase Shifts Design with Deep Reinforcement Learning
MIT License

.npy file for reproducing Figure 4? #17

Open Zainmustafajajja opened 9 months ago

Zainmustafajajja commented 9 months ago

Dear @baturaysaglam and @Amjad-iqbal3, I have been working with this code for a while and I am stuck on the same issue (described in issue #14 by @Amjad-iqbal) regarding Figure 4. How did you generate the data (result.npy file) used to produce this figure? There is no option in the code to generate it. Did you assemble it manually to match the result in the paper? Please let me know if anyone has an idea.

baturaysaglam commented 5 months ago

I'm not sure I understand the question. When you run main.py, the achieved rewards are already saved as a .npy file. If you are looking for the NumPy arrays behind the figures in the repo, I'm sorry, I don't have them anymore; it's been a while.
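For reference, a minimal sketch of loading and plotting such a saved rewards file (the results directory and file name below are placeholders, not the repo's exact output paths):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder path: main.py saves the achieved rewards as a .npy array,
# so point this at the file produced by your own run.
rewards = np.load("results/rewards.npy", allow_pickle=True)

# If the rewards were stored episode-by-episode, flatten to one sequence.
rewards = np.concatenate([np.ravel(ep).astype(float) for ep in rewards])

plt.plot(rewards)
plt.xlabel("Time step")
plt.ylabel("Instantaneous reward (sum rate)")
plt.show()
```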

apple234566 commented 6 days ago

Hello,

There are some parts I don't quite understand and I would like to ask for clarification. In the main function, the only optional parameters are custom, power, rsi_elements, learning_rate, and decay. Why does a folder named sum_rate_power appear at the end, and how are the files in it generated? Are the sum rates shown in Figure 4 of the paper the maximum values of the reward?

apple234566 commented 6 days ago


@Zainmustafajajja Hi, I am running into the same issue as well. Did you find a solution? Thank you.
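Since the repo has no aggregation script for Figure 4, here is a hedged sketch of how a sum-rate-vs-power array might be assembled manually from per-power training runs. The file layout inside sum_rate_power, the power values, and the choice of statistic (maximum vs. averaged converged reward) are all assumptions for illustration, not the authors' confirmed procedure:

```python
import os
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical layout: one rewards .npy per transmit-power setting inside
# the sum_rate_power folder, e.g. sum_rate_power/power_5dBm.npy.
power_levels_dbm = [5, 10, 15, 20, 25, 30]  # example x-axis values
sum_rates = []

for p in power_levels_dbm:
    rewards = np.load(os.path.join("sum_rate_power", f"power_{p}dBm.npy"),
                      allow_pickle=True)
    rewards = np.concatenate([np.ravel(ep).astype(float) for ep in rewards])
    # One plausible reading of Figure 4: take the best (maximum) achieved
    # reward per power level; averaging the final episodes is an alternative.
    sum_rates.append(rewards.max())

# Save the aggregated array for a Figure-4-style plot.
np.save("result.npy", np.array(sum_rates))

plt.plot(power_levels_dbm, sum_rates, marker="o")
plt.xlabel("Transmit power (dBm)")
plt.ylabel("Sum rate (bps/Hz)")
plt.show()
```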