There are some limitations to this work. If you have any questions or suggestions, please feel free to contact me (haoranpeng@cuhk.edu.hk). Your suggestions are greatly appreciated.
Please consider citing our paper if this repository is helpful to you.
H. Peng and L.-C. Wang, “Energy Harvesting Reconfigurable Intelligent Surface for UAV Based on Robust Deep Reinforcement Learning,” IEEE Trans. Wireless Commun., vol. 22, no. 10, pp. 6826–6838, Oct. 2023, doi: 10.1109/TWC.2023.3245820.
Bibtex:
@ARTICLE{10051712,
author={Peng, Haoran and Wang, Li-Chun},
journal={IEEE Trans. Wireless Commun.},
title={Energy Harvesting Reconfigurable Intelligent Surface for {UAV} Based on Robust Deep Reinforcement Learning},
year={2023},
month={Oct.},
volume={22},
number={10},
pages={6826--6838},
}
@INPROCEEDINGS{peng1570767WCNC,
author={Peng, Haoran and Wang, Li-Chun and Li, Geoffrey Ye and Tsai, Ang-Hsun},
booktitle={Proc. IEEE Wireless Commun. Netw. Conf. (WCNC)},
title={Long-Lasting {UAV}-aided {RIS} Communications based on {SWIPT}},
address={Austin, TX},
year={2022},
month={Apr.}
}
For TD3 and DDPG, please execute TD3.py or DDPG.py to train the model, e.g.:
python TD3.py
python DDPG.py
Please change the training mode in the file "gym_foo/envs/foo_env.py" before executing the training process. For example:
class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self, LoadData = True, Train = False, multiUT = True, Trajectory_mode = 'Fermat', MaxStep = 41):
To conduct the training phase, set "Train" to "True"; set "Train" to "False" when executing the testing phase.
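As a minimal illustrative sketch (the class name FooEnvSketch and the mode attribute are assumptions for illustration, not the repository's actual code), the Train flag simply switches the environment between its training and testing behaviour:

```python
# Sketch (not the repo's actual implementation) of how the Train flag
# in FooEnv.__init__ selects the training or testing phase.
class FooEnvSketch:
    def __init__(self, LoadData=True, Train=False, multiUT=True,
                 Trajectory_mode='Fermat', MaxStep=41):
        # Train=True  -> training phase; Train=False -> testing phase
        self.mode = 'train' if Train else 'test'

env_train = FooEnvSketch(Train=True)   # before running TD3.py / DDPG.py / main.py
env_test = FooEnvSketch(Train=False)   # before running test.py
print(env_train.mode, env_test.mode)   # → train test
```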
For the exhaustive search, please execute ExhaustiveSearch.py to reproduce the simulation results.
For the SD3, please execute main.py to train a new model.
Please use Gym version 0.15.3; otherwise, issues may arise during the training phase.
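To pin this dependency, Gym can be installed at that exact version with a standard pip command (not taken from the repository):

```shell
pip install gym==0.15.3
```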
Please execute test.py to evaluate the DRL models. Before producing the testing results, please change the dataset and scenario in 'gym_foo/envs/foo_env.py'.
EH efficiency = harvested energy / energy received from RF signals
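As a small illustrative sketch (the function name and example values are assumptions, not from the repository), this ratio can be computed as:

```python
def eh_efficiency(harvested_energy, received_rf_energy):
    """Energy-harvesting (EH) efficiency: the harvested energy divided by
    the energy received from RF signals (both in the same units)."""
    if received_rf_energy <= 0:
        raise ValueError("received RF energy must be positive")
    return harvested_energy / received_rf_energy

print(eh_efficiency(0.3, 1.0))  # → 0.3
```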