Vaeoer opened 1 month ago
I concur with your assessment, and my implementation is grounded in the research conducted by Kang et al.:

@article{Kang2022,
  author={Kang, Xu and Song, Bin and Guo, Jie and Qin, Zhijin and Yu, Fei Richard},
  journal={IEEE Transactions on Communications},
  title={Task-Oriented Image Transmission for Scene Classification in Unmanned Aerial Systems},
  year={2022},
  volume={70},
  number={8},
  pages={5181-5192},
  doi={10.1109/TCOMM.2022.3182325}}
Thanks for sharing your expertise.
This problem is not an MDP; it does not constitute a Markov chain. train_policy.py essentially does not use a DRL algorithm but rather a combination of unsupervised and supervised learning. Is my understanding correct? A sketch of what I mean follows below.
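To make the question concrete, here is a minimal sketch of the kind of joint objective I have in mind: an autoencoder-style reconstruction term (unsupervised) plus a scene-classification term (supervised), optimized directly by gradient descent with no reward signal or state transitions. The layer sizes, loss weighting, and dummy data are placeholders of my own, not the repository's actual train_policy.py.

```python
# Hypothetical sketch, NOT the repository's train_policy.py:
# joint unsupervised (reconstruction) + supervised (classification) training,
# with no reward, no environment step, i.e. no MDP / DRL component.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())   # shared feature/compression stage
decoder = nn.Sequential(nn.Linear(64, 784))               # reconstruction head (unsupervised)
classifier = nn.Sequential(nn.Linear(64, 10))              # scene-class head (supervised)

params = list(encoder.parameters()) + list(decoder.parameters()) + list(classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, 784)            # dummy image batch (placeholder data)
y = torch.randint(0, 10, (32,))     # dummy scene labels (placeholder data)

z = encoder(x)
loss = nn.functional.mse_loss(decoder(z), x) \
     + nn.functional.cross_entropy(classifier(z), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

If train_policy.py is essentially doing something like this, then there is no policy being optimized against a reward, which is why I question the MDP/DRL framing.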