ShelyH opened this issue 2 years ago
Yes, I think so. Please let me know what results you get!
Sorry, so far I haven't been able to learn a successful navigation strategy. I wonder whether the problem lies in the way the policy is updated. Could you give me some suggestions? Thanks!
Without knowing your implementation details, I'm afraid I can't give very specific advice. The issue might be a bug in your code, unsuitable hyperparameters, or something else.
Hi, sorry to bother you again! This experiment uses the PPO algorithm to update the policy, but PPO is an on-policy algorithm. Can I replace it with an off-policy algorithm, such as SAC, in my experiment?
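For context, the key structural difference between the two families is data reuse: on-policy methods like PPO train only on rollouts collected by the current policy and then discard them, while off-policy methods like SAC keep old transitions in a replay buffer and sample from them repeatedly. A minimal sketch of that replay-buffer side (the class and names here are illustrative, not taken from any specific library or from the code discussed in this issue):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so an off-policy learner (e.g. SAC)
    can reuse them. An on-policy learner (e.g. PPO) would instead
    train on freshly collected rollouts and discard them afterwards."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Off-policy: sampled transitions may come from many
        # earlier versions of the policy, not just the current one.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.add(state=t, action=0, reward=1.0, next_state=t + 1, done=False)

batch = buf.sample(32)
print(len(buf), len(batch))  # 100 32
```

Note that swapping SAC in is more than a drop-in change: SAC assumes a continuous action space in its standard form and comes with its own hyperparameters (entropy coefficient, target networks), so the rest of the training loop would need to be adapted as well.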