Closed hyunsooda closed 5 years ago
Hi @hyunsooda, I'm sorry for the late reply. Yes, I will merge your PR. Thanks for your contribution.
Gilbert.
Hello @kijongGil :)
First of all, thanks for your reply!
I have revised your environment source code because I think there were some incorrect points. You can see a comparison between your environment and my own below.
Note: the episode count is 190 in both videos.
This is yours https://www.youtube.com/watch?v=Y6UMgEVfs84
And this is mine https://www.youtube.com/watch?v=tkaiIBJDEP4&t=12s
I will explain my work in detail later, since I am focusing on other things these days. Please check the videos and give me your comments. See you later.
@hyunsooda Your work is nice! The meaning of this package is to provide a machine learning environment. Actually, the environment and reward function in this package are not optimized. So, I hoped the developers like you would learn reinforcement learning by applying their own methods.
Thank you for your interest.
Hello @kijongGil :)
Actually, your base code was so helpful that I could easily add to and extend it. I have a plan to replace your DQN algorithm with A3C if I can. I will try the change and show it if it is successful. See you later.
Problem: the Burger cannot read a proper distance value between itself and an obstacle.
See the picture (the Burger is currently stuck).
Description: In the original source code, the variable min_range was 0.13, but it sometimes read a wrong distance value. I think the wrong readings come from a hardware issue with the LRF sensor. So, I tried various values to find one that reliably detects obstacles around the Burger. In my opinion, 0.15 is a reasonable value for simulation.
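To illustrate what raising the threshold changes, here is a minimal sketch of a collision check driven by min_range. This is an illustrative simplification, not the package's exact code: the function name is_collision and the invalid-reading filter are my assumptions; the idea is that spurious LRF readings (0.0, inf, NaN) are skipped, and any remaining reading below min_range counts as a collision.

```python
import math

# Proposed threshold (the original source code used 0.13).
MIN_RANGE = 0.15

def is_collision(scan_ranges, min_range=MIN_RANGE):
    """Return True if any valid laser reading is closer than min_range.

    Readings of 0.0, inf, or NaN are treated as invalid and skipped,
    which guards against the spurious values the LRF sometimes reports.
    """
    for r in scan_ranges:
        if r == 0.0 or math.isinf(r) or math.isnan(r):
            continue  # discard invalid sensor readings
        if r < min_range:
            return True
    return False

# Example: a noisy reading (inf) plus an obstacle at 0.14 m.
# 0.14 < 0.15 trips the new threshold, while 0.13 would have missed it.
print(is_collision([float('inf'), 0.5, 0.14]))
```

With min_range=0.13 the same 0.14 m reading would not register as a collision, which matches the stuck behavior described above.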
May I submit a PR if you agree with this?
Thanks.