Closed 111Delai closed 2 years ago
Hi, when running train_process_s1, the arguments of this process are saved in the current folder by the following lines:
f = open(model_abs_path + model_name, 'wb')
pickle.dump(args, f)
f.close()
The argument path in policy_test.py is then defined as follows:
args_path = model_base_path + '/' + policy_args.arg_name
So you should check whether these two paths are the same: "model_abs_path + model_name" in train_process_s1.py and "model_base_path + '/' + policy_args.arg_name" in policy_test.py. In addition, check in the model save folder whether the argument file was saved successfully; the argument file is the one without any suffix.
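As a rough, self-contained sketch of how the two sides should line up (the folder and file names here are only illustrative; the real ones come from the training arguments):

```python
import os
import pickle
from types import SimpleNamespace

# Stand-in for the argparse Namespace produced during training.
args = SimpleNamespace(robot_number=4, epoch=150)

# Training side: dump the arguments next to the model, with no file suffix.
model_abs_path = os.path.abspath('./policy_train/model_save/r4_17')
os.makedirs(model_abs_path, exist_ok=True)
model_name = '/r4_17'
with open(model_abs_path + model_name, 'wb') as f:
    pickle.dump(args, f)

# Test side: the path rebuilt here must match the one above exactly.
model_base_path = os.path.abspath('./policy_train/model_save/r4_17')
arg_name = 'r4_17'                     # corresponds to policy_args.arg_name
args_path = model_base_path + '/' + arg_name
with open(args_path, 'rb') as f:
    loaded_args = pickle.load(f)
print(loaded_args)
```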
Thank you for your reply. I checked the test.py file and found no error in the path. The r4_17 file is a binary file without a suffix, so I don't understand the error message and can't find the corresponding file. Shouldn't I be able to run python policy_test.py without problems?
Hi, I think the problem is in args_path. The args_path should be an absolute path, but your path starts with './', which refers to the current working directory. In VS Code, './' means the path of the current project or current file, which is different. The reason may be Pathlib. I recommend assigning an absolute path to args_path.
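For example, one way to make the path independent of where the script is launched (the relative layout below is only an example; adjust it to the real location of model_save):

```python
import os

# Resolve args_path relative to the script file (e.g. policy_test.py)
# rather than the shell's current working directory, so it behaves the
# same from VS Code and from the Anaconda prompt.
script_dir = os.path.dirname(os.path.abspath(__file__))
args_path = os.path.join(script_dir, 'policy_train', 'model_save', 'r4_17', 'r4_17')
```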
Hello, the path problem has been solved, although I don't understand it. I have no problem running directly in VS Code; path problems only occur when I run from the Anaconda prompt. My question is about testing: the training goes very well, and after training to 150 epochs the agents can accurately reach the target point, but the success rate is very low during the test. Also, I think the test shows a plot of the motion state: the agent runs from the starting point to the target point, and that is one test process. Now the output is per episode, and I don't understand how to judge the success rate of each episode.
The success rate is defined by testing the trained policy over 100 episodes and taking the proportion of successful cases. During training, there is another process that tests the current policy every 50 epochs by default. I recommend training over 200 epochs for the first stage with four robots and over 1500 for the second stage with 10 robots. The success rate should commonly reach 100%.
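In other words, the reported number is just the fraction of successful episodes; a minimal illustrative sketch (run_one_episode here is a hypothetical stand-in, not the repository's evaluation code):

```python
import random

def run_one_episode() -> bool:
    # Hypothetical stand-in for rolling out the trained policy once;
    # the real test returns True when all robots reach their goals.
    return random.random() < 0.95

num_episodes = 100
successes = sum(run_one_episode() for _ in range(num_episodes))
print(f'success rate over {num_episodes} episodes: {successes / num_episodes:.0%}')
```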
Thank you for your reply. Indeed, as you said, there is a test of the current policy every 50 epochs during training, and the performance is very good after 150 epochs. I would like to ask whether the test can be visualized as during training, and whether the motion trajectory can be drawn to generate the result figures in your paper.
Hi, please use the command described in the README to test the trained policy:
python policy_test.py --robot_number 10 --dis_mode 3 --model_name YOUR_MODEL_NAME --render
To plot the trajectory:
python policy_test.py --robot_number 10 --dis_mode 3 --model_name YOUR_MODEL_NAME --render --show_traj
Thank you, I'll try it. The visualization appears successfully. I have some questions about the two stages you mentioned yesterday: first 4 agents for 250 epochs, then 10 agents for 1500 epochs. The models saved this way are in two folders. Can the model of 10 agents at 1500 epochs be used to test whether the success rate with 4 agents can reach 100%?
Yes, it can do that.
Hi, what commands are needed to draw the agent's motion trajectory when visualizing the test?
Try adding --show_traj to the end of the test command.
I tried adding this flag and the test shows the visualization. I'd like to ask how to draw the trajectory of an agent. Figure 1 shows the running interface without a trajectory; Figure 2 is the trajectory from your paper.
I cannot understand. What is Figure 1?
Hello, I have a question about drawing the trajectory diagram. My idea is to keep the motion path at every step during the test and save it to a list, then show the trajectory according to the list. If you add --show_traj to the test command, is it just testing and saving the trajectory without using the list data, so it can't draw the trajectory?
Hi, you can access the state of robot i from the following structure:
env.ir_gym.robot_list[i].state
Save the robot state at each timestep to a list by using numpy.save(). After that, you can use numpy.load() to read this list back.
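A minimal sketch of that idea (only env.ir_gym.robot_list[i].state and the numpy.save()/numpy.load() calls come from the reply above; env, max_steps, the policy step, and the assumption that the first two state entries are x and y are placeholders to adapt):

```python
import numpy as np
import matplotlib.pyplot as plt

# 'env' is assumed to be the environment built in policy_test.py;
# the state access follows env.ir_gym.robot_list[i].state.
trajectory = []                                   # one entry per timestep
for step in range(max_steps):
    # ... step the environment with the trained policy here ...
    states = [np.squeeze(robot.state) for robot in env.ir_gym.robot_list]
    trajectory.append(np.array(states))

np.save('trajectory.npy', np.array(trajectory))   # shape: (timesteps, robots, state_dim)

# Later, reload and draw each robot's path, assuming state[0], state[1] are x, y.
traj = np.load('trajectory.npy')
for i in range(traj.shape[1]):
    plt.plot(traj[:, i, 0], traj[:, i, 1])
plt.show()
```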
I'm very sorry, I understand what you mean, but I can't implement it in code. In policy_train.py I saw this line of code, and I don't know if it is the agent-trajectory recording you mentioned. I hope you can provide the code for the comparison experiments in the paper. Thank you very much.
Hi, I have no time or plan now to arrange these codes, which are unnecessary for this approach. The current code can easily be modified to accomplish your task. Please read the code carefully. Thanks.
Thank you
Hello, I'm running train_process_s1.py; the model is saved in model_save, and then I run policy_test.py. After I run the file, an error occurs saying the file could not be found. I want to ask the author what content this binary file holds (parser.add_argument('--arg_name', default='r4_17/r4_17')).
File "policy_test.py", line 33, in <module>
    r = open(args_path, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: './policy_train/model_save/r4_17/r4_17'
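For reference, a quick way to inspect what such a file holds once the path actually resolves (a sketch, assuming the file is the pickled argument Namespace described earlier in this thread):

```python
import pickle

# The r4_17 file (no suffix) should be the argparse Namespace pickled during
# training; this just loads and prints it.
args_path = './policy_train/model_save/r4_17/r4_17'   # use an absolute path if './' fails
with open(args_path, 'rb') as f:
    train_args = pickle.load(f)
print(vars(train_args))                                # all saved training arguments
```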