Closed jianffu closed 2 years ago
I am sorry that you are experiencing the issue.
I believe the problem is in the number of actions for the environment.
Can you change the following line so that the second argument is True, i.e. replace the condition with True? https://github.com/jkulhanek/robot-visual-navigation/blob/6c1107ef8751b54e2b6b2241fb668eddacc4532e/python/agent.py#L55
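To illustrate why that flag matters, here is a minimal, purely hypothetical sketch (the names `build_action_space`, `NUM_ACTIONS_SMALL`, and `NUM_ACTIONS_FULL` are illustrative, not from the repository): if the condition selects an action set smaller than the one the environment actually uses, the agent can emit or look up an action index that does not exist, which is the kind of mismatch forcing the argument to True avoids.

```python
# Hypothetical illustration of an action-space size mismatch.
# None of these names come from robot-visual-navigation itself.
NUM_ACTIONS_SMALL = 4   # e.g. move/turn actions only
NUM_ACTIONS_FULL = 6    # e.g. including extra actions the env expects

def build_action_space(use_full_action_space):
    # Mimics a condition like the one on agent.py line 55:
    # forcing it to True selects the action-space size that
    # matches what the environment actually sends/receives.
    n = NUM_ACTIONS_FULL if use_full_action_space else NUM_ACTIONS_SMALL
    return list(range(n))

actions = build_action_space(True)
assert len(actions) == NUM_ACTIONS_FULL
```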
Let me know if it helped.
Thanks for your timely reply! After changing the corresponding code line as you instructed, the error is solved. The output is now correct and the terminal shows no errors. The terminal outputs:
[INFO][XXXXXX]: received action 1
action: 1
I think we are not familiar enough with the code. Thank you for your sincere help!
Thank you!
Fixed in commit 58403e7c6277c9828272a42a98116c397cce7cc8
We are trying to run the real-world part of the open-source code. However, an error occurred when running the code described in the [Using the trained model for the final navigation] section of the README file (we followed the instructions in your README.md in ~/Documents/AGV_visualnav_dev/robot-visual-navigation/ros/).
The launch file to run:
As written in the launch file, in order to run goal visual navigation in the real world, we run three parts of the code in the ros folder, as follows:
The error information in the terminal output:
So the error is
Error analysis: The target image and observation image are saved in the compute_step function while running ~/Documents/AGV_visualnav_dev/robot-visual-navigation/ros/src/ros_agent_service/src/main.py, so we guess the images are compatible with the code and the error is not related to the images.
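One way to rule the images out more rigorously is to check the saved observation/target images against the shape and dtype the model expects before suspecting the network code. The sketch below is a hedged example: the expected format (84x84 RGB, uint8) is an assumption for illustration, not taken from the repository, and `check_image` is a hypothetical helper.

```python
# Hedged sketch: validate a saved image against an ASSUMED expected
# format (84x84, 3-channel, uint8) before blaming the model code.
import numpy as np

def check_image(img, expected_hw=(84, 84)):
    """Return a list of problems found; an empty list means the image looks OK."""
    problems = []
    if img.dtype != np.uint8:
        problems.append(f"dtype is {img.dtype}, expected uint8")
    if img.ndim != 3 or img.shape[-1] != 3:
        problems.append(f"shape is {img.shape}, expected HxWx3")
    elif img.shape[:2] != expected_hw:
        problems.append(f"resolution is {img.shape[:2]}, expected {expected_hw}")
    return problems

good = np.zeros((84, 84, 3), dtype=np.uint8)
bad = np.zeros((60, 80), dtype=np.float32)
assert check_image(good) == []
assert len(check_image(bad)) == 2  # wrong dtype and wrong shape
```

If the saved images pass such a check, the mismatch is more likely in how they are batched or normalized before the model call than in the images themselves.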
Potentially relevant code is located in model.py (~/Documents/AGV_visualnav_dev/robot-goal-visual-navigation/python/model.py)
Running environment:
Hardware description: an Nvidia Jetson AGX Xavier and an AGV are used to run the code in the ros folder
Software description: Ubuntu 18.04; miniforge (conda) Python 3.6 environment (/home/agv01/miniforge-pypy3/envs/visualnav/bin/python3.6); PyTorch 1.10
The illustration of the model in the paper:
The question is whether the error is caused by the data format of our captured images, or by the RNN/LSTM model code in PyTorch provided in the ros folder? And, in order to run the real-world part, is the code we run correct?
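A way to separate these two suspects is to call the model with a synthetic input of the exact expected shape: if the synthetic input also fails, the bug is in the model code; if it succeeds, the captured-image format is the likely culprit. Below is a hedged sketch of that strategy. `fake_model` is a stand-in for the repository's actual model, and the input shape (batch, channels, 84, 84) is an assumption for illustration.

```python
# Hedged sketch of the isolation strategy: feed a synthetic tensor of
# the ASSUMED expected shape. fake_model is illustrative only; replace
# it with the real model's forward call when reproducing the error.
import numpy as np

def fake_model(obs, target):
    # Placeholder for the model's forward pass; raises on wrong input
    # shapes, mimicking a shape-mismatch error inside the RNN/LSTM code.
    if obs.shape != (1, 3, 84, 84) or target.shape != (1, 3, 84, 84):
        raise ValueError(f"unexpected input shapes: {obs.shape}, {target.shape}")
    return np.zeros(6)  # dummy action logits

# Synthetic inputs with the assumed expected shape (batch, channels, H, W)
obs = np.zeros((1, 3, 84, 84), dtype=np.float32)
target = np.zeros((1, 3, 84, 84), dtype=np.float32)
logits = fake_model(obs, target)
print(logits.shape)  # (6,)
```

If the real model succeeds on such a synthetic input, compare the shapes and dtypes of the captured images against that synthetic tensor to find the mismatch.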
Thanks very much.