reiniscimurs / GDAE

A goal-driven autonomous exploration system using deep reinforcement learning (ICRA 2022) that combines reactive and planned robot navigation in unknown environments

Procedural issues #4

Closed hjj-666 closed 1 year ago

hjj-666 commented 2 years ago

Excuse me, my program stops after running self.client.wait_for_server(). Do I need to write another server Python file to create a server?

reiniscimurs commented 2 years ago

> Excuse me, my program stops after running self.client.wait_for_server(). Do I need to write another server Python file to create a server?

The command is waiting for a path planner server to start, through a SimpleActionClient. You don't need to write another server file, but you should make sure that the ROS navigation and move_base packages are installed and running.
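
For reference, here is a minimal sketch of the client side of that call, assuming the standard move_base action server name (adjust the namespace to match your launch files). Using a timeout makes a missing planner visible instead of blocking forever:

```python
#!/usr/bin/env python
# Minimal sketch: wait for the move_base action server with a timeout.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction

rospy.init_node("wait_for_move_base_demo")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
if client.wait_for_server(rospy.Duration(10.0)):
    rospy.loginfo("move_base action server is up")
else:
    rospy.logerr("move_base action server not found - is move_base running?")
```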

hjj-666 commented 2 years ago

Thank you for your reply. I will check later whether these ROS packages are running.

hjj-666 commented 2 years ago

Hello, my current setup uses the TD3 algorithm you uploaded, and the rest uses the intermediate-node method from your project. When I test in a real scene, I find that the robot reaches the target point quickly when it is set close by. However, when the target point is set far away, at (10, 0), the car goes forward for a while, then turns around, and goes back and forth for a while before reaching the target point. Sometimes it even goes in the opposite direction. What could be the reason for this? The parameters were obtained from this training.

reiniscimurs commented 2 years ago

Can you present a video of the action or any additional data? It is difficult to guess the issue just from a description.

Generally, the network will not work outside of its training range. Please take a look at the description of the problem here.

You could try limiting the Dist measure for the distance to the target as follows: Dist = min(Dist, 7). This would cap the distance at 7 meters even if the actual distance is greater. See if that helps.
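
For illustration, a minimal sketch of how that cap could be combined with the normalization discussed later in this thread (the variable names follow the thread and are assumptions about the surrounding code):

```python
import numpy as np

def goal_input(robot_x, robot_y, goal_x, goal_y, beta2, cap=7.0):
    """Capped, normalized polar goal input; names taken from this thread."""
    dist = np.linalg.norm([goal_x - robot_x, goal_y - robot_y])
    dist = min(dist, cap)  # clip to the training range even if the goal is further
    return [dist / 10.0, (beta2 + np.pi) / (np.pi * 2)]
```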

hjj-666 commented 2 years ago

Thank you for your reply. I'll upload my video shortly. I had commented out the line dist = min(dist, 7) when I ran it; I'll test with it later.

hjj-666 commented 2 years ago

Thank you very much. That line, dist = min(dist, 7), is very helpful to me.

hjj-666 commented 2 years ago

Does the simulation code corresponding to your project also normalize the distance and the angle to the target point, i.e., toGoal = [Dist / 10, (beta2 + np.pi) / (np.pi * 2)]? Have you tested whether leaving out the normalization, e.g., toGoal = [Dist, beta2], has a big impact?

reiniscimurs commented 2 years ago

You mean during training? Yes, the network inputs are normalized there as well. I assume it could work without normalization, but then you would probably need to use the original laser data as well and figure out how to deal with INF laser readings in the inputs.
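
As a sketch of the kind of preprocessing this implies (an assumption about one reasonable way to handle it, not the repo's exact code), INF readings can be mapped to the sensor's maximum range before scaling, so the network never sees infinities:

```python
import numpy as np

def normalize_laser(ranges, max_range=10.0):
    # INF usually means "no return within range", so treat it as max range,
    # then scale everything into [0, 1] for the network input.
    r = np.asarray(ranges, dtype=np.float64)
    r[np.isinf(r)] = max_range
    return np.clip(r, 0.0, max_range) / max_range
```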

hjj-666 commented 2 years ago

thank you for your reply

hjj-666 commented 2 years ago

https://user-images.githubusercontent.com/83230521/145547254-c4523d36-ff5f-4dbe-b6b6-057ac9b3d8e9.mp4

hjj-666 commented 2 years ago

Hello, the target point I set is on the left side of the video, but the robot keeps moving left and right in front of the table. How can I solve this problem?

reiniscimurs commented 2 years ago

Naturally, I don't know what's going on there, as there is not enough information to go on. It seems like it should already be considered a crash. But I would point to these 2 things:

1) Check your laser readings. If an obstacle is too close, the laser's minimum range might kick in and you will get INF inputs. INF is generally treated as clear space by the robot.

2) It seems your robot is "frozen", as in it cannot decide what to do. Since the goal position is on the left, it turns left, then recognizes that there is an obstacle, then starts turning right to avoid it. Since the state input is only one time step, it forgets about the obstacle and starts turning back left, and so gets stuck in a cycle. In the code you will see a function called "freeze" that is meant to deal with this issue: https://github.com/reiniscimurs/GDAE/blob/1e44ecc2e1451e0f8b04ef0978c2453bf3b72755/Code/GDAM_env.py#L793 You might need to write your own recovery functions for your specific robot; a sketch of one possible detection heuristic follows below.

Again, it is very difficult to know what is going on without data to look at. Video is a good start, but it is not a good representation of what is happening "under the hood"; at the very least, Rviz data would also be necessary in such a case. As it is, I can only give an educated guess.
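
For illustration only, here is a minimal sketch of one way such left-right oscillation could be detected (a hypothetical heuristic, not the freeze function linked above; see the repository code for the actual recovery behavior):

```python
from collections import deque

class FreezeDetector:
    """Hypothetical oscillation detector: flags the robot as "frozen" when
    its angular velocity keeps flipping sign with little forward progress."""

    def __init__(self, window=20, min_progress=0.1):
        self.ang = deque(maxlen=window)    # recent angular velocities
        self.steps = deque(maxlen=window)  # recent forward displacements
        self.min_progress = min_progress

    def update(self, angular_vel, forward_step):
        self.ang.append(angular_vel)
        self.steps.append(forward_step)

    def is_frozen(self):
        if len(self.ang) < self.ang.maxlen:
            return False
        a = list(self.ang)
        sign_flips = sum(x * y < 0 for x, y in zip(a, a[1:]))
        return sign_flips > len(a) // 2 and sum(self.steps) < self.min_progress
```

When triggered, a recovery behavior (e.g., temporarily committing to one turn direction) can break the cycle.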

hjj-666 commented 2 years ago

Thank you for your reply. I'll try your suggestion

hjj-666 commented 2 years ago

Is this your current email: reinis Cimurs@de.Bosch.com? I wanted to upload a video, but it is too large to upload here, so I have just sent it to your email (the subject of the email is "the actual robot problem video"). As shown in the video, my robot doesn't seem to know how to reach the green target point in the second half of the video. What could be the reason?

reiniscimurs commented 2 years ago

I've received the emails, though I get an error when trying to download the file: "The file is too large to download directly. Recommend you to use tools to download it."

If you can compress it down in size and send it directly to reinis@incorl.hanyang.ac.kr, I will take a look.

hjj-666 commented 2 years ago

How large a video can you receive? My original video is nearly 900 MB.


hjj-666 commented 2 years ago

I will try to compress the video tomorrow, thank you


reiniscimurs commented 2 years ago

I think the Hanyang email allows about 25 MB, so that might not be enough for you, but if you reduce the size of the rendered output file properly it could work. Alternatively, you can upload the video privately to something like YouTube and just send me a link, and I will take a look.

hjj-666 commented 2 years ago

https://www.bilibili.com/video/BV1ZS4y1D7be/ I have uploaded the video to the site above. Can you access it (with a VPN if necessary)? If not, I will compress the video further or upload it to YouTube.


reiniscimurs commented 2 years ago

I looked at the video, and it is clear that something is off with the transforms for your robot. The laser is not set up in the same frame as the map and the nodes. The laser data should line up with the map information, but in the video it is rotating wildly.

As you can read in the paper, the nodes are constantly updated against the current laser information: if the laser shows an obstacle near a node, the node is deleted. You can see in the video that as the laser sweeps across, it keeps deleting nodes. The reason the robot doesn't go to the last location is that the node placed there was deleted. You can manually re-add it later if you want, and the robot will then navigate there, but you need to fix the laser issue first. I think there might also be an issue with how the nodes are updated, but I am not sure about that.

To make it concise: your robot setup (frames, perhaps the tf tree) is not configured correctly.
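
A quick way to sanity-check this (the frame names here are assumptions; substitute the ones from your robot's URDF) is to query the tf tree directly, e.g. with a small script like the one below, or with `rosrun tf view_frames`:

```python
#!/usr/bin/env python
# Sketch: verify that the laser frame is connected to the map frame in tf.
import rospy
import tf

rospy.init_node("check_laser_tf")
listener = tf.TransformListener()
try:
    listener.waitForTransform("map", "laser", rospy.Time(0), rospy.Duration(5.0))
    trans, rot = listener.lookupTransform("map", "laser", rospy.Time(0))
    rospy.loginfo("map -> laser translation: %s rotation: %s", trans, rot)
except tf.Exception as e:
    rospy.logerr("No map -> laser transform: %s", e)
```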

hjj-666 commented 2 years ago

Thank you for your reply. I will look into my laser problem after this.

hjj-666 commented 2 years ago

Hello, what are the intermediate waypoints (POIs) near the car based on, and how is the correct intermediate waypoint (POI) chosen? Can you explain the principle?

reiniscimurs commented 2 years ago

You can read the paper explaining the principle here: https://ieeexplore.ieee.org/abstract/document/9494668

hjj-666 commented 2 years ago

Hello, I am now using your GDAE with the TD3 algorithm from the simulation for real-world tests, but the results are still not as good as the video in your project. One question: the lidar I actually use is similar to the RPLIDAR used in GDAE, but the simulation uses a 16-channel Velodyne lidar. Is there any problem with directly porting the code? Do I need to replace my actual lidar with a 16-channel Velodyne?

reiniscimurs commented 2 years ago

You should not need to use a 16-channel laser in your implementation. The way the information is processed, the 16-channel data is 'squashed' into a single channel, and that single channel is what is used. So a one-channel laser should be enough, though the more channels you have at different heights, the more robust your implementation. In some GDAE implementations I used two lasers at different heights, for SLAM and navigation. However, you do need to make sure that your real laser data is as similar as possible to the simulated data, and that the angles of the laser data match those used in simulation; otherwise there could be a sim2real problem. What exactly is the issue you are running into?
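
Conceptually, the 'squashing' works roughly like the sketch below: for each angular bin across the front field of view, keep the minimum distance over all channels. This is a simplified illustration, not the repository's exact velodyne_callback; the bin count, field of view, and range are assumptions:

```python
import numpy as np

def squash_channels(points, num_bins=20, fov=np.pi, max_range=10.0):
    """Reduce a multi-channel (x, y, z) point cloud to one planar scan."""
    bins = np.full(num_bins, max_range)
    for x, y, z in points:
        angle = np.arctan2(y, x)
        if abs(angle) > fov / 2:
            continue  # outside the front field of view
        idx = min(int((angle + fov / 2) / fov * num_bins), num_bins - 1)
        bins[idx] = min(bins[idx], np.hypot(x, y))  # closest return wins
    return bins
```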

hjj-666 commented 2 years ago

At present, I have set the lidar FOV to span from 270 degrees to 90 degrees on the Velodyne's web management interface, i.e., only the front 180 degrees of lidar data are used. But under this setup, using the same velodyne_callback(self, v) function, the 20 converted laser_state values come out symmetrical; it seems to only use the data from the left 90 degrees. For example, laser_state[0] equals laser_state[19], laser_state[1] equals laser_state[18], and so on. Yet I can see from the map that the lidar is outputting the front 180 degrees of data. What could be the reason for this?

reiniscimurs commented 2 years ago

I do not have access to a Velodyne Puck, and it is in general not part of this repo, so I do not think I will pick up the task of finding this issue. I am afraid you will have to find the solution to this problem on your own.

You can check whether the data variable in the velodyne callback contains what you expect. If not, perhaps you need to assign self.gaps differently. Try to understand the velodyne_callback code and verify that your incoming data matches what the simulated Velodyne would produce.
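
As a starting point for that check (the bin layout here is an assumption about how self.gaps might be structured: 20 bins over the front 180 degrees), you could print the bin boundaries and compare them against the angles actually present in your incoming data:

```python
import numpy as np

# Sketch: 20 angular bins spanning -90 to +90 degrees.
gaps = [[-np.pi / 2 + i * np.pi / 20, -np.pi / 2 + (i + 1) * np.pi / 20]
        for i in range(20)]
for lo, hi in gaps:
    print("bin: %6.1f to %6.1f deg" % (np.degrees(lo), np.degrees(hi)))
```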

reiniscimurs commented 1 year ago

Closing due to inactivity