zkytony / cos-pomdp

Code for "Towards Optimal Correlational Object Search" | ICRA 2022
Apache License 2.0

Path planning issue in Thor - corner case #13

Closed RajeshDM closed 5 months ago

RajeshDM commented 5 months ago

I was running the system and saw that the plan was successful, but the agent was unable to reach the goal, even though the low-level planner came up with a path. I found that when the agent has rotated by some angle and then does a MoveAhead, it technically moves to the diagonally adjacent cell. But AI2Thor does not allow this, since it reports the move as blocked. Here is the example:

Screenshot from 2024-06-02 15-08-16

In this example, you can see that the agent has turned 45 degrees. The plan says: move ahead, rotate right, and move ahead twice. The issue here is that AI2Thor does not allow this MoveAhead to happen.

Screenshot from 2024-06-02 15-10-34

Here is the response from AI2Thor after the MoveAhead has been executed. It says the agent is blocked by the chair. So the navigation planner needs to take this into account while coming up with a plan for low-level navigation.
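For reference, a minimal sketch (not the repo's actual execution code; the scene name, grid size, and angle are placeholder values, and the exact AI2Thor parameters may differ by version) of how the blocked MoveAhead shows up in AI2Thor's event metadata:

```python
from ai2thor.controller import Controller

# Placeholder scene/settings: after a 45-degree rotation, MoveAhead asks for a
# diagonal grid step, which AI2Thor's physics check can reject even though the
# grid-based navigation planner considers the target cell free.
controller = Controller(scene="FloorPlan1", gridSize=0.25)

controller.step(action="RotateRight", degrees=45)
event = controller.step(action="MoveAhead")

if not event.metadata["lastActionSuccess"]:
    # e.g. "... is blocked by Chair ..." -- the navigation planner never learns
    # about this unless the executor reports the failure back to it.
    print("MoveAhead blocked:", event.metadata["errorMessage"])
```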

zkytony commented 5 months ago

I think I have experienced this before as well. AI2Thor thinks the robot will have a collision. This happens when there are 45-degree actions. I believe I tried making thortils address this somehow so the agent doesn’t get stuck, but I don’t remember the details. The ultimate way is to teleport, but that may not be ideal.
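For illustration, a hedged sketch of what such a teleport fallback could look like (assuming a running `Controller` instance named `controller`; the target pose is a made-up example, and this is not something the repo currently does):

```python
# Hypothetical fallback: if MoveAhead is rejected, teleport to the cell the
# navigation plan intended to reach. The pose below is a made-up example; note
# that Teleport bypasses AI2Thor's collision check, which is why it may not be ideal.
event = controller.step(action="MoveAhead")
if not event.metadata["lastActionSuccess"]:
    intended_position = {"x": 1.25, "y": 0.90, "z": -0.75}   # from the nav plan
    controller.step(
        action="Teleport",
        position=intended_position,
        rotation={"x": 0, "y": 45, "z": 0},
        horizon=0,
        standing=True,
    )
```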

zkytony commented 5 months ago

I don’t think the POMDP planner has to be responsible for this. This is a detail in the execution of MoveAhead. You can imagine a motion planner executing it that moves around the obstacle instead of running into it, permitting the diagonal move.

RajeshDM commented 5 months ago

Yep, this is definitely not a POMDP planner issue; it is an issue in the navigation planner only. It's just that the POMDP planner gets affected by it (I'm not sure if the POMDP planner now thinks it's in the wrong place, but it definitely affects the overall planning): even though the POMDP planner has done its job, the goal is not achieved, which makes it look like the POMDP planner failed when it actually did its job exactly as intended.

zkytony commented 5 months ago

Ah I see. How often do you replan? When you execute an action, the agent should usually update its belief. If it didn’t move, the belief over the robot pose remains the same. The POMDP planner will just replan given the belief. (The belief state may not be exactly the same even if the robot didn’t move; the update further reduces belief in locations within the field of view.)
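As a rough illustration of this replan loop (written against pomdp_py's generic agent/environment/planner interface; `agent`, `env`, and `planner` are assumed to be set up elsewhere, so this is a sketch rather than the exact cos-pomdp code path):

```python
# Plan, execute, observe, update, repeat. If a MoveAhead was blocked, the pose
# observation is unchanged, so the robot-pose belief stays (almost) the same;
# the object-location belief is still updated, with probability inside the
# current field of view reduced because nothing was detected there.
for step in range(100):                       # max number of steps
    action = planner.plan(agent)              # replan from the current belief
    env.state_transition(action, execute=True)
    observation = env.provide_observation(agent.observation_model, action)
    agent.update_history(action, observation)
    planner.update(agent, action, observation)
```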

RajeshDM commented 5 months ago

The replanning is happening the same as before: after every low-level action is taken, the high-level planner replans (it comes up with the same sequence of high-level actions), and hence the low-level planner's existing navigation plan continues to be used (even after the MoveAhead has failed).

What happens is that after taking a few actions that fail (during which the object no longer remains visible), the agent's belief about the goal spreads from a single location to a few locations around it, and so it picks a different location to go to. It then approaches the object from a different side, its belief changes again on the way, and it keeps oscillating between these two paths. (This occurs even after the navigation bug has been fixed: as the agent gets closer, the object is not visible, and the path sometimes changes completely.) This issue is a more conceptual one: what is a good weight for updating the belief based on not seeing anything? It seems the update is too strong in reducing the probability of the last-seen location being the object's current location.

zkytony commented 5 months ago

You could try tuning the parameters in the sensor models; they should be configurable. I remember this being a major part of this project: being able to tune parameters based on true positive and false positive rates.
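As an illustration of the knob being discussed, here is a minimal sketch (not the repo's actual sensor-model classes) of how a true-positive / false-positive rate controls how strongly a non-detection pushes belief away from the viewed locations:

```python
import numpy as np

def negative_update(belief, in_fov, true_positive_rate=0.9, false_positive_rate=0.05):
    """Bayes update of an object-location belief given 'object not detected'."""
    # P(no detection | object at cell): missed if in view, (1 - FP) otherwise.
    likelihood = np.where(in_fov, 1.0 - true_positive_rate, 1.0 - false_positive_rate)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Example: uniform belief over 4 cells; the first two are in the field of view.
belief = np.full(4, 0.25)
in_fov = np.array([True, True, False, False])
print(negative_update(belief, in_fov, true_positive_rate=0.9))
# The in-view cells drop from 0.25 to about 0.048 each; lowering the true-positive
# rate makes the non-detection update gentler, which is one way to soften the
# oscillation described above (with a perfect detector, TP = 1.0 zeroes them out).
```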

RajeshDM commented 5 months ago

I'm currently still running all of this with a perfect detector only. I am trying to achieve perfect object search with perfect vision, so tuning should not affect the results in any way, right?

zkytony commented 5 months ago

Perfect detector doesn’t make a mistake.

What do you mean by “perfect object search”?

RajeshDM commented 5 months ago

> Perfect detector doesn’t make a mistake.
>
> What do you mean by “perfect object search”?

What I mean by perfect object search is that I want to be able to find all objects all the time.

What I'm hoping is that when the detector is 100% correct, it should be possible to find the object of interest every single time. Is there anything that would stop this from happening? Conceptually, I believe that, given a very large number of steps, the agent should be able to find the object of interest when the detector makes no mistakes.

zkytony commented 5 months ago

It’s theoretically possible, but note that the system makes compromises (sensor model, height belief, no occlusion handling, etc., so it’s arguably not the “right system”). Given those, plus the diversity of scenes and issues like the action execution problem you encountered, I think it’s not realistic to expect 100%. You could if you made a 2D grid world and kept everything clean and simple.

zkytony commented 5 months ago

I only got a 52% success rate with COS-POMDP (gt); all the issues I mentioned were fair game for all baselines.

zkytony commented 5 months ago

If you eliminate diversity, like only run the system in one particular scene with one or a few object types, you may get 100% all the time. I used the first kitchen scene for sanity checking during development, searching for pepper shaker around stove, or vase on the lower level of the shelf, and the system works there almost every time.

RajeshDM commented 5 months ago

Yeah, it definitely makes sense that that is only a theoretical number.

I've just tried to get it as close to 2D as possible with the available information.

For example, the height belief: when the detector is perfect, the height belief, just like the location belief, is close to perfect (with the updated height calculation from the other issue). [I've had the height belief be 0.99999 after the first time the object has been seen.]

With that, the diversity of rooms also should not cause too much of a problem [the system will definitely be more efficient in some rooms than others]. I understand that a few things will not be perfect; that is the nature of POMDPs, and this is a very hard problem to solve.

Please let me know if you can think of any other theoretical limitations (I think I have most of the practical limitations, like action execution, sorted). Thank you very much for all the potential pitfalls you have already pointed out.

zkytony commented 5 months ago

One is occlusion: when the object is occluded, the detector doesn’t detect it, and the belief will be low at the object’s true location. This is because occlusion is not accounted for in the sensor model here. Also, the sensor model approximates the 3D view cone as 2D when the camera tilts up and down. I’d say a good attempt towards the “right system” was made, but it’s not there. Check out GenMOS too: it’s 3D, considers occlusion, and is more general (though correlation isn’t part of it; it could be possible to add that).

Also, POUCT with a large branching factor (in both actions and observations) doesn’t approximate the value function very well due to the sparsity of the search tree. It might not yield the proper behavior every single time.

RajeshDM commented 5 months ago

Oh, I'm working without correlation altogether.

But yeah, it makes sense that occlusion would definitely cause issues. I have checked out GenMOS and that is great too. I was trying to create a hierarchical multi-object search system in AI2Thor, so COS-POMDP seemed the best place to start from.

I guess the only way to handle the large branching factor is to run simulations on the order of 10^4 or 10^5 for more reliable planning, and sometimes even more, I am guessing (for my branching factor of 15, for sure). I am working on reducing the action space for the system.
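For example, a hedged sketch of raising the simulation budget in pomdp_py's POUCT (the numbers are illustrative rather than values used in the paper, and `agent` is assumed to be an already-constructed pomdp_py agent):

```python
import pomdp_py

# More simulations per replan to cope with a wide action/observation branching factor.
planner = pomdp_py.POUCT(
    max_depth=30,                        # planning horizon
    num_sims=50_000,                     # ~10^4-10^5 simulations, as discussed above
    discount_factor=0.95,
    exploration_const=100,               # UCB exploration constant
    rollout_policy=agent.policy_model,
)
action = planner.plan(agent)
```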

Thank you so much for the insights.