Hey, I am confused about how the topological graph works. I see that it takes in the camera feed and converts it into a series of sequentially named images. Doesn't it need to be in a tree structure? I remember the paper showing a tree shape. Also, with the current implementation, won't it be difficult to navigate from one room to another? Will it be possible to test it in Gazebo, and how accurate would it be with zero-shot learning? Do I need to train for long enough, or is it OK to use the current weights?
The public code does not actually support the full topological graph; it implements a much simpler version where there is a single "path" in your graph that you want to follow. This is primarily to keep the release code clean and provide a simple starting point for others to build on.
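To make the single-path simplification concrete, here is a minimal sketch (not from the repo; the function name and file layout are assumptions) of how a directory of sequentially named frames can be loaded as an ordered chain of nodes, which is all the released code needs in place of a full graph:

```python
import os

def load_trajectory(topomap_dir):
    """Load a single-path 'topological graph' as an ordered list of node images.

    Assumes frames were saved with sequential integer names (e.g. 0.jpg, 1.jpg, ...),
    so sorting by the numeric stem recovers the recording order. Each image is one
    node; consecutive nodes are implicitly connected, so the structure is a single
    chain rather than a tree.
    """
    frames = [f for f in os.listdir(topomap_dir)
              if f.lower().endswith((".png", ".jpg"))]
    # Sort numerically, not lexicographically, so "10.jpg" comes after "2.jpg".
    frames.sort(key=lambda f: int(os.path.splitext(f)[0]))
    return [os.path.join(topomap_dir, f) for f in frames]
```

Navigation then amounts to localizing against the nearest node in this list and heading toward the next one, which is why a single recorded trajectory (e.g. one room to another) works without any tree structure.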
Yes, it should be quite easy to get single-trajectory navigation (e.g. going from one room to another) out of the box with the repo.
Whether the model works well in Gazebo depends heavily on the environment you test it in. Unfortunately, the models were not trained on any sim data, so unless the sim is photorealistic, it may struggle. I would recommend trying zero-shot with the current weights first; if that doesn't work, I am happy to share pointers for doing a small amount of fine-tuning on sim data to improve performance.