mit-acl / cadrl_ros

ROS package for dynamic obstacle avoidance for ground robots trained with deep RL

Need help, professor #16

Closed Balajinatesan closed 2 years ago

Balajinatesan commented 3 years ago

@mfe7 Hi, professor. Your work is wonderful and we love the way you built this project. I ran into some errors while training your model and would like to clarify them with you. If you have time to discuss them with me, it would be a great help in solving my problem. I would like to take this conversation off-line; my Gmail address is balajinatesan30@gmail.com. If you contact me, it will be greatly helpful for my team. Thanks in advance for the help.

mfe7 commented 3 years ago

Hi @Balajinatesan - thanks for the message. I’d rather you post specific issues on GitHub so others can benefit from the answers.

Balajinatesan commented 3 years ago

Sure, professor @mfe7. I have three questions:

  1. When I run the docker run script (run docker.sh), it fails with the error: /entrypoint.sh: line 4: jupyter: command not found.
  2. Is it possible to install your ROS code on a custom-built robot, and what steps do we need to follow to do that?
  3. How do we connect a lidar to your code? Please share your thoughts on the above questions, professor. It would be a great help for me and my team. I also want to mention that your work has been very helpful for us.

mfe7 commented 3 years ago
  1. Hmm, did you build the docker image before running? I haven't tried those commands in a few years. It looks like the Dockerfile installs jupyter, but note that the docker container doesn't support ROS in its current form. You might be able to change the first line of the Dockerfile to inherit from a public ros image instead?
  2. Sure, you'd just clone this repo into your catkin workspace and compile (e.g., catkin_make). You should then be able to use the provided node and/or launch file (a minimal usage sketch follows this list).
  3. Using lidar will require a bit more work. We had a perception pipeline to extract dynamic obstacle states from lidar data, and then the dynamic obstacle states were an input to the collision avoidance node. That is beyond the scope of this repo, so unfortunately I can't support that part.
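
To make item 2 a bit more concrete, here is a minimal sketch of how a custom robot might consume the velocity command published by the collision avoidance node once the package is built in a catkin workspace. The topic names (/cadrl_node/nn_cmd_vel and /cmd_vel) are assumptions for illustration, not the confirmed cadrl_ros interface; check the provided launch file and your base driver for the real names.

```python
#!/usr/bin/env python
# Sketch only: relay the collision avoidance node's velocity command to a
# custom robot's base. The topic names below are assumptions, not the
# documented cadrl_ros interface.
import rospy
from geometry_msgs.msg import Twist


class CmdRelay(object):
    def __init__(self):
        # Velocity topic of the custom robot's base driver (assumed /cmd_vel).
        self.pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        # Command topic assumed to be published by the CADRL node.
        rospy.Subscriber('/cadrl_node/nn_cmd_vel', Twist, self.cb_cmd)

    def cb_cmd(self, msg):
        # Forward the command unchanged; clamp or rescale here if your robot's
        # velocity limits differ from the ones used in training.
        self.pub.publish(msg)


if __name__ == '__main__':
    rospy.init_node('cadrl_cmd_relay')
    CmdRelay()
    rospy.spin()
```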
Balajinatesan commented 3 years ago

Thanks for the info, professor.

Balajinatesan commented 3 years ago

Hi sir @mfe7. Which of the topics shown in the picture below do we need to subscribe to, and when? Could you please explain these subscriptions if you know about them? We also need to know what we should provide for Diff_mp(~Mode), because we are using a custom-built robot. If you could share this with us, it would be a great help.
Thanks in advance for the reply.
[Screenshot from 2021-07-07 19-13-52]

Balajinatesan commented 2 years ago

@mfe7 Sorry to disturb you. Could you please provide the information requested above, professor? It would be very helpful for our work.

Balajinatesan commented 2 years ago

Hi professor @mfe7, sorry to disturb you. Could you please answer the question in my previous comment? Thanks in advance for your reply.

mfe7 commented 2 years ago

@Balajinatesan sorry for the delay. Please see #12 and #1 if interested in a discussion of where to find those messages. In terms of those two subscriptions, it may be easier to modify the code to not use those, rather than writing something to publish the right values on those topics.

NNActions is supposed to contain some info about the static world (e.g., how far the robot can travel in each direction before hitting a static obstacle) -- computed by a separate (unreleased) node that has access to a costmap of the environment.
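
For readers trying to replace that unreleased node, the sketch below shows the kind of computation being described: ray-casting through a costmap (nav_msgs/OccupancyGrid) to find how far the robot can travel along each heading before hitting a static obstacle. It is only an illustration under assumed parameters (eight headings, a 50% occupancy threshold); it is not the MIT-ACL implementation and does not fill the actual NNActions message.

```python
# Sketch only: approximate "free distance in each direction" from a costmap.
# Not the unreleased MIT-ACL node; angles and thresholds are assumptions.
import math
import numpy as np
from nav_msgs.msg import OccupancyGrid


def free_distances(grid_msg, x, y, num_angles=8, max_range=5.0, occ_thresh=50):
    """Free distance (m) from world point (x, y) along num_angles headings."""
    res = grid_msg.info.resolution
    w, h = grid_msg.info.width, grid_msg.info.height
    ox = grid_msg.info.origin.position.x
    oy = grid_msg.info.origin.position.y
    data = np.asarray(grid_msg.data).reshape(h, w)  # row-major occupancy grid

    dists = []
    for k in range(num_angles):
        theta = 2.0 * math.pi * k / num_angles
        d = max_range
        # Step outward one cell at a time until an occupied cell or the map edge.
        for s in range(1, int(max_range / res) + 1):
            px = x + s * res * math.cos(theta)
            py = y + s * res * math.sin(theta)
            col = int((px - ox) / res)
            row = int((py - oy) / res)
            if not (0 <= col < w and 0 <= row < h):
                d = s * res  # ran off the known map; stop here
                break
            if data[row, col] >= occ_thresh:
                d = (s - 1) * res  # last free cell before the obstacle
                break
        dists.append(d)
    return dists
```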

PlannerMode switches between some basic vehicle states, so you could probably just set the self.operation_mode flag to something like True during __init__ and modify any logic that uses self.operation_mode to be a simple passthrough. The behavior is in there for our robot software architecture, but not very general purpose.
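
A minimal sketch of that passthrough idea is below. The attribute and callback names follow the description above and may not match cadrl_node.py exactly, so treat it as an illustration rather than a patch.

```python
# Sketch only: hard-code the planner mode so the node is always active,
# instead of subscribing to PlannerMode. Names are stand-ins and may differ
# from the actual cadrl_ros source.
class CadrlNodeSketch(object):
    def __init__(self):
        # Assume the planner is active unconditionally; no PlannerMode
        # subscriber is needed.
        self.operation_mode = True

    def cbPlannerMode(self, msg):
        # If the subscriber is kept, turn its callback into a passthrough
        # that never disables the planner.
        self.operation_mode = True

    def should_plan(self):
        # Any logic that previously branched on the vehicle state now reduces
        # to a constant check.
        return bool(self.operation_mode)
```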

Balajinatesan commented 2 years ago

Thanks for the help, sir.

Balajinatesan commented 2 years ago

Hi professor, I would like to know what kind of perception pipeline you used for the robot. Do you have any reference links for it? That would be a great help for me.

mfe7 commented 2 years ago

The perception pipeline is described here: https://dspace.mit.edu/handle/1721.1/111698