carla-simulator / ros-bridge

ROS bridge for CARLA Simulator
MIT License

Feature request carla 0.9.2 rosbridge #23

Open anshulpaigwar opened 5 years ago

anshulpaigwar commented 5 years ago

Multi-client support in CARLA 0.9.x is an incredibly useful feature, and writing a ros_bridge for it is also a huge task; I appreciate the efforts. I believe some things could be improved.

1) The actor_id in CARLA and the child id in ros_bridge are different, and they both change every time I relaunch my Python script. This in turn changes my topic names and my frame_ids every time.

2) With the current version of ros_bridge it is very difficult to rename the frame_ids and topic names without hardcoding them, and the naming does not follow ROS conventions. (We should have args in the launch file to set the names.)

3) Only in the case of the ego_vehicle does the topic name remain the same. But what if I don't want to control my vehicle with ROS but with another client, for example in scenario_runner for junction_crossing? I need to control two vehicles through the Python client and acquire the sensor data from both vehicles through ROS.

4) All the TF transforms in the CARLA 0.9.2 rosbridge are with respect to the /map frame. If we follow the robot TF setup guide for the navigation stack in ROS (link), transforms for the sensors should actually be with respect to the vehicle (if the sensor is attached to the vehicle while spawning in CARLA). Some standard ROS code cannot run because it expects the TF base_link --> lidar, but through carla_ros_bridge we get /map --> /base_link and /map --> lidar. How do we get the transform with respect to the vehicle?

fabianoboril commented 5 years ago

Hi,

@1 and @2: Let me look into this.

@3: Please see PR #22. With this in place, that should not be a problem.

@4: Well, in CARLA the sensors currently provide their information in the world frame (i.e. /map). Hence, the ROS bridge does the same. The ROS bridge is only intended to forward the CARLA information, not to modify it or do any additional transforms.

anshulpaigwar commented 5 years ago

Actually, in CARLA 0.8.x and its rosbridge the sensor transforms were published with respect to the vehicle.

5) Also a quick thing: all the markers for the vehicles are published as individual Marker msgs on one topic. We could use a MarkerArray msg instead to avoid any time synchronisation issues. CARLA 0.8.x and its rosbridge used a MarkerArray msg.
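A minimal sketch of what that could look like, assuming a hypothetical aggregator node and topic name (illustrative only, not the bridge's actual API):

import rospy
from visualization_msgs.msg import Marker, MarkerArray

def publish_vehicle_markers(marker_pub, vehicle_markers):
    # Bundle one Marker per vehicle into a single MarkerArray, so all
    # vehicles arrive in one message and need no cross-topic time
    # synchronisation on the subscriber side.
    msg = MarkerArray()
    msg.markers = vehicle_markers
    marker_pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('vehicle_marker_aggregator')
    pub = rospy.Publisher('/carla/vehicle_markers', MarkerArray, queue_size=10)
    publish_vehicle_markers(pub, [Marker()])  # placeholder marker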

berndgassmann commented 5 years ago

@4 Let me add some additions here. As you already said correctly, we publish the following frames:

- map (global frame)
- map -> ego_vehicle (which compares to the 'base_link' in the robot tutorial)
- map -> {ego_vehicle_sensors} (which compares to the 'base_laser' in the robot tutorial)
- map -> {other_vehicles} (which compares to the 'base_link' in the robot tutorial for the other robots ;-)

Because every transformation implicitly also defines the inverse transformation, all required transformations are available. Having said that, ego_vehicle -> map, {ego_vehicle_sensors} -> map and {other_vehicles} -> map are also implicitly available within TF. So you just use tf::TransformListener as usual: waitForTransform("ego_vehicle", "ego_vehicle_sensor", ...) and lookupTransform("ego_vehicle", "ego_vehicle_sensor", ...). Within ROS you can calculate the transformation from any node of the TF tree to any other, so I don't see how the current ROS bridge implementation would cause problems.
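In rospy terms, a minimal sketch of that lookup (assuming the bridge publishes frames named ego_vehicle and ego_vehicle/lidar; substitute your sensor's actual frame_id):

import rospy
import tf

rospy.init_node('sensor_tf_lookup')
listener = tf.TransformListener()

# TF implicitly contains the inverse transforms, so ego_vehicle -> sensor
# can be queried even though the bridge publishes map -> sensor.
listener.waitForTransform('ego_vehicle', 'ego_vehicle/lidar',
                          rospy.Time(0), rospy.Duration(5.0))
(trans, rot) = listener.lookupTransform('ego_vehicle', 'ego_vehicle/lidar',
                                        rospy.Time(0))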

In our AD-Stack code we e.g. receive the CARLA objects and transform them into the vehicle coordinate frame to make them look as if they were received by some real object sensor (I believe that's what you're searching for, right?):

tf::StampedTransform objectToCarTransform;
mTfListener.lookupTransform(
    "ego_vehicle", object.header.frame_id,
    object.header.stamp, objectToCarTransform);

tf::Transform objectPoseInObjectCS;
tf::poseMsgToTF(object.pose, objectPoseInObjectCS);
tf::Transform const objectPoseInCarCSTransform =
    objectToCarTransform * objectPoseInObjectCS;
geometry_msgs::Pose objectPoseInCarCS;
tf::poseTFToMsg(objectPoseInCarCSTransform, objectPoseInCarCS);

@1 And yes, currently the frame_id and the actor_id are not the same, but your code should not rely on that, because the frame_id in ROS is 32-bit and the actor_id is 64-bit (coming from Unreal...). We could change the actor_id_registry so that it tries to keep the two the same as long as possible, but then there would be rare cases where it does not work (because the actor_id exceeds the 32-bit range), and at that point the error would be hard to detect. Therefore, we decided to use an independent id here to make clear that these two numbers are semantically not the same. I would be interested in the actual use case.

@2 Which frame_ids and topic names exactly don't follow ROS conventions and need to be renamed? All topic names concerning the ego_vehicle should be constant, and the sensors make use of the assigned role_name ... so you could define the same sensor type with different role names for your specific vehicle setup...

berndgassmann commented 5 years ago

Just had an idea: if you want to differentiate the frames of the other vehicles you create via the Python interface, one might be able to use the role_name of the vehicle instead of the actor_id as the frame_id. But that would require that the role_names do not conflict ... Maybe that's actually what you want/need?
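For illustration, a minimal sketch of spawning a vehicle with a distinct role_name through the CARLA Python API (the blueprint and the role_name value are just examples):

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint = world.get_blueprint_library().find('vehicle.tesla.model3')
# A unique role_name per vehicle would give the bridge a stable
# identifier to use as frame_id instead of the changing actor_id.
blueprint.set_attribute('role_name', 'tracked_vehicle_1')
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)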

anshulpaigwar commented 5 years ago

@4 Yes, using lookupTransform helped me solve my issue. But since we already have the information about which vehicle a sensor is attached to, directly providing the vehicle --> sensor transform would be nice.

@2 The difficulties I had were mostly with renaming frame_ids. The frame_id of the ego_vehicle should be ego_vehicle/base_link or (/base_link), and all sensors should be /. It was difficult to rename the sensor frame_ids, e.g. my code stack uses 'velodyne' instead of 'lidar'. My workaround was to use a static transformer in my launch file, publishing ego_vehicle/front/lidar --> zoe/velodyne as a 0,0,0,0,0,0,0 transform (which gets a bit messy if you have multiple sensors). For the topic names it is fine to just use the remap parameter.
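For reference, the same identity-transform alias can be published from Python with tf2_ros instead of a launch-file static_transform_publisher (frame names taken from the workaround above):

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('sensor_frame_alias')

# Identity transform that aliases the bridge's sensor frame to the frame
# name the rest of the stack expects (the same trick as the
# 0,0,0,0,0,0,0 static_transform_publisher entry in the launch file).
broadcaster = tf2_ros.StaticTransformBroadcaster()
alias = TransformStamped()
alias.header.stamp = rospy.Time.now()
alias.header.frame_id = 'ego_vehicle/front/lidar'
alias.child_frame_id = 'zoe/velodyne'
alias.transform.rotation.w = 1.0  # identity rotation, zero translation
broadcaster.sendTransform(alias)
rospy.spin()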

anshulpaigwar commented 5 years ago

@1 In my use case I was testing a dynamic object tracking algorithm with CARLA. I have one ego vehicle and other vehicles on auto mode that I have to track, and I spawn them at the desired locations. To validate the algorithm I have to run the same scenario multiple times. But every time I launch my script, the frame_ids and topic names of the other vehicles change (maybe I need to fix the random seed), making it difficult to match the ground truth values with the output of the algorithm. I had the same idea of using role names to differentiate between the vehicles. My workaround was to make a class for the other vehicles (similar to the ego vehicle class, with fixed frame_ids and topic names corresponding to their role names): test_vehicle.py

#!/usr/bin/env python

import rospy

from std_msgs.msg import ColorRGBA
from derived_object_msgs.msg import Object
from shape_msgs.msg import SolidPrimitive
from visualization_msgs.msg import Marker

# from carla_ros_bridge.actor import Actor
import carla_ros_bridge.transforms as transforms
from carla_ros_bridge.vehicle import Vehicle

class TestVehicle(Vehicle):
    def __init__(self, carla_actor, parent):

        prefix = carla_actor.attributes.get('role_name')

        super(TestVehicle, self).__init__(carla_actor=carla_actor,
                                          parent=parent,
                                          topic_prefix=prefix,
                                          append_role_name_topic_postfix=False)

    def destroy(self):
        """
        Function (override) to destroy this object.

        Finally forward call to super class.

        :return:
        """
        rospy.logdebug("Destroy Vehicle(id={})".format(self.get_id()))
        super(Vehicle, self).destroy()  # deliberately skips Vehicle.destroy() and calls the base class directly

    def update(self):
        """
        Function (override) to update this object.

        On update vehicles send:
        - tf global frame
        - object message
        - marker message

        :return:
        """
        self.send_tf_msg()
        self.send_object_msg()
        self.send_marker_msg()
        super(Vehicle, self).update()  # skip Vehicle.update() so the messages above are not published twice

    def get_marker_color(self):
        """
        Function (override) to return the color for marker messages.

        :return: the color used by a vehicle marker
        :rtype: std_msgs.msg.ColorRGBA
        """
        color = ColorRGBA()
        color.r = 255
        color.g = 0
        color.b = 0
        return color

    def send_marker_msg(self):
        """
        Function to send marker messages of this vehicle.

        :return:
        """
        marker = self.get_marker(use_parent_frame=False)
        marker.type = Marker.CUBE

        marker.pose = transforms.carla_location_to_pose(
            self.carla_actor.bounding_box.location)
        marker.scale.x = self.carla_actor.bounding_box.extent.x * 2.0
        marker.scale.y = self.carla_actor.bounding_box.extent.y * 2.0
        marker.scale.z = self.carla_actor.bounding_box.extent.z * 2.0
        self.publish_ros_message('/carla/vehicle_marker', marker)

    def send_object_msg(self):
        """
        Function to send object messages of this vehicle.

        A derived_object_msgs.msg.Object is prepared to be published via '/carla/objects'

        :return:
        """
        vehicle_object = Object(header=self.get_msg_header(use_parent_frame=False))
        # ID
        vehicle_object.id = self.get_global_id()
        # Pose
        vehicle_object.pose = self.get_current_ros_pose()
        # Twist
        vehicle_object.twist = self.get_current_ros_twist()
        # Acceleration
        vehicle_object.accel = self.get_current_ros_accel()
        # Shape
        vehicle_object.shape.type = SolidPrimitive.BOX
        vehicle_object.shape.dimensions.extend([
            self.carla_actor.bounding_box.extent.x * 2.0,
            self.carla_actor.bounding_box.extent.y * 2.0,
            self.carla_actor.bounding_box.extent.z * 2.0])

        # Classification if available in attributes
        if self.classification != Object.CLASSIFICATION_UNKNOWN:
            vehicle_object.object_classified = True
            vehicle_object.classification = self.classification
            vehicle_object.classification_certainty = 1.0
            self.classification_age += 1
            vehicle_object.classification_age = self.classification_age

        self.publish_ros_message('/carla/objects', vehicle_object)

and then changing the vehicle.py file as follows:

        if (carla_actor.attributes.get('role_name') ==
                parent.get_param('ego_vehicle').get('role_name')):
            return EgoVehicle.create_actor(carla_actor=carla_actor, parent=parent)
        else:
            return TestVehicle(carla_actor=carla_actor, parent=parent) # requires all role names to be different

slinshady89 commented 5 years ago

I have the same issue as point 4. I'm trying to use viso2_ros, and it is not possible to use /ego_vehicle as it is with tf2::TransformListener. So far I have only changed topic names and their namespace. It should probably also be possible to apply some changes to the base_link so that any other node becomes usable.

jjw2 commented 4 years ago

I'm experiencing the same issue noted in #4. @berndgassmann, @fabianoboril, I'm hoping I can explain why this is indeed an issue and why it would be helpful for it to be resolved.

First, a general discussion: @anshulpaigwar is correct that robot sensor coordinate frame TFs typically do not relate directly to the /map frame, but instead relate to some fixed frame on the vehicle (e.g. base_link, though it could be any fixed frame on the vehicle). This is partly because a robot doesn't automatically know where it is in the world, and must actually use sensor measurements to determine its location, which is what's referred to as "localization". Think about it: when a robot starts up, all that's known is where the sensors are on the robot structure; it is not known where the sensors are relative to the world (or /map). Just from a philosophical perspective, it doesn't make sense to give a robot simulated sensor data and specify out of the box how that simulated sensor data relates to the world (or /map), because that is not how a robot receives that information in the real world; in the real world, it receives the data with no indication of exactly how that data relates to the world (obviously, GPS is slightly different, but even with GPS there can be substantial error).

The purpose of localization is to use sensor measurements (GPS data, IMU data, lidar returns with respect to a pre-generated 3D map, wheel speeds, and other info) to actually determine the location of the robot in the world. So, prior to being "localized", you'd have a world-fixed /map coordinate frame and a "tree" of body-fixed frames on the robot (base_link, sensor frames, etc.) whose relationships you already know; the job of localization is to "fill in the gap" by determining the relationship between the map frame and the body-fixed frames (note: an odometry frame comes into play here as well, but I'm not going to get into the details here). See ROS REP 105 for more detail, and check out the "map" frame, where it states that "[i]n a typical setup, a localization component constantly re-computes the robot pose in the map frame based on sensor observations."

Now on to some technical issues: ROS TF is implemented such that each TF node can only have one parent (though it can have multiple children). The standard TF tree for most robots, as mentioned above, has sensors related via TF to some body-fixed frame. Since you've already made each sensor a child of /map, it can't be a child of any other frame, which breaks the TF tree for (I would think) the majority of people who have a pre-existing ROS robot setup and want to use CARLA. For example, in my system my lidar frame is already a child of my base_link frame, so the fact that CARLA makes the lidar frame a child of /map causes issues for me.

Furthermore, any "lookup" of the relationship between sensors on the ROS side of things would need to pass through a dynamic transform to /map, which is just not a good idea (why is beyond scope and this is already long-winded, but check out ROS REP 105 for an explanation of possible discontinuities in positions in the map frame). The relationship between sensors on the car never changes, and thus these relationships should be represented by a static transform, which typically publishes much faster and is always consistent with no changing error. For example, let's say I'm fusing lidar and camera data together in my perception stack, meaning I need to know the relationship between the location of the camera and the location of the lidar. The way the CARLA ros-bridge TF tree is implemented, the relationship lookup would be processed as camera -> map -> lidar, which doesn't really make much sense. The lookup should only have to pass through static TF links, such as, say, camera -> lidar directly, or camera -> base_link -> lidar (this all depends on how one sets up their body-fixed TF tree).

The real problem here is that you're actually publishing the TF data out of the box, with no way to modify it without modifying the ros-bridge code. All of the sensor data has been converted correctly and whatnot, but the TF tree is wrong. To summarize the issues: 1) The way you've related everything back to map breaks any existing TF tree in ROS, and also has the potential to cause issues, because TF lookups between sensors would have to pass through the map frame, which has a dynamic relationship with the body-fixed frames (and just kind of doesn't make sense). 2) You've already solved the localization problem with the way you've related everything back to map out of the box in the TF tree. I can't test my localization system in CARLA.

I'd suggest the following as a solution: 1) Either determine and publish the tf between the base frame (/ego_vehicle) and the respective sensors, or force users to define the relationships between their sensors in ROS and publish their own TF tree. Many ROS users already use URDF files to define the relationships between all the fixed frames on their vehicle (see ROS URDF). This way, you'd still have exactly the same info, but all of the sensors would be related to /ego_vehicle, and then one just needs to relate /ego_vehicle to /map. 2) Give users the option of publishing the relationship between /map and /ego_vehicle (again, this is simplified, because there's the /odom frame to consider), or leave this task up to them. This would allow users to run/test their localization systems in CARLA.
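A minimal sketch of the first suggestion, assuming the mounting offsets of the sensors relative to the vehicle are known (the frame names and offsets below are illustrative):

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('ego_sensor_static_tf')

# Publish each fixed vehicle -> sensor relationship once as a static
# transform; map -> ego_vehicle is then left to a localization component.
broadcaster = tf2_ros.StaticTransformBroadcaster()
transforms = []
for child_frame, (x, y, z) in {
        'ego_vehicle/lidar': (0.0, 0.0, 2.4),
        'ego_vehicle/camera': (2.0, 0.0, 1.4)}.items():
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'ego_vehicle'
    t.child_frame_id = child_frame
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.translation.z = z
    t.transform.rotation.w = 1.0  # identity orientation for simplicity
    transforms.append(t)
broadcaster.sendTransform(transforms)  # one latched message for all sensors
rospy.spin()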

fpasch commented 4 years ago

You're totally right. The current tf publishing might not be suitable for use cases like localization. (The attached "frames" image shows the tf tree of the carla_ad_demo with an attached "view" camera.)

As work packages I would suggest:

  1. Extend the carla_ego_vehicle node to not only spawn the sensors in CARLA but also create a static tf publisher for every attached sensor.
  2. Make the /map - /ego_vehicle tf publishing optional in carla_ros_bridge (using carla_ros_bridge/config/settings.yaml).
  3. Find a solution for /odom.

Would you be able to contribute?

TNAV1 commented 4 years ago

@fabianoboril & @berndgassmann was the @4 fix (sensor tf related to the vehicle) included in the CARLA 0.9.9 release? @DSantosO does https://github.com/carla-simulator/carla/pull/2997 fix issue @4?

goksanisil23 commented 3 years ago

Hi, completely on board with @jjw2. I haven't checked 0.9.11, but as of 0.9.10 the tf relation for sensors is still w.r.t. the "map" frame. Is there a plan to change this?

fpasch commented 3 years ago

@goksanisil23 If you use the latest version from master in synchronous mode, you should get a tf tree where the sensors are attached to the vehicle.
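For context, a minimal sketch of what synchronous mode looks like on the CARLA Python API side (the step size is just an example; when the bridge runs in synchronous mode, it acts as the ticking client):

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# In synchronous mode the server advances only on tick(), so the bridge
# can publish a consistent tf tree for every simulation step.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # example: 20 simulation steps per second
world.apply_settings(settings)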

goksanisil23 commented 3 years ago

Thank you @fpasch !

ammar-n-abbas commented 3 years ago

The only problem is that with synchronous mode, the carla-ros bridge is too slow.