carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License
11.55k stars 3.73k forks

Resolution for depth sensor #6283

Open patel-rishi opened 1 year ago

patel-rishi commented 1 year ago

I am trying to run scenarios from Scenic in Carla, using VerifAI to analyze scenarios where the vehicle makes incorrect decisions. RGB and depth sensors are mounted on the ego vehicle, from which I get images and distance information in front of the ego. The issue I am facing is that with a depth map resolution of 1080×1080 the generated depth images are broken, while the same code generates fine images at 1280×1280 resolution. I am guessing 1080×1080 is somehow not compatible with Carla, but I can't figure out the exact reason. (I am attaching the relevant code and the generated depth-map images.)

    if opt_dict:
        if 'target_speed' in opt_dict:
            self.location_pid_dict['dt'] = 1.0 / self.target_speed
        if 'clear_dist' in opt_dict:
            self.clear_dist = opt_dict['clear_dist']
        if 'location_pid_dict' in opt_dict:
            self.location_pid_dict = opt_dict['location_pid_dict']

    self.controller = PIDFollowController(
        vehicle,
        clear_dist=self.clear_dist,
        args_lateral=self.lateral_pid_dict,
        args_longitudinal=self.longitudinal_pid_dict)
    # Magnitude of lane_state is how far vehicle is from its
    # desired lane. Negative/positive means to the left/right
    # of desired lane, respectively.
    self.lane_state = 0
    self.is_changing_lane = False
    self.current_image = None

    self.actor = self._world.get_actor(vehicle.id)
    print(self.actor)
    # Add RGB and depth cameras to the ego vehicle
    self.image_size = 1080 # RESOLUTION PARAMETER - 1080/1280
    self.fov = None
    cam_bp = self._world.get_blueprint_library().find('sensor.camera.rgb')
    cam_bp.set_attribute("image_size_x",str(self.image_size))
    cam_bp.set_attribute("image_size_y",str(self.image_size))
    cam_bp.set_attribute("fov",str(110))
    cam_bp.set_attribute('sensor_tick', '0.2')
    cam_location = carla.Location(2,0,1)
    cam_rotation = carla.Rotation(0,0,0)
    cam_transform = carla.Transform(cam_location,cam_rotation)
    self.rgb_camera = self._world.try_spawn_actor(cam_bp,cam_transform,self.actor,carla.AttachmentType.Rigid)

    depth_sensor_bp = self._world.get_blueprint_library().find('sensor.camera.depth')
    depth_sensor_bp.set_attribute("image_size_x",str(self.image_size))
    depth_sensor_bp.set_attribute("image_size_y",str(self.image_size))
    depth_sensor_bp.set_attribute("fov",str(110))
    depth_sensor_bp.set_attribute('sensor_tick', '0.2')
    depth_location = carla.Location(2,0,1)
    depth_rotation = carla.Rotation(0,0,0)
    depth_transform = carla.Transform(depth_location,depth_rotation)
    self.depth_sensor = self._world.spawn_actor(depth_sensor_bp,depth_transform,self.actor, attachment_type=carla.AttachmentType.Rigid)

    self.rgb_image = None
    self.depth_image = None

    #Model parameters
    self.imgsz=(640, 640)
    self.weights = '/home/e5_5044/Desktop/yolov5/runs/train/yolo5_carla_attempt1/weights/best.pt'
    data = '/home/e5_5044/Desktop/yolov5/data/carla_data.yaml'
    self.conf_thres=0.25  # confidence threshold
    self.iou_thres=0.45 # NMS IOU threshold
    self.max_det=1000  # maximum detections per image
    self.device='cpu'  # cuda device, i.e. 0 or 0,1,2,3 or cpu
    half=False  # use FP16 half-precision inference
    dnn=False # use OpenCV DNN for ONNX inference
    self.agnostic_nms=False  # class-agnostic NMS
    self.device1 = select_device(self.device)

    #Load model for object detection
    self.od_model = DetectMultiBackend(self.weights, dnn=dnn, device=self.device1, data=data, fp16=half)
    self.stride, self.names, self.pt = self.od_model.stride, self.od_model.names, self.od_model.pt
    self.imgsz = check_img_size(self.imgsz, s=self.stride)  # check image size
    self.classes = self.od_model.names
    #print("Model loaded")
    self.rgb_camera.listen(lambda image: self.process_rgb_sensor_data(image))
    self.depth_sensor.listen(lambda image: self.process_depth_sensor_data(image))
    #print("Sensors listening")
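
For context on what `process_depth_sensor_data` has to do with the raw buffer: Carla's depth camera encodes distance across the R, G and B channels of a BGRA image, and the documented decoding is `normalized = (R + G*256 + B*256*256) / (256**3 - 1)`, scaled by 1000 to get meters. A minimal sketch of that decoding (the helper name `decode_depth` is mine, and it assumes the buffer is exactly `height * width * 4` bytes with no row padding):

```python
import numpy as np

def decode_depth(raw_bytes, width, height):
    """Decode a Carla depth image (BGRA, 8 bits per channel) to meters.

    Carla stores depth across the color channels:
        normalized = (R + G*256 + B*256*256) / (256**3 - 1)
        meters     = 1000 * normalized
    Assumes an unpadded height x width x 4 BGRA buffer.
    """
    arr = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 4)
    b = arr[:, :, 0].astype(np.float64)
    g = arr[:, :, 1].astype(np.float64)
    r = arr[:, :, 2].astype(np.float64)
    normalized = (r + g * 256.0 + b * 65536.0) / (256.0 ** 3 - 1.0)
    return 1000.0 * normalized  # per-pixel depth in meters
```

If the 1080×1080 buffer carries hidden per-row padding, that `reshape` is exactly where the image would come out "broken", since the rows no longer line up.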

This was with 1280×1280: 61

This was with 1080×1080: 220

mawi42 commented 1 year ago

Maybe this is also caused by an alignment problem? See https://github.com/carla-simulator/carla/issues/6225
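
If it really is the alignment problem from #6225, then each row of the raw buffer is padded out to an alignment boundary, and 1280 happens to satisfy it while 1080 does not. A hedged workaround sketch, assuming rows are padded to a multiple of 64 pixels (an assumption based on this thread, not a documented Carla guarantee; `unpad_rows` is a hypothetical helper):

```python
import math
import numpy as np

ALIGN = 64  # assumed row alignment in pixels; see issue #6225

def unpad_rows(raw_bytes, width, height, channels=4, align=ALIGN):
    """Strip assumed per-row padding from a raw image buffer.

    If the buffer already matches width*height*channels, no padding is
    present and it is reshaped directly; otherwise each row is assumed
    to be padded up to a multiple of `align` pixels and is cropped.
    """
    if len(raw_bytes) == width * height * channels:
        return np.frombuffer(raw_bytes, np.uint8).reshape(height, width, channels)
    padded_width = math.ceil(width / align) * align
    if len(raw_bytes) != padded_width * height * channels:
        raise ValueError("buffer size matches neither padded nor unpadded layout")
    arr = np.frombuffer(raw_bytes, np.uint8).reshape(height, padded_width, channels)
    return arr[:, :width, :]  # crop away the padding columns
```

For 1080-pixel rows this assumes a padded width of 1088 (the next multiple of 64), which would explain the progressively sheared look of the broken images.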

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

SExpert12 commented 1 year ago

Hi, which distance information are you extracting, and how are you doing that?

TzabarDolev commented 1 year ago

I wrote a short blog post about this issue on the Carla simulator research blog (https://carlasimblog.wordpress.com/2023/09/16/the-64-division-curse/) and added a short code snippet to fix it.
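
Going by the blog post's title, the simplest avoidance is to request only dimensions divisible by 64, which matches the observation in this thread that 1280 works while 1080 does not. A tiny sketch of rounding a requested dimension up before setting the blueprint attributes (`safe_resolution` is my name, not a Carla API):

```python
def safe_resolution(size, align=64):
    """Round a requested image dimension up to the next multiple of `align`."""
    return ((size + align - 1) // align) * align

# e.g. request safe_resolution(1080) == 1088 instead of 1080, then pass
# str(...) to set_attribute("image_size_x"/"image_size_y") as usual.
```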