carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

LiDAR map artifacts #3287

Closed tiger-bug closed 3 years ago

tiger-bug commented 4 years ago

Good afternoon,

I have an issue with creating a full LiDAR point cloud. It seems to have artifacts similar to the ones found in issue 594. Everything seems to be aligned except for points following the path of the vehicle. I'm using the apt-get version of Carla (CARLA 0.9.9.2). As you can see, I'm getting points that are floating and parallel above the ground (similar to the aforementioned issue).
map floating_points

I saw that in order to fix the problem, they set the pitch and roll angles to zero. My original transformation matrices are

        gamma = rpy[i][0]
        beta = rpy[i][1]
        alpha = rpy[i][2]
        Rot[i,0,0] = np.cos(alpha)*np.cos(beta)
        Rot[i,0,1] = np.cos(alpha)*np.sin(beta)*np.sin(gamma) - np.sin(alpha)*np.cos(gamma)
        Rot[i,0,2] = np.cos(alpha)*np.sin(beta)*np.cos(gamma) + np.sin(alpha)*np.sin(gamma)
        Rot[i,1,0] = np.sin(alpha)*np.cos(beta)
        Rot[i,1,1] = np.sin(alpha)*np.sin(beta)*np.sin(gamma) + np.cos(alpha)*np.cos(gamma)
        Rot[i,1,2] = np.sin(alpha)*np.sin(beta)*np.cos(gamma) - np.cos(alpha)*np.sin(gamma)
        Rot[i,2,0] = -1*np.sin(beta)
        Rot[i,2,1] = np.cos(beta)*np.sin(gamma)
        Rot[i,2,2] = np.cos(beta)*np.cos(gamma)
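(For reference, the element-wise matrix above is the standard ZYX Euler rotation, Rz(yaw) @ Ry(pitch) @ Rx(roll). A minimal sketch, numpy only, that builds it from the factored form, which is easier to sanity-check against the element-wise version; the function name is mine, not from CARLA:)

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Build Rz(yaw) @ Ry(pitch) @ Rx(roll), matching the
    element-wise construction above (gamma=roll, beta=pitch, alpha=yaw)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx
```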

where rpy holds the roll, pitch, and yaw values from the IMU (note that the rpy array is already in radians). In order to fix this, I attempted to do what the author of issue 594 did:

        cy = np.cos(alpha)
        sy = np.sin(alpha)
        cr = np.cos(0)
        sr = np.sin(0)
        cp = np.cos(0)
        sp = np.sin(0)

        scalex,scaley,scalez = 1,1,1

        Rot[i, 0, 0] = (cp * cy)
        Rot[i, 0, 1] = scalex * (cy * sp * sr - sy * cr)
        Rot[i, 0, 2] = scalex * (cy * sp * cr + sy * sr)
        Rot[i, 1, 0] = scaley * (sy * cp)
        Rot[i, 1, 1] = scaley * (sy * sp * sr + cy * cr)
        Rot[i, 1, 2] = scaley * (sy * sp * cr - cy * sr)
        Rot[i, 2, 0] = scalez * (-sp)
        Rot[i, 2, 1] = scalez * (cp * sr)
        Rot[i, 2, 2] = scalez * (cp * cr)

This did not seem to fix the issue. Is this the same issue as before that was solved in June or am I perhaps doing something wrong? I am taking the roll pitch yaw values from the IMU (placing the IMU and LiDAR sensor at the same location). Feel free to close this if this was already solved in a later version of CARLA. Thank you!

germanros1987 commented 4 years ago

@DSantosO @joel-mb could you take a look at this?

DSantosO commented 4 years ago

Hello @tiger-bug, if I understood you correctly, you want to put several frames together, right? Since the lidar point cloud is given in the local sensor frame of reference, you need to transform it into the world frame. You don't need to build your own transformation; you can use the one we provide via the sensor's get_transform() and then just apply it:

        points = points[:, :-1]  # because the lidar data is 4D, including the intensity
        points = np.append(points, np.ones((points.shape[0], 1)), axis=1)
        points = np.dot(tran.get_matrix(), points.T).T
        points = points[:, :-1]

I hope this solves your problem.
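(A self-contained sketch of the homogeneous-coordinate step used above, with a made-up 4x4 matrix standing in for the one get_matrix() would return: rotation in the top-left 3x3, translation in the last column, so the result equals R @ p + t for each point.)

```python
import numpy as np

# Hypothetical sensor-to-world transform (stand-in for get_matrix()):
# 90-degree yaw plus a translation of (10, 0, 2).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([10.0, 0.0, 2.0])
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

pts = np.array([[1.0, 0.0, 0.0]])           # one point in the local sensor frame
homog = np.hstack([pts, np.ones((1, 1))])   # append the homogeneous coordinate
world = (M @ homog.T).T[:, :3]              # rotate, then translate
# world[0] is R @ pts[0] + t -> [10, 1, 2]
```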

tiger-bug commented 4 years ago

Good morning @DSantosO !

Thank you for the response and advice. I was using the IMU roll, pitch, and yaw angles to transform the point cloud from the lidar frame into the world frame, so you are correct that I would like to transform the point clouds into the world frame. I suppose I should tell you what my end goal is; I apologize for not putting this in the original post.

I would like to simulate how point clouds are created using direct georeferencing. This integrates the data from the IMU/GNSS onboard the vehicle. Here was my goal:

The end goal is to add some noise to all of these sensors and see how this affects the point cloud. I know you can with the LiDAR, but I'd like to try and integrate all three of these, if that makes sense. Thank you!

I do appreciate the method you suggested and can use that, however I was just looking to integrate more sensors for this simulation.

DSantosO commented 3 years ago

Hello @tiger-bug, don't trust issue #392 too much: it is old, and the frame of reference of the LiDAR changed recently. It is now given in the usual CARLA frame of reference (x-front, y-right, z-up). The mini-script I gave you uses the world-to-local transformation we provide, and we also use the RzRyRx convention, so it should be equivalent to what you want to do. You can look into Rotation and Transform if you want to check the rotation matrices we are using; I see some minus-sign differences with respect to the ones you show in your first post. Check this and the xyz change, and if you still have the problem, it would be very helpful if you could provide me a script to test. If you prefer to provide it privately, you can mail me at daniel.santos@osvf.org.

tiger-bug commented 3 years ago

Thank you for the response and I apologize for the delay, I must have missed the notification that you responded. I just updated the version of CARLA so I will change the code and test it again. I will provide you with a script if it doesn't work or if I'm still getting those artifacts. Thank you!

tiger-bug commented 3 years ago

Here is a little script I made to collect data and make a point cloud:


#import statements for code
##################################################
import glob
import os
import sys
import random
import datetime
import numpy as np
##### Upon close...save as plyfile (https://github.com/dranjan/python-plyfile)
from plyfile import PlyData, PlyElement
##### Time experiment
from time import time
##################### Create empty array to append points to
pnt_tot = np.empty((0, 4), dtype=np.float32)

try:
    sys.path.append(glob.glob('/opt/carla-simulator/PythonAPI/carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass
import carla
##################################################
npy_f = '/path/to/npy/file'
os.mkdir(npy_f)
###### I save the file as a PLY file using
npy_name = os.path.join(npy_f, 'outfile.ply')

########### Function for creating the point cloud. This is how I append to pnt_tot ###########
def prnt_pnts(point_cloud):
    ############# pnt_tot is the global point cloud we append each measurement to ############
    global pnt_tot

    ############ Get transform matrix ###############
    trns_mat = point_cloud.transform.get_matrix()
    #### Reshape to 4x4: trns_mat[0:3, 0:3] is the rotation matrix
    #### and trns_mat[0:3, 3] is the translation
    trns_mat = np.array(trns_mat).reshape((4, 4))
    ############ Get points from raw data #################
    r_p = np.frombuffer(point_cloud.raw_data, dtype=np.dtype('f4'))
    ########### Reshape the array to XYZ, intensity
    pnts = r_p.reshape(int(r_p.shape[0] / 4), 4)
    ### Label in this case is intensity...oops. I will change this when I start
    ### using semantic LiDAR
    pnts, label = pnts[:, :3], pnts[:, 3:]
    ### Add a ones column so each point is 4x1 (homogeneous coordinates)
    pnts = np.hstack((pnts, np.ones(pnts.shape[0])[:, None]))
    new_pnts = np.dot(trns_mat, pnts.T).T
    new_pnts = np.hstack((new_pnts[:, :3], label))
    ### Append to global point cloud
    pnt_tot = np.vstack((pnt_tot, new_pnts))

def main():
    try:
        client = carla.Client('localhost', 2000)
        client.set_timeout(10.0)  # seconds
        world = client.get_world()
        #############################################
        ### Section for instantiating the LiDAR sensor (lidar_sen), ego_vehicle,
        ### actor_list, and sensor_list. I didn't include it for fear of
        ### cluttering up this section
        #############################################

        # Collect points with the prnt_pnts function
        lidar_sen.listen(lambda point_cloud: prnt_pnts(point_cloud))
        ego_vehicle.set_autopilot(True)
        print('\nEgo autopilot enabled')
        [print(actor) for actor in actor_list]

        while True:
            world_snapshot = world.tick()

    except KeyboardInterrupt:
        ##### Save point cloud here. This may not be the best way...
        print('Destroying actors and sensors')
        [sensor.destroy() for sensor in sensor_list]
        print('Saving point cloud....')
        # np.savetxt(os.path.join(f_name, 'test.txt'), pnt_tot, delimiter=',', comments='')
        print('Number of points: %d ' % pnt_tot.shape[0])
        print('Saving points to the file: {}'.format(npy_name))
        s = time()
        vertex = np.array(list(map(tuple, pnt_tot)),
                          dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('Intensity', 'f4')])
        el = PlyElement.describe(vertex, 'output')
        PlyData([el]).write(npy_name)
        e = time()
        print('Total time to create plyfile: %.4f seconds' % (e - s))

if __name__ == '__main__':
    main()

The main function is prnt_pnts; this is how I'm creating the points. Here is a picture of the result. It seems like I'm still getting the issue. Is there something in the code I'm doing incorrectly? I will provide more if you need it. Thank you so much!

new_transform_floating_points

side_view

DSantosO commented 3 years ago

Hello @tiger-bug, I have been able to run your script and I think I see something that could be what you are mentioning. Where are you spawning your car? Are you seeing this in different places or just in some? Could you try to spawn your car in Town 5 at (x=30, y=200, z=0.5) and tell me if you see the same problem?

tiger-bug commented 3 years ago

Good morning @DSantosO,

I changed the code. Here is a little snapshot of it to make sure I did it correctly:

        spawn_points=[]
        spawn_point = carla.Transform(carla.Location(x=30, y=200, z=0.5), carla.Rotation(0,0,0))
        spawn_points.append(spawn_point)

I can still see the issue, but it does look better. Is it due to the z-shift in the spawn point? It looks like it's off by about 0.3 meters.

town-05-issue-view

Here is a general view of the point cloud

town-05-side-view

I appreciate your help with this!

DSantosO commented 3 years ago

Hello @tiger-bug, I have checked the point cloud generated with your code extensively and I cannot see anything problematic. The shift that you are seeing in this image (https://user-images.githubusercontent.com/40045042/95607027-c6794680-0a20-11eb-8d11-b50cfa70ca5f.png): could it be because you are projecting things into the same plane that are actually in different places? Be aware that even if the roads seem flat, they have small slopes that can generate artifacts when you project them into a picture like this. Try to visualize the points in 3D, or to slice some range of the projected coordinate rather than the full obtained range. You can also try to reduce the range of the lidar to avoid trees and far objects introducing noise.

This is a plot of the point clouds merged over 400 frames for a lidar with a small range, in the curve of Town05: lidar_point_cloud

When projected, it is completely flat, because all the points are road points and, in this location, the road does not have a slope.

tiger-bug commented 3 years ago

Good afternoon @DSantosO

I will try to reproduce your results with my code and only use the first 400 frames and see what I get. The side view you see in my previous post is from Cloud Compare. It's a 2D projection of the left side view (or right side, I can't remember). I just noted the negative z values all seemed to be about -0.3 meters.

Just to note I am using CARLA 0.9.10. Here are my LiDAR specifications:

Points per second: 90000
Rotation frequency: 20 Hz
Upper FOV: 30 deg
Lower FOV: -10 deg
Range: 50
Location: 0, 0, 0
Rotation: 0, 0, 0

I believe everything else is at the default setting. Maybe this is making a difference, and I am not sure what your settings are. I will attempt to recreate what you did though and see what I get. Thanks again!

tiger-bug commented 3 years ago

This will probably give you the best image of what I am talking about. I changed the lower FOV to 0 degrees and saved after 400 frames, starting at the same location you mentioned in the previous post. Maybe see if you get the same result? I don't know what those rings are from. They look like ground points, but they are not on the ground.

0-lower-fov-high-view

Here is a zoomed in view

0-lower-fov-high-view-zoomed-in

Thanks again!

DSantosO commented 3 years ago

Hello @tiger-bug, those rings that you see along the route of the car are due to the discretization of the simulation. The lidar measurement is performed via raycasting once per frame from the sensor position, over an angle that depends on the rotation_frequency and the delta_time (for example, freq 20 Hz and dt = 0.05 will do a full 360-degree sweep per frame). In this case you are pasting several 360-degree sweeps together, so you are seeing this ring structure. If you want to minimize that, you can get a more realistic result by decreasing the delta_time, so a 360-degree sweep is done over several frames and therefore not from a single sensor position. I hope this helps you.
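(The relationship described above can be sketched in a couple of lines; the helper name is mine, not a CARLA API: the fraction of a full 360-degree sweep raycast in one simulation step is rotation_frequency x dt.)

```python
def sweep_fraction(rotation_frequency_hz, delta_seconds):
    """Fraction of a full 360-degree lidar sweep raycast in one
    simulation step; 1.0 means a complete revolution per frame,
    which produces the ring artifacts when frames are merged."""
    return rotation_frequency_hz * delta_seconds

# freq = 20 Hz, dt = 0.05 s -> a full sweep per frame (rings)
print(sweep_fraction(20, 0.05))   # 1.0
# freq = 20 Hz, dt = 0.005 s -> one tenth of a sweep per frame
print(sweep_fraction(20, 0.005))  # ~0.1
```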

tiger-bug commented 3 years ago

Good morning @DSantosO ,

Sounds good. I've been messing with it a little this morning and I think I might not have a full grasp of what you mean. I tried some of the settings and I am still getting the rings, so I believe I am doing something wrong. freq obviously refers to rotations per second of the scanner itself, but what does dt refer to? Does it refer to the settings.fixed_delta_seconds setting? I originally had that set to 0.05 seconds. Right now it seems dt = 1/freq, so should I have dt < 1/freq? I will play around with it more, but I feel like I'm missing something.

DSantosO commented 3 years ago

Hello @tiger-bug,

The dt is the delta time between frames, which is the same as fixed_delta_seconds if you have set it, or a variable one if you haven't; you can check here for more information about this and about the sync/async modes. The thing is that if you select dt = 1/rotation_frequency, the lidar will raycast a full 360-degree sweep per frame, and therefore these rings will appear when you merge all the steps together. If you select a different dt, for example dt = 0.1/rotation_frequency, the lidar will perform a full revolution in ten steps, so one revolution of the point cloud 'will have' different origins, getting closer to reality as you decrease dt. One thing you can do is to try different dt values and see if you get different results.

tiger-bug commented 3 years ago

Good morning @DSantosO ,

Sorry for the delayed response.

That makes more sense. Here is the command I have for changing dt using config.py in the utils folder.

python config.py -m Town05 --delta-seconds $dt, where dt is in {0.05, 0.005, 0.025, 0.0025}.

I maintained a frequency of 20 Hz for the scanner. Below are the images:

0.05-dt (1/freq)

0 05-dt

0.005-dt (0.1/freq)

0 005-dt

0.025-dt (0.5/freq)

0 025-dt

0.0025-dt (0.05/freq)

0 0025-dt

I'm still getting rings, but they change with the dt values. 0.5/freq makes the most sense (it does half a sweep per dt, so I'm getting half a ring); however, I thought for 0.1/freq I would only get 0.1 of the sweep per dt. Do these seem consistent? Thanks!

yasser-h-khalil commented 3 years ago

Hi,

I would like to mention for readers that having the word 'output' in the line el = PlyElement.describe(vertex, 'output') prevented my ply file from being visualized. I was using MeshLab. Replacing the word 'output' with 'vertex' fixed the issue.

Thanks, @tiger-bug for the ply file creation code.
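(For readers, the fix above amounts to naming the PLY element 'vertex' when building the structured array for plyfile; here is a small stand-alone sketch of that array construction, with stand-in data, and the plyfile calls shown as comments since they need the third-party plyfile package.)

```python
import numpy as np

# Stand-in for pnt_tot: an Nx4 float32 array of x, y, z, intensity.
pnt_tot = np.array([[0.0, 1.0, 2.0, 0.5],
                    [3.0, 4.0, 5.0, 0.9]], dtype=np.float32)

# Structured array for plyfile. Naming the element 'vertex' (instead
# of 'output') is what lets MeshLab visualize the resulting file.
vertex = np.array([tuple(row) for row in pnt_tot],
                  dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
                         ('Intensity', 'f4')])

# from plyfile import PlyData, PlyElement
# el = PlyElement.describe(vertex, 'vertex')   # 'vertex', not 'output'
# PlyData([el]).write('outfile.ply')
```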

DSantosO commented 3 years ago

Hello @tiger-bug, in principle it could be consistent, because the exact shape depends a lot on the car's trajectory and velocity. If you are not sure, you can check individual steps to see whether they are alright and whether their sum makes sense. For example, I have been playing with a fast car (around 100 km/h) and the rings show up very differently depending on the dt:

dt = 0.1 with full sweep per step (rot = 10 Hz): lidar_artifacts_10_1_1

dt = 0.1 with half sweep per step (rot = 5 Hz): lidar_artifacts_10_1_2

dt = 0.05 with half sweep per step (rot = 20 Hz): lidar_artifacts_20_1_1

For a slow car (20 km/h) and a dt = 0.1, here I did a full sweep per step (10 Hz): lidar_artifacts_slow_10_1_1

and here 1/10 of a sweep per step: lidar_artifacts_slow_10_1_10

As you can see, if you look closely enough, you are always going to see these artifacts due to the time discretization, but you can tune your dt to minimize them and make the cloud look 'continuous' at the scales you need.

tiger-bug commented 3 years ago

Good morning @DSantosO ,

Thank you so much for your help. I know this topic has been open for a while. I will go ahead and close the issue, as what you posted makes sense and it sounds like I just need to test more of the settings. I wanted to make sure this wasn't a bug or, more likely, a programming mistake on my part. If I have any additional questions (I don't think I will), may I post after it is closed, or is there a forum I can post to? I hate to take up more of your time since this isn't a bug.

Vishnu-sai-teja commented 2 months ago

Hey, what is this tool that you guys are using?