naurril / SUSTechPOINTS

3D Point Cloud Annotation Platform for Autonomous Driving
GNU General Public License v3.0

Image with bounding box #146

Closed AgapeGithub closed 1 year ago

AgapeGithub commented 1 year ago

It's not an issue but a question. Could it be possible:

Thank you very much for the great work

naurril commented 1 year ago

For the first feature, you can use the fusion branch. For the second, we don't have it for now, but you can generate such images with Python scripts (refer to the tools in the src folder).
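
For reference, here is a rough sketch of what such a script could look like (this is not the actual tool under src; the calibration matrices, paths, and box values are placeholders to replace with your own). It takes a 3D box in LiDAR coordinates, projects its eight corners into the image with a LiDAR-to-camera extrinsic and a camera intrinsic matrix, and draws the edges with OpenCV:

import numpy as np
import cv2

def box_corners(center, size, yaw):
    # Eight corners of a box in LiDAR coordinates (x forward, y left, z up).
    l, w, h = size
    x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2
    y = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2
    z = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * h / 2
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                    [np.sin(yaw),  np.cos(yaw), 0],
                    [0,            0,           1]])
    return (rot @ np.vstack([x, y, z])).T + np.array(center)

def project(points, extrinsic, intrinsic):
    # Project Nx3 LiDAR points to pixel coordinates (pinhole model).
    pts = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coordinates
    cam = (extrinsic @ pts.T)[:3]                           # LiDAR frame -> camera frame
    img = intrinsic @ cam                                   # camera frame -> image plane
    return (img[:2] / img[2]).T                             # perspective divide

# Placeholder extrinsic: the common axis swap (camera x = -LiDAR y,
# camera y = -LiDAR z, camera z = LiDAR x). Replace with your real calibration.
extrinsic = np.array([[ 0., -1.,  0., 0.],
                      [ 0.,  0., -1., 0.],
                      [ 1.,  0.,  0., 0.],
                      [ 0.,  0.,  0., 1.]])
intrinsic = np.array([[1000.,    0., 640.],
                      [   0., 1000., 360.],
                      [   0.,    0.,   1.]])

image = cv2.imread('example_scene/camera/front/000000.jpg')   # placeholder path
corners = project(box_corners([10, 0, -1], [4.5, 1.8, 1.6], 0.3), extrinsic, intrinsic)

# 12 edges of the box: bottom face, top face, and the four vertical edges.
edges = [(0,1),(1,2),(2,3),(3,0),(4,5),(5,6),(6,7),(7,4),(0,4),(1,5),(2,6),(3,7)]
for i, j in edges:
    p1 = (int(corners[i][0]), int(corners[i][1]))
    p2 = (int(corners[j][0]), int(corners[j][1]))
    cv2.line(image, p1, p2, (0, 255, 0), 2)
cv2.imwrite('front_with_box.jpg', image)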

AgapeGithub commented 1 year ago

Thank you very much ^^ I will check that and come back to you if I have any issues.

AgapeGithub commented 1 year ago

Hey @naurril

I hope you're doing well. As a follow-up to the question, I would like to ask if there is a method to export the annotations to the KITTI 3D object detection format. Alternatively, would it be possible for you to assist me in converting the current export format into the KITTI 3D object detection format (a little Python script would be really appreciated)?

I have attempted to carry out the conversion to obtain all eight corner coordinates of the bounding box. However, I have encountered some difficulties with the rotations about the x, y, and z axes.

Thank you very much for your understanding

naurril commented 1 year ago

Hi, as you know, the KITTI 3D object detection format uses the camera's coordinate system rather than the LiDAR's. So if you have a camera and want to save your labels in that camera's coordinate system, you have to convert both the coordinate system (with the extrinsic calibration matrix; a minimal sketch of that is right below) and the format. But if you just need the format and can go without a specific camera, you can easily do it as in the conversion script at the end of this comment.
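
If you do have a real camera, a minimal sketch of that conversion could look like this (the function name is mine, and the extrinsic must be your calibrated 4x4 LiDAR-to-camera matrix). It transforms the box center with the extrinsic and recomputes the heading as KITTI's rotation_y, i.e. the angle of the box's forward direction in the camera x-z plane:

import numpy as np

def lidar_box_to_camera(position, yaw, extrinsic):
    # position: (x, y, z) box center in LiDAR coordinates.
    # yaw: rotation of the box about the LiDAR z axis.
    # extrinsic: 4x4 LiDAR-to-camera transform from calibration.
    center_cam = (extrinsic @ np.array([*position, 1.0]))[:3]
    # Rotate the box's forward direction into the camera frame and measure
    # its angle in the camera x-z plane (KITTI's rotation_y convention,
    # where rotation_y = 0 points along the camera x axis).
    heading = extrinsic[:3, :3] @ np.array([np.cos(yaw), np.sin(yaw), 0.0])
    rotation_y = -np.arctan2(heading[2], heading[0])
    return center_cam, rotation_y

With the simple axis-swap extrinsic assumed in the script below (camera x = -LiDAR y, camera y = -LiDAR z, camera z = LiDAR x), this reduces to exactly the hard-coded formulas there, including rotation_y = -yaw - pi/2. Note that KITTI's location field is the bottom center of the box, so you would transform the LiDAR point (x, y, z - h/2) rather than the geometric center if you need that exactly.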

Note also that KITTI labels contain fields (truncated, occluded, alpha, and bbox) that our annotation tool doesn't provide directly; we leave them as 0 in the following script.

I don't see why you need all eight coordinates, since KITTI does not use them, but you can check this code as a reference implementation.

import os
import json
import math
import argparse

parser = argparse.ArgumentParser(description='convert labels to KITTI format')
parser.add_argument('src', type=str, nargs='?', default='./data', help="source data folder")
parser.add_argument('tgt', type=str, nargs='?', default='./data_kitti', help="target folder")
parser.add_argument('--scenes', type=str, default='.*', help="scene name pattern (not used by this minimal script)")
parser.add_argument('--frames', type=str, default='.*', help="frame name pattern (not used by this minimal script)")

args = parser.parse_args()

scenes = os.listdir(args.src)

for s in scenes:
    labels = os.listdir(os.path.join(args.src, s, 'label'))
    for l in labels:
        with open(os.path.join(args.src, s, 'label', l)) as fin:
            label = json.load(fin)

        # Some label files wrap the object list in an 'objs' field.
        if 'objs' in label:
            label = label['objs']

        output_path = os.path.join(args.tgt, s, 'label_kitti')
        if not os.path.exists(output_path):
            os.makedirs(output_path)

        with open(os.path.join(output_path, os.path.splitext(l)[0] + ".txt"), 'w') as fout:
            for obj in label:
                # KITTI line: type, truncated, occluded, alpha, bbox (4 values),
                # dimensions (h, w, l), location (x, y, z), rotation_y.
                # truncated/occluded/alpha/bbox are not annotated by the tool, so they stay 0.
                # LiDAR -> KITTI camera axes: x_cam = -y_lidar, y_cam = -z_lidar, z_cam = x_lidar.
                # KITTI's location is the bottom center of the box, hence the +0.5*h term on y.
                line = "{} 0 0 0 0 0 0 0 {} {} {} {} {} {} {}\n".format(
                    obj['obj_type'],
                    obj['psr']['scale']['z'],   # h
                    obj['psr']['scale']['y'],   # w
                    obj['psr']['scale']['x'],   # l
                    -obj['psr']['position']['y'],                                   # x
                    -obj['psr']['position']['z'] + 0.5*obj['psr']['scale']['z'],    # y (box bottom)
                    obj['psr']['position']['x'],                                    # z
                    -obj['psr']['rotation']['z'] - math.pi/2,                       # rotation_y
                )

                fout.write(line)
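
And since you mentioned difficulty with the rotations: below is a rough sketch (the function names are mine, not from the tool) of turning a label's psr into the eight corner coordinates in LiDAR space. I'm assuming the rotation matrix is composed as Rz @ Ry @ Rx from the label's Euler angles; if your boxes only rotate about z (the usual case), the x and y terms simply drop out.

import numpy as np

def rotation_matrix(rx, ry, rz):
    # Assumed composition order Rz @ Ry @ Rx; verify against the tool's geometry
    # code if your labels have non-zero x/y rotations.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def psr_to_corners(psr):
    # Returns an 8x3 array of corners in LiDAR coordinates, centered on psr['position'].
    l, w, h = psr['scale']['x'], psr['scale']['y'], psr['scale']['z']
    # Corner offsets in the box frame: x along length, y along width, z along height.
    signs = np.array([[ 1,  1,  1], [ 1, -1,  1], [-1, -1,  1], [-1,  1,  1],
                      [ 1,  1, -1], [ 1, -1, -1], [-1, -1, -1], [-1,  1, -1]])
    local = signs * np.array([l, w, h]) / 2.0
    R = rotation_matrix(psr['rotation']['x'], psr['rotation']['y'], psr['rotation']['z'])
    center = np.array([psr['position']['x'], psr['position']['y'], psr['position']['z']])
    return local @ R.T + center

# Usage on one object from a label file:
# corners = psr_to_corners(obj['psr'])   # rows are (x, y, z) in LiDAR coordinates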
AgapeGithub commented 1 year ago

Thank you very much for your help ^^

I will test this code ;)

Thank you again