w2kun / RNW

This is the official repository for "Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark" (ICCV 2021).
MIT License

About producing the ground-truth depth of the Oxford RobotCar dataset #2

Closed xxlbigbrother closed 3 years ago

xxlbigbrother commented 3 years ago

Nice work! Can I ask how to create the depth ground truth for the RobotCar dataset? I could not find the official toolboxes. Thanks very much!

w2kun commented 3 years ago

Hi, the official toolbox for RobotCar is at https://github.com/ori-mrg/robotcar-dataset-sdk. You can use python/build_pointcloud.py to build a point cloud, then project it onto a 2D grid to acquire the depth map.
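In outline, the pipeline looks like this (a minimal sketch; the paths and the timestamp are placeholders, and python_scripts is the SDK's python directory, as in the full script below):

import numpy as np

from python_scripts.build_pointcloud import build_pointcloud
from python_scripts.camera_model import CameraModel
from python_scripts.transform import build_se3_transform

# placeholder paths and timestamp; adapt them to your RobotCar layout
laser_dir, poses_file = 'lms_front/', 'gps/ins.csv'
extrinsics_dir, models_dir, image_dir = 'extrinsics/', 'camera_models/', 'stereo/left/'
ts = 1418757500000000  # reference timestamp in microseconds

# 1. accumulate the laser scans around ts into one point cloud (pose-source frame)
pointcloud, reflectance = build_pointcloud(laser_dir, poses_file, extrinsics_dir,
                                           ts - 2e6, ts + 2e6, ts)

# 2. move the points into the camera frame via the camera and INS extrinsics
model = CameraModel(models_dir, image_dir)
with open(extrinsics_dir + model.camera + '.txt') as f:
    G_camera_vehicle = build_se3_transform([float(x) for x in next(f).split(' ')])
with open(extrinsics_dir + 'ins.txt') as f:
    G_camera_ins = G_camera_vehicle * build_se3_transform([float(x) for x in next(f).split(' ')])
pointcloud = np.dot(G_camera_ins, pointcloud)

# 3. project to pixels; rasterizing (uv, depth) onto a 2D grid gives the depth map
uv, depth = model.project(pointcloud, (959, 1279, 3))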

xxlbigbrother commented 3 years ago

> Hi, the official toolbox for RobotCar is at https://github.com/ori-mrg/robotcar-dataset-sdk. You can use python/build_pointcloud.py to build a point cloud, then project it onto a 2D grid to acquire the depth map.

Thanks for your reply! I will try to build the point cloud and acquire the depth map. Also, if you have saved the depth ground truth and it is convenient, would you mind sharing a Baidu Yun or Google Drive link in this repo?

w2kun commented 3 years ago

I can't directly distribute the materials produced from RobotCar. You can refer to our script to prepare the ground truth.

import os
import re
from multiprocessing.pool import Pool

import numpy as np

from python_scripts.build_pointcloud import build_pointcloud
from python_scripts.camera_model import CameraModel
from python_scripts.transform import build_se3_transform

# date of data
_DATA_DATE = '2014-12-16-18-44-24'
# image directory
_IMG_DIR = '/test/Dataset/Oxford_RobotCar_Raw/06/{}/stereo/left/'.format(_DATA_DATE)
# laser directory
_LASER_DIR = '/test/Dataset/Oxford_RobotCar_Raw/laser/{}/lms_front/'.format(_DATA_DATE)
# poses file
_POSES_FILE = '/test/Dataset/Oxford_RobotCar_Raw/gps/{}/gps/ins.csv'.format(_DATA_DATE)
# raw test timestamps
_RAW_TEST_TIMESTAMPS = '{}_test_timestamps.txt'.format(_DATA_DATE)
# camera models directory
_CAMERA_MODEL_DIR = 'camera_models/'
# sensor extrinsics directory
_EXTRINSICS_DIR = 'extrinsics/'
# image shape; the -1 keeps rounded (u, v) indices inside the 960 x 1280 depth map
_IMAGE_SHAPE = (960 - 1, 1280 - 1, 3)
# number of workers
_NUM_WORKERS = 4
# output directory
_OUT_DIR = 'outputs/'  # '/test/Dataset/Oxford_RobotCar_Processed/{}/depth_new/'.format(_DATA_DATE)
# time span (microseconds) of laser data accumulated on each side of a frame
_TIME_SPAN = 4e6

def get_static_frames(fn):
    # parse the timestamp file into chunks; each chunk is wrapped in '-----'
    # lines and every entry inside reads '# <timestamp> ...'
    frames = []
    with open(fn, 'r') as fo:
        while True:
            line = fo.readline()
            if not line:
                break
            if not line.startswith('-----'):
                continue
            chunk = []
            while True:
                line = fo.readline()
                if line.startswith('-----'):
                    frames.append(chunk)
                    break
                if not line.startswith('# '):
                    raise Exception('Incorrect format.')
                line = int(line[2:].split()[0])
                chunk.append(line)
    return frames

def get_valid_frames(frames, time_span):
    # keep only timestamps with a full time span of laser data on both sides
    result = []
    for chunk in frames:
        start, end = chunk[0], chunk[-1]
        for item in chunk:
            if item - time_span >= start and item + time_span <= end:
                result.append(item)
    return result

def process_and_save(ts: int):
    # accumulate the laser scans within [ts - _TIME_SPAN, ts + _TIME_SPAN] into
    # a point cloud expressed in the pose-source frame at time ts
    point_cloud, reflectance = build_pointcloud(_LASER_DIR, _POSES_FILE, _EXTRINSICS_DIR, ts - _TIME_SPAN,
                                                ts + _TIME_SPAN, ts)

    # move the points into the camera frame
    point_cloud = np.dot(G_camera_pose_source, point_cloud)

    # project to pixel coordinates (uv) and per-point depths
    uv, depth = model.project(point_cloud, _IMAGE_SHAPE)

    # create the depth map, keeping the nearest depth when several points fall on one pixel
    h, w, _ = _IMAGE_SHAPE
    uv = np.round(uv).astype(int)  # np.int is deprecated and removed in NumPy >= 1.24
    depth_map = np.zeros((h + 1, w + 1), dtype=np.float32)
    for i in range(uv.shape[1]):
        u, v, d = uv[0, i], uv[1, i], depth[i]
        if depth_map[v, u] <= 0:
            depth_map[v, u] = d
        else:
            depth_map[v, u] = min(d, depth_map[v, u])

    # output depth map
    fn = '{}_depth.npy'.format(ts)
    np.save(os.path.join(_OUT_DIR, fn), depth_map, allow_pickle=False)
    print('{} saved.'.format(fn))

if __name__ == '__main__':
    model = CameraModel(_CAMERA_MODEL_DIR, _IMG_DIR)

    extrinsics_path = os.path.join(_EXTRINSICS_DIR, model.camera + '.txt')
    with open(extrinsics_path) as extrinsics_file:
        extrinsics = [float(x) for x in next(extrinsics_file).split(' ')]

    G_camera_vehicle = build_se3_transform(extrinsics)
    G_camera_pose_source = None

    poses_type = re.search(r'(vo|ins|rtk)\.csv', _POSES_FILE).group(1)
    if poses_type in ['ins', 'rtk']:
        with open(os.path.join(_EXTRINSICS_DIR, 'ins.txt')) as extrinsics_file:
            extrinsics = next(extrinsics_file)
            G_camera_pose_source = G_camera_vehicle * build_se3_transform([float(x) for x in extrinsics.split(' ')])
    else:
        # VO frame and vehicle frame are the same
        G_camera_pose_source = G_camera_vehicle

    # read timestamps
    valid_timestamps = get_valid_frames(get_static_frames(_RAW_TEST_TIMESTAMPS), _TIME_SPAN)
    # make sure the output directory exists
    os.makedirs(_OUT_DIR, exist_ok=True)
    # pool
    pool = Pool(_NUM_WORKERS)
    # process; note that apply_async silently discards worker exceptions
    for timestamp in valid_timestamps:
        pool.apply_async(process_and_save, (timestamp,))
    # wait
    pool.close()
    pool.join()
    # print
    print('Total: {}.'.format(len(valid_timestamps)))

The python_scripts package corresponds to the python directory of the official SDK.
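To sanity-check an output, the saved maps can be loaded back like this (a small sketch; the timestamp is a placeholder):

import os

import numpy as np

out_dir = 'outputs/'
ts = 1418757500000000  # placeholder timestamp

depth_map = np.load(os.path.join(out_dir, '{}_depth.npy'.format(ts)))
print(depth_map.shape)  # (960, 1280), float32, zeros where there is no laser return
valid = depth_map > 0
print('coverage: {:.1%}, range: {:.1f}-{:.1f} m'.format(
    valid.mean(), depth_map[valid].min(), depth_map[valid].max()))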

xxlbigbrother commented 3 years ago

> I can't directly distribute the materials produced from RobotCar. You can refer to our script to prepare the ground truth. [...]

thanks!!!

mzy97 commented 2 years ago

Hi, about processing depth, I have two questions:

  1. In the script above, how was _RAW_TEST_TIMESTAMPS generated, and what is a chunk used for?
  2. How should _TIME_SPAN be chosen? When the value is large, some occluded pixels appear when projecting the accumulated point cloud into the image, as in the visualization I ran (the red house is supposed to be hidden behind the wall, but a large _TIME_SPAN makes it appear, giving an invalid GT depth image; one possible filter is sketched below). [screenshot: the occluded building bleeding through the wall in the projected depth] @w2kun
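One common mitigation for this artifact (not part of the authors' script) is to drop points that are much farther than the nearest projected depth in a small pixel neighborhood before rasterizing. A rough sketch, where filter_occluded, window, and rel_tol are illustrative names and uv/depth come from model.project:

import numpy as np

def filter_occluded(uv, depth, image_shape, window=5, rel_tol=0.1):
    # rasterize the nearest depth per pixel, then discard points much farther
    # than the minimum depth within a (window x window) neighborhood
    uv, depth = np.asarray(uv), np.asarray(depth).ravel()
    h, w = image_shape[0] + 1, image_shape[1] + 1
    u = np.round(uv[0]).ravel().astype(int)
    v = np.round(uv[1]).ravel().astype(int)
    zbuf = np.full((h, w), np.inf)
    np.minimum.at(zbuf, (v, u), depth)
    pad = window // 2
    padded = np.pad(zbuf, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    local_min = windows.min(axis=(2, 3))
    keep = depth <= local_min[v, u] * (1.0 + rel_tol)
    return uv[:, keep], depth[keep]

A larger window or rel_tol removes more occluded background points at the cost of thinning out valid returns near depth discontinuities.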
mzy97 commented 2 years ago

And it would be a great help if you could provide the already-processed data used in this paper. @w2kun

zxcqlf commented 2 years ago

> And it would be a great help if you could provide the already-processed data used in this paper. @w2kun

Hi, see https://github.com/zxcqlf/RobotCar_DepthGT_Generate; it may help you.

huynhbaobk commented 1 year ago

> Hi, about processing depth, I have two questions: [...]

Hi @mzy97, could you show how you generate the ground truth? I tried https://github.com/ori-mrg/robotcar-dataset-sdk but got a wrong result. [screenshot of the incorrect depth map]

hurish-tian commented 1 year ago

Hello! May I ask where you got the raw timestamps? According to your get_static_frames(fn) function, '2014-12-16-18-44-24_test_timestamps.txt' has a specific format that uses '-----' lines to cut the timestamps into chunks, but I didn't find any .txt timestamp files containing '-----' on the official website. Could you please tell me where to find it? Thanks!!!
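Judging from the parser, the expected layout is roughly the following, with each chunk wrapped in '-----' lines (the timestamp values here are made up):

-----
# 1418757500000000
# 1418757500250000
# 1418757500500000
-----

-----
# 1418757600000000
# 1418757600250000
-----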

breknddone commented 1 year ago

> Hi, see https://github.com/zxcqlf/RobotCar_DepthGT_Generate; it may help you.

thank u!