mengmengliu1998 / LAformer

[CVPRW 2024] Official PyTorch Implementation of "LAformer: Trajectory Prediction for Autonomous Driving with Lane-Aware Scene Constraints"
Apache License 2.0
113 stars · 13 forks

preprocess on nuscenes #11

Open KYYOnull opened 5 months ago

KYYOnull commented 5 months ago

maps/
├── basemaps/
├── expansion/
├── prediction/

Should there be files in these three dirs? I have .pngs and .jsons in them, but when preprocessing I get an error like this:

python src/datascripts/dataloader_nuscenes.py --DATAROOT .\nuScenes\ --STOREDIR .\preprocessed\

Traceback (most recent call last):
  File "src/datascripts/dataloader_nuscenes.py", line 959, in <module>
    dataset.extract_multiprocess()
  File "src/datascripts/dataloader_nuscenes.py", line 895, in extract_multiprocess
    each.start()
  File "D:\work\anaconda\envs\car\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\work\anaconda\envs\car\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\work\anaconda\envs\car\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\work\anaconda\envs\car\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\work\anaconda\envs\car\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'NuScenesData.extract_multiprocess.<locals>.run'

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\work\anaconda\envs\car\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\work\anaconda\envs\car\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

marcelmmm commented 4 months ago

The problem is the run function defined inside the extract_multiprocess function: AttributeError: Can't pickle local object 'NuScenesData.extract_multiprocess.<locals>.run'. Pickle cannot handle local (nested) functions. It should work if you move the definition of run outside of extract_multiprocess, e.g. make it a method of the class:
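For illustration (my addition, not from the thread), a minimal standalone repro of the pickling failure:

    # Minimal repro: pickle serializes functions by reference, so a function
    # defined inside another function cannot be looked up at top level and
    # fails to pickle. This is exactly what spawn-based multiprocessing
    # attempts to do with the worker target.
    import pickle

    def outer():
        def run():
            pass
        pickle.dumps(run)

    outer()  # AttributeError: Can't pickle local object 'outer.<locals>.run'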

# Excerpt from the NuScenesData class; assumes the module-level imports
# import multiprocessing, os, pickle, zlib
# from multiprocessing import Process
# from tqdm import tqdm

def run(self, queue: multiprocessing.Queue, queue_res: multiprocessing.Queue):
    # Each worker pulls its process id from the queue and handles its slice.
    process_id = queue.get()
    if process_id == self.args['cores'] - 1:
        # The last worker picks up the remainder of the data.
        li = list(range(process_id * self.data_per_core, self.data_num))
    elif process_id is not None:
        li = list(range(process_id * self.data_per_core, (process_id + 1) * self.data_per_core))

    for idx in tqdm(li):
        a = self.__getitem__(idx)
        if not self.args['img_only'] and a is not None:
            # Compress each sample and send it back to the parent process.
            data_compress = zlib.compress(pickle.dumps(a))
            queue_res.put(data_compress)

def extract_multiprocess(self):
    """
    The parallel process of extracting data.
    """
    ex_list = []

    queue = multiprocessing.Queue(self.args['cores'])
    queue_res = multiprocessing.Queue()

    # self.run is now a bound method of the class, which spawn can pickle.
    processes = [Process(target=self.run, args=(queue, queue_res)) for _ in range(self.args['cores'])]

    for each in processes:
        each.start()

    # Hand out one process id per worker.
    for i in range(self.args['cores']):
        queue.put(i)

    # Wait until every worker has picked up its id.
    while not queue.empty():
        pass

    save_pbar = tqdm(range(self.data_num))
    if not self.args['img_only']:
        for _ in save_pbar:
            try:
                a = queue_res.get(block=True, timeout=20)
                ex_list.append(a)
                save_pbar.update(1)
            except Exception:
                # Timeout: the workers have stopped producing results.
                break

    for each in processes:
        each.join()

    print("all processes ended")

    print("length of ex_list: ", len(ex_list))
    if not self.args['img_only']:
        os.makedirs(self.data_dir, exist_ok=True)
        if 'train' in self.args['split']:
            with open(os.path.join(self.data_dir, 'ex_list'), 'wb') as f:
                pickle.dump(ex_list, f)
        elif 'val' in self.args['split']:
            with open(os.path.join(self.data_dir, 'eval.ex_list'), 'wb') as f:
                pickle.dump(ex_list, f)
    print("dump finished!")
KYYOnull commented 1 week ago

No use, with this method it still ends with EOFError: Ran out of input. By the way, I tried the preprocessing code of several other works, but they all failed too. I wonder why there is no good preprocessing method for the nuScenes dataset to get trajectory inputs.

KYYOnull commented 1 week ago

====== Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 35.018 seconds.
======
Reverse indexing ...
Done reverse indexing in 7.1 seconds.
======
length of data: 32186

Traceback (most recent call last):
  File ".\src\datascripts\dataloader_nuscenes.py", line 952, in <module>
    dataset.extract_multiprocess()
  File ".\src\datascripts\dataloader_nuscenes.py", line 888, in extract_multiprocess
    each.start()
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
OSError: [Errno 22] Invalid argument

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\gcn\anaconda3\envs\autobots\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
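A possible cause, offered as a guess rather than a confirmed diagnosis: with spawn, Process(target=self.run, ...) pickles the entire self into each child, including the fully loaded NuScenes object, and OSError: [Errno 22] Invalid argument raised from ForkingPickler.dump on Windows is a known symptom of writing a very large (multi-GB) pickle to the child process. One workaround sketch, passing only lightweight arguments across the process boundary and rebuilding the heavy object inside the worker (worker and the constructor call are hypothetical names, not the repo's API):

    # Hypothetical sketch: keep the heavy dataset object out of the pickle
    # sent to each child; only a small dict of plain values crosses over.
    from multiprocessing import Process, Queue

    def worker(args_dict, queue, queue_res):
        # Rebuild the dataset inside the child; each worker reloads the
        # NuScenes tables itself instead of receiving them via pickle.
        dataset = NuScenesData(args_dict)  # hypothetical constructor call
        dataset.run(queue, queue_res)

    # processes = [Process(target=worker, args=(args_dict, queue, queue_res))
    #              for _ in range(args_dict['cores'])]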