HITwyx opened 2 years ago
Hi,
Thank you for your reply!
1. I've downloaded the nuscenes_data several times, but it still doesn't work. This error always occurs:
Traceback (most recent call last):
  File "/home/wyx/Downloads/translating-images-into-maps-main/train.py", line 945, in <module>
    main()
  File "/home/wyx/Downloads/translating-images-into-maps-main/train.py", line 932, in main
    train(args, train_loader, model, optimizer, epoch)
  File "/home/wyx/Downloads/translating-images-into-maps-main/train.py", line 53, in train
    for i, ((image, calib, grid2d), (cls_map, vis_mask)) in enumerate(dataloader):
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
    return self._process_data(data)
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
    data.reraise()
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
    raise exception
PIL.UnidentifiedImageError: Caught UnidentifiedImageError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/wyx/Downloads/translating-images-into-maps-main/src/data/dataloader.py", line 144, in __getitem__
    image = Image.open(io.BytesIO(value)).convert(mode='RGB')
  File "/home/wyx/miniconda3/lib/python3.9/site-packages/PIL/Image.py", line 3008, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f716c06f360>
I think it happens because of this part of the code (src/data/dataloader.py, lines 128-145):
sample_token = self.tokens[index]
sample_record = self.nusc.get("sample", sample_token)
cam_token = sample_record["data"]["CAM_FRONT"]
cam_record = self.nusc.get("sample_data", cam_token)
cam_path = self.nusc.get_sample_data_path(cam_token)
id = Path(cam_path).stem

# Load intrinsics
calib = self.nusc.get(
    "calibrated_sensor", cam_record["calibrated_sensor_token"]
)["camera_intrinsic"]
calib = np.array(calib)

# Load input images
image_input_key = pickle.dumps(id)
with self.images_db.begin() as txn:
    value = txn.get(key=image_input_key)
image = Image.open(io.BytesIO(value)).convert(mode='RGB')
I have tried many times to get the image_input_key, but the value is always None. Is there any way to fix it? Or is the lmdb file OK? I have checked the libraries I have installed: torch.__version__ 1.11.0, cv2.__version__ 4.6.0, numpy.__version__ 1.22.3, pickle.format_version 4.0, shapely.__version__ 1.8.2, lmdb.__version__ 1.3.0. Are these versions OK?
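One quick sanity check is to reproduce, outside the dataloader, the key bytes it would compute and look at their pickle protocol header. This is a minimal standalone sketch (the file path is a hypothetical example of the shape nusc.get_sample_data_path() returns, not repository code):

```python
import pickle
from pathlib import Path

# Hypothetical CAM_FRONT path, shaped like the output of nusc.get_sample_data_path()
cam_path = "samples/CAM_FRONT/n008-2018-05-21-11-06-59-0400__CAM_FRONT__1526915243012465.jpg"
sample_id = Path(cam_path).stem  # filename without the extension

key = pickle.dumps(sample_id)  # uses your Python version's default pickle protocol
print(sample_id)
print(key[:2])  # pickle header: b'\x80\x03' means protocol 3, b'\x80\x04' means protocol 4
```

If the header printed here differs from the header of the keys actually stored in the LMDB, every txn.get() lookup will miss and return None.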
2. Other than that, are there any models available, such as trained or pre-trained models? Thank you!
The first question has been solved. This error occurred because of the pickle.dumps() protocol version. When I changed the protocol version to 3, everything worked fine. Thanks again for the excellent work!
I met a similar problem too. The small difference is that "image_input_key" is not None in my workspace; however, the "value" fetched from txn is always None. The solution is to change the protocol version to 3 in pickle.dumps(). To be clearer, the error lies in dataloader.py/nuScenesMaps:

# Load input images
image_input_key = pickle.dumps(id)

For those who also meet this error, please modify the code to:

# Load input images
image_input_key = pickle.dumps(id, protocol=3)

so that the encoded key matches the keys stored in images_db, for example:

b'\x80\x03X:\x00\x00\x00n008-2018-05-21-11-06-59-0400__CAM_FRONT__1526915243012465q\x00.'
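The mismatch can be reproduced with nothing but the standard library. A minimal sketch (the sample id is the one from the stored key shown above; nothing here is repository code):

```python
import pickle

sample_id = "n008-2018-05-21-11-06-59-0400__CAM_FRONT__1526915243012465"

key_p3 = pickle.dumps(sample_id, protocol=3)
key_default = pickle.dumps(sample_id)  # protocol 4 or newer on Python >= 3.8

print(key_p3)           # b'\x80\x03X:\x00\x00\x00n008-...q\x00.' -- matches the stored key
print(key_default[:2])  # different header, so an exact-bytes txn.get() lookup returns None
```

LMDB compares keys as raw bytes, so even though both keys unpickle to the same string, only the protocol-3 encoding matches what was written into the database.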
Agreed: add protocol=3 to pickle.dumps at lines 142 and 153 in dataloader.py.
@ziyan0302 I have changed the protocol version to 3 in pickle.dumps(), but the "value" fetched from txn is still always None.
You can debug your dataloader in a notebook like below:
from src.data.dataloader import nuScenesMaps

train_data = nuScenesMaps(
    root='nuscenes_data',
    split='train_mini',
    grid_size=(50.0, 50.0),
    grid_res=0.5,
    classes=[
        "drivable_area",
        "ped_crossing",
        "walkway",
        "carpark_area",
        "road_segment",
        "lane",
        "bus",
        "bicycle",
        "car",
        "construction_vehicle",
        "motorcycle",
        "trailer",
        "truck",
        "pedestrian",
        "trafficcone",
        "barrier",
    ],
    dataset_size=0.2,
    desired_image_size=[1600, 900],
    mini=True,
    gt_out_size=(100, 100),
)
Then call this line and keep debugging until no error is reported:
train_data[0]
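If changing the protocol still leaves value as None, one way to narrow the problem down is to try every pickle protocol until the key bytes match whatever the database was written with. This is a hedged sketch, not repository code: find_value is a hypothetical helper, and the dict below stands in for an open LMDB transaction (for a real one, pass lambda k: txn.get(k)):

```python
import pickle

def find_value(get, sample_id, protocols=range(6)):
    """Try pickling sample_id with each protocol until the store returns a hit.

    `get` is any callable mapping raw key bytes to a value or None,
    e.g. `lambda k: txn.get(k)` for an open LMDB transaction.
    Returns (value, protocol) on a hit, or (None, None) if nothing matches.
    """
    for proto in protocols:
        key = pickle.dumps(sample_id, protocol=proto)
        value = get(key)
        if value is not None:
            return value, proto
    return None, None

# Stand-in for an LMDB transaction whose keys were written with protocol 3.
store = {pickle.dumps("some_sample_id", protocol=3): b"jpeg-bytes"}
value, proto = find_value(store.get, "some_sample_id")
print(proto)  # 3
```

If no protocol at all produces a hit, the keys in the database are probably not pickled strings of this id, which points at a corrupt or incomplete lmdb download rather than a protocol mismatch.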
I shall release a dataloader that works without lmdb soon; this should make things easier.
@HITwyx @ziyan0302 @howardchina Hello, have you trained the model on the dataset? Could you share the weights? I have trained on the mini nuScenes and the inference results are very poor.
@avishkarsaha Did you use the stock config for the G.T. generation from [mono-semantic-maps](https://github.com/tom-roddick/mono-semantic-maps)? It appears that the resolution differs at 196 x 200 (perhaps the map extent you used is [-25., 0., 25., 50.]?).
Is the correct approach just to use data_generation.py, or is that not fully developed yet? I see it also produces the lidar ray mask in a different way than [mono-semantic-maps](https://github.com/tom-roddick/mono-semantic-maps).
Hi,
- Please download the mini again from the google drive link, it took a while to upload the lmdb for CAM_FRONT, so it might not have finished uploading at the time you tried to download.
- train.py has a validation function as well as functions for loading pretrained models. If you have a trained model in the experiment directory, it will be loaded automatically, and you can then continue training or run inference. To run inference on a pretrained model, simply comment out the train function. I will be uploading a separate inference-only file soon.
- To train on the bigger dataset the procedure is exactly the same and uses the same dataloader. However, the ground truth maps will have to be generated first. I'll provide details on this soon, but in the meantime follow the ground truth generation procedure here: https://github.com/tom-roddick/mono-semantic-maps. You can also then use their dataloader.
Can you release the code for generating the ground truth maps?
Hi. I meant to share the weight file with you, but I didn't save the checkpoint. I can only show you the IoU at epoch 600 on the validation dataset (val_ious.txt), as below: Epoch: 600, Total Loss: 35.372528076171875, s200_ious_per_class: [0.66846114 0.03846689 0.03922446 0.05169741 0. 0. 0.00104679 0. 0. 0. 0. 0.
How do I run inference with the model I trained?
Thank you for sharing! I have some questions about the code:
When I run train.py, this error occurred: lmdb.Error: /home/###/Downloads/translating-images-into-maps-main/nuscenes_data/lmdb/samples/CAM_FRONT: No such file or directory. It comes from lmdb.open; I guess it is caused by a missing data.mdb in nuscenes_data/samples/CAM_FRONT.
Where is the validation file? Something like validation.py?
How do I train on the bigger dataset, e.g. v1.0-trainval? Is there any README about the dataloader?
Thanks again for sharing! Looking forward to your reply.