Thang1703hrsh opened this issue 1 year ago
I'm training mono-semantic-maps on the full nuScenes dataset with a 16GB NVIDIA T4 for 200 epochs, which takes about 460 hours (roughly 2.3 hours per epoch). I expect translating-images-into-maps to take even longer. On the mini dataset, translating-images-into-maps took about 7 hours for 600 epochs under the default configuration.
Thank you very much. With the mini dataset, are the prediction results good or not?
I am working on a thesis related to this topic. Could I ask for your code as a reference? Thank you very much.
mono-semantic-maps url: https://github.com/tom-roddick/mono-semantic-maps
You'd better train with the full dataset; predictions from a model trained on the mini set are terrible.
Do you use a personal computer or rent a platform? I was able to run it, but Colab Pro isn't enough for one month because it only provides 100 compute units.
I'm working on a server at the office when no other job is running on it. In fact, you don't need to run all 200 epochs in one go; after training about 10 epochs you already get a testable model. By saving and reloading checkpoints you can keep an intermediate model around for testing.
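A minimal sketch of that save/reload idea (not the repo's actual training script; the model, optimizer, and file name below are placeholders):

```python
import torch

# Placeholder model/optimizer; swap in the ones built by the repo's train script.
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

CKPT_PATH = "checkpoint_epoch10.pth"  # placeholder file name

def save_checkpoint(epoch):
    # Store everything needed to resume training later.
    torch.save({
        "epoch": epoch,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
    }, CKPT_PATH)

def load_checkpoint():
    # Reload the weights/optimizer and continue from the stored epoch.
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    return ckpt["epoch"] + 1

start_epoch = 0
# start_epoch = load_checkpoint()  # uncomment to resume an interrupted run
for epoch in range(start_epoch, 200):
    ...  # one training epoch
    if (epoch + 1) % 10 == 0:
        save_checkpoint(epoch)
```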
Hi, I bought a Google Colab Pro account but it still doesn't meet the requirements. Could you share some checkpoints with me? Thank you very much.
Tomorrow when I arrive at the office, I'll try to post a checkpoint to Hugging Face. It's trained on the full nuScenes dataset.
Here: https://huggingface.co/roam/TIIM/tree/main It's only trained for 4 epochs, so don't expect too much from the results.
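In case it helps, a small sketch for pulling a file from that Hugging Face repo with `huggingface_hub`; the checkpoint file name below is a placeholder, so check the repo's file list for the real one:

```python
import torch
from huggingface_hub import hf_hub_download

# "checkpoint.pth" is a placeholder; use the actual file name listed at
# https://huggingface.co/roam/TIIM/tree/main
ckpt_path = hf_hub_download(repo_id="roam/TIIM", filename="checkpoint.pth")

ckpt = torch.load(ckpt_path, map_location="cpu")
print(ckpt.keys())  # inspect what the checkpoint contains before loading it
```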
Thank you very much, nice to meet you
@basbaba can you share your nuScenes conversion code, please?
- There is a pull request (New DataLoader) for this project; you can find it here: https://github.com/avishkarsaha/translating-images-into-maps/pulls
- I modified a few things in the New DataLoader.
- The modified code is zipped and pushed to https://huggingface.co/roam/TIIM/tree/main
- After downloading the full nuScenes dataset, remember to pre-process it using https://github.com/tom-roddick/mono-semantic-maps
- Import dataloader_new.py to load data from the processed dataset (see the sketch below this list).
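Roughly how it gets wired in; the dataset class name and constructor arguments below are hypothetical, so check dataloader_new.py in the zip for the real interface:

```python
from torch.utils.data import DataLoader

# Hypothetical names: replace NuScenesMapsDataset and its arguments with
# whatever dataloader_new.py actually exposes.
from dataloader_new import NuScenesMapsDataset

dataset = NuScenesMapsDataset(
    root="/data/nuscenes",   # full dataset, pre-processed with mono-semantic-maps
    split="train",
)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)

for batch in loader:   # exact batch structure depends on the dataset class
    print(type(batch))
    break
```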
@basbaba How can I produce lane data?
If you mean how to get lanes from the datasets, I have no idea. In nuScenes, there is a definition: NUSCENES_CLASS_NAMES = [ 'drivable_area', 'ped_crossing', 'walkway', 'carpark', 'car', 'truck', 'bus', 'trailer', 'construction_vehicle', 'pedestrian', 'motorcycle', 'bicycle', 'traffic_cone', 'barrier' ]. 'drivable_area', 'ped_crossing', and 'walkway' may be what you want.
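For example, if the pre-processed ground truth is a stack of per-class masks ordered like NUSCENES_CLASS_NAMES (an assumption about your label format), you can pull out the road-layout channels by index:

```python
import numpy as np

NUSCENES_CLASS_NAMES = [
    'drivable_area', 'ped_crossing', 'walkway', 'carpark', 'car', 'truck',
    'bus', 'trailer', 'construction_vehicle', 'pedestrian', 'motorcycle',
    'bicycle', 'traffic_cone', 'barrier',
]

def select_classes(label_mask: np.ndarray, wanted: list) -> np.ndarray:
    """label_mask: (num_classes, H, W) binary ground truth, assumed to be
    ordered like NUSCENES_CLASS_NAMES. Returns only the requested channels."""
    idx = [NUSCENES_CLASS_NAMES.index(name) for name in wanted]
    return label_mask[idx]

# Example: keep only the static road-layout classes.
dummy = np.zeros((len(NUSCENES_CLASS_NAMES), 196, 200), dtype=np.uint8)
layout = select_classes(dummy, ['drivable_area', 'ped_crossing', 'walkway'])
print(layout.shape)  # (3, 196, 200)
```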
How did you go about getting the ground-truth images? I did the same with https://github.com/tom-roddick/mono-semantic-maps, but it is not the same as the ground truth in the mini set. Can you give me an example image of your ground truth? Thank you very much.
Almost the same
@basbaba do you have Facebook, Skype, or any other social network? I want to discuss the datasets and how to train the models; there are some points I'm a bit confused about. Thank you very much.
Thank you, I am ducthang170301@gmail.com
You can contact me here: https://t.me/ducthang1703
@basbaba How can I produce lane data?
Recently I've been working on the nuScenes map dataset and found the lane data in its NuScenesMap class:
def _load_layers(self): ... self.lane = self._load_layer('lane')
You can get it through NuScenesMap or directly from the map JSON file.
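A small sketch with the nuScenes devkit map expansion (the dataroot, map name, and patch coordinates are placeholders):

```python
from nuscenes.map_expansion.map_api import NuScenesMap

# dataroot/map_name are placeholders; the map expansion must be unpacked
# into <dataroot>/maps/expansion/.
nusc_map = NuScenesMap(dataroot='/data/nuscenes', map_name='singapore-onenorth')

# Lane records loaded by _load_layers(); each record is a dict with a 'polygon_token'.
print(len(nusc_map.lane))

# Rasterize the lane layer around a map location into a binary mask.
patch_box = (300.0, 1700.0, 100.0, 100.0)   # x_center, y_center, height, width (m)
masks = nusc_map.get_map_mask(patch_box, patch_angle=0.0,
                              layer_names=['lane'], canvas_size=(200, 200))
print(masks.shape)  # (1, 200, 200)
```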
Let me ask: if you train the model with the full nuScenes data, what configuration do you need? I run out of RAM when training; I only have 13GB of RAM, and I use Colab or Kaggle.
Thank you very much