Closed xingbw closed 2 years ago
Thank you very much for your interest in our project!
In Section 4 of the paper we described that the dataset is collected with a privileged behavior agent in all 8 publicly available towns, under random weather conditions. It is very similar to how we collected the dataset for our prior work, World on Rails, which has that part of the code public.
Let me know if you have further questions.
Hi, when you collect the training data, does each route get a random weather chosen from the 21 available presets (including the night ones)? Is that right?
Sorry for the late reply. Yes, we choose a random weather for each trajectory we collect. It is not strictly 21 presets, though, because we also tweak the individual weather parameters.
Thank you for your reply. So what is the proportion of night weather? Is it roughly 1/3?
I don't have the exact number because we sample the sun altitude to control the time of day. You can do the same if you want a continuous range of daylight conditions.
So what is the range of sun altitude that I should sample from?
You can try -50 to 30
Edit: -50 to 50
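To make the sampling concrete, here is a minimal sketch of per-trajectory weather randomization. Only the sun-altitude range (roughly -50 to 50 degrees, with negative values giving night scenes) is confirmed in this thread; the other field names and ranges below are assumptions modeled on CARLA's `carla.WeatherParameters`, and the actual collector may tweak different parameters.

```python
import random

def sample_weather(rng):
    """Illustrative per-trajectory weather randomization.

    Only the sun-altitude range is confirmed in the thread; the other
    fields and their ranges are assumptions modeled on CARLA's
    carla.WeatherParameters.
    """
    return {
        'sun_altitude_angle': rng.uniform(-50.0, 50.0),  # confirmed range; < 0 gives night
        'cloudiness': rng.uniform(0.0, 100.0),           # assumed range
        'precipitation': rng.uniform(0.0, 100.0),        # assumed range
        'fog_density': rng.uniform(0.0, 30.0),           # assumed range
    }

params = sample_weather(random.Random(0))
assert -50.0 <= params['sun_altitude_angle'] <= 50.0
```

Sampling one such dict per trajectory (rather than per frame) matches the "random weather for each trajectory" description above.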
OK, thank you a lot.
After taking a detailed look at the WOR code:
with lmdb_env.begin(write=True) as txn:
    txn.put('len'.encode(), str(len(self.locs)).encode())

    # TODO: Save to wandb
    for i, (loc, rot, spd, act) in enumerate(zip(self.locs, self.rots, self.spds, self.acts)):
        txn.put(
            f'loc_{i:05d}'.encode(),
            np.ascontiguousarray(loc).astype(np.float32)
        )
        txn.put(
            f'rot_{i:05d}'.encode(),
            np.ascontiguousarray(rot).astype(np.float32)
        )
        txn.put(
            f'spd_{i:05d}'.encode(),
            np.ascontiguousarray(spd).astype(np.float32)
        )
        txn.put(
            f'act_{i:05d}'.encode(),
            np.ascontiguousarray(act).astype(np.float32)
        )
it only stores five keys in the dataset, but the LAV data has 16 keys:
['bbox', 'bra', 'cmd', 'id', 'len', 'lidar', 'loc', 'map', 'nxp', 'ori', 'rgb', 'sem', 'spd', 'tel', 'town', 'type']
For some of them I don't know the detailed generation steps, like bbox (how many vehicles should be included) and others. If possible, could you provide the code that generates the data for these keys? It is somewhat time-consuming to guess how LAV's dataset generation is set up.
Sure, I can refactor my data collection code and release it later. In the meantime you can just use the released dataset.
Also, the code you linked here is the random collector, aimed at training the dynamics model. The Q collector saves much more info.
Thanks for letting me know. I will wait for your data collection code and also look at the info the Q collector saves.
Are there any updates on the data collection script?
Sorry for the delay. I am very busy with grad school recently, but I shall be able to find some time in May to do this.
Sorry to bother you, but... are there any updates?
If the expert is hard to refactor, it is also OK to just release the data-producing code in the data.append and flush parts.
Hello,
Thank you for your patience. We have slightly updated inference-time code: I ported the point-painting code to GPU and fixed a bug that caused unnecessary vehicle collisions. I will publish the updated code, weights, and data collection code once the updated run finishes on the online leaderboard. Expect this to be done in 10 days. Ping me again if it is not updated by then.
Hello,
I am curious about data generation. There are two goals in the perception step: segmenting the roads and detecting vehicles. I went back to your previous WOR model and found that you use semantic camera data for the segmentation task. However, I didn't find how you generate labeled data for the vehicle detection task on the fused lidar data. Could you tell me how you get bounding-box labels for the painted lidar data?
Thank you for your instructive work. Is there any update to the data collection codes?
We have just released the v2 inference code with an option for improved speed. Sorry for the delay! The data collection is a bit messy since it involves modified leaderboard and scenario_runner code. I will work on cleaning it up, but this will probably live in a separate branch of the repo.
@dotchen Hello, would you mind telling me what 'nxp' stands for in the LAV data? Thank you for your time. You mentioned in your paper 'We use a standard RNN formulation [28, 38] to predict n = 10 future waypoints'. How many seconds do the 10 future waypoints cover?
The nxp is the next waypoint normalized to the current ego-car's frame. Also, sorry for the delay on the data collection code... I just finished cleaning it up, and it resides in the data-collect branch. It is still a bit messy, but please let me know if you have trouble with it.
Thank you very much for releasing the data collection code. I have a question. You mentioned in your paper 'We use a standard RNN formulation [28, 38] to predict n = 10 future waypoints'. How many seconds do these 10 future waypoints represent?
FPS=20 and we do a skip of 4, so 2.5s.
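Assuming "a skip of 4" means four frames are dropped between consecutive kept frames (i.e., a stride of 5 frames; this reading is what makes the arithmetic match the 2.5 s figure), the horizon works out as:

```python
fps = 20          # simulator frames per second (from the thread)
skip = 4          # frames skipped between consecutive kept frames (assumed reading)
n_waypoints = 10  # future waypoints predicted by the RNN head (from the paper)

stride = skip + 1                        # every 5th frame is kept
horizon_s = n_waypoints * stride / fps   # 10 * 5 / 20
assert horizon_s == 2.5
```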
You use uniplanner to predict trajectories 1 second ahead. Am I right?
At inference time the model predicts trajectories for all N timesteps for all vehicles, but yes, the aim-point location for ego-vehicle control corresponds to 1s into the future in the trajectory (for the leaderboard model it also depends on the high-level command).
Thank you for your clarification. It is very helpful. Thanks.
Thanks for your fantastic work! I have read the paper and noticed that the map prediction and detection in perception training are both supervised. Since I didn't find details about data generation in the paper, I am really curious how you generate the labeled data for such a large-scale dataset, or how you get the labels for the collected data.
Looking forward to your reply. Thanks very much!