SeokjuLee / VPGNet

VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition (ICCV 2017)
MIT License
487 stars 165 forks

DataSet #2

Closed: sunpeng1996 closed this issue 4 years ago

sunpeng1996 commented 6 years ago

I am looking forward to seeing the dataset, but I don't understand the grid-level annotations. What do they mean?

BAILOOL commented 6 years ago

@sunpeng1996 , Usually segmentation is performed pixel-wise, meaning that every pixel has a specific class. In this paper, we utilized grid-wise annotation, which simply groups pixels into grid cells. Specifically, each grid cell is 8x8 pixels, so the segmentation is performed on grid cells rather than on individual pixels.
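
For illustration, here is a minimal Python sketch (not the authors' code) of what grid-wise annotation amounts to: a pixel-wise label mask is collapsed into 8x8 cells, each cell taking the majority class of its painted pixels. The function name and the majority-vote rule are assumptions made for the example.

```python
import numpy as np

def pixelwise_to_gridwise(mask, cell=8):
    """Collapse a pixel-wise label mask (H x W array of integer class ids,
    0 = background) into grid-cell labels by majority vote per 8x8 cell."""
    h, w = mask.shape
    grid = np.zeros((h // cell, w // cell), dtype=mask.dtype)
    for gy in range(h // cell):
        for gx in range(w // cell):
            block = mask[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell]
            labels = block[block > 0]              # ignore background pixels
            if labels.size:
                grid[gy, gx] = np.argmax(np.bincount(labels))
    return grid
```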

Regarding the dataset, please check the Readme file ("Dataset contact" section) for dataset questions. It turns out that Samsung wants to keep exclusive control over who gets the dataset. Good luck!

hexiangquan commented 6 years ago

How about the speed (FPS)? Did you run it on a Jetson TX1 or TX2?


SeokjuLee commented 6 years ago

@hexiangquan The forward pass time for the whole network (4 tasks: grid, objectness, multi-label, VPP) is about 30 ms on a single Titan X.

wsz912 commented 6 years ago

Sorry, sir. I can't send an email to sjlee@rcv.kaist.ac.kr. I get the following error:

Could not connect to SMTP host: 143.248.39.8, port: 25; nested exception is: java.net.ConnectException: Connection timed out

I want to apply for the dataset. What materials should I provide? Where and how can I answer the dataset questions?

I am looking forward to your reply.

Best wishes.

SeokjuLee commented 6 years ago

@wsz1029 Sorry for the mail problem. Our lab server was attacked and is currently being repaired. The dataset is held by Samsung. Please contact Tae-Hee Lee (th810.lee@samsung.com), Hyun Seok Hong (hyunseok76.hong@samsung.com), and Seung-Hoon Han (luoes.han@samsung.com).

daixiaogang commented 6 years ago

@BAILOOL , how did you do the grid-wise annotation? How long does it take to annotate one picture?

BAILOOL commented 6 years ago

@daixiaogang, the detailed explanation of "Data Collection and Annotation" is provided in Section 3.1 of the paper. Simply speaking, you mark the corner points of a lane or road marking to form a closed polygon. This closed region (when filled) is essentially the same as a pixel-wise annotation (the traditional annotation of segmentation datasets). Further, we wrote a script to transform this annotation into a grid-wise annotation (each grid cell is 8x8 pixels). The reason we did the annotation this way is that lanes and road markings can be easily bounded by a polygon, and it is a much faster technique than performing pixel-wise annotation.

Given that we have an internal tool (unfortunately not available for disclosure), polygon annotation takes about 1-5 minutes per image depending on complexity. The subsequent pixel-wise to grid-wise conversion is done automatically, almost instantaneously.
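
As a rough illustration of the pipeline described above (not the internal tool, which is not disclosed), the sketch below fills an annotated corner-point polygon into a pixel mask with OpenCV and then assigns the class to every 8x8 cell the region touches; the function name and the touch-any-pixel rule are assumptions.

```python
import numpy as np
import cv2  # used here only to rasterize the clicked polygon

def polygon_to_grid(points, class_id, img_hw, cell=8):
    """Fill a closed polygon (list of (x, y) corner points clicked by the
    annotator) into a pixel mask, then give every touched 8x8 cell the class."""
    h, w = img_hw
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(points, dtype=np.int32)], 1)
    grid = np.zeros((h // cell, w // cell), dtype=np.uint8)
    ys, xs = np.nonzero(mask)
    grid[ys // cell, xs // cell] = class_id       # any touched cell gets the class
    return grid

# e.g. a short stretch of lane marking annotated with four corner clicks:
# grid = polygon_to_grid([(100, 300), (120, 300), (140, 400), (118, 400)],
#                        class_id=1, img_hw=(480, 640))
```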

daixiaogang commented 6 years ago

@BAILOOL , thanks for your explanation. In your project, is each different lane regarded as an object? And will the dataset not be released?

SeokjuLee commented 6 years ago

@daixiaogang Could you explain the meaning of 'different lane as an object'? Each lane and road marking has a different shape, so they are hard to define by a single box (rectangle). That's the main reason why we use the grid annotation. Specifically, we define each object by a list of grids, such as [(x1 y1 x1' y1' c1), (x2 y2 x2' y2' c2), ...], where {x, y, x', y'} localize the position of each grid and {c} is the corresponding class. There are discontinuous lanes such as dashed lines; in this case, only the painted markings are regarded as one of the lane classes. The dataset will be available, but it is currently being reviewed by Samsung. They are undergoing a merger and restructuring, which is delaying the work.
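
A small illustrative snippet of this representation, assuming a hypothetical class id and made-up cell coordinates (not taken from the dataset):

```python
# One object = a list of grid cells; each cell is (x1, y1, x2, y2, class_id),
# where (x1, y1)-(x2, y2) are the pixel corners of one 8x8 cell.
LANE_DASHED = 2  # hypothetical class id

def grid_cell(gx, gy, class_id, cell=8):
    """Map a grid index (gx, gy) to its (x1, y1, x2, y2, class) tuple."""
    return (gx * cell, gy * cell, (gx + 1) * cell, (gy + 1) * cell, class_id)

# A dashed lane: only the painted dashes contribute cells, the gaps do not.
dashed_lane = [grid_cell(40, 50, LANE_DASHED),
               grid_cell(41, 51, LANE_DASHED),
               grid_cell(44, 54, LANE_DASHED)]  # unpainted gap before this cell
```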

YaoLing13 commented 6 years ago

Is the grid box output of VPGNet used for lane detection?

SeokjuLee commented 6 years ago

@xumi13 We didn't use the grids for lane detection. We elaborate on the post-processing for lane detection in our paper :)

daixiaogang commented 6 years ago

@SeokjuLee , I am sorry, my English is very poor. Thanks for your explanation.

qinghuizhao commented 6 years ago

Could you explain the meaning of "Vanishing Point"? And are there no public datasets?

SeokjuLee commented 6 years ago

@qinghuizhao Hello, the geometric meaning of a vanishing point is the point at which parallel lines in three-dimensional space appear to converge when projected onto a two-dimensional plane by graphical perspective. In our work, we define our "Vanishing Point (VP)" as the nearest point on the horizon where the lanes converge and disappear, predicted around the farthest point of the visible lane. You can see some examples of our "VP" annotation in this link (https://youtu.be/jnewRlt6UbI?t=16). There are several public vanishing point datasets, such as the Toulouse Vanishing Points Dataset, but the meaning of our "VP" is slightly different: ours assumes a "VP" in a driving road scenario.
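
As a purely geometric illustration (not the network's VPP branch), the vanishing point of a straight road can be approximated as the least-squares intersection of the annotated lane borders; the sketch below assumes the lanes are given as 2D line segments in image coordinates.

```python
import numpy as np

def lane_vanishing_point(segments):
    """Least-squares intersection of 2D lane segments, each given as
    ((x1, y1), (x2, y2)).  With two or more borders of a straight road,
    the solution approximates the vanishing point."""
    A, b = [], []
    for (x1, y1), (x2, y2) in segments:
        # line through the two points: nx*x + ny*y = c, (nx, ny) is the normal
        nx, ny = y2 - y1, x1 - x2
        A.append([nx, ny])
        b.append(nx * x1 + ny * y1)
    vp, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return vp  # (x, y) of the approximate vanishing point

# e.g. two lane borders converging toward the top of the image:
# print(lane_vanishing_point([((100, 480), (300, 250)), ((540, 480), (340, 250))]))
```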

gexahedron commented 6 years ago

Is there any news about releasing the dataset?

SeokjuLee commented 6 years ago

@gexahedron Sorry for the delay in publishing our dataset. We keep asking Samsung to publish it, and we hope it will be released soon.

YangShuoAI commented 6 years ago

@SeokjuLee Can you please tell me the details of train_list.txt and test_list.txt in make_lmdb.sh?

SeokjuLee commented 6 years ago

@KeplerCV The list files can be generated by 'caltech-lanes-dataset/vpg_annot_v1.m'. The format should be [img path] [num. of grids] [x_1 y_1 x'_1 y'_1] ... [x_n y_n x'_n y'_n], line by line. There is also a visualization part (commented out) for better understanding. Because our dataset is currently being reviewed by Samsung, we provide a baseline network that doesn't need VP annotations.
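
A minimal sketch of writing such a list file, following the format quoted above; the exact field layout (e.g., whether a class column is appended per grid) should be confirmed against vpg_annot_v1.m, and the paths are hypothetical.

```python
def write_list_file(path, samples):
    """samples: list of (img_path, grids), where grids is a list of
    (x1, y1, x2, y2) pixel boxes, one per 8x8 grid cell of every object."""
    with open(path, "w") as f:
        for img_path, grids in samples:
            fields = [img_path, str(len(grids))]
            for x1, y1, x2, y2 in grids:
                fields += [str(x1), str(y1), str(x2), str(y2)]
            f.write(" ".join(fields) + "\n")

# hypothetical example:
# write_list_file("train_list.txt",
#                 [("caltech-lanes-dataset/cordova1/f00000.png",
#                   [(320, 400, 328, 408), (328, 408, 336, 416)])])
```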

daixiaogang commented 6 years ago

@SeokjuLee , I have run caltech-lanes-dataset/vpg_annot_v1.m and got four *.txt files containing your labels. Can I use these to train your network? Do you have a pretrained model?

SeokjuLee commented 6 years ago

@daixiaogang Yes, the converted list files are for the toy example showing how we trained VPGNet. You can train with them (after LMDB parsing), but they are small sets, so the performance is not guaranteed. It is best to fine-tune, but for the pretrained model and dataset we need to get permission from Samsung. The reason I share this provisional code is that they are delaying the permissions.

qinghuizhao commented 6 years ago

Can you please share some training pictures?

SeokjuLee commented 6 years ago

@daixiaogang This issue is caused by the path of the list file. Please try "../../build/tools/convert_driving_data /home/swjtu/daixiaogang/VPGNet/caltech-lanes-dataset /home/swjtu/daixiaogang/VPGNet/caltech-lanes-dataset/cordova1.txt LMDB_train" after deleting the previous LMDB files.

daixiaogang commented 6 years ago

@SeokjuLee , I have solved this problem and run the code with the Caltech lanes dataset. If I use my own dataset, which only labels the right and left lane boundaries with (x1,y1,x1',y1'), (x2,y2,x2',y2'), what should I do to make labels that run with your code?

daixiaogang commented 6 years ago

@SeokjuLee , does your Caffe run in CPU mode? Your Makefile.config has USE_CUDNN commented out.

SeokjuLee commented 6 years ago

@daixiaogang In this case, you need to convert your annotation format to the VPGNet annotation method, the same way the Caltech annotations were converted. Please refer to our MATLAB code 'vpg_annot_v1.m' to make the grid annotations; it should be easy to follow with our visualization lines. About the cuDNN option: we trained in GPU mode but did not use cuDNN.
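
A rough sketch of one way to do that conversion, assuming each lane boundary is given as a single (x1, y1, x2, y2) segment: sample points densely along the segment and collect every 8x8 grid cell it passes through. This is not vpg_annot_v1.m, just an illustration of the idea.

```python
import numpy as np

def segment_to_grid_cells(x1, y1, x2, y2, class_id, cell=8):
    """Sample points densely along one labeled lane boundary segment and
    collect every 8x8 grid cell the segment passes through."""
    n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
    xs = np.linspace(x1, x2, n)
    ys = np.linspace(y1, y2, n)
    cells = set()
    for x, y in zip(xs, ys):
        gx, gy = int(x) // cell, int(y) // cell
        cells.add((gx * cell, gy * cell, (gx + 1) * cell, (gy + 1) * cell, class_id))
    return sorted(cells)

# cells = segment_to_grid_cells(100, 480, 320, 230, class_id=1)
```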

SeokjuLee commented 6 years ago

Please leave code/training questions in the CODE ISSUES panel (the third one).

SeokjuLee commented 6 years ago

@qinghuizhao The training curve depends on how you've arranged the dataset. I uploaded one example script, 'outputs/sj_show_plot.py'. Please refer to it.
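
If sj_show_plot.py is not at hand, a minimal stand-in that scrapes the loss from a standard Caffe training log and plots it might look like the sketch below; the log path and the exact log line format are assumptions.

```python
import re
import matplotlib.pyplot as plt

def plot_caffe_loss(log_path):
    """Scrape 'Iteration N ... loss = X' lines from a Caffe training log
    and plot the loss curve."""
    iters, losses = [], []
    pattern = re.compile(r"Iteration (\d+).*?loss = ([\d.eE+-]+)")
    with open(log_path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                iters.append(int(m.group(1)))
                losses.append(float(m.group(2)))
    plt.plot(iters, losses)
    plt.xlabel("iteration")
    plt.ylabel("training loss")
    plt.show()

# plot_caffe_loss("outputs/caffe_train.log")  # hypothetical log path
```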

daixiaogang commented 6 years ago

@SeokjuLee , is there any news about releasing the dataset?

llmettyll commented 6 years ago

Hi, I tested the VPGNet toy example with the Caltech lane DB using your Caffe project, and I found that the validation accuracy is around 98.9% with the default solver settings when I divided the dataset into a train set and a val set (8:2) with a shuffled list. However, the Caltech DB doesn't have the 'bb-label', 'pixel-label', 'size-label', and 'norm-label' information that you use in 'train_val.prototxt'. So I am very curious how these labels are annotated.

According to 'drive_data_layer.cpp' and 'caffe.proto' in your project, 'norm-label' and 'size-label' are first assigned to '1', '1/w', or '1/h', and they are partially changed to '-1' or '0' later in the code. On the other hand, I couldn't find how the 'bb-label' is annotated in 'output.txt' and delivered to 'drive_data_layer.cpp'.

Can you explain these?

Thanks :)

SeokjuLee commented 6 years ago

@llmettyll Hi, the drive data layer handles every label. If you look at the network diagram (train_val.png), you can follow the split outputs (type, data, and label). Therefore, to generate that information, only the list file is needed.

gexahedron commented 6 years ago

Could you send me your dataset, please, if it's possible? My email is ulyanick@gmail.com. Thanks!

aisichonzi007 commented 6 years ago

@SeokjuLee Hello, how do I open the .ccvl file?

xinping12345678 commented 6 years ago

Hello @gexahedron, did you get the dataset? If you did, I would be very grateful if you could send it to me.

chenchaohui commented 5 years ago

Hi SeokjuLee, from your paper we know that lanes are first annotated with a spline and then converted into grid-cell annotations using a MATLAB script you provided. But how should we annotate the zebra crossings, stop lines, and arrows on the road surface? Waiting for your help.

yurenwei commented 5 years ago

Excuse me, I want to ask about the labeling format of your dataset. That is, I want to use your network; how do I label my dataset?

leicao-me commented 5 years ago

Dear Authors,

Is there any update on releasing your dataset, please? Thank you

CountofMont commented 5 years ago

Could you send me your dataset, please, if it's possible? My email is lytao2013@gmail.com. Thank you very much!

hakillha commented 4 years ago

Hi, could anyone send me access to the dataset? My email is hakillha@outlook.com. I'd really appreciate it if someone could share the data!

aisichonzi007 commented 4 years ago

Sorry, my dataset belongs to our team.


SeokjuLee commented 4 years ago

Hi, the dataset is now available. Please fill out a form for the download link.