yanconglin / Deep-Hough-Transform-Line-Priors

Official implementation for Deep-Hough-Transform-Line-Priors (ECCV 2020)
MIT License

HT space to image space #3

Closed: hichem-abdellali closed this issue 3 years ago

hichem-abdellali commented 4 years ago

Hi @yanconglin ,

I would like to kindly ask how we can move from your HT space to image space. Also, which corner of the image is set as the origin (or is it the centre of the image)?

It would be kind of you to explain this or provide code for it. Specifically, I wanted to draw the lines obtained after the HT module, but it gets complicated: given rho and theta, I wanted to write code to draw the line.

Thanks in advance

yanconglin commented 4 years ago
  1. To save memory, I used the centre of the image as the origin. See here for details: https://github.com/yanconglin/Deep-Hough-Transform-Line-Priors/blob/812d8c21e98e7b11b2e81b88faad8c8709cbd364/ht-lcnn/lcnn/models/HT.py#L48
  2. Say (x_theta, y_rho) is the HT bin you want. Simply do vote_matrix[:, :, y_rho, x_theta] and you have the line in image space: this is the binary mask of the line at (x_theta, y_rho). See the sketch below.
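
A minimal sketch of both points, assuming the centre-origin parametrization rho = x*cos(theta) + y*sin(theta); the bin resolutions and the number of rho bins below are placeholders, the actual discretization is defined in HT.py:

```python
import numpy as np

# Hypothetical discretization; the real values are defined in HT.py.
H, W = 128, 128       # spatial size of the feature map / vote matrix
theta_res = 3.0       # degrees per theta bin (placeholder)
rho_res = 1.0         # pixels per rho bin (placeholder)
num_rho_bins = 183    # placeholder; depends on the image diagonal

def bin_to_params(x_theta, y_rho):
    """Map a HT bin index (x_theta, y_rho) to (theta, rho), with rho measured from the image centre."""
    theta = np.deg2rad(x_theta * theta_res)
    rho = (y_rho - num_rho_bins // 2) * rho_res
    return theta, rho

def line_mask_from_params(theta, rho, height, width, thickness=1.0):
    """Rasterize the line rho = x*cos(theta) + y*sin(theta), origin at the image centre."""
    ys, xs = np.mgrid[0:height, 0:width]
    xc = xs - (width - 1) / 2.0    # shift so (0, 0) is the centre pixel
    yc = ys - (height - 1) / 2.0
    dist = np.abs(xc * np.cos(theta) + yc * np.sin(theta) - rho)
    return (dist <= thickness).astype(np.uint8)

theta, rho = bin_to_params(x_theta=10, y_rho=95)
mask = line_mask_from_params(theta, rho, H, W)   # binary image of the line

# Equivalent shortcut if you already have the precomputed vote matrix
# (shape (H, W, num_rho_bins, num_theta_bins) in this sketch):
# mask = vote_matrix[:, :, y_rho, x_theta]
```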
hichem-abdellali commented 4 years ago

Hi @yanconglin ,

Thanks a lot for your reply. May I ask again what out[::-1] and y refer to in line 185 of hourglass_ht.py, and what they are expected to contain? Thanks again.

yanconglin commented 4 years ago

"y" is the backbone features that are used in the line vectorization part. "out" is a list where each element contains a tensor composed of [lmpa, jmap, joff]. "[::-1]" means the order is reversed. Originally, the out[0] is from stack 1, and out[1] is from stack 2. Now it is the other way around. In the follwing procedure, the predictions from the second stack are used, which means out[::-1][0]. IMHO, the HG model is quite complex, not so easy for beginners.

yanconglin commented 3 years ago

Feel free to reopen this issue if you have other questions.