muskie82 / AR-Depth-cpp

C++ implementation of Fast Depth Densification for Occlusion-aware Augmented Reality (SIGGRAPH-Asia 2018)
GNU General Public License v3.0

bad effect #3

Open muzizhi opened 4 years ago

muzizhi commented 4 years ago

When I run the C++ code from GitHub, I get really bad results, so I visualized the depth edges obtained by Canny together with the soft edges, and the result is very poor. At first I only changed the parameters according to the paper, specifically τhigh = 0.04, τlow = 0.01, τflow = 0.3, but the result is still bad. Sometimes it contains many texture edges; more often the depth edges it acquires are incomplete and large unrecognized blank areas appear. I need some help: did I make a mistake in the parameters, or is the code wrong? (screenshots attached)
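For reference, this is roughly the debugging sketch I use to compare the two edge maps side by side. It is not code from the repo: canny_edges and soft_edges are assumed to be the intermediates computed inside ARDepth.cpp (the Canny output and a CV_64FC1 soft-edge map in [0, 1]), and the two masks simply apply the paper's τlow/τhigh values directly.

#include <opencv2/opencv.hpp>

// Hypothetical debugging helper: show the Canny depth edges, the soft-edge map,
// and the soft edges thresholded at the paper's tau_low / tau_high, so it is
// easy to see which texture edges survive.
void debug_edges(const cv::Mat& canny_edges, const cv::Mat& soft_edges)
{
    const double tau_high = 0.04, tau_low = 0.01;  // values from the paper

    cv::Mat canny_8u, soft_8u, low_mask, high_mask;
    canny_edges.convertTo(canny_8u, CV_8UC1);                  // accept 8U or 64F input
    soft_edges.convertTo(soft_8u, CV_8UC1, 255.0);             // [0,1] -> [0,255]
    cv::compare(soft_edges, tau_low,  low_mask,  cv::CMP_GT);  // weak-edge mask
    cv::compare(soft_edges, tau_high, high_mask, cv::CMP_GT);  // strong-edge mask

    // Side-by-side view: depth edges | soft edges | weak mask | strong mask.
    cv::Mat panel;
    cv::hconcat(std::vector<cv::Mat>{canny_8u, soft_8u, low_mask, high_mask}, panel);
    cv::imshow("edge debug", panel);
    cv::waitKey(1);
}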

mpottinger commented 4 years ago

I think the edge detection code in this implementation is incomplete. I noticed that too.

muzizhi commented 4 years ago

Do you have any advice about this code?

mpottinger commented 4 years ago

@muzizhi Well, not really. I decided that it was still too slow anyway, even if it could be corrected.

It may be possible to speed the code up to real time (30 fps), but it is difficult.

ARCore will have a depth-from-motion API that should give results similar to this, and I have already achieved better results with a ToF depth sensor.

muzizhi commented 4 years ago

Well, I actually have two more questions. First: will the Python version give better results than the C++ one, or do they just look similar? Second: I am also a little confused about the evaluation. No code is provided for that section, so I want to write it myself by referring to the paper, but it is difficult to understand. For example, for the occlusion error the paper says: "We extract a profile of 10 depth samples {di} perpendicular to the edge." How should the samples be chosen? Also, the occlusion.png in the annotations is almost blank. May I refer to your evaluation code for better testing?
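For what it's worth, the sketch below is how I currently interpret that sentence: take the gradient at the edge pixel as the edge normal and sample the densified depth map at unit-pixel steps, five on each side. The function name, the inputs (depth, grad_x, grad_y), and the unit-step choice are my own assumptions, not anything from the authors' code.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Sample 10 depth values perpendicular to a depth edge at pixel (x, y).
// depth is the densified depth map (CV_64FC1); grad_x / grad_y are gradients
// of the edge map, used to estimate the direction normal to the edge.
std::vector<double> depth_profile(const cv::Mat& depth,
                                  const cv::Mat& grad_x, const cv::Mat& grad_y,
                                  int x, int y)
{
    // The gradient direction is perpendicular to the edge itself.
    double gx = grad_x.at<double>(y, x), gy = grad_y.at<double>(y, x);
    double norm = std::max(std::hypot(gx, gy), 1e-9);
    double nx = gx / norm, ny = gy / norm;

    std::vector<double> profile;
    for (int i = -5; i < 5; ++i) {  // 10 samples: 5 on each side of the edge
        int sx = std::clamp(cvRound(x + i * nx), 0, depth.cols - 1);
        int sy = std::clamp(cvRound(y + i * ny), 0, depth.rows - 1);
        profile.push_back(depth.at<double>(sy, sx));
    }
    return profile;
}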

mpottinger commented 4 years ago

@muzizhi Yes I think the Python version does seem to provide better results, however the Python code is also 100x slower or more, and not suitable for mobile apps.

Sorry that is about all I know right now, I have moved on to other solutions since playing around with it.

muzizhi commented 4 years ago

thx

limacv commented 4 years ago

It seems that the author tried to reproduce the paper's modified Canny detection but did not finish it, so OpenCV's stock Canny detection is used instead. I found that the solution below approximates the paper's use of the soft edges reasonably well, by using the overload of OpenCV's Canny that takes precomputed gradients. Here is what I tried, and the result looks better.

In ARDepth.cpp, around line 480, replace the Canny() call with:

cv::Mat edges;
{
    cv::Mat grad_x, grad_y;
    // Horizontal and vertical Sobel gradients of the keyframe image (16-bit signed, 5x5 kernel).
    cv::Sobel(base_img, grad_x, CV_16S, 1, 0, 5);
    cv::Sobel(base_img, grad_y, CV_16S, 0, 1, 5);
    // Scale each gradient vector by the corresponding soft-edge value, so that
    // edges without soft-edge support are weakened before hysteresis.
    auto elem_mul = [&](cv::Vec3s& val, const int* pos) {
        val *= soft_edges.at<double>(pos[0], pos[1]);
    };
    grad_x.forEach<cv::Vec3s>(elem_mul);
    grad_y.forEach<cv::Vec3s>(elem_mul);
    // Canny overload taking precomputed x/y gradients; it applies non-maximum
    // suppression plus hysteresis thresholding (80/300).
    cv::Canny(grad_x, grad_y, edges, 80, 300);
    edges.convertTo(edges, CV_64FC1);
}
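The intent is that scaling the Sobel gradients by the soft-edge values suppresses the gradient magnitude on texture edges before Canny's non-maximum suppression and hysteresis, which roughly plays the role of the modified Canny described in the paper. This is only my approximation, and the 80/300 thresholds were picked by hand, so they may need tuning for other sequences.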
Tord-Zhang commented 3 years ago

It seems that the result is temporally unstable.