Luo1Cheng / LC2WF


How to get json files from obj files (output of Line3D)? #6


OmarAhmadin commented 1 year ago

Hi there,

First of all, I want to thank you for this fine piece of work. Secondly, I have my own dataset; I ran Line3Dpp over it and got the .obj files. I want to ask how to convert those .obj files into the json files that you use to read the dataset and run inference with your model. The json file has 8 variables:

```
junc3DList:  N x 3, 3D junctions in the world coordinate system
edgeList:    N x 2 (idx1, idx2), edges of the line cloud; idx1/idx2 are indices into junc3DList
junc2DList:  N x 2, 2D junctions in a certain image/view
viewList:    N x 1, 2D junction No.i corresponds to image/view viewList[No.i]; viewList is 1-based. This parameter comes from Line3Dpp
label:       N x 1, junction No.i corresponds to junction No.label[i] of the ground-truth wireframe; -1 means the junction is noise
objGTJunc3D: N x 3, ground-truth junctions of the 3D wireframe
objGTSeg3D:  N x 6, ground-truth segments of the 3D wireframe; the 6 values are the xyz of the two endpoints
line_idx:    ground-truth edges of the 3D wireframe
```

How can I convert my .obj file to such a json file?

Thanks in advance

Luo1Cheng commented 1 year ago

Line3Dpp outputs 3D lines after clustering. We use the 3D lines before Line3Dpp's clustering because they are denser.

The *.json file contains information about the 3D lines, the detected 2D lines of each view, and the labels of the 3D lines. If you just want to test your own 3D lines with our method, you don't need the annotation information; you only need to feed the 3D lines from Line3Dpp into the network. Note that the pre-trained model is sensitive to the input 3D lines.

Our *.json files are generated from: 1. the 3D lines output by Line3Dpp before clustering; 2. the detected 2D lines of the multi-view images; 3. the ground-truth annotations of the multi-view images; 4. the camera poses of the multi-view images. You can find more details in Sec. 4 of the supplementary materials.
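For the no-annotation case, a conversion can be as small as the sketch below. This is only a sketch, not our actual generation code: it assumes your Line3Dpp .obj stores vertices as `v x y z` and segments as 1-based `l i1 i2 ...` records, and it leaves all annotation fields empty; `obj_to_json` is a hypothetical helper.

```python
# Sketch: build a minimal LC2WF-style json from a Line3Dpp .obj line cloud.
# Only junc3DList/edgeList are filled; annotation fields stay empty.
import json

def obj_to_json(obj_path, json_path):
    junc3d, edges = [], []
    with open(obj_path) as f:
        for raw in f:
            tok = raw.split()
            if not tok:
                continue
            if tok[0] == "v":                        # vertex: v x y z
                junc3d.append([float(t) for t in tok[1:4]])
            elif tok[0] == "l":                      # segment/polyline, 1-based indices
                idx = [int(t) - 1 for t in tok[1:]]  # convert to 0-based
                edges.extend([a, b] for a, b in zip(idx, idx[1:]))
    data = {
        "junc3DList": junc3d,
        "edgeList": edges,
        "junc2DList": [], "viewList": [], "label": [],
        "objGTJunc3D": [], "objGTSeg3D": [], "line_idx": [],
    }
    with open(json_path, "w") as f:
        json.dump(data, f)
```

Whether the dataloader tolerates the empty annotation fields is something you'd have to verify against the loading code; the evaluation scripts do expect ground truth.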

OmarAhmadin commented 1 year ago

Hi @Luo1Cheng,

Thanks for your reply. I have a better understanding now, but I wonder how you extracted the 3D lines before clustering from Line3Dpp. In their code, the clustering is essential to get the 3D lines. The function below shows the sequence for extracting the 3D lines: after clusterSegments(), they optimize the clusters with optimizeClusters(), then compute the 3D lines with computeFinal3Dsegments(), and finally filter out the tiny segments with filterTinySegments(). So the question now is: how do you take the 3D lines before clustering, as you mentioned? Is it before filterTinySegments()? I don't think so, and otherwise the main clustering is essential for the whole code to run. Any thoughts on that?

void Line3D::reconstruct3Dlines(const unsigned int visibility_t, const bool perform_diffusion,
                                    const float collinearity_t, const bool use_CERES,
                                    const unsigned int max_iter_CERES)
            // ... (elided) ...
            std::cout << prefix_ << "matrix diffusion..." << std::endl;
            performRDD();
        }

        // cluster matrix
        std::cout << prefix_ << "clustering segments..." << std::endl;
        clusterSegments();

        global2local_.clear();
        local2global_.clear();

        // optimize
        if(use_CERES_)
        {
            std::cout << prefix_ << "optimizing 3D lines..." << std::endl;
            optimizeClusters();
        }

        // compute final 3D segments
        std::cout << prefix_ << "computing final 3D lines..." << std::endl;
        computeFinal3Dsegments();

        clusters3D_.clear();

        // filter tiny (noisy) segments
        std::cout << prefix_ << "filtering tiny segments..." << std::endl;
        filterTinySegments();

        std::cout << prefix_ << "3D lines: total=" << lines3D_.size() << std::endl;

        // untranslate
        untranslate();

        view_reserve_mutex_.unlock();
        view_mutex_.unlock();
    }

Luo1Cheng commented 1 year ago

Q1: How to take the 3D lines before clustering, as you mentioned?

According to the paper and the code:

In the code, matchImages() corresponds to Sec. 3.1-3.3 of the paper. Its output is estimated_position3D_, which is defined in line3D.h. These are the 3D lines before clustering.

Then the clustering begins: computingAffinityMatrix(), clusterSegments(), ..., which correspond to Sec. 3.4-3.5.

So I use estimated_position3D_ as the output before clustering:

    std::vector<std::pair<L3DPP::Segment3D,L3DPP::Match> > estimated_position3D_;

Each element of this vector contains a matched pair of 2D lines and the 3D line computed from that pair.

Q2: The main clustering is essential for the whole code to run

The clustering is essential for Line3Dpp to produce reasonable results. I think Line3Dpp can be roughly divided into two steps: line-based reconstruction and heuristic clustering. We use the reconstructed 3D lines before clustering because 1. we think the results before clustering contain more information; 2. after clustering the results are sparser, it is hard to distinguish meaningful 3D lines from noise, and some structural lines go missing. In the paper we compare with Line3Dpp's results after clustering because we regard Line3Dpp's heuristic clustering as a kind of line-abstraction method.

OmarAhmadin commented 1 year ago

Hi @Luo1Cheng,

Thank you very much for your response.

I generated the 3D lines before clustering as suggested; you can see them rendered in MeshLab in Fig. 1:

[image] (Fig. 1. Output of Line3D++ before clustering)

I ran into some errors in ours_eval.py, such as:


Traceback (most recent call last):
  File "/LC2WF/eval_results/ours_eval.py", line 410, in <module>
    dynamicMatchV2()
  File "/LC2WF/eval_results/ours_eval.py", line 211, in dynamicMatchV2
    e5(predXYZ,sC,sW,wireframeJunc,wireframeLine)
  File "/LC2WF/eval_results/metric.py", line 304, in __call__
    N, _,_ = gtCombineXYZ.shape  # N,2,3
ValueError: not enough values to unpack (expected 3, got 2)
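One way I tried to work around this unpack (only a sketch; it assumes gtCombineXYZ is meant to be N x 2 x 3 but arrives flattened as N x 6, or empty when there are no annotations, and safe_unpack_gt is a hypothetical helper, not code from metric.py):

```python
# Hypothetical guard around the unpack in metric.py, not repo code.
import numpy as np

def safe_unpack_gt(gtCombineXYZ):
    gt = np.asarray(gtCombineXYZ, dtype=float)
    if gt.size == 0:
        return None                  # no annotations: skip the GT metric
    if gt.ndim == 2 and gt.shape[1] == 6:
        gt = gt.reshape(-1, 2, 3)    # two xyz endpoints per GT segment
    N, _, _ = gt.shape               # now guaranteed (N, 2, 3)
    return gt
```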

I tried to avoid such errors with guards like the sketch above, because I don't have annotations: I only run inference with your method on my own dataset, which has no ground truth. But the output looks like this (Fig. 2):

[image] (Fig. 2. Output of LC2WF)

Any insights from your side?

Thanks in advance