Project-10 / DGM

Direct Graphical Models (DGM) is a cross-platform C++ library for Conditional Random Fields, optimized for parallel computing, with modules for feature extraction, classification and visualization.
http://research.project-10.de/dgm/

terminate called after throwing an instance of 'cv::Exception' #32

Closed: I3aer closed this issue 3 years ago

I3aer commented 3 years ago

Hi Sergey,

I wrote a custom dense graph class similar to GraphDenseKit. When I test that code, the decoding step throws the following exception:

terminate called after throwing an instance of 'cv::Exception' what(): OpenCV(4.5.1-dev) /home/baer/opencv/modules/core/src/arithm.cpp:669: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'

I attached my header and cpp files.

Thank you !

my code.pdf

ereator commented 3 years ago

Hi,

my first impression is that this is not a bug, but a misuse of one of the library calls. The OpenCV error says that the sizes of some arguments in a function call should match, but they don't.

It is just a guess, but maybe the problem is in the line byte n_states = 1;. A classification task is defined by the number of classes (states) to predict. With a single state, all you can ever get from the classification is that same single class, so it makes no sense to set this variable to 1; it should be 2 (for a binary classification problem) or larger. This could also cause the above-mentioned problem in OpenCV.
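
To illustrate (a minimal sketch, not taken from your attached code, assuming the Mat(nNodes, nStates, CV_32FC1) layout that the node-potential functions expect):

#include <opencv2/core.hpp>

using namespace cv;

typedef unsigned char byte;           // stand-in for DGM's byte type

int main()
{
    const byte n_states = 2;          // e.g. background / foreground
    const int  nNodes   = 9 * 9;      // one node per pixel of a 9x9 image

    // one row per node, one column per state
    Mat pots(nNodes, n_states, CV_32FC1);
    pots.col(0).setTo(Scalar(0.6f));  // p(background) for every node
    pots.col(1).setTo(Scalar(0.4f));  // p(foreground) = 1 - p(background)

    return 0;
}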

To be sure, I would need to run the code, but I can't, because the depth_regression.h file is missing. If I could, I would run it in "Debug" mode and check the call stack to identify in which exact OpenCV function call the error occurs and which of its arguments does not match.

I3aer commented 3 years ago

Hi,

Thanks for your reply. I've set n_states = 2 and created a potential matrix (called unary_pots in my unit_test.cpp) of size [2x81]: the first column holds the foreground probabilities and the second column the background probabilities (1 - foreground probabilities). I passed that matrix to the add_unary_pot function, which sets the graph nodes through the addNodes function of the class CGraphDense. However, the unit test still fails at the decoding line and throws the same exception. In the documentation of the addNodes function the size of the potential Mat is given as [nNodes x nStates], but my potential Mat has size [2x81], where 81 is the number of pixels (nodes). If I transpose it beforehand, the push_back call inside addNodes throws a mismatch error, so I keep the [2x81] potential matrix (unary_pots).
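
For reference, a minimal standalone sketch (OpenCV only, not the full unit test) of how unary_pots is built; note that cv::Mat::size() prints dimensions as [cols x rows], so an 81-row, 2-column Mat is reported as [2 x 81]:

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    const int rows = 9, cols = 9;

    // foreground probabilities, one per pixel (zeros here, just for the sketch)
    Mat unary_fg_states(rows, cols, CV_32FC1, Scalar(0));

    // flatten the [9x9] image into 81 rows, one row per graph node
    unary_fg_states = unary_fg_states.reshape(0, rows * cols);

    // one column per state: p(fg) and 1 - p(fg) -> 81 rows x 2 columns
    Mat unary_pots;
    hconcat(unary_fg_states, 1 - unary_fg_states, unary_pots);

    std::cout << unary_pots.size() << std::endl;  // prints [2 x 81] (cols x rows)
    std::cout << unary_pots.rows << " nodes x " << unary_pots.cols << " states" << std::endl;

    return 0;
}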

depth_regression.h is the header file that declares the class Foo. The part that follows it in the pdf is its cpp file, and the final part is unit_test.cpp. My updated unit_test.cpp is given below:

#include <iostream>
#include <string>
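// depth_regression.h is assumed to pull in the DGM and OpenCV headers (Mat, Size, byte, vec_byte_t, ...)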
#include "depth_regression.h"

void show(std::string name, Mat img)
{
    namedWindow(name, WINDOW_NORMAL);
    imshow(name, img);

    return;
}

int main(void){

    const byte n_states = 2;

    const int cols = 9;

    const int rows = 9;

    const Size img_size = Size (cols,rows);

    Mat gt_seg(rows,  cols, CV_32FC1, Scalar(0));

    // draw a 3x3 rectangle 
    for( int i = 3; i<6; i++){

        float * row_i = gt_seg.ptr<float>(i);

        for(int j=3; j<6; j++){

            row_i[j] = 1;

        }
    }

    Mat  gt_img;
    gt_seg.convertTo(gt_img, CV_8U, 255, 0);

    show("gt_img", gt_img);

    Foo fcrf {n_states, img_size};

    Mat unary_fg_states;
    // blur gt_seg to get foreground predictions
    GaussianBlur(gt_seg, unary_fg_states, Size(3, 3), 1.0, 1.0);

    show("unary_foreground", unary_fg_states);

    // reshape the [9x9] unary_fg_states into 81 rows, one row per graph node
    unary_fg_states = unary_fg_states.reshape(0, rows*cols);

    // unary potentials: Mat(size: nNodes x nStates; type: CV_32FC1) 
    Mat unary_pots;
    hconcat(unary_fg_states, 1 - unary_fg_states, unary_pots);

    std::cout << "milestone-1" << std::endl;

    fcrf.add_unary_pot(unary_pots);

    std::cout << "milestone-2" << std::endl;

    fcrf.set_smooth_kernel(Vec2f(3,3), 1);

    std::cout << "milestone-3" << std::endl;

    Mat features;
    // obtain features
    Canny(gt_img, features, 0, 1, 7);

    show("features", features);

    fcrf.add_appearence_kernel(features, Vec2f(3,3),  5, 2);

    std::cout << "milestone-4" << std::endl;

    // run approximate inference and return the optimal state (label) for each graph node
    vec_byte_t optimalDecoding = fcrf.getInfer().decode(10);

    std::cout << "milestone-5" << std::endl;

    waitKey(0);

    return 0;

}

ereator commented 3 years ago

Hi,

thank you for the clarification, now I could run the code. Running it, I received a size-mismatch error in InferDense.cpp, line 58, produced by the multiplication function. The reason for this error is that the graph has 162 nodes while you provide 9x9 = 81 features. This happens because you add the 81 nodes to the graph twice:

  1. In the Foo constructor with ext_graph.buildGraph(img_size);
  2. In the Foo::add_unary_pot method with getGraph().addNodes(unary_pot);

In order to fix the problem do the following:

  1. In the Foo::add_unary_pot method, replace adding nodes with setting nodes, i.e. replace the line getGraph().addNodes(unary_pot); with getGraph().setNodes(0, unary_pot); (see the sketch after this list).
  2. In the GraphDense.cpp file, line 28, in DGM_ASSERT_MSG replace the symbol < with <=, or update the library from master. This was a bug in the DGM code.
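
For clarity, here is a sketch of what the corrected method could look like (depth_regression.h is not posted in this thread, so the exact signature and class layout are assumed):

// Sketch only: the real signature comes from depth_regression.h
void Foo::add_unary_pot(const Mat &unary_pot)
{
    // The 81 nodes already exist: they were created in the Foo constructor
    // by ext_graph.buildGraph(img_size). Here we only fill their potentials
    // instead of appending 81 new nodes on top of them.
    getGraph().setNodes(0, unary_pot);   // was: getGraph().addNodes(unary_pot);
}
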
I3aer commented 3 years ago

Hi,

Thanks for your help. Applying your steps has solved the problem.