Open kxhit opened 5 years ago
Hi! I met an error when running the code. I have tried this both with cpu and gpu.
Initializing Mask RCNN network... Creating net instance... Loading net parameters... Bus error (core dumped)
I ran Check.py successfully, and I can run the code without the [PATH TO MASK] [PATH TO OUTPUT] parameters. Could anyone help me? Thanks!!!
Hello there! Is your problem solved? I also encountered the same problem.
@fanguohao @kxhit Hello! I also encountered the same problem. My configuration is a GTX 1080, and my OpenCV version is 3.4.1.
@BertaBescos I am sorry to bother you. Do you have any suggestions on this problem?
I have fixed this problem. Just change your opencv version to 2.4.11
@BertaBescos I can run the code with OpenCV 2 but not OpenCV 3. I also find the program does not run smoothly, whether MaskRCNN is used or not. Waiting for your reply. Thanks!
I could not reproduce the problem. I would suggest doing as @jsdd25 proposes and changing the OpenCV version to 2.4.11. We will write back if we find out what the problem is.
I also have this problem. I find that the C++ code can't call MaskRCNN.py: DynaSLAM::SegmentDynObject::SegmentDynObject(): Assertion `this->py_module != NULL' failed. I don't know how to solve it. Can you help me?
Are you using OpenCV 2.4.11? If not, I would recommend using it. We will try to fix this issue for newer versions within the next weeks.
Thanks for your answer. I changed my OpenCV from 2.4.13 to 2.4.11, but it did not help. I am looking forward to the fix for newer versions.
Then the problem must be something else. It should work without problems on OpenCV 2.4.11. Are you running it from the DynaSLAM directory path?
I run it from the DynaSLAM directory path, like ORB_SLAM2, and it succeeds. When I add the [PATH_TO_MASK] [PATH_TO_OUTPUT] parameters to call MaskRCNN, the problem happens.
I found that the Python version is the key problem. Mask_RCNN requires Python 3.4 + TensorFlow 1.3 + Keras 2.0.8, but DynaSLAM uses Python 2.7. I will try building DynaSLAM this way.
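For anyone checking the Python-version angle, here is a minimal standalone sketch (plain CPython embedding API, not DynaSLAM code) that prints which interpreter the C++ side embeds and what its module search path is. The build command and the ./src/python remark are assumptions about a typical setup:

// check_embed.cc -- hypothetical helper, build e.g. with:
//   g++ check_embed.cc $(python2.7-config --cflags --ldflags)
#include <Python.h>
#include <iostream>

int main() {
    Py_Initialize();
    // Which Python is actually embedded (should match the one Check.py was run with).
    std::cout << "Embedded Python: " << Py_GetVersion() << std::endl;
    // Print the module search path; MaskRCNN.py (under src/python in the repo)
    // must be importable from here, which is why running from the DynaSLAM
    // directory matters.
    PyRun_SimpleString("import sys; print(sys.path)");
    Py_Finalize();
    return 0;
}

If the printed version or path is not what you expect, the py_module assertion and the later bus error are likely just symptoms of the interpreter importing the wrong environment.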
Any luck with this error? I have the same problem
I have been stuck on this problem for several days, but I can't solve it. My system is Ubuntu 16.04 + CUDA 8.0 + cuDNN 6 + OpenCV 2.4.11 + the ORB_SLAM2 environment. Can anyone help me?
I got it working finally. The issue on my side was OpenCV 3.3; it works with OpenCV 2.4.10. When I debugged, I found that the problem is in Conversion.cc, inside the method toNDArray(const cv::Mat& m). The method has two sections for the two OpenCV versions (2 and 3); in the OpenCV 3 case, the PyObject returned from the Mat seems to be the problem when it gets passed to GetDynSeg() of MaskRCNN.py. I am trying to fix it now.
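One way to sidestep the OpenCV 3 path while debugging is to convert the Mat by copying its pixels into a freshly allocated numpy array, essentially what the OpenCV 2 section already does, instead of handing Python a Mat-backed buffer through the NumpyAllocator. The helper below is only a sketch with a made-up name (MatToNDArrayCopy); it assumes import_array() has already been called (NDArrayConverter::init() does that) and that the input is an 8-bit image, which is what GetSegmentation() passes:

#include <Python.h>
#include <numpy/ndarrayobject.h>
#include <opencv2/core/core.hpp>
#include <cstring>

// Sketch: copy-based Mat -> numpy conversion (hypothetical helper, not the
// shipped Conversion.cc). Python never sees OpenCV-managed memory.
static PyObject* MatToNDArrayCopy(const cv::Mat& m) {
    if (!m.data)
        Py_RETURN_NONE;

    cv::Mat cont = m.isContinuous() ? m : m.clone();   // numpy expects contiguous data

    npy_intp dims[3] = { cont.rows, cont.cols, cont.channels() };
    int ndims = (cont.channels() == 1) ? 2 : 3;

    PyObject* arr = PyArray_SimpleNew(ndims, dims, NPY_UBYTE);   // assumes CV_8U input
    if (!arr)
        return NULL;

    std::memcpy(PyArray_DATA((PyArrayObject*) arr), cont.data,
                cont.total() * cont.elemSize());
    return arr;
}

Whether this cures the bus error depends on where the corruption actually happens, but it removes the allocator/userdata path that differs between the OpenCV 2 and 3 builds.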
You are too good! Can you tell me about your configuration environment in detail? And have you run Check.py? I ran into trouble again yesterday; maybe I have to configure the environment for this code again.
@PushyamiKaveti Can you help me ?
@BertaBescos Can you help me? When I pass [PATH TO MASK] [PATH TO OUTPUT], it ends with "core dumped".
@PushyamiKaveti @BertaBescos My environment is: Ubuntu 16.04 + CUDA 8.0 + cuDNN 6 + OpenCV 2.4.11 + Python 2.7 + ORB_SLAM2.
I have solved this problem, thank you for your help.@PushyamiKaveti @BertaBescos
I met the same problem, can you help me? My error is a Bus error too; my Python is 2.7, my OpenCV version is 2.4.11, and I can run Check.py successfully. @qxdaaaaa
I met the same problem, can you help me? My error is a Bus error too; my Python is 2.7, my OpenCV versions are 2.4.11 and 3.4.1, tensorflow-gpu is 1.4.0, and I can run Check.py successfully. @qxdaaaaa @kxhit @fanguohao
You can test TensorFlow; the problem may be TensorFlow. You can try the CPU build (tf==cpu).
Thanks, @qxdaaaaa. I just tested MaskNet.cc and found that this call inside cv::Mat SegmentDynObject::GetSegmentation(cv::Mat &image, std::string dir, std::string name) doesn't work:
PyObject* py_mask_image = PyObject_CallMethod(this->net, const_cast<char*>(this->get_dyn_seg.c_str()), "(O)", py_image);
How can I solve this problem?
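For what it is worth, when that call fails it returns NULL and leaves a Python exception set, so checking the result inside GetSegmentation() and printing the traceback usually shows the real cause. A sketch using the names from the snippet above (the empty-Mat fallback is only illustrative):

PyObject* py_mask_image = PyObject_CallMethod(this->net,
        const_cast<char*>(this->get_dyn_seg.c_str()), "(O)", py_image);
if (!py_mask_image) {
    PyErr_Print();   // dumps the Python-side traceback (e.g. a TensorFlow/Keras import error)
    std::cerr << "GetDynSeg call failed, returning an empty mask" << std::endl;
    return cv::Mat();   // illustrative fallback so the caller never dereferences a NULL result
}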
I met the same problem, can you help me? My error is a Bus error too; my Python is 2.7, my OpenCV versions are 2.4.11 and 3.4.1, tensorflow-gpu is 1.4.0, and I can run Check.py successfully.
I'd like to ask whether CUDA 10.0 + TensorFlow 1.13.1 is OK. I think I have installed everything correctly with Python 2, but I can't pass the check when I run 'python check.py' in the terminal. I think I need your help.....
How did you solve the problem? @PushyamiKaveti @qxdaaaaa My system is Python 2.7, OpenCV 3.3.0, tensorflow-gpu 1.13.1, CUDA 10.0, and I can run Check.py successfully.
Have you run it successfully with CUDA 10?
@PushyamiKaveti found that the problem is in Conversion.cc, inside toNDArray(const cv::Mat& m): in the OpenCV 3 case the PyObject returned from the Mat seems to be the problem when it gets passed to GetDynSeg() of MaskRCNN.py. This is the Conversion.cc code I am testing:
namespace DynaSLAM {
static void init() { import_array(); }
static PyObject* failmsgp(const char *fmt, ...) {
    char str[1000];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(str, sizeof(str), fmt, ap);
    va_end(ap);
    PyErr_SetString(PyExc_TypeError, str);
    return 0;
}
using namespace cv;

//=================== ERROR HANDLING =========================================================
static int failmsg(const char *fmt, ...) {
    char str[1000];
va_list ap;
va_start(ap, fmt);
vsnprintf(str, sizeof(str), fmt, ap);
va_end(ap);
PyErr_SetString(PyExc_TypeError, str);
return 0;
}
//=================== THREADING ==============================================================

class PyAllowThreads {
public:
    PyAllowThreads() : _state(PyEval_SaveThread()) {}
    ~PyAllowThreads() { PyEval_RestoreThread(_state); }
private:
    PyThreadState* _state;
};
class PyEnsureGIL {
public:
    PyEnsureGIL() : _state(PyGILState_Ensure()) {}
    ~PyEnsureGIL() { PyGILState_Release(_state); }
private:
    PyGILState_STATE _state;
};
enum { ARG_NONE = 0, ARG_MAT = 1, ARG_SCALAR = 2 };
class NumpyAllocator : public MatAllocator {
public:
    NumpyAllocator() { stdAllocator = Mat::getStdAllocator(); }
    ~NumpyAllocator() {}
UMatData* allocate(PyObject* o, int dims, const int* sizes, int type,
size_t* step) const {
UMatData* u = new UMatData(this);
u->data = u->origdata = (uchar*) PyArray_DATA((PyArrayObject*) o);
npy_intp* _strides = PyArray_STRIDES((PyArrayObject*) o);
for (int i = 0; i < dims - 1; i++)
step[i] = (size_t) _strides[i];
step[dims - 1] = CV_ELEM_SIZE(type);
u->size = sizes[0] * step[0];
u->userdata = o;
return u;
}
UMatData* allocate(int dims0, const int* sizes, int type, void* data,
size_t* step, int flags, UMatUsageFlags usageFlags) const {
if (data != 0) {
CV_Error(Error::StsAssert, "The data should normally be NULL!");
// probably this is safe to do in such extreme case
return stdAllocator->allocate(dims0, sizes, type, data, step, flags,
usageFlags);
}
PyEnsureGIL gil;
int depth = CV_MAT_DEPTH(type);
int cn = CV_MAT_CN(type);
const int f = (int) (sizeof(size_t) / 8);
int typenum =
depth == CV_8U ? NPY_UBYTE :
depth == CV_8S ? NPY_BYTE :
depth == CV_16U ? NPY_USHORT :
depth == CV_16S ? NPY_SHORT :
depth == CV_32S ? NPY_INT :
depth == CV_32F ? NPY_FLOAT :
depth == CV_64F ?
NPY_DOUBLE :
f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
int i, dims = dims0;
cv::AutoBuffer<npy_intp> _sizes(dims + 1);
for (i = 0; i < dims; i++)
_sizes[i] = sizes[i];
if (cn > 1)
_sizes[dims++] = cn;
PyObject* o = PyArray_SimpleNew(dims, _sizes, typenum);
if (!o)
CV_Error_(Error::StsError,
("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
return allocate(o, dims0, sizes, type, step);
}
bool allocate(UMatData* u, int accessFlags,
UMatUsageFlags usageFlags) const {
return stdAllocator->allocate(u, accessFlags, usageFlags);
}
void deallocate(UMatData* u) const {
if (u) {
PyEnsureGIL gil;
PyObject* o = (PyObject*) u->userdata;
Py_XDECREF(o);
delete u;
}
}
const MatAllocator* stdAllocator;
};
NumpyAllocator g_numpyAllocator;
NDArrayConverter::NDArrayConverter() { init(); }
void NDArrayConverter::init() { import_array(); }
cv::Mat NDArrayConverter::toMat(const PyObject* o) {
    cv::Mat m;
    bool allowND = true;
    if (!PyArray_Check(o)) {
        failmsg("argument is not a numpy array");
        if (!m.data)
            m.allocator = &g_numpyAllocator;
    } else {
        PyArrayObject* oarr = (PyArrayObject*) o;
bool needcopy = false, needcast = false;
int typenum = PyArray_TYPE(oarr), new_typenum = typenum;
int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
typenum == NPY_USHORT ? CV_16U :
typenum == NPY_SHORT ? CV_16S :
typenum == NPY_INT ? CV_32S :
typenum == NPY_INT32 ? CV_32S :
typenum == NPY_FLOAT ? CV_32F :
typenum == NPY_DOUBLE ? CV_64F : -1;
if (type < 0) {
if (typenum == NPY_INT64 || typenum == NPY_UINT64
|| typenum == NPY_LONG) {
needcopy = needcast = true;
new_typenum = NPY_INT;
type = CV_32S;
} else {
failmsg("Argument data type is not supported");
m.allocator = &g_numpyAllocator;
return m;
}
}
const int CV_MAX_DIM = 32;
int ndims = PyArray_NDIM(oarr);
if (ndims >= CV_MAX_DIM) {
failmsg("Dimensionality of argument is too high");
if (!m.data)
m.allocator = &g_numpyAllocator;
return m;
}
int size[CV_MAX_DIM + 1];
size_t step[CV_MAX_DIM + 1];
size_t elemsize = CV_ELEM_SIZE1(type);
const npy_intp* _sizes = PyArray_DIMS(oarr);
const npy_intp* _strides = PyArray_STRIDES(oarr);
bool ismultichannel = ndims == 3 && _sizes[2] <= CV_CN_MAX;
for (int i = ndims - 1; i >= 0 && !needcopy; i--) {
// these checks handle cases of
// a) multi-dimensional (ndims > 2) arrays, as well as simpler 1- and 2-dimensional cases
// b) transposed arrays, where _strides[] elements go in non-descending order
// c) flipped arrays, where some of _strides[] elements are negative
if ((i == ndims - 1 && (size_t) _strides[i] != elemsize)
|| (i < ndims - 1 && _strides[i] < _strides[i + 1]))
needcopy = true;
}
if (ismultichannel && _strides[1] != (npy_intp) elemsize * _sizes[2])
needcopy = true;
if (needcopy) {
if (needcast) {
o = PyArray_Cast(oarr, new_typenum);
oarr = (PyArrayObject*) o;
} else {
oarr = PyArray_GETCONTIGUOUS(oarr);
o = (PyObject*) oarr;
}
_strides = PyArray_STRIDES(oarr);
}
for (int i = 0; i < ndims; i++) {
size[i] = (int) _sizes[i];
step[i] = (size_t) _strides[i];
}
// handle degenerate case
if (ndims == 0) {
size[ndims] = 1;
step[ndims] = elemsize;
ndims++;
}
if (ismultichannel) {
ndims--;
type |= CV_MAKETYPE(0, size[2]);
}
if (ndims > 2 && !allowND) {
failmsg("%s has more than 2 dimensions");
} else {
m = Mat(ndims, size, type, PyArray_DATA(oarr), step);
m.u = g_numpyAllocator.allocate((PyObject*)o, ndims, size, type, step);
m.addref();
if (!needcopy) {
Py_INCREF(o);
}
}
m.allocator = &g_numpyAllocator;
}
return m;
}
PyObject* NDArrayConverter::toNDArray(const cv::Mat& m)
{
if (!m.data)
Py_RETURN_NONE;
Mat temp, *p = (Mat*) &m;
if (!p->u || p->allocator != &g_numpyAllocator) {
temp.allocator = &g_numpyAllocator;
ERRWRAP2(m.copyTo(temp));
p = &temp;
}
    // (A debug std::cout line was truncated here in the original post.)
    PyObject* o = (PyObject*) p->u->userdata;
    Py_INCREF(o);
    return o;
}

} // namespace DynaSLAM
@hanxiumeng your code doesn't work either, leads to the same error could you share the modified conversion.cc file? did you get it to work with opencv 3?