qianqian321 closed this issue 1 year ago.
The same is true of my device name. An error was reported when I ran the stitching part; the size of the image I printed turned out to be 0 at:
half_rear = imgs[last_idx](cv::Range(0, rear_height), cv::Range(rear_half_width, rear_width));
imgs[0] = imgs[last_idx](cv::Range(0, rear_height), cv::Range(0, rear_half_width));
Ok, can you print the values of "rear_width", "rear_half_width", "imgs.size()" and "rear_height"? And try to use a CPU cv::Mat in this place, just to make sure it works correctly compared to cuda::GpuMat. Also I will ask you to close issue https://github.com/SokratG/Surround-View/issues/12 and write here...
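For reference, a minimal sketch of the kind of debug print meant here; the variable names come from the snippet above, and the CPU cross-check simply downloads the GpuMat before slicing (assumption: imgs is a vector of cv::cuda::GpuMat):

// Debug print of the values used by splitRearView (names from the snippet above).
std::cout << "rear_width=" << rear_width
          << " rear_half_width=" << rear_half_width
          << " rear_height=" << rear_height
          << " imgs.size()=" << imgs.size() << std::endl;

// CPU cross-check: download the last GpuMat and do the same ROI split on a cv::Mat.
if (rear_width > 0 && rear_height > 0) {
    cv::Mat cpu_last;
    imgs[last_idx].download(cpu_last);
    cv::Mat half_rear_cpu = cpu_last(cv::Range(0, rear_height), cv::Range(rear_half_width, rear_width));
    std::cout << "half_rear_cpu size: " << half_rear_cpu.size() << std::endl;
}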
rear_width=0 rear_half_width=1 imgs.size()=5 rear_height=0
You have "rear_width" and "rear_height" equal 0, i.e. pass empty data to "splitRearView" method. You must check the capture data from camera.
Using another program to read video from the camera, the camera can obtain video through the v4l2 driver. Could the problem be in the conversion to the GPU?
Do you have any test videos?
https://github.com/SokratG/Surround-View/issues/9#issuecomment-1252655393
When the image read by the camera is displayed, it is all black
There are many possible reasons for this black image. In the project I used a camera that captures the image in UYVY format and then converts it to RGB - https://github.com/SokratG/Surround-View/blob/master/cusrc/yuv2rgb.cu#L17 . Maybe you don't need this conversion. To help, I need more information about your platform and the logs from when you run the project.
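As a quick check on the CPU side, the raw buffer can be converted without the CUDA kernel (a minimal sketch, not the project's CUDA path; it assumes the V4L2 buffer dataBuffer.start really contains packed UYVY at the configured frameSize):

// Wrap the raw V4L2 buffer (2 bytes per pixel for packed UYVY) without copying.
cv::Mat uyvy(frameSize, CV_8UC2, dataBuffer.start);

// Convert to RGB on the CPU; if the camera actually delivers UYVY,
// this should produce a normal-looking image instead of a black one.
cv::Mat rgb;
cv::cvtColor(uyvy, rgb, cv::COLOR_YUV2RGB_UYVY);

cv::imshow("uyvy check", rgb);
cv::waitKey(0);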
for(size_t i = 0; i < _cams.size(); ++i){ auto& buff = buffs[i]; auto& dataBuffer = _cams[i].buffers[buff.index]; auto* cudaBuffer = _cams[i].cuda_out_buffer;
//gpuConvertUYVY2RGB_async((uchar*)dataBuffer.start, d_src[i], cudaBuffer, frameSize.width, frameSize.height, _cudaStream[i]);
//const auto uData = cv::cuda::GpuMat(frameSize, CV_8UC3, cudaBuffer);
cv::Mat cv_img;
cv::Mat img = cv::Mat(frameSize, CV_8UC2, dataBuffer.start);
cv::cvtColor(img, cv_img, cv::COLOR_YUV2RGB_YVYU);
//LOG_ERROR("ioctl(VIDIOC_QBUF) failed (errno=%i)", uData.cols);
//std::cout << "vfva" << std::endl;
cv::imshow("view", cv_img);
cv::waitKey(0);
//LOG_ERROR("ioctl(VIDIOC_QBUF) failed (errno=%i)", uData.cols);
/*if (_undistort){
cv::cuda::remap(uData, undistFrames[i].undistFrame, undistFrames[i].remapX, undistFrames[i].remapY,
cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(), cudaStreamObj);
frames[i].gpuFrame = undistFrames[i].undistFrame(undistFrames[i].roiFrame);
} */
} //std::cout << cv_img.cols << std::endl; In SVCamera.cpp, I changed CV_8UC2 to show the camera image, but using the original code, using the following code to print frame[0] size is also 0, the display image is black
std::cout << frames[0].gpuFrame.cols << std::endl;
std::cout << frames[0].gpuFrame.rows << std::endl;
This line:
cv::Mat img = cv::Mat(frameSize, CV_8UC2, dataBuffer.start);
and this one:
cv::cvtColor(img, cv_img, cv::COLOR_YUV2RGB_YVYU);
may be the wrong data conversion. It looks like it was taken from here - https://github.com/SokratG/Surround-View/blob/master/src/SVCamera.cpp#L440. That may not be compatible with other types of camera.
I'm using two cameras now: the one built into the laptop and an external USB camera. Here are the dimensions of cv_img and the value of dataBuffer.start; the .txt file is the log:
cv_img cols is 1280
cv_img rows is 720
dataBuffer.start 832188416
cv_img cols is 1280
cv_img rows is 720
dataBuffer.start 828502016
log.txt
About dataBuffer.start: I mean the value, not the address.
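A minimal sketch of what that would look like (assuming dataBuffer.start is the mmap'ed V4L2 buffer pointer and dataBuffer.length its size):

// Print the first few bytes of the captured buffer (the data), not the pointer itself.
const unsigned char* p = static_cast<const unsigned char*>(dataBuffer.start);
std::cout << "first bytes:";
for (size_t k = 0; k < 16 && k < dataBuffer.length; ++k)
    std::cout << ' ' << static_cast<int>(p[k]);
std::cout << std::endl;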
In the log I see two errors:
ERROR: ioctl(VIDIOC_QBUF) failed (errno=1280)
ERROR: ioctl(VIDIOC_QBUF) failed (errno=1280)
That is strange, because I don't see a description of the error, only the number. This happens when the device tries to enqueue data into a mapped buffer. But after this error I can't see the message "capture failed". If the error happens, the function must return false.
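To get a readable description next time, the QBUF call could be checked like this (a minimal sketch of standard V4L2 error handling, not the project's exact logging macros):

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cerrno>
#include <cstring>
#include <cstdio>

// Re-enqueue the buffer and report a human-readable error on failure.
static bool enqueueBuffer(int fd, v4l2_buffer& buf)
{
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) {
        std::fprintf(stderr, "ioctl(VIDIOC_QBUF) failed: %s (errno=%d)\n",
                     std::strerror(errno), errno);
        return false;   // propagate the failure so the caller can print "capture failed"
    }
    return true;
}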
Did you remove the part with the OMP parallel for loop (https://github.com/SokratG/Surround-View/blob/master/src/SVCamera.cpp#L622)? If not, please remove it and try again.
Also, I see the 2nd camera has Motion-JPEG as its first format. Check whether the camera is capturing in that format.
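One way to verify this is to ask the driver which pixel format is actually negotiated (a minimal sketch using the standard V4L2 API; fd is assumed to be the open /dev/videoX descriptor):

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

// Query the currently configured capture format and print its FourCC code,
// e.g. "UYVY", "YUYV" or "MJPG".
static void printCaptureFormat(int fd)
{
    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0) {
        const unsigned int fourcc = fmt.fmt.pix.pixelformat;
        std::printf("%ux%u pixelformat=%c%c%c%c\n",
                    fmt.fmt.pix.width, fmt.fmt.pix.height,
                    fourcc & 0xFF, (fourcc >> 8) & 0xFF,
                    (fourcc >> 16) & 0xFF, (fourcc >> 24) & 0xFF);
    }
}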
Part of the code was taken from here - https://github.com/dusty-nv/jetson-utils/blob/master/camera/v4l2Camera.cpp. Maybe that helps.
Anyway, as I said, this hardware part is only for capturing frames. You can create your own video capture module, then undistort the images and copy the data into the cuda::GpuMat frames here https://github.com/SokratG/Surround-View/blob/master/src/SVApp.cpp#L111 and here https://github.com/SokratG/Surround-View/blob/master/src/SVApp.cpp#L150.
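A minimal sketch of such a replacement capture path, assuming cv::VideoCapture can open your cameras and that frames[i].gpuFrame is the cv::cuda::GpuMat consumed by the rest of the pipeline (the remap maps are only used if you already have undistortion maps on the GPU):

#include <opencv2/opencv.hpp>
#include <opencv2/cudawarping.hpp>

// Grab a frame with a plain cv::VideoCapture and hand it to the GPU pipeline.
bool captureToGpu(cv::VideoCapture& cap,
                  cv::cuda::GpuMat& gpuFrame,
                  const cv::cuda::GpuMat& mapX,   // optional undistortion maps
                  const cv::cuda::GpuMat& mapY,
                  cv::cuda::Stream& stream)
{
    cv::Mat bgr;
    if (!cap.read(bgr) || bgr.empty())
        return false;                       // camera delivered no data

    cv::cuda::GpuMat raw;
    raw.upload(bgr, stream);                // copy the host frame to the GPU

    if (!mapX.empty() && !mapY.empty())
        cv::cuda::remap(raw, gpuFrame, mapX, mapY,
                        cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(), stream);
    else
        gpuFrame = raw;                     // no undistortion: use the frame directly

    return true;
}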
I deleted this line of code:
frames[i].gpuFrame = undistFrames[i].undistFrame(undistFrames[i].roiFrame);
and changed it into:
frames[i].gpuFrame = uData;
Now "rear_width", "rear_half_width", "imgs.size()" and "rear_height" become 5 720 641 1280, but there is a new error:
error: (-217:Gpu API call) invalid argument in function 'download'
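One possible cause (an assumption, since it depends on how cuda_out_buffer is allocated): frames[i].gpuFrame = uData only aliases the raw CUDA buffer instead of owning a copy, so a later download can fail with an invalid-argument error if that memory is reused or is not a valid device allocation. A deep copy avoids the aliasing; cudaStreamObj is the cv::cuda::Stream already used for remap in that loop:

// Make gpuFrame own its own device memory instead of wrapping cuda_out_buffer directly.
const cv::cuda::GpuMat uData(frameSize, CV_8UC3, cudaBuffer);
uData.copyTo(frames[i].gpuFrame, cudaStreamObj);   // deep copy; gpuFrame no longer aliases cudaBuffer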
In the function detectCorners, after
cv::findContours(src, cnts, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
the size of cnts is 0, and an error is reported later at:
cv::Point leftpt = cnts[0][0];
cv::Point rightpt = cnts[0][0];
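Indexing cnts[0] is only safe when findContours actually found something; a minimal guard (a sketch, variable names taken from the snippet above) would be:

std::vector<std::vector<cv::Point>> cnts;
cv::findContours(src, cnts, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

if (cnts.empty() || cnts[0].empty()) {
    // Nothing detected: the warped mask/seam image is probably empty (all black),
    // so bail out instead of dereferencing cnts[0][0].
    std::cerr << "detectCorners: no contours found, check the input image" << std::endl;
    return false;   // assumption: the surrounding function can report failure
}

cv::Point leftpt  = cnts[0][0];
cv::Point rightpt = cnts[0][0];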
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/warptemp.jpg", temp);
cv::Mat temp2;
gpu_result.download(temp2);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/gputemp.jpg", temp2);
cuBlender->blend(gpu_result, warp_img, streamObj);
cv::Mat result, thresh;
gpu_result.download(result);
warp_img.download(thresh);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/5.jpg", thresh);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/6.jpg", result);
Before the statement cuBlender->blend(gpu_result, warp_img, streamObj); I can see gpu_result and warp_img, but after it gpu_result and warp_img become black. What's the reason?
Ok, try to feed (with the feed method) only one image and show the result (using blend). And can you attach here the warped RGB images and the mask images that are passed to feed, and the output after the blend method?
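For comparison, this is the feed/blend pattern in stock OpenCV (a minimal CPU sketch with cv::detail::MultiBandBlender, not the project's cuBlender; warped, mask, corner, pano_width and pano_height are placeholders). Note that blend() writes its output into the arguments you pass, so the destination Mats are expected to be overwritten:

#include <opencv2/stitching/detail/blenders.hpp>

// warped: CV_16SC3 warped image, mask: CV_8U seam mask, corner: its position in the panorama.
cv::detail::MultiBandBlender blender(false /*try_gpu*/, 5 /*num_bands*/);
blender.prepare(cv::Rect(0, 0, pano_width, pano_height));   // panorama ROI

blender.feed(warped, mask, corner);                          // call once per camera

cv::Mat pano, pano_mask;
blender.blend(pano, pano_mask);                              // result is written into pano/pano_mask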
for(size_t i = 0; i < imgs_num; ++i){
gpu_result.upload(cpu_imgs[i]);
std::string filename = "/usr/Surround-View-master/data/cpu_imgs/" + std::to_string(i) + ".jpg";
std::cout << filename << std::endl;
cv::imwrite(filename, cpu_imgs[i]);
cv::cuda::remap(gpu_result, warp_img, texXmap[i], texYmap[i], cv::INTER_LINEAR, cv::BORDER_REFLECT, cv::Scalar(), streamObj);
std::string filename1 = "/usr/Surround-View-master/data/cpu_imgs/seam" + std::to_string(i) + ".jpg";
cv::Mat temp1;
gpu_seam_masks[i].download(temp1);
cv::imwrite(filename1, temp1);
warp_img.convertTo(warp_s, CV_16S);
cuBlender->feed(warp_s, gpu_seam_masks[i], i);
}
cv::Mat temp;
warp_img.download(temp);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/warptemp.jpg", temp);
cv::Mat temp2;
gpu_result.download(temp2);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/gputemp.jpg", temp2);
cuBlender->blend(gpu_result, warp_img, streamObj);
cv::Mat result, thresh;
gpu_result.download(result);
warp_img.download(thresh);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/5.jpg", thresh);
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/6.jpg", result);
So I'm just using a 1280 x 720 image, and this is how the data is set up:
std::vector
With the original parameters, _undistort is set to false, and the images after feed and blend are shown below: ![Uploading 1.png…]()
I can't see the image because the link is incorrect. Did you calibrate the extrinsic camera parameters? I want to see the warped images (RGB and mask).
I did not calibrate the parameters; right now I am using your parameter file. Do the warped images (RGB and mask) refer to gpu_result and warp_img?
I am confused by what you are trying to do, because you have 4 similar images. You have to calibrate the pose of each camera on your rig. After that, use image warping and pass these warped images, together with their positions in the panorama (the image corners obtained after calibration), into the blender. Also, your warped RGB image does not match your mask (seam image); that is a problem too.
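To illustrate what "image corner after calibration" means, a minimal sketch with OpenCV's stock stitching warpers (this is the cv::detail API, not the project's CUDA warp path; img, K and R are placeholders for one camera's image, intrinsics and calibrated rotation):

#include <opencv2/stitching/detail/warpers.hpp>

// Warp one camera image onto the panorama surface and get its top-left corner.
cv::detail::SphericalWarper warper(1000.0f /*scale, e.g. focal length in px*/);

cv::Mat K32, R32;
K.convertTo(K32, CV_32F);                     // 3x3 intrinsics
R.convertTo(R32, CV_32F);                     // 3x3 rotation from extrinsic calibration

cv::Mat warped;
cv::Point corner = warper.warp(img, K32, R32, cv::INTER_LINEAR,
                               cv::BORDER_REFLECT, warped);
// 'corner' is the position of this warped image inside the panorama;
// it is what gets passed to the blender together with 'warped' and its seam mask.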
I'm using one image just to see if I can run the program. By warped RGB image and mask (seam image), do you mean gpu_result and warp_img?
cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/warptemp.jpg", temp); cv::Mat temp2; gpu_result.download(temp2); cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/gputemp.jpg", temp2); cuBlender->blend(gpu_result, warp_img, streamObj); cv::Mat result, thresh; gpu_result.download(result); warp_img.download(thresh); cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/5.jpg", thresh); cv::imwrite("/usr/Surround-View-master/data/cpu_imgs/6.jpg", result);
In SVStitcher.cpp, before the statement cuBlender->blend(gpu_result, warp_img, streamObj); I can see gpu_result and warp_img, but after it gpu_result and warp_img become black. What's the reason?
Received, I will reply as soon as possible.
Hello! Hmm... it's hard to say without more information. If you are sure that your devices are working correctly, check the camera names in the system; they should be the same as in the project. I hard-coded the device names in: https://github.com/SokratG/Surround-View/blob/master/include/SVCamera.hpp#L109 . Or you can change these names to yours and recompile.
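One way to see the names the system reports (a minimal sketch using the standard V4L2 VIDIOC_QUERYCAP ioctl; the /dev/video* paths are just the usual defaults):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

// Print the card name of each video device so it can be compared with the
// names hard-coded in SVCamera.hpp.
int main()
{
    for (int i = 0; i < 8; ++i) {
        char path[32];
        std::snprintf(path, sizeof(path), "/dev/video%d", i);
        int fd = open(path, O_RDWR);
        if (fd < 0)
            continue;
        v4l2_capability cap{};
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
            std::printf("%s : %s\n", path, reinterpret_cast<const char*>(cap.card));
        close(fd);
    }
    return 0;
}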