Hi, and thank you for making this code available. I have it compiled and running on Windows, but I am getting a crash after a few frames (KITTI dataset 03).
Any help greatly appreciated. What might be causing this?
Thanks!
Hi @antithing,
Is this happening only on this particular sequence?
If you are running on Windows, I guess you are using your own "data reader" to replace ROS. Did you make sure that you are correctly timestamping the images? This is important because OV2SLAM relies on the image timestamps to predict the new image pose using a constant-velocity motion model. To test whether this is indeed what leads to the crash, you can change the following parameters in your parameter file: set "klt_use_prior" to 0 and "do_p3p" to 1.
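To give you an idea of why the timestamps matter, the constant-velocity prediction works roughly like the sketch below (a generic illustration using Sophus, not the exact OV2SLAM code): if consecutive images end up with the same or a meaningless timestamp, the time ratio degenerates and the predicted prior becomes garbage.

#include <sophus/se3.hpp>

// Generic constant-velocity pose prediction (a sketch only, not OV2SLAM's actual code).
// Twc_prev / Twc_cur : the two most recent camera poses
// t_prev / t_cur     : their timestamps; t_next : timestamp of the incoming image
Sophus::SE3d predictPoseConstVel(const Sophus::SE3d &Twc_prev, double t_prev,
                                 const Sophus::SE3d &Twc_cur,  double t_cur,
                                 double t_next)
{
    // Relative motion over the last interval...
    const Eigen::Matrix<double, 6, 1> vel = (Twc_prev.inverse() * Twc_cur).log();
    // ...scaled to the upcoming interval. This degenerates (division by zero)
    // when consecutive images carry the same timestamp.
    const double ratio = (t_next - t_cur) / (t_cur - t_prev);
    return Twc_cur * Sophus::SE3d::exp(ratio * vel);
}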
Thanks for getting back to me. I have made those config changes, and I see the same result. As I plan to use this on a live camera, I have bypassed the image reader class and am just using:
double ttime = 0.0;

void SlamManager::run(cv::Mat &im0, cv::Mat &im1)
{
    if (pslamstate_->bdo_stereo_rect_) {
        pcalib_model_left_->rectifyImage(im0, im0);
        pcalib_model_right_->rectifyImage(im1, im1);
    }

    bis_on_ = true;

    cv::Mat img_left = im0;
    cv::Mat img_right = im1;

    // Main SLAM loop
    // 0. Get New Images
    // =============================================
    ttime += 0.01;

    // Update current frame
    frame_id_++;
    pcurframe_->updateFrame(frame_id_, ttime);
    std::cout << "frame: " << frame_id_ << std::endl;

    // Display info on current frame state
    if (pslamstate_->debug_) {
        // pcurframe_->displayFrameInfo();
    }

    // 1. Send images to the FrontEnd
    // =============================================
    if (pslamstate_->debug_)
        std::cout << "\n \t >>> [SLAM Node] New image send to Front-End\n";

    bool is_kf_req = pvisualfrontend_->visualTracking(img_left, 0.03);

    // Save current pose
    // Logger::addSE3Pose(time, pcurframe_->getTwc(), is_kf_req);

    if (pslamstate_->breset_req_) {
        reset();
    }

    // 2. Create new KF if req. / Send new KF to Mapper
    // ================================================
    if (is_kf_req)
    {
        if (pslamstate_->debug_)
            std::cout << "\n \t >>> [SLAM Node] New Keyframe send to Back-End\n";

        if (pslamstate_->stereo_)
        {
            Keyframe kf(
                pcurframe_->kfid_,
                img_left,
                img_right,
                pvisualfrontend_->cur_pyr_
            );
            pmapper_->addNewKf(kf);
        }
        else if (pslamstate_->mono_)
        {
            Keyframe kf(pcurframe_->kfid_, img_left);
            pmapper_->addNewKf(kf);
        }

        if (!bkf_viz_ison_) {
            // std::thread kf_viz_thread(&SlamManager::visualizeAtKFsRate, this, time);
            // kf_viz_thread.detach();
        }
    }

    if (pslamstate_->debug_ || pslamstate_->log_timings_)
        std::cout << Profiler::getInstance().displayTimeLogs() << std::endl;

    // Frame rate visualization (limit the visualization processing)
    if (!bframe_viz_ison_) {
        // std::thread viz_thread(&SlamManager::visualizeAtFrameRate, this, time);
        // viz_thread.detach();
    }
}
int main(int argc, char** argv)
{
    // Gather the KITTI 03 left / right image paths
    // (std::string in OpenCV 2.4, but cv::String in 3.0+)
    std::vector<cv::String> fnL;
    std::string path = "D:\\testing\\LEGO\\kitti\\03\\image_0/*.png";
    cv::glob(path, fnL, false);

    std::vector<cv::String> fnR;
    path = "D:\\testing\\LEGO\\kitti\\03\\image_1/*.png";
    cv::glob(path, fnR, false);

    if (argc < 2)
    {
        std::cout << "\nUsage: parameters_files/params.yaml\n";
        return 1;
    }

    std::cout << "\nLaunching OV²SLAM...\n\n";

    // Load the parameters
    std::string parameters_file = argv[1];
    std::cout << "\nLoading parameters file : " << parameters_file << "...\n";

    const cv::FileStorage fsSettings(parameters_file.c_str(), cv::FileStorage::READ);
    if (!fsSettings.isOpened()) {
        std::cout << "Failed to open settings file...";
        return 1;
    } else {
        std::cout << "\nParameters file loaded...\n";
    }

    std::shared_ptr<SlamParams> pparams;
    pparams.reset( new SlamParams(fsSettings) );

    // Setting up the SLAM Manager
    SlamManager slam(pparams);

    for (size_t i = 0; i < fnL.size(); i++)
    {
        if (slam.pslamstate_->stereo_)
        {
            cv::Mat image0 = cv::imread(fnL[i]);
            cv::Mat image1 = cv::imread(fnR[i]);
            if (!image0.empty() && !image1.empty())
            {
                // imread loads color images as BGR, hence BGR2GRAY
                cv::cvtColor(image0, image0, cv::COLOR_BGR2GRAY);
                cv::cvtColor(image1, image1, cv::COLOR_BGR2GRAY);
                slam.run(image0, image1);
            }
        }
    }

    return 0;
}
Does this look incorrect? Thanks again!
Yes, the problem comes from the fact that you are passing a hard-coded 0.03 here: bool is_kf_req = pvisualfrontend_->visualTracking(img_left, 0.03); Just change 0.03 to ttime and it should work fine. Also, if you use it on KITTI, the delta time between images should be ttime += 0.1 instead of ttime += 0.01 (KITTI images are acquired at 10 Hz).
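Applied to your snippet above, the two lines would become:

// KITTI stereo pairs are recorded at roughly 10 Hz, so advance the timestamp by 0.1 s per frame
ttime += 0.1;
...
// Pass the running timestamp instead of a fixed value so the motion prior gets a valid delta time
bool is_kf_req = pvisualfrontend_->visualTracking(img_left, ttime);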
Thank you! That has it running through the sequence. :)
One more question for you: as I am not using ROS, I need to write my own viewer. For a start, I would like to reproject the 3D map points onto the camera frame. Where should I access the map points? And the current frame's 2D feature points?
Thanks again!
I think you can take the visualization functions in ov2slam as examples, for instance: void SlamManager::visualizeFrame(const cv::Mat &imleft, const double time)
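As a rough starting point, the reprojection itself can be done with plain OpenCV once you have gathered the 3D map points, the current pose, and the intrinsics (how to fetch those from the map manager and the current frame is best copied from visualizeFrame; the sketch below simply assumes you pass them in yourself):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: draw 3D map points reprojected into the current left image.
// wpts     : map points expressed in the world frame
// Rcw, tcw : current world-to-camera rotation (3x3) and translation (3x1), i.e. the inverse of Twc
// K, dist  : left camera intrinsics and distortion coefficients
void drawReprojectedMapPoints(cv::Mat &img, const std::vector<cv::Point3f> &wpts,
                              const cv::Mat &Rcw, const cv::Mat &tcw,
                              const cv::Mat &K, const cv::Mat &dist)
{
    if (wpts.empty())
        return;

    cv::Mat rvec;
    cv::Rodrigues(Rcw, rvec); // projectPoints expects a rotation vector

    std::vector<cv::Point2f> proj;
    cv::projectPoints(wpts, rvec, tcw, K, dist, proj);

    for (const auto &px : proj) {
        // Skip projections that fall outside the image
        // (a real viewer should also filter points lying behind the camera)
        if (px.x < 0.f || px.y < 0.f || px.x >= img.cols || px.y >= img.rows)
            continue;
        cv::circle(img, cv::Point(cvRound(px.x), cvRound(px.y)), 3, cv::Scalar(0, 255, 0), -1);
    }
}

The current frame's tracked 2D keypoints can then be drawn on top in another color to compare them with the reprojections.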
Thank you! I will dig into that. One last thing: I plan to run at 100 Hz from a live camera. Do I need to enable 'fast' mode somewhere? Or will the system run at the rate given? Thanks!
There are examples of the fast configuration in the parameter files; you can use them as a starting point and then tune some parameters for your own needs.
Great! I have the code running live on my camera, thank you very much! One last thing: is relocalisation enabled in this code? For example, if the camera is blocked or 'kidnapped', will the system relocalise into the existing map?
I assume this is part of the LC code, is that right? Thanks!
By default, LC is disabled in the fast parameter files, so you need to set the related flag to 1 to have it running. Also, there is no true "relocalization" from a kidnapped or blocked camera such as there is in ORB-SLAM when tracking is lost, but closing a loop might have the effect that you are expecting.