Thank you for the question. I'll reply later today.
Hello, does IPM_RESO represent the resolution of the IPM image? Does the value 0.05 in the settings file mean 0.05 pixels per meter?
0.05 means 0.05 meters per pixel.
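A quick illustration of the unit (assuming the value is in meters per pixel; the image size below is made up for the example):

```cpp
#include <iostream>

int main()
{
    const double IPM_RESO = 0.05;     // meters per IPM pixel
    const int ipm_width_px = 400;     // hypothetical IPM image width

    // A 400-pixel-wide IPM image then covers 400 * 0.05 = 20 meters.
    std::cout << "IPM width in meters: " << ipm_width_px * IPM_RESO << "\n";

    // Conversely, a 3.5 m lane width spans 3.5 / 0.05 = 70 pixels.
    std::cout << "3.5 m in pixels: " << 3.5 / IPM_RESO << "\n";
    return 0;
}
```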
OK, thanks.
In demo_mapping, GNSS data seems to be used only for georeferencing. Why are patches merged again after geo-registering?
In local mapping, the gradual drift of the pose estimation leads to overlapping patches when a region is re-visited (see the last 20 seconds of the demo). After geo-referencing, the pose drift is corrected, and the overlapping patches need to be merged to avoid inconsistency.
By the way, note that during local mapping, patch merging only happens between subsequent frames within a fairly short time window.
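To illustrate what that time-window restriction could look like, here is a minimal sketch (the threshold name and value are assumptions for illustration, not taken from the code):

```cpp
#include <cmath>

// During local mapping, only patches observed close together in time are
// considered for merging; revisits affected by drift are handled later,
// after geo-referencing corrects the poses.
bool allowLocalMerge(double t_frame_a_sec, double t_frame_b_sec)
{
    const double kMaxMergeGapSec = 10.0;  // hypothetical window length
    return std::fabs(t_frame_a_sec - t_frame_b_sec) < kMaxMergeGapSec;
}
```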
My understanding of the RoadInstancePatchMap::mergePatches function is that it stitches the patch map by looking for patches that intersect. Is this understanding correct?
Hello, in other words, could you explain the idea behind the RoadInstancePatchMap::mergePatches function? The function is quite complex!
OK, I'll give a brief explanation later today.
map format
Hello, regarding the map saving format, I found that it contains both vector parameters, such as the properties of each patch, and point-cloud information, such as line_points_metric. Can I understand it as a kind of mixture of a vector map and a point-cloud map?
In this project, road marking instances are treated either as "patch-like" ones (DASHED and GUIDE) or as "line-like" ones (SOLID and STOP).
In "RoadInstancePatchMap::mergePatches", the active road marking instances in the local map are considered for clustering, based on the spatial distance and other attributes. After that, the associated instances would be fused probabilistically. For patch-like instances, the fusion is integrating multiple bounding boxes to a new one. For line-like instances, the fusion is integrating multiple line strips to a new line strip, covering the range of every line segments.
The current class properties mix different types, and not all of these fields are valid for every instance. For patch-like instances, the bounding box (b_point_metric) is the main property. For line-like instances, the line strip (line_points_metric) is the main property. Generally speaking, both of the above are vector representations, while other properties (like points_metric) are intermediate representations and not so important.
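As a rough picture only (the actual class layout in the repository may differ), the attribute names mentioned above could be organized like this:

```cpp
#include <vector>
#include <Eigen/Core>

enum class RoadInstanceType { DASHED, GUIDE, SOLID, STOP };

struct RoadInstancePatchSketch {
    RoadInstanceType type;

    // Patch-like instances (DASHED, GUIDE): the bounding-box corners in
    // metric coordinates are the main (vector) property.
    std::vector<Eigen::Vector3d> b_point_metric;

    // Line-like instances (SOLID, STOP): the line strip in metric
    // coordinates is the main (vector) property.
    std::vector<Eigen::Vector3d> line_points_metric;

    // Raw semantic points; an intermediate representation, not essential
    // for the final vector map.
    std::vector<Eigen::Vector3d> points_metric;
};
```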
if (fabs((att_ave(0) - last_att_ave(0)) * R2D) > config.large_slope_thresold / config.pose_smooth_window && pose_history.size() > 5)
{
    std::cerr << "[INFO] Large slope detected!" << std::endl;
    // clear pose_history
    pose_history.clear();
    // add current pose
    pose_history.push_back(make_pair(this_pose->second.R, this_pose->second.t));
    last_att_ave = att_this;
    int count_temp = 0;
    for (auto iter_frame = all_frames.rbegin(); iter_frame != all_frames.rend() && count_temp < 20; iter_frame++)
    {
        // t: iter_frame -> this->pose
        Vector3d ti0i1 = iter_frame->second->R.transpose() * (this_pose->second.t - iter_frame->second->t);
        std::cout << "[INFO] ti0i1 : " << ti0i1.transpose() << std::endl;
        if (ti0i1(1) < 13.5 && (ti0i1(0)) < 5)   // <-- the condition in question
        {
            road_map.ignore_frame_ids.insert(make_pair(iter_frame->second->id, ti0i1(1)));
            std::cout << "[INFO] ignoring : " << iter_frame->second->id << " distance threshold : " << ti0i1(1) << std::endl;
        }
        count_temp++;
    }
}
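For reference, the relative translation computed and thresholded in the loop above can be written as (reading directly from the code, with $i_0$ the historical frame and $i_1$ the current frame):

$$
\mathbf{t}_{i_0 i_1} = \mathbf{R}_{i_0}^{\top}\,(\mathbf{t}_{i_1} - \mathbf{t}_{i_0})
$$

i.e., the position of the current frame expressed in the coordinate frame of the historical frame; the marked condition then thresholds its x and y components.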
Hello, why do you need to convert the historical camera poses into the current frame and check the offsets along the x and y axes when the attitude angle changes significantly? Is there any basis for this?
Hello, after reviewing your code, I have gained a lot of insights, but there are still a few questions. First, what is the maximum distance for local map construction? Second, can I obtain the coordinates of elements (e.g., zebra crossings) relative to the vehicle's position after semantic segmentation? Looking forward to your response.