Hello,
The first two parameters are Vector<Point2f>. For the mask, I use a default Mat. See the following code:
let mut src_points: Vector<Point2f> = Vector::new();
let mut dst_points: Vector<Point2f> = Vector::new();
// populate the vectors of points here, usually from the keyPoints.
let mut mask = Mat::default();
trace!("Ransac threshold {}", self.model.ransac_threshold);
let m = find_homography(
    &dst_points,
    &src_points,
    &mut mask,
    RANSAC,
    self.model.ransac_threshold,
)?;
Note: I've inverted dst_points and src_points because my use case needs this.
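In case it's useful, here is a rough, untested sketch of how those vectors are typically populated from keypoint matches. The names matches, kp_src and kp_dst are hypothetical placeholders for whatever your matcher and detector produced, and which index (query vs. train) belongs to which image depends on how you called the matcher:

use opencv::prelude::*; // brings the KeyPoint::pt() accessor into scope

// matches: Vector<DMatch> from e.g. a BFMatcher (hypothetical name),
// kp_src / kp_dst: Vector<KeyPoint> detected in each image (hypothetical names).
for m in matches.iter() {
    src_points.push(kp_src.get(m.query_idx as usize)?.pt());
    dst_points.push(kp_dst.get(m.train_idx as usize)?.pt());
}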
Do you know how to use perspective_transform in this library?
Actually I never used perspective_transform myself. I use warp_perspective instead, like this:
let mut result = Mat::default();
warp_perspective(
    &mat,
    &mut result,
    &m,
    Size::new(self.model.model_width, self.model.model_height),
    INTER_LANCZOS4,
    BORDER_CONSTANT,
    Scalar::from(255.0),
)?;
Where mat is the source image and m is the "result" of find_homography.
@qq351469076 In your original message there seems to be some confusion between the getPerspectiveTransform and perspectiveTransform functions. At least in the initial Python code the function that's used is perspectiveTransform, but in the ChatGPT-suggested C++ it's getPerspectiveTransform.
If you look at the docs for the perspectiveTransform function (https://docs.opencv.org/4.x/d2/de8/group__core__array.html#gad327659ac03e5fd6894b90025e6900a7) you can see that the Python form has the signature cv.perspectiveTransform(src, m[, dst]) -> dst, and the argument order is different from C++/Rust, so in the Rust code that H should actually be the last argument and the second is the output Mat.
Also notice that the Python code calls reshape, and the docs for perspectiveTransform indicate which particular shape the function expects for its src argument.
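To make that concrete, here is a minimal, untested sketch of calling the crate's core::perspective_transform on the four corners of the query image; w, h and hm are hypothetical names for the query image size and the homography returned by find_homography. A Vector<Point2f> already corresponds to the 2-channel float point array that perspectiveTransform expects (which is what the Python reshape(-1, 1, 2) produces), and note the argument order: src first, output second, the matrix last.

use opencv::core::{perspective_transform, Mat, Point2f, Vector};

// Corners of the query image (w and h are hypothetical dimensions).
let mut corners: Vector<Point2f> = Vector::new();
corners.push(Point2f::new(0.0, 0.0));
corners.push(Point2f::new(w as f32, 0.0));
corners.push(Point2f::new(w as f32, h as f32));
corners.push(Point2f::new(0.0, h as f32));

// src, then the output, then the transform matrix (hm from find_homography).
let mut projected = Mat::default();
perspective_transform(&corners, &mut projected, &hm)?;

From there you would typically round the projected points to integer coordinates and draw them with imgproc::polylines to get the outline around image A, which is what the usual Python feature-matching tutorial does.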
It would be helpful if you could provide the working Python code; then it would be easier to help you translate it to Rust.
This is from a Python video course about OpenCV.
When I tried to understand its usage I asked ChatGPT 3.5, and this is what it told me.
I also tried to imitate the first parameter, but it raised an error.
My question is: locate image A in image B, and then draw lines around image A in image B.
My code and test pictures are below; the expected result is as follows:
opencv_orig.png
opencv_search.png