erenaud3 opened this issue 6 years ago
Maybe you compiled without the facemark module or without the extra modules. What does console.log(cv.xmodules) say?
I ran into this issue 2 weeks ago. It seems the face module does not export correctly. And by the way, FYI, this face module does not even work through the Python bindings. I filed an issue with OpenCV, and they said that is just how it is.
I suppose it is because the module lives in opencv_contrib, so less effort has been put into it.
Ahh yeah, my fault, sorry. From OpenCV version 3.4.2 upwards we don't export some functions anymore, such as the one you mentioned above:
#if CV_MINOR_VERSION < 2
Nan::SetPrototypeMethod(ctor, "addTrainingSample", AddTrainingSample);
Nan::SetPrototypeMethod(ctor, "addTrainingSampleAsync",
AddTrainingSampleAsync);
Nan::SetPrototypeMethod(ctor, "getData", GetData);
Nan::SetPrototypeMethod(ctor, "getDataAsync", GetDataAsync);
Nan::SetPrototypeMethod(ctor, "getFaces", GetFaces);
Nan::SetPrototypeMethod(ctor, "getFacesAsync", GetFacesAsync);
Nan::SetPrototypeMethod(ctor, "setFaceDetector", SetFaceDetector);
Nan::SetPrototypeMethod(ctor, "training", Training);
Nan::SetPrototypeMethod(ctor, "trainingAsync", TrainingAsync);
#endif
Since they changed the facemark API with 3.4.2, the package wouldn't compile anymore; that's why I removed them. If you want to use the functions shown above, you can still use them with 3.4.1, or submit a PR with bindings to the new base class.
TL;DR: Probably the example simply has to be adjusted and the calls to setFaceDetector and getFaces have to be removed.
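Roughly, the adjustment means detecting the faces yourself and passing the rectangles to fit(). A minimal sketch (the model and image paths are placeholders, and you would still need to download the LBF model yourself):

const cv = require("opencv4nodejs");

// face detection is now done outside the facemark object
const classifier = new cv.CascadeClassifier(cv.HAAR_FRONTALFACE_ALT2);

const facemark = new cv.FacemarkLBF();
facemark.loadModel("lbfmodel.yaml"); // placeholder path to the LBF landmarks model

const gray = cv.imread("face.jpg").bgrToGray(); // placeholder input image
const faces = classifier.detectMultiScale(gray).objects;

// pass the detected face rectangles straight to fit()
const landmarks = facemark.fit(gray, faces);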
Yes, I did exactly what you mentioned in the TL;DR, and my project works well.
Thanks!
Okay so I have another couple of questions:
First, how do I know which OpenCV version I am using? From your answers, I guess it is above 3.4.2, but I don't see where this is defined. (I didn't set up OpenCV myself; I simply used npm install.)
"submit a PR with bindings to the new base "
Okay, so if I understand correctly what you said and this page (https://docs.opencv.org/3.4.3/d3/d81/classcv_1_1face_1_1FacemarkTrain.html), I would have to create a binding that takes into account the new class FacemarkTrain, where all the non-working functions were moved. Is that correct?
TL;DR: Probably the example simply has to be adjusted and the calls to setFaceDetector and getFaces have to be removed.
I don't understand this part of your reply. If we take a look at the rest of the code, we can see that we can only use the following functions:
Nan::SetPrototypeMethod(ctor, "loadModel", LoadModel);
Nan::SetPrototypeMethod(ctor, "loadModelAsync", LoadModelAsync);
Nan::SetPrototypeMethod(ctor, "fit", Fit);
Nan::SetPrototypeMethod(ctor, "fitAsync", FitAsync);
Nan::SetPrototypeMethod(ctor, "save", Save);
Nan::SetPrototypeMethod(ctor, "load", Load);
How would adjusting the example be enough?
Thanks.
Okay, I actually managed to get the example working. I didn't realize I could use the face classifier directly. Here is my code, in case it helps someone:
const cv = require("../");
const fs = require("fs");
const path = require("path");

if (!cv.xmodules.face) {
  throw new Error("exiting: opencv4nodejs compiled without face module");
}

const facemarkModelPath = "../data/face/";
const modelFile = path.resolve(facemarkModelPath, "lbfmodel.yaml");

if (!fs.existsSync(modelFile)) {
  console.log("could not find landmarks model");
  console.log(
    "download the model from: https://raw.githubusercontent.com/kurnianggoro/GSOC2017/master/data/lbfmodel.yaml"
  );
  throw new Error("exiting: could not find landmarks model");
}

// face detector used to find the face rectangles
const classifier = new cv.CascadeClassifier(cv.HAAR_FRONTALFACE_ALT2);

// create the facemark object with the landmarks model
const facemark = new cv.FacemarkLBF();
facemark.loadModel(modelFile);

const image = cv.imread("../data/got.jpg");
const gray = image.bgrToGray();

const faceClassifierOpts = {
  minSize: new cv.Size(30, 30),
  scaleFactor: 1.126,
  minNeighbors: 1,
};
const faces = classifier.detectMultiScale(gray, faceClassifierOpts).objects;

// use the detected faces to detect the landmarks
const faceLandmarks = facemark.fit(gray, faces);

// draw each landmark point on the original image
for (let i = 0; i < faceLandmarks.length; i++) {
  const landmarks = faceLandmarks[i];
  for (let x = 0; x < landmarks.length; x++) {
    image.drawCircle(landmarks[x], 1, new cv.Vec(0, 255, 0), 1, cv.LINE_8);
  }
}

cv.imshowWait("VideoCapture", image);
First, how do I know which OpenCV version I am using? From your answers, I guess it is above 3.4.2, but I don't see where this is defined. (I didn't set up OpenCV myself; I simply used npm install.)
There is an opencv-build module, on which opencv4nodejs depends. In its install folder there is a file setup-opencv.js. I figured out the OpenCV version by reading the line const tag = x.x.x.
Not sure if there is a better way to find it out, or whether this information should be put into README.md?
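If you want to automate that check, a rough sketch that just reads the tag line from setup-opencv.js; the exact path inside node_modules and the format of the tag line are assumptions and may differ between opencv-build versions:

const fs = require("fs");
const path = require("path");

// path to opencv-build's generated setup script (assumption: default npm layout)
const setupFile = path.join(
  process.cwd(),
  "node_modules",
  "opencv-build",
  "install",
  "setup-opencv.js"
);

// look for the "const tag = ..." line mentioned above and print the version
const source = fs.readFileSync(setupFile, "utf8");
const match = source.match(/const tag\s*=\s*['"]?([\d.]+)['"]?/);
console.log("OpenCV version:", match ? match[1] : "tag not found");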
Hey @erenaud3, why not update the example in the repo and make a PR? :)
@erenaud3 the above code looks excellent! I'd like to apply the same approach to facial landmark detection in JavaScript for a webpage. Can you please help me get started?
I'm trying hard to create facial landmark detection in the web browser with OpenCV.js. I tried to use this example, which is in C++: https://docs.opencv.org/3.4.2/d2/d42/tutorial_face_landmark_detection_in_an_image.html
I can't seem to find the "Ptr<…>" equivalent in OpenCV.js.
I'm able to successfully run the Face Detection using Haar Cascades, link: https://docs.opencv.org/trunk/d2/d99/tutorial_js_face_detection.html
Can you please help? I'd really appreciate it.
@erenaud3 the above code looks excellent! I'd like to apply the same approach to facial landmark detection in JavaScript for a webpage. Can you please help me get started?
Have you considered using face-api.js? It is another npm package from justadudewhohacks. As you will see in the examples, it is really well-suited to browsers.
Alternatively, you could indeed run a server with an OpenCV back-end like you initially wanted to. But I think that would be reinventing the wheel: you would probably learn a lot, but other people have solved this problem before you.
Thank you @erenaud3
Using face-api.js in the browser, how could I show the face landmark points from the webcam in real time?
I mean... you literally already have everything you are asking for in the face-api.js repo.
Check out the live demo and enable the "Detect Face Landmarks" option.
Maybe you want the landmarks to be drawn as points instead of lines? In that case, you simply have to change drawLandmarks(...) in examples/examples-browser/public/js/drawing.js to whatever suits you better.
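If it helps, a rough browser sketch for real-time webcam landmarks with face-api.js; the model URL, element ID, and drawing helpers are assumptions based on the repo's examples and may differ between face-api.js versions:

// assumes face-api.min.js is loaded via a <script> tag and the models are served under /models
const video = document.getElementById("video");

async function start() {
  // load a face detector and the 68-point landmark model
  await faceapi.nets.tinyFaceDetector.loadFromUri("/models");
  await faceapi.nets.faceLandmark68Net.loadFromUri("/models");

  // stream the webcam into the video element
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
}

video.addEventListener("play", () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);

  setInterval(async () => {
    // detect faces and chain the landmark detector
    const results = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks();
    const resized = faceapi.resizeResults(results, displaySize);
    canvas.getContext("2d").clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawFaceLandmarks(canvas, resized);
  }, 100);
});

start();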
Great, @erenaud3! Thank you so much!
From the above, where is the library for "new cv.FacemarkLBF();" coming from? I tried it with OpenCV.js and couldn't find it.
You're welcome!
From the above, where is the library for "new cv.FacemarkLBF();" coming from? I tried it with OpenCV.js and couldn't find it.
I am not quite sure what you are talking about here. Where did you see this line?
It was from your comment on September 20, 2018. Please see above.
// create the facemark object with the landmarks model
const facemark = new cv.FacemarkLBF();
Ah yes, sorry!
Okay, it took me a while but now I remember! You are not finding it in the opencv4nodejs docs because FacemarkLBF was modified recently (in OpenCV 3.4.2, I think), so the changes haven't been documented here yet. This problem is exactly why I opened this issue in the first place, as justadudewhohacks pointed out.
To make the example work, I used the OpenCV docs (here).
So it seems you used C++ and not JavaScript?
How did you use the OpenCV docs for that particular line of code? Also, can I run face-api.js without using Node.js?
Thanks again @erenaud3 @justadudewhohacks
So it seems you used C++ and not JavaScript?
No, I used JavaScript, as you can see in the code snippet I posted above.
How did you use the OpenCV docs for that particular line of code?
Sorry, I don't remember exactly what the problem was or how I solved it. But I definitely figured out how to correct the facemark example thanks to the C++ docs.
Also, can I run face-api.js without using Node.js?
You mean without a browser or Node.js? No, you can't. But you are not using Node.js when running it in the browser. (Well, from a strict point of view you need something to serve the page, but that back-end does not have to run Node.js.)
Hi!
When I try to launch the "facemark" example, I get the following error:
When I comment out this line, I get another one:
Am I missing something?