Can you give more info about how you create the BufferedImage? They should be
the same, but you may be using an exotic format for which there is a bug. Thanks.
As for #2, I posted a wiki page about that:
http://code.google.com/p/javacv/wiki/FaceDetection
Original comment by samuel.a...@gmail.com
on 25 Sep 2010 at 10:29
Thanks Samuel, here is the code (please let me know if you need anything
further):
public BufferedImage doFaceDetection2(BufferedImage origbffimg)
{
    BufferedImage bffImage = null;
    try {
        String cascadeName = "lib//javaCVlibs//haarcascade_frontalface_alt.xml";
        CvErrorCallback redirectError2 = new JavaCvErrorCallback().redirectError();

        // Using a CanvasFrame display only for testing: it does not display the BufferedImage after conversion!
        CanvasFrame frame = new CanvasFrame("Face Detection");

        // Convert BufferedImage to IplImage: this does not seem to work!
        IplImage grabbedImage = IplImage.createFrom(origbffimg);

        // To verify, convert from IplImage back to BufferedImage. The original image is not recovered!
        BufferedImage bfim = grabbedImage.getBufferedImage();
        frame.showImage(bfim); // Failed

        IplImage grayImage = IplImage.create(grabbedImage.width, grabbedImage.height, IPL_DEPTH_8U, 1);
        //IplImage rotatedImage = grabbedImage.clone();

        // I would like to eliminate this random 3D rotation, but I cannot remove the
        // cvRodrigues2 call even though I use neither randomAxis2 nor randomR2!
        // Solution (thanks to Samuel, and OpenCV): because with javacv we cannot "use
        // cxcored.lib, cvd.lib and highguid.lib instead of cxcored_i7.lib, cv.lib and
        // highgui.lib" as suggested on the OpenCV wiki, we need to use either choice 1
        // or choice 2 below (unsure which is more efficient?). Otherwise we get a runtime error.

        // Choice 1: create some random 3D rotation...
        CvMat randomR2 = CvMat.create(3, 3), randomAxis2 = CvMat.create(3, 1);
        cvRodrigues2(randomAxis2, randomR2, null);

        // Choice 2: create a dummy image, apply cvErode, then release it:
        IplImage srcTest = grabbedImage.clone();
        IplImage imgTest = grabbedImage.clone();
        int posTest = 0;
        cvErode(srcTest, imgTest, null, posTest);
        cvReleaseImage(srcTest.pointerByReference());
        cvReleaseImage(imgTest.pointerByReference());

        // More details of this problem can be seen at:
        // http://code.google.com/p/javacv/wiki/FaceDetection
        // http://opencv.willowgarage.com/wiki/FaceDetection

        CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad(cascadeName));
        CvMemStorage storage = CvMemStorage.create();
        CvSeq.PointerByReference contourPointer = new CvSeq.PointerByReference();
        //int sizeofCvContour = com.sun.jna.Native.getNativeSize(CvContour.ByValue.class);

        cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
        CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 3, 0/*CV_HAAR_DO_CANNY_PRUNING*/);

        // This code returns only the face closest to the webcam as a sub-image,
        // rather than painting a square on the original image:
        if (faces.total > 0)
        {
            CvRect r = new CvRect(cvGetSeqElem(faces, faces.total - 1/*i*/));
            // Get the region of the face only:
            cvSetImageROI(grabbedImage, r.byValue());
            // Create the destination image:
            IplImage fcimg = cvCreateImage(cvGetSize(grabbedImage), grabbedImage.depth, grabbedImage.nChannels);
            // Copy the sub-image:
            cvCopy(grabbedImage, fcimg, null);
            // Always reset the region of interest:
            cvResetImageROI(grabbedImage);
            // Convert the IplImage face into a BufferedImage:
            bffImage = fcimg.getBufferedImage();
        } else {
            // No face detected; need to add some code here for those cases.
        }
        cvClearMemStorage(storage);
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Return the face in BufferedImage format
    return bffImage;
}
public BufferedImage captureBufferedImage()
{
    // Extracted (with some modifications) from the Sun forum thread at:
    // http://forums.sun.com/thread.jspa?threadID=5357045
    // It works fine, both for the real-time video and for the capture (displayed on the GUI in JLabels).
    FrameGrabbingControl fgc = (FrameGrabbingControl) camagain.player.getControl("javax.media.control.FrameGrabbingControl");
    Buffer BUF = fgc.grabFrame();
    // Convert it to an image
    BufferToImage BtoI = new BufferToImage((VideoFormat) BUF.getFormat());
    Image img = BtoI.createImage(BUF);
    BufferedImage buffimgMod = (BufferedImage) img;
    return buffimgMod;
}

public static void main(String[] args) {
    BufferedImage bfi = captureBufferedImage();
    BufferedImage buffalo = fdobj.doFaceDetection(bfi); // Problem!
}
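Side note on captureBufferedImage(): BufferToImage.createImage() returns a java.awt.Image that is not guaranteed to be a BufferedImage, so the cast above can fail for some capture formats. Below is a minimal defensive conversion using plain java.awt; this is a sketch, not part of the original post, and the helper name toBuffered is made up for illustration:

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class ImageToBuffered {
    // Hypothetical helper: copies any java.awt.Image into a TYPE_INT_RGB BufferedImage.
    static BufferedImage toBuffered(Image img, int width, int height) {
        if (img instanceof BufferedImage) {
            return (BufferedImage) img;   // already a BufferedImage, no copy needed
        }
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(img, 0, 0, null);     // draws/converts the frame into the RGB raster
        g.dispose();
        return out;
    }
}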
Original comment by Jorge.Pr...@gmail.com
on 25 Sep 2010 at 3:56
Could you send me the output of "System.out.println(bfi)" in your main()
function? It would help, thanks
Original comment by samuel.a...@gmail.com
on 28 Sep 2010 at 2:08
Sure:
1. System.out.println(bfi), the original BufferedImage captured:
BufferedImage@59c8b5: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=0 IntegerInterleavedRaster: width = 320 height = 240 #Bands = 3
xOff = 0 yOff = 0 dataOffset[0] 0
2. System.out.println(grabbedImage), the IplImage I obtain with "IplImage
grabbedImage = IplImage.createFrom(origbffimg);":
cxcore$IplImage(native@0x20648030) (112 bytes) {
int nSize@0=70
int ID@4=0
int nChannels@8=3
int alphaChannel@c=0
int depth@10=80000020
byte colorModel0@14=52
byte colorModel1@15=47
byte colorModel2@16=42
byte colorModel3@17=0
byte channelSeq0@18=42
byte channelSeq1@19=47
byte channelSeq2@1a=52
byte channelSeq3@1b=0
int dataOrder@1c=0
int origin@20=0
int align@24=4
int width@28=140
int height@2c=f0
cxcore$IplROI$ByReference roi@30=null
cxcore$IplImage$ByReference maskROI@34=null
Pointer imageId@38=null
cxcore$IplTileInfo tileInfo@3c=null
int imageSize@40=e1000
Pointer imageData@44=native@0x21260030
int widthStep@48=f00
int BorderMode0@4c=0
int BorderMode1@50=0
int BorderMode2@54=0
int BorderMode3@58=0
int BorderConst0@5c=0
int BorderConst1@60=0
int BorderConst2@64=0
int BorderConst3@68=0
Pointer imageDataOrigin@6c=native@0x21260030
}
3. BufferedImage bfim = grabbedImage.getBufferedImage(); (converted back to a
BufferedImage to compare with the original; you can see they differ).
System.out.println(bfim):
BufferedImage@91cf0b: type = 0 ColorModel: #pixelBits = 96 numComponents = 3
color space = java.awt.color.ICC_ColorSpace@b00ec2 transparency = 1 has alpha =
false isAlphaPre = false sun.awt.image.SunWritableRaster@98f352
If I try to use frame.showImage(grabbedImage) (or showImage(bfim), which is
equivalent), I get:
Exception in thread "AWT-EventQueue-0" java.lang.Error: Cannot call
invokeAndWait from the event dispatcher thread
at java.awt.EventQueue.invokeAndWait(EventQueue.java:981)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:275)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:291)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:294)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:308)
at jlp.javaCVpkg.FaceDetection.doFaceDetection(FaceDetection.java:74)
If it is hard to see what the problem might be, we can have a quick Skype
session, and if we find the problem, we can always post the answer here. Please
let me know.
You also have my email address.
Many thanks,
Jorge
Original comment by Jorge.Pr...@gmail.com
on 28 Sep 2010 at 3:10
By the way, notice that in point 2 above, bfi and origbffimg are the same object
in my program: inside the function the parameter is called origbffimg, but at
the call site the argument passed in is called bfi, so they refer to the same
image.
Original comment by Jorge.Pr...@gmail.com
on 28 Sep 2010 at 3:14
I had not implemented any support for DirectColorModel. Also,
IplImage.getBufferedImage() would recreate the BufferedImage, even if the
IplImage was originally created using createFrom() or copied from using
copyFrom(). I fixed that, at least for the common BufferedImage types that use
DirectColorModel, like TYPE_INT_RGB. Can you try the test version at the URL
below and let me know if it works all right? Thanks
http://www.ok.ctrl.titech.ac.jp/~saudet/javacv.jar
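For anyone else following along, here is a minimal round-trip check of that fix. It is only a sketch: it assumes the javacv API names used in this thread (IplImage.createFrom() and getBufferedImage()) and the name.audet.samuel.javacv.jna.cxcore package of the JNA-based javacv of that time, which is inferred from the toString output above, not confirmed:

import java.awt.image.BufferedImage;
import name.audet.samuel.javacv.jna.cxcore.IplImage;  // package name assumed from this thread's output

public class RoundTripCheck {
    public static void main(String[] args) {
        // A TYPE_INT_RGB image uses DirectColorModel, which is the case fixed above.
        BufferedImage in = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        IplImage ipl = IplImage.createFrom(in);        // BufferedImage -> IplImage
        BufferedImage out = ipl.getBufferedImage();    // IplImage -> BufferedImage
        // Before the fix, 'out' came back as a custom 96-bit image; after it,
        // the printed type and band count should match 'in'.
        System.out.println("in:  " + in);
        System.out.println("out: " + out);
    }
}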
Original comment by samuel.a...@gmail.com
on 2 Oct 2010 at 4:32
Hi Samuel,
Thanks for getting back to me. This new javacv version seems to have solved the
BufferedImage to IplImage conversion problem! However, the rest of the functions
do not seem to handle this "new" IplImage or BufferedImage format. More
specifically, point A shows that the conversion now seems to work fine, while
point B shows that the format does not work with the rest of the face detection
functions:
A) Conversion seems to work fine, because now it prints:
* Original BufferedImage:
BufferedImage@16721bd: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=0 IntegerInterleavedRaster: width = 320 height = 240 #Bands = 3
xOff = 0 yOff = 0 dataOffset[0] 0
* BufferedImage passed to my javacv class (passing works fine, same as before):
System.out.println(origbffimg):
BufferedImage@16721bd: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=0 IntegerInterleavedRaster: width = 320 height = 240 #Bands = 3
xOff = 0 yOff = 0 dataOffset[0] 0
* IplImage obtained from origbffimg:
IplImage grabbedImage = IplImage.createFrom(origbffimg);
System.out.println(grabbedImage):
cxcore$IplImage(native@0x1f502ec0) (112 bytes) {
int nSize@0=70
int ID@4=0
int nChannels@8=4
int alphaChannel@c=0
int depth@10=8
byte colorModel0@14=52
byte colorModel1@15=47
byte colorModel2@16=42
byte colorModel3@17=0
byte channelSeq0@18=42
byte channelSeq1@19=47
byte channelSeq2@1a=52
byte channelSeq3@1b=41
int dataOrder@1c=0
int origin@20=0
int align@24=4
int width@28=140
int height@2c=f0
cxcore$IplROI$ByReference roi@30=null
cxcore$IplImage$ByReference maskROI@34=null
Pointer imageId@38=null
cxcore$IplTileInfo tileInfo@3c=null
int imageSize@40=4b000
Pointer imageData@44=native@0x1f502f50
int widthStep@48=500
int BorderMode0@4c=0
int BorderMode1@50=0
int BorderMode2@54=0
int BorderMode3@58=0
int BorderConst0@5c=0
int BorderConst1@60=0
int BorderConst2@64=0
int BorderConst3@68=0
Pointer imageDataOrigin@6c=native@0x1f502f50
}
* BufferedImage recovered back (this time, it is the same as origbffimg, which
is a good sign!):
BufferedImage bfim = grabbedImage.getBufferedImage();
System.out.println(bfim):
BufferedImage@16721bd: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=0 IntegerInterleavedRaster: width = 320 height = 240 #Bands = 3
xOff = 0 yOff = 0 dataOffset[0] 0
B) However, the new IplImage does not seem to work well with the rest of the
face detection code. For example, it crashes if I do:
frame.showImage(grabbedImage);
The algorithm does seem to detect the right number of faces: I tried it with 1,
2 and 3 faces, and "faces.total" had values 1, 2 and 3 respectively. But the
format still seems to be odd for some functions: my main Java program cannot
display the BufferedImage returned from my javacv function, and within the
javacv function itself I cannot display the image with frame.showImage (I tried
it with both the IplImage and the BufferedImage, with no success). Please let me
know. The exception is:
Exception in thread "AWT-EventQueue-0" java.lang.Error: Cannot call
invokeAndWait from the event dispatcher thread
at java.awt.EventQueue.invokeAndWait(EventQueue.java:981)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:287)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:303)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:306)
at name.audet.samuel.javacv.CanvasFrame.showImage(CanvasFrame.java:320)
at jlp.javaCVpkg.FaceDetection.doFaceDetection(FaceDetection.java:136)
at jlp.DNNemotRec.EmotionRecForm.jButtonFaceDetActionPerformed(EmotionRecForm.java:331)
at jlp.DNNemotRec.EmotionRecForm.access$500(EmotionRecForm.java:38)
at jlp.DNNemotRec.EmotionRecForm$5.actionPerformed(EmotionRecForm.java:158)
at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1995)
at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2318)
at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:236)
at java.awt.Component.processMouseEvent(Component.java:6263)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3267)
at java.awt.Component.processEvent(Component.java:6028)
at java.awt.Container.processEvent(Container.java:2041)
at java.awt.Component.dispatchEventImpl(Component.java:4630)
at java.awt.Container.dispatchEventImpl(Container.java:2099)
at java.awt.Component.dispatchEvent(Component.java:4460)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4574)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4238)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4168)
at java.awt.Container.dispatchEventImpl(Container.java:2085)
at java.awt.Window.dispatchEventImpl(Window.java:2478)
at java.awt.Component.dispatchEvent(Component.java:4460)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
Original comment by Jorge.Pr...@gmail.com
on 2 Oct 2010 at 2:41
Also notice this. The original (full image) BufferedImage is:
BufferedImage@da1515: type = 2 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=ff000000 IntegerInterleavedRaster: width = 97 height = 97 #Bands
= 4 xOff = 0 yOff = 0 dataOffset[0] 0
Then I do the face detection and create a new IplImage with the detected face
only, which I convert to a BufferedImage and pass back to my main project (the
same code as before, copied again below for reference **). However, this is
what I get for the newly extracted face image:
BufferedImage@643edd: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00
bmask=ff amask=0 IntegerInterleavedRaster: width = 320 height = 240 #Bands = 3
xOff = 0 yOff = 0 dataOffset[0] 0
Notice that it has a different type (1 rather than 2), a different number of
bands (3 rather than 4), etc., in case this might help.
**
if (faces.total > 0)
{
    CvRect r = new CvRect(cvGetSeqElem(faces, faces.total - 1/*i*/));
    // Get the region of the face only:
    cvSetImageROI(grabbedImage, r.byValue());
    // Create the destination image:
    IplImage fcimg = cvCreateImage(cvGetSize(grabbedImage), grabbedImage.depth, grabbedImage.nChannels);
    // Copy the sub-image:
    cvCopy(grabbedImage, fcimg, null);
    // Always reset the region of interest:
    cvResetImageROI(grabbedImage);
    // Convert the IplImage face into a BufferedImage:
    bffImage = fcimg.getBufferedImage();
} else {
    // No face detected; need to add some code here for those cases.
}
Original comment by Jorge.Pr...@gmail.com
on 2 Oct 2010 at 3:20
It works, good. About the other things:
"Cannot call invokeAndWait from the event dispatcher thread" -> I'll fix that,
but you should not do processing on the EDT anyway.. use another thread
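A minimal sketch of what "use another thread" can look like here, assuming the jlp.javaCVpkg.FaceDetection class and doFaceDetection2() method posted earlier in this thread, plus a placeholder Swing JLabel named faceLabel for display: do the javacv work on a background thread and only push the result to Swing via invokeLater.

import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class DetectOffEdt {
    // 'faceLabel' is a placeholder for the GUI label; 'detector' is the class
    // holding doFaceDetection2() from the code posted above.
    static void detectAsync(final BufferedImage input, final JLabel faceLabel,
                            final jlp.javaCVpkg.FaceDetection detector) {
        new Thread(new Runnable() {
            public void run() {
                // Heavy OpenCV work stays off the Event Dispatch Thread.
                final BufferedImage face = detector.doFaceDetection2(input);
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        if (face != null) {
                            faceLabel.setIcon(new ImageIcon(face)); // UI update back on the EDT
                        }
                    }
                });
            }
        }).start();
    }
}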
If you want a specific type for your BufferedImage, you should create the
BufferedImage first, then use IplImage.createFrom().
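A sketch of that suggestion, with assumptions: allocate a BufferedImage of the type you want first (TYPE_INT_ARGB here, to match the 4-band image mentioned above), draw the source into it with plain java.awt, then hand it to IplImage.createFrom() so that a later getBufferedImage() comes back in that same type. Only createFrom()/getBufferedImage() from this thread are used; the import path is assumed as in the earlier sketch.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import name.audet.samuel.javacv.jna.cxcore.IplImage;  // package name assumed, as above

public class FixedTypeConversion {
    static IplImage toIplWithType(BufferedImage src) {
        // 1) Create the BufferedImage first, in the type we actually want back.
        BufferedImage typed = new BufferedImage(src.getWidth(), src.getHeight(),
                                                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = typed.createGraphics();
        g.drawImage(src, 0, 0, null);   // copy pixels, converting the color model
        g.dispose();
        // 2) Then build the IplImage from it; getBufferedImage() should now
        //    report the same type and band count as 'typed'.
        return IplImage.createFrom(typed);
    }
}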
Original comment by samuel.a...@gmail.com
on 2 Oct 2010 at 3:42
Original issue reported on code.google.com by
Jorge.Pr...@gmail.com
on 25 Sep 2010 at 9:05