TadasBaltrusaitis / OpenFace

OpenFace – a state-of-the-art tool intended for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

Issues making OpenFace work in Visual Studio 2017 #1079

Closed sanmii closed 3 months ago

sanmii commented 3 months ago

Describe the bug

I am trying to incorporate OpenFace into Visual Studio 2017 so I can use it with /psi, but I have issues with the libraries. I have downloaded the binaries for Windows x64 and I have added CppInerop.dll to my project so I could use OpenFace, however it is not working. Although I have added the path to the library through the references, it keeps giving the following error:

    Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'CppInerop.dll' or one of its dependencies. The specified module could not be found.

Could anybody help me with this issue and let me know if I am missing any other library which could be needed for OpenFace?

I am using Visual Studio 2017 with .NET Framework 4.7.2.

brmarkus commented 3 months ago

What do you mean by "use it with /psi", what does "/psi" stand for? When you say "using Visual Studio 2017 with .NET Framework 4.7.2", do you mean you are using a C++ project, or C#? (I'm not experienced with using C++ libraries from, e.g., a C# application...)

In your Visual Studio project you not only specify the name(s) of the third-party libraries, you also give library search path(s) for where to find them (often two different sets: for release builds and for debug builds). Are you sure you have specified name and path, usually separately? (Usually you specify only the filename in one config option and the path/folder in another config option.) For DLLs (dynamic libraries, contrary to static libraries) you might even need to update the PATH environment variable, as a quick Google search revealed...

EDIT: Have you tried copying the DLL to the CWD/PWD, the same folder as the application executable, as a quick-and-dirty solution?
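
For illustration, a minimal sketch of the PATH idea (again, I'm not a C# person, so treat it only as a sketch; the folder path is a placeholder for wherever your unpacked OpenFace binaries, i.e. CppInerop.dll plus the OpenCV DLLs it depends on, actually live):

    using System;
    using System.IO;
    using System.Linq;

    static class OpenFaceNativeLoader
    {
        // Call once at startup, before any type from CppInerop.dll is touched,
        // so the Windows loader can resolve CppInerop.dll and its native dependencies.
        public static void AddOpenFaceDirToPath()
        {
            // Placeholder path - adjust to your OpenFace binaries folder.
            const string openFaceDir = @"C:\OpenFace\bin";

            var path = Environment.GetEnvironmentVariable("PATH") ?? string.Empty;
            if (!path.Split(Path.PathSeparator).Contains(openFaceDir))
            {
                Environment.SetEnvironmentVariable("PATH", openFaceDir + Path.PathSeparator + path);
            }
        }
    }

Copying the DLLs next to the .exe (or adding a post-build copy step) achieves the same effect and is usually simpler.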

sanmii commented 3 months ago

I am building a console app (.NET Framework) in C#. I want to use /psi from Microsoft (https://github.com/microsoft/psi) in order to synchronize the data obtained from the camera, the AUs extracted from the images, and a microphone I am using, and also to store that data. The /psi code works properly, and I have managed to remove the error I was seeing yesterday by copying the whole OpenFace binary folder into the project's bin\x64\Release folder. However, now I am facing another problem.

The images are gathered correctly, since I save them and I checked they were okay. However, when I use these images to extract the landmarks with these lines of code (lines 74 and 75):

    inputImage.Resource.CopyTo(colorSharedImage.Resource);
    var colorImage = new RawImage(colorSharedImage.Resource.ToBitmap());

I get the following error:

    Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
       at LandmarkDetector.DetectLandmarksInVideo(Mat , CLNF , FaceModelParameters , Mat )
       at CppInterop.LandmarkDetector.CLNF.DetectLandmarksInVideo(RawImage rgb_image, FaceModelParameters modelParams)
       at SimpleNeedForHelp.IntegratedOpenFace.OpenFaceAUExtractor(Shared`1 inputImage, Envelope envelope) in C:\Users\Ane\Desktop\Projects\SimpleNeedForHelp\SimpleNeedForHelp\IntegratedOpenFace.cs:line 75
       at Microsoft.Psi.Pipeline.<>c__DisplayClass106_0`1.b__0(Message`1 m)
       at Microsoft.Psi.Executive.PipelineElement.<>c__DisplayClass43_1`1.<TrackStateObjectOnContext>b__1()
       at Microsoft.Psi.Receiver`1.<>c__DisplayClass13_0.<.ctor>b__0(Message`1 m)
       at Microsoft.Psi.Receiver`1.DeliverNext()
       at Microsoft.Psi.Scheduling.Scheduler.ExecuteAndRelease(SynchronizationLock synchronizationObject, Action action, SchedulerContext context)
       at Microsoft.Psi.Scheduling.Scheduler.Run(Object workItem)
       at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
       at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
       at System.Threading.ThreadPoolWorkQueue.Dispatch()

brmarkus commented 3 months ago

Can you describe what the callstack shows, please? I haven't used PSI so far. PSI calls your code at IntegratedOpenFace.cs:line 75? And in that code of yours you are calling OpenFace APIs? It looks like you are passing allocated memory from your application into an OpenFace API. Do you have your application built with debug info, and can you set a breakpoint before IntegratedOpenFace.cs:line 75 and then check the parameters you pass in, their addresses, the memory's content? Is SimpleNeedForHelp.IntegratedOpenFace.OpenFaceAUExtractor(Shared`1 inputImage, Envelope envelope) still your code?

EDIT: Can you try to replace the calls to OpenFace with some first sanity and consistency checks, like displaying the image, printing properties of the passed parameters, or storing the parameters to a file, using the parameters to see if they are valid (if debugging doesn't reveal enough information)?
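
Roughly what I have in mind, as a sketch only (I'm not a C#/PSI user, so the exact property names are assumptions based on your code, and the dump folder is a placeholder that has to exist):

    private void OpenFaceAUExtractor(Shared<Image> inputImage, Envelope envelope)
    {
        // 1. Print basic properties of the incoming frame.
        Console.WriteLine($"{envelope.OriginatingTime:O}: " +
                          $"{inputImage.Resource.Width}x{inputImage.Resource.Height}, " +
                          $"format={inputImage.Resource.PixelFormat}");

        // 2. Dump the frame to disk so it can be inspected visually.
        //    "C:\temp\frames" is a placeholder folder.
        inputImage.Resource.ToBitmap().Save($@"C:\temp\frames\frame_{envelope.SequenceId}.png");

        // 3. Only once these look sane, put the OpenFace calls back in.
    }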

sanmii commented 3 months ago

So I have a main script called MainScript.cs, from where I gather the data from the camera. I decode the image and send it into IntegratedOpenFace.cs in order to extract the landmarks. This is the code in MainScript.cs which calls the OpenFace script:

    // Send imagestream from each camera to the openface component to get the AUs and confidence
    var openFaceComponent = new IntegratedOpenFace(p);
    // Every 10th image is sent to be processed making each timestep 1/3 of a second
    decodedImageStream.Where((img, e) => e.SequenceId % 10 == 0).PipeTo(openFaceComponent.In);

And this is the whole IntegratedOpenFace.cs code:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Psi;
using Microsoft.Psi.Imaging;
using CppInterop.LandmarkDetector;
using FaceAnalyser_Interop;
using FaceDetectorInterop;
using OpenCVWrappers;
using GazeAnalyser_Interop;

namespace SimpleNeedForHelp
{
class IntegratedOpenFace
{
    private static FaceModelParameters faceModelParameters;
    private static FaceDetector faceDetector;
    private static CLNF landmarkDetector;
    private static FaceAnalyserManaged faceAnalyser;

    public IntegratedOpenFace(Pipeline p)
    {
        // Input into the component is a frame from the video
        In = p.CreateReceiver<Shared<Image>>(this, OpenFaceAUExtractor, nameof(In));
        // Outputs from the component are a dictionary of the AUs' occurrence and intensity, and the facial detection confidence
        Out = p.CreateEmitter<Dictionary<string, Tuple<double, double>>>(this, nameof(Out));
        Conf = p.CreateEmitter<double>(this, nameof(Conf));

        // Add this component to the pipeline
        p.PipelineRun += initializeOpenFace;
        p.PipelineCompleted += OnPipelineCompleted;
        // Create the store
        var store = PsiStore.Create(p, "demo", "C:\\Users\\Ane\\Desktop\\Projects\\StoredData");
    }

    public Receiver<Shared<Image>> In { get; private set; }

    // String is AU name, tuple is <intensity, occurrence>
    public Emitter<Dictionary<string, Tuple<double, double>>> Out { get; private set; }

    // Facial detection confidence
    public Emitter<double> Conf { get; private set; }

    // Initialize variables to set up OpenFace for AU
    private void initializeOpenFace(object sender, PipelineRunEventArgs e)
    {
        faceModelParameters = new FaceModelParameters(AppDomain.CurrentDomain.BaseDirectory, true, false, false);
        faceModelParameters.optimiseForVideo();

        faceDetector = new FaceDetector(faceModelParameters.GetHaarLocation(), faceModelParameters.GetMTCNNLocation());
        if (!faceDetector.IsMTCNNLoaded())
        {
            faceModelParameters.SetFaceDetector(false, true, false);
        }

        landmarkDetector = new CLNF(faceModelParameters);
        faceAnalyser = new FaceAnalyserManaged(AppDomain.CurrentDomain.BaseDirectory, true, 112, true);

        landmarkDetector.Reset();
        faceAnalyser.Reset();
    }

    // This function calculates the AUs and facial detection confidence for a given image. The arguments are:
    // inputImage: image collected from camera for the purpose of AU detection
    // envelope: contains information such as originating time
    private void OpenFaceAUExtractor(Shared<Image> inputImage, Envelope envelope)
    {
        // Create empty double to store facial detection confidence
        double confidence = 0.0;
        // Create an empty dictionary to store AU occurrence and intensities
        Dictionary<string, Tuple<double, double>> actionUnits = new Dictionary<string, Tuple<double, double>>();

        // Converts the image input into a raw image that can be used to detect facial landmarks
        using (var colorSharedImage = ImagePool.GetOrCreate(inputImage.Resource.Width, inputImage.Resource.Height, inputImage.Resource.PixelFormat))
        {
            inputImage.Resource.CopyTo(colorSharedImage.Resource);
            var colorImage = new RawImage(colorSharedImage.Resource.ToBitmap());
            // Checks if landmark detection was successful
            if (landmarkDetector.DetectLandmarksInVideo(colorImage, faceModelParameters))
            {
                var landmarks = landmarkDetector.CalculateAllLandmarks();
                // Calculate confidence for landmark detection
                confidence = landmarkDetector.GetConfidence();
                //Calculate the AUs
                var (actionUnitIntensities, actionUnitOccurences) = faceAnalyser.PredictStaticAUsAndComputeFeatures(colorImage, landmarks);
                actionUnits = actionUnitIntensities.ToDictionary(kv => kv.Key, kv => new Tuple<double, double>(kv.Value, actionUnitOccurences[kv.Key]));
            }

            // Output the facial detection confidence and AUs
            Conf.Post(confidence, envelope.OriginatingTime);
            Out.Post(actionUnits, envelope.OriginatingTime);
        }
    }

    private void OnPipelineCompleted(object sender, PipelineCompletedEventArgs e)
    {
    }
}

}

I have tried breakpoints, and before line 75 there is data; then it suddenly stops working with the error mentioned before. Yes, I have stored the images before sending them to the OpenFace script and they work properly; the video appears as recorded.

sanmii commented 3 months ago

When debugging, this is the exception thrown:

    Exception thrown at 0x00007FF8B7B585C7 (CppInerop.dll) in SimpleNeedForHelp.exe: 0xC0000005: Access violation reading location 0x0000000000000008.

brmarkus commented 3 months ago

Yes, this is really an access violation - the address is 0x008, which is almost a null-pointer access...

From the callstack it starts from your code:

        inputImage.Resource.CopyTo(colorSharedImage.Resource);
        var colorImage = new RawImage(colorSharedImage.Resource.ToBitmap());
        // Checks if landmark detection was successful
 =>     if (landmarkDetector.DetectLandmarksInVideo(colorImage, faceModelParameters))
        {

How could you validate the parameter colorImage? I don't have experience with C# and calls like inputImage.Resource.CopyTo(), new RawImage(), colorSharedImage.Resource.ToBitmap()...

Could colorImage be invalid, does Resource.ToBitmap() return something valid?

sanmii commented 3 months ago

With the functions you mentioned I am quite new too, but it looks like the information is correct. I have debugged to see the information before the breakpoint and it looks like there is information of the image. I attach screenshots: (screenshots attached). It is in Resource.ToBitmap() where it gives the error.

brmarkus commented 3 months ago

Can you store colorSharedImage.Resource.ToBitmap() in a variable and then check the content?

What does colorImage look like in the debugger?

sanmii commented 3 months ago

I don't know why, but whenever I try to do anything with the image, the debugger just jumps to the next line until the break: (screenshot attached). I can check what is in the confidence and Dictionary breakpoints, but it skips over the var BitImage I have created.

brmarkus commented 3 months ago

Was your application re-built correctly? Otherwise the "debug binary" doesn't contain your code changes and the debugger just jumps to the next instructions without your latest code changes...

        inputImage.Resource.CopyTo(colorSharedImage.Resource);
     => var colorImage = new RawImage(colorSharedImage.Resource.ToBitmap());
        // Checks if landmark detection was successful
        if (landmarkDetector.DetectLandmarksInVideo(colorImage, faceModelParameters))
        {

Does var tempVariable = colorSharedImage.Resource.ToBitmap() contain something valid?

Would var colorImage = new RawImage( tempVariable ) contain something valid?
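
As a sketch of what I mean (only the Bitmap properties are checked here, since I don't know what RawImage exposes):

    // Split the chained call so every intermediate result can be inspected.
    var tempBitmap = colorSharedImage.Resource.ToBitmap();
    if (tempBitmap == null)
    {
        Console.WriteLine("ToBitmap() returned null");
        return;
    }
    Console.WriteLine($"Bitmap: {tempBitmap.Width}x{tempBitmap.Height}, {tempBitmap.PixelFormat}");

    // Set a breakpoint here and look at colorImage before it goes into OpenFace.
    var colorImage = new RawImage(tempBitmap);
    if (landmarkDetector.DetectLandmarksInVideo(colorImage, faceModelParameters))
    {
        // ...
    }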

sanmii commented 3 months ago

I have re-built it properly and it works. I have added Console.Write calls in order to check what is inside the variables, and the inputImage does have the image's information. It provides the correct width and height, and it also provides the size and everything.

brmarkus commented 3 months ago

Can you try to use your images or video-files and test the sample applications from OpenFace (like FeatureExtraction, FaceLandmarkVidMulti)?

Maybe your images and videos look fine to you, but contain an unsupported color format, or the resolution/height/width/stride is too big, or they contain invalid timestamps, or the image/video codec is not supported. When using the OpenFace samples you would be able to debug OpenFace and check what the API DetectLandmarksInVideo() is actually doing with your image/video/frame.
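
For example (assuming the command-line options from the OpenFace wiki; the paths are placeholders):

    :: Feed one of your recorded videos through OpenFace's own sample tool
    FeatureExtraction.exe -f "C:\temp\myrecording.avi" -out_dir "C:\temp\openface_out"

    :: Or run the still-image tool on a folder of the frames you saved
    FaceLandmarkImg.exe -fdir "C:\temp\frames" -out_dir "C:\temp\openface_out"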

sanmii commented 3 months ago

I have tried one more thing: I tried to save the bitmap image to see if it is working properly, and it is. The code:

        double confidence = 0.0;
        var colorSharedImage2 = ImagePool.GetOrCreate(inputImage.Resource.Width, inputImage.Resource.Height, inputImage.Resource.PixelFormat);
        Console.WriteLine(inputImage.Resource.Height.ToString());
        Console.WriteLine(colorSharedImage2.Resource.Height.ToString());
        inputImage.Resource.CopyTo(colorSharedImage2.Resource);
        var Bitimage = colorSharedImage2.Resource.ToBitmap();

        // Get an ImageCodecInfo object that represents the JPEG codec.
        myImageCodecInfo = GetEncoderInfo("image/jpeg");

        // Create an Encoder object based on the GUID
        // for the Quality parameter category.
        myEncoder = Encoder.Quality;

        // Create an EncoderParameters object.
        // An EncoderParameters object has an array of EncoderParameter
        // objects. In this case, there is only one
        // EncoderParameter object in the array.
        myEncoderParameters = new EncoderParameters(1);

        // Save the bitmap as a JPEG file with quality level 25.
        myEncoderParameter = new EncoderParameter(myEncoder, 25L);
        myEncoderParameters.Param[0] = myEncoderParameter;
        Bitimage.Save("Shapes025.jpg", myImageCodecInfo, myEncoderParameters);

And the image appears perfectly in my folder. Hence the issue is with the RawImage function.

sanmii commented 3 months ago

Okay, I have checked that the bitmap image has information and that RawImage works properly, but when using it through landmarkDetector.DetectLandmarksInVideo(colorImage2, faceModelParameters) I see the following error in the rawImage variable. Before the landmark function: (screenshot attached)

When calling the landmark function: (screenshot attached)

brmarkus commented 3 months ago

This is strange... When I search for the API DetectLandmarksInVideo() in OpenFace, I find it expecting at least three parameters, but your code calls it with only two: landmarkDetector.DetectLandmarksInVideo(colorImage, faceModelParameters).

EDIT: Would it be possible to use one of the OpenFace sample applications with your images or videos?

sanmii commented 3 months ago

I will now try to use the image I have saved and see if I can make it work with the OpenFace repository.

Anyway, stepping into the CLNF clearly shows that for video only two parameters are used: (screenshot attached)

brmarkus commented 3 months ago

Ok, I was searching in the OpenFace repository in the C++ code only (but there is also C# code with 3 parameters). You have an additional C++/C# abstraction layer?
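
To illustrate the two layers, both signatures are visible in the callstack you posted earlier: the managed wrapper your code calls, and the native function it forwards to.

    // Managed wrapper in CppInerop.dll - this is what the C# code calls (two parameters):
    CppInterop.LandmarkDetector.CLNF.DetectLandmarksInVideo(RawImage rgb_image, FaceModelParameters modelParams)

    // Native C++ function it forwards to, one frame deeper in the callstack:
    LandmarkDetector.DetectLandmarksInVideo(Mat* , CLNF* , FaceModelParameters* , Mat* )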


sanmii commented 3 months ago

I use OpenCVWrappers and I have added these libraries:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Psi;
    using Microsoft.Psi.Imaging;
    using CppInterop.LandmarkDetector;
    using FaceAnalyser_Interop;
    using FaceDetectorInterop;
    using OpenCVWrappers;
    using GazeAnalyser_Interop;
    using System.Drawing;
    using System.Drawing.Imaging;

To make the OpenFace part work I have added all the content of the binaries inside the Release folder.

sanmii commented 3 months ago

I have been able to solve it. It was due to the models; they were not correctly downloaded. I have downloaded them manually and now the system works! Thank you for your assistance!
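
For anyone hitting the same thing, a small check like this would have pointed at the problem earlier. It is only a sketch that could go at the top of initializeOpenFace (it needs using System.IO; and assumes the models sit in a "model" subfolder next to the executable, as in the OpenFace binary release):

    // List what actually ended up in the model folder next to the exe.
    // The large patch-expert .dat files are the ones that are typically
    // missing or truncated when the download did not complete.
    var modelDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "model");
    if (!Directory.Exists(modelDir))
    {
        Console.WriteLine($"Model folder not found: {modelDir}");
    }
    else
    {
        foreach (var f in Directory.EnumerateFiles(modelDir, "*", SearchOption.AllDirectories))
        {
            Console.WriteLine($"{new FileInfo(f).Length,12:N0} bytes  {f}");
        }
    }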

brmarkus commented 3 months ago

Sounds great!! How did you find it? The exception and callstack weren't very helpful...:

    Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
       at LandmarkDetector.DetectLandmarksInVideo(Mat* , CLNF* , FaceModelParameters* , Mat* )
       at CppInterop.LandmarkDetector.CLNF.DetectLandmarksInVideo(RawImage rgb_image, FaceModelParameters modelParams)
       at SimpleNeedForHelp.IntegratedOpenFace.OpenFaceAUExtractor(Shared`1 inputImage, Envelope envelope) in 

sanmii commented 3 months ago

Just searching through other issues filed against OpenFace by other people. It was not the same issue, but another person was also having problems while using this same function, and it was due to missing models. So I checked my model folder and realized there were some models missing (:

brmarkus commented 3 months ago

Perfect! Feel free to close this issue.