I see several ways to apply a noise filter, but how do I apply the noise reduction feature found in the IC Capture utility? It is accessed via the Device menu and has the following values: Disabled, 2 Frames, 4 Frames, 8 Frames, 16 Frames, and 32 Frames.
The Averaging Denoise is a special function in IC Capture, which simply generates an average image. Therefore, it is not available as a camera property. I asked the OpenAI chatbot for an OpenCV C# program that shows how the averaging can be done. (IC Capture does not use OpenCV.)
using System;
using System.Collections.Generic;
using OpenCvSharp;

namespace ImageAveraging
{
    class Program
    {
        static void Main(string[] args)
        {
            // Load the images into a list
            List<Mat> images = new List<Mat>();
            for (int i = 0; i < args.Length; i++)
            {
                images.Add(new Mat(args[i]));
            }

            // Accumulate in a 32-bit mat so the sum cannot saturate at 255
            Mat sum = new Mat(images[0].Size(), MatType.CV_32SC3);
            sum.SetTo(Scalar.All(0));
            foreach (Mat image in images)
            {
                using (Mat converted = new Mat())
                {
                    image.ConvertTo(converted, sum.Type());
                    sum += converted;
                }
            }

            // Divide by the number of images, convert back to 8 bit and save
            sum /= images.Count;
            Mat result = new Mat();
            sum.ConvertTo(result, images[0].Type());
            result.SaveImage("average.jpg");
        }
    }
}
This code assumes that you have already installed OpenCvSharp and added it to your project references.
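(For reference, and as an assumption about current package names rather than anything from IC Capture: OpenCvSharp is usually pulled in via the OpenCvSharp4 NuGet package together with a matching runtime package such as OpenCvSharp4.runtime.win, plus OpenCvSharp4.Extensions for the Bitmap conversions used later in this thread.)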
To use this code, pass the filenames of the images you want to average as command line arguments when running the program. For example:
ImageAveraging.exe image1.jpg image2.jpg image3.jpg
This will create a new image called "average.jpg" which is the average of the input images. Averaging the images like this can help to reduce noise, as the noise will tend to cancel out when the images are added together.
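As a side note on why this works (a standard statistics result, not anything specific to IC Capture): for independent zero-mean noise, averaging N frames reduces the noise standard deviation by a factor of sqrt(N). For example, the 16-frame setting should reduce the noise level roughly fourfold, and the 32-frame setting by about 5.7x.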
You can do something similar with the IFrame buffers you receive from IC Imaging Control.
@TIS-Stefan: Thank you so much for the very prompt and very helpful reply!
I decided to use a software trigger to capture the number of frames I needed to average, similar to the code found here: https://github.com/TheImagingSource/IC-Imaging-Control-Samples/blob/master/c%23/Softwaretrigger-Save-Image/SoftwareTrigger/Program.cs
I then tried to average the frames using a simple loop over the buffers, but this was horribly slow. Throwing the buffers into a matrix using OpenCvSharp turned out to be much, much faster! I guess the native code must use SSE instructions or something. Here's the code I used in case anyone wants it:
private static Bitmap AverageFrames(List<ImageBuffer> frames)
{
    Bitmap image;
    if (frames.Count > 1)
    {
        ImageBuffer firstFrame = frames[0];
        using (Mat firstFrameMat = BitmapConverter.ToMat(firstFrame.Bitmap))
        {
            // use a mat type that can handle the sum
            Mat average = new Mat(firstFrameMat.Size(), MatType.CV_32SC3);
            try
            {
                // convert frames to average mat type and add
                firstFrameMat.ConvertTo(average, average.Type());
                for (int frameIndex = 1; frameIndex < frames.Count; frameIndex++)
                {
                    ImageBuffer frame = frames[frameIndex];
                    if (frame.FrameType != firstFrame.FrameType)
                    {
                        throw new InvalidOperationException("Frames are not the same type");
                    }
                    using (Mat frameMat = BitmapConverter.ToMat(frame.Bitmap),
                               convertedMat = new Mat(frameMat.Size(), average.Type()))
                    {
                        frameMat.ConvertTo(convertedMat, convertedMat.Type());
                        average += convertedMat;
                    }
                }
                average /= frames.Count;
                // convert average mat into bitmap
                using (Mat convertedMat = new Mat(average.Size(), firstFrameMat.Type()))
                {
                    average.ConvertTo(convertedMat, convertedMat.Type());
                    image = BitmapConverter.ToBitmap(convertedMat);
                }
            }
            finally
            {
                average.Dispose();
            }
        }
    }
    else
    {
        image = frames.FirstOrDefault()?.Bitmap;
    }
    return image;
}
A few questions came up while looking over things:
With the software trigger, there seems to be a need to "prime" the auto brightness function by capturing images manually. Is there a way around this? Or an expected number of frames that I need to capture to get the auto brightness function to a steady-state (given a constant light source)?
The FrameType for the ImageBuffers always seems to be RGB24, regardless of the video format. For that reason, I assume MatType.CV_32SC3 should always work for the average mat. Is that a valid assumption?
I don't really know the underlying buffer format of the ImageBuffer, so I used the ImageBuffer.Bitmap property for all of the conversions. Is there a performance overhead with using the ImageBuffer.Bitmap property vs. operating on the native buffer via ImageBuffer.GetIntPtr()?
I notice in most of the example code I have seen that properties acquired from the ICImagingControl (e.g. ICImagingControl.VCDPropertyItems.FindInterface()) are all IDisposable, but I do not see them disposed. Should they be disposed, or would that cause issues? Are there any cases where we need to dispose of resources obtained from the ICImagingControl? Disposing of the ImageBuffer seems to cause issues down the line, so I guess that is not needed/wanted either?
Hello
With the software trigger, there seems to be a need to "prime" the auto brightness function by capturing images manually. Is there a way around this? Or an expected number of frames that I need to capture to get the auto brightness function to a steady-state (given a constant light source)?
Never use the automatics in trigger mode, because the results can be unexpected. Sensors like the IMX236 and IMX290 run in global shutter release mode when trigger mode is enabled, so the image becomes brighter toward the end (bottom) of the image. Some other sensors apply exposure and gain only after a frame has been delivered. Additionally, the automatics work on frames, so a couple of frames are needed for them to adjust.
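A minimal sketch of what turning the automatics off before enabling trigger mode could look like, using the same VCD property API that appears later in this thread; "ic" is a hypothetical ICImagingControl instance and the code is untested:

VCDSwitchProperty autoExposure = (VCDSwitchProperty)ic.VCDPropertyItems.FindInterface(
    VCDGUIDs.VCDID_Exposure, VCDGUIDs.VCDElement_Auto, VCDGUIDs.VCDInterface_Switch);
if (autoExposure != null)
{
    autoExposure.Switch = false; // disable auto exposure
}

VCDSwitchProperty autoGain = (VCDSwitchProperty)ic.VCDPropertyItems.FindInterface(
    VCDGUIDs.VCDID_Gain, VCDGUIDs.VCDElement_Auto, VCDGUIDs.VCDInterface_Switch);
if (autoGain != null)
{
    autoGain.Switch = false; // disable auto gain
}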
The FrameType for the ImageBuffers always seems to be RGB24, regardless of the video format. For that reason, I assume MatType.CV_32SC3 should always work for the average mat. Is that a valid assumption?
For RGB24 I use CV_8UC3. But if you sum the frames, you need a 32-bit mat such as CV_32SC3 in order to avoid overflows.
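To illustrate the overflow point with a tiny, hypothetical example (OpenCV arithmetic on 8-bit mats saturates rather than wrapping or widening):

using (Mat a = new Mat(1, 1, MatType.CV_8UC1, new Scalar(200)))
using (Mat b = new Mat(1, 1, MatType.CV_8UC1, new Scalar(200)))
using (Mat sum32 = new Mat())
{
    Mat sum8 = a + b;                           // clamped to 255, not 400
    Cv2.Add(a, b, sum32, null, MatType.CV_32S); // widened result holds 400
}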
I don't really know the underlying buffer format of the ImageBuffer, so I used the ImageBuffer.Bitmap property for all of the conversions. Is there a perfomance overhead with using the ImageBuffer.Bitmap property vs. operating on the native buffer via ImageBuffer.GetIntPtr()?
The ImageBuffer is a simple data blob. If RGB24 is used, the bytes are BGR; RGB32 is BGRA. Yes, converting this into a Bitmap first is very slow. I assume you use a FrameQueueBuffer; let's call it "f".
This should be faster:
var x = new Mat(f.FrameType.Height, f.FrameType.Width, MatType.CV_8UC3, f.GetIntPtr());
This avoids the complete memory copy and the bitmap memory allocation, and it works with the FrameQueueSink. I did not test this explicitly, so I hope there are no errors in it. Keep in mind that the FrameQueueSink provides a list of FrameQueueBuffers, so you could do something like:
_bufferlist = _sink.PopAllOutputQueueBuffers();
// accumulate in a 32-bit mat to avoid the overruns mentioned above
Mat result = new Mat(_bufferlist[0].FrameType.Height, _bufferlist[0].FrameType.Width, MatType.CV_32SC3);
result.SetTo(Scalar.All(0));
for (int i = 0; i < _bufferlist.Length; i++)
{
    using (Mat frame = new Mat(_bufferlist[i].FrameType.Height, _bufferlist[i].FrameType.Width, MatType.CV_8UC3, _bufferlist[i].GetIntPtr()),
               converted = new Mat())
    {
        frame.ConvertTo(converted, result.Type());
        result += converted;
    }
}
result /= _bufferlist.Length;
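One caution worth adding (an assumption based on standard OpenCV semantics, not something stated in the SDK docs): a Mat constructed from an IntPtr wraps the existing buffer without copying it, so the FrameQueueBuffer must stay alive, and must not be requeued, while such a Mat is in use. Clone() produces an owned copy if the buffer has to be given back; "buf" here is a hypothetical IFrameQueueBuffer:

using (Mat wrapped = new Mat(buf.FrameType.Height, buf.FrameType.Width, MatType.CV_8UC3, buf.GetIntPtr()))
{
    Mat owned = wrapped.Clone(); // safe to use after the buffer is requeued or disposed
}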
Stefan
Great info! So the point of using trigger mode was to avoid the heavy CPU usage that live streaming seems to incur; I will avoid the automatics if I use it. Can I still use the FrameQueueBuffer? Also, disposing is no good, correct? Either way, I should be able to use the ImageBuffer I get from the trigger event in the same manner. Here's the updated code:
private static Bitmap AverageFrames(List<ImageBuffer> frames)
{
    Bitmap image;
    if (frames.Count > 1)
    {
        ImageBuffer firstFrame = frames[0];
        using (Mat firstFrameMat = new Mat(firstFrame.FrameType.Height, firstFrame.FrameType.Width, MatType.CV_8UC3, firstFrame.GetIntPtr()))
        {
            // use a mat type that can handle the sum
            Mat average = new Mat(firstFrameMat.Size(), MatType.CV_32SC3);
            try
            {
                // convert frames to average mat type and add
                firstFrameMat.ConvertTo(average, average.Type());
                for (int frameIndex = 1; frameIndex < frames.Count; frameIndex++)
                {
                    ImageBuffer frame = frames[frameIndex];
                    if (frame.FrameType != firstFrame.FrameType)
                    {
                        throw new InvalidOperationException("Frames are not the same type");
                    }
                    using (Mat frameMat = new Mat(frame.FrameType.Height, frame.FrameType.Width, MatType.CV_8UC3, frame.GetIntPtr()),
                               convertedMat = new Mat(frameMat.Size(), average.Type()))
                    {
                        frameMat.ConvertTo(convertedMat, convertedMat.Type());
                        average += convertedMat;
                    }
                }
                average /= frames.Count;
                // convert average mat into bitmap
                using (Mat convertedMat = new Mat(average.Size(), firstFrameMat.Type()))
                {
                    average.ConvertTo(convertedMat, convertedMat.Type());
                    image = BitmapConverter.ToBitmap(convertedMat);
                }
            }
            finally
            {
                average.Dispose();
            }
        }
    }
    else
    {
        image = frames.FirstOrDefault()?.Bitmap;
    }
    return image;
}
Hello
Can I still use the FrameQueueBuffer?
Yes
Also, disposing is no good, correct?
That depends... if memory is allocated faster than the Garbage Collector cleans it up, then you will run into an out of memory error sooner or later.
Merry Christmas!
Stefan
Also, disposing is no good, correct?
That depends... if memory is allocated faster than the Garbage Collector cleans it up, then you will run into an out of memory error sooner or later.
This is worrisome... If I try to dispose of the ImageBuffer that I receive in the ICImagingControl.ImageAvailable event, at some point later another frame I get from this event has most of its properties (including FrameType) set to null, leading me to believe that the ImageBuffer may be reused by the control. Also, none of the samples I've seen dispose of the items retrieved from ICImagingControl.VCDPropertyItems.FindInterface().
Merry Christmas!
Stefan
Merry Christmas to you!
I went ahead and changed over to the use the FrameQueueSink. Thanks again for everything!
Sorry to raise another question, but it is related: now that I am using FrameQueueSink, does this mean I no longer need the software trigger? And does this mean I can use the automatics (such as auto white balance) as well? I see that images arrive continuously via the frameQueued callback:
public FrameQueueSink(Func<IFrameQueueBuffer, FrameQueuedResult> frameQueued, FrameTypes frameTypeList, int initialBufferCount);
I thought I had answered that already...
The FrameQueueSink only saves images into a queue. It has no relation to the camera's operation modes.
I have lost the thread a little bit. Why did you use the software trigger?
Do you want images on demand or a live stream?
Stefan
I was using the snapshot because I needed to average frames to apply noise reduction similar to what I was doing in the IC Capture utility. I was originally using the software trigger to get the frames via the ICImagingControl.ImageAvailable event. I later changed to FrameQueueSink based on the discussions in this thread, but was still using the software trigger because I thought it was needed to get the image in the FrameQueueSink handler. It looks like I get the frames either way.
From what I can tell, I do not need the software trigger while using FrameQueueSink; I can just grab the images and process them when I am ready. Is that correct? And if so, will auto white balance apply correctly to the incoming frames?
You can work without trigger mode (turn trigger mode off!), which allows you to use all the automatics. I suppose you simply want to snap a sequence of images and denoise them. You can use the FrameSnapSink for this:
using System;
using TIS.Imaging;

namespace CaptureSequence
{
    internal class Sequence
    {
        private ICImagingControl _ic = new ICImagingControl();
        private FrameSnapSink _sink;

        public Sequence()
        {
            _sink = new FrameSnapSink(MediaSubtypes.RGB32);
            _ic.Sink = _sink;
            try
            {
                _ic.LoadDeviceStateFromFile("device.xml", true);
            }
            catch
            {
                _ic.ShowDeviceSettingsDialog();
                if (_ic.DeviceValid)
                {
                    _ic.ShowPropertyDialog();
                    _ic.SaveDeviceStateToFile("device.xml");
                }
            }
            if (_ic.DeviceValid)
            {
                _ic.LiveStart();
                System.Threading.Thread.Sleep(1000);
            }
        }

        public void capture(int bufferCount)
        {
            if (_ic.DeviceValid)
            {
                IFrameQueueBuffer[] bufferlist = _sink.SnapSequence(bufferCount, TimeSpan.FromSeconds(5));
                for (int i = 0; i < bufferlist.Length; i++)
                {
                    Console.WriteLine(i);
                }
            }
        }
    }
}
This class is used as follows:

var sequence = new Sequence();
sequence.capture(8);

Call sequence.capture(8) every time you need an image sequence. You can add your denoising to that class and return the denoised image. While the object of this class exists, the camera is running and the automatics adapt to your lighting environment.
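For completeness, here is a hedged sketch (untested, my own combination of the pieces above rather than anything from the SDK samples) of how such a class could fold in the averaging discussed earlier in this thread. Since the sink above snaps RGB32, the frame mats are CV_8UC4 and the accumulator CV_32SC4; it assumes using OpenCvSharp, OpenCvSharp.Extensions and System.Drawing:

public Bitmap CaptureAveraged(int bufferCount)
{
    IFrameQueueBuffer[] buffers = _sink.SnapSequence(bufferCount, TimeSpan.FromSeconds(5));
    int height = buffers[0].FrameType.Height;
    int width = buffers[0].FrameType.Width;
    using (Mat sum = Mat.Zeros(height, width, MatType.CV_32SC4)) // 32 bit so the sum cannot saturate
    {
        foreach (IFrameQueueBuffer buffer in buffers)
        {
            using (Mat frame = new Mat(height, width, MatType.CV_8UC4, buffer.GetIntPtr()),
                       converted = new Mat())
            {
                frame.ConvertTo(converted, sum.Type());
                Cv2.Add(sum, converted, sum);
            }
        }
        using (Mat average = new Mat())
        {
            // divide by the frame count while converting back to 8 bit
            sum.ConvertTo(average, MatType.CV_8UC4, 1.0 / buffers.Length);
            return BitmapConverter.ToBitmap(average);
        }
    }
}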
Stefan
Hello
I got a notification about your answer, but I cannot find it in the issues.
The FrameSnapSink does not create more CPU load than the FrameQueueSink. Both of them copy frames, that is all.
You can get zero frames if no frames are coming into the computer; therefore, a timeout should be used.
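A small sketch of what guarding the snap with a timeout might look like; whether a timeout surfaces as an exception or as a shorter buffer list is an assumption to verify against your SDK version, so both paths are handled here:

try
{
    IFrameQueueBuffer[] buffers = _sink.SnapSequence(8, TimeSpan.FromSeconds(5));
    if (buffers.Length < 8)
    {
        Console.WriteLine("Only " + buffers.Length + " frames arrived before the timeout.");
    }
}
catch (Exception ex)
{
    Console.WriteLine("Timed out waiting for frames: " + ex.Message);
}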
The code you posted in your reply that I can no longer see uses the FrameQueueSink and some threading stuff. You may try my sample above.
Stefan
Hi, thanks for the update and sorry about the confusion. I was planning a new response. The code I posted had some flaws; mainly, it was requeuing buffers that were still being processed, probably because I originally assumed the buffers were only coming in via the software trigger. I didn't notice that until after I posted, and I am refactoring the code to not do this. I'm not sure why my CPU spikes with the FrameSnapSink, since I used code very similar to the sample posted above. I will experiment a bit more and update. Thanks!
OK, so I think I have things sorted out; I added a summary of events below. I still have two questions, if you don't mind answering:
1. Is there anything wrong with calling the software trigger's Push in the FrameQueueSink frame handler? It seems to work fine:
sink = new FrameQueueSink(FrameHandler, new FrameTypes(), frameBuffers);

private FrameQueuedResult FrameHandler(IFrameQueueBuffer frame)
{
    FrameQueuedResult result = FrameQueuedResult.ReQueue;
    try
    {
        lock (frameLock)
        {
            if ((frameCount > 0) && (frames.Count < frameCount))
            {
                frames.Add(frame);
                // set the result now in case something throws later
                result = FrameQueuedResult.SkipReQueue;
                if (frames.Count < frameCount)
                {
                    softwareTrigger.Push();
                }
                else
                {
                    framesCaptured.TrySetResult(true);
                }
            }
        }
    }
    catch (Exception ex)
    {
        log.Error(ex);
    }
    return result;
}
2. Do the VCD properties I get from the camera control need to be disposed by my code? For example:
softwareTrigger = (VCDButtonProperty)cameraControl.VCDPropertyItems.FindInterface(VCDGUIDs.VCDID_TriggerMode, VCDGUIDs.VCDElement_SoftwareTrigger, VCDGUIDs.VCDInterface_Button);
Visual Studio gives me a "CA2213: Disposable fields should be disposed" warning.
---
Summary of what happened:
1. I was originally using FrameSnapSink and SnapSingle to grab an image.
2. To avoid high CPU usage that I was seeing during normal operation, I switched to using a software trigger.
3. During this time, I realized the denoise property was not the same as the IC Capture denoise. I posted the question about how to denoise like the IC Capture.
4. Based on our conversations, I started acquiring frames from the ICImagingControl.ImageAvailable event and averaging them.
5. After more discussions, I was unsure of how to requeue/dispose of the images acquired from the ICImagingControl.ImageAvailable event. You had mentioned the FrameQueueSink, and I also noticed the help for ICImagingControl.ImageAvailable mentioned that it was deprecated and I should use the sink directly. At this time, I changed over to use FrameQueueSink.
6. At some point, I disabled the software trigger. I mistakenly thought it was still enabled, and I was continuously receiving frames. I thought: "I do not need a software trigger anymore". So I brought up the thread again.
7. I was correct, I no longer needed it. But then I saw my CPU was high again... so I had come full circle.
Thank you so much for your patience, and I apologize for any trouble that I may have caused in all this.
Hello
I wonder why you have high CPU load when saving images into memory; that is all the FrameSnapSink does. Which computer model do you use?
Question 1: You can do that. It makes image capture a little slower than using the FrameSnapSink as I showed in my sample, which explains why you have lower CPU load. Suggestion: try my sample, but set a lower frame rate. Advantage: you can run the automatics all the time, and it makes your programming much easier.
Question 2: Normally not, unless you run into multithreading issues. At least, I have not seen this message myself.
I still guess, you want to capture some images, average them and use them later. Is that correct?
Stefan
I wonder why you have high CPU load when saving images into memory; that is all the FrameSnapSink does. Which computer model do you use?
A Dell laptop running Windows 10. One core seems pegged most of the time (CPU ~ 25%), unless I am running in software trigger mode. I see the same behavior in IC Capture.
Question 1: You can do that. It makes image capture a little slower than using the FrameSnapSink as I showed in my sample, which explains why you have lower CPU load. Suggestion: try my sample, but set a lower frame rate. Advantage: you can run the automatics all the time, and it makes your programming much easier.
I've moved away from needing the automatics, so I think the FrameQueueSink works better for me. If I need to move back, I can try a lower frame rate as you suggest.
Question 2: Normally not, unless you run into multithreading issues. At least, I have not seen this message myself.
The message I get is from the IDE, not the runtime; it's a code analysis warning. I just wanted to make sure I wasn't leaving something undisposed if my code was responsible for disposal. It sounds like the camera control takes care of it?
I still guess, you want to capture some images, average them and use them later. Is that correct?
Stefan
Correct. I ran a cycle test over the weekend using the updated code and FrameQueueSink - worked very well, about 100k cycles and no issues.
Hello
That sounds like good news to me. The CPU load on your laptop is somewhat high, but if IC Capture shows the same behavior, then we cannot do anything about that.
Stefan
OK Thanks for everything!