Closed: ZachL1 closed this issue 2 years ago.
There have been multiple discussions on this topic at the (now retired) MSDN Forums and on StackOverflow.
This project used to show the way to connect a customized/generated video stream to applications back in the day. As of now, it is not compatible with many newer apps, and it shows no way to do interprocess communication to inject your own data (the data is assumed to be generated in-process).
Essentially, this project should not be used for any new development; I maintain it only because of its popularity, the many links to it, the unavailability of the original website, and as a tribute. Starting with Windows 11 there is finally (!) a good, legal API to create virtual cameras: MFCreateVirtualCamera. New apps should use it, even though the minimum requirement of Windows 11 seems rather high.
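(For orientation, a minimal sketch of what using this API looks like follows. This is not this repo's code; the enum values and the mfsensorgroup.lib linkage are taken from the public Windows 11 SDK headers, and the source CLSID string is a placeholder for your own registered media source.)

#include <mfvirtualcamera.h>
#include <wrl/client.h>
#pragma comment(lib, "mfsensorgroup.lib")

using Microsoft::WRL::ComPtr;

// Sketch: create and start a session-scoped Windows 11 virtual camera.
// Assumes CoInitializeEx and MFStartup have already been called.
// The CLSID below is a placeholder for your registered IMFActivate source.
HRESULT StartVirtualCamera(ComPtr<IMFVirtualCamera>& camera)
{
    HRESULT hr = MFCreateVirtualCamera(
        MFVirtualCameraType_SoftwareCameraSource,
        MFVirtualCameraLifetime_Session,   // unregistered when the process exits
        MFVirtualCameraAccess_CurrentUser,
        L"My Virtual Camera",              // friendly name applications will see
        L"{00000000-0000-0000-0000-000000000000}", // placeholder source CLSID
        nullptr, 0,                        // no additional device categories
        &camera);
    if (FAILED(hr))
        return hr;
    // The frame server activates your media source on behalf of consumers
    return camera->Start(nullptr);
}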
I am the original author of this.
Initially I tried writing a WDM driver for this but ran into many difficulties, so I figured out that exposing certain interfaces on a source filter will cause it to show up in many applications as a bona fide source.
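(For context, "showing up as a bona fide source" comes down to registering the filter under the video input device category, roughly as below. This is a sketch rather than the repo's exact code; the filter name is illustrative, and the filter must already be registered as an in-process COM server.)

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Sketch: register a source filter under CLSID_VideoInputDeviceCategory so
// capture applications enumerate it alongside real cameras.
// COM must be initialized before calling this.
HRESULT RegisterAsCaptureSource(const CLSID& clsidFilter)
{
    IFilterMapper2* pMapper = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, nullptr,
        CLSCTX_INPROC_SERVER, IID_IFilterMapper2, (void**)&pMapper);
    if (FAILED(hr))
        return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit = MERIT_DO_NOT_USE; // found via category enumeration, not merit
    rf2.cPins = 0;
    rf2.rgPins = nullptr;

    hr = pMapper->RegisterFilter(clsidFilter, L"My Virtual Cam", nullptr,
        &CLSID_VideoInputDeviceCategory, L"My Virtual Cam", &rf2);
    pMapper->Release();
    return hr;
}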
I added this to another module, VCAM Sink (this was a commercial contract for a client), along with some additions to VCAM.
VCAM Sink would act as a renderer and dump bitmap frames (and some metadata) to a named shared memory section.
The VCAM source could take a "filename" which actually referred to the memory map and start sourcing data from there.
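(A minimal sketch of the shared-section idea described above; the section name, header layout, and the absence of a synchronization mutex are simplifying assumptions, not the original VCAM Sink code.)

#include <windows.h>
#include <cstdint>
#include <cstring>

struct FrameHeader { uint32_t width, height, stride, sequence; };

// Sketch: the sink (writer) side publishes frames through a named file mapping.
// Real code would guard reads/writes with a named mutex or event.
class SharedFrameWriter {
    HANDLE m_hMapping = nullptr;
    uint8_t* m_pView = nullptr;
public:
    bool Open(const wchar_t* name, size_t maxFrameBytes) {
        m_hMapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
            PAGE_READWRITE, 0, (DWORD)(sizeof(FrameHeader) + maxFrameBytes), name);
        if (!m_hMapping) return false;
        m_pView = (uint8_t*)MapViewOfFile(m_hMapping, FILE_MAP_WRITE, 0, 0, 0);
        return m_pView != nullptr;
    }
    void WriteFrame(const uint8_t* data, uint32_t w, uint32_t h, uint32_t stride) {
        FrameHeader* hdr = (FrameHeader*)m_pView;
        memcpy(m_pView + sizeof(FrameHeader), data, (size_t)stride * h);
        hdr->width = w; hdr->height = h; hdr->stride = stride;
        ++hdr->sequence; // bump last so a reader can detect a new frame
    }
};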
I used VCAM for many projects, including one for JustinTV (2006 or something) where I exposed a custom IP camera as a source (the IP cam was only viewable on its own webpage with an ActiveX control).
Feel free to contact me if you try implementing this; I will look and see if I can find that very old code somewhere for reference.
Thanks for your answers. I have implemented "capture input from camera -> do some processing -> transmit the processed result to other apps", but I don't think my method is good.
Specifically, I directly modified the FillBuffer function in Filters.cpp. Something like this:
HRESULT CVCamStream::FillBuffer(IMediaSample *pms)
{
    // Stamp the sample with the negotiated average frame duration
    REFERENCE_TIME avgFrameTime = ((VIDEOINFOHEADER*)m_mt.pbFormat)->AvgTimePerFrame;
    REFERENCE_TIME rtNow = m_rtLastTime;
    m_rtLastTime += avgFrameTime;
    pms->SetTime(&rtNow, &m_rtLastTime);
    pms->SetSyncPoint(TRUE);

    BYTE *pData;
    pms->GetPointer(&pData);
    long lDataLen = pms->GetSize();

    // Blocking capture: FillBuffer stalls here until OpenCV delivers a frame
    cv::Mat frame;
    video_capture->read(frame); // video_capture is defined in the CVCamStream constructor
    // Some processing of frame

    // Assumes a packed BGR frame matching the negotiated RGB24 media type
    // (DirectShow RGB24 buffers are normally bottom-up, which this ignores)
    long lFrameLen = frame.cols * frame.rows * 3;
    memcpy(pData, frame.data, std::min(lFrameLen, lDataLen)); // std::min needs <algorithm>
    for (long i = lFrameLen; i < lDataLen; ++i)
        pData[i] = 255; // pad any remaining buffer space with white

    Sleep(1); // crude yield; actual pacing comes from the blocking read above
    return NOERROR;
} // FillBuffer
But I think this is not a good way to do it, especially if the per-frame processing is heavy and everything has to be compiled together into the .dll.
I noticed some other methods (like obs-virtual-cam) seem to use a queue: my C++ program keeps appending new frames to the tail of the queue, and FillBuffer takes frames from the head of the queue, as sketched below.
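(A minimal sketch of such a producer/consumer queue, assuming the producer and FillBuffer run in the same process; a real implementation would also need shutdown handling.)

#include <condition_variable>
#include <deque>
#include <mutex>
#include <opencv2/core.hpp>

// Sketch: a bounded, thread-safe frame queue. The processing thread pushes
// finished frames; FillBuffer pops them.
class FrameQueue {
    std::deque<cv::Mat> m_frames;
    std::mutex m_lock;
    std::condition_variable m_notEmpty;
    size_t m_capacity = 4; // drop old frames rather than grow unbounded
public:
    void Push(cv::Mat frame) {
        std::lock_guard<std::mutex> guard(m_lock);
        if (m_frames.size() >= m_capacity)
            m_frames.pop_front(); // discard the stalest frame
        m_frames.push_back(std::move(frame));
        m_notEmpty.notify_one();
    }
    cv::Mat Pop() { // blocks until a frame is available
        std::unique_lock<std::mutex> guard(m_lock);
        m_notEmpty.wait(guard, [this] { return !m_frames.empty(); });
        cv::Mat frame = std::move(m_frames.front());
        m_frames.pop_front();
        return frame;
    }
};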
But the question is: how should I connect my C++ program to VCam? Is there some simple and straightforward example program? (obs-virtual-cam is too complicated!)
@roman380 Thanks! The responses seem useful, but sorry, I really struggle to understand them well enough to make improvements to the project. I think a simple example would be more valuable to me!
I am away from my normal development system for an unpredictable amount of time, so I will just post a few lead links to read up on:
- Read USB camera's input edit and send the output to a virtual camera on Windows
- How to use a custom capture source filter in a C++ application?
- DirectShow filter is not shown as input capture device - this shows that this virtual camera will work only with some apps
- How to implement a "source filter" for splitting camera video based on Vivek's vcam?
- SampleGrabber and VCam
@rep-movsd Thank you for your great work! Could you find some examples like the ones I described above?
Please excuse my repeated interruptions. I've learned a little bit about COM recently; let me rephrase my problem.
My current method is still the direct modification of FillBuffer in Filters.cpp shown above.
I am not satisfied with this method, mainly because I built the 64-bit version, and 32-bit applications cannot detect the virtual camera! I would like to also build a 32-bit version, but the frame processing uses some dependency libraries that only support 64-bit builds, and that's the problem.
My first thought was to separate the frame processing from the virtual camera, but that seems a bit difficult.
Through recent study, I found out that the Filters.dll built by this repository is an in-process COM server, right? Is it possible to build it as an out-of-process COM server (*.exe)? That way, 32-bit applications could also use the (64-bit) virtual camera.
For reference: Applicability of Virtual DirectShow Sources, and also DirectShow Virtual Camera does not appear in the list on some configurations.
This repo shows how to create an in-process virtual video input device. If you want to produce the data externally, or bridge 32-bit and 64-bit processes, you need to develop the respective layer of interprocess communication yourself.
You cannot build this repository's code as out-of-process COM because, by design, it simply cannot work this way: a DirectShow source filter is loaded into the process of the application that uses it. There is nearly nothing in standard COM to help you with this problem; you would need to implement all of it on your own.
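(Worth noting for the 32/64 problem: named Win32 file mappings are visible across the 32/64-bit boundary, so one 64-bit producer process can feed both 32-bit and 64-bit builds of the filter. Below is a sketch of the filter-side reader matching the hypothetical writer layout sketched earlier; synchronization is again omitted.)

#include <windows.h>
#include <cstdint>
#include <cstring>

struct FrameHeader { uint32_t width, height, stride, sequence; };

// Sketch: the filter-side reader for the hypothetical shared section above.
// A 32-bit build of this code can read frames published by a 64-bit producer,
// because named file mappings are not bitness-specific.
class SharedFrameReader {
    HANDLE m_hMapping = nullptr;
    const uint8_t* m_pView = nullptr;
    uint32_t m_lastSequence = 0;
public:
    bool Open(const wchar_t* name) {
        m_hMapping = OpenFileMappingW(FILE_MAP_READ, FALSE, name);
        if (!m_hMapping) return false;
        m_pView = (const uint8_t*)MapViewOfFile(m_hMapping, FILE_MAP_READ, 0, 0, 0);
        return m_pView != nullptr;
    }
    // Copies a frame only when the producer has published a new one
    bool ReadNewFrame(uint8_t* dest, size_t destSize) {
        const FrameHeader* hdr = (const FrameHeader*)m_pView;
        if (hdr->sequence == m_lastSequence) return false;
        m_lastSequence = hdr->sequence;
        size_t bytes = (size_t)hdr->stride * hdr->height;
        memcpy(dest, m_pView + sizeof(FrameHeader), bytes < destSize ? bytes : destSize);
        return true;
    }
};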
I see, thank you for your prompt response.
Essentially, this project should not be used for any new development; I maintain it only because of its popularity, the many links to it, the unavailability of the original website, and as a tribute. Starting with Windows 11 there is finally (!) a good, legal API to create virtual cameras: MFCreateVirtualCamera. New apps should use it, even though the minimum requirement of Windows 11 seems rather high.
Can the virtual camera built with that API be consumed by multiple applications simultaneously? I am a bit confused by this link: https://docs.microsoft.com/en-us/answers/questions/872883/which-api-to-use-for-creating-virtual-webcam.html#comment-877975 and would appreciate your comments.
@DavitMob first of all, I already have a "proof of concept" demo for the new API: https://alax.info/blog/2245
I don't remember the details of shared camera use from MSDN, and I don't think the quality of documentation of recent features is good enough, so I can just share my impression.
The new API creates a camera which mimics a hardware device, meaning that it shares the same concept of exclusivity as a physical piece of hardware; that is, the assumption is that there is just one exclusive consumer of the video.
The good news, however, is that Microsoft implemented a "Frame Server" service, which is the immediate consumer of the video, and this video is then shared between applications. In this sense, one does not need to implement anything for multi-application consumption; the frame server is responsible for it. But the frame server implements it in a somewhat odd way: I am guessing they decided to keep the single exclusive camera consumer approach for classic apps, and only new UWP apps can share the feed, when they connect to the camera (to the frame server, that is) and explicitly indicate that they are okay with sharing the device.
Thanks @roman380 for the comment.
@roman380 I've seen your fantastic article and I have already made attempts, but unfortunately I've failed. I found that there's very little information about MFCreateVirtualCamera online. May I politely inquire whether you would consider open-sourcing the code behind your "proof of concept"? That would help me a lot. Thanks.
@woodey233 I would possibly publish some sample code on the MF virtual camera, but I have not had much time for this lately. In the meantime you can check Microsoft's sample code: https://github.com/roman380/tmhare.mvps.org-vcam/issues/7#issuecomment-1400094045
@roman380 Thanks for the reply. Can't wait to see your sample code!
Forgive my ignorance, but I have absolutely no idea how to use this in my project.
Specifically, my program uses OpenCV to capture input from the camera and then does some processing on it. I want the processed video stream to be "output to a virtual camera"; I'm not sure if this description is accurate. Anyway, I want other apps (such as Zoom) that take input from a camera to be able to select the virtual cam as input and receive my processed video stream.
Can I do this with DirectShow VCam? And how?
I am a beginner and my native language is not English, so I'm not sure if my formulation is clear. Please give me some replies, thanks!