Closed grz0zrg closed 7 years ago
The library doesn't support capturing only part of the monitor. No OS appears to provide system calls for capturing just part of the screen; it's all or nothing. If you can find system calls for this, please let me know and I can work to implement it.
There probably wouldn't be much of a speed difference anyway: copying the entire buffer takes a single memcpy, whereas copying only part of the screen would require as many memcpy calls as there are rows to copy.
There are some opportunities to optimize my codebase further, but I just haven't gotten around to it. What are you doing that 260 fps isn't good enough for? Also, you could try 0 milliseconds; I think that will work, and the library will not pause between attempts to capture the screen.
What platform are you running this on?
There are opportunities for a performance bump when ONLY capturing OnNewFrame and NOT capturing the difs. This is because I can pass the raw image from the OS directly to the OnNewFrame callback as-is.
The Windows version already has this optimization. I am working on the Linux version right now, and I'll wrap up the macOS version to do this as well.
Just pushed a commit to master for the Linux build, and it was a pretty nice improvement.
I run Linux in a virtual machine and my fps went from an average of 170 to 330, so the jump is nice. Again, to get this you MUST register the OnNewFrame callback and NOT the difs.
Working on the Mac version next, then I'll publish a new release.
Just pushed the macOS version and got an improvement as well.
I run a Mac mini, so it's pretty slow! But my fps went from 60 to 80 with this commit.
This is as close to bare system calls as you can get on macOS, Windows, and Linux. There simply isn't a faster way without writing drivers.
I just released a new version including all of these changes. Try these out and let me know how it goes.
Thanks for your answer. With the changes you have made I am now at 400 FPS under Linux, which is great and should be sufficient even if it is not on the microsecond scale. This is for audio and visual experiments.
As for capturing parts of the screen, I don't know if that can be done on all OSes. Some tools like ffmpeg seem to be able to do it, and I saw some code on StackOverflow for capturing a specific window and regions; for ffmpeg, though, I don't know whether their implementation captures the entire screen first and then crops it. What I meant was capturing a specific rectangle. It seems you anticipated this, because CreateMonitor appears to accept offset and rectangle dimension arguments; that is why I asked if it was possible.
Anyway, this update improves performance quite well. Thank you.
I'm glad it was a bump in performance. I'll respond more fully after work today, but I doubt you will get anything faster than this. At 1920x1080 and 400 fps, the amount of raw data streaming is 3.3 gigabytes per second, which is really impressive.
I have a second question: is it possible to force a specific framerate? Right now the framerate fluctuates a bit even at 60 fps (16.66 ms). I know this can be done easily by adapting my own code, but I wonder why it fluctuates when set with the setFrameChangeInterval function.
By the way, I am using your websocket_lite library and it is great; I really like the way you design your libraries.
The Monitor struct is more of a snapshot of the dimensions (width, height, offsets) plus metadata (Name and Id) of the monitors. It's used on each frame grab to check whether the monitors have changed, because if they have, some internal work needs to occur.
I think the best way to capture just part of a monitor is to do it in OnNewFrame and extract just the data you want. It would be a simple loop, which is what the library would have to do anyway.
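The "simple loop" could look something like this. The frame layout assumed here (tightly packed 4-byte BGRA rows) and all names are illustrative assumptions; adapt them to the library's actual Image type inside your OnNewFrame handler:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

constexpr int kBytesPerPixel = 4;  // assumed BGRA layout

// Copy the rectangle (x, y, w, h) out of a tightly packed frame buffer.
// One memcpy per row of the requested region.
std::vector<uint8_t> ExtractRegion(const uint8_t* frame, int frameWidth,
                                   int x, int y, int w, int h) {
    std::vector<uint8_t> out(static_cast<size_t>(w) * h * kBytesPerPixel);
    const int srcStride = frameWidth * kBytesPerPixel;
    const int dstStride = w * kBytesPerPixel;
    for (int row = 0; row < h; ++row) {
        std::memcpy(out.data() + static_cast<size_t>(row) * dstStride,
                    frame + static_cast<size_t>(y + row) * srcStride
                          + static_cast<size_t>(x) * kBytesPerPixel,
                    dstStride);
    }
    return out;
}
```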
There are tweaks I can make to narrow in on a more stable framerate, but there will always be built-in jitter because of system load, the application itself doing work, and the OS possibly not responding immediately to a new frame request.
What types of framerates are you getting when you set it to 16.6 ms?
Thanks for the compliment. I spent some time trying to figure out a good public API model and I like what I am currently doing :)
I see, thanks for the details. Capturing a part of the monitor that way is easy and sufficient, but I thought there would be a performance hit compared to doing it through a low-level API.
Does setFrameChangeInterval allow setting something like 16.6 ms? This is the first time I am using the standard chrono library.
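For context on why 16.6 ms is awkward: std::chrono::milliseconds has an integral representation, so 16.6 ms cannot be expressed in it directly, while a microsecond duration can get very close to 1/60 s. A small sketch of the difference:

```cpp
#include <cassert>
#include <chrono>

// std::chrono::milliseconds is integral, so the closest it can get to
// 1/60 s (16.666... ms) is 16 ms (62.5 fps) or 17 ms (~58.8 fps).
// std::chrono::microseconds can represent it almost exactly as 16666 us.
void DemoDurations() {
    using namespace std::chrono;

    milliseconds coarse(16);   // plain 16 ms is really 62.5 fps
    microseconds fine(16666);  // 16.666 ms, very close to 60 fps

    // Converting down to a coarser unit truncates toward zero.
    assert(duration_cast<milliseconds>(fine) == coarse);
}
```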
I would try 16 ms and see what that gets
I will look at it tonight, but from what I recall that's what I tried, and the frame rate fluctuated between 59 and 61 FPS. That's also why I asked for microsecond intervals (greater precision): a plain 16 ms is not really 60 fps.
I have to make a decision on timing precision for the library, and I believe milliseconds is the best choice, as every other library does its waiting in milliseconds as well. 59-61 fps seems great to me. Going to finer precision never actually gains anything because of the natural jitter caused by the OS and other applications.
Additionally, if I go to microseconds and someone requests nanoseconds, I wouldn't have a good argument to say no :(
If you have a good reason for this precision, please let me know...
Milliseconds is the best choice, but there should be a way to ask for more precision if it matters and if it is possible. That said, I don't know how accurate the timing really is; if jitter is always to be expected even without load, better precision would be useless. There are also ways to work around it, such as limiting the rate in the callback by dropping frames, so maybe this isn't really needed in the library core.
Thanks for the pause/resume, by the way.
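The frame-dropping approach mentioned above can be done with a small rate limiter inside the callback. This is a sketch with illustrative names, not part of the library's API:

```cpp
#include <chrono>

// Minimal rate limiter: drop frames that arrive sooner than the target
// interval since the last accepted frame. In practice, Accept() would be
// called at the top of the OnNewFrame callback.
class FrameRateLimiter {
    std::chrono::steady_clock::time_point last_{};
    std::chrono::microseconds interval_;

public:
    explicit FrameRateLimiter(std::chrono::microseconds interval)
        : interval_(interval) {}

    // Returns true if the frame should be processed, false to drop it.
    bool Accept(std::chrono::steady_clock::time_point now =
                    std::chrono::steady_clock::now()) {
        if (now - last_ < interval_) return false;
        last_ = now;
        return true;
    }
};
```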
I MIGGHTTTTTT..... have a way of accomplishing custom intervals without changing the public API. I am working on this now
I just updated master to reflect the change you wanted.
You can now pass whatever duration you want to the setInterval functions.
setFrameChangeInterval(std::chrono::microseconds(100));
//or
setFrameChangeInterval(std::chrono::seconds(1));
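One way to accept arbitrary durations like this without changing the public API surface is a templated setter that normalizes any std::chrono duration internally. This is a sketch under that assumption, not the library's actual implementation:

```cpp
#include <chrono>

// Sketch: a templated setter accepts any std::chrono::duration and
// normalizes it to a single internal representation (nanoseconds here).
// Callers keep writing setFrameChangeInterval(milliseconds(16)) unchanged.
class CaptureConfig {
    std::chrono::nanoseconds interval_{std::chrono::milliseconds(100)};

public:
    template <class Rep, class Period>
    void setFrameChangeInterval(std::chrono::duration<Rep, Period> d) {
        interval_ = std::chrono::duration_cast<std::chrono::nanoseconds>(d);
    }

    std::chrono::nanoseconds interval() const { return interval_; }
};
```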
This also fixed the other bug where changing the interval wasn't taking effect. It all just works as expected. I am closing this as I believe it completes your request.
Thank you, great change. I now have much less jitter when I set 16666 microseconds :)
Hello,
This is a great and very useful library. I would like to know if a microsecond capture interval is possible. I would like to achieve < 1 millisecond intervals; how would you achieve this? Right now, when capturing a single screen with a 1 millisecond interval, I am at 260 FPS, which is around 4 milliseconds per frame. This is already great but barely enough for my needs. I am just using the onNewFrame callback.
Also, I tried to capture a tiny part of the screen by implementing the CreateMonitor function to see if that makes a difference, but it seems it crashed. How can I capture a specific part of the screen? Will it make a speed difference?
Thank you in advance.