The Spout library doesn't support this feature (yet). Until it is available, there is no way I can implement it.
Plus: the performance would be as bad as with NDI, because I have to go via numpy arrays and the CPU, and that's just slow.
Thanks for your feedback. This thread was opened as a way for Windows users to watch progress on these Spout implementations, because your readme states that, at the moment, the Windows Spout receiver is not yet implemented, as well as sender discovery.
Regarding Spout receiving, the UI could probably be updated to remove Spout from the Texture Receiving panel for the moment, while the implementation is not complete - there is no way yet to tell Blender the name of the Spout source. When exploring the plugin UI (rather than the GitHub readme), I initially assumed that both Spout sending and receiving would work; and when parsing the source code, it seemed that the plugin implements both SpoutClient and SpoutServer based on the Python-SpoutGL library.
Two side notes:
Also, for people tracking the Spout sender discovery implementation in the upstream Python-SpoutGL lib, see here.
Hi again,
Regarding listing available Spout senders, the Python-SpoutGL upstream maintainer just added getSenderList() and a couple of other APIs related to listing senders in the 0.1.0 release, as per this discussion:
> Add getSenderList(), getSenderInfo(), getActiveSender(), setActiveSender()
Hope this is useful!
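A minimal sketch of how these could be used to enumerate senders - assuming Python-SpoutGL >= 0.1.0 and that the new calls are exposed on the receiver instance like the rest of the wrapper API (variable names are just illustrative):

```python
# Sketch only: assumes the 0.1.0 listing calls live on the SpoutReceiver
# instance, alongside the existing wrapper methods.
import SpoutGL

with SpoutGL.SpoutReceiver() as receiver:
    senders = receiver.getSenderList()
    print("Available Spout senders:", senders)
    print("Active sender:", receiver.getActiveSender())
    if senders:
        # Point the receiver at a specific sender by name before receiving
        receiver.setReceiverName(senders[0])
```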
Python-SpoutGL started off as a Blender texture sharing experiment. I was trying to see if I could integrate Krita with Blender so you could live-preview textures being edited in Krita, although the performance was never great because of the CPU image copying mentioned above.
https://github.com/user-attachments/assets/afa483c5-19af-45a0-bfd8-653d345635fc
Code (simplified; doesn't include polling / frame sync):
from io import BytesIO
from itertools import repeat
import array

import bgl
import SpoutGL

# Allocate buffers (do this whenever the source size changes)
io_bytes = BytesIO(bytes(repeat(0, width * height * 4)))
byte_buffer = io_bytes.getbuffer()
float_buffer = array.array('f', repeat(0, width * height * 4))

# Read image as bytes
receiver.receiveImage(byte_buffer, bgl.GL_BGRA, False, 0)

# Convert from bytes (0-255) to 32-bit float (0.0-1.0)
SpoutGL.copyToFloat32(byte_buffer, float_buffer)

# Copy float buffer to image
# Per https://blenderartists.org/t/faster-image-pixel-data-access/1161411/6 this is
# faster than `image.pixels = ...` when using a buffer
image.pixels.foreach_set(float_buffer)

# Make sure image gets marked dirty for rendering
image.update()
image.update_tag()
Thanks for the code sample and the demo, both look good. Also thanks for the link stating that image.pixels.foreach_set(float_buffer) is faster than setting image.pixels = ... - which the maybites plugin is using at the moment (for both NDI and Syphon/Spout).
Regarding GPU copies of textures in Blender - or passing the texture's GPU pointer - here is a dedicated thread opened by the maintainer to track progress, in case it's useful.
Thanks to you all - very interesting!
I don't have access to a Windows machine at the moment - it will take until mid-September before I can look into it again.
I am also open to pull requests, though I won't accept them until I can test them, too.
Message to my future me: check out https://blendermarket.com/products/audvis
I've tried my luck implementing SpoutDirectory and SpoutClient, but am not 100% done; copy-pasted below in case it's useful.
State of implementation:
- Edited `__init__.py` and uninstalled/reinstalled the python module.
- Edited `blender-texture-sharing\fbs\FrameBufferDirectory.py`, returning `return SpoutDirectory(name)` instead of `return NDIDirectory(name)`.
- `target_image.pixels.foreach_set(float_buffer)` is fully transparent - maybe my use of `receiveImage` is incorrect. I'm fighting with the fact that the data returned by the Python-SpoutGL library is buffers rather than plain numpy arrays, but the code shared above by jlai is really useful!
- `self.receiver.receiveImage(byte_buffer, bgl.GL_BGRA, False, 0)` seems to return an empty buffer, since `SpoutGL.helpers.isBufferEmpty(byte_buffer)` returns True, although I'm using the demo sender from Spout.
- `apply_frame_to_image` could listen to the spoutSender being updated - e.g. when detoggling/retoggling the Spout texture in the Blender receive UI.

Edit: 🚀 It now works! I just edited the above SpoutClient.py snippet to reflect the changes.
What pointed me to the working solution was reusing other bits from the original demo code repo, especially this comment: "Not sure why first frame is empty". So a while loop is needed to get the first frames.
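For reference, a minimal sketch of that retry loop, reusing the `receiver`, `byte_buffer` and `bgl` setup from jlai's snippet above:

```python
import SpoutGL

# Sketch: poll until the first non-empty frame arrives (the very first
# receiveImage() calls tend to come back empty). Bounded so it can't hang Blender.
for _ in range(100):
    result = receiver.receiveImage(byte_buffer, bgl.GL_BGRA, False, 0)
    if result and not SpoutGL.helpers.isBufferEmpty(byte_buffer):
        break  # first real frame is now in byte_buffer
```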
The code is really messy for now, and it is not live-updating yet - only the first frame gets captured, then I have to detoggle/retoggle the received texture to see it updated. This might have to do with has_new_frame or new_frame_image, although I set both to True.
I haven't tried the NDI/Syphon receiver so I'm not sure how this addon normally works, but it looks like the operator.py code for receivers only hooks into bpy.app.handlers.depsgraph_update_pre, which only gets called when something in the scene changes. For example, you can add a print statement to write_frame_handler. If you add a cube to the scene and drag it around, you'll see it print when you move the cube, but not when the scene is unchanged.
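To see this for yourself, here is a tiny sketch of a debug handler (hypothetical name) hooked into that same handler list:

```python
# Sketch: this handler only prints while something in the scene is changing,
# which is why a receiver driven solely by depsgraph_update_pre stalls on an
# idle scene.
import bpy

def debug_depsgraph_handler(*args):
    print("depsgraph_update_pre fired")

bpy.app.handlers.depsgraph_update_pre.append(debug_depsgraph_handler)
```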
Secondly, there are the two update calls in my example. image.update() loads the changes to the float data array into Blender's OpenGL/Metal/Vulkan/etc. texture used for rendering.
image.update_tag() marks the image as dirty in the depsgraph so that any 2D/3D views that use the image will be re-rendered. This can be called from anywhere to trigger a re-render. E.g. you could set a timer operator that calls image.update_tag() once a second, which would re-render any currently-visible scenes using the image (and probably do nothing if the image was not used).
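A minimal sketch of that timer idea, with the image name assumed for illustration:

```python
# Sketch: call update_tag() roughly once a second so any visible views using
# the image get re-rendered even when the scene itself is idle.
import bpy

def refresh_received_image():
    image = bpy.data.images.get("SpoutReceived")  # assumed image name
    if image is not None:
        image.update_tag()
    return 1.0  # re-run in one second

bpy.app.timers.register(refresh_received_image)
```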
I'm not sure how the Syphon/NDI implementation flushes the images, since I think those two calls would normally be necessary, unless something else in Blender happens to trigger the image to update.
In my experiment, I had a thread with the Spout client which would listen for new frames and push (instead of pull) updates to the image using update_tag() (note that any updates to Blender data structures have to be done on the main thread, e.g. by scheduling a timer with bpy.app.timers.register). The "push" approach suited my use case, which was low-frame-rate previews where it's OK if frames get dropped (and had sender-side optimizations to not send frames if nothing changed).
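Roughly, the push setup looked like this - a sketch with hypothetical names, where `wait_for_new_frame` and `apply_frame_to_image` stand in for whatever the client and addon actually provide:

```python
# Sketch of the push approach: a worker thread blocks on new frames and hands
# the Blender-side update to the main thread via bpy.app.timers, since bpy data
# must only be touched from the main thread.
import threading
import bpy

def frame_listener(client):
    while True:
        frame = client.wait_for_new_frame()  # hypothetical blocking call

        def run_update(f=frame):
            apply_frame_to_image(f)  # assumed addon-side helper
            return None  # one-shot: returning None unregisters the timer

        bpy.app.timers.register(run_update, first_interval=0)

threading.Thread(target=frame_listener, args=(spout_client,), daemon=True).start()
```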
I'm sort of curious what you're planning on using texture streaming for, especially if real-time is probably not feasible.
> image.update_tag() marks the image as dirty in the depsgraph so that any 2D/3D views that use the image will be re-rendered. This can be called from anywhere to trigger a re-render. E.g. you could set a timer operator that calls image.update_tag() once a second, which would re-render any currently-visible scenes using the image (and probably do nothing if the image was not used).
>
> I'm not sure how the Syphon/NDI implementation flushes the images, since I think those two calls would normally be necessary, unless something else in Blender happens to trigger the image to update.
It's nice how this thread develops.
I didn't know about image.update_tag(). So far I couldn't figure out how to force Blender to update the scene after a new frame (both Syphon and NDI) has arrived. Thanks.
Thanks both for the feedback! Indeed, when I move something in the scene, the Spout receiver texture updates in Blender - at a low fps, probably because there is a lot of non-useful code, plus prints and bits that could be moved out of apply_frame_to_image rather than being executed on every frame.
Our use case was simplifying the workflow for our artists to design immersive exhibitions and fit our renders (3D scans, pointclouds or meshes, 3D Tiles, etc.) to the exhibit space, and iterate faster. The low-fps NDI path on Windows is already a pretty good step in the right direction, and I wanted to see whether Spout would make things better fps-wise - although it's true that the CPU image copy seems to be blocking, hence why a Blender API to retrieve the GPU pointer to the image/texture might alleviate that.
https://github.com/user-attachments/assets/7eb36eb5-7441-4d4f-b0b3-95782a6de8ea
> hence why a Blender API to retrieve the GPU pointer to the image/texture might alleviate that.

Totally agree.
I was already in contact with Blender developers (see: https://devtalk.blender.org/t/adding-a-write-method-to-gpu-types-gputexture/33226/4) to test the waters. It's technically feasible, but I don't have the time to dive into this black box of a machine room.
Hi there, just removed most of the unused code, and framerate is way smoother - see attached.
Here is the final SpoutClient.py (folded). Do you want me to make a PR?
Note that I still have to move an object for the update to take place, and I also had to disable return NDIDirectory(name) within blender-texture-sharing\fbs\FrameBufferDirectory.py create(), so that the list of available Spout senders does not get overwritten by the NDI list.
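For reference, one possible shape for that create() switch - a hypothetical sketch based only on the snippets quoted here, not on the actual addon code; a real fix might instead merge both directories:

```python
# Hypothetical sketch: pick the directory implementation per platform so the
# Spout sender list is not overwritten by the NDI one on Windows.
import sys

def create(name):
    if sys.platform == "win32":
        return SpoutDirectory(name)
    return NDIDirectory(name)
```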
Best,
https://github.com/user-attachments/assets/ecc33368-15f9-48d5-b54f-c1e8a6a01756
Have you tried the image.update_tag() option?
Here is the PR, which now takes the sender's name into account dynamically: https://github.com/maybites/TextureSharing/pull/39
I've tried both image.update_tag() and image.update(), but it seems that apply_frame_to_image is not called multiple times unless I move an object in the scene, as jlai mentioned, probably because:
> it looks like the operator.py code for receivers only hooks into bpy.app.handlers.depsgraph_update_pre, which only gets called when something in the scene changes
Your PR looked good and I merged it with my master.
Thanks, great to hear! Glad I could help work out the spout client and listing for blender-texture-sharing!
I just pushed a fix for the overwriting of the NDI list in the spout receiver selection menu. At least I hope. I could only test it on MacOS.
The fix seems to work correctly to list either Spout or NDI on Windows, thanks!
Same thing for both Spout and NDI: I have to move the cube to have the test pattern appear and update inside the texture. Probably image.update_tag() and image.update() can be added in some generic place to avoid the need for these scene updates.
The simplest option for automatic refresh would be to add a timer targeting a user-configurable FPS and call write_frame_handler and image.update_tag() from the callback. The timer callback could return (1/fps) in the simplest implementation, or ideally also take into account the time spent since the last update so that the interval stays consistent.
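A sketch of that, with write_frame_handler's signature and the FPS value assumed (in practice the FPS would come from an addon preference):

```python
# Sketch: refresh at a target FPS and subtract the time spent in the callback
# so the effective interval stays close to 1/fps.
import time
import bpy

fps = 25.0  # assumed; would be a user-configurable addon setting

def spout_refresh_timer():
    start = time.monotonic()
    write_frame_handler()  # assumed addon function that pulls the next frame
    image = bpy.data.images.get("SpoutReceived")  # assumed image name
    if image is not None:
        image.update_tag()
    elapsed = time.monotonic() - start
    return max(0.0, 1.0 / fps - elapsed)  # schedule the next run

bpy.app.timers.register(spout_refresh_timer)
```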
Spout has an isFrameNew() that you can try using for the has_new_frame implementation, to avoid drawing when there isn't a new frame ready, although I don't know how well it works (I remember it acting a bit wonky, but I haven't touched it in a few years).
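A tiny sketch of that check, assuming the Python wrapper exposes the SDK call as receiver.isFrameNew():

```python
# Sketch: only redraw when Spout reports a fresh frame; fall back to always
# drawing if the wrapper doesn't expose the call.
def has_new_frame(receiver):
    is_frame_new = getattr(receiver, "isFrameNew", None)
    return is_frame_new() if is_frame_new else True
```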
At higher frame rates, it may be better to use a frame listener (push), registering a callback with the Syphon/Spout client to draw when a new frame is received (otherwise, if the timing doesn't line up, you might miss frames), but given the performance limitations (especially with larger textures) I don't think that'd be an issue.
Understood, thanks for the pointers. This issue can probably be closed now, since the Spout client and directory both work great!
I just pushed a fix for the update issue. It uses now a timer instead and has also a refresh rate setting.
I'll keep the issue open for the time being... I am not done yet :-)
Thanks, let's keep this open then, sure! Nice fix for the timer; the texture is now correctly updating without requiring user interaction or scene updates.
Do you think it would make sense to ping the Blender dev team back on your original forum post, to find out whether sharing the texture's GPU pointer could be implemented for a better framerate?
> Do you think it would make sense to ping the Blender dev team back on your original forum post, to find out whether sharing the texture's GPU pointer could be implemented for a better framerate?
They already confirmed the technical feasibility. But I very much doubt they will do it - it's not high enough on their priority list. This is very much a fringe use case.
But you can try.
I think for textures the problem is that they can be used in a bunch of different contexts and environments including the UV editor, Cycles (which can be CPU rendered), Metal/Vulkan, external rendering engines, and other addons. So OpenGL is an implementation detail that may not even be available in some cases.
And it sounds like texture/image handling is even more complex in Vulkan so it would probably be harder to abstract over compared to something like GPUOffscreen which serves a more narrow purpose.
Hi there, first of all thanks for this great addon! It would be great to support a Windows Spout receiver, since the NDI receiver has a pretty low framerate. Also, Spout sender discovery would be nice, as listed in your readme.
This issue is opened to track progress on these Spout features. Thanks!
Also adding a Blender devtalk reference to your Blender thread regarding GPU texture sharing/writing.