fire opened this issue 1 year ago
I know Jellyfish / Membrane has a demo videoroom for WebRTC, but the earlier steps are unclear to me.
Hi @fire!
Open problem: how to send a 15 MB file to Membrane. UDP datagrams are effectively limited to about 1500 bytes by the network MTU.
You have to fragment the data you want to send. In the case of Jellyfish, we only allow WebRTC and RTSP ingress, so you would have to generate such a stream and send it to Jellyfish. In the case of Membrane, you could send data on your own using e.g. UDP or TCP, but with UDP you have to deal with packet loss and congestion control yourself, so I don't think it's a good idea.
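To make "deal with it yourself" concrete, here is a deliberately naive Elixir sketch (the module name and chunk size are my own) that fragments a file into MTU-sized datagrams using Erlang's built-in `:gen_udp`. Nothing in it retransmits lost chunks, preserves ordering, or paces the sends, which is exactly the work a WebRTC stack does for you:

```elixir
# Naive UDP file sender: chunk a file into datagrams that fit under the
# typical 1500-byte Ethernet MTU and fire them at a host. No loss recovery,
# no ordering, no congestion control -- illustration only.
defmodule NaiveUdpSender do
  @chunk_size 1400  # leave headroom for IP + UDP headers (~28 bytes)

  def send_file(path, host, port) do
    {:ok, socket} = :gen_udp.open(0, [:binary])

    path
    |> File.stream!([], @chunk_size)   # stream the file in fixed-size chunks
    |> Enum.each(fn chunk ->
      :ok = :gen_udp.send(socket, host, port, chunk)
    end)

    :gen_udp.close(socket)
  end
end

# NaiveUdpSender.send_file("scene.gltf", ~c"127.0.0.1", 5000)
```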
I want to process the glTF file so that it becomes a video.
We don't support this in Jellyfish, at least at the moment. In Membrane, you can read an image (for example a PNG), decode it, and treat it as one raw video frame. Then just emit it at the frame rate you want. The other way would be to do a similar thing on the client side and send it via WebRTC to Jellyfish. We do something similar in our dashboard: we have a very simple static image, we create a MediaStreamTrack from a canvas with some framerate, and send it as a WebRTC stream to Jellyfish.
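As a rough illustration of the server-side variant, a Membrane pipeline could look like the sketch below (Membrane v1 DSL). `StillImageSource` is hypothetical: an element that decodes a PNG once and keeps emitting the raw frame at the configured framerate. The encoder and sink come from the membrane_h264_ffmpeg_plugin and membrane_file_plugin packages, though exact option names may differ between versions:

```elixir
defmodule StillImagePipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, _opts) do
    spec =
      # Hypothetical source: decode slide.png once, then keep emitting the
      # same raw video frame at 30 fps.
      child(:source, %StillImageSource{path: "slide.png", framerate: {30, 1}})
      # Encode the raw video to H264 and write the stream to a file.
      |> child(:encoder, Membrane.H264.FFmpeg.Encoder)
      |> child(:sink, %Membrane.File.Sink{location: "out.h264"})

    {[spec: spec], %{}}
  end
end
```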
Open problem: how to render frame by frame through Blender? three.js? Godot Engine?
I'm not sure if I follow you. What would you like to render in Blender or Godot?
You mentioned the approach of "read an image, for example a PNG, decode it and treat it as one raw video frame, then just emit it at the frame rate you want."
So one way would be for Blender, for example, to take the glTF, do the processing, and emit 4K output PNG frames one by one (for example, 500 frames).
Edited: the link I attached is a 3D animation.
I'm interested in WebRTC ingress. Do you think REST ingress is a good idea?
I think it's possible to treat the Blender render frames as a Membrane pipeline, but it needs the large glTF file, some parameters, and Blender itself.
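For example, something like this untested sketch could drive a headless Blender render from Elixir. The CLI flags are standard Blender options, but `render_gltf.py` is a hypothetical script that would import the glTF with `bpy.ops.import_scene.gltf` and set up the camera and lights:

```elixir
# Sketch: render a glTF animation to numbered PNG frames via headless Blender.
defmodule BlenderRender do
  def render(gltf_path, out_dir, frame_count) do
    {output, 0} =
      System.cmd("blender", [
        "-b",                                    # background mode (no UI)
        "-P", "render_gltf.py",                  # hypothetical scene-setup script
        "-o", Path.join(out_dir, "frame_####"),  # #### expands to the frame number
        "-F", "PNG",                             # output format
        "-s", "1",                               # start frame
        "-e", Integer.to_string(frame_count),    # end frame
        "-a",                                    # render the whole animation
        "--", gltf_path                          # args after -- go to the script
      ])

    output
  end
end
```

Those frames could then be fed into a Membrane pipeline like the still-image one sketched above, one buffer per PNG.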
I'm interested in coding some of these features if you're able to draft a design.
Could you please describe your use case more precisely? What do you want to achieve? Where is the client side and where is the server side?
```mermaid
sequenceDiagram
    participant C as Client
    participant S as Server
    C->>S: Uploads gltf file
    S->>S: Loads gltf file
    loop For each frame in model.frames
        S->>S: Processes frame with parameters
        S->>S: Saves processed frame
    end
```
```mermaid
sequenceDiagram
    participant GMP as Google Media Pipe
    participant OSC as OSC
    participant BR as Blender Render
    participant T as Twitch
    loop For each person in motion capture data
        GMP->>OSC: Sends person animation frames in real-time
        loop For each frame in animation frames
            OSC->>BR: Translates frame into membrane pipeline format
            BR->>BR: Processes frame as membrane pipeline in real-time
        end
        BR->>T: Streams processed frames as video in real-time
    end
```
I'm trying to discover a design for this sample problem.
See also https://lox9973.com/ShaderMotion/player-gltf.html, which takes a video capture and plays back 3D character motion from the video frames.