Open · hb98jeon opened this issue 4 months ago
We apologize for our late reply.
RVGPU is a feature that transfers the rendering of graphics applications to a remote display. To transfer video content remotely with RVGPU, the decoded video has to be transferred as a texture.
Remote display performance currently depends mainly on the network bandwidth. For example, at 1 Gbps of network bandwidth, the theoretical limits are about 15 fps for 1080p video and about 3.7 fps for 2160p video. (If software decoding performance falls below these limits, decoding becomes the bottleneck instead.)
To transfer video content efficiently, the encoded video needs to be sent over the network and decoded at the receiving side. We hope to make these enhancements in the future, but we have no concrete plans to do so in the near term.
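As a rough back-of-the-envelope check of those figures, here is a small calculation assuming uncompressed RGBA textures (4 bytes per pixel) and ideal 1 Gbps link utilization with no protocol overhead:

```python
# Theoretical frame rate when sending raw RGBA textures over a 1 Gbps link,
# and the bandwidth that 30 fps playback would actually require.
# Assumptions: 4 bytes per pixel, ideal link utilization, no protocol overhead.

LINK_BPS = 1_000_000_000  # 1 Gbps

def raw_frame_bits(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * 8

for name, (w, h) in {"1080p": (1920, 1080), "2160p": (3840, 2160)}.items():
    fps_limit = LINK_BPS / raw_frame_bits(w, h)
    gbps_for_30fps = raw_frame_bits(w, h) * 30 / 1e9
    print(f"{name}: ~{fps_limit:.2f} fps max as raw textures, "
          f"~{gbps_for_30fps:.2f} Gbps needed for 30 fps")

# 1080p: ~15.07 fps max as raw textures, ~1.99 Gbps needed for 30 fps
# 2160p: ~3.77 fps max as raw textures,  ~7.96 Gbps needed for 30 fps
```

By contrast, a typical encoded 1080p@30fps H.264/H.265 stream is on the order of a few Mbps, which is why transferring the encoded stream and decoding at the destination is far more efficient.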
First of all, thank you for your kind response.
I agree with your opinion that the encoded video stream should be sent to the host and decoded there. I think it is one appropriate way, among others, to improve the video rendering performance of RVGPU. To apply this approach, we would need to use the "virgl-video" feature provided by Mesa and virglrenderer.
https://www.phoronix.com/news/Virgl-Encode-H264-H265
https://www.phoronix.com/news/Mesa-Virgl-More-Video-Accel
https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/838
What do you think about running RVGPU in an environment where the virgl-video feature is enabled? Are there any technical difficulties or risks in having RVGPU support the virgl-video feature?
And what about using blob resources as another way to enhance the rendering performance of RVGPU?
We apologize for the significant delay in our response. Thank you very much for suggesting these methods to improve video streaming performance. We believe that incorporating the virgl-video functionality into RVGPU is a good idea.
However, we recognize that virgl-video handles video processing between VMs on the same SoC, and that using blob resources (shared memory) to share video data between the Sender and Receiver is challenging when they are connected over a network. Additional design work is required to handle these interactions over the network.
As we do not yet fully understand virgl-video, we need to study it further.
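To make the networking point concrete, here is a minimal sketch of how encoded frames could be framed and forwarded from the Sender to the Receiver over a TCP socket, so that decoding happens on the Receiver side instead of sending raw textures. This is purely illustrative and not RVGPU's actual protocol; the header layout, codec identifier, and function names are assumptions.

```python
# Illustrative only (not RVGPU's current protocol): forwarding encoded video
# bitstream chunks over TCP so the Receiver can decode them locally.
import socket
import struct

# Hypothetical message header: codec id, resource id, payload length
# (all 32-bit unsigned, network byte order).
HEADER = struct.Struct("!III")
CODEC_H264 = 1

def send_encoded_frame(sock: socket.socket, codec: int, resource_id: int, bitstream: bytes) -> None:
    """Send one encoded frame as [header][payload]."""
    sock.sendall(HEADER.pack(codec, resource_id, len(bitstream)) + bitstream)

def recv_encoded_frame(sock: socket.socket):
    """Receive one framed encoded frame; returns (codec, resource_id, payload)."""
    codec, resource_id, length = HEADER.unpack(_recv_exact(sock, HEADER.size))
    return codec, resource_id, _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```

On the Receiver side, the payload would then be handed to a local decoder (for example via VA-API, which is where virgl-video comes in) and the decoded surface attached as a texture for rendering.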
Hello.
I have tried playing 1080p@30fps streaming video content using rvgpu, with a software decoder for video decoding. The measured playback performance averaged around 14 fps. I would like to know whether it is possible to play 1080p@30fps or 2160p@30fps videos with rvgpu at a full 30 fps. I suspect that the poor rendering performance of rvgpu is caused by the TCP/IP socket communication between rvgpu-proxy and rvgpu-renderer: for high-resolution videos, the texture size per frame is very large, and there may be performance limitations in sending it through sockets.
Could you please let me know if there are any plans or ongoing work to improve the rendering performance of RVGPU?