Closed: ThoenigAdrian closed this issue 1 year ago
Have you tried changing the offwidth and offheight attributes in your XML?
@saran-t No, not yet. What should I change it to? It's currently set to:
<visual>
<map force="0.1" zfar="30"/>
<rgba haze="0.15 0.25 0.35 1"/>
<quality shadowsize="2048"/>
<global offwidth="800" offheight="800"/>
</visual>
I am using the default humanoid.xml.
As far as I understood, this should just be the size of the offscreen buffer, so in this case it renders to 800 × 800 pixels. But since I use mjrRect viewport = mjr_maxViewport(&con);, mjr_render should always use the exact dimensions of the offscreen buffer. And since mjr_blitBuffer does the scaling according to the documentation, it should be fine:
Blit transfers pixels between the two buffers on the GPU and is therefore much faster. The direction is from the active buffer to the buffer that is not active. Note that mjr_blitBuffer has source and destination viewports that can have different size, allowing the image to be scaled in the process.
Found the issue.
mjr_blitBuffer(viewport, dstviewport, 0, 0, &con);
vs. mjr_blitBuffer(viewport, viewport_right, 1, 0, &con);
So flg_color needs to be "1", since this flag decides whether the color buffer is copied.
To be honest, I completely misinterpreted the options of mjr_blitBuffer.
If src, dst have different size and flg_depth==0, color is interpolated with GL_LINEAR.
When I read this line I thought these flags referred to some advanced OpenGL options (GL_LINEAR vs. GL_NEAREST). However, these flags are crucial: they are basically boolean options which specify whether to blit the depth buffer and/or the color buffer of the framebuffer object.
In hindsight it might be obvious. Still, I think the documentation could be improved to mention this explicitly, just in case.
Relevant source code which made me realise my mistake:
void mjr_blitBuffer(mjrRect src, mjrRect dst,
int flg_color, int flg_depth, const mjrContext* con) {
// construct mask and filter for blit
GLbitfield mask = (flg_color ? GL_COLOR_BUFFER_BIT : 0) |
(flg_depth ? GL_DEPTH_BUFFER_BIT : 0);
Hi,
I am trying to get offscreen rendering to work:
mjr_setBuffer(mjFB_WINDOW, &con);
mjr_setBuffer(mjFB_OFFSCREEN, &con);
mjr_blitBuffer(viewport, dstviewport, 0, 0, &con); and use the viewport for the right half
glfwSwapBuffers(window)
Actual Behaviour
All the pixels which have been transferred from the offscreen buffer are black.
Expected Behaviour
I would expect the same image to be rendered twice: on the left, the one I rendered to the onscreen buffer directly; on the right, the same image first rendered to the offscreen buffer and then transferred to the onscreen buffer.
Full code
Additional things I've tried
1. I also tried writing directly to the offscreen buffer via mjr_drawPixels(), since I thought maybe there is an issue with mjr_render().
2. Reading the buffer back with mjr_readPixels(): everything is black.
Use case
In case you are wondering why I am trying to do this: I know the above example seems useless, but it's just a minimal reproducible example. I have a MuJoCo scene with multiple agents, where every agent has cameras. I don't want to display all these camera outputs in the window, but I need to render them so the neural networks down the line can use the pixels.
What I think the issue might be
I also tried the record.exe sample, and I can get a properly rendered video out of it, so I guess offscreen rendering isn't completely broken. Maybe it has something to do with how the window is created (invisible window vs. visible window)? Or maybe it is related to double buffering? Does anyone have an idea how to solve this?