LukasBanana / LLGL

Low Level Graphics Library (LLGL) is a thin abstraction layer for the modern graphics APIs OpenGL, Direct3D, Vulkan, and Metal
BSD 3-Clause "New" or "Revised" License

How to add skia to the LLGL project for rendering complex text? #101

Closed jayzhen521 closed 7 months ago

jayzhen521 commented 7 months ago

Skia uses the following function to create a surface:

sk_sp<SkSurface> OpenGLBackgound::CreateSurface(int width, int height)
{
    m_dc = GetDC(m_WHandle);

    if (!(m_hRC = CreateWGLContext(m_dc)))
        return nullptr;

    glClearStencil(0);
    glClearColor(0, 0, 0, 0);
    glStencilMask(0xffffffff);
    glClear(GL_STENCIL_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    glViewport(0, 0, width, height);

    m_BackendContext = GrGLMakeNativeInterface();
    m_Context = GrDirectContext::MakeGL(m_BackendContext, {});

    GrGLint buffer;
    m_BackendContext->fFunctions.fGetIntegerv(GR_GL_FRAMEBUFFER_BINDING, &buffer);

    GrGLFramebufferInfo fbInfo;
    fbInfo.fFBOID = buffer;
    fbInfo.fFormat = GR_GL_RGBA8;

    SkSurfaceProps props(0, kRGB_H_SkPixelGeometry);

    GrBackendRenderTarget backendRT(width, height, nSampleCount, nStencilBits, fbInfo);
    return SkSurface::MakeFromBackendRenderTarget(m_Context.get(), backendRT,
        kBottomLeft_GrSurfaceOrigin, kRGBA_8888_SkColorType, nullptr, &props);
}

My question is: I want Skia to render text into a texture and then display that texture with LLGL. Can you give me some suggestions on how to accomplish this?

LukasBanana commented 7 months ago

You should be able to do that by reading the texture data back from the Skia-generated OpenGL texture and then passing it to LLGL, e.g. use glGetTexImage (or better, glGetnTexImage) and pass the pixel data to LLGL::RenderSystem::CreateTexture or LLGL::RenderSystem::WriteTexture.
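A minimal sketch of that readback path, assuming an RGBA8 texture of known size, that skiaTextureID is the GL texture name obtained from Skia, and that myRenderer is your LLGL::RenderSystem instance; the exact descriptor field names may vary between LLGL revisions (ImageView was previously called SrcImageDescriptor):

#include <cstdint>
#include <vector>
#include <LLGL/LLGL.h>
// GL headers/loader assumed to be set up already (Skia's GL context must be current)

GLuint skiaTextureID = 0; /* GL texture name obtained from Skia */
const std::uint32_t width = 512, height = 256; /* assumed texture size */

// Read the pixel data back to the CPU
std::vector<std::uint8_t> pixels(width * height * 4);
glBindTexture(GL_TEXTURE_2D, skiaTextureID);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// Describe and create the LLGL texture with the pixel data as initial image
LLGL::TextureDescriptor texDesc;
texDesc.type   = LLGL::TextureType::Texture2D;
texDesc.format = LLGL::Format::RGBA8UNorm;
texDesc.extent = { width, height, 1u };

LLGL::ImageView imageView;
imageView.format   = LLGL::ImageFormat::RGBA;
imageView.dataType = LLGL::DataType::UInt8;
imageView.data     = pixels.data();
imageView.dataSize = pixels.size();

LLGL::Texture* fontTexture = myRenderer->CreateTexture(texDesc, &imageView);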

In terms of interoperability with other libraries, there is currently no way for LLGL to use Skia's generated GL texture directly as LLGL is highly encapsulated, i.e. it only uses its own created and managed hardware textures. I presume you are not generating these font textures every frame, though, which means this shouldn't be a performance issue. Just create the font texture at startup and pay a little bit of CPU overhead by pulling the texture data from GPU back to CPU and then create a new GPU texture with LLGL.

Can I ask how much of Skia you use for rendering? Be aware that mixing rendering libraries can cause problems because LLGL won't keep track of GL state changes that Skia or other libraries perform, such as glStencilMask. LLGL queries the GL state when its first SwapChain is created, so you should probably do that after all your Skia rendering is done. Otherwise, you would have to reset all GL states to their initial values like so:

GLint initialStencilMask = 0;
glGetIntegerv(GL_STENCIL_WRITEMASK, &initialStencilMask);
/* Skia rendering ... */
glStencilMask(initialStencilMask);

jayzhen521 commented 7 months ago

Thank you very much for your reply. LLGL is a concise and excellent rendering library. Text animation is an important part of my rendering pipeline, and some scenes need the text re-rendered every frame, so I have to find a way to generate it on the GPU and use it directly. You suggested two approaches: the first is that I create an OpenGL context myself, generate the textures, transfer them to the CPU, and finally load them into LLGL, right? The second is to save the existing LLGL state, run Skia to obtain a texture object, and afterwards restore LLGL's state and use the texture? Does LLGL offer a save/restore mechanism so that third-party libraries can be integrated more easily?

LukasBanana commented 7 months ago

In that case you might want to render your font with Skia on a separate thread. This way, Skia gets its own GL context, since those are thread-local, i.e. each thread has at most one current GL context and you can only use GL resources on the thread where they were created. With two separate threads, the GL contexts won't collide. That still implies, of course, that you have to pull the texture data from the GPU (on your Skia thread) and then upload it again to the GPU through LLGL (on your LLGL or main thread). I have to say, however, that I have had bad experiences with OpenGL and multiple threads, as it was not designed for multi-threading. This will likely have a negative performance impact.
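As a rough illustration of that hand-off, here is a minimal sketch of the producer/consumer split between a Skia thread and the LLGL/main thread. RenderTextWithSkia, fontTexture, and myRenderer are hypothetical placeholders, and the WriteTexture call assumes a recent LLGL API (ImageView was called SrcImageDescriptor in older revisions):

#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

std::mutex                fontMutex;
std::vector<std::uint8_t> fontPixels;        // RGBA8 data produced by the Skia thread
std::atomic<bool>         fontReady{ false };
const std::uint32_t       width = 512, height = 256; // assumed text texture size

// Skia thread: creates its own GL context, renders the text, reads the pixels back
std::thread skiaThread(
    [&]()
    {
        std::vector<std::uint8_t> pixels = RenderTextWithSkia(width, height); // hypothetical helper
        std::lock_guard<std::mutex> guard{ fontMutex };
        fontPixels = std::move(pixels);
        fontReady  = true;
    }
);

// LLGL/main thread: poll once per frame and upload the finished pixels into an LLGL texture
bool mainLoopRunning = true;
while (mainLoopRunning)
{
    if (fontReady.exchange(false))
    {
        std::lock_guard<std::mutex> guard{ fontMutex };
        LLGL::ImageView imageView;
        imageView.format   = LLGL::ImageFormat::RGBA;
        imageView.dataType = LLGL::DataType::UInt8;
        imageView.data     = fontPixels.data();
        imageView.dataSize = fontPixels.size();

        LLGL::TextureRegion region;
        region.extent = { width, height, 1u };
        myRenderer->WriteTexture(*fontTexture, region, imageView);
    }

    /* LLGL rendering ... */
}
skiaThread.join(); // on shutdown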

Having said that, I still have to advise against two separate rendering libraries sharing the same GL context because (I can only speak for LLGL here) they won't keep track of all the GL states that the other library is changing. If you really want to go down this route, make sure to store and restore all GL states that Skia is changing (assuming LLGL is your main rendering library).

There is also the option of using the GPU-to-CPU readback trick where you render the font with Skia and GL and then upload it to LLGL running the Direct3D backend. This way you won't have to deal with keeping a GL context in sync between two libraries. If you have to use GL, then of course that's not an option.
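For illustration, loading LLGL with the Direct3D 11 backend would look like this (the Skia-side GL readback and the WriteTexture upload stay the same):

// Let LLGL run on Direct3D 11 so it never shares a GL context with Skia
LLGL::RenderSystemDescriptor rendererDesc;
rendererDesc.moduleName = "Direct3D11";
LLGL::RenderSystemPtr renderer = LLGL::RenderSystem::Load(rendererDesc);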

EDIT: Maybe take a look at my other project, MentalGL; it's a single-header library to query all GL states. You could extend it to also store/restore the GL state using the struct MGLRenderState.

LukasBanana commented 7 months ago

I just pushed an update for the ClearCache native command in the GL backend with 5394ae4. This should allow you to clear the state cache of the OpenGL renderer after you're done with Skia rendering:

while (/* Main loop */) {

    // Skia rendering ...

    // LLGL command recording:
    myCmdBuffer->Begin();

    // Clear state cache in LLGL backend with native command submission:
    LLGL::OpenGL::NativeCommand clearCacheCmd;
    clearCacheCmd.type = LLGL::OpenGL::NativeCommandType::ClearCache;
    myCmdBuffer->DoNativeCommand(&clearCacheCmd, sizeof(clearCacheCmd));

    // LLGL rendering ... BeginRenderPass/EndRenderPass etc.

    myCmdBuffer->End();
}

This requires that the command buffer is created with an immediate context, i.e. with the LLGL::CommandBufferFlags::ImmediateSubmit flag, and that you also include <LLGL/Backend/OpenGL/NativeCommand.h>.
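A minimal sketch of that setup, assuming myRenderer is your LLGL::RenderSystem instance:

#include <LLGL/Backend/OpenGL/NativeCommand.h>

// Create the command buffer with an immediate context so DoNativeCommand takes effect right away
LLGL::CommandBufferDescriptor cmdBufferDesc;
cmdBufferDesc.flags = LLGL::CommandBufferFlags::ImmediateSubmit;
LLGL::CommandBuffer* myCmdBuffer = myRenderer->CreateCommandBuffer(cmdBufferDesc);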

Let me know if that works for you.

jayzhen521 commented 7 months ago

Thank you very much for your help. I am checking to see if there are any issues. But I don't know how the texture rendered by Skia can be utilized by LLGL. I feel that more steps need to be added:

// Init: create an LLGL FBO with a color texture named "LLGL::Texture* textTexture"

while (/* Main loop */) {
    // Bind the FBO

    // Skia rendering ...

    // Unbind the FBO and retrieve the filled textTexture

    // LLGL command recording:
    myCmdBuffer->Begin();

    // Clear state cache in LLGL backend with native command submission:
    LLGL::OpenGL::NativeCommand clearCacheCmd;
    clearCacheCmd.type = LLGL::OpenGL::NativeCommandType::ClearCache;
    myCmdBuffer->DoNativeCommand(&clearCacheCmd, sizeof(clearCacheCmd));

    // LLGL rendering ... BeginRenderPass/EndRenderPass etc. Use the textTexture here

    myCmdBuffer->End();
}

I am not yet deeply familiar with LLGL. Are there some more reasonable function calls that could be used to accomplish this task?

jayzhen521 commented 7 months ago

Taking RenderTarget as an example of the modification: could it be that calling CreateRenderTarget() and CreatePipelines() changes the OpenGL state, so that my raw OpenGL calls no longer display correctly? Do I need to restore the state before Skia starts so that Skia can render correctly, and then restore LLGL's state before proceeding? But would this conflict with the shared texture?

LukasBanana commented 7 months ago

> could it be that calling CreateRenderTarget() and CreatePipelines() changes the OpenGL state

Yes, any LLGL function could change the state of the current GL context. That's why it's very brittle to have two separate libraries share the same GL context. For the sake of simplicity, I would recommend going down the route of pulling the texture data from Skia to the CPU first. If the performance becomes a problem, you can always extend LLGL or start messing with the GL state in an intertwined fashion later.

First, try to retrieve the texture data from Skia and save the data to a file to see if it worked. For example, use STBI or whatever other library or custom code you want to use to write an image file. If that works as intended, hook up the CPU image data by updating an LLGL texture with WriteTexture.
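For the verification step, a minimal sketch using stb_image_write, assuming pixels is the RGBA8 buffer you read back from Skia and width/height are its dimensions:

#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

// Dump the RGBA8 readback to a PNG to verify the Skia output before involving LLGL
stbi_write_png("skia_font_test.png", width, height, 4, pixels.data(), width * 4);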

FBOs introduce additional problems because LLGL uses a unified coordinate system where the screen origin is assumed to be on the top-left corner of the screen, like it is in Direct3D. OpenGL, on the other hand, defines the screen origin to be on the bottom-left corner. LLGL converts that in the GL backend by adjusting viewport coordinates on-the-fly and also patching vertex shaders (if specified by the client programmer; see PatchClippingOrigin) depending on FBO states. Long story short, I can't tell you how messed up all this will be if you mix and match Skia's and LLGL's GL contexts. So having this clearly separated and clearing the state cache with the native command (as mentioned above) is likely the best way to go.

To not leave you without an answer to your question regarding render targets, though, here is what you could do to render Skia's texture into LLGL's render target (assuming that you have the GL texture name, or ID rather):

GLuint skiaTextureID = /* Get texture ID from Skia ... */;

cmdBuffer->Begin();
cmdBuffer->BeginRenderPass(*renderTargetForSkiaTexture);
cmdBuffer->SetViewport(renderTargetForSkiaTexture->GetResolution());
cmdBuffer->SetPipelineState(*renderTargetPSO);

// Bind Skia texture to the GL context. This is brittle, but LLGL does not modify binding slots in the GL backend,
// so whatever binding slot we specified in the PSO layout will be used to bind textures.
// glBindTexture could be used as well, but glBindTextures causes fewer state changes (requires GL 4.4+).
glBindTextures(skiaTextureBindingSlot, 1, &skiaTextureID);

cmdBuffer->Draw(/* Draw fullscreen quad with provided texture ... */);
cmdBuffer->EndRenderPass();

// Since we messed with the GL state, we have to clear the state cache.
// This can be expensive (performance-wise), so weigh it against
// downloading the texture data to the CPU and uploading it again to LLGL:
LLGL::OpenGL::NativeCommand clearCacheCmd;
clearCacheCmd.type = LLGL::OpenGL::NativeCommandType::ClearCache;
cmdBuffer->DoNativeCommand(&clearCacheCmd, sizeof(clearCacheCmd));

/* Continue rendering with LLGL ... */

cmdBuffer->End();

For this example, you will also need a sampler state (i.e. glBindSampler) and take a look at the Texturing example to see how to create a PSO layout.
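As a rough sketch (not a substitute for the Texturing example), the PSO layout could declare the texture and sampler at a fixed binding slot, and the sampler could be a plain GL sampler object bound to the same slot. This assumes a recent LLGL revision that provides LLGL::Parse, and slot 0 is just an assumption that must match skiaTextureBindingSlot above:

#include <LLGL/Utils/Parse.h>

// Declare a texture and sampler at binding slot 0 for the fragment shader
LLGL::PipelineLayout* psoLayout = myRenderer->CreatePipelineLayout(
    LLGL::Parse("texture(skiaTex@0):frag, sampler(skiaSmpl@0):frag")
);

// Plain GL sampler object bound to the same unit as the Skia texture (GL 3.3+)
GLuint skiaSamplerID = 0;
glGenSamplers(1, &skiaSamplerID);
glSamplerParameteri(skiaSamplerID, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(skiaSamplerID, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindSampler(skiaTextureBindingSlot, skiaSamplerID);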

I might add additional native commands in the future to simplify this process, but I want to be very careful with that, because those commands are nasty to maintain and they are not usable for any other backend, which is against the idea behind LLGL. They are the right spot for you if you decide to extend LLGL for project-specific purposes nonetheless.

jayzhen521 commented 7 months ago

Thank you very much for your reply. But I still don't understand how to let LLGL use the texture object. If I call OpenGL functions directly, it leads to errors, but going through Skia's backend context, e.g. _backendContext->fFunctions.fGenFramebuffers(), works fine and renders correctly. However, there is no function like _backendContext->fFunctions.fBindTextures for me to call. Can you give me more details about how to let LLGL use the texture object? I have gained a deeper understanding of rendering through this exploration.

LukasBanana commented 7 months ago

LLGL cannot use your Skia texture directly, but you can create an LLGL texture and render or upload your Skia texture data into it with a RenderTarget object or the WriteTexture function. Have you tried saving your Skia texture to an image file? And what are the errors you are seeing? Can you share some code snippets of what your current scenario looks like?

EDIT: I just realized that if you use CreateWGLContext from Skia, you already have your own GL context, which means none of those textures and other resources will be visible to the GL context LLGL creates. Let me come up with something to allow the GL backend to share its context (i.e. HGLRC on Windows). LLGL already does that when multiple contexts are created internally, but currently there is no interface to make use of it from outside. Until then, you could switch between your GL contexts by grabbing the handles when you initialize both libraries:

InitializeLLGL(); // CreateSwapChain here
HGLRC nativeGLContextLLGL = wglGetCurrentContext(); // Save native GL context from LLGL
HDC nativeDeviceContextLLGL = wglGetCurrentDC(); // Also save HDC (Handle for Device Context) from LLGL

InitializeSkia(); // OpenGLBackgound::CreateSurface here
HGLRC nativeGLContextSkia = wglGetCurrentContext(); // Save GL context from Skia
HDC nativeDeviceContextSkia = wglGetCurrentDC(); // Also save HDC (Handle for Device Context) from Skia

Now you can switch between the contexts like this:

wglMakeCurrent(nativeGLContextSkia, nativeDeviceContextSkia);
/* Render with Skia ... */

wglMakeCurrent(nativeGLContextLLGL, nativeDeviceContextLLGL);
/* Render with LLGL ... */

Please note that switching GL contexts has a huge performance impact, so this should only be an intermediate solution until we find a better way. This context switching also means that you won't be able to share any GL resources between the libraries, as they are only accessible within their own context. So that leaves you again with the only option of downloading the data via glGetTexImage and uploading it to LLGL via WriteTexture. We can come back to sharing the same GL context once I've updated the interface to allow specifying a shared HGLRC when loading the GL backend in LLGL. This would look something like this (preview):

// This is just a concept: the struct 'NativeContext' will be platform dependent:
LLGL::OpenGL::NativeContext nativeContextGL;
{
    #ifdef _WIN32
    nativeContextGL.hGLRC = /* Use your Skia GL context ... */;
    #else
    /* Different fields for other platforms ... */
    #endif
}
LLGL::RendererConfigurationOpenGL rendererConfigGL;
{
    // This is just a concept: Allow to specify a platform dependent shared GL context (HGLRC on Windows)
    rendererConfigGL.sharedContext = &nativeContextGL;
    rendererConfigGL.sharedContextSize = sizeof(nativeContextGL);
}
LLGL::RenderSystemDescriptor rendererDesc;
{
    rendererDesc.moduleName = "OpenGL";
    rendererDesc.rendererConfig = &rendererConfigGL;
    rendererDesc.rendererConfigSize = sizeof(rendererConfigGL);
}
LLGL::RenderSystemPtr renderer = LLGL::RenderSystem::Load(rendererDesc);

jayzhen521 commented 7 months ago

> Have you tried saving your Skia texture to an image file? And what are the errors you are seeing?

Yes, I have tried it, and it works (without errors) by:

  1. Rendering with Skia
  2. Reading the pixels back
  3. Clearing the state cache after commands->Begin()
  4. Creating an LLGL::Texture
  5. Recording it into the command buffer

(Screenshot attached.)

I will use this method for now, as no performance issues have shown up in the text animation yet. Thank you very much for your help, and I look forward to your subsequent support for GL context sharing.

LukasBanana commented 7 months ago

I started with new fields in RenderSystemDescriptor to pass custom handles to LLGL with commit 2028ee5. Please note that this is currently only supported for the GL backend on Windows and Linux, and I have only tested it on Windows so far, but I'll add the other backends successively. Take a look at Test_OpenGL.cpp and set TEST_CUSTOM_GLCONTEXT to 1 to see how it can be used.

LukasBanana commented 7 months ago

I updated the remaining backends in the most recent commits. Feel free to re-open this ticket if you have trouble using the new API.