Perl-GPU / OpenGL-Modern

Perl OpenGL bindings for modern OpenGL 3.1-4.x

can we make glGetError processing available for every function automatically or with a switch? #44

Closed wchristian closed 1 day ago

wchristian commented 7 years ago

Official spec: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGetError.xhtml

Examples: https://blog.nobel-joergensen.com/2013/01/29/debugging-opengl-using-glgeterror/

Short version: every GL function has the potential to record an error in the GL error queue. Right now in Perl, in order to interrogate this, one has to do roughly the following for every single call (optimally) or group of calls.

sub check_gl_errors {
    my ($msg) = @_;
    my @errors;
    while ((my $err = glGetError()) != GL_NO_ERROR) {
        push @errors, $err;
    }
    return if not @errors;
    die "$msg with GL errors: @errors";
}

sub display {
    check_gl_errors("Went into display()"); # make sure there were no errors beforehand
    glClearColor(1.,0.,0.,1.);
    glClear(GL_COLOR); # error - should be GL_COLOR_BUFFER_BIT
    check_gl_errors("Cleared screen in display()");

    glutSwapBuffers();
    check_gl_errors("Swapped screen in display()");
}

As you can see, the performance implications of this are immense. Better solutions i can think of:

Do you think any of that would be acceptable?

CandyAngel commented 7 years ago

I think an XS helper function plus a compile switch (so there is no runtime overhead when the "autodie" behavior isn't enabled) would be the best way to go.

If someone wants to "autodie" at runtime, could an OpenGL::Modern::Strict module do that using AUTOLOAD and injecting the check?

devel-chm commented 7 years ago

My understanding of OpenGL debugging is that it is something you use while developing your code, not something to ship in the non-Debug version, due to the extreme performance consequences. At the moment your Perl implementation seems reasonable: glGetError() is one of the "good" routines, with no pointer args or return values, and so is correctly implemented. If we want a better-performing debug capability, I think it would need to hook in at the C layer.

wchristian commented 7 years ago

@devel-chm You're right about the performance implications, however you also need to keep in mind that if it's not easy to use, nobody will use it. Further, the easier it is to use, the easier it is for the dev to get information out of the end user, if they can just tell them to run perl my_gl_program.pl --debug to switch the GL library into call-checking mode. That's why i was floating the options above.

Also, my Perl implementation would work, but performance-wise it would be so horrible as to make proper debugging nearly impossible. So yeah, some kind of C implementation is necessary.

However, so far you haven't actually said which of the options i floated are viable, or whether you see other options. Are you going to make a more detailed post later?

@CandyAngel I had made an OpenGL::Debug module that wrapped the older one like that; however, that wouldn't work well here, since many of the XS calls now check argument counts, so you'd need a database of those as well. Not sure if chm has thoughts on that.

wchristian commented 7 years ago

This option is perfectly doable performance-wise:

We already have this code: https://github.com/devel-chm/OpenGL-Modern/blob/master/utils/generate-XS.pl#L307-L312

I profiled OpenGL::Modern with and without that code, and that check seems to be nearly free in terms of CPU time, or at the very least so small that it completely disappears into the noise. In Microidium:

As such, we can implement:

devel-chm commented 7 years ago

Thanks for the timings. I think the general option should be the runtime enable/disable, which would give convenience at the cost of performance. You could have a use switch to choose a non-debug version for more performance. XS acceleration of glGetError can be done as well, even for the non-debug XS build of OpenGL.

wchristian commented 7 years ago

XS acceleration of glGetError

Again i am confused. Why are you talking about XS acceleration? Why are you talking about performance cost? I described above that i profiled the XS code we already have that does something like that and found zero measurable performance impact.

The proposal is not to do any perl code at all, but go to XS immediately, and this is so simple i can probably do it myself.

    if ( glGetErrorAfterGLCalls ) {
        GLenum err;
        int seen = 0;
        while ( ( err = glGetError() ) != GL_NO_ERROR ) {
            warn( "OpenGL error: %d", err );
            seen = 1;
        }
        if ( seen )
            croak( "OpenGL error encountered." );
    }

Due to being in XS the performance impact of this is below the noise level if glGetErrorAfterGLCalls is 0.

At the same time glGetErrorAfterGLCalls can be activated or disabled any time with a dead simple XS function that sets/unsets it.

devel-chm commented 7 years ago

I'm confused about why you are confused. The original question had 4 options. I just replied, based on your timings, that option 4 would be a good default; option 3 could be used for extra performance by cutting out unneeded overhead, even if it's small; option 2 is not good (build-time selection is only good for Perl gurus, not general Perl OpenGL developers); and option 1 could be done in any case.

wchristian commented 7 years ago

Because the conclusion of my timings post was "we can do option 4 just fine in all XS with no measurable performance impact", which means no further discussion of Perl code, performance, or even the other options was needed, as far as i can tell.

devel-chm commented 7 years ago

That was not the case, but, as I said at the start, option 4 is good.

wchristian commented 7 years ago

Alright, i get it, you were agreeing with that last proposal, just in such a way that i failed to read it correctly.