Moguri / panda3d-gltf

glTF utilities for Panda3D
BSD 3-Clause "New" or "Revised" License

Segmentation fault with error: <stride> is larger than GL_MAX_VERTEX_ATTRIB_STRIDE #117

Closed: Yonnji closed this issue 1 year ago

Yonnji commented 1 year ago

Got "Segmentation fault" after loading glTF file, skipping "write_bam_file" step and trying to render the actual "converter.active_scene". If I load the model using recommended way (glTF parse->BAM export->BAM import) then it renders fine.

The output:

ModelRoot scene0
  PandaNode armature
    Character Hips T:m(pos 0 0.00414693 0.170849)
  PandaNode Body
    GeomNode Body.baked (2 geoms: S:(MaterialAttrib TexMatrixAttrib TextureAttrib))
  PandaNode secondary
:display:gsg:glgsg(error): GL_INVALID_VALUE error generated. <stride> is larger than GL_MAX_VERTEX_ATTRIB_STRIDE.
:display:gsg:glgsg(error): GL_INVALID_VALUE error generated. <stride> is larger than GL_MAX_VERTEX_ATTRIB_STRIDE.
:display:gsg:glgsg(error): GL_INVALID_VALUE error generated. <stride> is larger than GL_MAX_VERTEX_ATTRIB_STRIDE.
Segmentation fault (core dumped)

Minimal example:

#!/usr/bin/env python3
import json
import os
import struct
import sys

from direct.showbase.ShowBase import ShowBase
from panda3d.core import get_model_path, load_prc_file_data, Filename

from gltf.converter import read_glb_chunk, Converter, GltfSettings

def convert_lite(src, settings=None):
    # Stripped-down converter entry point: parse the glTF/GLB and return
    # the scene directly, skipping the write_bam_file step.
    if settings is None:
        settings = GltfSettings()

    if not isinstance(src, Filename):
        src = Filename.from_os_specific(src)

    indir = Filename(src.get_dirname())

    get_model_path().prepend_directory(indir)

    converter = Converter(indir=indir, outdir=os.getcwd(), settings=settings)

    with open(src, 'rb') as glb_file:
        if glb_file.read(4) == b'glTF':
            version, = struct.unpack('<I', glb_file.read(4))
            if version != 2:
                raise RuntimeError("Only GLB version 2 is supported, file is version {0}".format(version))

            length, = struct.unpack('<I', glb_file.read(4))

            chunk_type, chunk_data = read_glb_chunk(glb_file)
            assert chunk_type == b'JSON'
            gltf_data = json.loads(chunk_data.decode('utf-8'))

            if glb_file.tell() < length:
                #if read_bytes % 4 != 0:
                #    glb_file.read((4 - read_bytes) % 4)
                chunk_type, chunk_data = read_glb_chunk(glb_file)
                assert chunk_type == b'BIN\000'
                converter.buffers[0] = chunk_data

            converter.update(gltf_data, writing_bam=True)
        else:
            # Re-open as a text file.
            glb_file.close()

            with open(src) as gltf_file:
                gltf_data = json.load(gltf_file)
                converter.update(gltf_data, writing_bam=True)

    if settings.print_scene:
        converter.active_scene.ls()

    return converter.active_scene

class Test(ShowBase):
    def __init__(self):
        load_prc_file_data('', '''
            framebuffer-alpha f
            win-size 1280 720
            gl-debug true
        ''')

        ShowBase.__init__(self)

        self.cam.set_y(-10)

        scene = convert_lite('9049971711724066034.glb')
        scene.ls()
        scene.reparent_to(self.render)

        self.accept('escape', sys.exit)
        self.accept('q', sys.exit)

app = Test()
app.run()
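
To pinpoint which arrays trip the limit, a quick diagnostic over the converted scene can print each vertex array's byte stride (a sketch using standard Panda3D Geom APIs; print_strides is a hypothetical helper, and 2048 is only the typical driver limit):

def print_strides(scene):
    # Walk every GeomNode under the scene and report each vertex array's
    # byte stride, flagging any that exceed the typical 2048-byte limit.
    for path in scene.find_all_matches('**/+GeomNode'):
        node = path.node()
        for i in range(node.get_num_geoms()):
            vformat = node.get_geom(i).get_vertex_data().get_format()
            for j in range(vformat.get_num_arrays()):
                stride = vformat.get_array(j).get_stride()
                marker = '  <-- over 2048' if stride > 2048 else ''
                print('%s geom %d array %d: stride %d%s'
                      % (path.get_name(), i, j, stride, marker))

print_strides(convert_lite('9049971711724066034.glb'))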

The model file (zipped): 9049971711724066034.zip

rdb commented 1 year ago

panda3d-gltf should probably avoid strides higher than 2048, which is the most typical limit: https://opengl.gpuinfo.org/displaycapability.php?name=GL_MAX_VERTEX_ATTRIB_STRIDE
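
For reference, the driver's actual limit can be queried at runtime (a sketch using PyOpenGL rather than panda3d-gltf; the query exists in OpenGL 4.4+ and must run while a GL context is current on the calling thread, e.g. inside a draw callback):

from OpenGL.GL import glGetIntegerv, GL_MAX_VERTEX_ATTRIB_STRIDE

# Requires a current OpenGL 4.4+ context on this thread.
print('GL_MAX_VERTEX_ATTRIB_STRIDE =',
      glGetIntegerv(GL_MAX_VERTEX_ATTRIB_STRIDE))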

rdb commented 1 year ago

Disregard my previous (deleted) comment; I had missed the part where panda3d-gltf repacks the array.

This is now fixed: the morphs are now put on a separate array. It should also be a lot more efficient this way, because the morph data is never uploaded to the GPU, and it no longer matters if the stride of the morph array exceeds 2048.
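
The idea behind the fix can be illustrated with Panda3D's vertex format API (a rough sketch, not the actual panda3d-gltf code; the morph column names and their count are made up for illustration):

from panda3d.core import (Geom, GeomVertexArrayFormat, GeomVertexFormat,
                          InternalName)

# Array 0: the columns the GPU actually needs; its stride stays small.
base = GeomVertexArrayFormat()
base.add_column(InternalName.get_vertex(), 3, Geom.NT_float32, Geom.C_point)
base.add_column(InternalName.get_normal(), 3, Geom.NT_float32, Geom.C_normal)

# Array 1: all morph delta columns. This array is only read on the CPU by
# the animation code, so its stride may exceed GL_MAX_VERTEX_ATTRIB_STRIDE.
morphs = GeomVertexArrayFormat()
for i in range(200):  # hypothetical number of morph target columns
    name = InternalName.make('morph_delta_%d' % i)  # illustrative naming
    morphs.add_column(name, 3, Geom.NT_float32, Geom.C_morph_delta)

fmt = GeomVertexFormat()
fmt.add_array(base)
fmt.add_array(morphs)
fmt = GeomVertexFormat.register_format(fmt)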

You need to clear your model-cache after picking up the fix.
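
While testing the fix, the cache can also simply be bypassed (a sketch; setting model-cache-dir to an empty value disables Panda3D's model cache for the process):

from panda3d.core import load_prc_file_data

# An empty model-cache-dir disables the BAM model cache.
load_prc_file_data('', 'model-cache-dir')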