TencentARC / InstantMesh

InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
Apache License 2.0

The 3D model I generated throws an error when loaded into a WeChat Mini Program #80

Open mingooglegit opened 1 month ago

mingooglegit commented 1 month ago

MiniProgramError "GLTF validation failed at [AccessorNode]: [10602] Normalized accessors are not supported."
    at Object.errorReport (WAServiceMainContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at Function.thirdErrorReport (WAServiceMainContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at Object.thirdErrorReport (WAServiceMainContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at i (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at Object.cb (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at q._privEmit (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at q.emit (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1
    at n (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)
    at je (WASubContext.js?t=wechat&s=1715563339608&v=3.3.4:1)

Is there any way to fix this? Since the GLB can be exported directly, how should the export be adjusted so that the file works in the Mini Program? Previewing on a PC works fine. An AI answer said the model needs to be converted; is there a way to export a file that works directly?

bluestyle97 commented 1 month ago

Sorry, we are not very familiar with WeChat Mini Programs either.

mingooglegit commented 1 month ago

Sorry, we are not very familiar with WeChat Mini Programs either.

Comparing how the GLB export code in https://github.com/dreamgaussian/dreamgaussian and https://github.com/TencentARC/InstantMesh handles normalized accessors: dreamgaussian's output is accepted, while InstantMesh's is not. Would it be possible to change InstantMesh's export by referring to that code? The specific error is "Normalized accessors are not supported." Models generated by dreamgaussian do not have this problem, so does it handle this part separately?
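In glTF, "normalized" is a per-accessor flag that tells the loader to map integer component values into [0, 1] or [-1, 1]; the Mini Program's validator rejects any accessor with that flag set. A quick way to see which accessors in the InstantMesh output trip the check, as a rough sketch assuming pygltflib is installed and "output.glb" is a placeholder for the exported file:

    import pygltflib

    gltf = pygltflib.GLTF2().load("output.glb")  # placeholder path to the exported GLB
    for i, acc in enumerate(gltf.accessors):
        if acc.normalized:
            print(f"accessor {i}: componentType={acc.componentType}, "
                  f"type={acc.type}, count={acc.count}")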

jtydhr88 commented 1 month ago

I think you could process the model with Blender or C4D first and then try loading it into the WeChat Mini Program again. Strictly speaking, this is not a problem with this repository but with the WeChat Mini Program's validation of the GLB format; you can check the Mini Program documentation.
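If you want to automate that Blender round trip, a rough sketch using headless Blender and its bundled glTF add-on (file names are placeholders, and whether the re-exported file passes the Mini Program check still has to be verified):

    import bpy

    # Start from an empty scene so only the imported mesh ends up in the export.
    bpy.ops.wm.read_factory_settings(use_empty=True)

    bpy.ops.import_scene.gltf(filepath="instantmesh_output.glb")
    bpy.ops.export_scene.gltf(filepath="instantmesh_output_blender.glb",
                              export_format='GLB')

Run it headless with: blender --background --python reexport.py (the script name is arbitrary).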

mingooglegit commented 1 month ago


I understand, but I know that the GLB files generated by dreamgaussian (another image-to-3D project) can be used directly in a WeChat Mini Program. I looked at its code, and it uses the pygltflib library. I tried to adapt that code, since the two exporters seemed similar, but the models my modified version produces still have problems.

def write_glb(self, path):

    assert self.vn is not None and self.vt is not None # should be improved to support export without texture...

    # assert self.v.shape[0] == self.vn.shape[0] and self.v.shape[0] == self.vt.shape[0]
    if self.v.shape[0] != self.vt.shape[0]:
        self.align_v_to_vt()

    # assume f == fn == ft

    import pygltflib
    import numpy as np  # np and cv2 are module-level imports in the original dreamgaussian file
    import cv2

    f_np = self.f.detach().cpu().numpy().astype(np.uint32)
    v_np = self.v.detach().cpu().numpy().astype(np.float32)
    # vn_np = self.vn.detach().cpu().numpy().astype(np.float32)
    vt_np = self.vt.detach().cpu().numpy().astype(np.float32)

    albedo = self.albedo.detach().cpu().numpy()
    albedo = (albedo * 255).astype(np.uint8)
    albedo = cv2.cvtColor(albedo, cv2.COLOR_RGB2BGR)

    f_np_blob = f_np.flatten().tobytes()
    v_np_blob = v_np.tobytes()
    # vn_np_blob = vn_np.tobytes()
    vt_np_blob = vt_np.tobytes()
    albedo_blob = cv2.imencode('.png', albedo)[1].tobytes()

    gltf = pygltflib.GLTF2(
        scene=0,
        scenes=[pygltflib.Scene(nodes=[0])],
        nodes=[pygltflib.Node(mesh=0)],
        meshes=[pygltflib.Mesh(primitives=[
            pygltflib.Primitive(
                # indices to accessors (0 is triangles)
                attributes=pygltflib.Attributes(
                    POSITION=1, TEXCOORD_0=2, 
                ),
                indices=0, material=0,
            )
        ])],
        materials=[
            pygltflib.Material(
                pbrMetallicRoughness=pygltflib.PbrMetallicRoughness(
                    baseColorTexture=pygltflib.TextureInfo(index=0, texCoord=0),
                    metallicFactor=0.0,
                    roughnessFactor=1.0,
                ),
                alphaCutoff=0,
                doubleSided=True,
            )
        ],
        textures=[
            pygltflib.Texture(sampler=0, source=0),
        ],
        samplers=[
            pygltflib.Sampler(magFilter=pygltflib.LINEAR, minFilter=pygltflib.LINEAR_MIPMAP_LINEAR, wrapS=pygltflib.REPEAT, wrapT=pygltflib.REPEAT),
        ],
        images=[
            # use embedded (buffer) image
            pygltflib.Image(bufferView=3, mimeType="image/png"),
        ],
        buffers=[
            pygltflib.Buffer(byteLength=len(f_np_blob) + len(v_np_blob) + len(vt_np_blob) + len(albedo_blob))
        ],
        # buffer view (based on dtype)
        bufferViews=[
            # triangles; as flatten (element) array
            pygltflib.BufferView(
                buffer=0,
                byteLength=len(f_np_blob),
                target=pygltflib.ELEMENT_ARRAY_BUFFER, # GL_ELEMENT_ARRAY_BUFFER (34963)
            ),
            # positions; as vec3 array
            pygltflib.BufferView(
                buffer=0,
                byteOffset=len(f_np_blob),
                byteLength=len(v_np_blob),
                byteStride=12, # vec3
                target=pygltflib.ARRAY_BUFFER, # GL_ARRAY_BUFFER (34962)
            ),
            # texcoords; as vec2 array
            pygltflib.BufferView(
                buffer=0,
                byteOffset=len(f_np_blob) + len(v_np_blob),
                byteLength=len(vt_np_blob),
                byteStride=8, # vec2
                target=pygltflib.ARRAY_BUFFER,
            ),
            # texture; as none target
            pygltflib.BufferView(
                buffer=0,
                byteOffset=len(f_np_blob) + len(v_np_blob) + len(vt_np_blob),
                byteLength=len(albedo_blob),
            ),
        ],
        accessors=[
            # 0 = triangles
            pygltflib.Accessor(
                bufferView=0,
                componentType=pygltflib.UNSIGNED_INT, # GL_UNSIGNED_INT (5125)
                count=f_np.size,
                type=pygltflib.SCALAR,
                max=[int(f_np.max())],
                min=[int(f_np.min())],
            ),
            # 1 = positions
            pygltflib.Accessor(
                bufferView=1,
                componentType=pygltflib.FLOAT, # GL_FLOAT (5126)
                count=len(v_np),
                type=pygltflib.VEC3,
                max=v_np.max(axis=0).tolist(),
                min=v_np.min(axis=0).tolist(),
            ),
            # 2 = texcoords
            pygltflib.Accessor(
                bufferView=2,
                componentType=pygltflib.FLOAT,
                count=len(vt_np),
                type=pygltflib.VEC2,
                max=vt_np.max(axis=0).tolist(),
                min=vt_np.min(axis=0).tolist(),
            ),
        ],
    )

    # set actual data
    gltf.set_binary_blob(f_np_blob + v_np_blob + vt_np_blob + albedo_blob)

    # glb = b"".join(gltf.save_to_bytes())
    gltf.save(path)
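For reference, the exporter above never sets an accessor's normalized flag: indices are UNSIGNED_INT and both POSITION and TEXCOORD_0 are plain FLOAT accessors, so the flag stays at its default of false, which is presumably why dreamgaussian's GLBs pass the Mini Program check. An alternative to rewriting InstantMesh's exporter is to post-process its GLB and rewrite every normalized integer accessor as a FLOAT accessor. The sketch below does that with pygltflib; it assumes a single-buffer GLB with tightly packed (non-interleaved) accessor data, and denormalize_accessors and the file names are made up for illustration:

    import numpy as np
    import pygltflib

    # glTF componentType -> (numpy dtype, divisor for normalized -> float conversion)
    COMPONENT = {
        5120: (np.int8, 127.0),      # BYTE
        5121: (np.uint8, 255.0),     # UNSIGNED_BYTE
        5122: (np.int16, 32767.0),   # SHORT
        5123: (np.uint16, 65535.0),  # UNSIGNED_SHORT
    }
    NUM_COMPONENTS = {"SCALAR": 1, "VEC2": 2, "VEC3": 3, "VEC4": 4}

    def denormalize_accessors(src_path, dst_path):
        gltf = pygltflib.GLTF2().load(src_path)
        blob = bytearray(gltf.binary_blob())

        for acc in gltf.accessors:
            if not acc.normalized or acc.componentType not in COMPONENT:
                continue
            view = gltf.bufferViews[acc.bufferView]
            if view.byteStride:  # interleaved data is not handled in this sketch
                continue

            dtype, divisor = COMPONENT[acc.componentType]
            n_comp = NUM_COMPONENTS[acc.type]
            count = acc.count * n_comp
            start = (view.byteOffset or 0) + (acc.byteOffset or 0)
            nbytes = count * np.dtype(dtype).itemsize

            # Read the packed integer data and convert it as the glTF spec defines.
            raw = np.frombuffer(bytes(blob[start:start + nbytes]), dtype=dtype)
            data = raw.astype(np.float32) / divisor
            if dtype in (np.int8, np.int16):
                data = np.maximum(data, -1.0)  # signed normalized values clamp to -1

            # Append the float data as a new, 4-byte-aligned buffer view.
            while len(blob) % 4:
                blob.append(0)
            gltf.bufferViews.append(pygltflib.BufferView(
                buffer=0,
                byteOffset=len(blob),
                byteLength=data.nbytes,
                target=view.target,
            ))
            blob.extend(data.tobytes())

            # Repoint the accessor at the new float data and clear the flag.
            acc.bufferView = len(gltf.bufferViews) - 1
            acc.byteOffset = 0
            acc.componentType = pygltflib.FLOAT
            acc.normalized = False
            acc.max = data.reshape(-1, n_comp).max(axis=0).tolist()
            acc.min = data.reshape(-1, n_comp).min(axis=0).tolist()

        gltf.buffers[0].byteLength = len(blob)
        gltf.set_binary_blob(bytes(blob))
        gltf.save(dst_path)

    denormalize_accessors("instantmesh_output.glb", "instantmesh_output_fixed.glb")

The conversion follows the glTF rule for normalized values (unsigned integers divide by the type's maximum, signed ones additionally clamp to -1), and the original bytes are left in place so other accessors sharing the same buffer view are unaffected. Whether this alone satisfies the Mini Program validator would still need to be tested against a real export.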