Stability-AI / stable-fast-3d

SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
https://stable-fast-3d.github.io

[Gradio error]: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 4598657: character maps to <undefined> #15

Closed MNeMoNiCuZ closed 1 month ago

MNeMoNiCuZ commented 1 month ago
To create a public link, set `share=True` in `launch()`.
C:\AI\stable-fast-3d\sf3d\models\tokenizers\dinov2.py:266: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  context_layer = F.scaled_dot_product_attention(
C:\AI\stable-fast-3d\sf3d\box_uv_unwrap.py:524: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\Cross.cpp:66.)
  torch.cross(main_axis, seconday_axis), dim=-1, eps=1e-6
Generation took: 1.4101009368896484 s
Peak Memory: 6161.3818359375 MB
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 399, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\gradio\route_utils.py", line 707, in __call__
    await self.app(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\fastapi\routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\starlette\concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\gradio\routes.py", line 473, in custom_component_path
    Path(path).read_text().encode()
    ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\pathlib.py", line 1028, in read_text
    return f.read()
           ^^^^^^^^
  File "C:\Python312\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 4598657: character maps to <undefined>
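The traceback shows the failing call is `Path(path).read_text()` with no `encoding` argument, so Python falls back to the locale's preferred encoding, which is cp1252 (the "charmap" codec) on many Windows installs. As a minimal illustration of the failure mode, the snippet below writes a character whose UTF-8 bytes include 0x9d, the exact byte cp1252 cannot decode, and reads it back with an explicit encoding (file name and text are made up for the demo):

```python
from pathlib import Path
import tempfile

# "\u275d" encodes to the UTF-8 bytes e2 9d 9d; 0x9d is one of the
# bytes that cp1252 (Windows' default "charmap" codec) cannot decode.
text = "heavy quote: \u275d"

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp, "demo.txt")
    p.write_text(text, encoding="utf-8")

    assert b"\x9d" in p.read_bytes()
    # Passing the encoding explicitly makes the read locale-independent;
    # a bare p.read_text() uses locale.getpreferredencoding() instead,
    # which is cp1252 on many Windows setups unless UTF-8 mode is on.
    assert p.read_text(encoding="utf-8") == text
    print("decoded ok")
```

On Linux and macOS the locale encoding is usually UTF-8 already, which is why the bug surfaces only on Windows.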

I've tried both with custom images and the sample images provided.

When running the run.py example, it generates just fine:

(venv) C:\AI\stable-fast-3d>python run.py demo_files/examples/chair1.png --output-dir output/
C:\AI\stable-fast-3d\sf3d\models\network.py:68: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @custom_fwd(cast_inputs=torch.float32)
C:\AI\stable-fast-3d\sf3d\models\network.py:74: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  @custom_bwd
C:\AI\stable-fast-3d\venv\Lib\site-packages\open_clip\factory.py:129: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(checkpoint_path, map_location=map_location)
0it [00:00, ?it/s]C:\AI\stable-fast-3d\sf3d\models\tokenizers\dinov2.py:266: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  context_layer = F.scaled_dot_product_attention(
C:\AI\stable-fast-3d\sf3d\box_uv_unwrap.py:524: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\Cross.cpp:66.)
  torch.cross(main_axis, seconday_axis), dim=-1, eps=1e-6
Peak Memory: 6168.84814453125 MB
1it [00:01,  1.17s/it]

When running run.py with the --texture-resolution parameter, it seems to run just fine (although it doesn't export a texture image file for me; should it?).

When running run.py with --texture-resolution and --remesh_option quad, I get the following error:

(venv) C:\AI\stable-fast-3d>python run.py demo_files/examples/otter_samurai.png --output-dir output/ --texture-resolution 1024 --remesh_option quad
C:\AI\stable-fast-3d\sf3d\models\network.py:68: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @custom_fwd(cast_inputs=torch.float32)
C:\AI\stable-fast-3d\sf3d\models\network.py:74: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  @custom_bwd
C:\AI\stable-fast-3d\venv\Lib\site-packages\open_clip\factory.py:129: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(checkpoint_path, map_location=map_location)
0it [00:00, ?it/s]C:\AI\stable-fast-3d\sf3d\models\tokenizers\dinov2.py:266: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  context_layer = F.scaled_dot_product_attention(
Computing mesh statistics .. done. (took 0.0ms)
Output mesh goals (approximate)
   Vertex count           = 3000
   Face count             = 3000
   Edge length            = 0.0220178
Input mesh is too coarse for the desired output edge length (max input mesh edge length=0.0314817), subdividing ..
Building a directed edge data structure .. done. (took 1.0ms)
Subdividing mesh .. done. (split 34323 edges, took 25.0ms, new V=46863, F=93726, took 25.0ms)
Building a directed edge data structure .. done. (took 2.0ms)
Generating adjacency matrix .. done. (took 0.0ms)
Computing vertex & crease normals .. done. (39435 crease vertices, took 9.0ms)
Computing dual vertex areas .. done. (took 0.0ms)
Processing level 0 ..
    Coloring .. done. (7 colors, took 2.0ms)
Building multiresolution hierarchy ..
  Collapsing .. done. (46863 -> 25933 vertices, took 4.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (25933 -> 14104 vertices, took 2.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (14104 -> 7670 vertices, took 1.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (7670 -> 4192 vertices, took 0.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (4192 -> 2273 vertices, took 0.0ms)
    Coloring .. done. (6 colors, took 0.0ms)
  Collapsing .. done. (2273 -> 1234 vertices, took 0.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (1234 -> 668 vertices, took 0.0ms)
    Coloring .. done. (6 colors, took 0.0ms)
  Collapsing .. done. (668 -> 362 vertices, took 0.0ms)
    Coloring .. done. (7 colors, took 0.0ms)
  Collapsing .. done. (362 -> 200 vertices, took 0.0ms)
    Coloring .. done. (6 colors, took 0.0ms)
  Collapsing .. done. (200 -> 107 vertices, took 0.0ms)
    Coloring .. done. (6 colors, took 0.0ms)
  Collapsing .. done. (107 -> 59 vertices, took 0.0ms)
    Coloring .. done. (5 colors, took 0.0ms)
  Collapsing .. done. (59 -> 32 vertices, took 0.0ms)
    Coloring .. done. (6 colors, took 0.0ms)
  Collapsing .. done. (32 -> 18 vertices, took 0.0ms)
    Coloring .. done. (5 colors, took 0.0ms)
  Collapsing .. done. (18 -> 11 vertices, took 0.0ms)
    Coloring .. done. (4 colors, took 0.0ms)
  Collapsing .. done. (11 -> 6 vertices, took 0.0ms)
    Coloring .. done. (3 colors, took 0.0ms)
  Collapsing .. done. (6 -> 3 vertices, took 0.0ms)
    Coloring .. done. (3 colors, took 0.0ms)
  Collapsing .. done. (3 -> 2 vertices, took 0.0ms)
    Coloring .. done. (2 colors, took 0.0ms)
  Collapsing .. done. (2 -> 1 vertices, took 0.0ms)
    Coloring .. done. (1 colors, took 0.0ms)
Hierarchy construction took 20.0ms.
Setting to random solution .. done. (took 0.0ms)
Constructing Bounding Volume Hierarchy .. done. (SAH cost = 70.7621, nodes = 61807, took 7.0ms)
Compressing BVH node storage to 32.97% of its original size .. done. (took 1.0ms)
Preprocessing is done. (total time excluding file I/O: 77.0ms)
Optimizing orientation field .. Propagating updated solution.. done. (took 0.0ms)
done. (took 48.0ms)
Orientation field has 78 singularities.
Optimizing position field .. done. (took 169.0ms)
Step 1: Classifying 281178 edges in parallel .. done. (took 25.0ms)
Step 2: Collapsing 91730 edges .. done. (ignored 313 conflicting edges, took 15.0ms)
Step 3: Assigning vertices .. done. (3051 vertices, took 1.0ms)
Step 3a: Removing spurious vertices .. done. (removed 87 vertices, took 0.0ms)
Step 4: Assigning positions to vertices .. done. (took 8.0ms)
Step 5: Snapping and removing unnecessary edges ... done. (snapped 40 vertices, removed 26 edges, took 1.0ms)
Step 6: Orienting edges .. done. (took 0.0ms)
Step 7: Extracting faces .. done. (2985 faces, took 0.0ms)
Step 8: Filling holes .. Not trying to fill a hole of degree 12
done. (1 holes, took 0.0ms)
Intermediate mesh statistics: degree 3: 132 faces, degree 4: 2803 faces, degree 5: 48 faces, degree 6: 2 faces, degree 8: 1 face
Step 9: Regular subdivision into pure quad mesh .. done. (took 3.0ms)
Step 10: Running 2 smoothing & reprojection steps .... done. (took 2.0ms)
Step 12: Reordering mesh for efficient access .. done. (took 9.0ms)
Extraction is done. (total time: 70.0ms)
Faces 11876, 4
Verts 11895, 3
0it [00:01, ?it/s]
Traceback (most recent call last):
  File "C:\AI\stable-fast-3d\run.py", line 88, in <module>
    mesh, glob_dict = model.run_image(
                      ^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\sf3d\system.py", line 278, in run_image
    meshes, global_dict = self.generate_mesh(
                          ^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\sf3d\system.py", line 320, in generate_mesh
    mesh = mesh.quad_remesh()
           ^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\sf3d\models\mesh.py", line 162, in quad_remesh
    mesh = trimesh.Trimesh(vertices=new_vert, faces=new_faces)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\trimesh\base.py", line 207, in __init__
    self.process(validate=validate, merge_tex=merge_tex, merge_norm=merge_norm)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\trimesh\base.py", line 258, in process
    self.merge_vertices(merge_tex=merge_tex, merge_norm=merge_norm)
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\trimesh\base.py", line 1129, in merge_vertices
    grouping.merge_vertices(
  File "C:\AI\stable-fast-3d\venv\Lib\site-packages\trimesh\grouping.py", line 72, in merge_vertices
    referenced[mesh.faces] = True
    ~~~~~~~~~~^^^^^^^^^^^^
IndexError: index 2350517376 is out of bounds for axis 0 with size 11895
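An index like 2350517376 is far beyond the 11895 reported vertices, which usually points to a face array with a wrong dtype or layout rather than a genuinely bad mesh. A hypothetical sanity check (the `check_mesh_arrays` helper below is not part of SF3D or trimesh, just an illustration of catching this before `trimesh.Trimesh` does):

```python
import numpy as np

def check_mesh_arrays(vertices: np.ndarray, faces: np.ndarray) -> None:
    """Raise early if face indices cannot reference the vertex array.

    An out-of-range index this large typically means the face buffer
    was produced with the wrong dtype or reinterpreted incorrectly.
    """
    vertices = np.asarray(vertices)
    faces = np.asarray(faces)
    if faces.size and faces.max() >= len(vertices):
        raise ValueError(
            f"face index {faces.max()} out of range "
            f"for {len(vertices)} vertices"
        )

verts = np.zeros((11895, 3))
check_mesh_arrays(verts, np.array([[0, 1, 2, 3]]))  # passes

try:
    check_mesh_arrays(verts, np.array([[0, 1, 2, 2350517376]]))
except ValueError as e:
    print(e)
```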

In case this helps.

jammm commented 1 month ago

Try python -X utf8 gradio_app.py

This is a Windows-specific issue.
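For reference, `-X utf8` enables Python's UTF-8 mode (PEP 540), which forces UTF-8 for file I/O regardless of the Windows locale; setting the `PYTHONUTF8=1` environment variable is equivalent. A quick way to confirm the mode is active:

```python
import subprocess
import sys

# sys.flags.utf8_mode reports whether UTF-8 mode (PEP 540) is enabled;
# it is 1 when the interpreter was started with -X utf8 or PYTHONUTF8=1.
out = subprocess.run(
    [sys.executable, "-X", "utf8", "-c",
     "import sys; print(sys.flags.utf8_mode)"],
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # prints "1"
```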

MNeMoNiCuZ commented 1 month ago

python -X utf8 gradio_app.py solved it! Thanks a lot!

Maybe it's worth mentioning in the Local Gradio App section on the front page?