Closed polycamnick closed 5 months ago
Thanks for the report. This should be enough to start looking into the issue.
This was due to a misunderstanding of how RealityKit maps the materials list in a ModelComponent to the corresponding MeshResource primitives. It should be fixed as of 95d6aaf.
If you're able to test this fix without me needing to spin a new release, I'd appreciate any feedback. Otherwise this fix will be incorporated into an upcoming release. Thanks again for the report.
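To illustrate the constraint at play, a rough sketch (a hypothetical illustration, not GLTFKit2's actual code) of the relationship between a ModelComponent's materials and its mesh:

```swift
import RealityKit

// Hypothetical illustration (not GLTFKit2's actual code): RealityKit applies
// a ModelComponent's `materials` array to the mesh's parts in order, so a
// loader must emit one material per primitive, in primitive order.
func makeModel(mesh: MeshResource, materials: [any Material]) -> ModelEntity {
    let entity = ModelEntity()
    // If this array is ordered differently than the mesh's primitives,
    // parts pick up the wrong materials, producing the kind of incorrect
    // rendering described in this issue.
    entity.components.set(ModelComponent(mesh: mesh, materials: materials))
    return entity
}
```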
Thanks for taking a look, Warren! We'll test it out and report back.
Thanks for the rapid turnaround
Still seeing issues with 95d6aaf
Can you share a .glb that reproduces this issue? I suspect it's a separate bug.
I can, but GLB doesn't compress well, and these files are bigger than the GitHub max attachment size.
Would a Google Drive link work? https://drive.google.com/file/d/1MelF1Qo_kE83h4Ym_cYbWxqMWOzNSZYI/view?usp=share_link
The immediate issue here is that we're trying to extract a color channel from a monochrome image as if it's an RGBA image. That's fairly easy to fix, but in the process of fixing it, I've encountered a lot of flakiness in the runtime behavior, and the perennial flakiness of Swift+lldb is making it hard to debug. I'll continue looking at this.
The situation should be much improved with c5501dd. In addition to never attempting to extract a channel from a single-channel image, I switched to a more efficient method from the Accelerate framework to do the extraction when necessary. I also added some defensive image decoding code that only runs on visionOS to address some problems I encountered there.
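For reference, per-channel extraction with vImage might look roughly like this. This is a hedged sketch, not necessarily the code in c5501dd; the function name and the single-channel guard are the relevant parts:

```swift
import Accelerate

// Hypothetical sketch of channel extraction with Accelerate's vImage.
// Extracts channel `index` from an interleaved 8-bit, 4-channel source
// into a new planar 8-bit buffer. A single-channel (grayscale) source
// should skip this path entirely -- that's the bug the fix addresses.
func extractChannel(from src: vImage_Buffer, at index: Int) -> vImage_Buffer? {
    guard var dest = try? vImage_Buffer(width: Int(src.width),
                                        height: Int(src.height),
                                        bitsPerPixel: 8) else { return nil }
    var srcCopy = src
    let error = vImageExtractChannelFromARGB8888(&srcCopy, &dest, index,
                                                 vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else {
        dest.free()
        return nil
    }
    return dest
}
```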
You're likely to hit validation errors when running with the Metal API validation layer due to the use of 8K images for all material properties, but you can address that when and how you see fit.
Tested with https://github.com/warrenm/GLTFKit2/commit/c5501dd7fe78bdb4cbecddb849d0eee251ef687d
Things do seem improved; there's some intermittency in the failures (not sure whether that's this library or RealityKit/the visionOS Simulator).
However, some of the models (e.g. the jet engine) remain broken with that "zebra" white/gray material failure mode.
Here's another model that consistently zebras: https://drive.google.com/file/d/1cQDxuaT44CjQHENjNUlJSmaspaRd8Srp/view?usp=share_link
Are you able to reproduce that issue with that model?
(edit:) my testing environment:
master
Nothing too crazy here, I can provide the project if that helps with reproduction
I'd be happy to take a look at a sample project if you can provide it. The table model seems to consistently render without obvious bugs for me with Xcode 15.2 + visionOS Simulator 1.0:
I do see one failure on the console:
callDecodeImage:2006: *** ERROR: decodeImageImp failed - NULL _blockArray
and the app crashes with the Metal validation layer enabled (due to the large image size), but without validation enabled, it renders as above.
Sample project: very simple modifications to the default project setup from Xcode 15.2.
Large size is due to the embedded .glb files.
Hope I'm just holding it wrong :)
https://drive.google.com/file/d/1gPBJwy8YBSyL7oc6mHnpwCjqufudWp9L/view?usp=share_link
I think if you're consuming the framework as a Swift package (which I don't recommend to anyone, but grudgingly support), it uses the contents of the Package.swift in the checkout to determine how to build. In this case, because the package file points to the latest GitHub release, you're not actually building and linking against the code at top of tree; you're linking against version 0.5.7, which doesn't include the changes made to address this issue.
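To illustrate that mechanism, here's a hypothetical manifest (the URL and checksum are placeholders, not the real ones). Because SPM builds whatever the checked-out Package.swift declares, a binary target pinned to a release always yields that release's binary, regardless of which revision of the repository you checked out:

```swift
// swift-tools-version:5.7
import PackageDescription

// Hypothetical sketch of a manifest that pins a binary target to a release.
// Even if a consumer's manifest points at the repository's master branch,
// SPM links the artifact this declaration names -- not the source at top
// of tree. URL and checksum below are placeholders.
let package = Package(
    name: "GLTFKit2",
    products: [.library(name: "GLTFKit2", targets: ["GLTFKit2"])],
    targets: [
        .binaryTarget(
            name: "GLTFKit2",
            url: "https://example.com/GLTFKit2-0.5.7.xcframework.zip",
            checksum: "0000000000000000000000000000000000000000000000000000000000000000"
        )
    ]
)
```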
Here's an XCFramework built from top of tree. It should work as a drop-in replacement for the SPM dependency.
https://drive.google.com/file/d/1SmnapYontNls7Wuwv2lyXmhs648derKV/view?usp=sharing
SHA256: 9caf87301f184abb6a268053f5e89f07d06002acba7a5532ec89bcb6b804b1cb
Oh wow okay that "makes sense" but damn
So my project's Package.swift points to GLTFKit2 master, and the git history of the local GLTFKit2 checkout shows your recent commits (which is why I thought I was using them).
But because GLTFKit2's Package.swift on master points to a specific release, it "overrides" my project's Package.swift, and what actually gets linked is the older release copy...
HUH. Good to know.
Okay that's great... I'll be able to test this later today.
Thanks Warren! Much appreciated
Yep, you got it. Sorry for the confusion. I only distribute binaries because SPM still doesn't support mixed-language targets after all these years. Maybe we get that some time this decade.
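If it helps, one way to consume a drop-in XCFramework from a package manifest is a local binary target. This is a sketch under the assumption that your app's dependencies live in a local package; the paths and names here are placeholders (for a plain Xcode app project, you'd instead embed the XCFramework directly in the target):

```swift
// swift-tools-version:5.7
import PackageDescription

// Hypothetical sketch: replacing the remote SPM dependency with a locally
// downloaded XCFramework. Paths and names are placeholders.
let package = Package(
    name: "MyAppDependencies",
    products: [.library(name: "MyAppDependencies", targets: ["MyAppDependencies"])],
    targets: [
        // Points at the unzipped framework checked into the project tree.
        .binaryTarget(name: "GLTFKit2", path: "Vendor/GLTFKit2.xcframework"),
        .target(name: "MyAppDependencies", dependencies: ["GLTFKit2"])
    ]
)
```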
Thanks so much for the support here, @warrenm!
If you're interested, we'd be happy to give you free access to Polycam Pro. If you create a Polycam account, send me the email address you used to create it, and I'll upgrade it. My email is my first name "elliott" at polycam.ai
Perfection, thank you so much @warrenm
Sweet! Thanks for sticking with it.
@warrenm Awesome! Just upgraded your Polycam account - you should have access to Pro indefinitely.
Loading certain GLB files via GLTFRealityKitLoader into a RealityView scene on visionOS is resulting in bad rendering
Example 1 (Expected vs. Result): [screenshots]
Example 2 (Expected vs. Result): [screenshots]
GLB Samples: tank.glb.zip
More samples available upon request; GitHub attachments max out at 25 MB.