Closed: NoTalkOnlyProx closed this issue 2 months ago.
Can you please attach a minimal test project? It's often better than a lot of (too small) screenshots and text and saves time.
I am at this point completely speculating, but this does seem like it might be a concurrency issue?
For now I don't think it is. I'm rather leaning towards uninitialized memory, miscompilation, or maybe corruption (though if it was the latter you'd have crashes too eventually).
It's often better than a lot of (too small) screenshots and text and saves time
Oof, I see now that Github absolutely destroyed the resolution of all the screenshots I took.
Can you please attach a minimal test project?
I'll try to send something as soon as I'm able. But in the meantime, the screenshots are actually unnecessary.
If you just make a generator graph that outputs a flat terrain, then add four OutputWeight nodes and set the first one to 0.5 / layer 0 and the others to 1.0 / layers 1, 2, and 3, that will be sufficient to recreate the problem, as long as you also use my shader.
Update: A friend of mine (who is able to test at the moment) just tried to reproduce the steps described here and did NOT reproduce this bug.
So either this is hardware-related, or there is a key step I missed in my instructions. At this point, yeah, it might be better to hold off until I can produce a project.
Perhaps it really is an uninitialized buffer somewhere, and on his hardware it "just works", but mine is sensitive to it.
If you just make a generator graph that outputs a flat terrain, then add four OutputWeight nodes and set the first one to 0.5 / layer 0 and the others to 1.0 / layers 1, 2, and 3, that will be sufficient to recreate the problem
So far I did that and noticed something strange going on with the 4th weight output, which seemed to be getting garbage data from previous tests (I previously generated a sine wave terrain, then tried with weights... and got sine-wave-ish data in the 4th weight, which later turned into a bunch of 1s after resetting things enough times). So I'm trying to isolate the problem at the moment. I'm not too surprised about this though, because these specific output nodes needed some special behavior to work properly: every other output node is unique, while these are identified by a property. There might be an issue with that somewhere.
I think what's happening is that weight outputs don't properly support the range analysis optimization (also known as the "use optimized execution map" property in the advanced tab). My test doesn't reproduce the bug when that property is turned off, and systematically does when it's on.
When asked for a chunk, the generator figures out that the value of the weight output nodes doesn't change, so it skips their operations. That logic works using a short-circuit for SDF outputs and other unique outputs, but not for weight outputs: they still try to access results as if they had been calculated individually, so they end up reading random memory that was previously used by something else.
This is particularly annoying because any output can end up constant while others are non-constant (i.e. some have their data as an array, some have only one value). But in order to gather the weights to store in voxels, they all have to be looked up in a long function that currently assumes they are all arrays. This behavior is specific to texture weights... Writing out every array/constant permutation that can occur across weight outputs isn't practical (and there can be up to 16 weights, according to that old design!). Alternatively, it could check on every access whether the result is an array or not, but that pretty much makes performance worse. So I'm not sure yet which approach to take here.
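As a rough illustration of that mismatch, here is a minimal C++ sketch. It is not the module's actual code; the type and function names (OutputBuffer, gather_weight_buggy, gather_weight_checked) are made up. It only shows how a gather step that unconditionally indexes a per-voxel array reads stale data for an output the execution map skipped, while a per-access check handles both cases at the cost of a branch per voxel.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical per-output buffer: after range analysis, an output is either
// computed per voxel (an array) or proven constant and skipped entirely.
struct OutputBuffer {
    std::vector<float> values;  // per-voxel results; holds stale data if the output was skipped
    bool is_constant = false;   // set when the execution map short-circuits this output
    float constant_value = 0.f; // the value the output was proven to have
};

// Buggy gather (what is described above): assumes every weight output was
// computed as an array, so a skipped output yields whatever the reused
// scratch buffer contained before.
float gather_weight_buggy(const OutputBuffer &b, size_t voxel_index) {
    return b.values[voxel_index]; // stale/garbage when b.is_constant is true
}

// Checked gather (the "check on every access" alternative): correct in both
// cases, but adds a branch inside the innermost loop.
float gather_weight_checked(const OutputBuffer &b, size_t voxel_index) {
    return b.is_constant ? b.constant_value : b.values[voxel_index];
}

int main() {
    OutputBuffer skipped;          // weight output skipped by the execution map
    skipped.is_constant = true;
    skipped.constant_value = 1.0f; // e.g. a weight output pinned to 1.0
    // gather_weight_buggy(skipped, 0) would read out of bounds / stale memory
    // here; the checked version returns the proven constant instead.
    std::printf("%f\n", gather_weight_checked(skipped, 0));
    return 0;
}
```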
I pushed a fix in a76509dcb49f5ed2380dde15012a00a1a5759623 to specifically address this case. I'm not sure yet whether that will fix all your findings, but it fixes what I found.
Fantastic, thank you! I will try to build it tonight if I have time and see what happens. Any advice for someone who knows nothing about building Godot on Windows? :P Obviously I'll Google it and check your docs, but I'd be a fool not to ask.
Aha, I see the builds here, nice https://github.com/Zylann/godot_voxel/actions/runs/10517425490
0417f38bf7519f9d4da358fabd974f81b8c81cb1 fixes the issue described here for me 100%, nice sleuthing!
Split from https://github.com/Zylann/godot_voxel/issues/685
Issue
It would seem that seemingly equivalent VoxelGeneratorGraphs produce inconsistent and distinct results when painting texture weights via the OutputWeight node. I have observed several functionally identical texture weight graphs producing wildly different results, with what appear to be randomly distributed weight artifacts.
How to reproduce
Set up a terrain with a VoxelMesherTransvoxel. Leave the streamer unset.
The VoxelMesherTransvoxel must have the texturing mode set to TEXTURES_BLEND_4_OVER_16.
Create a generator graph as shown in the Examples section.
Test Shader
Here is a simple test shader you can use to reproduce this problem. It just outputs the first three encoded texture weights to RGB channels of albedo.
This is adapted from the example here: https://voxel-tools.readthedocs.io/en/latest/smooth_terrain/#shader
But I changed the divisor from 255 to 16, since encode_weights_to_packed_u16_lossy already divides by 16 inherently.
I have verified manually that v_indices.x == 0 and v_indices.y == 1 always, so none of what follows is due to index swapping.
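As a side note on that divisor, here is a purely illustrative C++ sketch of 4-bit weight packing. The function names (pack_weights_4bit, unpack_weight_4bit) are hypothetical and this is not the module's actual implementation; it only assumes, as described above, that the lossy encoder quantizes each 0–255 weight to a 4-bit nibble by dividing by 16, which is why the decoded values land in 0–15 and need a divisor around 16 rather than 255.

```cpp
#include <cstdint>
#include <cstdio>

// Pack four 8-bit weights (0..255) into one 16-bit value, 4 bits per weight.
// Dividing by 16 maps 0..255 onto 0..15, which matches the "already divides
// by 16" behavior mentioned above.
uint16_t pack_weights_4bit(uint8_t w0, uint8_t w1, uint8_t w2, uint8_t w3) {
    return uint16_t((w0 / 16) | ((w1 / 16) << 4) | ((w2 / 16) << 8) | ((w3 / 16) << 12));
}

// Extract one 4-bit weight (0..15) from the packed value.
uint8_t unpack_weight_4bit(uint16_t packed, int i) {
    return uint8_t((packed >> (4 * i)) & 0xF);
}

int main() {
    // A weight of 0.5 (128 out of 255) becomes nibble 8, a weight of 1.0 (255) becomes 15.
    const uint16_t packed = pack_weights_4bit(128, 255, 255, 255);
    for (int i = 0; i < 4; ++i) {
        const unsigned nibble = unpack_weight_4bit(packed, i);
        // Normalizing the stored nibble needs a divisor on the order of 16
        // (or 15), not 255.
        std::printf("weight %d: nibble=%u normalized=%.2f\n", i, nibble, nibble / 15.0f);
    }
    return 0;
}
```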
Examples
Here are a few equivalent graphs I tried, and their results.
In each case, the goal is to set the weight of channel 0 to 0.5, and the weights of channels 1, 2, and 3 to 1.0.
Notes for recreating these:
I have labeled the OutputWeight nodes with what layer they are set to, so that you can more easily reproduce the graphs.
The ClampC node is set to clamp 0 to 1 in all screenshots.
Direct output of (0.5, 1, 1, 1) using default values on OutputWeight nodes:
Graph:
Result:
Using four constants to output (0.5, 1, 1, 1) via OutputWeight nodes:
The constant values are not shown in the screenshot, but they are 0.5, 1, 1, 1, to match the other examples.
Graph:
Results:
I have seen two totally distinct results for this exact graph. Not sure why.
Using four constants with 0 added to output (0.5, 1, 1, 1) via OutputWeight nodes:
Graph:
Result:
Using four constants with realtime-derived-0 added to output (0.5, 1, 1, 1) via OutputWeight nodes:
Instead of using the default values in the Add nodes to output 0, in this example I multiply the SDF by zero and use that result as the 0 constant.
Graph:
Speculation
I am at this point completely speculating, but this does seem like it might be a concurrency issue? That is the only way I can imagine these graphs, which should hypothetically be outputting identical data, are somehow outputting completely different results.
Or am I just crazy? Am I doing something totally wrong here?
Environment
Godot v4.3.stable.custom_build [77dcf97d8]
Renderer: Forward+ (I think?)