Gnurfos / transvoxel_rs

Rust implementation of the Transvoxel algorithm
Apache License 2.0

Request: The positions of the sampling points from which the mesh is built #4

Closed: davids91 closed this issue 1 year ago

davids91 commented 1 year ago

To help with generating UV coordinates, I found myself needing to query, for every triangle, the edge points of the cubes used to generate it.

I mean the green points in the image below: [image]

Gnurfos commented 1 year ago

For that, we'll need to change the interface:

First, would you be OK if the information was actually output per vertex rather than for every triangle? I think it makes more sense, as vertices are reused, and for each output vertex there's just a pair of grid points that were involved.

I'm thinking that rather than outputting to you something that you can use to compute the UVs from, and leak out "private" structures like cube corner indices, or wastefully copy densities in the output, the algorithm could directly output UVs. These are typical as vertex attributes anyway. Of course some people don't care about UVs, so this would have to be customizable somehow (I need to look a bit into how game engines are handling vertex attributes).

Some of this logic (what attributes you want to output and how you want them computed) could be moved into the "Density" trait. This trait would have to be renamed to reflect that it can store something else than just a scalar (if you need more), and that it handles output computations.
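The generalization described above could be sketched roughly as follows. All names here (`VoxelSource`, `vertex_attributes`, the `Sphere` example) are invented for illustration and are not part of the library's actual API:

```rust
/// Hypothetical sketch of a trait generalizing the scalar-only "Density"
/// trait: it stores arbitrary per-voxel data and computes vertex attributes.
pub trait VoxelSource {
    /// Whatever is stored per grid point (a scalar, a material id, ...).
    type Voxel: Copy;
    /// The attributes computed for each output vertex (e.g. UVs).
    type VertexAttributes;

    /// Sample the field at a point.
    fn sample(&self, x: f32, y: f32, z: f32) -> Self::Voxel;

    /// The scalar the marching algorithm compares against the threshold.
    fn density(voxel: &Self::Voxel) -> f32;

    /// Compute per-vertex attributes from the two grid-point voxels the
    /// vertex was interpolated between, and the interpolation factor.
    fn vertex_attributes(
        &self,
        a: &Self::Voxel,
        b: &Self::Voxel,
        interpolation: f32,
    ) -> Self::VertexAttributes;
}

/// Toy implementation: a sphere density field with trivial attributes.
struct Sphere {
    radius: f32,
}

impl VoxelSource for Sphere {
    type Voxel = f32;
    type VertexAttributes = [f32; 2];

    fn sample(&self, x: f32, y: f32, z: f32) -> f32 {
        self.radius - (x * x + y * y + z * z).sqrt()
    }

    fn density(voxel: &f32) -> f32 {
        *voxel
    }

    fn vertex_attributes(&self, a: &f32, b: &f32, t: f32) -> [f32; 2] {
        // A trivial "UV" derived from the two endpoint densities.
        [a * (1.0 - t) + b * t, 0.0]
    }
}
```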

Do you have a small example of some UVs you might want to generate, with the source data they rely on ?

davids91 commented 1 year ago

I would love to have that! But I'm not sure my request fits the library's requirements, though. I would definitely be okay with the information being output on a per-vertex basis! The way I calculate UVs comes in phases:

  1. Extract vertex positions (I do this step because of the UV calculation, actually; the bevy_mesh feature could be used otherwise):

    let extracted_mesh =
        extract_from_field(field, &block, SURFACE_VALUE_THRESHOLD, transition_sides);
    let mut mesh = Mesh::new(PrimitiveTopology::TriangleList);
    
    let positions = extracted_mesh
        .positions
        .into_iter()
        .tuples()
        .map(|(a, b, c)| [a, b, c])
        .collect::<Vec<_>>();
  2. Calculate UVs:

    let uvs: Vec<[f32; 2]> = positions
        .iter()
        .map(|position: &[f32; 3]| {
    
            // 2.a: Normalize each coordinate relative to the world, and apply a zoom factor
            let zoomifyer = 25.; // so one triangle shows a 25th of the texture, not the whole of it
            let normalizer = world_size * zoomifyer; // world_size: placeholder for the actual world size
            let (mut n_x, mut n_y, mut n_z) = (
                position[0] / normalizer,
                position[1] / normalizer,
                position[2] / normalizer,
            );
    
            // This is only needed in case the texture is not configured to repeat, or you want to use a texture atlas
            let modulo_function = |num: f32| num - num.floor();
            n_x = modulo_function(n_x);
            n_y = modulo_function(n_y);
            n_z = modulo_function(n_z);
    
           // This way each UV coordinate remains structurally cohesive with the world coordinates

           // 2.b: Tri-planar mapping of the UV coordinates
            use std::ops::Div;
            let mut uv = (Vec2{x: n_x, y: n_z} /* projection to xz plane */
                 + Vec2{x: n_x, y: n_y} /* projection to xy plane */
                 + Vec2{x: n_y, y: n_z}/* projection to yz plane */)
            .div(3.);
    
            // 2.c: this is where I am stuck
            /* Calculate which material the coordinate belongs to and align to it */
            let material = Materials::Stone; // Workaround until a solution is found for the material crisis
            uv.x /= MATERIAL_COUNT; // The texture atlas used has one row and MATERIAL_COUNT columns
            uv += get_texture_uv_offset_for(&material);
    
            uv.into()
        })
        .collect();

I seem to have this working, but my problem is that I can't figure out how to get the materials for the triangles. If I sample at the position of the vertex, I often get an invalid value. For each position inside the relevant space I can provide the material value:

    pub fn material(&self, x: f32, y: f32, z: f32) -> Materials {
        let ix = (x * self.scale).round() as i32;
        let iy = (y * self.scale).round() as i32;
        let iz = (z * self.scale).round() as i32;
        let dim = self.dimension() as i32;
        if (ix < 0 || ix >= dim) || (iy < 0 || iy >= dim) || (iz < 0 || iz >= dim) {
            return Materials::What;
        }
        if let Some(cell) = self.octree.get([ix as u32, iy as u32, iz as u32]) { //Sparse voxel octree, meaning not every position contains a value
            cell.material
        } else {
            Materials::What
        }
    }

Which is being sampled by the trait implementation:

use transvoxel::density::ScalarField;
impl ScalarField<f32, f32> for &WorldContent {
    fn get_density(&self, x: f32, y: f32, z: f32) -> f32 {
        self.material(x, y, z).density()
    }
}

The problem I am trying to solve is that if I use the triangle positions directly, I almost always get an out-of-bounds value. I understand why from the images in this issue as well: the surface is some distance away from the points it was sampled on.

I tried to compensate for the algorithm by aligning the vertex position to a grid cell:

            let cell_alignment = 10.; //(chunk.size() as f32 + extension * 2.) / (subdivisions as f32 * 2.0);
            let aligned_pos =
                crate::math_utils::align_vec3_to_base(&((*position).into()), cell_alignment as i32);
            collect_fn(aligned_pos);
            collect_fn(aligned_pos + Vec3::X * cell_alignment);
            collect_fn(aligned_pos + Vec3::Y * cell_alignment);
            collect_fn(aligned_pos + Vec3::Z * cell_alignment);
            collect_fn(aligned_pos - Vec3::X * cell_alignment);
            collect_fn(aligned_pos - Vec3::Y * cell_alignment);
            collect_fn(aligned_pos - Vec3::Z * cell_alignment);

            collect_fn(aligned_pos + Vec3::X * cell_alignment + Vec3::Y * cell_alignment);
            collect_fn(aligned_pos + Vec3::X * cell_alignment + Vec3::Z * cell_alignment);
            collect_fn(aligned_pos + Vec3::Z * cell_alignment + Vec3::Y * cell_alignment);
            collect_fn(
                aligned_pos + Vec3::X * cell_alignment + Vec3::Y * cell_alignment + Vec3::Z * cell_alignment,
            );

This sometimes gave correct values, but most of the time it didn't: [image]

In short, what I am trying to do is somehow calculate the green dots (original image) from the vertex positions.

Another method I will try is to iterate over the triangles via the index values, calculate flat normals from them, and step back from each triangle's center along its normal. I just think it would be more efficient to have the points the triangles were created from.
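The fallback approach described above could be sketched roughly like this. The `Vec3` type here is a minimal stand-in for glam/bevy's `Vec3`, and the `depth` parameter (how far behind the surface to sample) is an assumption that would need tuning to the grid size:

```rust
/// Minimal stand-in for a vector type such as glam's Vec3.
#[derive(Clone, Copy)]
struct Vec3 {
    x: f32,
    y: f32,
    z: f32,
}

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 {
        Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z }
    }
    fn cross(self, o: Vec3) -> Vec3 {
        Vec3 {
            x: self.y * o.z - self.z * o.y,
            y: self.z * o.x - self.x * o.z,
            z: self.x * o.y - self.y * o.x,
        }
    }
    fn normalized(self) -> Vec3 {
        let l = (self.x * self.x + self.y * self.y + self.z * self.z).sqrt();
        Vec3 { x: self.x / l, y: self.y / l, z: self.z / l }
    }
}

/// For each triangle (3 consecutive indices), compute the flat normal and
/// return the point `depth` behind the triangle's center along that normal,
/// i.e. a point on the solid side of the surface to sample the material at.
fn sample_points(positions: &[Vec3], indices: &[u32], depth: f32) -> Vec<Vec3> {
    indices
        .chunks_exact(3)
        .map(|tri| {
            let a = positions[tri[0] as usize];
            let b = positions[tri[1] as usize];
            let c = positions[tri[2] as usize];
            let normal = b.sub(a).cross(c.sub(a)).normalized();
            let center = Vec3 {
                x: (a.x + b.x + c.x) / 3.0,
                y: (a.y + b.y + c.y) / 3.0,
                z: (a.z + b.z + c.z) / 3.0,
            };
            // Step against the normal, into the solid side of the surface.
            center.sub(Vec3 { x: normal.x * depth, y: normal.y * depth, z: normal.z * depth })
        })
        .collect()
}
```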

I really appreciate your library and the work you put in this! Thank you!

davids91 commented 1 year ago

I think it would fit the interface with minimal changes if this were included in the result mesh: [image]

My idea was to inject a bool parameter into the algorithm to enable storing this; otherwise it can be left empty.

Gnurfos commented 1 year ago

I'm not sure I understand your triplanar mapping: basically, if we ignore the normalizing, you seem to have

u = (2x + y) / 3
v = (2z + y) / 3

which seems strange to me. Also, I understand the get_texture_uv_offset_for part, but not dividing by MATERIAL_COUNT. But I had to read that very quickly, so I didn't give it proper attention.

In some of my old projects, I was addressing 3 textures with UVs being respectively simply xy, xz and yz, and then blending the result with weights based on the normals (output in the mesh) at that point, with the xy texture weighted by normal_z, and so on.
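That blending scheme could be sketched as follows; `sample` is a hypothetical stand-in for a texture fetch, and the weights are simply the absolute normal components, normalized:

```rust
/// Triplanar blending sketch: three planar projections (yz, xz, xy) are
/// each sampled, then blended with weights taken from the normal, so the
/// xy projection dominates on surfaces facing along z, and so on.
fn triplanar(
    pos: [f32; 3],
    normal: [f32; 3],
    sample: impl Fn([f32; 2]) -> [f32; 3], // hypothetical texture fetch
) -> [f32; 3] {
    let [x, y, z] = pos;
    let [nx, ny, nz] = normal;
    // Each projection is weighted by how much the normal faces its axis.
    let (wx, wy, wz) = (nx.abs(), ny.abs(), nz.abs());
    let total = wx + wy + wz;
    let (wx, wy, wz) = (wx / total, wy / total, wz / total);
    let a = sample([y, z]); // yz plane, weighted by |normal_x|
    let b = sample([x, z]); // xz plane, weighted by |normal_y|
    let c = sample([x, y]); // xy plane, weighted by |normal_z|
    [
        a[0] * wx + b[0] * wy + c[0] * wz,
        a[1] * wx + b[1] * wy + c[1] * wz,
        a[2] * wx + b[2] * wy + c[2] * wz,
    ]
}
```

In a real renderer this blend would typically live in a fragment shader, with three texture lookups per fragment.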

Either way, I think the transvoxel algorithm is really meant to operate on a gradual field, and you might not get nice-looking results with a density function that has abrupt (discrete) changes.

Gnurfos commented 1 year ago

What I had in mind for changing the interface was along the lines of:

davids91 commented 1 year ago

I really like the second point! However, it is important to mention that closures can only be coerced to fn types if they do not capture any variables, so the way I understand it, it can only work as a self-contained unit.

Is it a big favor to ask if there's a usage example provided, and/or the current function implementation be exposed through the API?

I think the other aspects, like the projection and the division by MATERIAL_COUNT can be discussed in another thread if you'd like, it's just a bit off-topic here. I'd be glad to talk about them though!

Gnurfos commented 1 year ago

I don't think the closure thing is really an issue: I'd create a trait in which to implement that method, so we don't have to deal with function pointers.
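A minimal sketch of the trait approach, with invented names: unlike a plain `fn` pointer, a trait implementation can carry whatever "captured" state it needs in its struct fields:

```rust
/// Hypothetical trait: the library would call this to compute attributes,
/// instead of taking a function pointer.
trait AttributeComputer {
    fn compute_uv(&self, x: f32, y: f32) -> [f32; 2];
}

/// The implementing struct carries state a fn pointer could not capture.
struct Scaled {
    scale: f32,
}

impl AttributeComputer for Scaled {
    fn compute_uv(&self, x: f32, y: f32) -> [f32; 2] {
        // Uses the stored scale: state that a non-capturing fn lacks.
        [x * self.scale, y * self.scale]
    }
}
```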

I definitely would try to add examples and provide a sensible default implementation... when I get to coding that. I just can't predict when

davids91 commented 1 year ago

Sure thing! any way I can help?

Gnurfos commented 1 year ago

Do you need to turn off vertex reuse for your case? The default algorithm tries to reuse most vertices: it does not duplicate the positions/normals when a vertex is part of several triangles (including in different cells). This might be a problem if you want not "density field gradient" normals but rather flat normals determined by only the triangle positions (although I think such things might be better to do in a shader).

Or will a given vertex always have the same attributes (normal, but also UVs), regardless of which triangle it's part of?

Another thing to clarify: I find it strange that your density function uses round(). Are you trying to get a mesh made of cubes (Minecraft style)? Because you probably won't get that, even with a binary density function.

davids91 commented 1 year ago

For my use case, reusing vertex values is fine; it is the optimal way of doing things.

One consequence of this is that one triangle may span different materials, but that is a different problem entirely, and it need not be handled by the mesh extraction algorithm. The triangle normals can be computed afterwards, should the need arise. It is more consistent if a given vertex always has the same attributes.

As for the usage of round(): that is because the field only has values at whole numbers, and is not continuous. Later on I plan to implement caching for the storage unit and introduce a nearest-neighbor interpolation, but for now the query always returns the nearest cell's value.
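For reference, one common way to smooth such a grid-only field (an assumption about what the interpolation might look like, not the current behavior) is trilinear interpolation between the eight grid values surrounding the query point; `get` here is a hypothetical stand-in for the octree lookup:

```rust
/// Trilinear interpolation over a field defined only at integer grid
/// coordinates: blends the 8 surrounding grid values by the fractional
/// position within the cell, yielding a continuous density.
fn trilinear(get: impl Fn(i32, i32, i32) -> f32, x: f32, y: f32, z: f32) -> f32 {
    let (x0, y0, z0) = (x.floor() as i32, y.floor() as i32, z.floor() as i32);
    // Fractional position inside the cell, each in [0, 1).
    let (fx, fy, fz) = (x - x0 as f32, y - y0 as f32, z - z0 as f32);
    let lerp = |a: f32, b: f32, t: f32| a + (b - a) * t;
    // Interpolate along x on the four cell edges...
    let c00 = lerp(get(x0, y0, z0), get(x0 + 1, y0, z0), fx);
    let c10 = lerp(get(x0, y0 + 1, z0), get(x0 + 1, y0 + 1, z0), fx);
    let c01 = lerp(get(x0, y0, z0 + 1), get(x0 + 1, y0, z0 + 1), fx);
    let c11 = lerp(get(x0, y0 + 1, z0 + 1), get(x0 + 1, y0 + 1, z0 + 1), fx);
    // ...then along y, then along z.
    lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
}
```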

Gnurfos commented 1 year ago

I managed to put in some time today, and it looks like it's going somewhere. Now to clean everything up, update the docs, and make an example. I can't give any ETA though. Also, the example will probably at best just output vertex colors rather than UVs, as I'm not too proficient with texture atlases etc. in bevy yet.

davids91 commented 1 year ago

That is amazing news! Thank you! no need to rush! :) I am really grateful for your work!

Gnurfos commented 1 year ago

I just published version 1.0.0 of the lib allowing (requiring) you to use a custom MeshBuilder, wherein you decide what attributes to output in the vertices, based on voxel data (which can now be anything, not just a float).

Interfaces changed a lot. Let me know if you need help converting your code.

There's an example that you can look at or run, using different vertex colors per "material" in examples/non_scalar_voxel_data. The visual result doesn't look great in my opinion, and I'm afraid it might be the same using textures and UVs, since it's limited in "resolution" to one value per emitted vertex. But maybe you'll get better results.

davids91 commented 1 year ago

I saw! Looks great! I think this is more than enough flexibility! If I understood the new trait correctly, it makes it possible to iterate over the triangles as well as the per-vertex data! Very nice!

I have already adapted my code base to the changes; I have not started implementing my custom builder yet, but I don't consider the issue unresolved, as this is what I asked for, so now it's up to me and me alone :)

Thank you very much for the interface update and your wonderful library!