ousnius / BodySlide-and-Outfit-Studio

BodySlide and Outfit Studio, a tool to convert, create, and customize outfits and bodies for Bethesda games.
GNU General Public License v3.0

Normal Map generation #388

Closed Iloei closed 2 years ago

Iloei commented 2 years ago

BodySlide should generate normal maps in addition to nifs. This should be optional (and disabled by default) because this would be a slow process.

This would have some limitations. Normal map updates could only take into account pixels which would be mapped onto the surface of the .nif model.

So, for example, where several independent .nif files may share some normal map textures, in the general case it would be possible that conflicts would not be handled properly by BodySlide. (This example issue should not be fixed and should instead be documented. And those hypothetical source .nif files should eventually be replaced with files which do not conflict.)

Conceptually speaking, the generated normal maps have three "upstream" sources of information:

(1) The original normal maps for the source nif
(2) The low-poly geometry for the source nif
(3) The low-poly geometry for the generated nif

However, for the purpose of generating the normal map, it would probably be advantageous to smooth the source nif into a (throwaway) high-poly model. This high-poly model should conform to the original normal map; the sliders would then be applied to it, and the resulting high-poly geometry used to generate the new normal maps.

It's not necessary for every pixel in the normal map to be mapped to a single vertex in the high-poly model. The point would be to minimize artifacting introduced by the interpolation algorithm.

Note that generating a high-poly model from a low-poly model necessarily carries some assumptions. For example, using the Catmull-Clark subdivision algorithm, sharp edges would be represented by duplicating vertices along the edge (same position, different index in memory), while smooth surfaces would not use duplicated vertices. An alert should be displayed to the user if the algorithm cannot match the supplied normal map.
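To make the duplicated-vertex convention concrete: a small helper could find seams by grouping indices that share a position. This is only an illustrative sketch; the function name and tolerance are my own inventions, not anything in BodySlide.

```python
def sharp_vertex_groups(verts, tol=1e-6):
    """Group vertex indices that share the same position within tol.

    Groups with more than one index mark a sharp seam: the same
    position stored under different indices, which is how a
    Catmull-Clark-style subdivision would represent a hard edge.
    """
    groups = {}
    for i, (x, y, z) in enumerate(verts):
        key = (round(x / tol), round(y / tol), round(z / tol))
        groups.setdefault(key, []).append(i)
    return [g for g in groups.values() if len(g) > 1]
```

A mesh with no such groups would be treated as fully smooth, which is one case where the "cannot match the supplied normal map" alert might fire.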

That said, a variation would not use triangles at all. It might instead start with the low-poly source model and, for every mapped pixel in the normal map, generate a vertex (or several vertices) projected from the low-poly surface by a displacement that fits the normal map. BodySlide would then map these vertices based on the slider set and use their resulting positions to regenerate the normal map. We would not need to maintain triangle data for all of these vertices; this part of the data structure should probably instead roughly correspond to the UV map, with depth coordinates.
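The per-pixel projection step could conceptually look like the following sketch: find the texel's barycentric coordinates inside its UV triangle, interpolate position and normal there, and push the point out by a height fitted from the normal map. All names are hypothetical and the height-fitting itself is assumed to happen elsewhere.

```python
import numpy as np

def barycentric(uv, a, b, c):
    # Solve uv = u*a + v*b + w*c with u + v + w = 1.
    m = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    u, v = np.linalg.solve(m, np.asarray(uv, float) - np.asarray(c, float))
    return u, v, 1.0 - u - v

def texel_to_surface(uv, tri_uvs, tri_pos, tri_norms, height):
    # Interpolate position and normal at the texel's UV location,
    # then displace the point along the normal by the fitted height.
    u, v, w = barycentric(uv, *tri_uvs)
    p = u * tri_pos[0] + v * tri_pos[1] + w * tri_pos[2]
    n = u * tri_norms[0] + v * tri_norms[1] + w * tri_norms[2]
    n = n / np.linalg.norm(n)
    return p + height * n
```

Each texel processed this way yields one of the "projected vertices" described above, keyed by its u,v coordinate rather than by triangle connectivity.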

This might require an update to the .osd files. (I am writing this without knowing the file structure of the .osd files; they may already be sufficient.)

ousnius commented 2 years ago

@Iloei There already were attempts for this in the past (in fact there's still normal generation code and UI left in the code right now if you look for it).

What triggered us to try it was the fact that Skyrim uses model-space normal maps for anything with skin, such as the body (FO4 and other Bethesda games don't use model-space normals for skin). So changing the body shape through a tool like BodySlide won't have any effect whatsoever on the way the body is actually being lit in-game (with the exception of post-processing effects like SSAO). A good example of this are muscle morphs for abs, which usually wouldn't appear at all without an SSAO effect.

What it did was generate a new model-space normal map from scratch (not using any original normal maps as a template) from the low-poly mesh. In the case of Skyrim SE, CBBE has just enough vertices for that to be (somewhat) reliable. Using a higher-poly mesh isn't possible, because you cannot easily transfer the morphs that were created for the low-poly mesh onto the high-poly mesh.
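To illustrate the kind of from-scratch generation described here, a minimal sketch of computing model-space normals from low-poly geometry and packing them the way a model-space texture stores them. The helper names are mine; this is not the removed BodySlide code.

```python
import numpy as np

def vertex_normals(verts, tris):
    # Accumulate un-normalized face normals (area-weighted, since the
    # cross product's magnitude is twice the triangle area) onto each
    # vertex, then normalize the sums.
    n = np.zeros_like(verts, dtype=float)
    for i0, i1, i2 in tris:
        face_n = np.cross(verts[i1] - verts[i0], verts[i2] - verts[i0])
        n[i0] += face_n
        n[i1] += face_n
        n[i2] += face_n
    lengths = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.where(lengths == 0, 1.0, lengths)

def encode_model_space(normals):
    # Pack [-1, 1] components into the [0, 1] range a model-space
    # normal map texture stores per channel.
    return normals * 0.5 + 0.5
```

The "(somewhat) reliable" caveat shows up here directly: the quality of these normals is bounded by the vertex density of the low-poly mesh being averaged over.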

Even if you had a morphed high poly mesh available, your generated normal map will be missing any "fine" detail like skin pores, moles, you name it. One could try to "merge" the generated model-space map with additional detail brought in from a tangent space map, but at that point you just need more and more setup for it to work and it becomes a (usability) mess.
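The "merge" idea could conceptually amount to rotating a tangent-space detail normal (pores, moles) into model space through a TBN basis built from the morphed base normal, then renormalizing. A hedged sketch under assumed inputs; the real setup mess described above (matching tangents, map resolutions, formats) is exactly what this glosses over:

```python
import numpy as np

def add_tangent_detail(base_n, tangent, bitangent, detail_ts):
    # Build a TBN basis from the morphed base normal, rotate the
    # tangent-space detail normal into model space, and renormalize.
    tbn = np.column_stack([tangent, bitangent, base_n])
    n = tbn @ np.asarray(detail_ts, float)
    return n / np.linalg.norm(n)
```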

So unfortunately we've always come to the conclusion that it's pretty complicated to do properly and not worth the effort.

For FO4 or for outfit mods (those use tangent-space normal maps instead), generating normal maps isn't really needed, and it would be way, WAY too much coding and setup effort to be worth it. BodySlide updates the mesh normals for those, so the lighting in-game will match for the most part. If the shape varies too much (for example, a part of the body changes not just in size, but also in shape/direction by a lot), then it's possible there can be mismatches in certain specular effects after the now-mismatching tangent-space map is applied to the new mesh normals.

So I don't really want to put any more time or thought into generating normal maps as of right now.

Iloei commented 2 years ago

@ousnius What should I look for to find that normal generation code?

While I cannot guarantee that I could address all of these problems, I have some ideas which might work.

(I was not expecting you to implement this -- I opened this issue because of this sentence in CONTRIBUTING.md: "If you are thinking of making a large contribution, open an issue for it before starting work." That said, I imagine I could have done a better job of stating my intent.)

ousnius commented 2 years ago

@Iloei It's part of the preview window, look for SetNormalsGenerationLayers and go from there. It requires creating some XML files in a format that has several layers. You can find the source for that as well.

You can try but it'll be really hard to make something that works well because a lot of logic and different formats are involved. I won't be putting any energy into it myself for that reason.

Iloei commented 2 years ago

Understood -- and you have done so much already.

Here is my plan:

(1) A low poly mesh can be converted to a high poly mesh. This approach has some limitations, but should be sufficient for the sliders in the context of normal map generation.

(2) For the normal map itself, a mesh should not be necessary. Instead of working with triangles, I plan on working directly with vertices projected from the reference model based on a reference normal map. Instead of the traditional mesh data structure, these vertices would be indexed by their u,v coordinates.

My hope is that I can smooth the geometry represented by the sliders and map from the smoothed representation of the sliders directly into u,v coordinates to apply to my normal vectors.

For my initial draft, I hope to have one normal vector for each pixel of the normal map whose u,v coordinate is used by the mesh. (It may be better to have a denser set, with an approach patterned after an anti-aliasing algorithm. But I am already being rather ambitious here, and having something somewhat functional must be my first priority if I want any chance of getting this to work.)
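A sketch of the texel-indexed structure I have in mind, replacing triangle connectivity with a (u,v)-keyed map of projected vertices. All names here are hypothetical; nothing in this snippet exists in BodySlide.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TexelVertex:
    position: np.ndarray  # point on the reference (low-poly) surface
    normal: np.ndarray    # outward direction used for displacement

class DisplacementGrid:
    """Vertices indexed by texel coordinate instead of by triangles."""

    def __init__(self):
        self.texels = {}  # (u, v) pixel coordinate -> TexelVertex

    def set(self, u, v, position, normal):
        self.texels[(u, v)] = TexelVertex(np.asarray(position, float),
                                          np.asarray(normal, float))

    def displaced(self, u, v, height):
        # The point the morphed normal map would be regenerated from
        # for this texel, after sliders move position and normal.
        t = self.texels[(u, v)]
        return t.position + height * t.normal
```

The sliders would morph `position` and `normal` per texel; regenerating the map then only needs these displaced points, not a full triangle mesh.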

I expect this to be slow work. And, it's entirely possible that I will not succeed. But failing might teach me something interesting.

Anyways, thank you again.