Closed nobuyukinyuu closed 1 year ago
This sounds very cool to support, though I really have very little idea what I'm doing with some 3D concepts (still), such as normals. If you have a shader you can share, or links to how these types of shaders are implemented, that would really help, since it would let me verify that the normals actually are correct. The example you have looks great currently, so if I can help improve it, that would be nice!
Internally, each voxel in the .vox model corresponds to 1 screen pixel, and typically 2 layers of voxels deep are visible at some angles (where the voxels on the surface don't quite cover up the ones beneath them). To ensure the interior has a reasonable color, the exterior color is "soaked" down into the interior voxels, which isn't a perfect operation at corners. The size where the voxels are drawn unscaled is tiny, so I usually use 2x scale with some simple smoothing on the shapes (forming slopes instead of staircases, etc.). The slopes are still made of voxels that get rendered as single pixels, and currently they don't have their own normal data.
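The "soak" step could be sketched like this. To be clear, this is a toy illustration, not spotvox's actual code; the `soak` function, the `NEIGHBORS` table, and the dict-of-positions representation are all my assumptions:

```python
# Hypothetical sketch of "soaking" exterior colors into interior voxels.
# Voxels are a dict mapping (x, y, z) -> color, where None marks a filled
# but uncolored interior voxel; none of this mirrors spotvox's real data.
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def soak(voxels, passes=2):
    """Copy colors from colored voxels into adjacent uncolored ones."""
    for _ in range(passes):
        updates = {}
        for pos, color in voxels.items():
            if color is None:
                x, y, z = pos
                for dx, dy, dz in NEIGHBORS:
                    neighbor_color = voxels.get((x + dx, y + dy, z + dz))
                    if neighbor_color is not None:
                        updates[pos] = neighbor_color
                        break
        voxels.update(updates)
    return voxels
```

As the text says, this kind of nearest-neighbor spreading is ambiguous at corners, where an interior voxel touches surfaces of different colors.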
The lighting is a whole set of tricks that seem to work reasonably well but aren't exactly based in real physical lighting. There's tracking for which voxels have only open space above them (they can "see the sky"), and also which voxels have only open space along the y-axis (the pink/magenta axis in your lighting gif above). Where a voxel can see the y-axis lighting, it gets some light; where it can see the sky, it gets much more light; and where both are lighting a voxel, the refraction is used to determine how shiny a small area is (becoming much brighter). Using normals for this would be very powerful, I just haven't figured out how it would work yet.
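A minimal sketch of the visibility-based part of that lighting, with made-up weights; the function names and the 0.25/0.75 light amounts are my assumptions, not spotvox's values:

```python
def sees_sky(filled, x, y, z, max_z):
    # A voxel "sees the sky" when no filled voxel lies anywhere above it.
    return all((x, y, zz) not in filled for zz in range(z + 1, max_z + 1))

def sees_y(filled, x, y, z, max_y):
    # Open space all along the y axis (the pink/magenta axis mentioned above).
    return all((x, yy, z) not in filled for yy in range(y + 1, max_y + 1))

def brightness(filled, x, y, z, max_y, max_z):
    # Made-up weights: y-axis light adds a little, sky light adds much more.
    light = 1.0
    if sees_y(filled, x, y, z, max_y):
        light += 0.25
    if sees_sky(filled, x, y, z, max_z):
        light += 0.75
    return light
```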
As for the P.S., that's much more doable in the short term! I'm very happy with how the 1-pixel-per-voxel approach works with alternate camera angles.
I'll try to get the pitch configurable tonight, or soon at least. The normals would have to be a longer-term project, and I might need some help from your side to get the renderer to look as you have it.
This alternate pitch can, more or less, be produced now, and I think other desirable viewing angles can be produced the same way. There are a few parameters for this: `--horizontal-xy` is how many pixels apart a single x or y voxel difference moves a screen pixel horizontally. Likewise, `--vertical-xy` is how many vertical pixels an x or y voxel difference moves the screen pixel, and `--vertical-z` is how many vertical pixels a z voxel difference moves the screen pixel. The parameters should usually be between 1 and 4, inclusive.
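If I understand the parameters right, the projection might combine roughly like this sketch; the exact formula and the direction signs are my guesses, not the actual implementation:

```python
def project_voxel(x, y, z, horizontal_xy=2, vertical_xy=1, vertical_z=2):
    """Map a voxel coordinate to a screen pixel under a guessed projection.

    x and y differences move the pixel horizontally in opposite directions
    (an isometric-style view) and both push it down by vertical_xy; z
    differences move it straight up by vertical_z.
    """
    screen_x = (x - y) * horizontal_xy
    screen_y = (x + y) * vertical_xy - z * vertical_z
    return screen_x, screen_y
```

Under this sketch, raising `vertical_xy` relative to `vertical_z` would steepen the apparent pitch, which seems consistent with the descriptions above.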
Hi again,
Here's the shader I wrote, in Godot's shader language. It's close to GLSL; in 3D shaders, built-ins like the surface normal are provided automatically (the all-caps names are the built-ins):
```glsl
const float deg30 = 0.5236; // 30 degrees in radians

mat3 rotation3dY(float angle) {
    float s = sin(angle);
    float c = cos(angle);
    return mat3(
        vec3(c, 0.0, -s),
        vec3(0.0, 1.0, 0.0),
        vec3(s, 0.0, c)
    );
}

vec3 rotateY(vec3 v, float angle) {
    return rotation3dY(angle) * v;
}

// Rodrigues rotation about an arbitrary axis (not used in fragment() below).
mat4 rotation3d(vec3 axis, float angle) {
    axis = normalize(axis);
    float s = sin(angle);
    float c = cos(angle);
    float oc = 1.0 - c;
    return mat4(
        vec4(oc * axis.x * axis.x + c, oc * axis.x * axis.y - axis.z * s, oc * axis.z * axis.x + axis.y * s, 0.0),
        vec4(oc * axis.x * axis.y + axis.z * s, oc * axis.y * axis.y + c, oc * axis.y * axis.z - axis.x * s, 0.0),
        vec4(oc * axis.z * axis.x - axis.y * s, oc * axis.y * axis.z + axis.x * s, oc * axis.z * axis.z + c, 0.0),
        vec4(0.0, 0.0, 0.0, 1.0)
    );
}

void fragment() { // The part where pixels get shaded
    vec3 n = NORMAL;
    n = normalize(rotateY(n, deg30));
    ALBEDO = n;
}
```
The rotation functions are just for rotating the normal vector to match whatever orientation is necessary. OpenGL/Vulkan and DirectX have different canonical up axes and handedness, and different game engines handle normal map assets differently as well, so the same normal would map to different colors depending on your use case.
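For anyone following along without Godot, here's a small Python mirror of the shader's Y-rotation plus the usual normal-to-color encoding. The `normal_to_rgb` helper is my addition for illustration; the shader above just writes the raw normal to ALBEDO:

```python
import math

def rotate_y(v, angle):
    # Same convention as the rotation3dY matrix in the shader above
    # (GLSL mat3 constructors are column-major, so row 0 of that matrix
    # is (c, 0, s)).
    s, c = math.sin(angle), math.cos(angle)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

def normal_to_rgb(n):
    # Standard normal-map encoding: remap each component from [-1, 1] to [0, 1].
    return tuple(0.5 * component + 0.5 for component in n)
```

For example, a normal pointing straight at the camera, (0, 0, 1), encodes to the familiar normal-map light blue, (0.5, 0.5, 1.0).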
In my use case though, I don't need to do any rotation except to rotate 30 degrees to align the normals with the pitch everything in the 2d part of the game is rendered at, like seen in this example:
As for normal mapping itself, this article helped me a lot when developing a lighting shader that leveraged normal maps. In my original post, each voxel is rendered as a cube with 6 faces, which means the result doesn't really average out when the surface albedo (basically the color) is shaded by mapping XYZ to RGB; instead you get color flickering and "mostly" the color of whichever face is most parallel to the screen.
I'm hoping to get output that's closer to the image example above for 2D isometric normal mapping, which seems like it would be easier to calculate. Perhaps it could be done by estimating the angle from a unit sphere to each voxel point, from the reflection of an individual sphere per voxel, or from a marching-cubes wrapped surface or something (i.e., the color/normal of each voxel's "surface" would be estimated starting from a lookup table based on which adjacent voxel positions are filled). I'm not so sure myself!
Great job so far and thank you for the incredibly fast work on adjusting the pitch angle :)
I'm getting somewhere! I've implemented the Sobel and Scharr operators, which can be used to go from a height map (which the program already has internally) to a normal map. I've attached 128 standard renders and their corresponding 128 normal maps; I can switch the left-right and up-down directions easily if you need a different set. These normal maps aren't perfect and should probably be blurred before use; they're just a rough draft for now. normal-map-voxels.zip
The next step is probably to add a mode that does no lighting at all, so the normal maps can handle all of it.
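The height-map-to-normal-map step could be sketched roughly like this; the `strength` parameter and the z-scale constant here are arbitrary choices of mine, and the real code's normalization surely differs:

```python
import math

# Scharr kernels for horizontal and vertical gradients.
SCHARR_X = [[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]]
SCHARR_Y = [[-3, -10, -3], [0, 0, 0], [3, 10, 3]]

def height_to_normals(height, strength=1.0):
    """Estimate per-pixel unit normals from a 2D height map (list of rows)."""
    h, w = len(height), len(height[0])
    def at(y, x):  # clamp sampling at the edges
        return height[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            gx = sum(SCHARR_X[j][i] * at(y + j - 1, x + i - 1)
                     for j in range(3) for i in range(3))
            gy = sum(SCHARR_Y[j][i] * at(y + j - 1, x + i - 1)
                     for j in range(3) for i in range(3))
            # The normal opposes the gradient; the fixed z controls how
            # "flat" the result reads before normalization.
            nx, ny, nz = -gx * strength, -gy * strength, 16.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals
```

A flat height map yields normals pointing straight at the viewer, and a ramp tilts them away from the ascent, which matches what the attached maps appear to show.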
Fascinating example, I didn't think it would be possible to see edge contrast at a depth of 1px/vx for surfaces directly facing the camera! Do these edge detection methods operate in 3D space? The effect appears convincing at first glance (I haven't tried plugging the output into a lighting renderer yet); the pixel colors seem able to reflect at least one axis of the expected normal (I'm observing the brick faces as the character spins). I'll have to look into how you arrived at those methods, if you have any reading material I could take a peek at :)
Sure! I think I got the idea to use Sobel from https://arxiv.org/pdf/2212.09692v1.pdf . The code I loosely based the Sobel operator/filter on is here: https://gamedev.stackexchange.com/questions/165575/calculating-normal-map-from-height-map-using-sobel-operator , but I found the Scharr operator worked better: https://en.wikipedia.org/wiki/Sobel_operator#Alternative_operators . Getting the normalization right was tricky, but it's important to the result. I have the height map available internally and can potentially expose it as an option to users; the results look like this:
OK, now I have an option to eliminate the automatic lighting (or increase its intensity, if you want), which might be handy if your lighting is coming purely from the normal map. I've attached 128 un-shaded voxel renders and their (blurred with sigma=1.0, which makes the blur effect fairly strong) normal maps. I hope the normal maps will work as-is, but they might need to be regenerated with a lower sigma. I can generate these in under a minute, so the workflow isn't bad at all. normal-map-unshaded-voxels.zip
Will the blur be adjustable? I was curious how it's going to look, so I took both examples you posted and imported and lit them up with 3 strong lights:
Honestly, whatever baked-in shading was applied to the "hard"-edge copy before the normal map looks pretty great, even if the normals in that version too obviously look like they were passed through an edge-detection algorithm (the embossed-looking edges give it away). So, while relying purely on runtime lights is cool and all, I think it looks even better with the softened normal map against the shaded sprites from the first example (the left one mixes soft normals with the pre-shaded sprites):
This is looking like it's going to work really well!
Edit: Would it also be possible, by any chance, to adjust the canonical frame of reference for the vector? I might be able to do this with a shader, but it would really help me out if I could bake proper lighting against multiple axes, so sprites can be rotated to be put on walls, for example, and still be properly lit, provided their normal map is swapped out for one with the correct alignment:
Wow, that does look nice! Yes, blur is adjustable already and the strength of shading can be turned up or down, not just on or off. Outlines can also be disabled, which might be good here since I don't know how outlines interact with the normal map (do outlines even have normals?). I think I'll push a release so you can try this, it might be a pre-release while this is still in progress.
Regarding the frame of reference, right now that would be a challenge, but I could rather easily re-enable the roll/pitch/yaw rotation modifiers, so the monster on the green wall would cast a shadow downwards rather than against the wall. I don't know how you want to handle shadows cast on things outside the sprite and its normal map; I've wanted to export shadows separately for a while, but I'm not entirely sure how I would do that.
Also, a thought occurred to me -- you might not need to rotate the normals with these, since a pure-blue normal (pointing directly at the camera) is already pointing up about 30 degrees.
It would be really nice if you could write up the technical side of your lighting/normal system in any level of detail, so other people can make use of the normal maps feature! Anything from a blog post to a link dump would be really helpful.
I don't imagine outlines would have normals. One of the techniques I know of for generating outlines in 3D is to create a back-faced hull offset by the desired amount and disable lighting on those surfaces to get hard edges. No clue what that would look like if the normals were flipped for those (perhaps a reverse-embossed effect). It might be worth experimenting with; if they're not left completely unlit, they could alternatively be colored as the vector parallel to the camera (the monitor screen). That might produce something weird too, I don't really know to be honest xD
I currently use two systems to generate lights for testing purposes. The first is Godot 3.x's `Light2D` system, but that system breaks down due to render-order issues when compositing gets too complex, so I also have a custom lighting shader I intend to use for production. It currently works with 1 light, and you might be able to play around with it in whatever engine you're most comfortable with:
```glsl
shader_type canvas_item;

uniform bool enabled = true;
uniform sampler2D lightTexture;
uniform vec3 lightSource;
uniform vec4 color : hint_color = vec4(1.0);
uniform float width_ratio : hint_range(0.0, 1.0, 0.05) = 0.5;
uniform bool invert_green; // Used to convert DX-format <-> GL-format normal maps
uniform float specularStrength : hint_range(0.0, 5.0, 0.01);
uniform float energy : hint_range(0.0, 16.0, 0.1) = 1.0;
uniform float attenuation : hint_range(0.1, 32.0, 0.1) = 1.0;

const int DIRECTIONAL_LIGHT = 0;
const int POINT_LIGHT = 1;
const int SPOTLIGHT = 2;

void fragment()
{
    if (enabled) {
        // Retrieve the surface normal.
        vec3 n = textureLod(NORMAL_TEXTURE, UV, 0.0).rgb;
        if (invert_green) n.g = 1.0 - n.g;
        // Transform the normal vector to the range [-1, 1].
        n = normalize(n * 2.0 - 1.0);
        // Normalize the lighting offset from 2D coordinates to shader space.
        vec3 ls = lightSource;
        vec2 sz = TEXTURE_PIXEL_SIZE;
        vec2 sz2 = normalize(TEXTURE_PIXEL_SIZE).yx; // Aspect-ratio correction for non-square surfaces.
        ls.xy *= sz;
        ls.xy *= sz2;
        // Determine how far UV is from the light source.
        vec2 ratio = vec2(1.0 - width_ratio, width_ratio) * 2.0;
        float intensity = max(0.0, (1.0 - min(pow(distance(UV * sz2, ls.xy * ratio), 1.0 / attenuation), 1.0)) * energy);
        // float intensity = 1.0; // (debug)
        ls.xy -= UV * sz2; // Move the light source relative to the texel being sampled.
        ls.y = -ls.y;      // Flip the coordinate system for the Y axis.
        ls.xy *= ratio;
        ls = normalize(ls);
        // Apply Lambertian reflection.
        float reflectionCoef = dot(n, ls);
        vec3 shadeColor = color.rgb * reflectionCoef * color.a;
        vec3 specular = shadeColor * pow(max(reflectionCoef, 0.0), 32.0) * specularStrength;
        COLOR = texture(TEXTURE, UV);
        COLOR.rgb = max(COLOR.rgb, intensity * (specular + shadeColor * COLOR.rgb) + COLOR.rgb);
        // COLOR.rgb = vec3(distance(ls.xy, vec2(0.0))); // (debug)
    } else { /* disabled */
        COLOR = texture(TEXTURE, UV);
    }
}
```
The reason you might want to rotate the normals would be for when the base sprite is rotated at runtime. The normal textures in that case would rotate with the sprite, but since their colors are precomputed, the lighting would wrongly assume the canonical "up" axis in world space had also rotated. So it wouldn't just be helpful for different types of projection, but also for baking different normal maps for sprites that are expected to be displayed rotated at runtime.
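One way to handle that, sketched in Python; the function name and the counter-rotation direction are my assumptions, and the sign may need flipping depending on the engine's conventions. When a sprite is rotated at runtime by some angle, each baked normal can be decoded, rotated by the same angle around the screen's Z axis, and re-encoded:

```python
import math

def rotate_normal_z(n, sprite_angle):
    """Re-align an encoded normal for a sprite rotated at runtime.

    n is an (r, g, b) triple in [0, 1]; sprite_angle is in radians.
    This is a hypothetical sketch, not part of spotvox or the shader above.
    """
    # Decode from color space to [-1, 1].
    x, y, z = (2.0 * c - 1.0 for c in n)
    s, c = math.sin(sprite_angle), math.cos(sprite_angle)
    # Rotating the sprite by sprite_angle means its baked normals must be
    # rotated by the same angle around the screen's Z axis to stay correct
    # in world space (sign depends on handedness conventions).
    rx, ry = c * x - s * y, s * x + c * y
    # Re-encode to [0, 1] color space; z is unchanged by a Z-axis rotation.
    return tuple(0.5 * v + 0.5 for v in (rx, ry, z))
```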
Edit: Oh, I forgot, regarding shadows: I'll just be doing some simple tricks with masks to project them. I'm not super concerned about realistic cast shadows, so if sprites need a shadow to help the player identify their position, they can have one. I just liked how the shadows (or whatever shading spotvox uses to tint the voxels a different color) help it pop a little more. In my original post, the spinning head is actually completely unshaded, and the voxels themselves were painted with an assumption about the light source, similar to how most 2D sprite art is done.
I know it's not correct per se to apply realistic lighting as a post-process to that, but tons of indie games do it, and usually it's either very labor-intensive to draw the normal map, or the result ends up looking cheap because the sprites are assumed to be flat, or a program like SpriteLamp is used, which often produces results that look embossed or less than ideal. Voxel models have the potential to be a real game-changer for stuff like that in some use cases!
OK, I'm probably understanding some of this wrong, but I re-enabled roll/pitch/yaw rotation options. That can get sprites that look like:
If I rotated it how you had it on the green wall, there would still be soft shadows cast from the implicit light from the left.
I think I will push a pre-release next, so you can play with the different options and tell me what's working and what's completely broken, haha. I'll try to document some common combinations of options here, because I don't want them in the README.md file if the options are likely to change.
`-n 0.8` or `-n 1.0` both seem to make decent blurs on the normal map. `-n 0.5` seems too weak, and looks almost identical to `-n 0.0`. `-n -1.0` will avoid generating a normal map at all (the default).
`-R -90` or `-P -90` will rotate the model so its base faces against a northwest or northeast wall (not sure which goes against which wall). `--vertical-xy 2` will stretch the projection vertically, so instead of the top of a cube rendering as a wide rhombus, it renders as a 45-degree-rotated square.
If you want to try making some of these renders from your own models, you can try https://github.com/tommyettinger/spotvox/releases/tag/v0.0.7-b1 now. I hope it works!
Thanks! I'll see if I can give it a shot during the week and will let you know if I run into any issues.
Sorry it took so long to get back to you! It's not a bad effect, to be honest. It's less convincing when the light needs to be at a lower Y position than the sprite (perhaps the calculated height of the light matters), but it looks pretty good.
Left: Smoothing 0.5, Right: Smoothing 1.0
Woah! It looks like you figured out the distort parameters just fine, since you have it spinning from a side view, that's reassuring to me. And the smoothed version looks great from where I'm standing! Like it's still pixel/voxel art, but with advanced 3D lighting. I like how the light color affects the outline color, too. The smoothing looks really good at 1.0, and I wonder if higher values would look better or worse...
If you're intending to delve really deep into the asset production code, it might be worth taking a look at (and maybe forking) Isonomicon, which is what SpotVox is based on, and they share lots of code. I haven't brought over the normal maps yet, but the amount of code that required changes was rather small, and mostly copy+pasting code from other codebases of mine. Isonomicon can render everyday PNGs like SpotVox can, but it can also do some specialized rendering that allows preserving material info in each pixel of an image, and when rendering you can change the color entirely based on the palette and that material. I'm wondering if having the material info would be useful for supporting reflective objects. Materials can use lots of custom properties defined in code in Isonomicon, which I use to do things like make fire transition to smoke over an animation. The normal maps are based entirely on the height map/depth map, so that part should be unchanged between SpotVox and Isonomicon.
Somewhat related, I'm playing Triangle Strategy now on the Switch, and this looks like a higher-detail version of its art. With a lot more rotations, hooray!
Much appreciated! I don't know if this is the same material system MagicaVoxel uses, but that one seemed more like a state machine that I didn't look into much. While pretty cool, I probably wouldn't use most of its features -- with this test model, everything was originally modeled for fullbright use and relied on baked voxel colors from the presumed typical viewing angle, kind of like lo-fi AO, I guess.
Before filing this issue, I was a bit intimidated to look into the code much, because I originally intended to hack this feature in myself before requesting it, and I was a bit confounded by some of the systems used to map voxel coordinates to the final 2D projection 😰 But I might give Isonomicon a look. I still have yet to understand the coordinate system; I just set the XY to 0 to get the "straight on" look, and didn't try messing with any of the rotational params yet. You may have noticed that in my very first post, the axis the head rotates on in the live lighting demo is modified to sway toward 30-36° as it faces away from the camera and 0° as it faces toward it; that was a deliberate design choice, since his goofy hair covers up most of the facial detail at a full isometric perspective. I'll probably mess with the params more to fake this effect if I end up using this particular model in the final version (and choose to bake it into a sprite using spotvox).
I appreciate all the help, and if you feel this resolves the issue, feel free to close it.
Great, I'll close this. Feel free to ask any questions you have here or in Isonomicon's issues; I'll try to bring over the features I added for the purposes of this issue into Isonomicon.
Also, yes, there is a state machine in Isonomicon's materials, but it isn't the only thing they do -- I added a bunch of unusual properties like allowing negative emission (darkening, rather than lighting) and swirling between frames in a loop:
That's an approach that's really heavily targeted at palette swapping sprites to reduce the amount of assets a game needs, and is somewhat unorthodox since you "draw with materials" rather than "draw with colors." If you want your game to have switchable skin, hair, and clothing color for the same base model, this might be useful. It also acts a little like asset obfuscation if someone only has access to the textures pre-shading, since they don't look like they will in the final product.
When you have a demo or something out, I'd be happy to link to it as a "Games Using SpotVox" in a gallery or something on the main README. Cheers!
Hello!
Currently I'm working on a 2D fixed-viewpoint isometric project, and some months ago I was exploring voxels for sprite-based characters. I found spotvox and bookmarked it because it looked really helpful for exporting to 2D, but forgot about it after attempting to import and render voxels at runtime instead. Since my 3D knowledge and experience is extremely limited, I would still very much like to use baked 2D images for sprite data, and in the meantime I noticed some much-appreciated functionality added to spotvox for generating angles and turntables for voxel art.
In my engine, I have been testing using normal mapped tiles to simulate realtime lighting. I was also able to extend this to voxel models by having a render pass with a shader that displayed faces based on their normal. The result was somewhat blocky due to how a voxel is represented in-engine, but mitigated with a cheap smoothing technique (rendering at slightly higher resolution and then scaling down):
My request is for a slightly better option, if spotvox is able to support it: a one-to-one representation of each voxel as a single color representing the averaged contribution of the visible surface normals when viewed from the camera's position, using whatever scaling technique spotvox currently uses for upscaled versions to interpolate. I don't know how voxels are represented internally, so this may not be trivial to implement, but I'm assuming the result would be much more detailed than my attempt above, using any technique other than simply rendering voxels as cubes and shading the individual surface normals.
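The per-voxel averaged normal I have in mind could be estimated from exposed faces, something like this sketch (my illustration only, not spotvox's implementation; a real version would also weight by which faces the camera can actually see):

```python
import math

# The six axis-aligned face directions of a cube voxel.
FACE_DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def averaged_normal(filled, pos):
    """Average the normals of a voxel's exposed faces.

    filled is a set of occupied (x, y, z) positions. Returns None when the
    exposed-face normals cancel out (fully enclosed, or a lone voxel).
    """
    x, y, z = pos
    sx = sy = sz = 0.0
    for dx, dy, dz in FACE_DIRS:
        if (x + dx, y + dy, z + dz) not in filled:  # this face is exposed
            sx += dx
            sy += dy
            sz += dz
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    if length == 0.0:
        return None
    return (sx / length, sy / length, sz / length)
```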
Really appreciate this tool existing and the work you've put into it, and thanks for reading this whole thing :)
P.S. Any chance of being able to specify the camera pitch, for games using slightly different angles? (30° dimetric, variants of trimetric and oblique, etc.)