Closed · markusmoenig closed this 10 months ago
Hello!

`euc` is designed to roughly resemble OpenGL and similar APIs, so you'd generally take the same approach as you would there. For example, I cooked up the following diff, which adds fog to the `teapot.rs` example using the view depth (i.e. the depth prior to the final transformation by the projection matrix).
```diff
diff --git a/examples/teapot.rs b/examples/teapot.rs
index 02a751d..ac496bf 100644
--- a/examples/teapot.rs
+++ b/examples/teapot.rs
@@ -9,2 +9,4 @@ use vek::*;
+const FOG_COLOR: Rgba<f32> = Rgba::new(0.5, 0.5, 0.7, 1.0);
+
 struct TeapotShadow<'a> {
@@ -67,2 +69,3 @@ struct VertexData {
     light_view_pos: Vec3<f32>,
+    view_depth: f32,
 }
@@ -88,2 +91,3 @@ impl<'a> Pipeline for Teapot<'a> {
         let light_view_pos = light_view_pos.xyz() / light_view_pos.w;
+        let view_pos = self.v * wpos;
         (
@@ -94,2 +98,3 @@ impl<'a> Pipeline for Teapot<'a> {
                 light_view_pos,
+                view_depth: view_pos.z,
             },
@@ -105,2 +110,3 @@ impl<'a> Pipeline for Teapot<'a> {
             light_view_pos,
+            view_depth,
         }: Self::VertexData,
@@ -132,3 +138,5 @@ impl<'a> Pipeline for Teapot<'a> {
         let light = ambient + if in_light { diffuse + specular } else { 0.0 };
-        surf_color * light
+        let color = surf_color * light;
+
+        Lerp::lerp(color, FOG_COLOR, 1.0 - 1.0 / view_depth)
 }
@@ -165,3 +173,3 @@ fn main() {
         // Clear the render targets ready for the next frame
-        color.clear(0x0);
+        color.clear(u32::from_le_bytes(Rgba::new(FOG_COLOR.b, FOG_COLOR.g, FOG_COLOR.r, FOG_COLOR.a).map(|e| e * 255.0).as_().into_array()));
         depth.clear(1.0);
```
Note that this is not 'true' depth (although affine depth is still used by a lot of games, particularly older ones). True depth cannot be found through any affine transformation: it requires actually taking the euclidean distance between the fragment position and the camera position inside the fragment shader (i.e. `wpos.distance(self.camera_pos)`). Traditionally this has been quite an expensive operation to perform per-pixel, since it needs a dot product and a square root, so most older games skip it in favour of the affine approximation of depth. That said, the relative cost on modern CPUs is likely much smaller, so I imagine it wouldn't be much slower here.
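To make the difference concrete, here is a minimal sketch comparing the affine approximation (just the view-space z component) with the true euclidean distance. Plain arrays stand in for `vek`'s `Vec3`; the names are illustrative, not part of `euc`'s API.

```rust
// Euclidean distance between two points: a dot product plus a square root,
// which is the per-fragment cost the affine approximation avoids.
fn distance(a: [f32; 3], b: [f32; 3]) -> f32 {
    let d = [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt()
}

fn main() {
    let camera_pos = [0.0_f32, 0.0, 0.0];
    let wpos = [3.0_f32, 4.0, 12.0];

    // Affine approximation: the view-space z component alone.
    let affine_depth = wpos[2];

    // True depth: the full euclidean distance to the camera.
    let true_depth = distance(wpos, camera_pos);

    // The affine value underestimates depth for off-axis fragments.
    assert_eq!(true_depth, 13.0);
    assert!(true_depth >= affine_depth);
    println!("affine: {affine_depth}, true: {true_depth}");
}
```

The gap between the two grows towards the edges of the view frustum, which is why affine fog visibly 'rotates' with the camera in older games.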
I hope this is useful!
> Or is that better done via post processing and using the depth buffer?
This is also possible, yes (and can enable a lot of interesting effects, such as screen-space raycasting for volumetric fog: it's the technique we use in Veloren to render clouds and other volumetrics). I would probably not recommend it here, though: since `euc` does software rendering, there's a much tighter performance budget. Sticking to older-but-faster techniques will probably give you better results, unless you don't care about rendering performance (because you're rendering static images, say).
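For reference, the post-processing alternative could be sketched as a pass over the colour and depth buffers after rasterisation. The buffer layout and function names below are illustrative assumptions, not `euc`'s actual API.

```rust
const FOG_COLOR: [f32; 3] = [0.5, 0.5, 0.7];

// Post-process fog pass: blend each pixel towards the fog colour by an
// amount derived from the depth buffer (assumed normalised to [0, 1]).
fn fog_pass(color: &mut [[f32; 3]], depth: &[f32]) {
    for (px, &d) in color.iter_mut().zip(depth) {
        let fog = d.clamp(0.0, 1.0); // fully fogged at the far plane
        for i in 0..3 {
            px[i] = px[i] * (1.0 - fog) + FOG_COLOR[i] * fog;
        }
    }
}

fn main() {
    // Two pixels: one at the near plane, one at the far plane.
    let mut color = [[1.0, 0.0, 0.0]; 2];
    let depth = [0.0, 1.0];
    fog_pass(&mut color, &depth);
    assert_eq!(color[0], [1.0, 0.0, 0.0]); // near pixel untouched
    assert_eq!(color[1], FOG_COLOR);       // far pixel fully fogged
    println!("{color:?}");
}
```

The trade-off is a second full-buffer traversal per frame, which is exactly the sort of cost that bites harder in a software rasteriser than on a GPU.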
> (btw; any idea how to implement a path tracer using euc? Would be quite cool)
`euc`'s main 'trick' is rasterisation (i.e. turning lists of vertices into on-screen polygons made of pixels). I'm not sure how useful this specific feature would be for implementing a path tracer, since most path tracers require more abstract representations of surfaces. That said, the supporting infrastructure (the `Pipeline` trait, built-in parallelism, buffers, textures, etc.) might still be quite useful for this. In general, path tracers (or raytracers more generally) shove the entire rendering process into the per-pixel fragment shader, often just rendering a single quad over the entire screen and using some technique (such as SDFs) to render the graphics.
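The "whole renderer in the fragment shader" idea can be sketched as a loop over pixels that generates one camera ray each and shades by intersection. A single sphere stands in for the scene here; all names are illustrative, and none of this is `euc`'s API.

```rust
fn normalize(v: [f32; 3]) -> [f32; 3] {
    let l = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / l, v[1] / l, v[2] / l]
}

// Ray/sphere intersection for a normalised direction; returns the nearest
// root of the quadratic (may be negative if the sphere is behind the ray).
fn ray_sphere(origin: [f32; 3], dir: [f32; 3], center: [f32; 3], radius: f32) -> Option<f32> {
    let oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
    let b = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2];
    let c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius;
    let disc = b * b - c;
    if disc < 0.0 { None } else { Some(-b - disc.sqrt()) }
}

fn main() {
    let (w, h) = (8, 8);
    let mut hits = 0;
    for y in 0..h {
        for x in 0..w {
            // Map the pixel centre to [-1, 1] screen coordinates,
            // then fire a pinhole-camera ray looking down -z.
            let u = (x as f32 + 0.5) / w as f32 * 2.0 - 1.0;
            let v = (y as f32 + 0.5) / h as f32 * 2.0 - 1.0;
            let dir = normalize([u, v, -1.0]);
            if ray_sphere([0.0, 0.0, 0.0], dir, [0.0, 0.0, -3.0], 1.0).is_some() {
                hits += 1;
            }
        }
    }
    println!("{hits} of {} pixels hit the sphere", w * h);
}
```

A real path tracer would bounce each ray recursively and accumulate radiance, but the per-pixel structure stays the same, which is why the parallelism maps naturally onto a fragment-shader-style loop.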
Wow, thanks a lot for the fast response and the example. I'll try out both approaches. Eldiron runs at 4fps :) so performance is not really that tight. The ray caster I use right now is just too limited when it comes to transformations.
Re path tracing: for me it would mostly be triangle-based. I have a Rust-based BSDF path tracer in my experimental ForgedThoughts SDF programming language, but having a mesh-based version on the CPU would be great, especially as euc already has all the fundamentals for it. I don't know how much work a "ray -> triangle" based pipeline would be, though. I mean, it would just be the opposite: a shader which constructs rays that hit the scene, so the parallelism would come from the fragment shader iterating over the pixels.
Sorry, I am mixing things up; ray -> triangle intersection is something completely different from triangle rasterisation. My bad.
As a point of interest, the vector maths library I use in the examples for `euc` (vek) contains utilities for manipulating rays, including triangle intersection.
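The algorithm such utilities typically implement is Möller–Trumbore ray/triangle intersection. Here is a self-contained sketch with plain arrays; it is a generic illustration of the technique, not vek's actual API.

```rust
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]]
}
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 { a[0] * b[0] + a[1] * b[1] + a[2] * b[2] }
fn sub(a: [f32; 3], b: [f32; 3]) -> [f32; 3] { [a[0] - b[0], a[1] - b[1], a[2] - b[2]] }

/// Möller–Trumbore: returns the distance t along the ray on a hit.
fn ray_tri(orig: [f32; 3], dir: [f32; 3], v0: [f32; 3], v1: [f32; 3], v2: [f32; 3]) -> Option<f32> {
    let (e1, e2) = (sub(v1, v0), sub(v2, v0));
    let p = cross(dir, e2);
    let det = dot(e1, p);
    if det.abs() < 1e-8 { return None; } // ray parallel to the triangle plane
    let inv = 1.0 / det;
    let t_vec = sub(orig, v0);
    // Barycentric coordinates: reject points outside the triangle.
    let u = dot(t_vec, p) * inv;
    if !(0.0..=1.0).contains(&u) { return None; }
    let q = cross(t_vec, e1);
    let v = dot(dir, q) * inv;
    if v < 0.0 || u + v > 1.0 { return None; }
    let t = dot(e2, q) * inv;
    if t > 1e-8 { Some(t) } else { None } // hit must be in front of the ray
}

fn main() {
    // Ray from the origin down -z; triangle lies in the z = -2 plane.
    let t = ray_tri(
        [0.0, 0.0, 0.0], [0.0, 0.0, -1.0],
        [-1.0, -1.0, -2.0], [2.0, -1.0, -2.0], [-1.0, 2.0, -2.0],
    );
    assert_eq!(t, Some(2.0));
    println!("hit at t = {t:?}");
}
```

This per-triangle test is cheap, but without an acceleration structure every ray must check every triangle, which is the scaling problem raised below.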
Yes, ok, but without an acceleration structure it does not really make sense. Something like Embree in Rust would be great (at least the intersection part).
Rapier might provide the sort of utilities you're looking for in that regard.
Rapier is a physics engine? Ok, will look into it.
My bad, I'm thinking of Parry (which Rapier makes use of internally).
Ok, thanks, will look into it. It does not seem to be very well documented, though.
Hi,
I am looking into the refactor branch again to finally start using it for my retro RPG project. I really do not like using my ray caster; having real 3D is so much better.

My first question: what is the best way to apply progressive distance fog inside the fragment call, i.e. how do I get the distance to the camera? Or is that better done via post processing and using the depth buffer? Doing it in the fragment step would be quite a time saver, though.
Thanks
(btw; any idea how to implement a path tracer using euc ? Would be quite cool)