These items are copied from a to-do list I've been keeping for this project:
Debugger Interface - We could use the minifb crate to show the progress of rendering each image. The tricky thing about this would be doing it with low enough overhead that it does not negatively impact rendering times. One idea I have is to send each rendered color and its pixel position to a channel. That way the only overhead is the send across the channel; any other updating/re-rendering can be done asynchronously. It would be neat to have the ability to stop rendering at any point (by hitting Esc or something) and the ability to drag and select a certain portion of the image to be re-rendered. To avoid overhead in cases where the debugger isn't needed, it might make sense to hide the entire debugger functionality behind a feature flag.
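A rough sketch of that channel idea, using std::sync::mpsc and the minifb window API; the function name, the message type, and how this sits behind the feature flag are all assumptions, not existing code:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

/// (x, y, rgb) for a single finished pixel.
type PixelMsg = (usize, usize, [u8; 3]);

/// Spawns the debug window on its own thread and hands back a Sender that the
/// render threads can clone and push finished pixels into.
fn spawn_debug_window(width: usize, height: usize) -> Sender<PixelMsg> {
    let (tx, rx) = channel::<PixelMsg>();
    thread::spawn(move || {
        let mut buffer = vec![0u32; width * height];
        let mut window =
            minifb::Window::new("render", width, height, minifb::WindowOptions::default())
                .expect("failed to open debug window");
        // Esc closes the window; in this sketch the renderer keeps going either way.
        while window.is_open() && !window.is_key_down(minifb::Key::Escape) {
            // Drain whatever pixels arrived since the last frame.
            for (x, y, [r, g, b]) in rx.try_iter() {
                buffer[y * width + x] = (r as u32) << 16 | (g as u32) << 8 | b as u32;
            }
            window
                .update_with_buffer(&buffer, width, height)
                .expect("failed to update debug window");
        }
    });
    tx
}
```

The renderer's only per-pixel cost is tx.send((x, y, rgb)); stopping the render on Esc and drag-to-reselect would need a second channel going back the other way.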
Torus Primitive - This was already mostly implemented in src/primitive/torus.rs; however, the code to generate the normals was never finished. We should finish this and also make sure to cite the code from this blog post.
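For reference, one way the normal computation could be finished, assuming the torus is centered at the origin around the y-axis; the names here are placeholders rather than what torus.rs actually uses:

```rust
/// Normal of a y-axis torus with major radius `major` at a surface point `hit`.
fn torus_normal(hit: [f64; 3], major: f64) -> [f64; 3] {
    // Project the hit point onto the xz-plane and scale it out to the torus's
    // center circle; the normal points from that circle point toward the hit.
    let len_xz = (hit[0] * hit[0] + hit[2] * hit[2]).sqrt();
    let scale = major / len_xz;
    let center = [hit[0] * scale, 0.0, hit[2] * scale];
    let n = [hit[0] - center[0], hit[1], hit[2] - center[2]];
    let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
    [n[0] / len, n[1] / len, n[2] / len]
}
```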
Material Recursive Base Case - The base case of the material code recursion should probably be to return the diffuse color, not the background color. After all, if we don't do this, a scene that depicts a closed room may still show hints of the background. We can test this with a scene containing a closed room of just mirrors (so that the recursive base case is always hit).
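A stripped-down illustration of where that base case lives; Color and Material here are stand-ins for the real types:

```rust
#[derive(Clone, Copy)]
struct Color { r: f64, g: f64, b: f64 }

struct Material { diffuse: Color }

fn shade(material: &Material, depth: u32, max_depth: u32) -> Color {
    if depth >= max_depth {
        // Proposed base case: fall back to the surface's own diffuse color
        // rather than the scene background, so a fully enclosed room never
        // shows hints of the background.
        return material.diffuse;
    }
    // ...recursive reflection/refraction shading as it exists today...
    material.diffuse
}
```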
Reporter - We can improve the reporter by adding information about the number of samples we are currently on, as well as which pixels are currently being rendered. We may also want to output some initial information about the render, such as the output file name, image dimensions, number of samples, etc.
InfinitePlane Optimization - The infinite plane really does not need to be a general plane (point, normal) because it is only ever used in axis-aligned cases. We can use the axis-aligned plane trick (where you already know one of the coordinates of intersection) to speed up the code. To avoid branching, we should have a separate infinite plane type for each normal direction. We need to make sure we account for the case where the ray is parallel to the plane, since that would mean dividing by zero. We can use the up-facing infinite plane in the finite plane primitive, and the planes for the other specific directions elsewhere. We should also replace all the code where we do this intersection optimization in an ad-hoc way (see the k-d tree, cube, cone, cylinder, bounding box?, etc.). We might even want to add a circle primitive to use in the cone and cylinder (and to just expose otherwise for anyone to use). This will be easy to do with the fast infinite plane and the circle equation x^2 + z^2 = 1.
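A sketch of the axis-aligned trick for the up-facing (y = k) case, plus the unit-circle check a cap could use; Ray here is a stand-in for the project's ray type:

```rust
struct Ray {
    origin: [f64; 3],
    dir: [f64; 3],
}

/// Ray parameter t of the hit with the plane y = k, if any.
fn intersect_plane_up(ray: &Ray, k: f64) -> Option<f64> {
    // A ray parallel to the plane would divide by zero below, so reject it.
    if ray.dir[1].abs() < 1e-12 {
        return None;
    }
    // One subtraction and one division, no dot products; the x and z
    // coordinates of the hit are just origin + t * dir when needed.
    let t = (k - ray.origin[1]) / ray.dir[1];
    if t > 0.0 { Some(t) } else { None }
}

/// Cap test for a cone/cylinder: does the hit satisfy x^2 + z^2 <= 1?
fn hits_unit_circle(ray: &Ray, t: f64) -> bool {
    let x = ray.origin[0] + t * ray.dir[0];
    let z = ray.origin[2] + t * ray.dir[2];
    x * x + z * z <= 1.0
}
```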
Texture Mapping Convention - We use a u-axis-right and v-axis-down convention, but we should really be using a v-axis-up convention. The spherical coordinates derivation is actually incorrect: theta should be going in the opposite direction, which would give the right v-axis-up convention. Try to do the derivation again and make sure you can justify the direction of theta and phi. Also, using negative z is probably wrong in the phi derivation. We will need to fix the other primitives' texture mapping implementations to match the new convention.
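For comparison while redoing the derivation, here is one widely used v-axis-up mapping for the unit sphere; this is only a reference point, not necessarily the convention the code should adopt:

```rust
use std::f64::consts::PI;

/// UV coordinates of a point (x, y, z) on the unit sphere, with v increasing
/// from the bottom pole (v = 0) to the top pole (v = 1).
fn sphere_uv(x: f64, y: f64, z: f64) -> (f64, f64) {
    let theta = (-y).acos();      // polar angle, 0 at the bottom pole
    let phi = (-z).atan2(x) + PI; // azimuth in [0, 2*pi)
    (phi / (2.0 * PI), theta / PI)
}
```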
Normal Map Transforms - Normals need to be transformed on their way back up the scene hierarchy; do normals read from a normal map also need to be transformed (e.g. by normal_trans)? It would make some sense to do this given that you are switching coordinate systems, but figuring out the details should be interesting.
Organize Assets Directory - Right now it is hard to tell which assets are used by which examples. We should organize the assets in folders based on the example that they correspond to and maybe duplicate the ones shared by multiple examples. Scripts in the examples/ directory that are failed attempts at scenes should be deleted. Might want to leave in the castle example but delete the maze around it.
TODO comments - There are many TODO comments throughout the codebase. They should be resolved at some point.