jalberse opened 1 year ago
I was operating with 100-500 samples per pixel, but the paper used 8 samples per pixel for most of its tables. They show that higher samples per pixel result in a lower % of interior nodes skipped.
Running with 8 spp, we still see a low percentage of true positive predictions. The paper measures "% interior node computations skipped," which might be a slightly different metric (I'll need to check), but I wouldn't expect an order-of-magnitude difference either way.
It could be that I'm missing importance sampling: my secondary rays scatter randomly rather than being biased towards lights and downweighted. I think this results in fewer co-located rays, which would reduce the number of true positive predictions - we would tend to see more rays with no prediction at all. Something like 8 rays per pixel also isn't nearly enough for my renderer, but it can be OK with importance sampling included. Maybe I should add that and see the effect.
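The change I have in mind is something like the mixture sampling from Ray Tracing: The Rest of Your Life. A minimal sketch of the idea, assuming hypothetical sampling closures (these are not shimmer APIs):

```rust
use rand::Rng;

/// Sketch of light-biased scattering with downweighting (mixture sampling).
/// `sample_light_dir` and `sample_material_dir` are hypothetical closures;
/// shimmer does not have these yet.
fn scatter_direction<R: Rng>(
    rng: &mut R,
    sample_light_dir: impl Fn(&mut R) -> [f64; 3],
    sample_material_dir: impl Fn(&mut R) -> [f64; 3],
) -> [f64; 3] {
    // Half the time aim at the light, half the time follow the material's own
    // distribution; the sample's contribution must later be divided by the
    // mixture pdf 0.5 * pdf_light(dir) + 0.5 * pdf_material(dir) to stay unbiased.
    if rng.gen_bool(0.5) {
        sample_light_dir(rng)
    } else {
        sample_material_dir(rng)
    }
}
```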
I have double-checked my hashing function and it is working as expected. So I suspect the lack of predictions is for some other reason.
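For reference, the hash is roughly this shape (a minimal sketch with illustrative names, not the exact shimmer code; the paper's version hashes f32 bits, ours currently works on f64):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Keep the sign, exponent, and top `precision_bits` mantissa bits of an f64
/// so that nearby values collapse to the same key.
fn quantize(x: f64, precision_bits: u32) -> u64 {
    let drop = 52 - precision_bits; // f64 has a 52-bit mantissa
    (x.to_bits() >> drop) << drop
}

/// Combine a ray's quantized origin and direction into a predictor-table key.
fn hash_ray(origin: [f64; 3], direction: [f64; 3], precision_bits: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    for &component in origin.iter().chain(direction.iter()) {
        quantize(component, precision_bits).hash(&mut hasher);
    }
    hasher.finish()
}
```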
(Six bits of precision)
target\release\shimmer.exe bunny --image-width 800 --aspect-ratio 1.0 1.0 --samples-per-pixel 8 --cam-look-from 278.0 278.0 -800.0 --cam-look-at 278.0 278.0 0.0 --cam-vertical-fov 40.0
Rendering tiles...
Done tracing. Writing to file... Done writing to file.
Statistics for BVH/Predictor BvhId(f59acb01-9939-4ea1-b2a9-a0c3067c0809)
Total rays into BVH::hit(): 31487080
True positive predictions: 109628
Ratio true positive: 0.003481682
False positive predictions: 3933676
Ratio false positive: 0.124929845
No predictions: 27443776
Ratio no predictions: 0.87158847
Table size (number entries): 2989785
Ah, okay, I know why we're getting such a low true positive rate: the original paper maps each hash to a set of predicted nodes, not a single predicted node. This really isn't clear from the paper itself; it's only apparent in their source code.
I will implement this and we should see a much higher true positive ratio, as we have more nodes to potentially check.
Their code does limit the number of nodes allowed in each predicted set, so we may need to do that too. They seem to limit it to 5.
Storing a set of predictions for each hash value also makes the paper's statements about the go-up-level make more sense: higher go-up-levels don't result in fewer entries in the predictor table, but rather in fewer predicted nodes per entry.
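Roughly, the table becomes something like this (a sketch with illustrative names, not shimmer's actual types):

```rust
use std::collections::HashMap;

/// Cap on predicted nodes per hash entry; the reference code appears to use 5.
const MAX_PREDICTED_NODES: usize = 5;

/// Predictor table keyed by ray hash, mapping to a *set* of candidate BVH
/// node indices rather than a single node.
#[derive(Default)]
struct Predictor {
    table: HashMap<u64, Vec<usize>>,
}

impl Predictor {
    /// On a confirmed hit, remember the node (at the configured go-up-level)
    /// under the ray's hash, up to MAX_PREDICTED_NODES per entry.
    fn record_hit(&mut self, ray_hash: u64, node_index: usize) {
        let nodes = self.table.entry(ray_hash).or_default();
        if !nodes.contains(&node_index) && nodes.len() < MAX_PREDICTED_NODES {
            nodes.push(node_index);
        }
    }

    /// Candidate nodes to try before falling back to a full root-down traversal.
    fn predict(&self, ray_hash: u64) -> Option<&[usize]> {
        self.table.get(&ray_hash).map(Vec::as_slice)
    }
}
```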
I've implemented sets of prediction nodes. The visual output looks identical, which is good.
We are seeing a much higher percentage of true positive hits (a prediction resulting in a hit), at 1.7%. False positives are at 30%, and rays with no prediction make up the rest of the rays entering the BVH. So, allowing each prediction to include a few different nodes has increased how often we skip traversals, but it's still relatively rare. The issue is almost certainly that I have low ray locality because I don't bias rays to sample lights (we bounce randomly). The paper reports similar results for scenes with low ray locality due to complicated lighting.
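For clarity, these are the three outcomes being tallied; a minimal sketch of the classification (illustrative types only, with the node intersection test passed as a closure so it's independent of shimmer's BVH):

```rust
/// The three outcomes tallied in the statistics.
enum PredictionOutcome {
    TruePositive,  // a predicted node contained the actual hit: upper traversal skipped
    FalsePositive, // a prediction existed, but none of its nodes produced a hit
    NoPrediction,  // the ray's hash had no entry in the predictor table
}

/// Classify one BVH::hit() call given the predicted node set (if any).
fn classify<F>(predicted_nodes: Option<&[usize]>, mut hits_node: F) -> PredictionOutcome
where
    F: FnMut(usize) -> bool,
{
    match predicted_nodes {
        None => PredictionOutcome::NoPrediction,
        Some(nodes) => {
            if nodes.iter().any(|&n| hits_node(n)) {
                PredictionOutcome::TruePositive
            } else {
                // Fall back to a full traversal from the root in this case.
                PredictionOutcome::FalsePositive
            }
        }
    }
}
```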
Bit precision 6 with a go-up-level of 0.
cargo run --release -- bunny --image-width 500 --aspect-ratio 1.0 1.0 --samples-per-pixel 100 --cam-look-from 278.0 278.0 -800.0 --cam-look-at 278.0 278.0 0.0 --cam-vertical-fov 40.0 > test.ppm
Compiling shimmer v0.1.0 (D:\projects\ray-tracing-in-one-weekend-in-rust)
Finished release [optimized] target(s) in 6.80s
Running target\release\shimmer.exe bunny --image-width 500 --aspect-ratio 1.0 1.0 --samples-per-pixel 100 --cam-look-from 278.0 278.0 -800.0 --cam-look-at 278.0 278.0 0.0 --cam-vertical-fov 40.0
Rendering tiles...
Done tracing. Writing to file... Done writing to file.
Statistics for BVH/Predictor BvhId(993a61d5-3fc8-4351-9a1b-3338350d762e)
Total rays into BVH::hit(): 153699481
True positive predictions: 2581598
Ratio true positive: 0.016796399
False positive predictions: 45531264
Ratio false positive: 0.29623562
No predictions: 105586619
Ratio no predictions: 0.6869679
Table size (number entries): 10026353
Render time: 160.0675361s
Pushing what I have to the branch
Might want to merge it since I've got extra stuff in here like triangular meshes, and HRPP is all optional.
But, there's more work to improve HRPP.
I'm going to merge the branch, since it's got triangle rendering etc, but leave this issue open since HRPP needs improvement. It's okay to merge IMO because (1) this repository is a toy and (2) HRPP is optional, so we just expect BVH users to not use HRPP yet.
Implementation of hash-based ray path prediction for acceleration structure traversal elision.
A preliminary step may be to change from an f64-based system to an f32-based system. That is more standard and efficient, and the HRPP paper uses f32 in its hash function (though we could develop one for f64). We could potentially make the renderer support either, but that may be overcomplicating things. I used f64 originally because I saw some visual artifacts when using f32 for very large objects. I think we can accept those artifacts, since we realistically render such large objects very rarely, and they could potentially be addressed in other ways.
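If we did want to support either precision, the least invasive option is probably a crate-wide alias, roughly like this (sketch only; the feature name is hypothetical, and shimmer currently hard-codes f64):

```rust
/// Crate-wide precision switch.
#[cfg(feature = "use-f64")]
pub type Float = f64;
#[cfg(not(feature = "use-f64"))]
pub type Float = f32;

/// Constants and math go through the alias so the rest of the renderer
/// doesn't care which precision was selected.
pub const PI: Float = std::f64::consts::PI as Float;
```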