Closed — jia-heng closed this issue 4 months ago
Hi @jia-heng,
Regarding 1:
The total memory cost of our representation includes both the network's weights and the BVH nodes. Those are the only two things required at inference, and you can see their respective costs under the Input Encoding tab for the network and the Stats tab for the nodes, as you already noticed.
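As a back-of-the-envelope illustration of that cost breakdown, here is a small sketch. The byte sizes (float32 weights, a fixed 32 bytes per BVH node) and the example counts are assumptions for illustration, not the app's actual memory layout:

```python
# Sketch: total inference memory = network weights + BVH nodes.
# bytes_per_weight / bytes_per_node are illustrative assumptions,
# not the real storage format used by the demo app.

def estimate_memory_bytes(num_weights: int, num_nodes: int,
                          bytes_per_weight: int = 4,
                          bytes_per_node: int = 32) -> int:
    """Combined footprint of the network and the BVH node array."""
    return num_weights * bytes_per_weight + num_nodes * bytes_per_node

# Example: a small network plus a node count like the one in the Stats tab.
total = estimate_memory_bytes(num_weights=500_000, num_nodes=25_249)
print(f"{total / 1024**2:.2f} MiB")
```

Only these two terms matter at inference, which is why the demo's reported memory tracks the node count once the network size is fixed.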
Regarding 2:
The LoD mechanism is deactivated by default, but it can be activated in Developer mode, which you find under File -> Developer mode. A new LoDs tab will appear in which you can set the number of LoDs that are learned during the optimization (see below). Once this is greater than 1, you will see the node count for each LoD in the Stats tab as well.
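To illustrate what the per-LoD node counts look like once more than one LoD is learned, here is a hedged sketch (the counts and the single-node flagging are hypothetical, mirroring the numbers reported in the question below rather than any real Stats-tab output):

```python
# Sketch: tabulate hypothetical per-LoD node counts and flag levels
# that contain only a single node, as in the asker's screenshot.

def summarize_lods(node_counts: list[int]) -> list[str]:
    lines = []
    for level, count in enumerate(node_counts):
        note = "" if count > 1 else "  <- only 1 node"
        lines.append(f"LoD {level}: {count} node(s){note}")
    return lines

for line in summarize_lods([1, 1, 1, 1, 1, 25_249]):
    print(line)
```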
Hope it helps :) Philippe
I am very interested in your work, but I have some doubts that I hope you can help clarify.
When you mention memory savings in the document, are you referring to the runtime memory rather than the storage space occupied by the 3D scene data? The memory usage shown in the demo app only correlates with the number of nodes.
I have exported a chess scene as a GLTF file using Blender and trained it with the configuration provided in the demo. However, for the n-BVH's LoD nodes, as shown in the attached image, only the sixth layer has 25,249 nodes, while all other layers have just 1 node. Is there some configuration that I might have overlooked?