Closed MarcusLoppe closed 9 months ago
so I actually preempted this, and the repository allows face edges to be precomputed and passed in. you just have to change the data_kwargs to include "face_edges" on both trainers, with an appropriate custom Dataset
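For reference, a minimal sketch of what such a custom Dataset could look like. The class name, the toy `derive_face_edges` helper, and the exact `data_kwargs` spelling are illustrative assumptions, not the repo's actual API:

```python
import torch
from torch.utils.data import Dataset

# Toy stand-in for the repo's derive_face_edges_from_faces (assumption:
# two faces are connected when they share two vertices, i.e. an edge).
def derive_face_edges(faces):
    edges = []
    n = faces.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            shared = set(faces[i].tolist()) & set(faces[j].tolist())
            if len(shared) == 2:
                edges.append((i, j))
    if not edges:
        return torch.empty(0, 2, dtype=torch.long)
    return torch.tensor(edges, dtype=torch.long)

class PrecomputedMeshDataset(Dataset):
    """Precomputes face edges once at construction time, so the trainer
    (configured to accept a "face_edges" key) can skip the per-step
    derivation."""

    def __init__(self, meshes):
        # meshes: list of (vertices, faces) tensor pairs
        self.items = [
            {"vertices": v, "faces": f, "face_edges": derive_face_edges(f)}
            for v, f in meshes
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]
```

A trainer configured with something like `data_kwargs = ["vertices", "faces", "face_edges"]` (assuming that spelling is accepted) would then consume the precomputed edges instead of deriving them on the fly.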
signing off for the holidays. merry xmas and see you in 2024!
Yes, I was thinking about it, but if I converted, let's say, 200 models @ 500 MB each to face edges, that's 100 GB.
But the transformer can't cache or use this, since it generates the mesh on the fly.
Happy holidays!
@MarcusLoppe ok final commit, now you can precompute the mesh codes. ok, signing off for real
@MarcusLoppe added a way to cache the derivation of the face edges through a simple decorator on the dataset class
should resolve this issue
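The idea behind such a caching decorator can be sketched like this (illustrative only, not the repo's actual decorator): memoize the expensive per-index result, so each dataset item pays the derivation cost only on first access.

```python
from functools import wraps

def cache_per_index(getitem):
    """Hypothetical sketch: wrap a Dataset's __getitem__ so each index is
    computed once and then served from an in-memory cache. The cache lives
    on the dataset instance itself."""
    @wraps(getitem)
    def wrapper(self, idx):
        cache = getattr(self, "_item_cache", None)
        if cache is None:
            cache = self._item_cache = {}
        if idx not in cache:
            cache[idx] = getitem(self, idx)  # expensive derivation happens here
        return cache[idx]
    return wrapper

# Usage sketch: decorate __getitem__ on a dataset whose items include the
# derived face edges.
class FakeDataset:
    calls = []

    @cache_per_index
    def __getitem__(self, idx):
        FakeDataset.calls.append(idx)  # track how often we actually compute
        return idx * 2                 # stand-in for the real derivation
```

Note this trades memory for speed in the opposite direction of precomputing to disk: nothing is stored between runs, but repeated epochs over the same items become cheap.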
So I'm trying to see what prevents training on high-poly-count meshes. I tried with 5k and 16k face-count meshes; below are the results.
Using a batch size of 1 at 5k faces, memory usage went up by 1.5 GB; when I switched to batch size 4, it went up to 4 GB, and when I loaded it on the GPU it increased by 6 GB.
The face_edges object that is returned has an actual usage of 1386.40 MB, so 3.4 GB is junk (at batch size 4 with 5k faces). I tried calling gc.collect(), but it made no change.
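As an aside on measuring this: a tensor's raw payload can be computed directly, which helps separate the object's true footprint from allocator caching and leftover intermediates. A generic PyTorch sketch, unrelated to the repo's internals:

```python
import torch

def tensor_mb(t: torch.Tensor) -> float:
    # Payload is bytes-per-element times element count. The process's
    # reported memory can be much larger than this, due to allocator
    # caching and temporaries created while deriving the tensor.
    return t.element_size() * t.nelement() / 1024**2

x = torch.zeros(1000, 1000)      # float32: 4 bytes per element
print(f"{tensor_mb(x):.2f} MB")  # → 3.81 MB
```

On the GPU side, `torch.cuda.memory_allocated()` reports what the caching allocator has actually handed out to live tensors, which is usually a more honest number than what nvidia-smi shows.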
I've tried to optimize the derive_face_edges_from_faces function but haven't had much luck. Currently it converts a batch of 1 in 0.43 s, so there is headroom to make it slower but more memory-efficient.
Making it slower might affect the transformer, since it needs to call the function at each step. Currently this looks like a big memory issue, and I hope someone better suited can resolve it. I'll try to see if there are other bottlenecks.
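One direction for trading speed against memory (a sketch of an alternative algorithm, not a drop-in replacement for the repo's derive_face_edges_from_faces): instead of an all-pairs F×F comparison, hash each undirected vertex edge to the faces that contain it, so peak memory stays roughly O(F) rather than O(F²), at the cost of Python-level looping.

```python
import numpy as np

def derive_face_edges_low_mem(faces):
    """faces: (F, 3) integer array of vertex indices per triangle.
    Returns an (E, 2) array of face-index pairs that share an edge
    (assumption: adjacency means sharing two vertices)."""
    edge_to_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            key = (min(u, v), max(u, v))       # undirected edge
            edge_to_faces.setdefault(key, []).append(fi)

    pairs = set()
    for face_ids in edge_to_faces.values():
        # faces listed under the same edge are adjacent to each other
        for i in range(len(face_ids)):
            for j in range(i + 1, len(face_ids)):
                pairs.add((face_ids[i], face_ids[j]))

    if not pairs:
        return np.empty((0, 2), dtype=np.int64)
    return np.array(sorted(pairs), dtype=np.int64)
```

Since this never materializes the pairwise comparison tensor, it could be a candidate for the precompute/cache path where throughput matters less than memory; whether the Python loop is fast enough for the transformer's per-step call is doubtful without vectorizing it.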