Would it be possible to implement code that allows training for a mesh output instead of "dead" 256×256 pixel chips?
A 3D mesh could be scaled and edited in mainstream software. Quality and detail would then lie in the texture rather than in the image resolution (where the current software seems stuck, even on very expensive GPUs).
If a tool like this could output an FBX sequence of the result, it would truly be a game changer.
I really don't know much about neural networks; is there something like a 3D autoencoder?
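For what it's worth, mesh autoencoders do exist in the research literature (e.g., CoMA by Ranjan et al., and parametric head models like FLAME). Below is a minimal sketch of the basic idea, assuming a fixed mesh topology so the network only has to predict vertex positions, with faces, UVs, and texture kept separate. All class names, layer sizes, and the vertex count are illustrative, not taken from any existing tool:

```python
import torch
import torch.nn as nn

class MeshAutoencoder(nn.Module):
    """Toy autoencoder over a fixed-topology mesh.

    Input is the flattened (N, 3) vertex positions; connectivity (faces)
    and UVs stay constant, so only vertex geometry is learned. Detail
    would live in the texture, exactly as suggested above.
    """
    def __init__(self, num_vertices: int, latent_dim: int = 128):
        super().__init__()
        in_dim = num_vertices * 3
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )

    def forward(self, verts: torch.Tensor) -> torch.Tensor:
        # verts: (batch, N, 3) -> reconstructed (batch, N, 3)
        b, n, _ = verts.shape
        z = self.encoder(verts.reshape(b, -1))
        return self.decoder(z).reshape(b, n, 3)

# Illustrative usage: 5023 vertices matches the FLAME head mesh.
model = MeshAutoencoder(num_vertices=5023)
verts = torch.randn(4, 5023, 3)          # stand-in for real training meshes
recon = model(verts)
loss = nn.functional.mse_loss(recon, verts)
```

Because the topology never changes, each decoded frame could in principle be exported as one keyframe of an FBX vertex animation; published mesh autoencoders replace the plain linear layers above with graph or spiral convolutions over the mesh connectivity, but the encode-to-latent, decode-to-vertices structure is the same.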