KarimJedda closed this issue 2 weeks ago.
One small note if anyone attempts this.
This seems to be required regardless; I don't know how the compute_35
gencode flags get into the CMake files:
sed -i 's/-gencode arch=compute_35,code=sm_35//g' /gaussian-splatting-cuda/build/external/CMakeFiles/simple-knn.dir/flags.make
sed -i 's/-gencode arch=compute_35,code=sm_35//g' /gaussian-splatting-cuda/build/CMakeFiles/testing.dir/flags.make
sed -i 's/-gencode arch=compute_35,code=sm_35//g' /gaussian-splatting-cuda/build/CMakeFiles/gaussian_splatting_cuda.dir/flags.make
Doing so lets you build properly on a "factory reset" machine.
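If the compute_35 flags do come from libtorch (as suspected below), a cleaner fix than patching the generated flags.make files might be to constrain the architecture list before configuring. This is only a rough sketch, untested against this repo, and whether these variables are honored depends on how the project's CMakeLists wires up libtorch:

# TORCH_CUDA_ARCH_LIST is read by libtorch's CMake code when it assembles gencode flags.
export TORCH_CUDA_ARCH_LIST="7.0;7.5;8.6"
# CMAKE_CUDA_ARCHITECTURES is the standard CMake variable for native CUDA targets.
cmake -B build -DCMAKE_CUDA_ARCHITECTURES="70;75;86"
cmake --build build -j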
That sounds like a great plan. Having the software run in the cloud would be very useful. Adding a Dockerfile isn't a massive addition, but it delivers immediate value. So if you want to take this on, you have my full support.
On the architecture flags, the source might be libtorch. I'm curious, though, how this ends up in your CMake build; I haven't seen it in my builds.
The midterm goal is to remove libtorch as a dependency entirely, which would also resolve issues like this. I'd probably keep it only for writing tests, so that outputs can be verified more easily. That helps tremendously when you tweak tensors and apply optimization routines.
I think this can be closed after one year :)
Sounds good to me, what a journey :)
I'm running this right now in a Docker container on a hosted GPU provider. It should be possible to build a Docker image that encapsulates all the dependencies and is runnable as-is.
The catch, however, is that this provider doesn't give me the kind of access to the host VM that would let me push the Docker image to a container registry for convenience. I tried building it locally, but the build also requires a GPU.
The idea here would be to split the image into two stages, a builder and a runner, roughly as sketched below:
Dockerfile
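(Sketch only: the base image tags, package list, and the binary name gaussian_splatting_cuda are assumptions and would need to match the project's actual setup.)

# builder stage: full CUDA toolkit so nvcc is available at compile time
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
        git cmake build-essential \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY . .
RUN cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j"$(nproc)"

# runner stage: only the CUDA runtime libraries, a much smaller image
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04 AS runner
COPY --from=builder /src/build/gaussian_splatting_cuda /usr/local/bin/gaussian_splatting_cuda
ENTRYPOINT ["/usr/local/bin/gaussian_splatting_cuda"]

With this split, the heavy devel image with nvcc and headers never ships; only the runtime stage does.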
I believe this would simplify both development and inference.
For building:
DOCKER_BUILDKIT=1 docker build -t gaussiansplat:0.0.1 -f Dockerfile .
and for running, something along the lines of (subject to tweaking):
docker run --gpus all -v /tmp/dataset:/dataset -v /tmp/output:/output gaussiansplat:0.0.1 /dataset/tandt/truck
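Note that --gpus all assumes the NVIDIA Container Toolkit is installed on the host; without it, the container won't see the GPU at runtime.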
For now I'm putting this here until my GPUs and computer parts get delivered and I can try it in a more controlled environment. Until then, this could be a good first issue.