league opened 11 months ago
Patch and project coverage have no change. Comparison is base (0bab382) 65.39% compared to head (2c73d15) 65.39%.
Oh look, it's our good friend the "test_matmul_aa_ci8" test failure (#187).
For the .gpu build, could we just specify all supported SMs, and then set a reasonable default for the shared memory per SM? I think those are the only two things that require a working GPU.
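If configure accepts explicit values for these two things, the Dockerfile could pin them instead of probing the GPU at build time. A minimal sketch (the flag names and values below are assumptions for illustration; check `./configure --help` for what the script actually accepts):

```shell
# Sketch: configure without a working GPU by pinning the two probed values.
# --with-gpu-archs / --with-shared-mem are assumed flag names, not verified.
./configure --with-gpu-archs="60 61 70 75 80 86" --with-shared-mem=49152
make -j"$(nproc)"
```

Listing every SM we intend to support costs only compile time, and 48 KB of shared memory per SM is the conservative baseline across those architectures.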
Should we turn the docker builds into CI tests? Maybe only in master?
I'm mixing another idea into this Dockerfile PR, but it's minor stuff about test conditions that I encountered while figuring out the prerequisites in docker.
Re: building with Dockerfile.gpu: yes, I think specifying them to configure will work; I'll try it. It's similar to what we need with Nix, where the hermetic build means we can't directly query hardware capabilities.
Re: building docker in CI: I wasn't sure whether docker build works within docker, but I guess it might. It may be worthwhile if we want to keep supporting these files. (They are still mentioned in the README, and I still find them convenient for testing and playing around.)
Maybe this could help with the docker stuff: https://github.com/marketplace/actions/build-and-push-docker-images
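A sketch of what a build-only CI job using that action could look like (this workflow is an assumption, not part of this PR; the Dockerfile name is taken from the discussion above):

```yaml
# Sketch: CI smoke test that only builds the Dockerfile, without pushing.
name: docker-build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          file: Dockerfile_prereq.gpu
          push: false   # build-only; no registry credentials needed
```

Restricting it to master (as suggested above) keeps the slow image builds off every PR.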
Maybe we should go up to Jammy on this.
The nvidia/cuda:10.2 image disappeared from the hub, so the GPU versions in master were not buildable anymore. Also, the two that build bifrost were not yet using the configure script.
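Since unpinned or retired tags can vanish from Docker Hub (as 10.2 did), the fix is to move to a tag that still exists; a Jammy-based sketch (the exact tag is an assumption; check the nvidia/cuda page on Docker Hub for currently published tags):

```dockerfile
# Sketch: replace the vanished 10.2 base with a still-published CUDA tag.
# Tag is illustrative; verify availability on hub.docker.com/r/nvidia/cuda.
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
```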
However, the full-build .gpu version doesn't quite work, because nvidia-docker seems to provide access to the GPU during the run phase but not during the build phase. Perhaps relevant: https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime

The _prereq.gpu version builds fully and is still helpful. Also pinging PR #92, which is outdated but relevant.