adriangb opened this issue 2 months ago
> What's the typical mechanism that you'd use to ensure that those two layers use the same TensorFlow version?
Currently it's a PITA with all package managers that I know of that support monorepos.
Hence I think something like:
```dockerfile
FROM python:3.12-slim-bullseye AS deps
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
COPY uv.lock .
RUN --mount=type=cache,target=/root/.cache/uv \
    uv cache populate

FROM python:3.12-slim-bullseye AS service
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
COPY . /app
RUN --mount=from=deps,source=/root/.cache/uv,target=/root/.cache/uv \
    uv install --package service
```
would be nice.
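For what it's worth, something close to this can be approximated today with a BuildKit cache mount shared across builds on the same builder. The sketch below assumes a uv workspace with a `service` member and a checked-in `uv.lock`, and that `uv sync --frozen --package` behaves the way I expect for workspaces (treat the exact flags as an assumption, not a confirmed recipe):

```dockerfile
# Sketch only: uses BuildKit's type=cache mount. That cache is shared between
# builds on the same builder, but it is NOT exported with --cache-to/--cache-from,
# so it doesn't travel through a registry the way an image layer would.
FROM python:3.12-slim-bullseye AS service
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app
COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --package service
```

Because the cache mount lives on the builder rather than in an addressable layer, an explicit "populate the cache into a layer" step would still be useful for CI setups that rely on registry-backed layer caching.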
I'd say this is similar in vein/spirit to https://github.com/astral-sh/uv/issues/1681 / https://github.com/astral-sh/uv/issues/3163, as populating the cache will involve downloading and possibly a build step, which is a common workflow achieved today with `pip wheel` or `pip download`.
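For comparison, here's roughly what that pre-build-then-install pattern looks like with pip today (a sketch; `requirements.txt` is a stand-in for whatever shared pins the services use):

```dockerfile
# Build every wheel once in a throwaway stage...
FROM python:3.12-slim-bullseye AS wheels
COPY requirements.txt .
RUN pip wheel --wheel-dir /wheels -r requirements.txt

# ...then install each service offline from that wheel directory, so nothing
# gets downloaded or compiled a second time.
FROM python:3.12-slim-bullseye AS service
COPY requirements.txt .
RUN --mount=from=wheels,source=/wheels,target=/wheels \
    pip install --no-index --find-links /wheels -r requirements.txt
```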
It would be nice to have the ability to pre-cache dependencies. In particular, I would find this useful in a monorepo context to:

1. populate the cache in a dedicated Docker layer, and
2. use `--cache-from` or similar from that cache layer and tell uv to use the cache that was built in that layer.

This is important because if I have two services whose dependencies both include `tensorflow`, and I don't have a "common" step where I download `tensorflow`, I'd end up downloading it twice (once for each service), which is wasteful. Yes, you can parallelize it, but IMO efficient serial >> inefficient parallel.

Alternatively, something like https://github.com/python-poetry/poetry/issues/5983 and functionality in uv to populate that folder of wheels could do the trick.
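The consume side of that alternative already works through uv's pip interface; the missing piece is having uv populate the wheel folder from `uv.lock` in the first place. A sketch, assuming a `wheels/` directory in the build context (filled today by `pip wheel` or by hand) and an exported `requirements.txt`:

```dockerfile
# Install strictly from a local folder of wheels, with no network access.
# Producing ./wheels from uv.lock is the part uv can't do yet.
FROM python:3.12-slim-bullseye AS service
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
COPY requirements.txt .
COPY wheels/ /wheels/
RUN uv pip install --system --no-index --find-links /wheels -r requirements.txt
```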