fridex opened this issue 2 years ago (status: Open)
/triage accepted
/priority important-longterm
worth checking the impact on ODH
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale
/remove-lifecycle stale
This might be easy to achieve and can reduce maintenance cost.
/help
@goern: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
/sig devsecops
After working on https://github.com/thoth-station/adviser/pull/2411#issuecomment-1385571660 I'm starting to wonder whether we should involve Thoth directly in the container build at all, and instead use it as a "lockfile updater" rather than an installer.
Some reasons:
I might be missing some context, though, on why integrating Thoth advise inside the s2i process is more valuable. To sum it up, I see Thoth more in the role of a pipenv update than of a python -m build.
@goern @codificat @harshad16 @KPostOffice @Gkrumbach07 wdyt?
The initial thought was to provide as many integration points as possible (given that a user trusts the system), and the following was derived:
With the s2i image, the idea was that trusting Thoth advice would fix package issues during build time, though this wasn't used much. It was also meant to cover cases like GPU-enabled images, where the developer's local build might not have required a GPU. However, GPUs are required for cluster images, so Thoth advice could be helpful in those scenarios, where the environment dictates the package installation.
In the case of GPUs (or any accelerator / custom resource) in a cluster, the image would not be built on the same node it runs on, would it? I think the matching would rather occur at the scheduling stage (taking a k8s cluster as the model), using limits on custom resources (see https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/).
Do some Python AI/ML packages require a GPU/accelerator at build time?
Suggestion:
Is your feature request related to a problem? Please describe.
As suggested by @frenzymadness, our tooling could be simplified. Instead of creating a patch, we could maintain our own assemble script and propagate it to containers.
Describe the solution you'd like