Open 4gt-104 opened 1 month ago
There is no good solution at the moment. If HAMi can read global configuration information through the webhook, it could set this parameter there. I'm not sure whether that is feasible; it needs to be tried.
You can modify the mutatingWebhookConfiguration in HAMi to add the env LIBCUDA_LOG_LEVEL=0 to GPU pods. By the way, do you have a WeChat or LinkedIn account?
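For context, the mutation would amount to something like the following on a GPU pod; this is only a minimal sketch, and the image and the nvidia.com/gpu resource name are assumptions that depend on your HAMi deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda:12.2.0-base-ubuntu22.04  # any CUDA-capable image
    command: ["sleep", "infinity"]
    env:
    # What the webhook would inject: HAMi-Core errors only.
    - name: LIBCUDA_LOG_LEVEL
      value: "0"
    resources:
      limits:
        nvidia.com/gpu: 1  # resource name assumed; adjust to your HAMi setup
```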
@archlitchi thanks for the reply, I will try to implement setting LIBCUDA_LOG_LEVEL during admission.
Unfortunately I don't have WeChat, but I do have a LinkedIn account.
I have reviewed the code and believe it can be easily implemented, but I have a concern regarding ArgoCD and GitOps. Overriding the pod spec, whether it's to modify the environment variable for visible CUDA devices or any other environment variable, would likely trigger an out-of-sync state.
@archlitchi what do you think?
I tested various scenarios, and the out-of-sync state is triggered only when bare pod manifests that set environment variables modifiable by the admission webhook are applied via ArgoCD. Given this, I think adding a note about it in the documentation and proceeding with the environment variable mutation approach would be the best solution.
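If we do add such a note, one mitigation it could mention (not something discussed above, just a sketch) is ArgoCD's ignoreDifferences pointed at the injected variable; the application name and the omitted source/destination fields below are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gpu-workload  # hypothetical application
spec:
  # ... source/destination omitted ...
  ignoreDifferences:
  - kind: Pod
    jqPathExpressions:
    # Ignore the env entry injected by the HAMi admission webhook.
    - .spec.containers[].env[] | select(.name == "LIBCUDA_LOG_LEVEL")
```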
I haven't tried submitting tasks with ArgoCD. I think we can add a field in values.yaml. Regarding the log level, it can be set to 2 (the default log level: errors, warns, and msgs), 0 (errors only), 3 (errors, warns, msgs, and infos), or 4 (errors, warns, msgs, infos, and debugs). We only patch the LIBCUDA_LOG_LEVEL env onto the container when the field is not set to 2.
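As a starting point, the values.yaml field could look something like this; the key name (devicePlugin.libcudaLogLevel) is only a hypothetical placeholder, not an existing chart value:

```yaml
# values.yaml (proposal; key name is a placeholder)
devicePlugin:
  # HAMi-Core log level to inject into GPU containers via the webhook:
  #   0 = errors only
  #   2 = default (errors, warns, msgs) -> no patch applied
  #   3 = errors, warns, msgs, infos
  #   4 = errors, warns, msgs, infos, debugs
  libcudaLogLevel: 0
```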
Please provide an in-depth description of the question you have:
I reviewed HAMI-Core and confirmed that the verbosity level can be reduced by setting the LIBCUDA_LOG_LEVEL environment variable. However, configuring this for every GPU pod can be tedious. Is there a way to set the verbosity level through HAMi's Helm chart or scheduler configuration instead?
What do you think about this question?: I believe the user should have easy access to configure this parameter, and it could be integrated with the already existing admission webhook. Additionally, I recommend setting the default HAMI-Core verbosity level to 0, ensuring consistent behavior with Nvidia's device plugin.
Environment: