Project-HAMi / HAMi

Heterogeneous AI Computing Virtualization Middleware
http://project-hami.io/
Apache License 2.0

Is there a way to reduce the HAMI-Core verbosity level for workloads? #544

Open 4gt-104 opened 1 month ago

4gt-104 commented 1 month ago

Please provide an in-depth description of the question you have:

I reviewed the HAMI-Core code and confirmed that the verbosity level can be reduced by setting the LIBCUDA_LOG_LEVEL environment variable. However, configuring this for every GPU pod is tedious.

Is there a way to set the verbosity level through HAMI’s Helm chart or scheduler configuration instead?
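For context, the per-pod workaround looks roughly like this (a sketch: LIBCUDA_LOG_LEVEL comes from HAMi-Core, everything else is a generic example pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload            # hypothetical example pod
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    env:
    - name: LIBCUDA_LOG_LEVEL   # HAMi-Core log verbosity; must be repeated in every pod today
      value: "0"                # 0 = errors only
    resources:
      limits:
        nvidia.com/gpu: 1       # scheduled through HAMi's device plugin
```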

What do you think about this question?:

I believe the user should have easy access to this parameter, and it could be integrated into the already existing admission webhook. Additionally, I recommend setting the default HAMI-Core verbosity level to 0, for consistency with NVIDIA's device plugin.

Environment:

wawa0210 commented 1 month ago

There is no good solution at the moment.

Perhaps HAMi could read global configuration through the webhook and set this parameter there. I'm not sure whether that is feasible; it needs to be tried.

archlitchi commented 1 month ago

You can modify the MutatingWebhookConfiguration in HAMi to add the env LIBCUDA_LOG_LEVEL=0 to GPU pods. By the way, do you have a WeChat or LinkedIn account?
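In patch terms, that mutation would boil down to a JSONPatch entry like the following (a sketch of the idea, not HAMi's actual webhook response; the container index is hardcoded for illustration):

```yaml
# Sketch of the JSONPatch operation such a webhook could emit.
# Assumes the container already has an env array; a real mutator
# must also handle pods where .env is absent.
- op: add
  path: /spec/containers/0/env/-
  value:
    name: LIBCUDA_LOG_LEVEL
    value: "0"
```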

4gt-104 commented 1 month ago

@archlitchi thanks for the reply. I will try to implement setting LIBCUDA_LOG_LEVEL during admission. Unfortunately I don't have WeChat, but I do have a LinkedIn account.

4gt-104 commented 1 month ago

I have reviewed the code and believe it can be easily implemented, but I have a concern regarding ArgoCD and GitOps. Overriding the pod spec, whether it's to modify the environment variable for visible CUDA devices or any other environment variable, would likely trigger an out-of-sync state.

@archlitchi what do you think?

4gt-104 commented 1 month ago

I tested various scenarios, and the out-of-sync state is triggered only when bare pod manifests are applied via ArgoCD with environment variables already set that the admission webhook then modifies. Given this, I think the best solution is to add a note about it in the documentation and proceed with the environment-variable mutation approach.
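For anyone who hits this, the usual ArgoCD mitigation would be an ignoreDifferences rule on the Application (a sketch assuming the webhook injects an env var named LIBCUDA_LOG_LEVEL; the Application name is hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gpu-app               # hypothetical Application name
spec:
  # source, destination, and project omitted for brevity
  ignoreDifferences:
  - group: ""                 # core API group (Pods)
    kind: Pod
    jqPathExpressions:
    # ignore the env entry injected by the HAMi admission webhook
    - .spec.containers[].env[] | select(.name == "LIBCUDA_LOG_LEVEL")
```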

archlitchi commented 1 month ago

I haven't tried submitting tasks with ArgoCD. I think we can add a field in values.yaml for the log level. It can be set to 2 (the default: errors, warns, and msgs), 0 (errors only), 3 (errors, warns, msgs, and infos), or 4 (errors, warns, msgs, infos, and debugs). We would only patch the LIBCUDA_LOG_LEVEL env onto the container when the value is not set to 2.
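Concretely, that might look like the following values.yaml entry (the field name and placement are assumptions, not the current chart schema):

```yaml
# Hypothetical addition to the HAMi chart's values.yaml
libcudaLogLevel: 2   # 0 = errors only; 2 = default (errors, warns, msgs);
                     # 3 = + infos; 4 = + debugs
```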