Closed · camigira closed this 1 year ago
The default prompt and the Kubernetes messages are both in English. If no --language
argument is given, I believe the LLM will always reply in English? Is there no way to show the response in another language, like French or Chinese?
Or did you mean to configure the language globally somewhere, e.g. when doing k8sgpt auth,
and save it into ~/.config/k8sgpt/k8sgpt.yaml?
👋 @panpan0000 Thanks for replying! Let me elaborate. I can see two distinct features related to language:
1. Specify a language other than English via the --language flag (currently supported). I agree that the ability to set the language as a global setting would be convenient.
2. Multi-language (what I'm proposing): automatically infer the language to use from the error message itself. In other words, if the error message is in Japanese, the default explanation should be provided in Japanese.
Both features are complementary, not mutually exclusive.
Feel free to try these examples in ChatGPT by copying and pasting them:
Current: Specify Language via flag
Simplify the following Kubernetes error message delimited by triple dashes written in --- French --- language; --- Error creating deployment: pods 'pod-name' is forbidden: error updating deployment ---.
Provide the most possible solution in a step by step style in no more than 280 characters. Write the output in the following format:
Error: {Explain error here}
Solution: {Step by step solution here}
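The current flag-based behaviour boils down to interpolating two values, the language and the error, into a fixed English template. A minimal sketch (the template wording follows the example above; the function name is hypothetical, not k8sgpt's actual code):

```python
# Sketch of the current flag-based approach: the --language value and the
# raw error message are interpolated into a fixed English template.
# Template wording follows the example above; names are illustrative only.
CURRENT_PROMPT = (
    "Simplify the following Kubernetes error message delimited by triple "
    "dashes written in --- {language} --- language; --- {error} ---. "
    "Provide the most possible solution in a step by step style in no "
    "more than 280 characters."
)

def build_prompt(language: str, error: str) -> str:
    return CURRENT_PROMPT.format(language=language, error=error)

print(build_prompt(
    "French",
    "Error creating deployment: pods 'pod-name' is forbidden: "
    "error updating deployment",
))
```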
Proposed: Multi-Language leveraging LLM
Spanish
Simplify the following Kubernetes error message delimited by triple dashes using the same language provided; --- Error al crear el despliegue: se prohíbe el pod 'nombre-del-pod': error al actualizar el despliegue ---.
Provide the most possible solution in a step by step style in no more than 280 characters using the same language as the error. Write the output in the following format:
Error: {Explain error here}
Solution: {Step by step solution here}
French
Simplify the following Kubernetes error message delimited by triple dashes using the same language provided; --- Erreur lors de la création du déploiement : pods 'nom-du-pod' est interdit : erreur de mise à jour du déploiement ---.
Provide the most possible solution in a step by step style in no more than 280 characters using the same language as the error. Write the output in the following format:
Error: {Explain error here}
Solution: {Step by step solution here}
English
Simplify the following Kubernetes error message delimited by triple dashes using the same language provided; --- Error creating deployment: pods 'pod-name' is forbidden: error updating deployment ---.
Provide the most possible solution in a step by step style in no more than 280 characters using the same language as the error. Write the output in the following format:
Error: {Explain error here}
Solution: {Step by step solution here}
Japanese
Simplify the following Kubernetes error message delimited by triple dashes using the same language provided; --- デプロイメントの作成エラー:ポッド 'ポッド名' は禁止されています:デプロイメントの更新エラー ---.
Provide the most possible solution in a step by step style in no more than 280 characters using the same language as the error. Write the output in the following format:
Error: {Explain error here}
Solution: {Step by step solution here}
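The proposed variant simply drops the language input from the template and instructs the model to mirror the language of the error itself (again a hypothetical sketch, not the project's code):

```python
# Sketch of the proposed multi-language approach: no language input at
# all; the model is told to answer in the same language as the error.
PROPOSED_PROMPT = (
    "Simplify the following Kubernetes error message delimited by triple "
    "dashes using the same language provided; --- {error} ---. "
    "Provide the most possible solution in a step by step style in no "
    "more than 280 characters using the same language as the error."
)

def build_prompt(error: str) -> str:
    return PROPOSED_PROMPT.format(error=error)

# The same template works unchanged for any input language:
for msg in (
    "Error creating deployment: pods 'pod-name' is forbidden",
    "Error al crear el despliegue: se prohíbe el pod 'nombre-del-pod'",
):
    print(build_prompt(msg))
```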
I don't think we can rely on the language being inferred from the error message. For example, if my native tongue is German and I am debugging a cluster in English (which maybe I do not know so well), then perhaps I'd prefer the explanation in my native language.
I'm using the following language parameter to control the output language:
--language 'english (MUST: Reply in Simplified Chinese for everything except for certain terms and symbols, including the given format text strings. For example, "Error:" must be replaced by "错误信息:", and "Solution:" must be replaced by "解决方案:")'
Ref: https://datatracker.ietf.org/doc/html/rfc2119 (Thanks @weiqiangt letting me know this)
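This workaround appears to rely on the flag value being interpolated verbatim into the prompt, so the extra MUST-style instructions embedded in --language ride along to the model. A hypothetical sketch of what the final prompt ends up containing (the template wording and names are illustrative, not k8sgpt's actual code):

```python
# Sketch of why the --language override works: the flag value is pasted
# verbatim into the prompt template, extra instructions included.
TEMPLATE = (
    "Simplify the following Kubernetes error message delimited by triple "
    "dashes written in --- {language} --- language; --- {error} ---."
)

language_flag = (
    "english (MUST: Reply in Simplified Chinese for everything except for "
    'certain terms and symbols. For example, "Error:" must be replaced by '
    '"错误信息:", and "Solution:" must be replaced by "解决方案:")'
)

prompt = TEMPLATE.format(
    language=language_flag,
    error="Service has not ready endpoints, expected 1",
)
print(prompt)
```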
Although it may look weird, the result is acceptable. For example:
```yaml
- apiVersion: core.k8sgpt.ai/v1alpha1
  kind: Result
  metadata:
    creationTimestamp: "2024-08-01T02:56:10Z"
    generation: 1
    labels:
      k8sgpts.k8sgpt.ai/backend: localai
      k8sgpts.k8sgpt.ai/name: k8sgpt-local-ai
      k8sgpts.k8sgpt.ai/namespace: openai
    name: openloftsystemopenloftwebhookservice
    namespace: openai
    resourceVersion: "6663786"
    uid: d3b5bc49-afa8-4139-bceb-5f8c6ddcb157
  spec:
    backend: localai
    details: |-
      错误信息:服务未就绪结点,而 Pod/openloft-controller-manager-f5fbfc96c-hfn4f] 只有一个。
      解决方案:
      步骤1:确保 controller-managerDeployment 的readyGOP是“True”。
      步骤2:检测controller.Manager的配置文件中端口配置是否有效。
      步骤3:检查 pod 对象及其依赖 service 等是否存在任何权限问题。
    error:
    - text: 'Service has not ready endpoints, pods: [Pod/openloft-controller-manager-f5fbfc96c-hfn4f],
        expected 1'
    kind: Service
    name: openloft-system/openloft-webhook-service
    parentObject: ""
  status:
    lifecycle: historical
kind: List
metadata:
  resourceVersion: ""
```
K8sGPT Version
No response
Kubernetes Version
No response
Host OS and its Version
No response
Steps to reproduce
default_prompt takes two inputs, Language and Error. I would like to know if specifying the language has any practical benefit (perhaps a faster response?) as opposed to letting the model infer the language, making it multi-language out of the box.
Expected behaviour
No need to pass the language as a flag
Actual behaviour
No response
Additional Information
No response