kos-team opened 3 weeks ago
The `main` tag may not be stable, and we recommend that users try the newest release version (vx.y.z) instead.
@csuzhangxc The main usability issue here is that TiProxy follows a different version-numbering scheme than the other TiDB components. If we set a version such as `v8.1.0` in the property `spec.version`, all TiDB components use `v8.1.0` as their version. This works for every other component, such as TiFlash and TiKV, but because TiProxy does not share the same version numbers as the rest of the components, it fails.
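To illustrate the mechanism, here is roughly how the image tags are derived from `spec.version` (a sketch; the `pingcap/*` base image names are assumed to be the operator's defaults):

```yaml
# Sketch: with no per-component overrides, spec.version becomes the tag
# appended to each component's default base image.
spec:
  version: v8.1.0
  # effectively resolves to:
  #   pd      -> pingcap/pd:v8.1.0
  #   tikv    -> pingcap/tikv:v8.1.0
  #   tiflash -> pingcap/tiflash:v8.1.0
  #   tiproxy -> pingcap/tiproxy:v8.1.0   (no such tag on Docker Hub)
```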
I know. I mean it's hard to choose a default value for TiProxy, as we always recommend that users use the newest version.
We also reported a related issue to the tidb upstream repo: https://github.com/pingcap/tidb/issues/56643, about the `latest` tag not pointing to the actual latest version. It seems that these upstream systems do not have a reliable tag for the latest version. It would be nice if they had a tag that could be used as the default value here.
To make the deployment safer, I think `spec.tiproxy.image` could perhaps be made a required property of the `spec.tiproxy` object. This would force users to specify a TiProxy version when they enable it, since TiProxy cannot use the default value from `spec.version` anyway.
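As a workaround today, users can pin the TiProxy version themselves via the per-component override; a minimal sketch, where the `v1.2.0` tag is only illustrative:

```yaml
spec:
  version: v8.1.0     # used by PD, TiKV, TiDB, TiFlash, ...
  tiproxy:
    replicas: 1
    version: v1.2.0   # illustrative tag: TiProxy is versioned
                      # independently of the TiDB release train
```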
Bug Report
What version of Kubernetes are you using?
Client Version: v1.31.1
Kustomize Version: v5.4.2

What version of TiDB Operator are you using?
v1.6.0
What's the status of the TiDB cluster pods?
TiProxy pods are in the `CrashLoopBackOff` state.

What did you do?
We deployed a cluster with TiProxy.
How to reproduce
Deploy a TiDB cluster with TiProxy enabled, for example:
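A minimal sketch of such a manifest follows; replica counts and storage sizes are placeholder values:

```yaml
# Sketch: a TidbCluster with TiProxy enabled and only the cluster-wide
# version set, which reproduces the failure.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v8.1.0   # applied to every component, including TiProxy
  pd:
    replicas: 1
    requests:
      storage: 1Gi
  tikv:
    replicas: 1
    requests:
      storage: 1Gi
  tidb:
    replicas: 1
  tiproxy:
    replicas: 1     # no version/image override, so the operator tries
                    # to pull pingcap/tiproxy:v8.1.0, which does not exist
```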
What did you expect to see?
TiProxy pods should start successfully and be in the `Healthy` state.

What did you see instead?
The TiProxy pods kept crashing and stayed in the `CrashLoopBackOff` state due to `ErrImagePull`.
Root Cause
The root cause is that we specified `spec.version` as `v8.1.0`, which is used for all components when pulling their images. However, there is no `pingcap/tiproxy:v8.1.0` image available on Docker Hub, so the image pull fails for TiProxy.

How to fix
Since the image tag for TiProxy follows a different naming convention than other components such as TiKV and TiFlash, we recommend setting a default value of `main` for `spec.tiproxy.version`. This ensures the TiDB Operator overrides the version tag for TiProxy and pulls the correct image.
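With that change, the effective spec for a cluster that leaves TiProxy's version unset would look roughly like this (a sketch of the proposed defaulting behavior, not of current operator behavior):

```yaml
spec:
  version: v8.1.0   # still applies to PD, TiKV, TiDB, TiFlash, ...
  tiproxy:
    replicas: 1
    # filled in by the proposed defaulting when the user leaves it unset:
    version: main   # resolves to pingcap/tiproxy:main
```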