Lyndon-Li closed this issue 1 year ago
Yeah, this is killer. Trying to run Velero v1.10.2 with AWS plugin v1.6.1 and I cannot get past this. Makes Velero unusable.
@lnehrin We didn't expect this problem to happen in the normal case, because we checked the plugins and found that they seldom log with WithError. So, per the initial discussion, we planned to fix it in the next release, v1.12.
If you are hitting the problem in v1.10.2, that is not the normal case, and we will need to fix it not only in v1.11 but also in v1.10.x.
So please share the Velero log bundle from when the problem happened in your env; let's check which log caused the problem and reconsider.
I tried various installations of velero, with and without CSI, and even no plugins at all. I thought I could migrate between EKS clusters, exactly as the example in the docs. Maybe my source cluster EKS v1.21.14 is just too unhappy. When I do an identical installation on a new EKS cluster v1.24.8 (no CSI) - a backup succeeds. Unfortunately my use case is to migrate namespaces from the old cluster v1.21.14 to the new cluster v1.24.8. I think I'm going to be stuck with manually migrating the PVs and fresh helm installations of what I need to get installed.
I'm also seeing this in v1.10.2, log attached. velero_out.log
Thanks all, we will fix this in v1.11 and v1.10.3
Close as completed
As the [logrus code](https://github.com/sirupsen/logrus/blob/v1.8.1/json_formatter.go#:~:text=logrus/issues/137-,data,(),-default) shows, the `error` field is converted to a string if the log format is JSON. So when a plugin generates a log with `logger.WithError`, the plugin client receives the `error` field as a `string` instead of an `error` interface.
The backup controller registers a log_counter_hook on the backup logger, but log_counter_hook.go assumes the `error` field is always an `error` interface. As a result, there is a panic like the one below:
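The failure mode can be sketched as follows. This is a minimal, hypothetical reproduction of the type-assertion mismatch, not Velero's actual hook code: an unchecked assertion on `entry.Data["error"]` panics once the field arrives as a `string` after JSON formatting, while the comma-ok form handles both shapes safely.

```go
package main

import (
	"errors"
	"fmt"
)

// inspectErrorField mimics what a log hook might do with the "error" field.
// Using the comma-ok type assertion avoids a panic when the field is a
// string (as it is after logrus JSON formatting) rather than an error.
func inspectErrorField(data map[string]interface{}) string {
	if e, ok := data["error"].(error); ok {
		return "error interface: " + e.Error()
	}
	if s, ok := data["error"].(string); ok {
		return "string: " + s
	}
	return "no error field"
}

func main() {
	// In-process logging: the field holds a real error value.
	local := map[string]interface{}{"error": errors.New("boom")}
	fmt.Println(inspectErrorField(local)) // error interface: boom

	// Across the plugin boundary, JSON formatting turns it into a string.
	fromPlugin := map[string]interface{}{"error": "boom"}

	// An unchecked assertion here would panic:
	//   _ = fromPlugin["error"].(error)
	//   // panic: interface conversion: interface {} is string, not error
	fmt.Println(inspectErrorField(fromPlugin)) // string: boom
}
```

The fix along these lines is simply to use the comma-ok assertion in the hook so both the in-process and the JSON-round-tripped shapes of the field are accepted.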