kubernetes / client-go

Go client for Kubernetes.
Apache License 2.0

Remote Command Executor Re-Using Stdin/Stdout Results in Lost Data #1325

Closed adrianosela closed 1 month ago

adrianosela commented 6 months ago

I am building a proxy between ssh and kubectl exec using the remotecommand library. One of the neat things I'd like to do is automatically deduce which shell to drop the user into, without having to configure it.

So I am simply attempting to iterate over well-known shells (commands to use for the exec) as follows:

    shells := []string{"bash", "zsh", "ash", "sh"}
    for _, shell := range shells {

        exec, err := getRemoteCommandExecutor(
            ctx,
            kubeconfig,
            user,
            shell,
            target,
        )
        if err != nil {
            s.logger.Error(
                "failed to get remote command executor for target",
                zap.String("namespace", target.namespace),
                zap.String("pod", target.pod),
                zap.String("container", target.container),
                zap.String("user", user),
                zap.String("shell", shell),
                zap.Error(err),
            )
            return
        }

        err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
            Stdin:             channel,
            Stdout:            channel,
            Stderr:            channel.Stderr(),
            Tty:               true,
            TerminalSizeQueue: terminalSizeQ,
        })
        if err != nil {
            // exit code 127 (or "executable file not found") means this
            // shell is not present in the container, so move on to the next
            if strings.Contains(err.Error(), "executable file not found") ||
                strings.Contains(err.Error(), "command terminated with exit code 127") {
                continue // try next shell
            }
            if !errors.Is(err, io.EOF) && !errors.Is(err, context.Canceled) {
                s.logger.Error(
                    "failed to stream between ssh channel and target container",
                    zap.String("namespace", target.namespace),
                    zap.String("pod", target.pod),
                    zap.String("container", target.container),
                    zap.String("user", user),
                    zap.String("shell", shell),
                    zap.Error(err),
                )
            }
        }
        return
    }

    // if we get here, every shell came back "not found"
    channel.Write([]byte("No shells available in the target container :("))
    s.logger.Error("no shells available in the target container")
}
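
(As an aside: I know the string matching on the error above is brittle. If I'm reading client-go right, a non-zero exit code comes back as an `exec.CodeExitError` from `k8s.io/client-go/util/exec`, so something like the following should detect "shell not found" more robustly; treat it as an untested sketch, with the package aliased to avoid clashing with my `exec` variable:)

    // import kexec "k8s.io/client-go/util/exec"
    var exitErr kexec.CodeExitError
    if errors.As(err, &exitErr) && exitErr.Code == 127 {
        continue // exit code 127 = command not found in the container
    }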

This works great when the first shell in the array (bash) is present in the container; otherwise we move on to the next one. The problem I'm facing is that bytes seem to get lost when I re-use the same ssh channel object (the stdin and stdout for the remote command executor) across iterations. It's not a matter of which shell I'm using; the problem exists as long as we iterate at least once in the loop.

I looked at the code in this repo and found that under the hood it's just an io.Copy goroutine for each file descriptor (each io.Reader/io.Writer)...

So I assumed that the io.Copy goroutine simply keeps hold of the reader longer than needed....
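
To convince myself this is the failure mode, I put together a tiny standalone repro. Nothing below is client-go's actual code, just the same io.Copy pattern: a stale copier parked on a shared reader usually wins the race for the next bytes (at least on my machine):

    package main

    import (
        "fmt"
        "io"
        "time"
    )

    func main() {
        // pr/pw stand in for the shared ssh channel (the Stdin side)
        pr, pw := io.Pipe()

        // first "executor": pumps stdin via io.Copy the way client-go does;
        // io.Copy only returns on EOF or error, so this goroutine stays
        // parked inside pr.Read even after the first exec attempt fails
        go io.Copy(io.Discard, pr)

        time.Sleep(10 * time.Millisecond)

        // input the user types, intended for the *second* exec attempt
        go pw.Write([]byte("ls -la\n"))

        // second "executor" reads the same shared channel
        got := make(chan int)
        go func() {
            buf := make([]byte, 64)
            n, _ := pr.Read(buf) // races against the stale io.Copy
            got <- n
        }()

        select {
        case n := <-got:
            fmt.Printf("second consumer read %d bytes\n", n)
        case <-time.After(time.Second):
            fmt.Println("second consumer read nothing; the stale io.Copy consumed the input")
        }
    }

Run it and the second consumer virtually always times out, which matches the lost bytes I'm seeing.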

I tried wrapping my stdin/stdout in custom readers/writers to duplicate the last read/write each time we iterate, but that did not solve the problem...
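
The closest thing I have to a workaround is to stop handing the ssh channel to the executor at all: one goroutine owns every read from the channel, and each exec attempt gets a fresh stdin reader fed from it, cancelled when the attempt ends. Rough sketch; every name here is mine, nothing is from client-go:

    import (
        "context"
        "io"
    )

    // stdinMux owns the only goroutine that ever reads the shared channel.
    type stdinMux struct {
        data chan []byte
    }

    func newStdinMux(src io.Reader) *stdinMux {
        m := &stdinMux{data: make(chan []byte)}
        go func() {
            defer close(m.data)
            for {
                buf := make([]byte, 32*1024)
                n, err := src.Read(buf)
                if n > 0 {
                    m.data <- buf[:n] // parks here until some reader asks
                }
                if err != nil {
                    return
                }
            }
        }()
        return m
    }

    // NewReader hands out a fresh stdin for one exec attempt; cancel its
    // context before the next attempt so the executor's io.Copy goroutine
    // sees EOF instead of competing for the channel's bytes.
    func (m *stdinMux) NewReader(ctx context.Context) io.Reader {
        return &muxReader{ctx: ctx, mux: m}
    }

    type muxReader struct {
        ctx  context.Context
        mux  *stdinMux
        rest []byte
    }

    func (r *muxReader) Read(p []byte) (int, error) {
        if len(r.rest) == 0 {
            select {
            case <-r.ctx.Done(): // detached: unblock the stale io.Copy
                return 0, io.EOF
            default:
            }
            select {
            case b, ok := <-r.mux.data:
                if !ok {
                    return 0, io.EOF // the ssh channel itself is done
                }
                r.rest = b
            case <-r.ctx.Done():
                return 0, io.EOF
            }
        }
        n := copy(p, r.rest)
        r.rest = r.rest[n:]
        return n, nil
    }

Each iteration would then do `stdinCtx, cancel := context.WithCancel(ctx)`, pass `mux.NewReader(stdinCtx)` as `Stdin`, and `cancel()` before continuing. There's still a small window where a just-cancelled reader can grab one chunk before noticing its context is done, but input sitting in the mux between attempts is no longer swallowed.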

Beyond that, I am out of things to try; any thoughts? I'm not entirely sure this is a bug in the code; maybe it is, maybe it isn't. Maybe re-use of the stdin/stdout io.Reader/io.Writers is something not many have attempted before, so it hasn't been an issue for anyone?

This code is used in the open-source repo here: https://github.com/borderzero/border0-cli/blob/main/internal/ssh/session/kubectl_exec_session.go

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/client-go/issues/1325#issuecomment-2120860936):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.