kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

git cloner uses local filesystem instead of `fSys` #4390

Closed Haegi closed 2 years ago

Haegi commented 2 years ago

Describe the bug

When using kustomize in Go (sigs.k8s.io/kustomize/api/krusty), it is currently not possible to use an in-memory filesystem (filesys.MakeFsInMemory()) together with a remote target.

This happens because everything operates on the in-memory filesystem passed to the Run() method, except the cloner (git.Cloner), which works against the local filesystem.

The stacktrace for this:

Files that can reproduce the issue

main.go

package main

import (
    "fmt"
    "os"

    "sigs.k8s.io/kustomize/api/krusty"
    "sigs.k8s.io/kustomize/api/types"
    "sigs.k8s.io/kustomize/kyaml/filesys"
    "sigs.k8s.io/yaml"
)

func main() {
    fSys := filesys.MakeFsInMemory()
    kustomization := types.Kustomization{
        Resources: []string{
            "github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6",
        },
    }

    bytes, err := yaml.Marshal(kustomization)
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    err = fSys.WriteFile("kustomization.yaml", bytes)
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    kustomizer := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
    _, err = kustomizer.Run(fSys, ".")
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }
}

go.mod

module xxx

go 1.16

require (
    sigs.k8s.io/kustomize/api v0.10.1
    sigs.k8s.io/kustomize/kyaml v0.13.0
    sigs.k8s.io/yaml v1.3.0
)

Expected output

No error; kustomize does its thing (clones the repo and so on).

Actual output

accumulating resources: accumulation err='accumulating resources from 'github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6': '/github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6' doesn't exist': '/private/var/folders/t6/msc_f56j073427s1kv0jb4gr0000gn/T/kustomize-1859347002/examples/multibases' doesn't exist

Kustomize version

kustomize api v0.10.1

Platform

macOS (arm64)

Additional context

k8s-ci-robot commented 2 years ago

@Haegi: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
natasha41575 commented 2 years ago

While I agree that this is annoying, especially when writing tests, I'm not sure if there is a feasible solution. FsInMemory is a fake filesystem that is backed by a private struct fsNode.

For the git cloner to be able to clone to a file system in memory, it would have to clone the repo and then copy all of the contents into a tree of fsNodes. This is a nontrivial, expensive task, especially if the cloned repository is quite large.
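
To make that copy step concrete, a rough caller-side sketch could look like the following (copyDirToMemFs is a hypothetical helper, not part of the kustomize API; it assumes the repository has already been cloned to a local directory):

package memfscopy

import (
    "io/fs"
    "os"
    "path/filepath"

    "sigs.k8s.io/kustomize/kyaml/filesys"
)

// copyDirToMemFs mirrors a directory tree that already exists on the local
// disk (e.g. a temporary git checkout) into an in-memory filesys.FileSystem.
func copyDirToMemFs(srcDir, destDir string, memFs filesys.FileSystem) error {
    return filepath.WalkDir(srcDir, func(path string, d fs.DirEntry, err error) error {
        if err != nil {
            return err
        }
        rel, err := filepath.Rel(srcDir, path)
        if err != nil {
            return err
        }
        target := filepath.Join(destDir, rel)
        if d.IsDir() {
            // Recreate the directory structure in memory.
            return memFs.MkdirAll(target)
        }
        // Read every file from disk and write its contents into memory.
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        return memFs.WriteFile(target, data)
    })
}

Every file in the checkout gets read into memory once, which is why this becomes expensive for large repositories.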

What is your use case for wanting to use an in-memory file system along with remote targets? We have some examples of tests if that's what you're looking for, but all of our tests with remote targets use the local filesystem and need to clean themselves up afterward.

/triage under-consideration

Haegi commented 2 years ago

Thanks for the explanation. You’re right, it could be very expensive in some cases.

We want some environment variable substitution for kustomization files. To allow something like:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/kubernetes-sigs/kustomize/examples/multibases?ref=${REPO_TAG}

I guess this would be quite a big change to the current kustomize codebase. Therefore, we built some custom logic around kustomizer.Run(), which traverses the kustomization file and replaces the placeholders with environment variables. In this case, we also have to copy the files from the local filesystem to the in-memory filesystem. But this is not a big deal because we know how small our repositories are at the moment.
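
For illustration, the substitution itself can be as simple as running the raw kustomization through os.ExpandEnv before writing it into the in-memory filesystem. A minimal sketch (not our actual code):

package main

import (
    "fmt"
    "os"

    "sigs.k8s.io/kustomize/api/krusty"
    "sigs.k8s.io/kustomize/kyaml/filesys"
)

func main() {
    // Read the kustomization from the local filesystem; it may contain
    // ${VAR} placeholders such as ?ref=${REPO_TAG}.
    raw, err := os.ReadFile("kustomization.yaml")
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    // Replace the placeholders with environment variable values.
    expanded := os.ExpandEnv(string(raw))

    // Write the expanded kustomization (and, in our real setup, the other
    // referenced local files) into the in-memory filesystem.
    fSys := filesys.MakeFsInMemory()
    if err := fSys.WriteFile("kustomization.yaml", []byte(expanded)); err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    // With remote targets this still runs into the cloner issue described
    // above, because git.Cloner clones to the local disk rather than fSys.
    kustomizer := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
    if _, err := kustomizer.Run(fSys, "."); err != nil {
        fmt.Print(err)
        os.Exit(1)
    }
}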

natasha41575 commented 2 years ago

Though practically I think we should consider whether there is a way to go about this without introducing a very expensive operation to kustomize, the issue itself is still valid, and we can discuss implementation details if someone has a proposal for it.

/triage accepted

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/4390#issuecomment-1171639802):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
pmalek commented 3 months ago

While this has been solved for CLI use with a kustomization.yaml on the local filesystem, like so:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/dev/?ref=v3.3.1

$ kustomize build ./config/1
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: dev-myapp-pod
spec:
  containers:
  - image: nginx:1.7.9
    name: nginx

But it still fails when using the Go API:

package main

import (
    "io/fs"
    "path/filepath"
    "testing"

    "github.com/stretchr/testify/require"
    "sigs.k8s.io/kustomize/api/krusty"
    "sigs.k8s.io/kustomize/api/types"
    "sigs.k8s.io/kustomize/kyaml/filesys"
    "sigs.k8s.io/yaml"
)

func TestNew(t *testing.T) {
    kustomization := types.Kustomization{
        TypeMeta: types.TypeMeta{
            APIVersion: "kustomize.config.k8s.io/v1beta1",
            Kind:       "Kustomization",
        },
        Resources: []string{
            "https://github.com/kubernetes-sigs/kustomize/examples/multibases/dev/?ref=v3.3.1",
        },
    }

    bytes, err := yaml.Marshal(kustomization)
    require.NoError(t, err)
    t.Logf("kustomization.yaml:\n%s", string(bytes))

    fSys := filesys.MakeFsInMemory()
    err = fSys.WriteFile("kustomization.yaml", bytes)
    require.NoError(t, err)

    fSys.Walk(".", filepath.WalkFunc(func(path string, info fs.FileInfo, err error) error {
        t.Logf("path: %s\n", path)
        return nil
    }))

    kustomizer := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
    res, err := kustomizer.Run(fSys, "/")
    require.NoError(t, err)
    require.NotNil(t, res)
}

$ go test -v -count 1 -run TestNew
=== RUN   TestNew
    x_test.go:33: kustomization.yaml:
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        resources:
        - https://github.com/kubernetes-sigs/kustomize/examples/multibases/dev/?ref=v3.3.1
    x_test.go:40: path: /
    x_test.go:40: path: /kustomization.yaml
    x_test.go:46:
                Error Trace:    x_test.go:46
                Error:          Received unexpected error:
                                accumulating resources: accumulation err='accumulating resources from 'https://github.com/kubernetes-sigs/kustomize/examples/multibases/dev/?ref=v3.3.1': URL is a git repository': '/private/var/folders/0m/_63w01516tgf3cftmp9h7ylm0000gn/T/kustomize-1953801228/examples/multibases/dev' doesn't exist
                Test:           TestNew
--- FAIL: TestNew (2.83s)
FAIL
exit status 1
FAIL    kustomize1      3.166s

$ kustomize1 ls /private/var/folders/0m/_63w01516tgf3cftmp9h7ylm0000gn/T/kustomize-1953801228/examples/multibases/dev
kustomization.yaml

As you can see, the directory does exist on the temporary filesystem.

Any clues how to proceed?
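
The only fallback I can see from the earlier discussion is to build against the real filesystem (filesys.MakeFsOnDisk()) in a temporary directory whenever remote targets are involved. A minimal sketch of that fallback, with the kustomization inlined for brevity:

package main

import (
    "fmt"
    "os"
    "path/filepath"

    "sigs.k8s.io/kustomize/api/krusty"
    "sigs.k8s.io/kustomize/kyaml/filesys"
)

func main() {
    // Use a real temporary directory so the git cloner and the build
    // operate on the same (local) filesystem.
    tmpDir, err := os.MkdirTemp("", "kustomize-build-")
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }
    defer os.RemoveAll(tmpDir)

    kustomization := []byte(`apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/dev/?ref=v3.3.1
`)
    if err := os.WriteFile(filepath.Join(tmpDir, "kustomization.yaml"), kustomization, 0o644); err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    kustomizer := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
    res, err := kustomizer.Run(filesys.MakeFsOnDisk(), tmpDir)
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }

    out, err := res.AsYaml()
    if err != nil {
        fmt.Print(err)
        os.Exit(1)
    }
    fmt.Println(string(out))
}

That obviously gives up the in-memory filesystem entirely, so it is a workaround rather than a fix for this issue.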

kustomize version
v5.4.2