kubernetes-sigs / yaml

A better way to marshal and unmarshal YAML in Golang

Strict unmarshalling should be case-sensitive #15

Open martina-if opened 5 years ago

martina-if commented 5 years ago

yaml.UnmarshalStrict(...) compares keys case-insensitively.

I would expect it to compare key names case-sensitively, so that if my schema defines a key called somethingGreat, a key called somethingGREat in a YAML file would not match and the unmarshal would fail with the error unknown field "somethingGREat".
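
For context, a minimal sketch of where the case-insensitive comparison most likely comes from: sigs.k8s.io/yaml converts the YAML document to JSON and delegates decoding to encoding/json, whose struct-field matching accepts case-insensitive matches, so DisallowUnknownFields never sees an unknown field (this is my reduction to the standard library only, not code from this repo):

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

func main() {
    type T struct {
        SomethingGreat int `json:"SomethingGreat,omitempty"`
    }

    // DisallowUnknownFields only rejects keys that match *no* field; a
    // case-insensitive match still counts as a match, so no error here.
    dec := json.NewDecoder(strings.NewReader(`{"SomethingGREat": 2}`))
    dec.DisallowUnknownFields()

    var t T
    if err := dec.Decode(&t); err != nil {
        fmt.Printf("ERROR: %v\n", err)
    } else {
        fmt.Printf("Decoded without error: %+v\n", t) // Decoded without error: {SomethingGreat:2}
    }
}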

errordeveloper commented 5 years ago

cc @dims @sttts @neolit123

neolit123 commented 5 years ago

i will verify this locally. seems like something we want to fix.

neolit123 commented 5 years ago

/kind bug /priority important-longterm

dims commented 5 years ago

/assign @neolit123

martina-if commented 5 years ago

How to reproduce it:


package main

import (
    "fmt"
    // yaml "gopkg.in/yaml.v2"
    yaml "sigs.k8s.io/yaml"
)

func main() {

    type T struct {
        SomethingGreat int `yaml:"SomethingGreat,omitempty"`
    }
    var t T

    // Correct letter case
    if err := yaml.UnmarshalStrict([]byte("SomethingGreat: 1"), &t); err != nil {
        fmt.Printf("ERROR: %v\n", err)
    } else {
        fmt.Printf("Correctly parsed: %+v\n", t)
    }

    // Incorrect letter case
    if err := yaml.UnmarshalStrict([]byte("SomethingGREat: 2"), &t); err != nil {
        fmt.Printf("ERROR: %v\n", err)
    } else {
        fmt.Printf("Correctly parsed: %+v\n", t)
    }
}

Produces:

Correctly parsed: {SomethingGreat:1}
Correctly parsed: {SomethingGreat:2}
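
One thing worth noting about the repro: sigs.k8s.io/yaml round-trips through JSON, so it reads json struct tags rather than yaml tags; the yaml:"SomethingGreat,omitempty" tag above is ignored and matching falls back to the exported field name. The case-insensitive behaviour is the same either way, but a json-tagged variant keeps the tag in play:

    type T struct {
        // sigs.k8s.io/yaml decodes via encoding/json, so json tags apply here.
        SomethingGreat int `json:"SomethingGreat,omitempty"`
    }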

errordeveloper commented 5 years ago

So with yaml "gopkg.in/yaml.v2" it does actually fail:

Correctly parsed: {SomethingGreat:1}
ERROR: yaml: unmarshal errors:
  line 1: field SomethingGREat not found in type main.T

I wonder what is going on here...
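
Roughly what seems to be going on (a simplified sketch, not the library's exact code): gopkg.in/yaml.v2 decodes the YAML directly into the struct and compares keys exactly (here against the yaml tag), so its strict mode flags SomethingGREat; sigs.k8s.io/yaml first converts the YAML to JSON and then hands decoding to encoding/json, where DisallowUnknownFields accepts case-insensitive matches:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"

    yaml "sigs.k8s.io/yaml"
)

// Simplified approximation of the sigs.k8s.io/yaml UnmarshalStrict pipeline.
func unmarshalStrictSketch(y []byte, out interface{}) error {
    j, err := yaml.YAMLToJSON(y) // YAML -> JSON first
    if err != nil {
        return err
    }
    dec := json.NewDecoder(bytes.NewReader(j))
    dec.DisallowUnknownFields() // the "strict" check happens in encoding/json...
    return dec.Decode(out)      // ...which matches SomethingGREat to SomethingGreat
}

func main() {
    type T struct {
        SomethingGreat int `json:"SomethingGreat,omitempty"`
    }
    var t T
    err := unmarshalStrictSketch([]byte("SomethingGREat: 2"), &t)
    fmt.Println(err, t) // <nil> {2}: no unknown-field error despite the case mismatch
}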

neolit123 commented 5 years ago

@martina-if @errordeveloper see this PR in flight: https://github.com/kubernetes-sigs/yaml/pull/19

Correctly parsed: {SomethingGreat:1}
ERROR: main.T.ReadObject: found unknown field: SomethingGREat, error found in #10 byte of ...|hingGREat":2}|..., bigger context ...|{"SomethingGREat":2}|...

martina-if commented 5 years ago

Hi @neolit123, thanks for the link to the PR! Do you need a review from our side?

neolit123 commented 5 years ago

sure, go ahead.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

errordeveloper commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

errordeveloper commented 4 years ago

/remove-lifecycle rotten

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

errordeveloper commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

errordeveloper commented 4 years ago

/remove-lifecycle stale

errordeveloper commented 4 years ago

/lifecycle frozen

neolit123 commented 4 years ago

we faced this recently again here: https://github.com/kubernetes/kubeadm/issues/2167