proposal: spec: introduce structured tags #23637

Open urandom opened 6 years ago

urandom commented 6 years ago

This proposal is for a new syntax for struct tags, one that is formally defined in the grammar and can be validated by the compiler.

Problem

The current struct tag format is defined in the spec as a string literal; the spec says nothing about what the contents of that string should look like. If the user somehow stumbles upon the reflect package, a simple space-separated key:"value" convention is mentioned. It doesn't go into detail about what the value might be, since that format is at the discretion of the package that uses the tag. There will never be a tool that helps the user write the value of a tag, similar to what gocode does for regular code. The format itself might be poorly documented, or hard to find, leaving one to guess what can be put as a value. The reflect package is also probably not the most user-facing package in the standard library, which leads to a plethora of Stack Overflow questions about how multiple tags can be specified. I myself have made the error of using a comma to delimit the different tags a few times.

Proposal

EDIT: the original proposal introduced a new type. After the initial discussion, it was decided that there is no need for a new type: a struct type, or custom types whose underlying types can be constant (string/numeric/bool/...), will do just as well.

A tag value can be either a struct whose field types can be constant, or a custom type whose underlying type can be constant. According to the Go spec, that means a field or custom type can be a string, a boolean, a rune, an integer, a floating-point, or a complex number. Example definition and usage:

package json

type Rules struct {
    Name string
    OmitEmpty bool
    Ignore bool
}

func processTags(f reflect.StructField) {
    // reflect.StructField.Tags []interface{}
    for _, t := range f.Tags {
        if jt, ok := t.(Rules); ok {
            ...
            break
        }
    }
}

package sqlx

type Name string

Users can instantiate values of such types within struct definitions, surrounded by [ and ] and delimited by ,. The type cannot be omitted when the value is instantiated.

package mypackage

import "json"
import "sqlx"

type MyStruct struct {
      Value      string [json.Rules{Name: "value"}, sqlx.Name("value")]
      PrivateKey []byte [json.Rules{Ignore: true}]
}

Benefits

Tags are just types: they are clearly defined and are part of a package's API. Tools (such as gocode) may now assist in using such tags, reducing the cognitive burden on users. Package authors will not need to write "value" parsers for their supported tags. As a type, a tag is now a first-class citizen in godoc. Even if a tag lacks any kind of documentation, a user still has a fighting chance of using it, since they can now easily go to the definition of a tag and look up its fields, or see the definition in godoc. Finally, if the user has misspelled something, the compiler will inform them of the error, instead of it occurring at runtime or being silently ignored, as is the case right now.

Backwards compatibility

To preserve backwards compatibility, string-based tags will not be removed, but merely deprecated. To ensure unified behavior across libraries, authors should ignore any string-based tags whenever any of their recognized structured tags are present on a field. For example:

type Foo struct {
    Bar int `json:"bar" yaml:"bar,omitempty"` [json.OmitEmpty]
}

A hypothetical json library, upon recognizing the presence of the json.OmitEmpty tag, should not bother looking for any string-based tags. The yaml library in this example, on the other hand, will still use the defined string-based tag, since no structured yaml tags it recognizes have been included by the struct author.

Side note

This proposal is strictly for replacing the current struct tags. While the tag grammar could be extended to apply to a lot more things than struct tags, this proposal is not suggesting that it should, and such a discussion should happen in a different proposal.

ianlancetaylor commented 6 years ago

Related to #20165, which was recently declined. But this version is better, because it proposes an alternative.

ianlancetaylor commented 6 years ago

I don't see any special need for a new tag type. You may as well simply say that a struct field may be followed by a comma separated list of values, and that those values are available via reflection on the struct.

On the other hand, something this proposal doesn't clearly address is that those values must be entirely computable at compile time. That is not a notion that the language currently defines for anything other than constants, and it would have to be carefully spelled out to decide what is permitted and what is not. For example, can a tag, under either the original definition or this new one, have a field of interface type?

urandom commented 6 years ago

@ianlancetaylor You raise an interesting point. A struct will have pretty much the same benefits as a new tag type would, and I imagine it would probably make the implementation a bit simpler. Other types might only be useful if they are the underlying type of a custom one, and as such one would have to use them explicitly; otherwise there might be ambiguity when a constant is provided directly:

package sqlx

type ColumnName string
...

package main
import "sqlx"

type MyStruct struct {
    Total int64 [sqlx.ColumnName("total")]
}

vs what I would consider an invalid usage:


package main
import "sqlx"

type MyStruct struct {
    Total int64 ["total"]
}

For your second point, I assumed it would be clear that the value for any field of a tag has to be a constant. Such a "restriction" makes it clear what can and cannot be a field type, and rules out having a struct as a field type (or an interface, as in your example).
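
For illustration only, here is a rough sketch of a tag type that would satisfy that constant-only restriction, next to field types it would rule out (the sqlx package and field names are made up):

package sqlx

// Allowed under the constant-only restriction: every field has a type
// whose values can be constants (string, bool, numeric).
type Column struct {
    Name     string
    ReadOnly bool
    Scale    int
}

// Ruled out: these field types have no constant values in Go.
// type BadTag struct {
//     Options  []string    // slices are never constant
//     Fallback interface{} // interfaces are never constant
// }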

dlsniper commented 6 years ago

I wonder if we could solve this without having to change the language, and even better, in Go 1.X rather than waiting for Go 2. As such, I've tried to understand the problem as well as the proposed solution and came up with a different approach to the problem, please see below.

First, the problem. I think the description starts from the wrong set of assumptions:

There will never be a tool that will help the user write the value of a tag, similarly to what gocode does with regular code.

There can certainly be a tool that understands how these tags work, allows users to define custom tags, and has them validated. Such a tool might, for example, rely on "magic" comments in the code: the structure proposed here could be "annotated" with a comment like // +tag.

This would of course have the advantage of not forcing a change in the toolchain, with the downside that you'd need the tool, rather than the compiler, to validate this. The values could be JSON, for example:

package mypackage

import "json"
import "sqlx"

type MyStruct struct {
    Value string `json:"{\"Name\":\"value\"}" sqlx:"{\"Name\":\"value\"}"`
    PrivateKey []byte `json:"{\"Ignore\":true}"`
}

package json

// +tag
type Tag struct {
    Name string
    OmitEmpty bool
    Ignore bool
}

package sqlx

// +tag
type SQLXTag struct {
    Name string
}

More details could be added about how this should be a single tag per package, how the struct must be exported, and so on (which the current proposal also does not address).

The format itself might be poorly documented, or hard to find, leading one to guess what can be put as a value. The reflect package itself is probably not the biggest user-facing package in the standard library as well, leading to a plethora of stackoverflow questions about how multiple tags can be specified.

This sounds like a problem one could fix with a CL / PR to the documentation of the package, specifically improving it by documenting the available tags and how to use struct tags.

I myself have made the error a few times of using a comma to delimit the different tags.

Should the above proposal of "annotating" a struct in a package work, the tools could also solve the problem of navigating to the definition of a tag.

Furthermore, the original proposal adds the problem that tag is now a keyword and cannot be used in regular code. Imho, should any new keyword be added in Go 2, there should be a really good reason to do so, and it should be kept in mind that it would make porting existing Go 1 sources that much harder, given that the code would need to be refactored before being ported over.

The downside of my proposal is that it requires people to use non-compiler tools. But given how go vet is now partially integrated into go test, this check could also be added to that list.

Tools that offer completion to users can be adapted to assist the user in writing the tag, all without changing the language or adding an extra keyword.

And should the compiler ever want to validate these tags, the code would already be there in go vet.

creker commented 6 years ago

@dlsniper

Furthermore, the original proposal adds the problem that tag now is a keyword and cannot be used as in regular code

You can always make the compiler a bit smarter, so that it understands from context when a keyword is a keyword. The tag keyword would appear in a very specific context which the compiler could easily detect and understand. No code could ever be broken by that new keyword. Other languages do this and have no problem adding new contextual keywords without breaking backwards compatibility.

As for your proposal, to me, adding another set of magic comments further reinforces the opinion that there's something wrong with the design of the language. Every time I look at these comments they look out of place, like someone forgot to add some feature and, in order not to break things, shoved everything into comments. There are plenty of magic comments already. I think we should stop and implement proper Go features, not continue developing another language on top of Go.

ianlancetaylor commented 6 years ago

You can always make compiler a bit smarter to understand context and when a keyword is a keyword.

As I said above, though, I see no advantage at all to using tag rather than struct.

This proposal still needs more clarity on precisely what is permitted in a tag type, whatever we call it. It's not enough to say "it has to be a constant." We need to spell out precisely what that means. Can it be a constant expression? What types are the fields permitted to have?

urandom commented 6 years ago

One such tool might for example benefit from "magic" comments in the code, for example, the structure proposed could be "annotated" with a comment like // +tag.

This seems to make the problem worse, to be honest. Instead of making tags easier to write and use, you are now introducing more magic comments. I'm sure I'm not the only one opposed to such solutions, as these comments are very confusing to users (plenty of questions on Stack Overflow). Also, what happens when someone puts // +tag before multiple types?

Value string json:"{\"Name\":\"value\"}" sqlx:"{\"Name\":\"value\"}"

This not only ignores a major part of the problem illustrated by the proposal (the syntax), but also makes tags harder to write.

More details can be put into this on how this should be a single tag per package, the struct must be exportable, and so on (which the current proposal also does not address).

Should we address the obvious? Honest question, I skipped some things as I thought they were too obvious to write.

This sounds like a problem one could be able to fix with a CL / PR to the documentation of the package which specifically improves it by documenting the tag available or how to use these struct tags.

It's still in the reflect package. Why would an average user ever go and read the reflect package? Its index alone is larger than the documentation of some packages.

Furthermore, the original proposal adds the problem that tag now is a keyword and cannot be used as in regular code. Imho, should any new keyword be added in Go 2, there should be a really good reason to do so and it should be kept in mind that it would make the porting of existing Go 1 sources that much harder given how now the code needs to be refactored before being ported over.

I edited my original proposal to remove the inclusion of a new type. This was discussed in the initial discussion with @ianlancetaylor, and I was hoping further discussion would include that as well.

urandom commented 6 years ago

This proposal still needs more clarity on precisely what is permitted in a tag type, whatever we call it. It's not enough to say "it has to be a constant." We need to spell out precisely what that means. Can it be a constant expression? What types are the fields permitted to have?

I've edited the proposal to add more information as to what types are permitted as tags.

giniedp commented 6 years ago

I like the idea of this proposal. I would love to be able to write metadata that is checked at compile time, so that I do not need to parse it at runtime.

The updated proposal makes sense to me. A metadata entry is simply a struct, and the syntax for the tags is fine. It even lets me write the metadata line by line:

1. initial proposal

type MyStruct struct {
    Value string [
        json.Rules{Name: "value"}, 
        sqlx.Name("value")
    ]
    PrivateKey []byte [
        json.Rules{Ignore: true}
    ]
}

That appears quite readable, at least to me. Fields and annotations are clearly distinguishable, even without syntax highlighting.

Now, I know this is about tags only, but if we wanted to add metadata to things other than struct fields, I would suggest putting the metadata above the thing being annotated instead of behind it.

Two examples that come to mind are C# attributes and Java annotations. Let's see how they would look on Go struct fields.

2. C# like

type MyStruct struct {
    [json.Rules{Name: "value"}]
    [sqlx.Name("value")]
    Value string

    [json.Rules{Ignore: true}]
    PrivateKey []byte 
}

That is less readable than 1.

3. Java like

type MyStruct struct {
    @json.Rules{Name: "value"}
    @sqlx.Name("value")
    Value string

    @json.Rules{Ignore: true}
    PrivateKey []byte 
}

That is less readable than 1. but way cleaner than 2.

Now, in Go we already have a syntax for multiple imports and multiple constants. Let's try that:

4. Go like

type MyStruct struct {
    meta (
        json.Rules{Name: "value"}
        sqlx.Name("value")
    )
    Value string

    meta (json.Rules{Ignore: true})
    PrivateKey []byte 
}

That is less compact. Removing the meta keyword wouldn't help, I think. Neither would using square brackets:

5. with square brackets

type MyStruct struct {
    [
        json.Rules{Name: "value"}
        sqlx.Name("value")
    ]
    Value string

    [json.Rules{Ignore: true}]
    PrivateKey []byte 
}

The single-line form looks like 2 (C#-like), and the multiline form is still not as compact as I would like it to be.

So far I still like the Java-like style (3) best. However, if metadata should never be applied to anything other than struct fields, then I prefer the style of the initial proposal (1). Now, if there are some legal issues with borrowing syntax from another language (I am not a lawyer), then I could think of the following:

6. hash

type MyStruct struct {
    # json.Rules{Name: "value"}
    # sqlx.Name{"value"}
    Value string

    # json.Rules{Ignore: true}
    PrivateKey []byte 
}

urandom commented 6 years ago

Having the field tags before the field declaration is not very readable, compared to having them afterwards, for the same reason that it is more readable to have the type of a variable after the variable. When you start reading the struct definition, you come upon some meta information about something you haven't yet read. Currently, you know that a PrivateKey is a byte array, and that it is ignored for json marshaling. With your suggestion, you know that something will be ignored for json marshaling, and only afterwards do you learn that what is ignored is a private key, which is a byte array.

giniedp commented 6 years ago

I understand your point and I partially agree. With metadata on struct fields only, your suggestion looks best (I might want to omit the comma in the multiline form, though).

My intention is to suggest a syntax that might also work on structs and methods. Those elements have their own body, and adding metadata after the body might push it out of sight if the implementation is lengthy. I think metadata should live somewhere near the name of the element it annotates, and my natural choice is above that name, since below is the body.

Syntax highlighting helps to spot the parts of the code you are interested in. So if you are interested in reading the struct definition, your eye will skip the metadata syntax.

creker commented 6 years ago

I don't think tags are important enough to care about them being out of sight. For the most part, I only care about the actual fields and look at tags only in very specific cases. It's actually a good thing that they're out of the way, because most of the time you don't need to look at them.

Your examples with @ and # prefixes look good and readable but I don't think it's that important to pursue and change existing syntax. Even C# syntax is easy to read for me being a C# programmer.

urandom commented 6 years ago

Just wanted to add another quick anecdote. I recently saw this committed by a colleague of mine, a seasoned developer:

ID int `json: "id"`

Obviously this wasn't tested at runtime, but it's clear that even the best of us can overlook the syntax, especially since we are used to catching 'syntax' errors at compile time.
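
For what it's worth, this particular class of mistake can already be caught at test time with the existing reflect API. A minimal sketch (the model package and payload type are hypothetical):

package model

import (
    "reflect"
    "testing"
)

type payload struct {
    ID int `json: "id"` // note the stray space after the colon
}

// The conventional key:"value" parser in reflect does not recognize the
// malformed tag, so Lookup fails and the typo surfaces in the test run
// instead of at runtime.
func TestJSONTagWellFormed(t *testing.T) {
    f, _ := reflect.TypeOf(payload{}).FieldByName("ID")
    if _, ok := f.Tag.Lookup("json"); !ok {
        t.Fatalf("field ID: json tag missing or malformed: %q", string(f.Tag))
    }
}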

kprav33n commented 6 years ago

I like this proposal for property tags. Would it be too much to ask for a similar feature for struct-level tags?

ianlancetaylor commented 6 years ago

@kprav33n I'm not sure what you mean by struct-level tags, but I'm guessing it's something that the language does not support today. It sounds like an orthogonal idea that should be discussed separately from this issue.

kprav33n commented 6 years ago

@ianlancetaylor Thanks for the comment. Yes, this feature doesn't exist in the language today. Will discuss as a separate issue.

Backfighter commented 6 years ago

I really like this proposal, since I have encountered problems with the current implementation of tags:

type MyStruct struct {
    field string `tag:"This is an extremely long tag and it's hard to view on smaller screens since it is so incredibly long, but it can't be broken using a line break because that would prevent correct parsing of the tag."`
}

Adding line breaks will prevent the tag from being parsed correctly:

package main

import (
    "fmt"
    "reflect"
)

type MyStruct struct {
    field string `tag:"This is an extremely long tag and it's hard to view on smaller 
screens since it is so incredibly long, but it can't be broken using a 
line break because that would prevent correct parsing of the tag."`
}

func main() {
    v, ok := reflect.ValueOf(MyStruct{}).Type().Field(0).Tag.Lookup("tag")
    fmt.Printf("%q, %t", v, ok)
    // Output: "", false
}

This is sort of documented at https://golang.org/pkg/reflect/#StructTag, which forbids line breaks between the "tag pairs" and says that the quoted value part is in "Go string literal syntax", which per the specification means an interpreted string literal. In this case, a compile-time check could have saved me some debugging time.
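
To make the "interpreted string literal" requirement concrete outside of struct tags: strconv.Unquote enforces the same rule that the conventional tag parser relies on, and a raw newline is rejected. A small standalone example (not tied to this proposal):

package main

import (
    "fmt"
    "strconv"
)

func main() {
    // An interpreted (double-quoted) Go string literal may not contain a
    // raw newline, so unquoting the value part of a multi-line tag fails.
    _, err := strconv.Unquote("\"line one\nline two\"")
    fmt.Println(err) // invalid syntax
}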

jimmyfrasche commented 6 years ago

I'm not sure whether this is worth the increase in complexity to the language, etc. But the potential benefits to tooling and the improvement to the developer experience do seem quite nice, so I think the idea is worth exploring.

I'm not concerned about the particulars of the syntax, so I'll stick with the convention in the first post. (To give the [] syntax a distinct name, I'll call it a vector.)

There's discussion about extending which kinds of types can be constants, which currently seems to be mostly taking place in #21130. Let's assume, for the moment, that constants are extended to allow a struct whose fields all have types that can be constants.

While I agree that a tag should be a defined type, I don't think that should be enforced by the compiler—that can be left to linters.

With the above, the proposal reduces to: any vector of constants is a valid tag.

This also allows an old-style tag to be easily converted to a new-style tag.

For example, say we have some struct with a field like

Field Type `json:",omitempty" something:"else" another:"thing"`

Given a tool with a complete understanding of the tags defined in the stdlib but no third party libraries, this could be automatically rewritten to

Field Type [
    json.Rules{OmitEmpty: true},
    `something:"else"`,
    `another:"thing"`,
]

Then, the third party tags could be manually rewritten or rewritten further by tools provided by the third party packages.

It would also be possible for the reflect API to work with both old- and new-style tags: Get and Lookup would search for a tag that is an untyped string with the old-style format in the vector of constants while a new API allowed introspection of the new-style tags.
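
A rough sketch of what that compatibility shim could look like, assuming tags surface as a []interface{} as in the first post (lookupCompat is hypothetical, not a proposed reflect API):

package tagcompat

import "reflect"

// lookupCompat emulates today's StructTag.Lookup over a mixed vector:
// untyped string entries are treated as old-style key:"value" tags,
// while typed entries would be handled by a separate, new accessor.
func lookupCompat(tags []interface{}, key string) (string, bool) {
    for _, t := range tags {
        if s, ok := t.(string); ok {
            if v, ok := reflect.StructTag(s).Lookup(key); ok {
                return v, true
            }
        }
    }
    return "", false
}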

I'd also note that most of the benefits of this proposal are for tooling and increased compile time checking. There's little benefit for the program at runtime, but there are some:

  1. No parser needed in already reflect-heavy code, reducing the bug/security surface while requiring fewer tests
  2. Tags can have methods, potentially allowing a better factoring of the code for handling the tags.
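
On point 2, a small sketch of what a tag type with behavior could look like, repeating the hypothetical json.Rules type from the first post so the snippet stands alone:

package json

type Rules struct {
    Name      string
    OmitEmpty bool
    Ignore    bool
}

// EffectiveName keeps the tag-interpretation logic next to the tag type
// itself instead of spreading it through the encoder.
func (r Rules) EffectiveName(fieldName string) string {
    if r.Name != "" {
        return r.Name
    }
    return fieldName
}
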
jimmyfrasche commented 6 years ago

Some points brought up in #24889 by myself, @ianlancetaylor, @balasanjay, and @bcmills

If the tags are any allowed constant they could also be named constants and not just constant literals, for example:

const ComplicatedTag = pkg.Qual{ /* a lot of fields */ }
type S struct {
  A int [ ComplicatedTag ]
  B string [ ComplicatedTag, pkg.Other("tag") ]
  C interface{} [ ComplicatedTag ]
}

which allows both reuse and documentation on ComplicatedTag

Tags, being ordinary types, can use versioning which in turn allows them to be standardized and collected in central locations.

ghost commented 6 years ago

I dislike the idea of binding tags to external packages. While field tags are very widely used by those external packages, binding them together is very hindering.

Having a json:... tag does not mean the struct (or the package holding the struct) should have a dependency on encoding/json: since Go currently does not have a way to modify field tags after the fact, it makes sense for the tag to be there so that other packages (usually in the same program) can marshal/unmarshal. Being dependent on encoding/json does not make sense.

I think field tags have problems that need to be addressed, as the OP said: they need syntax checking and tools to help with that. But binding them to a dependency feels like overdoing it.

urandom commented 6 years ago

@leafbebop

Why would importing the json package pose problems? Or, in fact, any package that provides tags? The only problem with dependencies is circular imports, which would never happen here. And so far, I haven't seen many third-party packages that use other third-party packages' tag definitions, which means that more or less all current tags, like json, already effectively place a dependency on the package that defines them, since you are more than likely importing said package somewhere else in your code anyway in order to work with the structs that have these tags.

ghost commented 6 years ago

@urandom

Say I am writing a website, and since Go is supporting Wasm, I am going full-stack. Because of that, I isolate my data model code for reusability.

Since there is no way to add field tags to a struct after the fact, for my server code to be able to cooperate with sql, I add field tags for sqlx. To do so, I import sqlx, of course.

And then I decide to reuse the data model code for my Wasm-based frontend, to truly enjoy the benefit of going full-stack. But here is the problem. I imported sqlx, and sqlx has an init function, which means the whole package cannot be eliminated by dead-code elimination. That means the binary size increases for no gain at all, and for Wasm, binary size is critical. The worst part is yet to come: sqlx uses cgo, which cannot be compiled to Wasm (yet, but I do not think Go will ever get to a point where it compiles C to Wasm; that's pure magic).

Sure, I can just copy my code, delete all the tags and build it again. But why should I? The code suddenly becomes non-cross-platform (now js is a GOOS) because of something trivial. It does not make sense.

Alternatively, I think it could remain keyword-style: instead of a package, use a string-typed keyword.

urandom commented 6 years ago

@leafbebop I'd say that, first of all, that is an incredibly specific example without a lot of real-world consequence. That being said, since you are worried about the resulting file size (again, not something a lot of us care about on a daily basis), you can also copy the struct and omit the tags. As you said yourself, that would work just fine.

Such code would be in the extreme minority. Not only would it target Wasm, but it would also have to import cgo modules. Not a lot of people will have to deal with that. And why should everyone else have to suffer the current error-prone, structureless mess, which you have to triple-check when you write it, because no tool can help you with it, and then pray that down the line you won't get any runtime errors because a field didn't match?

ghost commented 6 years ago

@urandom

No. The problem here is simply that data modeling should not depend on an external package, in theory or in practice. And requiring a dependency because of a piece of meta info that may or may not be used is non-idiomatic and does not feel like Go.

You don't need to import io to implement an io.Reader, but you need to import sqlx to define a data model that might be used by sqlx? That seems wrong to me.

And about the error-prone part: detecting errors before running is what Go is good at. But not all of that detection happens when you run go build. There are many other tools, including go vet, to check things like that, and, as far as I am concerned, a Go tool is not hard to write.

I am not against having better meta-info (be it a tag or not) about fields, because the old way is not expressive and is hard for tools to check. But binding a package to it? That is another problem.

What I propose as the keyword style is that we would somehow have a syntax like:

type S struct {
    F T meta (
        "json" {
            omitempty bool = true
        }
    )
}

P.S.: I don't think that reusing code between front-end and back-end, especially data model code, is rare; and from what I read, Go with Wasm is widely welcomed and far from a minority use case. But those points are beyond the scope of this issue.

jimmyfrasche commented 6 years ago

@leafbebop Alternately, it could just be best practice to put the tag structs in a separate package when using init to avoid the coupling.

You don't need to import io to define an io.Reader, but you do need to import time to have a time.Duration field, which seems the more apt analogy here.

ghost commented 6 years ago

@jimmyfrasche That requires rewrites of all packages using database/sql (and image), which does not seem good to me. And furthermore, it does not make sense.

Field tags and field types are very different in terms of code logic. Field types determine how the struct is organized and how the logic around that struct is written. Field tags, on the other hand, are descriptive info about a field, usually offered to other packages.

That means that when you declare a field to be a time.Duration, the type matters: the data structure holds a time.Duration and the logic of that type uses a time.Duration. But if you have a field tag like json:omitempty, the surrounding logic can often have absolutely nothing to do with json. There is a reason the current spec allows converting between structs that differ only in their tags.

Field tags are more like interfaces: both are about how "outside" code uses the type. An io.Writer does not care how the data to write is produced as long as it is a []byte, just as a field tagged yaml: name does not care how, and by which package (yaml has multiple non-official parsing packages), a value is unmarshalled into that field, as long as its name is name.

jimmyfrasche commented 6 years ago

@leafbebop Why would packages using database/sql or image need to be rewritten? They use init but they don't expose any struct tags (unless I missed something re-skimming their docs to double check).

sqlx might need a separate package to define the types to use for the tags it provides, but it would need to define those types somewhere as part of the transition, regardless, so it would mostly just be a matter of what directory the file containing those definitions goes in.

I do get the yaml problem, though. It would be bad if every yaml parser was dependent on every other yaml parser just so that they could understand each others tags. Of course, it would also be possible for them to work together on a central repository that just contains the least-common-denominator tag definitions that they all agree on. They might need to define additional tag types locally but those could be upstreamed to the central repo later. This would have been an unmanageable mess before type aliases and vgo, but it no longer seems like it would be much of an issue. The tags being ordinary types allows them to use these existing mechanisms.
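
A sketch of that least-common-denominator idea; the yamltag package and its contents are entirely made up:

// Hypothetical shared package agreed on by the yaml implementations.
package yamltag

type Name string

type OmitEmpty bool

Each parser could then keep exposing its own identifiers via type aliases (type Name = yamltag.Name), so existing struct definitions keep compiling while every implementation sees the same underlying tag types.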

ghost commented 6 years ago

@jimmyfrasche I think you missed my points, so I'll summarize them here:

  1. Philosophically, field tags are descriptive pieces of info, oriented toward an outer program. Much like implementing an io.Reader does not mean the type is meant specifically for package io, having a json tag does not necessarily mean the model is for json. It is philosophically wrong to bind such information to a concrete use, let alone a specific package.

  2. In theory, even when a field is meant for a certain topic (json, sql, yaml and so on), that topic does not bind to a certain package. Versioning and type aliases might make a unified tag system possible, but that seems complicated and non-idiomatic for no reason. As I put it, it is literally just a topic, and to address a general topic, a keyword is clear, to the point, and easy to implement.

  3. In practice, having an extra dependency on a data model package can cause problems. Every single sql driver has an init function (to register with database/sql), and if it also defines tags to interpret, that means overhead.

And there is really no drawback to using a keyword-styled structured tag. The old field tag spec is weak in two respects: expressiveness and error checking.

The expressiveness problem is solved in almost the same way as with the "package" approach, so I'll leave it there. And, as I said in a previous comment, error checking can happen outside of compilation. It is not hard at all to write a go tool that checks field tags, and it can easily scale to custom schemas.

balasanjay commented 6 years ago

@leafbebop That counter-proposal seems to lack a significant amount of detail, which makes it hard to evaluate; you should consider opening a separate proposal if you think that is the way to go. Among other things, I don't understand how such "error checking" could be implemented, nor do I understand how packages would read that data (e.g. how would the json package read whether a field is tagged with omitempty). If the bindings here are just strings, then how would we globally agree on who gets the "sql" string, or the "json" string, when validating a struct with a particular tag? There are multiple packages that deal with these subjects, and they may well want different data structures. It's also unclear how to version changes to these structures.

To discuss your objections:

1) I agree that field tags are descriptive pieces of info, oriented to an "outer" program. But I don't understand how it follows that it's philosophically wrong to bind the information to a concrete use. I also see little difference between saying "json" as a string and json as a package reference (which you import via the string "encoding/json"). In either case, you're clearly describing the exact same thing semantically. (And here, there is no init function, nor any loss of dead-code elimination.)

2) Versioning and type aliases were presumably referenced to discuss how a change like this (or any approach which namespaces symbols) enables evolution in the ecosystem, using the same tools that we use for evolving structs. Until you address how your counter-proposal would deal with that problem, it is hard to evaluate its effectiveness.

3) Let's say that we grant that this use-case is a valid one[1]; to me, it seems that it's a straightforward anti-pattern to define tag structs in packages that have init functions (or call cgo, or use unsafe, or include a giant dependency tree). First, the language does not need to prevent every anti-pattern; it can't. Second, consider if you're the author of such a sql package and you get a bug report to this effect ("I'd like to use the tag without pulling in the whole DB driver"); it is rather straightforward to fix in a completely backwards-compatible manner. Simply add a new "sqltag" package (or whatever you want to call it), move the tag types in there, and leave forwarding type aliases in the sql package for compatibility. Similarly, if many packages want to loosely couple a common set of tags, you could imagine defining an interface to represent the tag, and having the independent tag structs implement that interface (a sketch follows after this comment). This opens up all of our usual tools in API design, and makes tags amenable to them. That seems to me to be a huge advantage of this proposal, and one that would be hard to replicate without inventing lots of concepts that would be very similar to the familiar notions of structs/interfaces/type aliases/etc.

[1] For the record, I really don't think this sort of sharing is a good idea. (I'm fairly sure one of the very first entries on the protobuf API best practices list is "don't use the same protos for storage and for clients", or something to that effect.) It introduces a lot of coupling between systems that evolve very differently (compare how often a backend is deployed (maybe daily) with how often a client might update (maybe never, for users who don't update their apps on their phone or if the WASM is cached by the browser)), and does not allow you to evolve your storage (e.g. denormalize data) without making the difference visible to clients.
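
The interface idea from point 3, sketched with made-up names (sqltag, Tag, Column); this is only an illustration of the loose coupling, not a worked-out API:

// Hypothetical tag-only package, free of drivers and init side effects.
package sqltag

// Tag is the agreed-upon coupling point: independent sql packages can
// define their own tag structs and implement it without importing each other.
type Tag interface {
    SQLColumn() string
}

// Column is one such tag struct; another sql library could define its
// own type satisfying Tag.
type Column struct {
    Name string
}

func (c Column) SQLColumn() string { return c.Name }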

urandom commented 6 years ago

@leafbebop

The more I think about it, the more I don't see importing packages for tags as a problem. Everything else staying the same, if a package with side effects provides tags, they can be defined in a separate package so as not to cause problems when used in model definitions. Since this is a new concept, targeted at Go 2, potential rewrites to accommodate it shouldn't be counted against it.

And even if they don't, perhaps the compiler will be able to remove the coupling when compiling your modules. It will know that the struct tags are only used at runtime, and if it sees that these structs are not passed to any other piece of code from the third-party package, it could probably deduce that the import is only there for tags and essentially remove it, thus not executing its init functions at all.

Finally, considering the equivalent use cases of annotations in other languages (Java and C# come to mind from the ones I've used), I've not seen problems being raised by having such coupling. There, annotations are defined structures that you have to import from their defining packages.

As for code reuse, I've only read about that in articles that deal with nodejs for now. So far I haven't seen the same codebase being shared in any other environment. Of course, that's all anecdotal.

EDIT: As for the yaml problem, I don't really see that as a problem either. First, there is no guarantee that all the yaml parsers currently use the same tag, let alone the same format even if the name were the same. Second, once you settle on a yaml parser, it's unlikely that you will switch to a different one later down the road. Same with databases: once you pick one, you usually stick with it. I've personally yet to see a client want to switch to a different database down the road (anecdotal).

balasanjay commented 6 years ago

@leafbebop Alright, I think we're going to have to agree to disagree.

Most of these seem either misapplied (e.g. the LSP isn't saying that any two arbitrary software constructs should be substitutable, just types and subtypes; for packages there is no such thing as subpackages, and it generally seems to be talking about an entirely different domain), or contradictory (e.g. this definition of the LSP contradicts the desire stated in the OCP when some package invents its own notion of json's omitempty tag), or are "issues" with the current state of the world (e.g. the SRP is equally violated if your data model has any functionality and has json string tags on it).

And there are more fundamental problems with the current state of the world than dreamed of by the authors of these principles; for instance, the compiler cannot help you if you typed "jsn" (after all, some "outer" package might legitimately be interpreting these), or if you used commas instead of spaces (see the bug linked above).

I'm happy to admit the Proverb is relevant; it is legitimately a downside that there are potentially more dependencies being taken (though, again, keep in mind that this could be entirely mitigated if authors feel strongly via isolating tag structs in their own packages). But compared to the upside, this downside feels incredibly minor (to the point of insignificance).

creker commented 6 years ago

@leafbebop

which will be hindered because encoding/json asks json:omitempt

I don't see how. The proposal follows the usual Go syntax for referencing package identifiers. If you have a conflict you can use the same tools that Go provides now.

Data model codes should not be forced to depend on models they do not use.

depend on low-level modules, be it sqlx or yaml

As was mentioned, in the context of this proposal you would define tags in a separate package.

KantarBruceAdams commented 5 years ago

Could a single mechanism address the problems of both tags and magic comments?

Consider: // go:generate vs @go:generate

For an example of a user extension consider go contracts

// requires:
// * x > 1

might look better as: @requires(x > 1)

These things can then be checked by the compiler and accessed via reflection or the AST instead of by parsing comments.

Also, would it be fair to refer to this proposal as generalised attributes/annotations for Go, in line with what other languages call this? It might be worth considering whether compiler pragmas are just a subclass. In C++, attributes are replacing pragmas in some uses. C++ also has a rule that programs should be interpreted the same if attributes are removed, something which I think may not be the case for comments in Go. I realize attributes/annotations are one of the horrors of some other languages that golang would like to improve on, but the current state of struct tags and magic comments is inferior to attributes. There really ought to be a better solution, given more thought. Perhaps requiring that a tag/attribute/annotation be a constant value of a type that satisfies a special interface would be sufficient for most cases. The "requires" syntax above for adding design by contract would, however, require it to be an expression (and a way to convert that expression to a string). C++20 includes contracts as attributes.

bminer commented 5 years ago

I think it might be good to update the original proposal to include the following points mentioned throughout the discussion:

  1. Libraries, especially those with side effects (i.e. in init()), should probably expose a separate package for struct tags. For example, you may want to define a field with a JSON tag (Go v1 syntax: json:",omitempty"). This does not necessarily mean that you want to depend on the encoding/json package. By separating the tag definitions into a package like encoding/json/tag, you can now write:
import json "encoding/json/tag"

type MyStruct struct {
      Value string [json.Rules{OmitEmpty: true}]
}

The idea would be to make encoding/json/tag extremely lightweight, so it can be imported without any bloat or side-effects.

  2. To avoid doing the above everywhere and still reduce coupling, the Go compiler could detect when an import is used exclusively for tags and remove it where possible.

Both of these points are summarized in this comment: https://github.com/golang/go/issues/23637#issuecomment-404397329

Another idea to take this further... Suppose you import a package that's not even found by the compiler? If it's only used for tags, you might decide to silently ignore the import statement (gasp!). Obviously, in this case, no compile-time error checking is performed for that tag. A variant of this idea could be to put a "tag" keyword in the import declaration itself to indicate that it's only being imported for tags. I'm not really sure that I like any of these ideas, but it's food for thought. ;)

import "encoding/json" tag

I agree that coupling is not what we want, especially in annotations, but I believe the advantages of structured tag annotations and compile-time error checking outweigh this.

domdom82 commented 4 years ago

Any movement on this issue? I love the simplicity of tags in Go, but I hate having to use reflection to make use of them.

ianlancetaylor commented 4 years ago

@domdom82 There has been no movement on this issue other than what is recorded above.

Personally I think there may be something here but it seems to me that the proposal is not fully clear. It requires a notion of constant value that is not currently in the language. The suggested syntax may be ambiguous; consider

type S struct {
    f func() [2]T // Is this a result type or a tag?
}

Also I think @leafbebop raises some valid points.

Finally, this proposal does not save you from using reflect. You still need to use reflect to look at tags; you just get an interface{} rather than a string. So this proposal doesn't address your main concern.

urandom commented 4 years ago

It requires a notion of constant value that is not currently in the language.

I think I clarified this in the EDIT, where I mention that, after the discussions here, there is no need for such a construct. A struct with fields whose types can be constants, or other types whose underlying types can be constants, is sufficient.

The suggested syntax may be ambiguous;

It might be ambiguous to the lexer. I'm not sure. It's not ambiguous for the reader. Your example cannot possibly be a tag.

Also I think @leafbebop raises some valid points.

I think @balasanjay and @bminer did a good job addressing these.

Finally, this proposal won't save you from using reflection, since that is the fundamental way of consuming tags. It will make the code afterwards easier however, since it will not need to parse the obtained tag string to produce something more descriptive.

jimmyfrasche commented 4 years ago

@ianlancetaylor

It requires a notion of constant value that is not currently in the language.

I think something like the extension to constants described here https://github.com/golang/go/issues/6386#issuecomment-406824755 would suffice.

The suggested syntax may be ambiguous

I'm sure a syntax could be found if the core idea is worth doing. Since the individual tags would be regular constants it would be a matter of demarcating/enclosing the list of tags. Just to throw something out there you could do field T / [list, of, tags].

Finally, this proposal does not save you from using reflect. You still need to use reflect to look at tags, you just get a interface{} rather than a string. So this proposal doesn't address your main concern.

I don't think the main concern in the first post was having to use reflect. It looks like it can be broken down into some more concrete points:

If the struct tags are a sequence of constants then, aside from whatever syntax is used to enclose the tags:

If the struct tag has complex validation logic that can't be expressed in the type system you still have the last two problems to a degree, of course, but it's much less of a degree.

Also since most packages would be looking for a single tag of some specific type, there could be a helper like

var tag Tag
ok := field.Tag.OfType(&tag)
// if ok, tag is filled in
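
A sketch of how such a helper could be implemented, assuming the tags are exposed as a []interface{}; both ofType and that representation are hypothetical:

package tagutil

import "reflect"

// ofType copies the first tag in the vector whose dynamic type matches
// the type pointed to by dst, reporting whether one was found.
func ofType(tags []interface{}, dst interface{}) bool {
    dv := reflect.ValueOf(dst).Elem()
    for _, t := range tags {
        tv := reflect.ValueOf(t)
        if tv.Type() == dv.Type() {
            dv.Set(tv)
            return true
        }
    }
    return false
}
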
domdom82 commented 4 years ago

OK, I think I get it: upgrading tags from simple strings to types makes them easier to parse and to check at compile time. The reflection issue should still be addressed IMHO, though in another issue. Bump.

jimmyfrasche commented 4 years ago

The tags are a property of the type so reflection will always be needed to access them.

ghost commented 4 years ago

@rsc @robpike

ghost commented 4 years ago

How come this wasn't settled before Go 1, when json is part of the standard library?

mikolysz commented 4 years ago

There's one problem none of these proposals address.

Sometimes we want to completely decouple the declaration of a type from the (potentially large) number of packages that do various things with it. I believe that the information we currently pass using tags should be passed using normal Go types, wherever that information is actually needed. Sample use cases where this approach is better are:

So, instead of writing this:

// This lives in entities and shouldn't have anything to do with marshaling JSON.
type User struct {
    Name string `json:"name"` // We'd like to return Username to newer clients instead, but we can't, not without introducing a new field here, just for json.
    Age int `json:"age"`
    Email string `json:"email"` // Not everyone should see this.
    PasswordHash string `json:"-"`
    CreditCard stripe.CC `json:"credit_card"` // Maybe we want to omit some of the subfields, but we can't.

    // Ten other fields...
    SSN string // We've forgotten the tag! Now all guests can see the SSNs of our users!
}

// ...
func (u Users) Show(w http.ResponseWriter, r *http.Request) {
    // We get the requested user from the db, save into u.

    if !isAdmin(r) {
        u.Email = ""
    }

    // We forgot about SSN, somebody is going to get hacked soon.
    b, err := json.Marshal(u)
    // Do whatever.
}

We would write:

// This lives in entities and doesn't have anything to do with marshaling JSON.
type User struct {
    Name string  
    Age int
    Email string
    PasswordHash string
    CreditCard stripe.CC

    // Ten other fields...
    SSN string
}

// ...
func (u Users) Show(w http.ResponseWriter, r *http.Request) {
    // We get the requested user from the db, save into u.

    rules := []json.Rule{
        json.UseNamingStrategy(myutilspackage.SnakeCase),
        // If we forget to allow a field, we're going to omit it, but that's better than transmitting too much.
        json.Allow("name", "age"),
    }
    if isNewClient(r) {
        rules = append(rules, json.Rename("name", "username"))
    }

    if isAdmin(r) {
        rules = append(rules, json.Allow("email"))
    }

    b, err := json.Marshal(u, rules...)
    // Do whatever.
}

This is example code for illustration purposes only; function names would likely be different. We could also declare some of those options on an encoder, e.g. to always omit some fields of the stripe.CC type. For those who want rules to belong with the types, we could say that, e.g., if a type implements a JSONRules method returning a slice of JSON rules, the json package will call it and follow the rules declared there. Similar changes could be made to other packages that use tags.

All I'm saying is that we should move the code responsible for encoding, databases, validation etc. to where it belongs, instead of putting it all in the type declarations. The approach I propose is more flexible than tags, type-checked at compile time, and allows for greater extensibility, security, testability and introspection. It's also one less concept for new Go developers to learn, which matters, considering how much confusion tags cause. There's no new syntax and absolutely no changes to the language spec for now; everything can be done via the standard library and external packages. Sure, tags will still need to be supported by the Go compiler for a very long time, but they can be marked deprecated. Go 2 could remove them, theoretically, but I think it would be better to just let them stay, to minimize effort when upgrading programs. Sure, warn about their usage in go vet, write a gofix rule that converts them into real Go code for standard library packages and let other package authors do the same, but that's about it.

I'm surprised no one has suggested this before, as doing things this way seems pretty obvious to me. It seems pretty Go-like: we reapply existing concepts (plain Go code, functions, variadic parameters, interfaces, etc.) to achieve something, without needing one more magical language feature that's easy to misuse and needs to be learned. This seems almost too simple not to have been implemented already, and I'm wondering where the error in my reasoning lies, but I can't find it myself.

urandom commented 4 years ago

@devil418 This is an already-solved problem, since the current string tag system already produces coupling that may need to be avoided, and we have several solutions. Two off the top of my head are: creating a local struct with the needed tags, and creating a new type with the desired one as its base and implementing whatever interface the encoding/decoding library uses. Of course, nothing is stopping anyone from implementing what you wrote as a library, since it doesn't require any changes to the language.
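
The first of those two existing techniques, spelled out with today's encoding/json (the package and type names are illustrative):

package api

import "encoding/json"

// User is the clean entity; it carries no tags (in a real program it
// would live in the entities package).
type User struct {
    Name  string
    Email string
}

// userJSON is local to the transport layer and owns the wire format.
type userJSON struct {
    Name string `json:"name"`
}

func marshalUser(u User) ([]byte, error) {
    return json.Marshal(userJSON{Name: u.Name})
}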

tv42 commented 4 years ago

@devil418

type-checked at compile time

A lot of your example is stringly typed. json.Allow and json.Rename refer to struct fields by name; nothing about that is type-checked at compile time.

Merovius commented 4 years ago

(Apologies if I repeat what has been said - I've skimmed the thread for the points I made, but it's very possible that I overlooked something)

I agree with the general criticism that how a type is encoded into JSON shouldn't really be a property of the type, but a property of whatever does the encoding (that is, what field-names to use should be something that's somehow passed to json.Marshal, not something that is put on the struct). But I think that's likely a lost battle by now.

I also agree with the criticism that typed struct-tags require an import that shouldn't be necessary. If I annotate a struct with json-tags, that doesn't necessarily mean it ever gets encoded into json. And a program that doesn't do it, shouldn't have to compile the json package in when using the type. Importing isn't really the relevant question here, though. It is very possible for a program to import a package without it actually being compiled in - the linker can sometimes determine that a package isn't actually used and strip it out. So even if we use typed struct fields, a sufficiently clever linker might solve this problem. But as the tags are available via reflection, I think that's really hard (I assume, for example, that passing the type to fmt would then also trigger the "we need to include type-info about the tags" check).

Lastly: I also agree that it would be better, in general, if struct tags had more structure. One way to do this, which addresses at least some of the issues brought up by @ianlancetaylor, is to allow them to be a list of constant expressions instead of a single string literal. Untyped constants would get their default type assigned (to deal with questions of precision and such for the runtime representation). The way this could work is that json declares something like this:

package json

type Name string

type ImaginaryTag int

type boolTag bool

const (
    OmitEmpty = boolTag(true)
    Omit = boolTag(true)
)

which could be used as

type Foo struct {
    Foo int json.Name("bar"),json.OmitEmpty,bson.Name("bar")
    Bar float64 json.Name("baz"),bson.Name("baz")
    Baz string json.ImaginaryTag(42)
    EmbeddedThing json.Omit
}

(there is a parsing-ambiguity with embedded fields, if the constant-expression is an identifier. This could either be resolved in the type-checking phase, or there might be some color of bikeshed that doesn't have this problem)

The json package could then, via reflection, see the type of the struct tag and switch behavior based on it. Tags would still be compile-time computable (as they are constants) and type-checked. You can't have composite types as struct tags, but I believe use cases that require that are very rare, if they exist at all. Complicated use cases might even still use some custom grammar inside a stringly-typed tag. But I believe for 90%+ of use cases, the types which can already be constants in Go would suffice.
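
To show how the json side of that could look, here is a sketch of the dispatch on tag types, repeating the relevant declarations so it stands alone and assuming the constant tags reach the package as a []interface{} (that accessor is not a real reflect API):

package json

type Name string

type boolTag bool

const OmitEmpty = boolTag(true)

// applyTagValues interprets whatever typed constants were attached to a
// field; unknown types (other packages' tags) are simply skipped.
func applyTagValues(values []interface{}) (name string, omitEmpty bool) {
    for _, v := range values {
        switch t := v.(type) {
        case Name:
            name = string(t)
        case boolTag:
            omitEmpty = bool(t)
        }
    }
    return name, omitEmpty
}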

urandom commented 4 years ago

@Merovius Is there a technical reason why the tags can't be composite, if they are only made up of constant values? Otherwise, as the proposal says, regular constants are also fine and would be an improvement over the current string-based tags

Merovius commented 4 years ago

@urandom The technical reason is that constants can't be composite values. We'd need to introduce either some general way to have composite constants (there are proposals about that elsewhere) or a separate notion of a pseudo-constant literal used only in struct tags, if we want that. The complexity of doing that hardly seems worth the benefit. Constants are already well-defined for strings, bools and numeric types, so we'd really only need to touch the field-tag part of the spec to do what I suggested :)

ghost commented 4 years ago

I am still not convinced about having to take on an extra dependency just because one of my dependencies has tags on one of its fields. I think during the development of go mod, many great articles were written on why dependencies can be problematic.

@Merovius Importing is the issue. I am not confident that the yaml or sql world would have a common package for tags, and that means that when designing the data model, the library used for the field tag must already be determined, or, when writing the implementation details, the model code must be changed accordingly. And while string tags can be read by anyone as long as the keys are the same, I very much doubt that structured (or typed) tags in different packages can understand each other, so it is extra bad if there are two different packages (different kinds of sql, usually) using the model. That is a no for me.

On the other hand, I would say a linter, possibly supported by go vet, allowing some kind of package-defined protocol for tag fields, is more interesting (and probably with help from gopls, which I still haven't had time to look into enough to comment on). But as the discussion shows, it is probably just me.

Another thought occurred to me just before hitting the comment button: if we are asking packages to create a new package for tags, how about we ask them to provide a test function that can be plugged into unit tests, which simply accepts an interface{} and validates whether the tag fields are type safe, well-formed and good?
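
A sketch of what such a pluggable check could look like against today's string tags; the yamlcheck package, CheckTags, and the rule it enforces are all made up for illustration:

package yamlcheck

import (
    "fmt"
    "reflect"
    "strings"
)

// CheckTags walks the fields of a struct value and reports the first
// yaml tag that the conventional key:"value" parser cannot read back,
// the kind of malformed tag that otherwise only shows up at runtime.
func CheckTags(v interface{}) error {
    t := reflect.TypeOf(v)
    if t.Kind() != reflect.Struct {
        return fmt.Errorf("expected a struct, got %s", t.Kind())
    }
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        if !strings.Contains(string(f.Tag), "yaml") {
            continue
        }
        if _, ok := f.Tag.Lookup("yaml"); !ok {
            return fmt.Errorf("field %s: yaml tag in %q is not well-formed", f.Name, string(f.Tag))
        }
    }
    return nil
}

A user's test suite could then call yamlcheck.CheckTags(MyStruct{}) once per model type.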

creker commented 4 years ago

How about actually not allowing packages with tag definitions to contain anything else? That not only encourages but forces people to define tags in very small, separate packages that can be imported everywhere. The compiler wouldn't even need to compile those; it would only need to parse them and use the information for syntax checks.