Open dsnet opened 4 years ago
I don't think that's a bug or misuse; I actually depend on it in a project where there's a JSON header and then other data.
This change would break a lot of working code.
The only example for the `Decoder` type calls `Decode` in a loop. Perhaps there should be a lint or vet check that either `Decode` is called until it returns a non-nil error or the `Buffered` method is called?
The call to the `Buffered` method seems like a reliable indication of the “header before other data” case. A `vet` check in particular, if it is feasible, seems like it could address this problem without an API change.
@OneOfOne, I'm not proposing that we change the `json.Decoder.Decode` behavior. In and of itself, the method's current behavior is entirely reasonable. However, there's fairly compelling evidence of a misuse problem. A sampling of `json.Decoder.Decode` usages at Google shows that a large majority use it in a way that actually expects no subsequent JSON values.
From my observation, the source of misuse is due to:

1. Users having an `io.Reader` on hand (especially from an `http.Request.Body`) and the lack of API in the `json` package to unmarshal a single JSON value from an `io.Reader`. As a result, they turn to `json.Decoder`, but don't realize that it's intended for reading multiple JSON values back-to-back.
2. Users wanting to specify options (e.g., `DisallowUnknownFields` or `UseNumber`), which unfortunately are only hung off `json.Decoder`.

The first issue could be alleviated by adding:

```go
func UnmarshalReader(r io.Reader, v interface{}) error
```

where `UnmarshalReader` consumes the entire input and guarantees that it unmarshals exactly one JSON value.
The second issue could be alleviated by having an options pattern that doesn't require the creation of a streaming json.Decoder
. In the v2 protobuf API, we use an options struct with methods hanging off of it.
@bcmills, a `vet` check is interesting, but I'm not sure calling `Decoder.Buffered` is the right approach. Rather, I think you need to get the next token and check that it returns `io.EOF`:
```go
d := json.NewDecoder(r)
if err := d.Decode(&v); err != nil {
	panic(err) // not triggered
}
if _, err := d.Token(); err != io.EOF {
	panic("some unused trailing data")
}
```
> I'm not sure calling `Decoder.Buffered` is the right approach. Rather, I think you need to get the next token and check that it returns `io.EOF`
That would work too. So that gives three patterns that should suppress the warning:

1. Call `Decode` until it does not return nil.
2. Call `Token` until it does not return nil.
3. Call `Buffered`.

(1) and (2) suppress the warning if the user has read enough of the input to detect either EOF or an earlier syntax error. (3) suppresses the warning if a non-JSON blob follows the JSON value.
> The first issue could be alleviated by adding `UnmarshalReader`
Or a Decoder option to mark it as being single-object only.
That way fixing existing instances of this problem (of which there are many, including all over my own code) involves just adding a line, instead of refactoring.
> Or a Decoder option to mark it as being single-object only. That way fixing existing instances of this problem (of which there are many, including all over my own code) involves just adding a line, instead of refactoring.
I like how concise using a decoder often is:

```go
if err := json.NewDecoder(r).Decode(&t); err != nil {
```

It would be nice if the fix didn't require breaking up the line. I like how @dsnet's `UnmarshalReader` suggestion offers something similarly concise:

```go
if err := json.UnmarshalReader(r, &t); err != nil {
```

We could also use an alternative Decode method:

```go
if err := json.NewDecoder(r).DecodeOne(&t); err != nil {
```
> As a result, they turn to `json.Decoder`, but don't realize that it's intended for reading multiple JSON values back-to-back.
To give further evidence that this is a common issue: I'm one of the `encoding/json` owners, and I've now just realised I've been making this mistake for years.

I'm personally in favour of `UnmarshalReader`. If anyone wants to be in control of the decoder or any options, they can copy the contents of `UnmarshalReader` and adapt it as needed.

I think a vet check would also be useful, because many existing users might not realise that this common JSON decoding pattern is buggy. Coupled with `UnmarshalReader`, users would just need to slightly modify one line. A vet check that required the user to refactor the code and add a few lines of non-trivial code seems unfortunate to me, so I generally agree with @cespare.
Maybe a method like `Done() error` that reads the rest of the input and errors if there's something other than whitespace? That could also be used for reading multiple documents from the stream and quitting once you get to one with a certain value.
Shouldn't this be a concrete proposal (with possible variations)? The issue description doesn't seem actionable.
@networkimprov usually, a proposal brings forward a specific solution to be implemented. It seems like the main point of this issue was to bring up the problem, so the proposal could come later if needed.
Time and again, I've seen issues closed and further discussion directed to golang-nuts/dev because the item isn't actionable.
I agree with a new method. It reuses the options for `json.Decoder` and is very easy to adopt. But instead of `DecodeOne`, I would suggest `DecodeWhole`.
I suggest the name `DecodeFull`; it is more consistent with `ReadFull`.
I agree that `DecodeFull` makes it clearer that it decodes the entire stream, whereas `DecodeOne` sounds like it does the same thing as `Decode`: decodes only one JSON value per call.
Remove this entirely; don't leave close-to-duplicate functionality lying around, especially functionality that invites misuse.
I don't want to bikeshed names too hard, but my suggestion would be `DecodeSingle`. I find this a bit better than "full", where my first thought is that it's trying to decode the "full reader" to completion. (Also, this is a function name in C#'s LINQ, where it signifies that an `IEnumerable` contains a single element, and throws if not, which feels similar.)
Removing the `Go2` label and adding `NeedsDecision` instead. When I first filed this issue, it was more to point out an API wart that we should avoid repeating if there were ever a hypothetical v2 of `encoding/json`. However, this discussion has gone in a direction where it might make sense to do something about it today.

Let's avoid bikeshedding on names; it brings unnecessary noise to the discussion. Bikeshedding can occur in future CLs that implement this, assuming we decide to do anything at all.

To summarize the discussion so far, there seem to be three (non-mutually exclusive) approaches:
1. Add a top-level function that takes an `io.Reader` as input and consumes it entirely. This approach provides a minor convenience benefit over approach 2.
2. Add a method to `Decoder` that consumes the entire `io.Reader`. This approach is the most flexible, as it permits options to be specified and provides an obvious fix to existing misuses.
3. Add a vet check, as suggested by @bcmills in https://github.com/golang/go/issues/36225#issuecomment-567776561.

An issue with approaches 1 and 2 is that they assume `io.EOF` will always be reached on the `io.Reader`. For an `http.Request.Body`, I suspect this may be an issue if the client keeps the connection open: the server will deadlock trying to read all input from the client, but the client won't close the request body since it's waiting for a response from the server, so `io.EOF` is never observed.
Personally, I like approaches 2 and 3.
I think the io.EOF issue is a showstopper, unless a new function/method takes a size argument, which could be zero or -1 for read-to-eof.
Maybe add the list of solutions to the issue text, and retitle it "proposal: ..." ? I think that places it on the proposal review committee's agenda...
> An issue with approaches 1 or 2 is that it assumes that `io.EOF` will always be hit by the `io.Reader`.

If the code doing the decoding knows the length of the data, then it could use an `io.LimitedReader` to work around this.
I suspect, though, that the more common case is that the code doesn't know, and doesn't care: It just wants to decode until the first json object is done. If it so happens that you can detect that there's extra (non-whitespace?) data, because it came along with a read you issued anyway, then sure, return an error. But I think most users would be perfectly happy to be a bit sloppy here and have only best-effort "extra data" error detection, as evidenced by the multitude of us happily, obliviously using json.Decoder for years and years.
So I guess I would advocate "fixing" this problem by defining it away.
FWIW, I am also a fan of approaches 2 and 3.
@dsnet, I notice that this issue is currently without a milestone. Given https://github.com/golang/go/issues/36225#issuecomment-569999484, do you want to turn it into a proposal?
> 2. Add a method to `Decoder` that consumes the entire `io.Reader`. This approach is the most flexible, as it permits options to be specified and provides an obvious fix to existing misuses.

A variation on this would be to add a configuration method along the lines of `DisallowUnknownFields()`, with a name like `ConsumeWholeReader()`.
Has there been any progress on this issue?
As further evidence of how widespread the problem is:
A quick look at the JSON binding logic of Gin (https://github.com/gin-gonic/gin/blob/master/binding/json.go) and Echo (https://github.com/labstack/echo/blob/master/json.go) seems to indicate that both of those major libraries allow trailing garbage after the JSON data in a request body because of this pitfall.
Here's another example, from go-chi/render: https://github.com/go-chi/render/blob/master/decoder.go#L44
Hi @bcmills . A couple of years ago in https://github.com/golang/go/issues/36225#issuecomment-588252579 you suggested that the next step for this issue is for someone to distill a proposal out of the discussion. That doesn't seem to have happened, but I am willing to do it. Would it be best to create a new issue with that proposal, or add it in a comment on this one?
@dsnet, myself, and others have been working on a new API for encoding/json for some time - we'll share soon. This issue is fixed in the new design, for example, so I'd recommend not spending time on a proposal for now :)
@mvdan I'm in no way opposed to a rethink of the encoding/json API (I think it's obvious to anyone who uses it extensively that it is old and has gathered some dust). But I wonder if making this issue dependent on such an effort may be a case of the perfect being the enemy of the good. How confident is your group that an implementation of your design will be incorporated into the Go standard library in a timely fashion?
An advantage of addressing this issue with a new method on `Decoder` is that it not only solves the nuisance for authors of new code in a modest way, but also makes it easy to fix code that currently uses `Decode` (its behaviour for trailing data seems to be unwelcome and unintended for the vast majority of its uses). The simpler the fix is to adopt, the more existing code will get fixed.
You can import the JSON experiment in your packages now if you want: https://github.com/go-json-experiment/json
@dpw I am of course not opposed to smaller overlapping proposals, but we (the maintainers) also have to decide where to spend the time we have. My personal opinion for encoding/json is that small incremental steps aren't the best path forward.
@mvdan I appreciate that proposals and other contributions do not review themselves, and maintainers have to decide how to invest their time. But on #56733 I see ongoing discussion, involving Go maintainers, about another proposal to add a method to `Decoder`. That suggests that some maintainers consider at least some such proposals worth considering.
I hold to a sentiment similar to @mvdan that I'd rather invest most of my efforts on what's to come rather than small incremental steps on what already is, especially if what's to come resolves many issues beyond just this one. You'll notice that most of my arguments in #56733 are from the background of what's best for JSON in the long run rather than simply what's possible to implement today in the current API. That is, I needed to be mentally confident that it could be implemented in both the current and new API with the same semantics.
Even if a proposal is accepted, it doesn't mean that it will be implemented immediately. For example, #48298 has been accepted for over a year but not implemented yet in the current API. However, it is already implemented in the new API that @mvdan alluded to.
Neither @mvdan nor I are paid to work on Go, but are simply freelance contributors. Unfortunately, we have to be selective about what we spend our attention towards.
I also came here because I just wanted to use `DisallowUnknownFields`, and I was surprised that extra data is silently ignored. If anyone needs it, this is now available in my `x.UnmarshalWithoutUnknownFields` function:

```go
func UnmarshalWithoutUnknownFields(data []byte, v interface{}) errors.E {
	decoder := json.NewDecoder(bytes.NewReader(data))
	decoder.DisallowUnknownFields()
	err := decoder.Decode(v)
	if err != nil {
		return errors.WithStack(err)
	}
	_, err = decoder.Token()
	if err == nil || !errors.Is(err, io.EOF) {
		return errors.New("invalid data after top-level value")
	}
	return nil
}
```
I think just having an extra example next to the `DisallowUnknownFields` documentation would probably help many. We do not need to find a perfect solution here; even an example that merely hints at the issue would be useful.
Hi all, we kicked off a discussion for a possible "encoding/json/v2" package that addresses the spirit of this proposal. In the proposed v2 package, there is an `UnmarshalRead` function that unmarshals directly from an `io.Reader`, verifying that it encounters `io.EOF` before reporting success.
> I also came here because I just wanted to use `DisallowUnknownFields` [...]
>
> ```go
> _, err = decoder.Token()
> if err == nil || !errors.Is(err, io.EOF) {
> ```
Why is the `err == nil` required? Can't you just do `if !errors.Is(err, io.EOF)`?
> Can't you just do `if !errors.Is(err, io.EOF)`?

Yes, you can. It is just a bit more readable to me this way.
> As it stands the API does not make the common case easy since `Decode` is designed with the assumption that the user will continue to decode more JSON values, which is rarely the case.
I wouldn't say rarely. For example, Facebook and others sometimes use streams of JSON records, which the current API works well for:
```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

const s = `
{"month":1,"day":2}
{"month":3,"day":4}
`

func main() {
	d := json.NewDecoder(strings.NewReader(s))
	for {
		var date struct{ Month, Day int }
		if d.Decode(&date) != nil {
			break
		}
		fmt.Printf("%+v\n", date)
	}
}
```
> I'm observing the existence of several production servers that are buggy because the `json.Decoder.Decode` API lends itself to misuse. Consider the following: `json.NewDecoder` is often used because the user has an `io.Reader` on hand or wants to configure some of the options on `json.Decoder`. However, the common case is that the user only wants to decode a single JSON value. As it stands, the API does not make the common case easy, since `Decode` is designed with the assumption that the user will continue to decode more JSON values, which is rarely the case. The code above executes just fine without reporting an error, silently allowing the decoder to accept bad input.