scala / slip

obsolete — archival use only

Adding standard Json AST - https://github.com/mdedetrich/scala-json-ast #28

Closed mdedetrich closed 7 years ago

mdedetrich commented 8 years ago

READ THIS FIRST

This discussion is about designing a library for a representation of JSON that can be shared between many different implementations of JSON parsing/pretty printing/transformation/.... As it's a SLIP (and not a SIP, so no language changes!), this shared API may eventually ship as a module that's included in the standard set of Scala libraries (e.g., it would be like a modern, minimalist scala-xml without the language support).

Everyone's welcome to join the discussion, provided your contribution is truly a contribution to the precise topic of this SLIP, as I tried to capture it above. This SLIP is not trying to solve the general problem of how to modularize the standard library, for example. I'm happy to refine my summary if needed. As always, there are plenty of other channels to discuss other (related) topics.

To ensure a productive discussion, please be kind to each other, stay as focussed as possible, and refrain from comments with very little technical content. As we are still refining this SLIP, it's too early to vote.

Thanks, @adriaanm

pdalpra commented 8 years ago

TBQH, I was happy when I saw scala-xml and parser-combinators moved out of the stdlib. Of course, those libs are quite useful, but I'm in favor of a slim standard library and a set of well-supported libraries, close to the stdlib but not in it.

The question that comes to my mind about the six different JSON ASTs is: if the Reactive Streams initiative managed to get people from several projects in several languages together to agree on a common set of interfaces, why couldn't we reach the same agreement for a JSON AST too?

tpolecat commented 8 years ago

I would prefer that there be a very small standard library and no official blessings. If the effort under discussion gains wide adoption in a natural way, great, but I don't think it needs an artificial leg-up.

I would however be very happy to see some kind of open mechanism to help new users select libraries. Beginners certainly face a challenge and I'm very interested in improving their experience. I just don't think "batteries included" is the best approach.

I would also like to echo @bwmcadams and encourage charity here. We may disagree but we're all interested in improving Scala.

adriaanm commented 8 years ago

Exactly, @pdalpra, and I'd like the discussion to focus on the positive interpretation, in the spirit of the modularisation effort that we started in 2.11. We have no intention to change this.

As @odersky said:

We are talking about modules that can be updated at their own pace, with not all sharing the same stability guarantees as the core.

johanatan commented 8 years ago

I think the fact that there are a ton of 3rd party libraries to choose from, with everyone arguing that this one or that one is better for this case or that case, is exactly one of the big turn-offs of Scala and the Scala community itself. In most other languages this is a no-brainer: there are standard JSON parse and unparse functions. Are people really finding themselves in a position where an extra few percent of performance from their JSON parser is worth all the bickering?

Also, as a strong opponent of boilerplate, I think that whatever library Scala does choose to anoint as the standard needs to come from category #3 or #4 in @lihaoyi's breakdown of the options, and thus it may be premature to discuss this (since the options in those categories are still rather bleeding edge). In other words, I found both Play JSON and Spray JSON to be too wordy/repetitive for my tastes (and I'd imagine the same is true for any of the others in the lesser categories).

It also appears that the discussion about whether an AST is needed for the interoperability of the plethora of JSON libraries (which is the point of this SLIP) is misguided. If the stdlib chooses [a sufficiently powerful] one [which also seems to be a (noble) motivation of @odersky], no one has need for any of the others, and thus no need for an AST for interoperability between/with them (assuming of course that people eventually tire, at least in isolated parts of the world [aka particular teams], of arguing over those few extra percent).

hseeberger commented 8 years ago

A standard library is for standard things. These days JSON happens to be quite standard, hence I think the stdlib should support JSON – of course in good quality.

mandubian commented 8 years ago

To sum up, I'll keep 2 concepts in this discussion: modularity in stdlib (& lighter core) & eventually a common Json AST...

He-Pin commented 8 years ago

If some of you don't like the AST in the stdlib, do you like the Reactive Streams way?

kevinwright commented 8 years ago

Thus far... I agree. But I also strongly agree with concerns about lack of flexibility once something is in the standard library. It's almost ironic that the existing JSON parser is a clear example of this!

As an example of the sort of thing that scares me: play-json implicit formats don't use implicit priorities. I implemented a shapeless wrapper for play-json (akin to spray-json-shapeless) and ran into conflicts between Writes[Map[K,V]] and Writes[Seq[(K,V)]]... Seeing this sort of issue baked-in for a full major release cycle of Scala would be incredibly painful.

If the focus is on an easily-interoperable AST, I do think this could be made to work - though with some caveats:

json-ast shows that there clearly is enough agreement between existing implementations to make this SLIP credible, just so long as any remaining points of contention there are cleared up and the emphasis is very strongly on easing interop/substitutability for libraries from the broader ecosystem.

tonymorris commented 8 years ago

a disagreement without qualification or even fairly assumable faction-driven opinion lumps someone into a targetable group such as "scalaz people".

Thank you @bwmcadams for pointing out this bullying. This behaviour is so pervasive among Scala users that one could very easily be forgiven for responding in kind!

One day, if I am to be wishful, the bullies will figure this all out.

djspiewak commented 8 years ago

Just to tease out a point that's been mooted a couple of times here (notably by @non and @odersky)… What if the solution is to just resurrect (in a significantly superior and, you know, actually useful form) bzr? I mean, the proposal in the OP is already regarding a separate module, updated on its own schedule. That's basically one step removed from a completely independent library, with the difference being discoverability and standardization. The latter is a property that is desirable in a few cases (the postgres case for example), but generally not something that most people would benefit from. The former would benefit everyone.

Bzr failed the first time around largely because of Maven and the need for general integration into the Java ecosystem, as well as the fact that it was a pretty crummy tool in general (no offense to the authors!). But I wonder if something like an "actually working bzr" would be something that the community would find helpful, and it certainly seems like something of the sort would completely eliminate the need for these "separate but blessed" modules.

@kevinwright I would caution against the dangers of underestimating the difficulty of getting JSON numerics right. @tixxit and @non (and many others, I'm sure!) know the pain of this hardcore. This again dips straight into tradeoff land for most people (at least, without relying on something like Spire's Real, which is an external dependency). Taking the obvious split algebra solution (Long, Double and BigInteger) is a no-go for a large class of applications and carries an enormous complexity cost. Taking the "worst is best" approach with just BigInteger is still a no-go for nearly everyone, and carries a lot of pain and suffering for the common cases. This issue alone (forget about ordered fields) is enough to make a standardized AST a very, very dicey concept.

mdedetrich commented 8 years ago

@djspiewak

@kevinwright I would caution against the dangers of underestimating the difficulty of getting JSON numerics right. @tixxit and @non (and many others, I'm sure!) know the pain of this hardcore. This again dips straight into tradeoff land for most people (at least, without relying on something like Spire's Real, which is an external dependency). Taking the obvious split algebra solution (Long, Double and BigInteger) is a no-go for a large class of applications and carries an enormous complexity cost. Taking the "worst is best" approach with just BigInteger is still a no-go for nearly everyone, and carries a lot of pain and suffering for the common cases. This issue alone (forget about ordered fields) is enough to make a standardized AST a very, very dicey concept.

This is very true

Like I have said before, the official spec maintains that JSON numbers are unbounded (http://www.json.org/). They can have any precision, which means, at least for Scala, the only valid representation for them is a BigDecimal.

The current json4s library has separate implementations for JNumber, one being a BigDecimal and the other being a Double, and it's created a lot of confusion in json4s land

kevinwright commented 8 years ago

@djspiewak I totally agree WRT the issue of numeric representation, hence calling it out as one of my two caveats. I do, however, believe that it should be possible to make this a tunable option at parse time.

djspiewak commented 8 years ago

@mdedetrich

Like I have said before, the official spec maintains that JSON numbers are unbounded (http://www.json.org/). They can have any precision, which means, at least for Scala, the only valid representation for them is a BigDecimal.

Just to clarify, BigDecimal alone cannot represent arbitrary numbers, meaning that it actually fails at satisfying its only reason for existence! @tixxit alluded to this earlier. Scala (well, also Java) entirely lacks any numeric type sufficient to represent certain values representable in JSON.

@kevinwright Maybe (re: tunable). I've learned not to underestimate the creativity of this community. :-) I've tried doing similar things in the past though without success, so I speak from a position of fear more than anything else.

puffnfresh commented 8 years ago

@mdedetrich performance and brokenness of this:

There is no such thing as a standard AST for JSON

@tixxit has already pointed this out. Do you represent a number as a Double? BigDecimal? Something else?

It's not as simple as an AST. You have to worry about promotions, lazy creation, etc. if you want a working and fast JSON library.

adriaanm commented 8 years ago

Thanks for keeping the discussion on point. How to deliver modules (in general) is orthogonal to the module's functionality, so that should really be a topic for another discussion. Let's focus on carving out the API for this particular JSON AST module, which is clearly challenging enough in itself.

So, for future comments, ask yourself: "does this apply only to an API for a shared representation of JSON ASTs?" If yes, go ahead and comment. If not, please save this part of your thought for another discussion.

jroper commented 8 years ago

@adriaanm this is exactly the work that I was part of.

As the maintainer of play-json, I am indifferent to this slip. I see a need, which arises from voices in the Play community, for a json AST that is bigger than Play, that has good interoperability with other json libraries outside of Play. I think the new json4s-ast effort will meet that goal, it already has the backing and input of a number of json library maintainers, and I'm excited to see it succeed. I don't think being an official "rubber stamped" Scala library is a prerequisite to that, but I also don't think that would be a bad thing. @odersky's argument about being accessible to newcomers is a good point - an official library helps guide newcomers, making Scala more accessible.

My only concern about this slip is the potential for it to get in the way of solving the problem that I'm hoping json4s-ast to solve - the worst thing that could happen here is nothing - I wouldn't want to be in a situation where we hold off moving to json4s in Play because it might become part of the Scala stdlib, but then that never happens.

johanatan commented 8 years ago

Would it not be possible to attempt to parse each number as a 32-/64-bit int/float first and only if those fail fall back to progressively larger and larger bit widths? Of course, this would not perform well with a file full of them but in that case there could be a tunable option/mode to essentially skip the smaller bit widths (and I'd imagine that most JSON out there is not full of huge numbers).
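A minimal Scala sketch of this progressive-widening idea (the type and function names here are illustrative, not from any proposed API): try the cheapest representation first, and only fall back to arbitrary precision when it fails.

```scala
import scala.util.Try

sealed trait JNum
final case class JLong(value: Long)      extends JNum
final case class JBig(value: BigDecimal) extends JNum

def parseNumber(s: String): Option[JNum] =
  Try(JLong(s.toLong)).toOption                // fast path: fits in a Long
    .orElse(Try(JBig(BigDecimal(s))).toOption) // fallback: arbitrary precision

// parseNumber("42")                   -> Some(JLong(42))
// parseNumber("3.14")                 -> Some(JBig(3.14))
// parseNumber("92233720368547758080") -> Some(JBig(...))  (> Long.MaxValue)
```

A tunable "skip the narrow widths" mode, as suggested above, would just reorder or drop the fast path.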

mandubian commented 8 years ago

For Json Numbers, Argonaut representation is quite complete...

mdedetrich commented 8 years ago

@djspiewak

Just to clarify, BigDecimal alone cannot represent arbitrary numbers, meaning that it actually fails at satisfying its only reason for existence! @tixxit alluded to this earlier. Scala (well, also Java) entirely lacks any numeric type sufficient to represent certain values representable in JSON.

Do you have reference for this? I am just curious what the issues for BigDecimal are (since we use it all the time)

mdedetrich commented 8 years ago

@mdedetrich performance and brokenness of this:

There is no such thing as a standard AST for JSON

https://www.ietf.org/rfc/rfc4627.txt and http://www.json.org/?

johanatan commented 8 years ago

It doesn't have to be standard-- it just has to be ours. And then it will be de facto standard. :)

ktoso commented 8 years ago

Coming back to the technicalities, and why I think it is an interesting and useful proposal.

In hope of highlighting the use-case for this SLIP more explicitly (which was mentioned, but not given much weight/credit somehow), I'd like to showcase the existing pain that the lack of a common AST currently causes in libraries (using Akka HTTP as an example, though I imagine all similar libraries which "deal with JSON", but not as their core, have a similar problem).

Use case: Library doing something with JSON, without the need of picking a parser (users will provide one)

Requires: For this use case to improve on the status quo, only the common AST is needed.

Status quo: A library (Akka HTTP or Spray) that deals with JSON (unmarshalling/marshalling JSON requests/responses) does not want to pick one marshalling/unmarshalling library to support, but leaves it up to the users to pick their favourite library (as @bwmcadams mentions, different people like different projects or have different needs from an API perspective).

Let's imagine an API that can complete an HTTP request with an object, and we can marshal it into a response entity by chaining marshallers (T => JsObject => HttpResponse (simplified chain)). Using pseudo code, this is what we must provide (or ask the user to provide):

def complete(response: => HttpResponse)
// to support 
complete(Person())
// we need to provide both
// this is some library, doing the marshalling (spray-json, argonaut, play-json etc)
type toJs = Person => sprays.JsObject 

// we need to convert JsObject to a response (e.g. sprays.JsObject => akka.HttpResponse)
type sprayJsToResponse = sprays.JsObject => HttpResponse

// we also need Play
type playJsToResponse = play.JsObject => HttpResponse

// we also need Argonaut
type argonautJsToResponse = argonaut.JsObject => HttpResponse

// [and we also need ...] * n

Since the library does not want to depend on all JSON libraries on the planet, these are shipped as additional dependencies, which puts a burden on library authors (maintaining many implementations) and on library users, who need to pick the right support libraries (a less "it just works" experience) and hope they're up to date (or implement that conversion manually).

What would improve: With a common AST this becomes simple: the library can provide marshalling from common.JsObject => HttpResponse, and JSON libraries can emit the common.JsObject format.

Users could simply rely on:

// provided by library:
type commonJsToResponse = common.JsObject => HttpResponse
def complete(response: => HttpResponse)
// needed from user (via their favourite JSON library):
type toJs = Person => common.JsObject
// to support:
complete(Person())

Possible problems:

Analysis: The pain which could be addressed by the common AST is a real and existing thing (from our perspective, explained above), which is why I find the SLIP (specifically the AST) attractive, and +1 that part of it. I (personally) believe this could be made to work, and it would be pretty useful; I do not have opinions about shipping a built-in parser though (I, personally, am most interested in nailing the AST, which the SLIP is about).

Hope this helps to shift focus a bit onto that original use-case again.

I'm also seconding @jroper's comment on json4s-ast: it solves the thing I (we) need, but we're somewhat stuck on deciding what to support until this SLIP has been decided upon.

My only concern about this slip is the potential for it to get in the way of solving the problem that I'm hoping json4s-ast to solve - the worst thing that could happen here is nothing - I wouldn't want to be in a situation where we hold off moving to json4s in Play because it might become part of the Scala stdlib, but then that never happens.


Note on Java's JSR-374 (JSON) – I dug a bit more, and it's only part of Java EE, so it would not work for us as an interop thing; I thought it was aimed at the JDK itself.

One perhaps useful spin on the Number problem is the approach taken there: https://docs.oracle.com/javaee/7/api/javax/json/JsonNumber.html It's a wrapper which can, but does not have to, use BigDecimal – for "fast" parsers one would emit a JsonNumber backed by a double. This can be figured out during parsing and/or configuration.

djspiewak commented 8 years ago

@mdedetrich The exponent for BigDecimal is an Int, while the unscaled value is an arbitrary-precision integer. This means that any values k = a(10^b) such that a is an integer and b < MIN_INT or b > MAX_INT are unrepresentable. @tixxit can probably go into more details, perhaps with references that are more authoritative than just "Daniel says so".
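To make the limit concrete, here is a small demonstration using only the standard library: a number whose exponent overflows an Int is a valid JSON number, but BigDecimal refuses to construct it.

```scala
import scala.util.Try

// The scale (exponent) 10 fits in an Int, so this parses fine.
assert(Try(BigDecimal("1e10")).isSuccess)

// 1e2147483648 has exponent Int.MaxValue + 1: a perfectly valid JSON
// number, but java.math.BigDecimal cannot hold the scale and throws
// NumberFormatException, so the Try fails.
assert(Try(BigDecimal("1e2147483648")).isFailure)
```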

mdedetrich commented 8 years ago

@djspiewak Cheers, thanks. Kind of unfortunate, since we don't have a proper unbounded real type in the stdlib (or a ring like in Clojure/Spire), but maybe it's something we can deal with?

kevinwright commented 8 years ago

@djspiewak One idea, make JsNumber totally abstract, with .toString, .toInt, .tryToInt, .toBigDecimal, .tryToBigDecimal, etc. Could even have a form of .toBigDecimal that accepted precision args.

Provide an instance (perhaps implicitly?) of String => JsNumber to the parser.

I can even see some valid use-cases for a JsNumber that simply stored the string representation.
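One possible shape for such a fully-abstract JsNumber, using the method names from this comment (everything else, including the string-backed variant, is illustrative, not a proposed implementation):

```scala
import scala.util.Try

abstract class JsNumber {
  def stringValue: String                            // canonical representation
  def toInt: Int                          = stringValue.toInt
  def tryToInt: Option[Int]               = Try(stringValue.toInt).toOption
  def toBigDecimal: BigDecimal            = BigDecimal(stringValue)
  def tryToBigDecimal: Option[BigDecimal] = Try(BigDecimal(stringValue)).toOption
  override def toString: String           = stringValue
}

// The "just store the string representation" variant mentioned above.
final class StringJsNumber(val stringValue: String) extends JsNumber
```

A parser would then be handed a String => JsNumber factory (perhaps implicitly, as suggested), leaving the backing representation entirely up to the implementation.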

ktoso commented 8 years ago

@kevinwright

@djspiewak One idea, make JsNumber totally abstract, with .toString, .toInt, .tryToInt, .toBigDecimal, .tryToBigDecimal, etc. Could even have a form of .toBigDecimal that accepted precision args.

Yes, that's what the Java one did as well, in order to not force impls to use BigDecimal, but allow it if a parser wants to.

See:

https://docs.oracle.com/javaee/7/api/javax/json/JsonNumber.html It's a wrapper which can but does not have to use BigDecimal

djspiewak commented 8 years ago

@mdedetrich @kevinwright In my experience, the best non-Spire way to deal with JSON numbers is to just leave them in String form and make clients parse them as whatever they want. You already have the char[] in memory due to parsing, so there aren't really any performance issues inherent (aside from a little extra ongoing memory footprint). So basically, Kevin's third option.

Having a totally abstract JsNumber seems appealing, but it's pretty darn difficult to do anything useful with it without having a concrete way to "get a number" (for some definition of "number").

mandubian commented 8 years ago

@kevinwright someway like https://github.com/argonaut-io/argonaut/blob/master/argonaut/src/main/scala/argonaut/JsonNumber.scala ?

mdedetrich commented 8 years ago

@kevinwright I suppose what it comes down to is whether we should have an underlying representation or not.

Currently the underlying representation for fast is a String (with no runtime checking, for speed); for safe it's a BigDecimal. Both have converters to different number formats.

We can make it so there isn't an underlying representation for safe, and instead make JNumber a sealed abstract class with all of the different number types as options?

jroper commented 8 years ago

Technically you can't represent JSON numbers with String either, since that's limited to expressing decimal numbers with up to 2^31 digits... maybe we should use a stream instead... I think we need to draw a pragmatic line somewhere. BigDecimal works well in my opinion.

mdedetrich commented 8 years ago

@jroper

Technically you can't represent JSON numbers with String either, since that's limited to expressing decimal numbers with up to 2^31 digits... maybe we should use a stream instead... I think we need to draw a pragmatic line somewhere. BigDecimal works well in my opinion.

Yeah I am in agreement with you. I see the use case for supporting such huge numbers as very rare, and there is nothing stopping people from making their own AST for such rare cases.

For the vast majority of people, String/BigDecimal will represent what they need. I see the merit in having different types for JNumber, and JNumber being a sealed abstract class, but I think the current split of safe/fast can cater for those cases

kevinwright commented 8 years ago

@mdedetrich I'm actually wondering if String would always be the fastest solution. I can easily imagine a streaming parser for working on large documents where you take a performance hit for crossing memory pages or CPU cache limits in the resulting AST.

On the other hand, it's ideal for a number of my own use-cases where I often pass around JSON fragments without needing to internally manipulate them.

djspiewak commented 8 years ago

@jroper Except anyone who wants performance of any magnitude, or safety in the face of certain unbounded computations inside of BigDecimal (radix calculations suck and are surprisingly pervasive), or any host of other use-cases other than "represent the things" is going to be very upset about BigDecimal.

This is literally the problem with BigDecimal: it satisfies no one. There is no use case which is correctly handled by BigDecimal. Its precision is incomplete. Its performance is terrible. Its reliability is actually non-existent, since some operations can be (surprisingly) non-terminating. There is nothing that it does well, and so it basically represents the worst of all possible trade offs (since nothing is gained and everything is lost).

ktoso commented 8 years ago

@djspiewak not forcing parsers into BigDecimal land is exactly what has been proposed in this thread by linking to Argonaut's and/or Java's JsonNumber impls. Sounds like a concept we should explore - and impls can do whatever they want (most likely not BigDecimal ;-))

mdedetrich commented 8 years ago

@kevinwright

I think the only faster version is to use streaming (which can also deal with the cache limits), which is what you said. However I suspect that streaming would happen before the construction of the AST

@djspiewak

The main thing for safe is the runtime cost of instantiating the JNumber (or BigDecimal). People are not going to be doing maths on internal JNumbers when passing them around. We just need a reasonable guarantee that we don't lose precision when parsing a JSON number into a JNumber. safe isn't designed for pure speed.

Like I said, I am definitely open to making JNumber a sum type of the other number types if it's a superior technical solution. The idea for safe, though, is that we don't lose information for reasonable use cases, which can definitely happen more often if you use Double

jroper commented 8 years ago

@djspiewak That's quite an extreme view to have, considering an entire ecosystem of Play users are using it right now in production.

I agree, you would be crazy to do any mathematical calculations with BigDecimal. But what BigDecimal does do well is allow the representation of exact decimal values in a generic way, for the purposes of data transfer. It has its limits, and performance consequences, but these have not been enough to surface anywhere in the Play issue tracker or mailing list. The typical Play user uses it for nothing more than transferring data from one medium to another, the JSON is parsed in BigDecimal, and then that is converted to a more appropriate data type in a case class, eg Int. And for that purpose, it does satisfy.

djspiewak commented 8 years ago

@jroper If you're just transferring data from one medium to another, then why not use String? If you're performing maths, then BigDecimal is a minefield wrapped up inside a half-licked lollipop.

At the risk of appealing to authority, we have been down this road before. Several times actually. Remember lift-json? Even web framework users run into the consequences of BigDecimal. Hard. And one of the charms of BigDecimal is its implementation makes it impossible for users to do anything superior if their use case calls for it (i.e. you can't recover your lost performance, you can't restore your lost precision, and you can't perform the broken operations).

kevinwright commented 8 years ago

One thing is becoming increasingly obvious here.

The main point of concern that people have, technically, is over the underlying representations of data (both objects and numbers). A standardised AST that abstracted over both of these would get a lot more community acceptance.

wheaties commented 8 years ago

@odersky ok, if the desire really is a "batteries" included, then having a modular "scala" where the default is to import all the "important" libraries and a more advanced "pick a package" approach is something I can definitely get behind.

mdedetrich commented 8 years ago

@jroper If you're just transferring data from one medium to another, then why not use String? If you're performing maths, then BigDecimal is a minefield wrapped up inside a half-licked lollipop.

BigDecimal gives you a guarantee it's an actual valid number; String doesn't. Hence the distinction between safe/fast.

One thing is becoming increasingly obvious here.

The main point of concern that people have, technically, is over the underlying representations of data (both objects and numbers). A standardised AST that abstracted over both of these would get a lot more community acceptance.

Definitely see this point of view, but I can't see a situation where people are doing internal maths on a JValue. What most people seem to do, as @jroper said, is they do some querying, get a JNumber, and then convert it to a more appropriate type (such as a double or int)

I personally have zero problem in adding this abstraction, but is there a compelling justification for it?

djspiewak commented 8 years ago

@mdedetrich

BigDecimal gives you a guarantee it's an actual valid number; String doesn't. Hence the distinction between safe/fast.

A problem easily solved. Private constructor, used by the parser, takes a String and constructs a JNumber. Public constructor (well, factory method on companion object) takes a String and validates with a cached Regex.
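A minimal sketch of that split (the class and method names are hypothetical; the regex follows the RFC 4627 number grammar):

```scala
final class JNumber private (val value: String) {
  override def toString: String = s"JNumber($value)"
}

object JNumber {
  // Compiled once and cached, per the suggestion above.
  private val JsonNumber =
    """-?(?:0|[1-9]\d*)(?:\.\d+)?(?:[eE][+-]?\d+)?""".r.pattern

  // Public, validating factory method.
  def apply(s: String): Option[JNumber] =
    if (JsonNumber.matcher(s).matches()) Some(new JNumber(s)) else None

  // What the parser would call once it has already validated the digits;
  // in a real library this would be package-private rather than public.
  def fromParser(s: String): JNumber = new JNumber(s)
}
```

So JNumber("3.14e-2") is accepted, while JNumber("01") is rejected, since JSON forbids leading zeros.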

jroper commented 8 years ago

I would actually be ok with String too, as I would with an abstract JsNumber type that offered the user many different ways to interrogate the value. But I don't have any objections with the current proposal put forward in json4s.

LeifW commented 8 years ago

@djspiewak I assume you mean "sbaz", not "bzr"?

mdedetrich commented 8 years ago

A problem easily solved. Private constructor, used by the parser, takes a String and constructs a JNumber. Public constructor (well, factory method on companion object) takes a String and validates with a cached Regex.

Sure, I am happy with that. You can easily make a pull request at https://github.com/json4s/json4s-ast to implement this (or I can do it myself if you don't have the time)

soc commented 8 years ago

I think @lihaoyi made a very good point early on:

1. Just AST; bring your own parser and serializer and operations
2. AST + operations + parser + serializer, bring your own case-class-mapper/etc.
3. AST + operations + parser + serializer + case-class-mapper a.l.a. shapeless, uPickle, etc.
4. Fancy operations: lenses, traversals, zippers, etc.

If we only look at standardizing the AST, I think we can side-step the whole debate, because it just doesn't matter

because all the various JSON libraries will just depend on the AST, and drag it in automatically, and most users will never see it.

Is this correct?

djspiewak commented 8 years ago

@LeifW I did mean sbaz. No idea why I had bzr on the brain. Thanks!

@mdedetrich No time at present. :-(

mdedetrich commented 8 years ago

@djspiewak No problem, have made an issue at https://github.com/json4s/json4s-ast/issues/16 and will look into it myself

@soc

because all the various JSON libraries will just depend on the AST, and drag it in automatically, and most users will never see it.

Is this correct?

Generally yes. The idea is that users will pass around the same JValue (just like we pass around Strings or Doubles now). So if you get a JValue from your web framework, you can easily throw it into your library that happens to interface with CouchDB or something

soc commented 8 years ago

Ok, great. Then I think we all need to calm down a bit. The sky is not falling.

Regarding the unbounded number thing ... I think it's necessary that people can pick different number types based on their use-case, which of course creates the question whether this should be something which can be "configured" in the parser (easy) or should be dealt with during marshalling to Scala instances (not hard, but requires shapeless). No number type will ever satisfy the JSON spec, and I think it's fine if the limitations are spelled out clearly.

I think something like

case class Foo(numberA: Double, numberB: Int, numberC: BigInt)
val instance: Foo = someJson.as[Foo]

is superior to trying to make decisions in the parser.

jedws commented 8 years ago

I propose we replace this AST with the Argonaut AST.

The Argonaut AST is significantly better than this one (or two actually) as it doesn't expose so much of its innards, and it already has far superior number handling.

Rather than modifying/improving these two experimental new AST APIs from within a (fractured and controversial) SLIP process, we should propose a heavily production-hardened AST instead.

mdedetrich commented 8 years ago

Regarding the unbounded number thing ... I think it's necessary that people can pick different number types based on their use-case, which of course creates the question whether this should be something which can be "configured" in the parser (easy) or should be dealt with during marshaling to Scala instances (not hard, but requires shapeless).

I have one issue regarding using different number types (at least for safe), in that it's possible to lose information from numbers, and it's something that the user has no control over.

Assuming we have some JSON input (whether it's from an HTTP request or some database), and some parser creates a safe JValue (over which the user has no control), then the user needs a reasonable guarantee that the safe JValue is a "correct" representation of that JSON.

If we allow parsers to specify a number type (let's say Double), it's possible for that parser to lose precision, and the user would have no idea until it's too late. That's why I am more open to @djspiewak's idea.
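A concrete instance of that silent loss, using only the standard library: a Double has a 53-bit significand, so not every integer above 2^53 survives a round trip through it.

```scala
// 2^53 + 1 is a valid JSON number, but Double cannot represent it:
// it rounds silently to the nearest representable value, 2^53.
val viaDouble = "9007199254740993".toDouble.toLong
println(viaDouble)  // 9007199254740992, off by one

// A BigDecimal-backed parser preserves the value exactly.
val viaBig = BigDecimal("9007199254740993")
assert(viaBig == BigDecimal(9007199254740993L))
```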

@jedws The above point does show that even the Argonaut AST isn't perfect: it doesn't have the above guarantee

Retract the above: having looked at Argonaut in more detail, it may do something similar with a specific JNumber