bo-tato opened this issue 1 year ago
Documenting the file is probably the better option. Otherwise requiring `big` would always require `json`, even if you're not using it. Maybe tree shaking would clean it up inherently, but I'd rather be a bit more explicit when it comes to that kind of stuff anyway.

Also, these kinds of extension files should be documented within the `JSON` docs as well.
EDIT:

- `big/json`
- `uri/json`
- `uuid/json`
Documenting is certainly the first option. However, I would also consider alternative options to improve the developer experience and make these requires more implicit. It feels weird that I have to `require "big/json"` explicitly just to get the serialization behaviour of `Big`.
It shouldn't add much overhead to parse the JSON library. As long as it's not used, the semantic effort should be negligible.
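To illustrate what's at stake (a minimal sketch assuming the stdlib's `big/json` extensions), the extra require is all it takes to round-trip a `BigInt` through JSON without precision loss:

```crystal
require "big"
require "big/json" # without this, to_json/from_json on Big types don't compile

# Arbitrary-precision round-trip through JSON
big = BigInt.new("123456789012345678901234567890")
json = big.to_json
pp BigInt.from_json(json) == big
```

The serialization support itself is tiny; the whole discussion is only about who pulls in that one file.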
This should also be done for `YAML` as well ^. I also confirmed that, given a simple program that isn't using serialization, like:

```crystal
require "big"
require "yaml"
require "big/yaml"

pp BigInt.new("1") + BigInt.new("9")
```

`libyaml` is correctly not linked:
```console
$ otool -L test
test:
	/usr/local/opt/gmp/lib/libgmp.10.dylib (compatibility version 15.0.0, current version 15.1.0)
	/usr/local/opt/pcre2/lib/libpcre2-8.0.dylib (compatibility version 12.0.0, current version 12.2.0)
	/usr/local/opt/bdw-gc/lib/libgc.1.dylib (compatibility version 7.0.0, current version 7.2.0)
	/usr/local/opt/libevent/lib/libevent-2.1.7.dylib (compatibility version 8.0.0, current version 8.1.0)
	/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.100.3)
```
However, the main con of implicitly having `require "big/yaml"` (or JSON) happen when you require `YAML`/`JSON` is that you're now requiring `GMP` even if you're only using `YAML`/`JSON`:

```crystal
require "yaml"
require "big/yaml"

pp YAML.parse "---\nfoo"
```
```console
$ otool -L test
test:
	/usr/local/opt/gmp/lib/libgmp.10.dylib (compatibility version 15.0.0, current version 15.1.0)
	/usr/local/opt/libyaml/lib/libyaml-0.2.dylib (compatibility version 3.0.0, current version 3.9.0)
	/usr/local/opt/pcre2/lib/libpcre2-8.0.dylib (compatibility version 12.0.0, current version 12.2.0)
	/usr/local/opt/bdw-gc/lib/libgc.1.dylib (compatibility version 7.0.0, current version 7.2.0)
	/usr/local/opt/libevent/lib/libevent-2.1.7.dylib (compatibility version 8.0.0, current version 8.1.0)
	/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.100.3)
```
Yeah, I would not have `require "yaml"` pull in all the stuff. Instead, `require "big"` would include `require "big/yaml"`.

That is because `Big` needs global configuration (of libgmp's allocation functions) and `YAML` doesn't; you cannot use the big numbers without the program calling `LibGMP.set_memory_functions`.
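For reference, the one-time setup that `require "big"` performs looks roughly like this. This is a simplified, illustrative sketch, not the stdlib's exact source; signatures follow GMP's `mp_set_memory_functions`, and `require "big"` already performs the real call for you:

```crystal
require "big"

# Illustrative sketch: GMP is pointed at Crystal's GC allocator as a
# process-wide side effect, which is why Big needs global configuration.
LibGMP.set_memory_functions(
  ->(size : LibC::SizeT) { GC.malloc(size) },
  ->(ptr : Void*, _old : LibC::SizeT, new_size : LibC::SizeT) { GC.realloc(ptr, new_size) },
  ->(ptr : Void*, _size : LibC::SizeT) { } # the GC reclaims memory itself
)
```

This process-wide side effect is the asymmetry being pointed out: `big/yaml` has a natural "owner" (`big`), while `yaml` itself needs no such setup.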
Now let's say XML serialization is supported in the standard library, and UUID supports serialization. XML calls `LibXML.xmlGcMemSetup`; UUID requires no such configuration. Would that mean `require "xml"` includes `require "uuid/xml"`?
I was dealing with JSON with arbitrarily large numbers, and wrote a custom `JSON::PullParser` for the Big types. I was going to submit a feature request with it when I searched GitHub issues and realized this has already been implemented, if you `require "big/json"`. But this isn't documented anywhere I can find. As it's only a few lines of code, I think the best solution would be to just have it included when you `require "big"`. Otherwise we should add documentation to the `Big` pages saying that if you want to deserialize JSON you need to also `require "big/json"`.