mirage / ezjsonm

An easy interface on top of the Jsonm library.

Yet another jsoo friendly attempt #47

Closed. hhugo closed this 3 years ago.

hhugo commented 3 years ago

> Aside from one spelling nit, I have a couple of issues:
>
> 1. Tests don't run for me (Debian, x86_64, node v16.11.1). I get:
>
>     test.bc.js: internal error, uncaught exception:
>                 (Failure "TypeError: runtime.unix_dup is not a function")

You need alcotest.1.5 and js_of_ocaml-compiler.3.11

> I am very new to OCaml, so forgive me if this is a beginner error.
>
> 1. This probably doesn't fix the deep nesting issue ([[[...[[[1]]]]...]]]). I say "probably" because I can't run the js tests to check. That is less of an issue for our use case, but I am not sure it's a total non-issue. Still, I would prefer to have the explicit-stack version even if only for jsoo.

#48 supports deep nesting; a sketch of this kind of nested input is at the end of this comment.

> 1. Did we benchmark to see if the non-cps version is slower? If the speed is about the same, I favor just having one implementation.

I did not benchmark; would you be able to help with that? Maybe on #48 as well?
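For reference, here is a minimal sketch of the kind of deeply nested input at issue, assuming Ezjsonm's usual polymorphic-variant `value` type; the depth and the `value_to_string` call are illustrative only, not taken from this PR:

```ocaml
(* Sketch only: build `A [`A [ ... [`Float 1.] ... ]] nested `depth` levels
   deep. The depth constant is arbitrary; on a backend without TCO, a
   non-tail-recursive writer can overflow the stack on such a value. *)
let deeply_nested depth : Ezjsonm.value =
  let rec go n acc = if n = 0 then acc else go (n - 1) (`A [ acc ]) in
  go depth (`Float 1.)

let () =
  let json = deeply_nested 100_000 in
  (* value_to_string is assumed from Ezjsonm's interface. *)
  print_endline (Ezjsonm.value_to_string json)
```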

gasche commented 3 years ago

I'm confused by this PR. What is the goal? It looks like it is testing whether the backend supports TCO, and then:

  1. on backends that support TCO, it uses the tail-recursive implementation (which itself uses CPS)
  2. on backends that do not support TCO, it uses a non-tail-recursive implementation

What is gained with (2)? If the backend does not support TCO, (1) is not compiled to tail-recursive calls either, so it should be equivalent to (2), right? (Or is the assumption that (2), being more direct code, will be faster than (1)?) I would assume that large inputs are going to blow the stack just like before.

hhugo commented 3 years ago

> I'm confused by this PR. What is the goal? It looks like it is testing whether the backend supports TCO, and then:
>
> 1. on backends that support TCO, it uses the tail-recursive implementation (which itself uses CPS)
> 2. on backends that do not support TCO, it uses a non-tail-recursive implementation
>
> What is gained with (2)? If the backend does not support TCO, (1) is not compiled to tail-recursive calls either, so it should be equivalent to (2), right? (Or is the assumption that (2), being more direct code, will be faster than (1)?) I would assume that large inputs are going to blow the stack just like before.

I'm confused by your confusion.

Jsoo does not optimize TCO in general (which is what CPS would need), but it is perfectly able to optimize mutually tail-recursive functions. This PR, similar to #45, removes CPS for dict/object and array when general TCO is not supported. (2) is not completely tail recursive and will stack overflow on deeply nested values; see #48 for the full tail-call version.
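To make the distinction concrete, here is a minimal sketch, not the actual ezjsonm code, contrasting the two shapes, with a leaf count standing in for serialization. The CPS shape funnels every step through a continuation closure, which jsoo does not turn into a jump, while the explicit work-list shape is a plain self tail call that jsoo compiles to a loop:

```ocaml
(* CPS shape: every step threads a continuation closure; calls through `k`
   are not compiled to jumps by jsoo, so the JS stack grows with nesting. *)
let rec count_cps (v : Ezjsonm.value) k =
  match v with
  | `A vs -> count_list vs 0 k
  | `O fields -> count_list (List.map snd fields) 0 k
  | _ -> k 1

and count_list vs acc k =
  match vs with
  | [] -> k acc
  | v :: rest -> count_cps v (fun n -> count_list rest (acc + n) k)

(* Work-list shape: a single tail-recursive loop over pending values, which
   jsoo compiles to a plain loop, so depth is limited by the heap only. *)
let count_iter (v : Ezjsonm.value) =
  let rec go acc = function
    | [] -> acc
    | `A vs :: rest -> go acc (vs @ rest)
    | `O fields :: rest -> go acc (List.map snd fields @ rest)
    | _ :: rest -> go (acc + 1) rest
  in
  go 0 [ v ]
```

The real implementations in #45/#48 differ in the details; this only illustrates why one shape is stack-safe under jsoo and the other is not.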

novalis commented 3 years ago

> 1. Tests don't run for me (Debian, x86_64, node v16.11.1). I get:
>
>     test.bc.js: internal error, uncaught exception:
>                 (Failure "TypeError: runtime.unix_dup is not a function")

> You need alcotest.1.5 and js_of_ocaml-compiler.3.11

Ah, yes, upgrading alcotest did it. Thanks. Can we change the opam file to require 1.5 with these patches?

> 1. Did we benchmark to see if the non-cps version is slower? If the speed is about the same, I favor just having one implementation.

> I did not benchmark; would you be able to help with that? Maybe on #48 as well?

Yes:

#47 compared to master:

- deep arr, long arr, long obj: within margin of eyeball
- deep obj: old is ~15% faster

#48 compared to master:

- all perf is within margin of eyeball

So I think #48 is the best bet.

And #48 seems to be stack-friendly, unsurprisingly. (A rough sketch of these input shapes follows below.)
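For anyone wanting to reproduce the comparison, here is a rough sketch of how the four input shapes could be generated and timed; the generator names, sizes, and the `Sys.time` harness are mine and not what was actually run:

```ocaml
(* Hypothetical generators for the four shapes mentioned above; sizes are
   arbitrary and not those behind the numbers quoted. *)
let deep_arr n : Ezjsonm.value =
  let rec go n acc = if n = 0 then acc else go (n - 1) (`A [ acc ]) in
  go n (`Float 0.)

let long_arr n : Ezjsonm.value =
  `A (List.init n (fun i -> `Float (float_of_int i)))

let deep_obj n : Ezjsonm.value =
  let rec go n acc = if n = 0 then acc else go (n - 1) (`O [ ("k", acc) ]) in
  go n `Null

let long_obj n : Ezjsonm.value =
  `O (List.init n (fun i -> (string_of_int i, `Float (float_of_int i))))

(* Crude timing harness; a real comparison would use a proper benchmarking
   library, but this is enough for an eyeball check. *)
let time label f =
  let t0 = Sys.time () in
  ignore (f ());
  Printf.printf "%s: %.3fs\n" label (Sys.time () -. t0)

let () =
  time "deep arr" (fun () -> Ezjsonm.value_to_string (deep_arr 10_000));
  time "long arr" (fun () -> Ezjsonm.value_to_string (long_arr 1_000_000));
  time "deep obj" (fun () -> Ezjsonm.value_to_string (deep_obj 10_000));
  time "long obj" (fun () -> Ezjsonm.value_to_string (long_obj 1_000_000))
```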

hhugo commented 3 years ago

> Ah, yes, upgrading alcotest did it. Thanks. Can we change the opam file to require 1.5 with these patches?

I've fixed the opam file.

hhugo commented 3 years ago

Closing in favor of #48