lassik opened this issue 4 years ago
Now there is:

```
$ docker run schemers/gauche:head scheme-eval "(print (+ 1 2 3))"; echo "exit $?"
6
exit 0
$ docker run schemers/gauche:head scheme-eval "(print (+ 1 2 wrong))"; echo "exit $?"
gosh: "ERROR": unbound variable: wrong
exit 1
```
This demonstrates that error messages go to stderr:

```
$ docker run schemers/gauche:head scheme-eval "(print (+ 1 2 wrong))" >/dev/null; echo "exit $?"
gosh: "ERROR": unbound variable: wrong
exit 1
```
You need to call `print` (or `write`) manually; otherwise nothing is printed.
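For example, the behavior described above implies (hypothetical transcript, not actually run here):

```
$ docker run schemers/gauche:head scheme-eval "(+ 1 2 3)"            # no output
$ docker run schemers/gauche:head scheme-eval "(write (+ 1 2 3))"    # prints 6
```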
How should we handle R6RS vs. R7RS `eval` for Schemes that have both? Should we permit passing custom command-line arguments in addition to the expression to be evaluated?
What about using `--mount` as part of the `docker run` command and evaluating a file instead of a string?
That is definitely possible, but slightly more complex. We'd have to make a tempfile on the host OS, give the `--mount` flag, and write a somewhat more complex runner script. The current one is very simple.
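As a rough sketch of what the mount-based approach might look like (assuming the existing `scheme-script` entry point can run a mounted file; untested):

```sh
# Write the program to a host tempfile and bind-mount it into the container.
tmpfile=$(mktemp)
echo '(write (+ 1 2 3))' > "$tmpfile"
docker run --mount "type=bind,source=$tmpfile,target=/code.scm,readonly" \
  schemers/gauche:head scheme-script /code.scm
rm "$tmpfile"
```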
However, you're right that evaluating a big expression is not very convenient this way.
One option is to read code from stdin, which would let us pass an entire source file without a mount.
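A sketch of the stdin idea (the convention of reading code from stdin when given `-` is hypothetical; `scheme-eval` does not do this yet, and `tests.scm` is just an example file):

```sh
# Hypothetical: scheme-eval reads the program from stdin when passed "-".
# docker run -i keeps stdin open so the file can be piped through.
docker run -i schemers/gauche:head scheme-eval - < tests.scm
```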
Ideally we'd have a DSL on the host side to write portability tests like this:

```scheme
(define-test "Is `constantly` defined like in Common Lisp?"
  (for bigloo chibi gauche guile)
  (eval (begin (write ((constantly 123) 1 2 3)) (newline)))
  (expect-value 123))
```
Then a runner would run them. Perhaps the list of Schemes in `(for ...)` should be given by the user and not hard-coded.
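A host-side runner might look roughly like this in Gauche (all names here, such as `docker-eval` and `run-test`, are assumptions for illustration, not an existing API):

```scheme
(use gauche.process)   ; process-output->string
(use srfi-13)          ; string-trim-right

;; Evaluate EXPR (a string) in IMPL's container, returning its stdout.
;; :on-abnormal-exit :ignore so a failing test doesn't raise an error.
(define (docker-eval impl expr)
  (process-output->string
   (list "docker" "run" (string-append "schemers/" impl) "scheme-eval" expr)
   :on-abnormal-exit :ignore))

;; Run one test against a user-supplied list of implementations.
(define (run-test name impls expr expected)
  (for-each
   (lambda (impl)
     (let ((out (string-trim-right (docker-eval impl expr))))
       (print (if (equal? out expected) "PASS" "FAIL") " " impl ": " name)))
   impls))

;; Example:
;; (run-test "constantly like CL?" '("gauche" "chibi")
;;           "(begin (write ((constantly 123) 1 2 3)) (newline))" "123")
```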
For evaluating a lot of code at once, yet another option is to pipe an uncompressed `.tar` archive to stdin, extract it in the container, and run it. This would let us ship as many files as we want without worrying about setting up mounts.
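Roughly like this (the in-container extraction step is hypothetical; the `scheme-eval-tar` entry point is made up for illustration and none of the images has it today):

```sh
# Stream a source tree to the container; the container would extract
# the archive and run the named entry file.
tar -cf - src/ | docker run -i schemers/gauche:head scheme-eval-tar src/main.scm
```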
For portability-testing a feature across many implementations, it would be great if we could do something like this:

```
docker run schemers/bigloo scheme-eval "(+ 1 (* 2 3))"
```

It would evaluate the expression.
If this works consistently across containers (as `scheme-script` now does for many of them), it would be easy to write a runner script that launches containers for a test, then collects and curates the results.
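For instance, a minimal runner along these lines (image names follow the `schemers/*` pattern above; that every listed image ships `scheme-eval` is an assumption):

```sh
# Evaluate the same expression in several implementations and record
# each one's output and exit status.
expr='(write (+ 1 (* 2 3)))'   # write is needed: scheme-eval prints nothing itself
for impl in bigloo chibi gauche guile; do
  printf '%s: ' "$impl"
  docker run "schemers/$impl" scheme-eval "$expr"
  echo " (exit $?)"
done
```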