mbrukman opened this issue 7 years ago
There is also an open PR for this by @mikedanese that dates from before @yugui's framework. We didn't finish it because, without a catch construct, it's impossible to test failure cases. The absence of "catch" is out of an abundance of caution, because it caused grief with BCL. When we last discussed this, there were 3 cases where catch would be useful:
1) Testing failure cases
2) Catch & rethrow with more context in the message, e.g. for writing custom manifesters for other languages (Lua, XML, etc.) that recurse over object structures and need to provide errors with the path through those structures rather than just the call stack
3) I've forgotten the 3rd; maybe it was to do with some proposed feature that never got implemented
I would not like it to be used for:
1) Masking stack overflow errors
2) Masking file not found errors or std.extVar() not found errors to provide hidden defaults
3) Masking fields that don't exist to provide hidden defaults
So maybe we can add a "catch and augment" construct and an "ensure error thrown" construct that only ever returns true (or an error).
Even testing from another language would be great, something like jsonnet.evaluate(template, parameters).
@cyrusv The API for that is there (and I think it was there from the beginning). Currently Python, C, C++ and Go (through go-jsonnet) are officially supported. See: https://jsonnet.org/ref/bindings.html
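For concreteness, here is a minimal sketch of evaluating a snippet through the Go binding (go-jsonnet). The snippet contents and variable names are only illustrative, and depending on the go-jsonnet version the call may be EvaluateSnippet rather than EvaluateAnonymousSnippet:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()
	// External variables can be passed in, much like --ext-str on the CLI.
	vm.ExtVar("env", "prod")

	// Evaluate an inline snippet; the result is the manifested JSON as a string.
	out, err := vm.EvaluateAnonymousSnippet("example.jsonnet",
		`{ name: "demo", replicas: if std.extVar("env") == "prod" then 3 else 1 }`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```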
If you encounter any troubles using any of these APIs, please let us know.
I've found that working with "dynamic" JSON in Go is kind of a pain. Is it recommended to use Python to evaluate the shape of the resulting data? I think I would prefer to write Go, but I don't have a lot of experience with handling untyped data in Go.
All these APIs are supported. I think it's possible to work with "dynamic JSON" in Go just fine. It's a bit more verbose, of course. I think you can also put most of the checks on the Jsonnet side. The external language is really needed only for two things:
1) Running multiple tests independently, so that an error in one doesn't stop the test suite
2) Checking "expected errors"
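To make the "more verbose" point concrete, here is a rough sketch of both halves from the Go side, untyped JSON handling and an expected-error check. The file names and the substring-based error matching are assumptions, not an official pattern:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// 1) "Dynamic JSON": unmarshal into interface{} and use type assertions.
	out, err := vm.EvaluateAnonymousSnippet("ok.jsonnet", `{ items: [1, 2, 3] }`)
	if err != nil {
		log.Fatal(err)
	}
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(out), &doc); err != nil {
		log.Fatal(err)
	}
	items := doc["items"].([]interface{}) // panics if the shape is unexpected
	fmt.Println("item count:", len(items))

	// 2) Expected errors: the error comes back from the Evaluate* call, so the
	// host language can assert on its message.
	_, err = vm.EvaluateAnonymousSnippet("bad.jsonnet", `error "boom"`)
	if err == nil || !strings.Contains(err.Error(), "boom") {
		log.Fatalf("expected an error containing %q, got: %v", "boom", err)
	}
	fmt.Println("got expected error:", err)
}
```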
I'm thinking about a first-party tool which would take a Jsonnet file in some special form and evaluate it in a way suitable for unit tests. For example, there may be an array where each element is evaluated separately (so that each test can fail separately). The elements of the array could also be objects with some metadata, e.g. an expected error. Would you be interested in having such a tool?
I would definitely be interested in the experience! Do we need a separate tool though? Could you explain why a tool would be needed instead of, for example, a small go pkg with examples?
> Could you explain why a tool would be needed instead of, for example, a small go pkg with examples?
Well, you wouldn't need to write any non-Jsonnet code yourself then. The test suite would be self-contained, valid Jsonnet code. There would be a command which you could run like:
jsonnet-test foo-test.jsonnet
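Purely as a speculative sketch of what such a runner and test-file format could look like (neither jsonnet-test nor this format exists; the case layout, field names like expectedError, and the matching logic are all assumptions), a Go driver might evaluate each element of the array in its own call:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"

	"github.com/google/go-jsonnet"
)

// Hypothetical foo-test.jsonnet content: a top-level array of test cases,
// optionally carrying metadata such as an expected error.
const fooTest = `[
  { name: "adds replicas", actual: { replicas: 1 + 2 }, expected: { replicas: 3 } },
  { name: "rejects bad input", actual: error "replicas must be positive", expectedError: "replicas must be positive" },
]`

type caseMeta struct {
	Name          string `json:"name"`
	ExpectedError string `json:"expectedError"`
}

func main() {
	vm := jsonnet.MakeVM()

	// Jsonnet is lazy, so manifesting only the metadata does not force the
	// (possibly erroring) "actual" fields.
	metaJSON, err := vm.EvaluateAnonymousSnippet("meta.jsonnet", `[
	  {
	    name: c.name,
	    expectedError: if std.objectHas(c, "expectedError") then c.expectedError else "",
	  }
	  for c in `+fooTest+`
	]`)
	if err != nil {
		log.Fatal(err)
	}
	var metas []caseMeta
	if err := json.Unmarshal([]byte(metaJSON), &metas); err != nil {
		log.Fatal(err)
	}

	failures := 0
	for i, m := range metas {
		// Each case is evaluated in its own call, so one failing case cannot
		// abort the rest of the suite.
		snippet := fmt.Sprintf(`local c = (%s)[%d]; std.assertEqual(c.actual, c.expected)`, fooTest, i)
		if m.ExpectedError != "" {
			snippet = fmt.Sprintf(`(%s)[%d].actual`, fooTest, i)
		}
		_, err := vm.EvaluateAnonymousSnippet(m.Name, snippet)

		switch {
		case m.ExpectedError == "" && err == nil:
			fmt.Println("PASS", m.Name)
		case m.ExpectedError != "" && err != nil && strings.Contains(err.Error(), m.ExpectedError):
			fmt.Println("PASS", m.Name, "(got expected error)")
		default:
			failures++
			fmt.Printf("FAIL %s: %v\n", m.Name, err)
		}
	}
	if failures > 0 {
		log.Fatalf("%d test(s) failed", failures)
	}
}
```

A real tool would presumably import the test file rather than splice strings, but the shape of the loop (one evaluation per case, expected errors checked on the host side) is the interesting part.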
What is the recommendation for users who are writing non-trivial Jsonnet configs and would like to test them from within Jsonnet itself?
I came across https://github.com/yugui/jsonnetunit — has this framework been reviewed / evaluated? Is this something that should be recommended as a general tool to Jsonnet users? If not, what would be the recommendation to users of Jsonnet interested in unit testing their configs?