**epsilonError** opened 2 weeks ago
Note:
This port uses the existing tests, but I can add more or change them if desired.
It seems like hTest's HTML-based testing flattens the matrix/vector when previewing the inputs and results. I'm not sure how to change the preview, but the test does execute correctly.
To emulate failure being the expected outcome for the Incorrect Data tests, the solution I came up with is having `check:` always return true.
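As a rough illustration of that workaround (this is a sketch, not the actual color.js test code — the function and the exact shape of the hTest options are assumptions):

```typescript
// Illustrative sketch only. A stand-in for the function under test:
function multiplyMatrices(a: number[][], b: number[][]): number[][] {
  if (a[0].length !== b.length) throw new RangeError("dimension mismatch");
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, x, k) => sum + x * b[k][j], 0))
  );
}

// An "Incorrect Data" test whose failure is the expected outcome:
// run() surfaces any error as the result, and check() accepts anything.
const incorrectDataTest = {
  name: "multiplying incompatible matrices",
  run: (a: number[][], b: number[][]) => {
    try { return multiplyMatrices(a, b); }
    catch (e) { return e; }
  },
  args: [[[1, 2]], [[3, 4]]] as [number[][], number[][]], // 1×2 by 1×2: incompatible
  check: () => true, // always "passes", whatever run() produced
};
```

The downside, of course, is that such a test can never actually fail.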
| Name | Link |
|---|---|
| Latest commit | b1e9ad667eb53076b643d9d1381e66014851c121 |
| Latest deploy log | https://app.netlify.com/sites/colorjs/deploys/6675116b87ec520008cee106 |
| Deploy Preview | https://deploy-preview-559--colorjs.netlify.app |
To edit notification comments on pull requests, go to your Netlify site configuration.
Post Script:
I have re-implemented `matrixMultiply` in TypeScript for another project. The re-implementation can return scalars when vectors are multiplied, and it throws an error if either matrix/vector doesn't contain a number. There's a gist of the TypeScript implementation; let me know if you'd like such changes upstreamed.
These changes fix the 2 ported tests that I skipped (vector with vector, and multiplying empty vectors).
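For the curious, a hypothetical sketch of what a re-implementation with that behavior could look like (the actual gist may differ in every detail; all names and error choices here are assumptions):

```typescript
// Hypothetical sketch: vectors are treated as 1×n or n×1 matrices,
// vector·vector collapses to a scalar, non-numeric entries throw,
// and empty inputs are rejected. Not the actual gist.
type Vector = number[];
type Matrix = number[][];

function matrixMultiply(a: Vector | Matrix, b: Vector | Matrix): number | Vector | Matrix {
  const aIsVector = !Array.isArray(a[0]);
  const bIsVector = !Array.isArray(b[0]);
  const A: Matrix = aIsVector ? [a as Vector] : (a as Matrix);               // row vector
  const B: Matrix = bIsVector ? (b as Vector).map(x => [x]) : (b as Matrix); // column vector

  if (A.length === 0 || B.length === 0 || A[0].length === 0) {
    throw new RangeError("cannot multiply empty matrices/vectors");
  }
  for (const row of [...A, ...B]) {
    for (const x of row) {
      if (typeof x !== "number" || Number.isNaN(x)) {
        throw new TypeError("entries must be numbers");
      }
    }
  }
  if (A[0].length !== B.length) {
    throw new RangeError("inner dimensions must match");
  }

  const product: Matrix = A.map(row =>
    B[0].map((_, j) => row.reduce((sum, x, k) => sum + x * B[k][j], 0))
  );

  if (aIsVector && bIsVector) return product[0][0];   // dot product → scalar
  if (aIsVector) return product[0];                   // row-vector result
  if (bIsVector) return product.map(row => row[0]);   // column-vector result
  return product;
}
```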
Thank you for working on this!
For mathjs and the HTML testsuite there are two ways to do it:

- Add an import map on all pages that maps mathjs to a CDN URL
- Import it from `node_modules` rather than the alias; we can then add a redirect like this one to make sure `node_modules` resolves on the deployed website.

This points out a very interesting use case for hTest: so far it assumes that you know in advance whether a test should pass or produce an error, but if your tests are simply comparing with the output from something else, you may not know. I'll think of what's the best way to address this use case in hTest.
> To emulate failing being expected for the Incorrect Data tests, the solution I came up with is having `check:` always return true.

Given that the Node runner doesn't output anything for passing tests, are they really useful? Should we remove them? I'll defer to the others.
> This points out a very interesting use case for hTest: so far it assumes that you know in advance whether a test should pass or produce an error, but if your tests are simply comparing with the output from something else, you may not know. I'll think of what's the best way to address this use case in hTest.
My first attempt at this was assigning a function to expect (`expect: (...args) {...}`), like you can with `run: (...args) {...}`, but when I got an error (I can't remember what), I decided to hard-code `expect` just for expediency.

This would also mean that `expect` could throw an error itself, which would need to be handled.
I recently watched a video with a Python core developer talking about unit tests, and he mentions the Capture Recipe, which might be useful for this case: https://youtu.be/jSIsyMd2-RY?t=1451
Python Code:

```python
def capture(func, *args, **kwargs):
    "Return a tuple of the raised exception type and the returned value."
    try:
        return (None, func(*args, **kwargs))
    except Exception as e:
        return (type(e), None)
```
I haven't used it before, but it looks like it creates a structure for differentiating the error-free and error results of function calls, which allows easier comparison.
This could be useful with the `throws: true` flag in the case where `expect` is a function.
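To show how the recipe carries over to this codebase's language, here is a rough TypeScript analogue (illustrative only; not part of hTest):

```typescript
// Rough TypeScript analogue of the capture recipe: both the error case and
// the success case come back as plain data, so two calls can be compared
// structurally even when one of them throws.
type Captured = [errorName: string | null, value: unknown];

function capture(func: (...args: unknown[]) => unknown, ...args: unknown[]): Captured {
  try {
    return [null, func(...args)];
  } catch (e) {
    return [e instanceof Error ? e.name : "UnknownError", null];
  }
}
```

An implementation and a test oracle could then be compared with a deep-equality check on their captured pairs, without knowing in advance whether either of them throws.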
Oh interesting. `expect` cannot work like that since its actual value may be a function, but perhaps we could have `getExpect()` (akin to `getName()`) that works that way. I wonder what the arguments would be though.
> Add an import map on all pages that maps mathjs to a CDN URL
Thanks! I finally got this to work and so could clean up the variable and function definitions.
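For anyone following along, an import map along those lines generally looks something like this — the CDN URL below is purely illustrative, not necessarily the one used here:

```html
<script type="importmap">
{
  "imports": {
    "mathjs": "https://cdn.jsdelivr.net/npm/mathjs/+esm"
  }
}
</script>
```

With that on the page, module scripts can simply `import { multiply } from "mathjs"` and the browser resolves the bare specifier via the map.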
> > To emulate failing being expected for the Incorrect Data tests, the solution I came up with is having `check:` always return true.
>
> Given that the Node runner doesn't output anything for passing tests, are they really useful? Should we remove them? I'll defer to the others.
For color.js, I'd say having them pass automatically probably isn't useful.

In the abstract, though, it does sound like a useful example of tests being used as supplemental documentation. But as you said, none of that appears within the Node runner, and the current HTML testsuite only shows that something is out of the ordinary via the instruction/description text.

For hTest in general, it might be nice to intentionally specify that something shouldn't work or isn't supported, so that such a test failing would mean some assumptions about the code or documentation need to be updated. Though this a good unit test does not necessarily make.

If there were a good way to succinctly visualize the intention behind such tests, it could be useful for quick reference (e.g. the old test format's display of the matrix structure highlights the results with context). But this would basically require adding a visualization framework/implementation to hTest.
I'd say declarative and flexible unit testing is enough of a very good goal on its own. 😄
> Oh interesting. `expect` cannot work like that since its actual value may be a function, but perhaps we could have `getExpect()` (akin to `getName()`) that works that way. I wonder what the arguments would be though.
What I assumed could be the case is that the same arguments given to `run` would be provided to `getExpect()`, so it would have access to all the same data as `run`.

But I'm only thinking of data being passed as arguments, not functions. I have a much better intuitive sense of data-manipulation tasks and pass-by-value arguments, but that is just my bias.
I think some of the bigger benefits of a `getExpect()` implementation would be deprecation testing of function interfaces, tests for backwards compatibility, and tests for possible lossless/lossy conversions from one function interface to the other. `expect` could be v1 arguments, `getExpect()` could use the v1 function interface, and `run` & `arg`/`args` would be a newer implementation (or vice versa). But I have no idea how feasible this would be to do in hTest.
My naive assumption would be that `getExpect()` gets the same arguments as `run` if no `expect` is given, but that `expect` would be prioritized if it exists. This is similar to how I assume `args` and `arg` work. Though I'd bet that pair must have different names to be prioritized correctly, which I haven't accounted for with this `getExpect()` idea.
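To make that priority order concrete, a tiny sketch of the resolution logic (purely hypothetical — this is not how hTest actually behaves, and the names are made up for illustration):

```typescript
// Hypothetical resolution order from the discussion above, not actual hTest
// behavior: a literal expect wins; otherwise getExpect() is called with the
// same arguments that run receives.
interface TestSpec {
  run: (...args: unknown[]) => unknown;
  args?: unknown[];
  expect?: unknown;
  getExpect?: (...args: unknown[]) => unknown;
}

function resolveExpected(test: TestSpec): unknown {
  if ("expect" in test) return test.expect;        // literal expect takes priority
  return test.getExpect?.(...(test.args ?? []));   // otherwise compute it like run would
}
```

Since `expect` and `getExpect` have different names, there is no ambiguity to resolve beyond picking which one wins, unlike a hypothetical clash between `arg` and `args`.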
I have a lot of thoughts, but I think this is not the right place for this discussion — would you mind opening an issue over at hTest so we could discuss this and flesh out the design? Thanks!
An attempt at porting the Matrix Multiplication tests to the new format.
I wanted to try out hTest and thought working on #398 would be a good way to start.
~I was hoping to use the `math.js` matrix multiplication methods as the test oracle, but I couldn't figure out how to get the dynamic loading in the HTML-based testing to properly import it.~ ~I've left comments for all of that code I used, just in case anyone else knows how to make that work.~

Feel free to edit this branch (maintainers) or fork it (everyone else).