Closed missinglink closed 9 years ago
All we currently want is some light spot-checking to make sure that things look good -- we already know that, algorithmically and functionally, this is objectively superior to the previous method of building hierarchies (i.e., the elasticsearch lookups). That's why I included points.json; it was only necessary to add a bunch of points, run the lookup, and manually review the results. Once our acceptance tests are in place and we use this lookup throughout the pipeline, we'll just integrate checks on admin values there.
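For illustration, dropping test cases into points.json could be as simple as listing lat/lon pairs; the exact shape here is an assumption, not the file's confirmed format:

```json
[
  { "lat": 40.7484, "lon": -73.9857 },
  { "lat": 52.5200, "lon": 13.4050 }
]
```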
probably at least 1 whole day
This should've taken no more than five minutes. What exactly are you trying to do? Just dump your test cases in points.json and fire away. What's the reason for incrementally adding test cases and rerunning the lookup from scratch?
I would love to see this lib spun off as a micro-service and can see it being very popular in a wide range of applications, with external contributions by smart developers in influential organisations.
That strikes me as idealistic at best.
For that reason I personally believe it should be able to stand on its own, and provide its own proof of quality.
See above. This is going way over what the testing (read: spot-checking) was initially meant to be.
I really don't have much more to add; it seems we're at an impasse, so I think we should solicit feedback from the rest of @pelias/owners.
I think we currently have better things to focus on than spinning this off into a generalized, stand-alone package, and should keep it Pelias-specific, which means spot-checking would be more than sufficient. Plus, we had already agreed on that, so I'm not sure why we're back to the drawing board.
I think the easiest option is to write a very simple express wrapper for the library, so you have your index.js, which is what gets pulled in when used as a dependency, and a server.js, which runs on npm start. The API would be minimal and simply allow for testing points. Your test scripts could then either run their own instance or be pointed at an existing URL. This has the obvious benefit of letting you add test points without having to reload the service each time, and it could probably be put together in an hour, so we're not spending too much time on it.
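A minimal sketch of what that wrapper could look like, assuming a hypothetical lookup(lat, lon, callback) entry point exported by index.js (the real module's interface may differ):

```js
// server.js -- hypothetical express wrapper around the lookup library.
// `lookup` is an assumed entry point; adjust to the module's real interface.
const express = require('express');
const lookup = require('./index');

const app = express();

// e.g. GET /lookup?lat=40.7484&lon=-73.9857
app.get('/lookup', (req, res) => {
  const lat = parseFloat(req.query.lat);
  const lon = parseFloat(req.query.lon);

  lookup(lat, lon, (err, admins) => {
    if (err) {
      return res.status(500).json({ error: err.message });
    }
    res.json(admins);
  });
});

app.listen(3000, () => console.log('admin-lookup test server on :3000'));
```

Test scripts could then hit either a locally started instance or a shared URL, without reloading the layers each time.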
To be clear, I don't have an issue with the functionality or the algorithm involved, merely the testing: I'm not convinced that manually running a handful of lat/lon pairs through the lib is enough to establish quality. I am agnostic to the specific implementation of the test suite so long as it means this lib is maintainable in the future.
@missinglink, @dianashk, and I quickly hashed this one out. I'll make the current testing method (which reads points from a file and executes the lookup for each) more unit-test-like, so that it automatically compares actual and expected output and makes failures/successes easy to grok. Additionally, I'll throw in a dead-simple browser app running on top of a fully loaded admin-lookup, which will let you pan around a map, click, and receive lookup results for that point. Should make testing painless.
The multi-process branch now contains both an automated point-lookup test and a simple web app for manual testing. I'll add this to the README later, but for now:

npm run test-web-app: runs the app at localhost:3000 (give it some time to load all the layers).
npm run test-lookups: compares actual and expected lookup results based on the data inside test/lookup_points.json.

If you're running the web app, check the JS console for JSON that you can readily copy into test/lookup_points.json to create additional test cases.
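For reference, an entry in test/lookup_points.json presumably pairs an input point with its expected admin values; the field names below are assumptions rather than the file's confirmed schema:

```json
{
  "point": { "lat": 40.7484, "lon": -73.9857 },
  "expected": {
    "admin0": "United States",
    "admin1": "New York",
    "locality": "New York"
  }
}
```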
@sevko I spent some time looking into the tests and trying to establish the quality of this lib.
The issue I have is that it is very time-consuming, for obvious reasons, so this PR adds the ability to send a $ kill -SIGUSR1 {{pid}} to signal the running process to reload the points data and re-run the queries. Even with this approach it is still very time-consuming, and I cannot easily review the quality of this lib without investing significant time of my own (probably at least one whole day).
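As an illustration only, the reload-on-signal behaviour described above could look roughly like this in Node; loadPoints and runLookups are placeholder names, not functions from this repo:

```js
// Hypothetical test runner sketch: reload points.json and re-run lookups on SIGUSR1.
const fs = require('fs');

function loadPoints() {
  // assumed location of the test points file
  return JSON.parse(fs.readFileSync('./test/points.json', 'utf8'));
}

function runLookups(points) {
  points.forEach((p) => {
    // ...run the admin lookup for each point and print the result for review
    console.log('looking up', p.lat, p.lon);
  });
}

runLookups(loadPoints());

// $ kill -SIGUSR1 {{pid}} re-reads the file and re-runs the queries
// without restarting the process (and reloading all the layers).
process.on('SIGUSR1', () => {
  runLookups(loadPoints());
});
```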
The chicken-and-egg problem here is that we don't know the quality of the results, so we can't merge it; and we can't test the quality of the results because we can't easily add test cases, re-run the suite, and compare actual<>expected. I would suggest spending some more time on a more scalable way of testing that could be re-used by a 3rd-party developer in the future.
For me, a quality test suite would, at a minimum, make it easy to add new test cases and automatically compare actual<>expected results.
We also discussed in the past that this sort of testing could be handled on the meta-level by some wide-reaching functional tests.
I would love to see this lib spun off as a micro-service and can see it being very popular in a wide range of applications, with external contributions by smart developers in influential organisations.
For that reason I personally believe it should be able to stand on its own, and provide its own proof of quality.
Your thoughts?