Open · phemmer opened this issue 7 years ago
@phemmer I agree with the assessment that it is hard to create a recording with test data.
I think a simple solution would be to allow you to create recordings from a file via the API, like the example you provide but as a separate file. Then you could have static data sets stored in VCS and easily create and modify them on any Kapacitor instance.
We could also add support for this via the CLI, something like:

    kapacitor replay-live local -task task_id -data data.line

where data.line is a file with line protocol data.
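For reference, such a data.line file would just contain InfluxDB line protocol, one point per line (measurement, optional tags, fields, and an optional timestamp); the measurement and values below are invented for illustration:

    cpu,host=server01 usage_idle=92.5 1461605640000000000
    cpu,host=server01 usage_idle=15.0 1461605650000000000
    cpu,host=server02 usage_idle=88.0 1461605640000000000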
Or, if you want Kapacitor to hang on to the data for testing later, create a recording from the file first and then replay it:

    kapacitor record local -data data.line -recording-id test-data001
    kapacitor replay -recording test-data001 -task task_id
See #1079 for a similar request.
I think the concept is good, but one problem with using line protocol is batch queries, or using the stats() node. Since both of these involve timers on kapacitor's side, we'd need some way to simulate a pause in the input & advancement of the clock. Line protocol doesn't have support for comments, but if it did, we could put kapacitor instructions in comments, such as #sleep 1234.
(Edit: Yes it does. I apparently just missed it in the reference documentation.)
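To illustrate the idea (this is purely hypothetical; Kapacitor does not interpret such directives, and the measurement and values are made up), a data file could interleave points with clock-control comments:

    cpu,host=server01 usage_idle=90 1461605640000000000
    # sleep 10s  <- hypothetical directive: advance the clock before the next point
    cpu,host=server01 usage_idle=12 1461605650000000000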
@phemmer Agreed, line protocol by itself is not descriptive enough. In fact, the tests within Kapacitor already do something very similar to this. We should expose and standardize that method.
Hey @nathanielc and @phemmer, I've been thinking (and tinkering) about how to make TICKscript testing easier and more automated than we can achieve with the current recording/replay features. My proposal can be found in this blog post, and after some manual testing I'm putting kapacitor-unit together.
What do you think about this approach? The challenge so far has been batch tasks, as @phemmer mentioned before.
I generally don't like to bump threads like this but this is so necessary that I have to comment. Testing TICK scripts is a nightmare. Especially if you want to be writing scripts regularly to solve problems quickly.
At the moment I just have to wait and see if my scripts do what I expect, and I have frequently been wrong about that due to quirks in the way nodes work that aren't thoroughly documented.
@AdamSLevy I agree with you. Shameless plug: kapacitor-unit is quite stable already and makes testing TICK scripts easy and controlled. Let me know what you think about it ;)
@gpestana kapacitor-unit still requires a running InfluxDB and Kapacitor, correct? If so, that really makes it a non-starter for me.
My ideal would be a library that mocks out any network call and allows me to write real unit tests with no external dependencies, e.g. in pseudocode:
    # fill a 2-hour time period with values increasing linearly
    data = Measurement.fill(name='somefieldname', period='2h', function=lambda x: x + 2, start_value=0, resolution='5s')
    # >>> data
    # [ Measurement(value=0, time=start+0s), Measurement(value=2, time=start+5s), ... ]
    stream = Stream(database='somedb', measurement='somemeasurement', data=data)
    tick = TICKScript('/path/to/tick')
    alerts = tick.with_stream(stream).alerts
    assert len(alerts) > 0
    assert alerts[0].level == 'WARNING'
To be honest, I'm really not sure why TICKscript even exists. I wish there were just a simple Python or Go API for these scripts.
@jcmcken That's a pretty good idea and it feels like a really clean approach.
Why is running a small instance of influxdb and kapacitor a showstopper for you? You can spin it up really easily and lightly with the docker-compose included in the project. It just makes it much easier to ensure that the test environment is the same as production, and it ensures that further development of kapacitor-unit will stay in line with new versions of the TICK stack.
And maybe give it a try and tell me what you think :) https://github.com/gpestana/kapacitor-unit
@gpestana Calling it a non-starter is maybe a bit extreme. I'll give it a try, but I'd really prefer not to have to start up a bunch of infrastructure to run simple unit tests. One immediate problem is that it would need to serve parallel requests (my intent is to set up CI jobs that run the tests automatically). Another is that I have a matrix of Influx/Kapacitor versions that I would need to support and test against.
I'd also prefer the Influx folks to provide testing libraries/harnesses so that protocol/library compatibility is maintained.
@jcmcken that's a good point too. We are using it in a CI/CD context as well, running multiple tests in parallel. We are spinning up an infra stack per test, which is quite heavy, but I've been thinking of adding a namespace prefix to the data points and tick scripts before loading them into influxdb/kapacitor, so that the same infra could serve all test cases. How does that sound? We can give priority to that feature if needed! :)
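As a purely illustrative sketch of that namespacing idea (the prefix scheme, names, and paths below are hypothetical, not an existing kapacitor-unit feature), each test case could rewrite its measurements and task IDs with a unique prefix so concurrent tests sharing one stack don't collide:

    # points for test case "t42" are written under a namespaced measurement...
    t42_cpu,host=server01 usage_idle=10 1461605640000000000

    # ...and the task under test is defined with a matching namespaced ID
    kapacitor define t42_cpu_alert -type stream -tick cpu_alert.tick -dbrp testdb.autogen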
Right now, I am at a loss as to where my tick script, though syntactically correct, is misbehaving. The error string is cryptic! Making a recording and then realizing that the data doesn't exist is really painful!
I am sorry, but I am really frustrated!
Problem
This is a feature request to provide an easier way to test kapacitor scripts than using recordings. Recordings are very difficult to work with for a number of reasons:
To start, you have to submit the data you want to test to a live InfluxDB server. This in itself poses several problems. If the scenario you want to test is not actually happening at that moment, you have to manually inject fake data. If this is a production system, this would be a big downside, as people might be using the system and the fake data could screw things up. So the solution here might be to spin up a local InfluxDB & kapacitor, but it becomes a lot of work just to test a kapacitor script against some sample data:
Yes, most of these are little tasks, but there are quite a lot of them, and they require juggling multiple terminal windows. And even after doing all this, you still have timing difficulties. With stream scripts it's not that hard to forge the timestamp of a point before submitting it, but with batch scripts it's a lot harder: the batch query is scheduled, so you have to coordinate your data injection with kapacitor's polling interval.
And if you want to tweak your sample data just a little bit, you have to go through steps 9-14 all over again.
And then a few weeks down the road, if you want to change your tick script and make sure the change doesn't break any previous behavior, you need to have kept those recordings around so you can test with them again. And if you work on a team where other people might modify the script, you have to somehow transfer those recordings around.
Proposal
Ultimately I think it would be a lot easier if there were a way to keep the test data in the script itself, or in a file alongside it (which could be stored in a VCS). The data should also be human-maintainable, meaning a person should be able to construct & edit it by hand, without having to record anything.
Both of these would make scripts a lot easier to develop and maintain.
I personally have a hack I use for doing this with stream scripts. I start with a stream script that embeds its sample data in //> comment lines, along the lines of the sketch below.
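A minimal sketch of what such a script can look like (the measurement, field, and threshold are invented for illustration; the //> convention is the one described below):

    // sample points embedded as comments, one per //> line, in line protocol
    //> cpu,host=server01 usage_idle=92 1461605640000000000
    //> cpu,host=server01 usage_idle=8 1461605650000000000
    stream
        |from()
            .measurement('cpu')
        |alert()
            .crit(lambda: "usage_idle" < 10)
            .log('/tmp/alerts.log')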
I then have & run a shell script that drives the test; a sketch follows the description below.
The script spins up a kapacitord, adds & enables the script, parses the //> lines out of it, and injects them into kapacitor's write API endpoint. This means I can keep my sample data with my script, and test it with a single command. It's easy to use, and has no InfluxDB involved. Unfortunately it does not handle batch scripts. It would be nice if kapacitor had similar functionality.
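A rough sketch of what such a driver script can look like (this is not the original script; the task name, database, and file names are placeholders, and it assumes kapacitord's API is on its default port 9092):

    #!/bin/sh
    set -e
    TASK=test_task
    TICK=script.tick

    # spin up a local kapacitord and give its API a moment to come up
    kapacitord -config kapacitor.conf &
    KAPACITORD_PID=$!
    sleep 2

    # add & enable the script as a stream task
    kapacitor define $TASK -type stream -tick $TICK -dbrp testdb.autogen
    kapacitor enable $TASK

    # parse the //> lines out of the script and inject them into the write API
    sed -n 's|^//> ||p' "$TICK" |
        curl -s -X POST 'http://localhost:9092/kapacitor/v1/write?db=testdb&rp=autogen' --data-binary @-

    # inspect the task's state, then tear down
    kapacitor show $TASK
    kill $KAPACITORD_PID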