wikipedia-mabdul opened 11 years ago
Aha, let's possibly use http://phantomjs.org/! ... it's integrated with Travis so we can have it run automatically for each push. Will work on this over the weekend--
wow, that looks great! Ping me if you need any help. As I said: I will add test cases when the infrastructure is done.
Dunno if I'll find any time to develop more stuff the rest of the week. I found out that my restructuring of submission.js is not working properly on testwp, but it includes some really good stuff and improvements, like
In case you have too much time: would you check how we can transform an enwp comment timestamp (~ - 01.08.2013 11:11 (UTC)) into the "submission-like" timestamp (the 14-digit one, like 20130801111111)?
I will need that to sort the comments, as the sorting for the submission templates is nearly finished after ironing out some bugs...
Can you not simply use Date.parse(datestring) (or rather new Date(datestring), since Date.parse returns milliseconds) to get a Date object? I already wrote some code (originally for afcHelper_last_nonbot) to convert a Date object to the correct "20130801111111" format or what have you:
```js
date = dt.getUTCFullYear() +
    ('0' + (dt.getUTCMonth() + 1)).slice(-2) +
    ('0' + dt.getUTCDate()).slice(-2) +
    ('0' + dt.getUTCHours()).slice(-2) +
    ('0' + dt.getUTCMinutes()).slice(-2) +
    ('0' + dt.getUTCSeconds()).slice(-2);
```
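For completeness, here is a rough sketch (not from the repo) of the other half -- parsing the signature timestamp itself. It assumes the standard enwp signature format "11:11, 1 August 2013 (UTC)", and `signatureToStamp` is a hypothetical name; signatures carry no seconds, so those are padded with zeros:

```js
// Sketch: turn "11:11, 1 August 2013 (UTC)" into "20130801111100".
// signatureToStamp is a hypothetical helper, not part of the script.
var MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
              'July', 'August', 'September', 'October', 'November', 'December'];

function signatureToStamp(sig) {
    var m = /(\d{1,2}):(\d{2}), (\d{1,2}) (\w+) (\d{4}) \(UTC\)/.exec(sig);
    if (!m) {
        return null; // not a recognizable signature timestamp
    }
    var dt = new Date(Date.UTC(+m[5], MONTHS.indexOf(m[4]), +m[3], +m[1], +m[2], 0));
    return dt.getUTCFullYear() +
        ('0' + (dt.getUTCMonth() + 1)).slice(-2) +
        ('0' + dt.getUTCDate()).slice(-2) +
        ('0' + dt.getUTCHours()).slice(-2) +
        ('0' + dt.getUTCMinutes()).slice(-2) +
        ('0' + dt.getUTCSeconds()).slice(-2);
}

signatureToStamp('11:11, 1 August 2013 (UTC)'); // "20130801111100"
```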
Okay, status update:
@wikipedia-mabdul what specifically do we want to test? Just trying to brainstorm.
first: whether afchelper_act('decline') (for example) produces the same results even though we have improved various functions - so even with cleanup and so on running...
It would be best if we test as much stuff as possible - it doesn't matter if we use the GUI or not.
The logic part has to be stable. At least the old test cases should produce the same results (or improved ones).
When I took over the script there was no _cleanup or _blanking, and thus there were many false positives and problems at the start. Originally I only cleaned out some HTML comments, but as you can see, we now use AutoEd, do other reference cleanup, and so on. It is simply getting too complex to test with every new release whether everything is still working. New "bugs", like a button not being displayed, get ironed out very quickly, and I don't believe we will change anything major in that part of the system in the future. JK: test it once and it will run; act only when bug reports come in. The bigger problems are the functions that change the /content/ of the page.
I think the best way to do this is to assemble a large number of test cases -- so, for example, an article that contains a decline template. An article that has just been submitted. Something in need of cleanup. Et cetera, et cetera, et cetera. Once we have this sample, we define the desired behavior for each one, then run a "special" version of the script over them -- this special version will be different in that (among other things) we fake the pagetext variables and make editPage return dummy responses (so, for example, we can have a test called "submit article, error on page save" that simulates adding {{subst:submit}} and then receiving an error from the API). Lots to do, lots to do.. :)
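To make that concrete, here is a minimal sketch of such a fake (the real editPage signature and return value in the script may well differ -- the names here are assumptions):

```js
// Sketch only: the real editPage signature/return value may differ.
var editCalls = [];
var fakeResponses = {
    'Wikipedia talk:Articles for creation/Foo': { error: 'editconflict' }
};

function editPage(title, newtext, summary) {
    // Record what the script tried to do, and script what the "API" says back.
    editCalls.push({ title: title, newtext: newtext, summary: summary });
    return fakeResponses[title] || { success: true };
}

// "submit article, error on page save": fake the page text, simulate
// adding {{subst:submit}}, and check that the scripted error comes back.
var pagetext = 'Some draft text';
var result = editPage('Wikipedia talk:Articles for creation/Foo',
                      pagetext + '\n{{subst:submit}}', 'Submitting');
console.assert(result.error === 'editconflict', 'error path should be hit');
console.assert(editCalls[0].newtext.indexOf('{{subst:submit}}') !== -1);
```

The point is that the test never touches the network: the fake both records the calls the script makes and controls the responses it sees.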
I had never thought about HOW we could do it. I just thought it would be cool to have some major test cases (with a growing base of test cases) that are more or less automatically run before pushing a new script. It is unhandy to do this by "hand" every time.
I have seen how handy JUnit (for Java, of course) is for such tests. I don't want any GUI tests, although that would be cool too if it is easy to set up...
yeah, GUI seems pretty hellish to test :p
But yeah, I think what I will do is work on a framework for faking variables and functions and stuff, then you and I and whoever else can just gradually add to the test cases... and the awesome thing is, we can integrate it with every commit/pull request and see a little "build passed" or "build failed... [link] to problem" notice right inline in github -- which would be pretty cool, imho
oh yeah.
btw: IRC is never sleeping XD
I'd like to humbly state my opinion that this issue should be of the HIGHEST priority; a good test suite is essential for any mature piece of software to function with any sort of stability in the long term.
I started working a bit on integrating QUnit at https://github.com/WPAFC/afch/tree/feature-unittests (which could then be run using PhantomJS through Travis CI), but found it fairly daunting -- do you, @earwig, have any experience with this sort of client-side JavaScript testing? Your thoughts (heck, and code) would be much appreciated.
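(For what it's worth, the Travis side of that can be fairly small; a sketch assuming a PhantomJS QUnit runner script -- the paths are placeholders, not the actual branch layout:)

```yaml
# .travis.yml sketch -- paths are placeholders
language: node_js
node_js:
  - "0.10"
script:
  - phantomjs tests/runner.js tests/index.html
```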
I really don't have any experience with JS testing, but I have quite a bit with testing in general.
There's a big difference between testing modules and testing user-driven applications, as I quickly realized ;) And then when you try to have it all done automatically... it's just a lot of work.
Yes. But you don't have to test the UI, you can test the functions the UI calls. For example, a test for the function that declines pages could have arrays of example wikitext, decline reasons, etc, and pass each of those inputs to the decline function and ensure it spits out the correct wikitext.
This is not nearly as difficult to test as the UI, since those functions basically just operate on strings. As for things like the functions that move pages or notify submitters, you can set up a mock interface for MediaWiki's JS APIs and test whether the script makes the correct calls.
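A sketch of that style of test (declineSubmission is a stand-in name, and the expected wikitext is only illustrative, not the template's real output):

```js
// QUnit 1.x-style sketch; declineSubmission stands in for the real function.
var declineCases = [
    {
        input: '{{AFC submission|||ts=20130801111111}}\nDraft text here.',
        reason: 'nn',
        // Illustrative output only -- match this to the real template syntax.
        expected: '{{AFC submission|d|nn|ts=20130801111111}}\nDraft text here.'
    }
    // ...grow this array with trickier wikitext over time
];

test('decline rewrites the submission template', function () {
    declineCases.forEach(function (c) {
        strictEqual(declineSubmission(c.input, c.reason), c.expected);
    });
});
```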
I've written some tests that you might want to peruse if you're looking for ideas for complicated wikitext to test (for functions that manipulate templates or do wikitext cleanup).
BTW: just found http://www.favbrowser.com/microsoft-introduces-browserswarm
Oh yeah, let us get some internal test suites so that we are able to handle all the standard tests with just one click. XD
Who wants to take this over? (I would write some checks after the infrastructure is ready.)