Closed timholy closed 9 years ago
@chrisvoncsefalvay, I need to update this page. There's now a much easier way: go visit https://coveralls.io/r/timholy/julia and start browsing through the files. It's built nightly (momentarily by me executing a script, but someday I'll figure out the bug in my cron script...).
Updated, see revised top post.
@timholy Ah-hah! Thank you ;)
Ah, thanks for the revision @chrisvoncsefalvay. Updated that, too.
@timholy Is this a line highlighting bug?
Also, should some lines (sparingly) be marked as "pragma: no cover"? e.g. https://coveralls.io/files/422931392#L35, though I guess that specific example is about catching specific error types... #7026.
Edit: another bug? https://coveralls.io/files/422931266#L75
In your first example, which line are you referring to on that page? Number 37?
I've seen occasional oddities myself. One of the most likely explanations is #1334. Also see #9663. Any bug in line numbering is guaranteed to translate into bugs in coverage analysis.
Have now started covering (no pun intended) Base.graphics
(Just thought I'd put it here so we don't accidentally multiply our efforts; we could use a greenie board or a Trello board tracking who does what so we don't accidentally overlap!)
Good plan!
I created a Trello for us to keep stuff coordinated. Happy to add any of you as users, email me your username.
@timholy Can you add "add the test to `test/runtests.jl`" to your original post, so that the new test would be run by the CI suites? Thanks ;)
Done.
Satisfying observation: coverage has gone up every single day since I started regularly measuring it (on Jan 11). The net increase is from 70.4% to 71.6%, but when you consider the size of julia's code base, that's pretty good. At this rate we'll hit 100% :smile: in about 7 months.
And those increases omit early test-coverage commits from @kshyatt and @hayd that occurred before I had set up the necessary infrastructure.
@timholy one annoying thing with coveralls is that it reports 0/0 as 0% e.g. https://coveralls.io/files/433294481 so this will remain first in the sorted list of files.
I wonder if it's worth flagging one of those lines (perhaps module or include) so that it reports instead as 100% (yes, that'll inflate the numbers a little, but it'll also make it easier to find the next least covered file candidate).
80% by Spring. :)
We can't control how Coveralls chooses to interpret 0/0; all we do is submit a string containing the code and a single value per line containing either `null` (the JavaScript analog of `nothing`) or the number of times that line ran.
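For concreteness, the per-file record in a Coveralls submission looks roughly like this (a sketch; the file name, source, and counts here are hypothetical, and the `coverage` array has one entry per source line):

```json
{
  "name": "base/statistics.jl",
  "source": "function mean(x)\n    sum(x) / length(x)\nend\n",
  "coverage": [null, 3, null]
}
```

So a `null` entry means "not a runnable line", and an integer means "ran that many times"; there is no way to express "not applicable" at the file level.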
One option is that we could just exclude files that have no runnable lines (see https://github.com/IainNZ/Coverage.jl/issues/45). As @garborg correctly pointed out, whatever we choose to do with files that have no runnable lines will have no impact on the final percentage.
Perhaps it would be worth asking them whether such files could be moved to the end of the list, and scored as NA rather than 0%?
Oh, I read your strategy more carefully; we could lie that way, I guess, but I like the idea of either excluding the files or contacting Coveralls.io about this better.
Contacting Coveralls would be great. (I was going to say I was for the change to NA and ambivalent about the move to the end, but for us, at least, it seems like a clear improvement, too.)
I do like keeping the 0/0 files in, though (second best option being doing nothing, imo). They're worth showing to people (at least leaving available, no one has to click on them) -- both because the tooling has TODOS (functions defined via macros, etc.) and may have regressions, and because those files are often filled with types, and the moment someone is scanning through Coveralls results is a moment it's likely to register that a subtype is left out completely or from a chunk of the tests.
@timholy The changes I made in #9485, which you merged more than a week ago, do not seem to be included in the coverage results at https://coveralls.io/files/440827056. So I think something is not working with the daily updates of the results.
Indeed, sorry about that. I was debugging my cron script and commented out the `git pull; make cleanall && make` part. Oops. Can you check again?
https://github.com/JuliaLang/julia/blob/master/base/replutil.jl#L175 still seems to differ from https://coveralls.io/files/440930474#L175.
When you click the GitHub link in Coveralls, it redirects you to your fork, https://github.com/timholy/julia, which also seems to be behind.
Good debugging. I had convinced myself that it only used the source that we packaged up, but clearly the GitHub repo needs to be updated, too.
This will be a little hard to automate, because of the need to use the SSH key when pushing to https://github.com/timholy/julia; in the short term I'll try to remember to update it daily.
Thank you, the results also look more correct now :) I can see that one code path I introduced in #9485 is untested; I thought I had covered it, but apparently not, and I did not check it locally. I am going to submit a pull request.
How can I check if a function has "raised" a Warning?
I'm not sure I understand what you're asking.
Warnings aren't thrown (just printed), so you can't check them without capturing the output. You can check errors with `@test_throws`, because errors are thrown.
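For the error side, a minimal sketch using `@test_throws` (here with a standard-library call that is known to throw; the choice of `sqrt(-1.0)` is just an illustration):

```julia
using Base.Test   # the standard test macros (the module is just `Test` in later Julia versions)

# sqrt throws a DomainError for negative real arguments,
# so we can assert that the expected error type is raised:
@test_throws DomainError sqrt(-1.0)
```

If the expression throws a different error type, or doesn't throw at all, the test fails.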
Thanks
The coveralls test has not run the last two days.
I believe @timholy is running it manually at night, so let's not beat him up too much :)
My automatic runs are held up by #10027
I appreciate the support @quinnj, but in this case @timholy frankly appreciates that (1) somebody is looking and (2) the same somebody is willing to ping him when something goes wrong.
I assumed you meant that it just wasn't running, so I ran a new one. But looking I now see that the coverage status is just borked. This is probably because of https://github.com/IainNZ/Coverage.jl/issues/48. Let me see what I can do about that.
EDIT: I'm no longer seeing this. Now getting the source listing.
On Coveralls TIMHOLY / JULIA / 41, clicking a file name to view source yields the following message:
SOURCE NOT AVAILABLE
The owner of this repo needs to re-authorize with Github; their OAuth credentials are no longer valid so the file cannot be pulled from the Github API.
We seem to be back in business now. Let me know of any further troubles.
Coveralls coverage % dropped roughly in half with JULIA / 49. More accurate, or more trouble?
Presumably it's trouble. But it seems to be a base-julia problem.
Update: things are back to working with the local run, but as of today I'm having issues with GnuTLS.
And we're back in business. Up to 74%!
@timholy glad to hear. The GnuTLS failure was related to https://github.com/JuliaLang/julia/commit/2af73dd7723c91b0e9fcf5af7662b40c8fb97067.
Looks like @staticfloat has gotten the buildbots working! If you're feeling ready to make that the official version, OK if I update the link above?
(Looks like my days of running things manually can thankfully come to a close....)
I still haven't figured out how to get the Git information to propagate properly, but other than that things seem to be working just fine.
Coveralls hasn't been working since March 7th. Anyone know why?
There have been a couple bugs that killed things, one of which was https://github.com/jakebolewski/JuliaParser.jl/issues/18 which was just fixed. The latest failure is due to a segfault.
I looked at Coveralls today and it seems to be behaving weirdly. For example, let's look at https://coveralls.io/builds/2460415/source?filename=base%2Fstatistics.jl. Coveralls claims this has 0% test coverage for the latest build. Yet, when I actually look in `test/`, I see that `test/statistics.jl` is there and has lots of great tests. There are a LOT of files that Coveralls says have no test coverage but have files full of tests in `test/`. Are they just not being run? It's a bit frustrating for those who want to write more tests if the coverage results are totally wrong.
Has the coverage of Julia dropped to 26%?? https://coveralls.io/r/JuliaLang/julia
(The Coveralls page shows 73% at the top level, but 26% on each build.)
Now fixed. I'm not sure what magic wand must be waved to regenerate those numbers, but presumably they should go up if @staticfloat's report was the only problem.
Thanks, @ihnorton!
Better, but still some bugs, apparently: http://buildbot.e.ip.saba.us:8010/builders/coverage_ubuntu14.04-x64
Some more Coveralls builds ran last night and the file coverage stats are still bad. Is there a backlog, or more bug-hunting to do?
More bug-hunting. Reflection (http://buildbot.e.ip.saba.us:8010/builders/coverage_ubuntu14.04-x64/builds/397/steps/Run%20non-inlined%20tests/logs/stdio) and linalg/symmetric (http://buildbot.e.ip.saba.us:8010/builders/coverage_ubuntu14.04-x64/builds/397/steps/Run%20inlined%20tests/logs/stdio) have issues in the buildbot job that's uploading to Coveralls.
Updated Feb 23, 2015.
This would make a whole bunch of good "first contribution" projects, and does not require deep insider knowledge of the language.
The basic idea is to expand the test suite to make sure that julia's base code works as promised. Here is one recommended way to contribute toward this goal:
The easy way

1. Visit https://coveralls.io/r/timholy/julia, browse for a file with incomplete coverage, and write a test that exercises the untested lines.
2. Add the test to `test/runtests.jl`. http://julia.readthedocs.org/en/latest/stdlib/test/ may be helpful in explaining how the testing infrastructure works. Submit the test as a pull request (see CONTRIBUTING.md).

The manual method

1. Check out the `master` branch. Build julia with `make`.
2. Create a file `/tmp/coverage_tests.jl` containing the list of tests to run. (This is the list of tests currently in `test/runtests.jl`, with 3 omissions (`resolve`, `reflection`, and `meta`) and one addition (`pkg`). The omitted tests explicitly test inlining, which we're going to disable, or are problematic when inlining is disabled.)
3. `rm usr/lib/julia/sys.so`. Deleting `sys.so` will prevent julia from using any pre-compiled functions, increasing the accuracy of the results. (This also makes startup a bit slower, but that's fine for this test---and once you're done, simply typing `make` will cause this file to be rebuilt.)
4. Navigate to the `test/` directory.
5. Launch `julia --code-coverage=all --inline=no`. This turns on code coverage and prevents inlining (inlining makes it difficult to accurately assess whether a function has been tested).
6. Run `include("/tmp/coverage_tests.jl")`.
7. When the tests finish, examine the coverage (`.cov`) files in the `base/` directory. Look for lines that either have `0` in front of them (indicating that they were run 0 times) or have `-` in front of them (indicating that they were never compiled).
8. Write tests for the uncovered lines and add them to `test/runtests.jl`. http://julia.readthedocs.org/en/latest/stdlib/test/ may be helpful in explaining how the testing infrastructure works. Submit the test as a pull request (see CONTRIBUTING.md).
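As a quick way to scan the `.cov` output for the `0`/`-` markers described above, here is a sketch (the file path is hypothetical, and the syntax targets the 0.3/0.4-era Julia of this thread):

```julia
# Print uncovered lines from a coverage file. Each line of a .cov file
# begins with the execution count: `0` for lines that never ran,
# `-` for lines that were never compiled.
# (In modern Julia, replace ismatch(r, s) with occursin(r, s).)
covfile = "base/statistics.jl.cov"   # hypothetical path
open(covfile) do io
    for (i, line) in enumerate(eachline(io))
        if ismatch(r"^\s*(0|-) ", line)
            print(covfile, ":", i, " ", line)
        end
    end
end
```

Anything this prints is a candidate for a new test in `test/runtests.jl`.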