Closed: melantronic closed this 8 years ago
I had tried once before but ran into some confusion that made me stop. Maybe it's since been improved. I'll give it a try tonight; mind if I PM you for help if needed?
On Oct 31, 2016, at 10:09 AM, zemertz notifications@github.com wrote:
Hi there,
I've just forked the project and wanted to make sure that it would compile. I've added a CI build with travis so that all the code is compiled and the tests also run.
I have this building on my fork, but it would be better if you go to travis-ci.org and enable the travis build for your repo so builds run from the base repo. It will help with future pull requests to know there are no breaking changes.
(Note: Once it's enabled, just update the README to point to your own travis build instead of https://travis-ci.org/martingollogly/JSAT )
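For reference, a minimal Travis config for a Maven-based Java project looks roughly like this. This is only a sketch assuming the default JDK and Maven layout; the actual `.travis.yml` in the PR may differ:

```yaml
# Hypothetical sketch of a Travis CI config for a Maven Java project;
# the real file added in this PR may differ.
language: java
jdk:
  - oraclejdk8
script:
  - mvn clean install
```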
You can view, comment on, or merge this pull request online at:
https://github.com/EdwardRaff/JSAT/pull/53
Commit Summary
Add CI
Update README.md
Update README.md
Update .travis.yml

File Changes

A .travis.yml (13)
M README.md (3)

Patch Links:
https://github.com/EdwardRaff/JSAT/pull/53.patch
https://github.com/EdwardRaff/JSAT/pull/53.diff
Yeah, no probs
Do you know how time/performance-sensitive tests behave on travis-ci? Those may be problematic and report false failures the way they are written.
Hmm, I'd need to look at the tests; there are a few approaches. So far all tests have passed, so there's currently no problem, but what scenario are you thinking of? Are you thinking of running models rather than unit tests?
One way would be to group tests together and run them as smaller test jobs, or increase the build time up to the allowed limit.
According to the travis docs https://docs.travis-ci.com/user/customizing-the-build/#Build-Timeouts
It is very common for test suites or build scripts to hang. Travis CI has specific time limits for each job, and will stop the build and add an error message to the build log in the following situations:

- A job takes longer than 50 minutes on travis-ci.org
- A job takes longer than 120 minutes on travis-ci.com
- A job takes longer than 50 minutes on OSX infrastructure on travis-ci.org or travis-ci.com
- A job produces no log output for 10 minutes
On Travis, it currently takes about 3 minutes to run all current tests, so we should be OK.
Results :

Tests run: 1082, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:13 min
I'm more concerned about whether the jobs run in a VM / multi-user environment. Some of the tests pass or fail based on runtime, and a multi-user environment can impact that negatively. That may throw some false negatives.
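To illustrate the concern (a hypothetical sketch, not JSAT's actual test code): an assertion against an absolute wall-clock bound is fragile on a shared CI VM, because a noisy neighbour can slow the run and produce a false negative. Comparing the relative cost of two workloads largely cancels out machine speed and background load.

```java
import java.util.Arrays;

public class TimingExample {
    // Time sorting n pseudo-random doubles, in nanoseconds.
    static long timeSortNanos(int n) {
        double[] data = new java.util.Random(42).doubles(n).toArray();
        long start = System.nanoTime();
        Arrays.sort(data);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Fragile style: an absolute bound like
        //   assert timeSortNanos(100_000) < 10_000_000;
        // depends on how loaded the VM happens to be.

        // More robust style: a much larger input should cost more than a
        // much smaller one regardless of overall machine speed.
        long small = timeSortNanos(10_000);
        long large = timeSortNanos(1_000_000);
        System.out.println(large > small); // prints "true"
    }
}
```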
Looks good so far. I'll just keep an eye on any build fails just in case.
A big thanks for getting this working for me! It's been requested before but I think the Java support wasn't so good at the time, so I gave up then. A lot of others will appreciate this!
Would you mind taking a look at this run? It says it failed, but I don't see any failed tests. https://travis-ci.org/EdwardRaff/JSAT/builds/172481402 . Just trying to understand how CI works. Feel free to email me to continue this conversation (didn't see yours on your github page)
hmm, looks like it's passing again.
One test did fail on a previous build in your link. At least we know where to look and can monitor for recurrence.
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec <<< FAILURE! - in jsat.math.optimization.BFGSTest
testOptimize(jsat.math.optimization.BFGSTest)  Time elapsed: 0.017 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1.0> but was:<-0.9932860618144995>
You can trigger a build manually through the travis interface to see if it can be reproduced. I don't fully understand this particular function under test yet, but it could be a valid failure and is worth monitoring.
Would be interested to know if this was ever raised as an issue in anyone's dev environment. Some debugging might help; happy to try running the test locally to see if I get the same results and can spot what's going on.
I see the PR failing. I've found that it's exiting here:
The command "mvn clean install" exited with 1 ...
The travis job does this
Are both targets needed? Just wondering how you normally build it.
When mvn clean install is run, one test did fail:
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.136 sec <<< FAILURE! - in jsat.linear.vectorcollection.KDTreeTest
testSearch_Vec_int(jsat.linear.vectorcollection.KDTreeTest)  Time elapsed: 0.016 sec  <<< FAILURE!
java.lang.AssertionError: jsat.linear.vectorcollection.KDTree$KDTreeFactory failed 10
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at jsat.linear.vectorcollection.KDTreeTest.testSearch_Vec_int(KDTreeTest.java:171)
> Are both targets needed? Just wondering how you normally build it.
I usually just let NetBeans handle it!
> When mvn clean install is run, one test did fail
Can you get it to fail consistently / regularly? A lot of the tests in JSAT rely on some level of randomness, which can cause spurious failures. I pushed some improved tests last night to reduce some trouble cases, but I haven't seen KDTree fail before.
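As an illustration of one common mitigation for random-input tests (a hypothetical sketch, not JSAT's actual code): fixing the seed of the random generator makes the test inputs identical on every run, so a failure can be replayed exactly instead of appearing spuriously.

```java
import java.util.Arrays;
import java.util.Random;

public class SeededTestSketch {
    // Hypothetical helper: generate the same "random" data on every run
    // by fixing the seed, so a test failure is reproducible.
    static double[] randomData(long seed, int n) {
        Random rand = new Random(seed);
        double[] data = new double[n];
        for (int i = 0; i < n; i++)
            data[i] = rand.nextDouble();
        return data;
    }

    public static void main(String[] args) {
        // Two runs with the same seed produce identical inputs.
        double[] a = randomData(12345L, 5);
        double[] b = randomData(12345L, 5);
        System.out.println(Arrays.equals(a, b)); // prints "true"
    }
}
```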
Cool. Right, looking at the pom, skipTests=false.
This is fine, which means we don't need both targets, so I recommend just removing mvn test from the travis.yml. I'd be happy to do this through another PR?
The tests were executing twice, and that's why the issue was harder to spot in one of the tests. Maybe that particular test needs to be able to cope with the amount of randomness.
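Concretely, the suggested change to the Travis config is roughly the following. This is only a sketch; the real `.travis.yml` may name its steps differently:

```yaml
# Before (hypothetical): tests run twice, once per Maven goal.
# script:
#   - mvn test
#   - mvn clean install

# After: 'mvn clean install' already runs the test phase, so one goal suffices.
script:
  - mvn clean install
```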
Cheers,
Martin
> I'd be happy to do this through another PR?
I would greatly appreciate it!
> Maybe that particular test needs to be able to cope with the amount of randomness
That's what I've been working on improving (you will see that a lot of commits for 0.0.6 were test improvements). It gets hard when some of the tests have failure rates of 1 in 50+ runs.
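One way to cope with genuinely stochastic tests (a hypothetical sketch, not how JSAT actually handles it) is to allow a bounded number of retries: if a single run fails spuriously with probability p, requiring all of k independent attempts to fail shrinks the spurious-failure rate to roughly p^k.

```java
public class RetrySketch {
    interface Check { boolean run(); }

    // Pass if any one of 'attempts' runs succeeds. With an independent
    // per-run false-failure rate p, the chance of a spurious overall
    // failure drops to roughly p^attempts.
    static boolean passesWithRetries(Check check, int attempts) {
        for (int i = 0; i < attempts; i++)
            if (check.run())
                return true; // any single pass is enough
        return false;        // all attempts failed
    }

    public static void main(String[] args) {
        // A check that always passes needs only one attempt...
        System.out.println(passesWithRetries(() -> true, 1));  // prints "true"
        // ...and a check that always fails exhausts all attempts.
        System.out.println(passesWithRetries(() -> false, 3)); // prints "false"
    }
}
```

For a test with a 1-in-50 failure rate, three attempts brings the false-failure probability down to about (1/50)^3, at the cost of masking any real regression that also happens to pass on a retry.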