Closed: lloydmeta closed this 8 years ago
I felt it was appropriate to have PlayAppSupport extend RequiresDB because our app ... requires a DB to run..
Terrible logic. Rejected. :laughing:
Thanks for the quick fix. It's odd I didn't run into this earlier, not sure why that is.
This now leads to some other issues. The Vagrant box is up, but I'm getting some odd exceptions from loggers.
[info] Run completed in 35 seconds, 78 milliseconds.
[info] Total number of tests run: 161
[info] Suites: completed 32, aborted 19
[info] Tests: succeeded 159, failed 2, canceled 0, ignored 0, pending 0
[info] *** 19 SUITES ABORTED ***
[info] *** 2 TESTS FAILED ***
[error] Error: Total 180, Failed 2, Errors 19, Passed 159
[error] Failed tests:
[error] com.m3.octoparts.cache.MemoryBufferingRawCacheSpec
[error] com.m3.octoparts.aggregator.service.PartRequestServiceSpec
[error] Error during tests:
[error] controllers.system.BuildInfoControllerSpec
[error] com.m3.octoparts.wiring.assembling.EnvConfigLoaderSpec
[error] controllers.PartsControllerSpec
[error] controllers.support.AuthenticationCheckSupportSpec
[error] integration.ApplicationSpec
[error] com.m3.octoparts.repository.ConfigImporterSpec
[error] com.m3.octoparts.util.OctoMetricsImplSpec
[error] controllers.support.DummyPrincipalSupportSpec
[error] controllers.support.HttpPartConfigCheckerSpec
[error] com.m3.octoparts.repository.config.CacheGroupRepositorySpec
[error] integration.ApiSpec
[error] com.m3.octoparts.database.MigrationsSpec
[error] controllers.support.AuthorizationCheckSupportSpec
[error] integration.AdminSpec
[error] com.m3.octoparts.repository.DBConfigsRepositorySpec
[error] controllers.AdminControllerSpec
[error] controllers.system.HealthcheckControllerSpec
[error] com.m3.octoparts.repository.config.HttpPartConfigRepositorySpec
[error] controllers.system.SystemConfigControllerSpec
[error] (octoparts/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 144 s, completed Feb 9, 2016 11:40:53 AM
An example (they're all roughly the same):
[info] ApiSpec:
[info] Exception encountered when attempting to run a suite with class name: integration.ApiSpec *** ABORTED ***
[info] java.lang.ClassCastException: org.slf4j.helpers.SubstituteLogger cannot be cast to ch.qos.logback.classic.Logger
[info] at com.kenshoo.play.metrics.MetricsImpl.setupLogbackMetrics(Metrics.scala:63)
[info] at com.kenshoo.play.metrics.MetricsImpl.onStart(Metrics.scala:73)
[info] at com.m3.octoparts.util.OctoMetricsImpl.onStart(OctoMetricsImpl.scala:25)
[info] at com.kenshoo.play.metrics.MetricsImpl.<init>(Metrics.scala:83)
[info] at com.m3.octoparts.util.OctoMetricsImpl.<init>(OctoMetricsImpl.scala:17)
[info] at com.m3.octoparts.wiring.UtilsModule$class.metrics(UtilsModule.scala:29)
[info] at com.m3.octoparts.wiring.assembling.ApplicationComponents.metrics$lzycompute(ApplicationComponents.scala:23)
> It's odd I didn't run into this earlier, not sure why that is.
I am only able to get the previous errors if I use `testOnly`, in which case the ConnectionPool stateful singleton wasn't set up yet by earlier tests that initialise the app ;p Not sure how you're getting these current errors though.
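For context, a rough sketch of the kind of setup those app-initialising suites provide, assuming ScalikeJDBC's `ConnectionPool`; the trait name, driver, URL, and credentials below are made up for illustration and are not octoparts' actual support code:

```scala
import org.scalatest.{ BeforeAndAfterAll, Suite }
import scalikejdbc.ConnectionPool

// Hypothetical trait: initialise the process-wide ConnectionPool singleton
// before a suite runs, instead of relying on earlier suites having done it.
trait DbPoolSetup extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    super.beforeAll()
    if (!ConnectionPool.isInitialized()) {
      // Illustrative driver/URL/credentials only
      Class.forName("org.postgresql.Driver")
      ConnectionPool.singleton("jdbc:postgresql://localhost/octoparts_test", "octoparts", "")
    }
  }
}
```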
flip this
I think that is the culprit, but since I can't reproduce the errors, I don't know why it works. Assuming that is the cause, since we have a custom implementation anyway, we can just override that member and make it default to false.
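If that toggle turns out to be a plain configuration read rather than an overridable member, an alternative sketch would be to hand `MetricsImpl` a configuration with the flag forced off. The `metrics.logback` key and the constructor shape below are assumptions inferred from the stack trace above, not verified against the play-metrics version on the classpath:

```scala
import com.kenshoo.play.metrics.MetricsImpl
import play.api.Configuration
import play.api.inject.ApplicationLifecycle

// Hypothetical sketch: "metrics.logback" and MetricsImpl's constructor arguments
// are assumptions; check Metrics.scala in the version actually in use.
class OctoMetricsImpl(lifecycle: ApplicationLifecycle, configuration: Configuration)
  extends MetricsImpl(
    lifecycle,
    // The right-hand side of ++ takes precedence, so logback instrumentation
    // (and with it setupLogbackMetrics) should be skipped in onStart
    configuration ++ Configuration.from(Map("metrics.logback" -> false))
  )
```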
Ok, I can't reproduce this when running `sbt test` from the Vagrant machine; only from the host box. Did you try running tests from the host rather than the Vagrant machine?
There are some errors that only show up if I run all tests with `sbt test` and not if I run the offending test with `testOnly`. Example:
[info] MemoryBufferingRawCacheSpec:
SLF4J: The following set of substitute loggers may have been accessed
SLF4J: during the initialization phase. Logging calls during this
SLF4J: phase were not honored. However, subsequent logging calls to these
SLF4J: loggers will work as normally expected.
SLF4J: See also http://www.slf4j.org/codes.html#substituteLogger
SLF4J: application
[info] - should store data for a while *** FAILED ***
[info] None was not equal to Some("DEF") (MemoryBufferingRawCacheSpec.scala:26)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.MatchersHelper$.newTestFailedException(MatchersHelper.scala:160)
[info] at org.scalatest.Matchers$ShouldMethodHelper$.shouldMatcher(Matchers.scala:6254)
[info] at org.scalatest.Matchers$AnyShouldWrapper.should(Matchers.scala:6288)
> Ok, I can't reproduce this when running `sbt test` from the Vagrant machine; only from the host box. Did you try running tests from the host rather than the Vagrant machine?
I generally only run tests from the host, usually with `activator`, then `test` from within the SBT console. However, I just tried variations of `sbt` then `test`, `sbt test` from both the host (also tried `activator` variations here) and the VM and couldn't reproduce the errors.
@lloydmeta Dang. It's 100% reproducible here, even after doing a full `vagrant halt` and `vagrant up`. Will try to reproduce on another machine later.
It looks like there are several implementations of slf4j on the classpath...?
> It looks like there are several implementations of slf4j on the classpath...?
/me crawls into a hole.
@mauhiz if you have time, mind running the tests on your machine?
I wanted to. I just tried to `vagrant up`, but it failed provisioning with this message:
Unexpected Exception: global name 'display' is not defined
Any idea?
Vagrant 1.7.4
ansible 2.0.0.2
Python 2.7.11
Strange, I wonder if it's related to this: https://github.com/ansible/ansible/issues/14147
My versions are:
Vagrant 1.7.4
ansible 1.8.2
Python 2.7.11
I downgraded ansible to 1.8.4 and provisioning now works. :game_die:
Can I ignore the following failure?
TASK: [td_agent | Check for fluent-plugin-elasticsearch gem] ******************
failed: [default] => {"changed": true, "cmd": "/usr/lib64/fluent/ruby/bin/fluent-gem list fluent-plugin-elasticsearch -i", "delta": "0:00:00.133794", "end": "2016-02-10 01:02:12.023123", "rc": 1, "start": "2016-02-10 01:02:11.889329", "warnings": []}
stdout: false
...ignoring
Besides this well-known test failure:
[info] RichFutureWithTiming
[info] - should not start measuring too early *** FAILED ***
[info] A timeout occurred waiting for a future to complete. Queried 11 times, sleeping 15 milliseconds between each query. (RichFutureWithTimingSpec.scala:43)
Test suite passed fine.
@mauhiz thanks. Added a longer PatienceConfig to deal with that one in c0fcf7a once and for all :smile:
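For reference, the usual ScalaTest idiom for widening the patience used by `whenReady` / `futureValue` looks roughly like this; the spec name and span values below are illustrative, not necessarily what c0fcf7a does:

```scala
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.time.{ Millis, Seconds, Span }
import org.scalatest.{ FunSpec, Matchers }

class SomeFutureBackedSpec extends FunSpec with Matchers with ScalaFutures {
  // Widen the implicit patience used by whenReady/futureValue in this suite
  implicit override val patienceConfig: PatienceConfig =
    PatienceConfig(timeout = Span(5, Seconds), interval = Span(50, Millis))

  // ... existing tests pick up this PatienceConfig automatically ...
}
```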
The rabbit hole gets deeper, as neither you nor I can reproduce the error..
@lloydmeta The change in PatienceConfig doesn't seem to apply to the tests, as I'm still experiencing timeouts that report the default wait times.
[info] PartRequestServiceSpec:
[info] #responseFor
[info] when given a PartRequest with a partId that is not supported
11:57:34.363 [ForkJoinPool-1-worker-7] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
11:57:34.521 [ForkJoinPool-1-worker-7] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
11:57:34.676 [ForkJoinPool-1-worker-7] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
[info] - should return a Future[PartResponse] with an error that mentions that the part Id is not supported *** FAILED ***
[info] A timeout occurred waiting for a future to complete. Queried 10 times, sleeping 15 milliseconds between each query. (PartRequestServiceSpec.scala:42)
[info] org.scalatest.concurrent.Futures$FutureConcept$$anon$1:
[info] at org.scalatest.concurrent.Futures$FutureConcept$class.tryTryAgain$1(Futures.scala:546)
[info] at org.scalatest.concurrent.Futures$FutureConcept$class.futureValue(Futures.scala:558)
[info] at org.scalatest.concurrent.ScalaFutures$$anon$1.futureValue(ScalaFutures.scala:74)
[info] at org.scalatest.concurrent.Futures$class.whenReady(Futures.scala:684)
[info] at com.m3.octoparts.aggregator.service.PartRequestServiceSpec.whenReady(PartRequestServiceSpec.scala:19)
@xevix I fixed it for that particular test suite. PartRequestServiceSpec (and RichFutureWithTimingSpec, for that matter) have never failed locally for me nor on Travis' free testing infrastructure due to timeouts; and they shouldn't, because neither does any form of I/O or intense computing. Under those circumstances, the default PatienceConfig for testing futures that comes with ScalaTest should be sufficient.
One cause could be too many things vying for computing resources on your machine at the same time during tests?
@lloydmeta Ok something is definitely odd. I noticed there are long pauses waiting for ForkJoinPool workers. I manually ran tests with a scale of 5x and it still wasn't enough. My machine isn't running anything CPU-intensive other than this, so I wonder if there's an issue with thread switching somehow.
> testOnly -- -F 5
... snip ...
[info] PartRequestServiceSpec:
12:30:25.970 [ForkJoinPool-1-worker-5] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
[info] #responseFor
[info] when given a PartRequest with a partId that is not supported
12:30:26.008 [ForkJoinPool-1-worker-5] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
12:30:26.765 [ForkJoinPool-1-worker-5] WARN c.m.o.w.OctoClientSpec$$anonfun$1$$anon$1 - Unexpected response status: 'null' for path whatever
[info] - should return a Future[PartResponse] with an error that mentions that the part Id is not supported *** FAILED ***
[info] A timeout occurred waiting for a future to complete. Queried 11 times, sleeping 75 milliseconds between each query. (PartRequestServiceSpec.scala:42)
With a scale of 10x all tests pass, and the logging issue goes away. So whatever causes the Future timeouts, they seem to be somehow related to triggering the logging issues.
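One caveat with the span scale factor: ScalaTest only multiplies spans wrapped in `scaled(...)` (which the built-in default patience does, hence the jump from 15 ms to 75 ms sleeps above with `-F 5`). A custom PatienceConfig that should honour `testOnly ... -- -F 10` would need to do the same; names and values below are illustrative:

```scala
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.time.{ Millis, Seconds, Span }
import org.scalatest.{ FunSpec, Matchers }

class SomeTimingSensitiveSpec extends FunSpec with Matchers with ScalaFutures {
  // scaled(...) makes these spans respond to the -F factor passed to the runner;
  // unscaled spans ignore it entirely.
  implicit override val patienceConfig: PatienceConfig =
    PatienceConfig(timeout = scaled(Span(1, Seconds)), interval = scaled(Span(15, Millis)))
}
```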
Since the initial issue this PR is addressing has been resolved, merging. Thanks!
@xevix interesting find. Also, didn't know about that flag!
:+1:
RequiresDB itself is now private to the test-time "support" package, because it has an implicit dependency on being run in a test suite that ensures there is a Play app running before its commands are run.
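A rough sketch of what that constraint can look like; the package, the OneAppPerSuite mix-in requirement, and the trait body are guesses at the shape, not the actual octoparts code:

```scala
package com.m3.octoparts.support // hypothetical location of the test-time "support" package

import org.scalatest.Suite
import org.scalatestplus.play.OneAppPerSuite

// The self-type makes the implicit dependency explicit: anything mixing this in
// must also be a suite that guarantees a running Play app before DB commands run.
private[support] trait RequiresDB { this: Suite with OneAppPerSuite =>
  // ... DB helpers that assume the app (and its ConnectionPool) are already up ...
}
```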