tsenart opened this issue 9 years ago
I have most tests passing in #159
Great! :-)
Remaining test flake, observed in #159:
```
--- FAIL: TestAuthticatee_validLogin-2 (1.00 seconds)
	Error Trace:	authenticatee_test.go:102
	Error:		Expected nil, but got: &errors.errorString{s:"Unexpected authentication 'mechanisms' received"}
<autogenerated>:29: ✅ Install()
<autogenerated>:29: ✅ UPID()
<autogenerated>:29: ✅ Start()
<autogenerated>:29: ✅ Stop()
<autogenerated>:29: ✅ Send(string,*upid.UPID,*mesosproto.AuthenticateMessage)
<autogenerated>:29: ❌ Send(string,*upid.UPID,*mesosproto.AuthenticationStartMessage)
<autogenerated>:29: ❌ Send(string,*upid.UPID,*mesosproto.AuthenticationStepMessage)
<autogenerated>:31: FAIL: 5 out of 7 expectation(s) were met.
	The code you are testing needs to make 2 more call(s).
	at: [authenticatee_test.go:105]
```
`getNewPort` in messenger_test.go irresponsibly picks random ports, which causes intermittent unit test failures.
@jdef: Are you taking this one?
I can if needed.
Hehe, no worries, I'll do it :)
The flake is back!
```
--- FAIL: TestAuthticatee_validLogin-2 (1.00s)
	Error Trace:	authenticatee_test.go:102
	Error:		Expected nil, but got: &errors.errorString{s:"Unexpected authentication 'mechanisms' received"}
<autogenerated>:29: ✅ Install()
<autogenerated>:29: ✅ UPID()
<autogenerated>:29: ✅ Start()
<autogenerated>:29: ✅ Stop()
<autogenerated>:29: ✅ Send(string,*upid.UPID,*mesosproto.AuthenticateMessage)
<autogenerated>:29: ❌ Send(string,*upid.UPID,*mesosproto.AuthenticationStartMessage)
<autogenerated>:29: ❌ Send(string,*upid.UPID,*mesosproto.AuthenticationStepMessage)
<autogenerated>:31: FAIL: 5 out of 7 expectation(s) were met.
	The code you are testing needs to make 2 more call(s).
	at: [authenticatee_test.go:105]
FAIL
FAIL	github.com/mesos/mesos-go/auth/sasl	1.019s
```
Another test flake:
```
E0923 21:32:50.601642    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0923 21:32:50.612446    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0923 21:32:50.623419    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0923 21:32:50.634278    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0923 21:32:50.645270    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: read tcp 127.0.0.1:48416->127.0.0.1:57392: use of closed network connection (Client.Timeout exceeded while awaiting headers)
E0923 21:32:50.656058    5363 slave_health_checker.go:155] Failed to request the health path: Head http://127.0.0.1:57392/slave/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
--- FAIL: TestSlaveHealthCheckerPartitonedSlave (0.16s)
	slave_health_checker_test.go:253: test server listening on 127.0.0.1:57392
	slave_health_checker_test.go:269: Shouldn't get unhealthy notification
FAIL
FAIL	github.com/mesos/mesos-go/healthchecker	6.629s
```
Currently all our tests are breaking. Let's fix them and get some proper CI in place.