Closed by sindresorhus 4 years ago
I could expand it with nock, I'm just not sure whether cleanAll is needed when testing with AVA. I guess I'm still unclear about process isolation and if it's applicable to this situation.
You wouldn't need `cleanAll` in an `after` hook. The process is going to be killed, so there's no good reason to clean up mocks at the end. `after` cleanup is only necessary for persistent state (i.e. disk state).

You might want `cleanAll` in an `afterEach` if all your tests were `serial`.
When it comes to testing real endpoints and handling concurrency, you have a few options.
Let's assume a simple social app. You have a "friends list", you can add and remove people from the list, and you can get recommendations for "friends of friends".
1. `serial` tests, and a clean application state each time. This avoids any interference and ensures no test relies on state left over from a previous one.
2. Concurrent tests, each running against its own instance of the app (which option 1 does not allow), using `get-port` to find open ports. This is probably only advisable for lightweight apps that can be spun up and down very fast.

`download` seems to have a decent test suite. We could use that as a template for our `nock` recipe. (Note that it doesn't really test an endpoint as much as mock one.)
One thing I was surprised by: that test suite doesn't use `serial`. I think that's a bit problematic. It's conceivable the user could get themselves into a situation where they are getting the wrong mock if they unwittingly add an async operation before the underlying call to `http.request`. @kevva - Thoughts?
Not sure I follow you here. How could they get the wrong mock?
All tests in the file are launched at once, with no guarantee as to how they will interleave.
It's entirely possible your order could be this:
1. `beforeEach` for test1
2. `beforeEach` for test2
3. test1
4. test2
Indeed, if you made `beforeEach` async, it probably will interleave like that:
```js
test.beforeEach(async t => {
	// ...
});
```
Another possibility: if anything in the chain (`download` -> `got` -> `http`) introduced an async operation before the `http.request` call, surprising interleaving could happen again.
Basically, `nock` is manipulating a global (`http`/`https`), so it's unsafe for non-`test.serial` tests.
I think it's certainly worth trying to write a couple of experimental tests that intentionally try to break the `download` test suite. Maybe I'm missing something, and the current use is no big deal.
If I do turn out to be right on this, it's probably worth its own recipe explaining how you need to be careful of these situations, which can introduce subtle bugs.
If I understand you correctly, the `.persist()` method in `nock` should fix this? I just added it to `download`. That'll make it totally independent of the tests running in a particular order, etc. That way you don't need to run `beforeEach`, either.
Yes. https://github.com/kevva/download/commit/16ce6399d55decbec50e2b2f3f042767b2db7b51 improves the `download` test suite. That test suite has the advantage of expecting the same thing from a particular URL each time. Things get more complicated when mocking dynamic endpoints that return different values across time. I think `.persist()` as you use it now should be the preferred method if possible, and we need to explore how best to mock dynamic endpoints (I'm guessing you need to go to `test.serial` at that point, at least when using `nock`).
The response is always the same when using `nock`, though, since no actual operations are made AFAIK. That changes if you're using a real server for mocking, though.
Except you don't always want that to be the case:
```js
import test from 'ava';
import nock from 'nock';
import socialClient from './';

test('one friend', async t => {
	nock('http://social.com').get('/friendList').reply(200, ['friendOne']);
	t.is(await socialClient.friendCount(), 1);
});

test('two friends', async t => {
	nock('http://social.com').get('/friendList').reply(200, ['friendOne', 'friendTwo']);
	t.is(await socialClient.friendCount(), 2);
});
```
Above you are requesting the same URL, but you want different replies for both.
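One way to picture the problem: each interceptor is a canned reply consumed by the next matching request, so whichever test's request lands first gets the first reply. A toy sketch of that behaviour (names invented for illustration):

```javascript
// A queue of canned replies, consumed one per request. This mimics
// one-interceptor-per-call mocking, and shows why two concurrent
// tests intercepting the same URL can receive each other's replies:
// the replies are handed out in arrival order, not per test.
function makeStub(replies) {
	return () => replies.shift();
}

const friendList = makeStub([
	['friendOne'],
	['friendOne', 'friendTwo']
]);

console.log(friendList().length); // → 1
console.log(friendList().length); // → 2
```

If the two tests' requests interleave the other way around, each gets the reply meant for the other, which is the subtle bug being discussed.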
So you want the endpoints to sort of depend on each other to replicate a real use case (like getting your friends, adding one, and then getting them again)? Then yeah, that'll be hard, but I don't know if that's how it's expected to work. Isn't the point just to test that your endpoints return the expected data, not to test the real "flow" of your API?
Yeah, I was wondering how to test the API flow with AVA, but that probably isn't related to AVA, though it would be a nice addition to the recipe.
I've only recently started testing for real, but from what I understand, nock is a tool which tests whether e.g. a function called the expected endpoint and handled the response well.
> So you want the endpoints to sort of depend on each other to replicate a real use case (like getting your friends, adding one, and then getting them again)?
No, testing a flow like that is probably best done with a real server running your real app (an integration test).
I'm thinking something like a set of tests focused on the `socialClient.listFriends()` method:
- listFriends returns correct response if person has no friends
- listFriends handles one friend
- listFriends handles two friends
- listFriends handles more than 100 friends (paging limit)
- listFriends handles a 401 (unauthorized) code by asking for credentials
None of the above tests would actually add friends using your backend API, you are just mocking the REST call, and making sure your client correctly handles a number of different possible return values from the same URL.
The `createServer` helper in `got` is a good example of creating real servers for testing.
> - listFriends returns correct response if person has no friends
> - listFriends handles one friend
> - listFriends handles two friends
> - listFriends handles more than 100 friends (paging limit)
> - listFriends handles a 401 (unauthorized) code by asking for credentials
Alright, that clears things up :). I was just assuming that you wanted one typical response from your endpoint, not testing all the possible responses, but I agree that's a good use case.
I think it's probably worth mentioning that people should structure their API so they can swap out the base URL of their endpoint:
```js
function MyGitHubApi(opts) {
	this.endpoint = opts.endpoint || 'https://api.github.com';
}

// in the tests:
new MyGitHubApi({
	endpoint: 'http://localhost:3030'
});
```
I'm sorry, it might be a bit off-topic, but I've tried to integrate nock with ava the same way as @jamestalmage is doing here https://github.com/avajs/ava/issues/849#issuecomment-224667838
And I'm getting
`1. contact › Contact can be created Error: XMLHttpRequest is not supported by your browser`
Do you have any suggestions? I've tried to integrate https://github.com/avajs/ava/blob/master/docs/recipes/browser-testing.md but it's probably a wrong direction.
I made a minimal test demoing a problem I've run into https://github.com/morenoh149/avaNockParallel
@jamestalmage @silvenon @sotojuan what do you advise I do regarding the approach? should I use https://github.com/mmalecki/hock ?
@morenoh149 One approach is just to make sure interceptors are unique in tests that run in parallel in the same node process. I realize this may not always be straightforward for all APIs, but it should handle a large set of cases. At least for REST APIs you can do `.get('/widgets/exists')` and `.get('/widgets/doesnotexist')` to test cases in parallel.
@razor-x In that vein, it'd be nice if AVA exposed a unique testId in `t.context` or similar.
It would be nice to have a better recipe for actual endpoint testing. I recently tried https://github.com/morenoh149/avaNockParallel/ after changing the URL, and every time I got "message: 'Promise returned by test never resolved'". I have also tried https://gist.github.com/seacloud9/38f79278d62ed8ceb2370f137d6c42ad. It would be extremely helpful to have an endpoint recipe that currently runs. After looking at nock, I think I am going to try https://github.com/dareid/chakram. Does anyone have a chakram test that works with axios or apisauce? That would be very helpful.
Hey guys, I struggled with endpoint testing (using supertest with Mongomem and Mongoose) according to the docs. Discovered that there's a much much simpler way of doing things. Here's a snippet of what I ended up with:
```js
import test from 'ava'
import request from 'supertest'
import { MongoDBServer } from 'mongomem'
import { setupMongoose, setupFixtures, removeFixtures } from '../utils'

test.before('start server', async t => await MongoDBServer.start())

test.beforeEach(async t => {
  const db = await setupMongoose()
  const app = require('../../server')
  await setupFixtures()
  t.context.app = app
  t.context.db = db
})

test.afterEach.always(async t => {
  const { db } = t.context
  await removeFixtures()
  await db.connection.close()
})

test.serial('litmus get test', async t => {
  const { app } = t.context
  const res = await request(app)
    .get('/test1')
    .send({
      email: 'testing@gmail.com'
    })

  t.is(res.status, 200)
  t.is(res.body.name, `test`)
})

test.serial('litmus post test', async t => {
  const { app } = t.context
  const res = await request(app)
    .post('/test2')
    .send({
      email: 'test2@gmail.com'
    })

  t.is(res.status, 200)
  t.is(res.body.name, 'oohlala')
})

test.after.always('cleanup', t => MongoDBServer.tearDown())
```
Would you like me to write some docs on this? I'd love to! Just let me know.
I can't read that image on my machine.
Edited with a snippet instead of a screenshot
@zellwk we're always happy to see improved docs! It's probably easier to discuss improvements with a PR though, so there's some context to them.
Added PR! :)
Would a solution to this issue be something like:
https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai
Thus far I've found ava, puppeteer, and live-server work (live-server because file:/// URLs don't work in general). Early days, but so far so good. I haven't tried headless-karma-mocha-chai yet, but https://github.com/avajs/ava/issues/1619#issuecomment-356869077 seems to be a good approach for puppeteer.
I gotta say real in browser testing like headless-karma-mocha-chai suggests, but with ava, would be SOOO abs fab! I'm not clear, however, on what the testing ecology requires to get ava running via karma.
@backspaces I think this issue relates to API endpoints more than browser behavior, though Karma might be one way to validate HTML responses. Recipe improvements are always welcome!
We'd also welcome work on https://github.com/avajs/karma-ava, though actually running AVA's test interface inside a browser is not a priority.
We already have the Endpoint testing recipe, but it's not very comprehensive. Testing endpoints is such a common task, I think we should expand on it.
Like using `nock`/`chakram`, testing against a real endpoint (and how to handle concurrency), etc.