cypress-io / cypress

Fast, easy and reliable testing for anything that runs in a browser.
https://cypress.io
MIT License
46.86k stars, 3.17k forks

Feature Request: fuzzing #1090

Open wildaces215 opened 6 years ago

wildaces215 commented 6 years ago

Feature

Nothing that I saw in the documentation

Be able to call a line of code that enables random inputs for whatever it is the test is testing
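For illustration, a seeded random-input generator along these lines could look like the following plain-JavaScript sketch (the helper names `makeRng` and `randomInput` are hypothetical, not part of any Cypress API; seeding makes failing runs reproducible):

```javascript
// Mulberry32: a tiny deterministic PRNG, so the same seed replays the same inputs.
function makeRng(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Produce a random string to type into a field under test.
// The character set deliberately includes markup and quote characters.
function randomInput(rng, maxLen = 20) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789<>\'"&%{}';
  const len = 1 + Math.floor(rng() * maxLen);
  let out = '';
  for (let i = 0; i < len; i++) {
    out += chars[Math.floor(rng() * chars.length)];
  }
  return out;
}

const rng = makeRng(42);
console.log(randomInput(rng)); // deterministic for a given seed
```

A test could then call something like `cy.get('input').type(randomInput(rng))`, logging the seed so a failure can be replayed.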

jennifer-shehane commented 6 years ago

This is not currently on our roadmap for implementing in Cypress. Will leave open to gauge community interest in this feature.

bahmutov commented 6 years ago

I would like a more concrete proposal: what do you mean by fuzzing? Generating multiple tests with different data permutations? Entering random input into a specific field? Randomly pressing buttons?

wildaces215 commented 6 years ago

Yes, multiple tests and random data. I believe it can help make testing better and make the system more secure, so that hackers can't get into it. Besides that, it helps white-hat hackers patch bugs.

acthp commented 6 years ago

Been trying to get this working with jsverify, which derives more from the generative testing community (via Hughes' quickcheck) rather than fuzzing: the main difference being it uses the test results to explore the input space, vs. doing uncorrelated runs of random inputs.

I feel that this is critical for effective testing, because experience shows it's impossible to predict what the failing test cases will be. Fixed tests are very limited. Generated tests cover a dramatically larger portion of the input space.
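To make the distinction concrete, here is a toy quickcheck-style loop in plain JavaScript (independent of Cypress and jsverify, and much simpler than jsverify's actual algorithm): it generates random inputs, and on failure shrinks toward a minimal counterexample instead of just reporting the first random failure.

```javascript
// Property under test (a deliberately false claim, so a counterexample exists):
// the sum of any generated array is below 100.
function property(arr) {
  return arr.reduce((a, b) => a + b, 0) < 100;
}

// Generator: arrays of up to 9 numbers in [0, 50).
function randomArray(rng) {
  const len = Math.floor(rng() * 10);
  return Array.from({ length: len }, () => Math.floor(rng() * 50));
}

// Shrink a failing input by dropping one element at a time,
// keeping any smaller input that still fails the property.
function shrink(arr) {
  let current = arr;
  let progress = true;
  while (progress) {
    progress = false;
    for (let i = 0; i < current.length; i++) {
      const candidate = current.slice(0, i).concat(current.slice(i + 1));
      if (!property(candidate)) { current = candidate; progress = true; break; }
    }
  }
  return current;
}

// The main loop: run random inputs, and on failure report a minimal one.
function check(runs, rng) {
  for (let i = 0; i < runs; i++) {
    const input = randomArray(rng);
    if (!property(input)) {
      return { failed: true, minimal: shrink(input) };
    }
  }
  return { failed: false };
}
```

For example, `check(100, Math.random)` will usually find a failing array and shrink it to a two-element counterexample. The difficulty described below is that the "run one input, observe failure, try the next" step is exactly what Cypress's command queue resists.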

The problem I'm hitting with cypress is that there appears to be no way to checkpoint (e.g. catch) a set of commands. I can run randomly generated sequences of user actions. On failure jsverify should then isolate the cause of failure by trying related sequences, however I haven't been able to get cypress to notify of failure and then run the next sequence.

To start, the .then() method will not accept an error callback, and there's no .catch(). I understand that this is partly due to the retry mechanisms, which makes sense. Desired behavior is to, on failure, abandon the remaining actions and queue the next sequence.

There is a 'fail' event that can be caught, which might be the right place, but I haven't found a way to use it to continue with the next sequence of cy actions. I'm currently experimenting with the cy.fail and next methods, but I don't yet understand the promise chaining and error handling well enough to make it work.

brian-mann commented 6 years ago

You would use a done callback in the mocha test, it(..., (done) => { ... }), along with a cy.on('fail', (...)) handler in order to queue more commands, and when everything finishes call done()

acthp commented 6 years ago

@brian-mann That looks promising. It's not quite working for me, though.

Here's my test case. Is this what you had in mind?

describe.only('throw test', function () {
    it('should fail on throw', function() {
        cy.wrap(true).then(() => {
            throw new Error('blah');
        });
    });
    it('should recover from throw', function(done) {
        cy.wrap(true).then(() => {
            cy.once('fail', () => {
                console.log('error');
                cy.wrap(true).then(() => {
                    console.log('command after error');
                    done();
                });
            });
            throw new Error('blah');
            console.log('should not run');
        });
    });
});

The second case will log 'error', but not 'command after error'. The test runner is hung at this point, with no commands running and the timer still counting.

brian-mann commented 6 years ago

Yeah, I think the runner is built to stop running commands on fail. You can handle the error and prevent the test from failing, but you can't run more commands. There's not necessarily a reason why it shouldn't do that; we just didn't account for this case, because you know our stance on recovering from failures and conditional testing. It could likely be accommodated, though.

acthp commented 6 years ago

Ah, yeah. I was trying to find the mechanism that halts commands after fail, but I don't really understand how the commands are run. I tried stubbing out cy.fail, and changing it to invoke next, but these didn't work. The way the runner hangs (i.e. it doesn't fail the test case, but keeps counting time) makes me think there's an unresolved promise, but I'm not sure how to find it.

brian-mann commented 6 years ago

When you use a done function and you prevent Cypress from failing the test - it no longer times out or really tries to do anything at that point. If you don't call the done function then nothing will ever happen. It was too much work to account for this and without an actual use case it wasn't worth the trouble.

brian-mann commented 6 years ago

As soon as you call the done function then the test will pass and move on.

acthp commented 6 years ago

ah, ok. Do you mean Cypress, or mocha, is introspecting the function to see if it has a done parameter, and will not complete the test in that case, until done is called?

bahmutov commented 6 years ago

Yes, mocha is looking at the number of arguments in the callback function. If there is even one, it thinks the test is async and that it's a done callback.
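In plain JavaScript, that arity check is just Function.prototype.length:

```javascript
// Mocha-style detection: a test callback declaring a parameter is treated
// as asynchronous, and the framework waits for done() to be called.
const syncTest = function () { /* no params: run synchronously */ };
const asyncTest = function (done) { done(); };

console.log(syncTest.length);  // 0 -> run synchronously
console.log(asyncTest.length); // 1 -> wait for done() to be called
```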

brian-mann commented 6 years ago

It's actually Cypress that does this. We override mocha's behavior, but it ends up being similar, just without a timeout. We handle async errors 100x better.

Cypress checks to see if there is a done, and if it's not called the test will never end. This is very complex, because by default we juggle timeouts per command and do not have an overall test timeout, which is what Mocha does.

brian-mann commented 6 years ago

The logic starts here although it ends up affecting other areas: https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cypress/cy.coffee#L935

acthp commented 6 years ago

@brian-mann Can you give me any hints on how the action queue works? Specifically, how is it consumed, and how does it fail? My impression is that a promise's then callback calls next to advance to the next promise in the queue, and that failure is handled by a catch that resets various pieces of state and does not call next. Is that roughly correct? I tried to verify this by changing cy.fail to call next, but this still doesn't get past the thrown error: it logs an unhandled promise rejection error, and still shuts down the runner. The call to next then fails because it's "outside" a test case.

brian-mann commented 6 years ago

It's honestly all in that file. You would need to clone our repo and build (and watch) the driver. Throw in some debugger statements and follow the logic in order to know what to change.

Scanning that file I see this line: https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cypress/cy.coffee#L545

This is what's causing Cypress to bail and not move on. It's essentially saying - okay you told me not to fail, so I have nothing more to do.

This line here https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cypress/cy.coffee#L508 turns off running more commands. That also likely needs to change.

There are a lot of tests around this behavior, and it's very difficult to test because it's like testing the test mode itself. We have a lot of e2e tests across many files that cover various aspects of the way this works.

acthp commented 6 years ago

I'm unable to figure this out. Maybe a different tack would be to look at how cypress handles the boundary between it() calls, because it's the same required functionality. What happens to the queue at that point? I'm looking for interactions between mocha and cypress, but not finding them. Alternatively, if there were a way to dynamically register test cases in mocha, or read test cases from an iterator, that might also resolve the issue, but this doesn't seem to be supported by mocha.

acthp commented 6 years ago

Just tried another workaround: mocha allows re-running a test case on failure via this.retries(). It can be manipulated at run time, so I can keep bumping the number up while test cases are generated. However, the code doesn't work under Cypress. I guess Cypress doesn't support mocha's this.retries().

brian-mann commented 6 years ago

@acthp we are actually implementing retrying in Cypress-land - the way Mocha does it doesn't really work that well (it was a bolt on feature) and it's not compatible with how we display our UI and/or capture results from the runs.

brian-mann commented 6 years ago

You might be able to use a non-documented aspect of Cypress which is cy.now(commandName, args...)

In this mode, it does not enqueue the command; instead it immediately invokes it. This will return you a real Bluebird promise, to which you can then add your own catch handling. This might work along with using a done callback.

You might be able to use a mixture of regular cy commands (for when you don't need manual error handling) and then also cy.now(...) when you want to implement error handling yourself.

acthp commented 6 years ago

I have something that appears to be working. On a couple of injected bugs, it was able to return minimal failing test sequences by executing a random walk, and narrowing the failure case.

The technique I'm using is to call Cypress.Commands.overwrite to append a catch, which then uses cy.state and cmd.skip to skip all the remaining queued commands, up to a marker command. At that point, new commands can be queued.

I added a cy.softerror call that enables the catch, and a cy.recover that marks the end of the section to be skipped, and returns any caught error.

Code here:

// Wrap every built-in command so that, in "softerror" mode, a failure
// skips the rest of the queued commands up to the next cy.recover() marker.
Cypress.Commands.each(({name}) =>
    Cypress.Commands.overwrite(name, (fn, ...args) => {
        if (cy.state('softerror')) {
            return fn(...args).catch(err => {
                // Skip every queued command up to (but not including) 'recover'.
                var n = cy.state('current').get('next');
                while (n && n.get('name') !== 'recover') {
                    n.skip();
                    n = n.get('next');
                }
                // Stash the error so cy.recover() can yield it later.
                cy.state('abort', err);
                Cypress.log({
                    displayName: `softerror(${name})`,
                    consoleProps: () => ({err})});
            });
        } else {
            return fn(...args);
        }
    }));

// Enable the catch behavior for subsequent commands.
Cypress.Commands.add('softerror', () => {
    cy.state('softerror', true);
});

// Marks the end of the skippable section; yields any caught error.
Cypress.Commands.add('recover', () => {
    var s = cy.state('abort');
    cy.state('abort', false);
    return cy.wrap(s);
});

// Root mocha hook: reset soft-error mode before each test.
beforeEach(function() {
    cy.state('softerror', false);
});

I'm not certain I'm using skip correctly, but it looked promising, and seems to work.

acthp commented 6 years ago

Hitting another issue, now, which I'm pretty sure is memory usage. When I increase the number of generated sequences, the tab eventually dies, with chrome's 'something went wrong' page.

Cypress has options to reduce memory usage after a test, but I don't see any way to cleanup within a long test. For this use case, we need some control over exactly what memory to clear, because it's not usually the last sequence that's interesting.

Looks like I need to get access to the RUNNABLE_LOGS props.

brian-mann commented 6 years ago

Yeah this is a known, longstanding issue we should get resolved. It's being partly addressed in 2.0.0

acthp commented 6 years ago

Is there more to this than the data cached in the runnable? I wrote something to clear those, similar to cleanupQueue, but the browser still runs out of memory. A heap dump is showing a lot of data on promises.

acthp commented 6 years ago

@brian-mann Can you provide any details on what 2.0.0 will address, and in what time frame?

Spending more time in the heap dumps, one large issue appears to be that all the CSS is duplicated on every command, or something like that. I was surprised it wasn't a single instance of each, but looking at the code I suspect this is because the text is being modified via replace in snapshots, which causes a new string to be created each time. I suspect I can work around this by memoizing String.prototype.replace during the test case.
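A rough sketch of that memoization idea (a hypothetical workaround, not anything Cypress provides): patch String.prototype.replace so identical string-for-string substitutions return a cached result instead of allocating a new string on every snapshot rewrite, while delegating regexes and function replacers to the original implementation.

```javascript
// Cache keyed on (input, pattern, replacement) for the string/string case only.
const cache = new Map();
const origReplace = String.prototype.replace;

String.prototype.replace = function (pattern, replacement) {
  if (typeof pattern === 'string' && typeof replacement === 'string') {
    const key = this + '\u0000' + pattern + '\u0000' + replacement;
    if (!cache.has(key)) {
      cache.set(key, origReplace.call(this, pattern, replacement));
    }
    return cache.get(key);
  }
  // Regex or function replacer: fall back to the built-in behavior.
  return origReplace.call(this, pattern, replacement);
};

// Two identical snapshot rewrites now share one cache entry.
const a = 'body { color: red }'.replace('red', 'blue');
const b = 'body { color: red }'.replace('red', 'blue');

// Restore the original once the run is done; patching a built-in
// prototype globally is risky and should be scoped as tightly as possible.
String.prototype.replace = origReplace;
```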

Also seeing xhr request and response bodies accumulating in the xhrs variable. I might be able to intern these manually via cy.route.

Finally, seeing some large, duplicated DOM html strings being allocated, which I can't explain, like

<div data-danger-index="0"  style="opacity:1;text-decoration:none;color:#FFFFFF;" data-reactid=".1.1.1.1.$/=10.1.0.$9"><div ...

So far I'm unable to track down what's holding these allocations.

codingedgar commented 4 years ago

@acthp did you find a way for softerror to work?

acthp commented 4 years ago

@edgarjrg nope. Due to the memory issue I ended up moving the property-testing loop outside of the Cypress run, e.g. in an outer mocha instance, generate a random Cypress test case, then invoke Cypress and collect the results. It's slower, but not too slow.

codingedgar commented 4 years ago

@acthp I've been trying to connect fast-check with Cypress but had no success. The fast-check folks also have an open issue (https://github.com/dubzzz/fast-check/issues/253) to try to demonstrate generative testing with Cypress, Selenium, etc., but no results so far.

spicemix commented 3 years ago

...thought I'd share a couple of Cypress commands I wrote quickly that may be useful to others; they also demonstrate TypeScript commands:

npm i -D big-list-of-naughty-strings
npm i -D @thisshu/bad-words

// cypress/support/commands.ts

Cypress.Commands.add('naughtyString', () =>
  cy.fixture('../../node_modules/big-list-of-naughty-strings/blns.json')
    .then((blns: string[]) => blns[Math.floor(Math.random() * blns.length)])
);

Cypress.Commands.add('profanity', () =>
  cy.fixture('../../node_modules/@thisshu/bad-words/lib/lang.json')
    .then(
      (bads: any) => bads.words[Math.floor(Math.random() * bads.words.length)]
    )
);

// cypress/support/index.d.ts
/// <reference types="cypress" />

declare global {
  namespace Cypress {
    interface Chainable<Subject = any> {
      naughtyString(): Chainable<string>;
      profanity(): Chainable<string>;
    }
  }
}

// cypress/integration/naughty.spec.ts

describe('Naughty Strings', () => {
  it('should generate naughty strings', () => {
    for (let i = 0; i < 20; i++) {
      cy.naughtyString().then((ns) => console.log(ns));
    }
    for (let i = 0; i < 20; i++) {
      cy.profanity().then((ns) => console.log(ns));
    }
  });
});

enjoy