wallabyjs / public

Repository for Wallaby.js questions and issues
http://wallabyjs.com
758 stars 45 forks

"One of your tests is taking too long to execute" with no details #141

Closed endragor closed 3 years ago

endragor commented 9 years ago

Sometimes Wallaby.js outputs this warning in the "Wallaby Console" tab:

One of your tests is taking too long to execute, check for possibly recently introduced infinite loops.

And it does not provide any details as to which of the tests is taking too long. Except this one line, everything looks normal, and there are no failing tests under "Failing Tests" tab.

Would love if Wallaby.js provided more details about the problematic test. Also, I'm sure none of our tests have infinite loops and all of them should be executed fast enough, so this might be some issue with Jest integration.

ArtemGovorov commented 9 years ago

Could you please turn debug: true on, and the next time you see the issue, have a look at which test was executed last (executed test names are displayed). The test after the last executed one in the test file should be the one wallaby has issues with. We'll try to improve the details reporting.
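For reference, the flag goes into the object returned from the wallaby config file. A minimal sketch, where the file patterns are placeholders and not taken from this thread:

```javascript
// wallaby.js - minimal config sketch; the files/tests patterns below are
// hypothetical placeholders, only the debug flag is the point here.
module.exports = function () {
  return {
    files: ['src/**/*.js'],
    tests: ['test/**/*Spec.js'],
    debug: true // prints executed test names to the Wallaby Console
  };
};
```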

endragor commented 9 years ago

In the debug log there is a "Test executed" line for every test we have (the number of such lines matches the total number of tests), so it does not seem that any particular test is failing to execute. I can reliably reproduce this on my project every time I start a Wallaby configuration. When I change code with a configuration that is already running, the error does not seem to occur.

ArtemGovorov commented 9 years ago

there is "Test executed" line for every test we have

This is a helpful note. I can't reproduce it yet, but I have tried something in 1.0.58 that may actually fix it. Please try the new version and let me know how it goes.

If the issue is still there, then I have a quick question: do you have async tests or just sync ones? Also, is there a chance you could reduce the number of tests by excluding them file by file, to see if the issue is caused by some specific test or just by the number of tests?

endragor commented 9 years ago

Still reproducible with 1.0.58. We have async tests that make use of Jasmine's done callback, and a couple of tests that use jest.runAllPendingTimers() (Jest mocks timer functions and provides helper functions to simulate time advancement). However, when I tried to remove all those tests, the issue was still there.

It seems that no particular test is causing it: I tried modifying the test pattern to gradually include more and more tests, and the warning started to occur after I reached around ~60 tests. When I left only the pattern that started causing the warning and removed everything else, the warning disappeared.

ArtemGovorov commented 9 years ago

I have been trying various things to reproduce the issue and hit a different error, but at ~60 tests as well. I'm not sure what exactly was causing it; most likely I had installed jest-cli using a different node version from the one I'm using to run my tests. After re-installing jest-cli with the same version of node.js that is specified as the runner in env.runner, the issue I had disappeared. It's not likely to help, but could you try re-installing yours just in case?

ArtemGovorov commented 9 years ago

Ok, found out what the issue was in my case: I'm using io.js, and when I install jest-cli with it and use it as a runner for my tests, it fails when I have 64 tests. So this passes:

jest.dontMock('sum/sum');

describe('sum', function() {

  for(var i = 0; i < 63; i++){
    it('adds 1 + 2 to equal 3' + i, function() {
      var sum = require('sum/sum');
      expect(sum(1, 500000)).toBe(500001);
    });
  }
});

But changing the number of tests to 64 makes it fail. When I use 0.10.x node, it all works. Are you still using v0.10.38, or have you changed the version?

endragor commented 9 years ago

It's still v0.10.38. This issue is quite minor, since all the tests are still executed and reported properly. Improving the warning message might at least help to identify the test(s) that are "taking long" in the eyes of Wallaby.

Busata commented 8 years ago

Any details on why wallaby considers a test to be taking too long? We're trying to integrate it into our existing project, and there are quite a lot of seemingly simple tests that it fails for.

ArtemGovorov commented 8 years ago

@Busata Could you please create a sample GitHub repository with your config and a couple of tests where I could reproduce the issue so I may have a look?

kwonoj commented 8 years ago

I'm experiencing this as well, but only if I specify PhantomJS2 via the env configuration:

env: {
  runner: "node_modules/karma-phantomjs2-launcher/node_modules/phantomjs2-ext/lib/phantom/phantomjs"
}

It'd be good if I could at least find out which test case is stuck, to work around it by disabling those tests temporarily.

ArtemGovorov commented 8 years ago

@kwonoj Until we address the issue better, you may set debug: true in your wallaby config, see what tests were executed, and find the last executed test. The test that follows the last executed one in its test file is the problematic one.
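With debug: true on, wallaby prints lines like "Test executed: &lt;name&gt;" (the debug log shown later in this thread has this format). Picking the last executed test out of the log could be sketched like this; the parsing is a hypothetical helper, not part of wallaby:

```javascript
// Sketch: extract the last executed test name from wallaby debug output.
// Relies on the "Test executed: <name>" lines wallaby prints when debug
// is enabled; returns null if no test was executed at all.
function lastExecutedTest(logLines) {
  const executed = logLines
    .filter(line => line.includes('Test executed:'))
    .map(line => line.split('Test executed:')[1].trim());
  return executed.length ? executed[executed.length - 1] : null;
}
```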

kwonoj commented 8 years ago

I wish that worked, but more weirdly, setting the debug flag makes the whole editor (VS Code) hang completely before it even reaches the stage of actually starting tests. (It stops at some point around preprocessing.)

ArtemGovorov commented 8 years ago

@kwonoj Could you please try to see if it works in other editors? Is this an open source project I may try (or one you may share/email)? If it's not something that you may share, could you share your wallaby config file?

Another thing I may suggest is to edit your tests list and include your test files one by one (or in larger chunks) to find out which test file is causing the issue. Once the file is found, you may do a similar thing to find out which test is causing the issue. Please let me know if you find it out; I would love to understand the reason why it's failing in PhantomJs2.
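The file-by-file narrowing suggested above is essentially a binary search over the test file list. A hypothetical helper, where the reproduces callback stands in for "run wallaby against just these files and observe the hang" and a single culprit file is assumed:

```javascript
// Hypothetical bisection helper (not a wallaby feature): narrow a list of
// test files down to the single file that reproduces an issue, assuming
// reproduces(subset) is true exactly when the culprit is in the subset.
function bisect(files, reproduces) {
  let candidates = files;
  while (candidates.length > 1) {
    const half = candidates.slice(0, Math.ceil(candidates.length / 2));
    // Keep the half that still reproduces the issue, drop the other.
    candidates = reproduces(half) ? half : candidates.slice(half.length);
  }
  return candidates[0];
}
```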

kwonoj commented 8 years ago

Unfortunately it's proprietary, not OSS, so I won't be able to share the code. I'll try to isolate it as suggested and wrap up the config to share via a gist.

okonet commented 8 years ago

I've also started seeing this on a project that worked successfully before. Running tests in the CLI works without any problems (all green, no delays, etc.). I use mocha and core v1.0.167.

ArtemGovorov commented 8 years ago

@okonet Are tests still passing with the message?

okonet commented 8 years ago

It's hard to say. The wallaby console says they do, but in the UI there is no indication whatsoever. I'm using IDEA BTW.

okonet commented 8 years ago

It looks like it hangs after executing 350 tests. I have 438 tests in total.

ArtemGovorov commented 8 years ago

Does it always happen or just sometimes? The warning is displayed when wallaby doesn't get a result from the next test within a few seconds. The logic of displaying it didn't change, so I'm not sure what could be causing the issue. The best I can suggest is to review the changes committed after the time when it worked ok and try to isolate the problematic test. If you manage to isolate the test, it would be great if you could share its code, so I could see if I could address it in wallaby core somehow.
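The detection described above (flag a test once no result has arrived within a threshold) can be illustrated with a minimal sketch. The shape of the result records here is assumed for illustration, not wallaby's actual data model:

```javascript
// Illustrative sketch only (not wallaby's implementation): given recorded
// per-test durations, report the names of tests that exceeded a threshold,
// which is roughly what the "taking too long" warning is based on.
function findSlowTests(results, thresholdMs) {
  return results
    .filter(result => result.durationMs > thresholdMs)
    .map(result => result.name);
}
```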

ArtemGovorov commented 8 years ago

There are a few differences between running tests in the CLI and in wallaby.js that may cause issues: wallaby collects code coverage, runs tests in parallel (so the order of execution is not the same as in the CLI runner), etc.
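One way a different execution order can break otherwise-green tests is shared mutable state between tests. An illustrative sketch, not taken from any reporter's project (the login/dashboard names are made up):

```javascript
// Illustrative only: two "tests" that share mutable state pass in one
// order and fail in another, which is one way reordered or parallel
// execution surfaces issues that a sequential CLI run never hits.
function makeSuite() {
  let loggedIn = false; // shared state between the two tests
  return {
    login: () => { loggedIn = true; return loggedIn; },
    dashboard: () => loggedIn === true // implicitly relies on login() running first
  };
}
```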

I'd really love to get to the root of the issue (though I'm not sure there's a single reason for all reported cases) and fix it if it's a wallaby core issue, so if anyone can provide a sample where I could reproduce it, that would be just awesome.

ArtemGovorov commented 8 years ago

In the latest core v1.0.171, I have added problematic test name reporting, so now the warning message should tell you which test it is hanging on. I have also added an additional sandbox health check for this case to try to find out and report the possible nature of the issue.

okonet commented 8 years ago

v1.0.171 does not display the warning at all anymore... :/

I see phantomjs at around 100% CPU load in Activity Monitor for a few minutes following by its crash:

phantom stderr: PhantomJS has crashed. Please read the crash reporting guide at https://github.com/ariya/phantomjs/wiki/Crash-Reporting and file a bug report at https://github.com/ariya/phantomjs/issues/new with the crash dump file attached: /tmp/A7AC5EBB-FAEF-4BCB-801C-9934C8AC2850.dmp

Wed, 03 Feb 2016 09:43:41 GMT wallaby:phantomRunner Signal killed phantomjs #0: null, exit code: 1

ArtemGovorov commented 8 years ago

v1.0.171 does not display the warning at all anymore... :/

Such a scenario is possible, because wallaby now handles it differently to capture the problematic test name, and it looks like in your case it is not a specific test causing the issue. It may be something before the tests are started, like an issue with modules imported before the describe in one of the specs.

I see phantomjs at around 100% CPU load phantom stderr: PhantomJS has crashed.

It means it's stuck somewhere and eventually runs out of memory. Given that your tests pass when you run from the CLI, it could be something with code instrumentation or wallaby-webpack compilation (a cyclic module reference?).
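A cyclic module reference of the kind suspected here can be found with a depth-first search over the module dependency graph. An illustrative sketch, not wallaby's actual diagnostics:

```javascript
// Illustrative sketch: detect a cycle in a module dependency graph
// (adjacency map of module name -> array of required module names)
// using depth-first search. Returns the cycle path, or null if acyclic.
function findCycle(graph) {
  const visiting = new Set(); // nodes on the current DFS path
  const done = new Set();     // nodes fully explored, known cycle-free
  let cycle = null;
  function visit(node, path) {
    if (cycle || done.has(node)) return;
    if (visiting.has(node)) {
      // Back-edge found: slice out the cyclic portion of the path.
      cycle = path.slice(path.indexOf(node)).concat(node);
      return;
    }
    visiting.add(node);
    for (const dep of graph[node] || []) visit(dep, path.concat(node));
    visiting.delete(node);
    done.add(node);
  }
  for (const node of Object.keys(graph)) visit(node, []);
  return cycle;
}
```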

Could you please share your wallaby config file?

okonet commented 8 years ago

var wallabyWebpack = require('wallaby-webpack');
var babel = require('babel');
var webpackConfig = require('./webpack.config.test');
var babelConfig = JSON.parse(require('fs').readFileSync(require('path').join(__dirname, '.babelrc')));

process.env.BABEL_ENV = 'test'; // Set babel into test env to disable HMR
babelConfig.babel = babel;

module.exports = function(wallaby) {

    return {
        debug: true,
        // set `load: false` to all of source files and tests processed by webpack
        // (except external files),
        // as they should not be loaded in browser,
        // their wrapped versions will be loaded instead
        files: [
            {pattern: 'node_modules/phantomjs-polyfill/bind-polyfill.js', instrument: false},
            {pattern: 'node_modules/intl/dist/Intl.js', instrument: false},
            {pattern: 'node_modules/react/dist/react-with-addons.js', instrument: false},
            {pattern: 'node_modules/sinon/pkg/sinon.js', instrument: false},
            {pattern: 'node_modules/chai/chai.js', instrument: false},
            {pattern: 'bower_components/jquery/jquery.js', instrument: false},
            {pattern: 'node_modules/chai-jquery/chai-jquery.js', instrument: false},
            {pattern: 'node_modules/chai-react/chai-react.js', instrument: false},
            {pattern: 'node_modules/sinon-chai/lib/sinon-chai.js', instrument: false},
            {pattern: 'node_modules/angular/angular.js', instrument: false},
            {pattern: 'node_modules/angular-mocks/angular-mocks.js', instrument: false},

            {pattern: 'js/plugins/**', ignore: true},

            //  CSS and other file types
            {pattern: 'css/**/*', load: false, instrument: false},
            {pattern: 'js/**/*.css', load: false, instrument: false},
            {pattern: 'js/**/*.scss', load: false, instrument: false},
            {pattern: 'config/*', load: false, instrument: false},
            {pattern: 'js/**/*.json', load: false, instrument: false},
            {pattern: 'js/**/*.html', load: false, instrument: false},

            {pattern: 'js/**/*.js', load: false},
            {pattern: 'js/**/*.jsx', load: false},
            {pattern: 'js/**/__tests__/*spec.*', ignore: true}
        ],

        tests: [
            {pattern: 'js/**/__tests__/*spec.*', load: false}
        ],

        compilers: {
            'js/**/*.js*': wallaby.compilers.babel(babelConfig)
        },

        testFramework: 'mocha@2.0.1',

        postprocessor: wallabyWebpack(webpackConfig),

        bootstrap: function() {
            /* eslint-disable */
            var mocha = wallaby.testFramework;
            mocha.ui('bdd');
            window.expect = chai.expect;
            var should = chai.should();

            // required to trigger tests loading
            window.__moduleBundler.loadTests();
            /* eslint-enable */
        }
    };
};

ArtemGovorov commented 8 years ago

May I also see your webpack.config.test.js file?

okonet commented 8 years ago

webpack.test.config extends the base one, so here are both files (base first, then test):

var path = require('path');
var webpack = require('webpack');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var isProduction = process.env.NODE_ENV === 'production';

module.exports = {
    entry: {
        'default': ['./js/index.js'],
        mappings: ['./js/index.js', './js/modules/ColumnMappings/index.js'],
        dashboards: ['./js/index.js', './js/modules/Dashboards/index.js'],
        dashboard: ['./js/index.js', './js/modules/Dashboard/index.js'],
        reportBuilder: ['./js/index.js', './js/reportBuilder.js'],
        management: [
            './js/index.js',
            './js/angular/mappings',
            './js/angular/kpiCreator',
            './js/transform'
        ],
        'import': [
            './js/index.js',
            './js/angular/import/importFile',
            './js/angular/import/mapping',
            './js/angular/import/selectRules',
            './js/angular/import/selectSeparators',
            './js/angular/import/summary',
            './js/angular/import/mediaPlanUpload',
            './js/angular/import/dataImportShow'
        ],
        'api': [
            './js/index.js',
            './js/angular/api/createEndpoints',
            './js/angular/api/testAdWordsEndpoint',
            './js/angular/api/testFacebookAdsEndpoint',
            './js/angular/api/googleOAuth',
            './js/angular/api/fbLogin'
        ],
        transform: ['./js/index.js', './js/transform'],
        plan: ['./js/index.js', './js/plan'],
        administration: ['./js/index.js', './js/administration'],
        login: ['./js/login']
        // playground: ['./playground/index']
    },
    output: {
        path: path.join(__dirname, 'assets'),
        filename: '[name].js',
        chunkFilename: '[name].[id].js'
    },
    plugins: [
        new webpack.ProvidePlugin({
            'angular': 'angular'
        }),
        new webpack.DefinePlugin({
            _PRODUCTION_: isProduction,
            'process.env': {
                NODE_ENV: JSON.stringify(process.env.NODE_ENV)
            }
        }),
        new webpack.ResolverPlugin(
            new webpack.ResolverPlugin.DirectoryDescriptionFilePlugin('bower.json', ['main'])
        ),
        new webpack.optimize.OccurenceOrderPlugin(),
        new webpack.IgnorePlugin(/^\.\/lang/, /moment$/), // Ignore the /locale context completely to prevent moment of loading optional lang dependencies for moment
        new ExtractTextPlugin('[name].[contenthash].css', { disable: !isProduction }), // Disable ExtractTextPlugin in dev to enable CSS hot reloading

        // Common chunk for login pages
        new webpack.optimize.CommonsChunkPlugin({
            name: 'login.common',
            chunks: ['login']
        }),

        // Common chunk for app
        new webpack.optimize.CommonsChunkPlugin({
            name: 'app.common',
            chunks: ['dashboards', 'dashboard', 'reportBuilder']
        }),

        // Common chunk for admin pages
        new webpack.optimize.CommonsChunkPlugin({
            name: 'admin.common',
            chunks: ['default', 'mappings', 'import', 'management', 'transform', 'administration', 'plan']
        }),

        // Shared code between app and admin chunks
        new webpack.optimize.CommonsChunkPlugin({
            name: 'common',
            chunks: ['app.common', 'admin.common']
        })
    ],
    postcss: function() {
        return [
            require('precss'),
            require('postcss-calc'),
            require('autoprefixer')({ browsers: ['last 2 version', 'ie >= 10'] })
        ];
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                exclude: /(node_modules|bower_components|plugins)/,
                loader: 'babel'
            },
            {
                // **IMPORTANT** This is needed so that each bootstrap js file required by
                // bootstrap-webpack has access to the jQuery object
                test: /bootstrap\/js\//,
                loader: 'imports?jQuery=jquery'
            },

            // CSS loaders
            {
                test: /\.scss$/,
                loader: ExtractTextPlugin.extract('style', 'css!postcss?parser=postcss-scss')
            },
            {
                test: /\.css$/,
                loader: ExtractTextPlugin.extract('style', 'css!postcss')
            },

            // Images
            {
                test: /\.(png|gif|jpg)$/,
                loader: 'url?limit=15000'
            },

            // SVG assets including fonts
            {
                test: /\.svg(\?v=\d+\.\d+\.\d+)?$/,
                loader: 'url?limit=15000&mimetype=image/svg+xml'
            },

            // Fonts loading
            { test: /\.(woff|woff2)(\?v=\d+\.\d+\.\d+)?$/, loader: 'url?limit=10000&mimetype=application/font-woff'},
            { test: /\.ttf(\?v=\d+\.\d+\.\d+)?$/, loader: 'url?limit=10000&mimetype=application/octet-stream'},
            { test: /\.eot(\?v=\d+\.\d+\.\d+)?$/, loader: 'file'},

            // Rest of loader
            { test: /\.yml$/, loader: 'json!yaml' },
            { test: /\.json$/, loader: 'json' },
            { test: /\.html$/, loader: 'html' },
            {
                test: require.resolve('react'),
                loader: 'expose?React'
            },
            {
                test: path.resolve(__dirname, 'node_modules/angular/angular.js'),
                loader: 'exports?window.angular'
            }
        ]
    },
    resolve: {
        root: path.join(__dirname),
        modulesDirectories: ['node_modules', 'bower_components', 'web_modules'],
        extensions: ['', '.webpack.js', '.js', '.jsx', '.css', '.scss'],
        alias: {
            // So it can be required in any module like `require('config')`
            'config': path.join(__dirname, 'config', 'config.js'),
            'css': path.join(__dirname, 'css')
        }
    }
};

var _ = require('lodash');
var baseConfig = require('./webpack.config.base');
var config = _.clone(baseConfig);

config.entry = {};
config.output = {};
config.plugins = config.plugins.filter(function(plugin) {
    return !plugin.chunkNames; // Remove CommonChunkPlugin from config in test mode
})

// Do not parse CSS in tests
config.module.loaders = [{
    test: /\.(css|scss)$/,
    loader: 'null-loader'
}].concat(config.module.loaders)

// Some wallaby hacks
config.resolve.modulesDirectories = config.resolve.modulesDirectories.concat([require('path').join(__dirname, 'bower_components')]);
config.resolve.extensions = [''].concat(config.resolve.extensions);

module.exports = config;

ArtemGovorov commented 8 years ago

Not sure if it's related to the issue, but it looks like your .js and .jsx files are processed by babel twice: first by the wallaby compiler, then by your webpack babel loader:

{
    test: /\.jsx?$/,
    exclude: /(node_modules|bower_components|plugins)/,
    loader: 'babel'
}

so you need to do

// Do not parse CSS in tests
config.module.loaders = [{
    test: /\.(css|scss)$/,
    loader: 'null-loader'
}].concat(config.module.loaders.filter(function(l) { return l.loader !== 'babel'; }))

ArtemGovorov commented 8 years ago

I'll try to add some more diagnostics for your case, specifically reporting which spec file is causing the issue, so we can go from there to further isolate it.

ArtemGovorov commented 8 years ago

@okonet So did you have a chance to try without the double babel compiler?

Also, I have added some more diagnostics that should report the hanging file name (if it can be detected and the issue is with executing some file's program scope). Please update your wallaby-webpack package to v0.0.12 and try running with the latest core v1.0.173. Please see if something is displayed now (within a few seconds after it hangs).

Another question: before it stopped working, did you add any new node modules to your project, make some config changes, or start using some webpack features/babel plugins that you didn't use before? Perhaps you could try checking out one of the previous revisions that you think were working fine to see if it still works there?

okonet commented 8 years ago
  1. I've tried removing babel-loader from the webpack config. It didn't even run, because of parse errors. Also, I need it to run tests on CI (I'm using karma).
  2. Running with the original config, I see this:

Some long running code has been detected: one of your tests is taking more than 5000ms to execute.
The last recorded console.log: Warning: Failed propType: Required prop `controller` was not specified in `LinkTo`.

which leads me nowhere, since the last console message is always different.

I'm bisecting right now. Will let you know if I find what causes the issue.

okonet commented 8 years ago

I've found what caused the problems. It's this file and its test:

export default function parseNumber(numberString: String) {

    if (numberString.match(/^[\d]+(,[\d]*)?$/)) {
        return parseFloat(numberString.replace(',', '.'))
    } else if (numberString.match(/^[\d]+(\.[\d]*)?$/)) {
        return parseFloat(numberString)
    }

    return null
}
/* eslint no-unused-expressions: 0 */

import parseNumber from '../parseNumber';

describe('parseNumber', () => {

    it('should parse numebr from string to float', () => {
        parseNumber('100,5').should.eq(100.5);
        parseNumber('100,').should.eq(100);
        parseNumber('300').should.eq(300);
        parseNumber('100.25').should.eq(100.25);
    });

    it('should not parse badly formatted nzmbers', () => {
        should.equal(parseNumber('100,5.12312'), null)
        should.equal(parseNumber('100,555.12312'), null)
        should.equal(parseNumber('.12312'), null)
    });

});

If I skip this tests, everything works fine.

ArtemGovorov commented 8 years ago

Awesome! Thanks a lot, I will create a repo with these two to see if I can reproduce it. Could you please share your .babelrc so that I run the same plugins?

ArtemGovorov commented 8 years ago

By the way, what is numberString: String? Flow?

okonet commented 8 years ago

Here is my .babelrc

{
    "stage": 0,
    "optional": ["runtime"],
    "plugins": [
        "lodash"
    ],
    "env": {
        "development": {
            "plugins": [
                "react-display-name",
                "react-transform"
            ],
            "extra": {
                "react-transform": {
                    "transforms": [
                        {
                            "transform": "react-transform-hmr",
                            "imports": ["react"],
                            "locals": ["module"]
                        }, {
                            "transform": "react-transform-catch-errors",
                            "imports": ["react", "redbox-react"]
                        }
                    ]}
            }
        }
    }
}

Yes, it should be Flow, but it's the wrong annotation: it should read string (lowercase). See http://flowtype.org/

The code wasn't written by me so please bear with me :)

ArtemGovorov commented 8 years ago

I re-created the sample and I'm having one issue: this test fails

it('should not parse badly formatted nzmbers', () => {
    should.equal(parseNumber('100,5.12312'), null)
    should.equal(parseNumber('100,555.12312'), null)
    should.equal(parseNumber('.12312'), null)
});

because should.equal internally uses this to compare with the second argument, and this in that context is window, not the first argument as it should be. I looked up the issue, and it seems to have to do with chai.should() initialization (there are some other funny issues with it in PhantomJs). I have made this change to my bootstrap function to make it work:

    bootstrap: function() {
      ...
      window.should = undefined;
      window.should = chai.should();
      ...
    }

It works for you in karma because it uses a similar trick (see the should global var on top).

Let me know if the fix makes it work for you.

okonet commented 8 years ago

Yeah, I've fixed it by rewriting the test using expect. I guess this is one more reason not to use should.

BTW why can't you just write:

window.should = chai.should()

?

ArtemGovorov commented 8 years ago

@okonet No, you have to explicitly set window.should to undefined first (one way or another); only then may you assign chai.should() to it. Otherwise you get RangeError: Maximum call stack size exceeded. in PhantomJs. I don't know why it happens with chai + PhantomJs, perhaps because of the way chai defines properties. There's some discussion in the chai repo.

ArtemGovorov commented 8 years ago

@okonet So you either need to do in global scope:

var should;
...
window.should = window.chai.should();

like they do in karma, or just do window.should = undefined; before the assignment.

webuniverseio commented 8 years ago

I also get a lot of the following messages:

Some long running code has been detected: one of your tests is taking more than 5000ms to execute.

I reduced the number of tests to 1, and instead of getting multiple messages like the above, I got one. I also noticed that incremental changes take too long, so I started to suspect that the issue is in the runner, as the test was super simple. I use electron, so I decided to try phantom 2 and regular phantom. I started to output a formatted date, and once I get results I add a white space to re-trigger the test and see how long it took to run. Compared to regular phantom, phantom 2 spends ~2x more time, electron ~3x.

When I run the tests on another, more performant machine, I don't get those messages; but my current machine is not slow: it has 32GB of RAM and an i7-4790 @ 3.6 GHz (8 processors).

In addition to the runner issue, my webpack incremental builds take a while. The combination of those factors is most likely the reason for the message to appear...

ArtemGovorov commented 8 years ago

@szarouski Are you using the latest wallaby-webpack package, version 0.0.22? A simple test should not take that long to execute, and neither should a webpack incremental build. Could you please share a repo with the simple test you mentioned, so I can reproduce it?

webuniverseio commented 8 years ago

@ArtemGovorov, correct, I'm using 0.0.22. I don't think the issue is with wallaby-webpack; the issue is most likely with my webpack configuration (it is shared between wallaby and the project build, and both the project build and watch are pretty slow, so I guess I should optimize what it is doing). But on top of my configuration, I see that the runners mentioned above are quite slow too. I'll try to prepare something for you soon.

ArtemGovorov commented 8 years ago

@szarouski

the issue is most likely with my webpack configuration (it is shared between wallaby and project build and both project build and watch are pretty slow, so I guess I should optimize what it is doing).

Yep, I've seen many projects where adjusting the webpack config in the wallaby config improved the performance. Wallaby (or a testing environment in general) often doesn't need some things that are in the production webpack config.

Electron and Phantom 2 can actually be a bit slower, but normally only for the initial run. Incremental runs should take about the same time in all runners, and Electron is usually even faster during incremental runs. The fact that incremental runs are significantly slower for some runners may mean that in your case incremental builds are not really incremental and for some reason invalidate a lot of files/modules. To test the theory, you may try setting the workers count to 1 to see if it reduces the performance difference between the runners:

module.exports = function () {
  return {
    ...
    workers: { initial: 1, regular: 1 }
  };
};

There seem to be 2 separate issues:

webuniverseio commented 8 years ago

@ArtemGovorov I'm almost done with the clean-up; hopefully I will have a repo with the test ready for you sometime over the weekend. Here is the output from the wallaby console that happens on a file change; please let me know if it looks suspicious. That cycle repeats on every file change. Thank you.

Thu, 16 Jun 2016 01:34:01 GMT wallaby:project File test/code/RewardsCentre/ItemDateTest.js changes are patches only: true
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Returning previously cached file [101] from memory
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Preparing to process test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:processor Running 1 processor(s) for test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:processor Finished running one processor for test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Processing compiled test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Instrumenting test/code/RewardsCentre/ItemDateTest.js, via process pool: false
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Detected single test [should return a formatted date] change
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project No preprocessors configured for test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Writing to disk and caching processed file test/code/RewardsCentre/ItemDateTest.js
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project File test/code/RewardsCentre/ItemDateTest.js changes saved, store ts: 1466040271378, change ts: 1466040841215
Thu, 16 Jun 2016 01:34:01 GMT wallaby:project Running postprocessor
Thu, 16 Jun 2016 01:34:01 GMT wallaby:testTask Test files from affected: 1, from deleted or manually requested: 0, from recently changed: 0, from loaded by: 0, from failing: 0
Thu, 16 Jun 2016 01:34:04 GMT wallaby:postprocessor Emitting 9 files
Thu, 16 Jun 2016 01:34:05 GMT wallaby:project Postprocessor execution finished
Thu, 16 Jun 2016 01:34:05 GMT wallaby:project Test run started; run priority: 2
Thu, 16 Jun 2016 01:34:05 GMT wallaby:testTask Test files from affected: 1, from deleted or manually requested: 0, from recently changed: 0, from loaded by: 0, from failing: 0
Thu, 16 Jun 2016 01:34:05 GMT wallaby:workers Starting test run, priority: 2
Thu, 16 Jun 2016 01:34:05 GMT wallaby:phantomRunner Starting sandbox [worker #0, session #jotv5]
Thu, 16 Jun 2016 01:34:05 GMT wallaby:phantomRunner Preparing sandbox [worker #0, session #jotv5]
Thu, 16 Jun 2016 01:34:05 GMT wallaby:browserRunner Total files to load in sandbox: 847
Thu, 16 Jun 2016 01:34:05 GMT wallaby:browserRunner Sandbox is generated [worker #0, session #jotv5]: http://localhost:60165/wallaby_sandbox0.html
Thu, 16 Jun 2016 01:34:05 GMT wallaby:phantomRunner Creating page for worker #0
Thu, 16 Jun 2016 01:34:05 GMT wallaby:phantomRunner Phantom page created
Thu, 16 Jun 2016 01:34:05 GMT wallaby:phantomRunner Prepared sandbox [worker #0, session #jotv5]
Thu, 16 Jun 2016 01:34:05 GMT wallaby:workers Running tests in sandbox [worker #0, session #jotv5]
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /wallaby-webpack.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /wallaby-webpack.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/222.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/222.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/501.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/501.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/568.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/568.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/810.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/810.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/812.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/812.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/815.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/815.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /__modules/83.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /__modules/83.js from cache
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Preparing to serve /test/code/RewardsCentre/ItemDateTest.js.wbp.js
Thu, 16 Jun 2016 01:34:06 GMT wallaby:middleware Serving /test/code/RewardsCentre/ItemDateTest.js.wbp.js from cache
Thu, 16 Jun 2016 01:34:07 GMT wallaby:workers Loaded 2 test(s)
Thu, 16 Jun 2016 01:34:07 GMT wallaby:workers Test executed: should return a formatted date
Thu, 16 Jun 2016 01:34:07 GMT wallaby:workers Sandbox requested early screen shot capture
Thu, 16 Jun 2016 01:34:09 GMT wallaby:workers Run 1 test(s), skipped 0 test(s)
Thu, 16 Jun 2016 01:34:09 GMT wallaby:workers Sandbox [jotv5] is responsive, closing it
Finished executing 1 affected test(s)
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Test run finished
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Processed console.log entries
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Processed loading sequences
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Processed executed tests
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Processed code coverage
Thu, 16 Jun 2016 01:34:09 GMT wallaby:project Test run result processed and sent to IDE
...repeats on every change...
ArtemGovorov commented 8 years ago

@szarouski Awesome, thanks a lot! Looks like webpack incremental compilation is taking ~4-5 seconds, and for some reason it emits 7 modules as changed. That may or may not be normal and could possibly be optimised in the webpack config; I need to have a look. There's another suspicious ~1-2 second delay:

Thu, 16 Jun 2016 01:34:07 GMT wallaby:workers Sandbox requested early screen shot capture
Thu, 16 Jun 2016 01:34:09 GMT wallaby:workers Run 1 test(s), skipped 0 test(s)

that looks strange to me. Do these log lines always have 1-2 seconds between them, or was it just once? Either way, I'll look into the sample to see if there's anything wrong with it.
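(For context, a few webpack options that commonly shave incremental rebuild time - a hedged sketch only, since the project's real webpack config isn't shown here; the `noParse` pattern and paths are placeholders:)

```javascript
// Hypothetical webpack config tweaks for faster incremental rebuilds.
// Whether any of these help depends on the actual project setup.
const path = require('path');

module.exports = {
  // ...existing entry/output settings...
  module: {
    // Skip parsing large pre-bundled libraries for require()/import calls.
    noParse: /jquery|lodash/,
  },
  resolve: {
    // Keep module resolution paths short and explicit.
    modules: [path.resolve(__dirname, 'src'), 'node_modules'],
  },
  // Cheap source maps ('eval') rebuild much faster than 'source-map'.
  devtool: 'eval',
};
```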

webuniverseio commented 8 years ago

@ArtemGovorov with workers: { initial: 1, regular: 1 } I can still see that PhantomJS 1 is faster than PhantomJS 2, and Electron is the slowest.

webuniverseio commented 8 years ago

@ArtemGovorov Ok, here is a stripped-down version of the test: https://github.com/szarouski/wallaby-config (please run npm i, then cd root && npm i). The good news is that after optimizing the webpack config, the "Some long running code has been detected: one of your tests is taking more than 5000ms to execute." message disappeared. More good news: if I run just this test by narrowing the pattern in the tests section, PhantomJS shows changes within 1-3 seconds. The bad news is that PhantomJS 2 and Electron still take around 5-8 seconds.

Do these log lines always have the 1-2 seconds between them, or it was just once?

Always on PhantomJS 2 and Electron, but not on PhantomJS 1. I also can't reproduce the issue on the stripped-down version, so there must be something about the files I didn't include... Not sure what else we can look at. I guess I'll have to keep adding files to my stripped-down version until things run slow. Let me know what you think.

Thank you.

ArtemGovorov commented 8 years ago

@szarouski Hmm, your example repo works the same for me (0-2 seconds from the code change to the result) across all runners (Ph, Ph2, Electron), both on Windows and OSX.

phantom 2 and electron still take around 5-8 seconds

For this sample repo or in the real app?

I guess I have to start adding files to my stripped down version until I'll see that things run slow

This is the only approach I can see working for now. I can see nothing in your config that could affect performance that much. Perhaps it's not your app code itself, but some of the modules you import. The sample imports only a few modules; perhaps some of the modules that are not used in the example are making the difference?

One thing I spotted in your config is that your tests pattern may be too broad and may include files that are not test files. You need to make sure all files matching the pattern contain tests, or make the pattern stricter, like

"tests": [{"pattern": "test/code/**/*Test.js", "load": false}],
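In context, a tightened config might look something like the following sketch (the files patterns here are illustrative placeholders, not taken from the actual project):

```javascript
// Hypothetical wallaby.js config sketch - patterns are examples only.
module.exports = function () {
  return {
    files: [
      // Application code: compiled and served on demand, but not
      // force-loaded into the sandbox page on every run.
      { pattern: 'src/code/**/*.js', load: false },
    ],
    tests: [
      // Only files ending in "Test.js" are treated as test files,
      // so stray non-test files under test/ can't slow things down.
      { pattern: 'test/code/**/*Test.js', load: false },
    ],
  };
};
```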
webuniverseio commented 8 years ago

@ArtemGovorov, I found a way to reproduce both issues: the delay after Sandbox requested early screen shot capture and the Some long running code has been detected: one of your tests is taking more than 5000ms to execute. message. I started copying files into the folder with the stripped-down version, folder by folder, and noticed that the reaction to a test update got slower and slower as I added more folders with more files (I didn't copy any additional tests and didn't change the ItemDateTest.js file at all). It looks like even though the files I copied over are not used by the particular test, they somehow affect performance.

You can also notice that PhantomJS 1 works faster than the others.

So to reproduce the issue, please use the same example repository, and before running the tests extract the following archive into the root/Assets folder (the archive contains 8192 copies of the ItemDate.js file) - Assets.zip. Try making a change to ItemDateTest and you'll see that the reaction is slow.

I also see that if I comment out the patterns that include irrelevant stuff (like the Assets folder) in files, performance goes back to normal. So I'm thinking a top-level solution would be some sort of utility that finds the dependency tree of the specified tests and uses that tree for files, making sure that only relevant files are included.

What do you think?

ArtemGovorov commented 8 years ago

@szarouski I must be missing something or have something configured differently, but the example repo with tons of files in root/Assets works the same for me (0-2 seconds from the code change to the result) across all runners (Ph, Ph2, Electron), both on Windows and OSX. The cold start is significantly slower (as wallaby has to copy over and instrument lots of files), but the incremental run is still fast for me.

And this is expected: webpack doesn't process those files because they are unreachable for it (no tests require them), and wallaby doesn't touch them at all after the cold start. It would have been an issue if these files were force-loaded into the sandbox with load: true, because each test run would then have to load 8192 extra scripts, but with load: false these files are not relevant (the same way unused node modules are, or any folder with any files, for that matter).
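The distinction can be sketched in a config (patterns below are hypothetical examples, not from the repo): load: true files are injected into the sandbox page on every run, while load: false files are served only on demand.

```javascript
// Hypothetical config fragment contrasting the two loading modes.
module.exports = function () {
  return {
    files: [
      // load: true - added to the sandbox page as a script on every
      // test run; thousands of such entries would slow every run.
      { pattern: 'vendor/polyfills.js', load: true, instrument: false },
      // load: false - served only when a test's dependency graph
      // requires it; unused files cost nothing after the cold start.
      { pattern: 'root/Assets/**/*.js', load: false },
    ],
    tests: [{ pattern: 'test/code/**/*Test.js', load: false }],
  };
};
```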

webuniverseio commented 8 years ago

@ArtemGovorov, I don't know if this affects anything, but I was deleting the cached files from the ...\system\wallaby\projects\ folder before starting wallaby. I also forced WebStorm to refresh the project cache. Like you said, the cold start is a lot slower, but the incremental run is also very slow for me - 10-15 seconds. I hope it is not something external affecting performance, like hardware or an anti-virus... I will try to reproduce it on my home PC and let you know.