Open paton opened 9 years ago
Look at CPU utilization, if it jumps to 100% and stays there, that would explain why everything times out. That's usually a sign of a memory leak.
Memory is at ~10% as the errors occur... doesn't seem to go much above that.
CPU might spike up temporarily during the 2-3 seconds it takes Zombie to error out, but it goes right back down to ~1% after the error.
Node has a memory limit of 1GB (see https://github.com/joyent/node/wiki/FAQ).
Memory was about 450mb at time of error (10% machine usage).
FWIW, we've been running 2.5.1 for a while which has been stable.
Really appreciate the fast reply.
I can't think of anything other than a memory leak exhibiting "time bomb" behavior.
I was giving Zombie a first try and I receive this error, too. Unfortunately, I can't get past this first step. I tried to visit a few different pages and finally settled on google.com simply for debugging purposes but no luck. I'm running it as a mocha test using zombie 3.1.1, node 0.12.0, mocha 2.2.1.
As far as I can tell there's no memory spike, cpu spike, etc... It just fails with this error and that's it.
I've seen this too on 3.1.1, and will run into 100% cpu utilization when I set the waitDuration to something pretty high. This doesn't happen on 2.5.1. @assaf , are you suggesting the memory leak is on client-side javascript and not zombie?
I'm just leaving a "me too" on the error. I found this thread while hunting for the error message. I don't have any more data to add on the cause though, except to say that I am getting it after running a pretty good while and processing a few hundred HTTP requests.
Also experiencing the same issue. Happens for any visit call to any domain. Unlike thomaslsimpson above me, it doesn't allow me to process anything, error hits almost a second or two after a call to visit, so indeed seems like a memory leak/overflow as it's almost instant. Let me know if you need anything more from me.
Running Debian 64x, io.js 1.6.4, npm 2.7.5, zombie 4.0.5.
Error: "unhandled rejection Error: Timeout: did not get to load all resources on this page at timeout (/home/zackiles/workspace/modern-scanner/node_modules/zombie/lib/eventloop.js:545:36) at Timer.listOnTimeout (timers.js:89:15)"
I have the same problem. I tried changing the timeout value, without success.
I'm having this problem as well with 3.1.1. Doesn't happen all the time, but most. I'd say something like 60 percent of the time I get this error when I call:
var browser = new Browser({
  maxWait: 10000,
  loadCSS: false
});

browser.visit(url, function (e) {
  if (e) {
    throw e;
  }
  // wait for the new page to be loaded, then fire the callback function
  browser.wait().then(function () {
    return callback(null);
  });
});
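For what it's worth, the maxWait option used here was renamed waitDuration around Zombie 3.x (see the changelog discussion later in this thread). A minimal sketch of what I assume is the equivalent setup on newer versions; the require is guarded so the snippet also loads where zombie isn't installed, and the option names should be treated as assumptions since the API shifted between major versions:

```javascript
// The maxWait option used above became waitDuration in later Zombie versions.
// Requiring zombie is guarded so this sketch also loads where it isn't installed.
let Browser = null;
try { Browser = require('zombie'); } catch (e) { /* zombie not installed */ }

// Assumed equivalent of { maxWait: 10000, loadCSS: false } on Zombie 3.x+:
const options = { waitDuration: 10 * 1000, loadCSS: false };

function visitPage(url, callback) {
  if (!Browser) return callback(new Error('zombie is not installed'));
  const browser = new Browser(options);
  // visit() already waits for the event loop to settle, so a separate
  // browser.wait() call afterwards is usually unnecessary.
  browser.visit(url, function (err) {
    if (err) return callback(err);
    callback(null, browser);
  });
}
```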
I ended up using SpookyJS; it works perfectly if you don't want to wait for this issue to be solved.
Well, I'm using Zombie.js to log in to a page, then navigate around and download stuff, and it looks like that would be a nightmare with SpookyJS. I wish this could get a little attention, because this is the only real issue keeping me from using this module.
I use it for the same purpose, and I don't think it is a nightmare. But if Zombie works in your environment, it is certainly better and faster.
I get this same error every time:
$ DEBUG=zombie node server.js
Robot listening at http://0.0.0.0:8080
Executing gallery item id 13123
zombie Opened window http://studio.code.org/c/42489384/edit +0ms
zombie GET http://studio.code.org/c/42489384/edit => 200 +1s
zombie Loaded document http://studio.code.org/c/42489384/edit +111ms
zombie GET http://studio.code.org/shared/js/client_api.js => 200 +620ms
zombie GET http://studio.code.org/shared/js/initApp.js => 200 +111ms
server.js:18
throw e;
^
Error: Timeout: did not get to load all resources on this page
at timeout (node_modules/zombie/lib/eventloop.js:543:36)
at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)
Code at https://github.com/ottok/code-org-robot/blob/master/server.js
If I disable throwing an error from the first browser.visit, it will continue to load more resources, but never the complete page. Fiddling around with different uses of browser.wait or a traditional setTimeout can get a few steps further, but eventually it still fails with the same error:
zombie Opened window http://studio.code.org/c/42489384/edit +19ms
zombie GET http://studio.code.org/c/42489384/edit => 200 +1s
zombie Loaded document http://studio.code.org/c/42489384/edit +76ms
zombie GET http://studio.code.org/shared/js/client_api.js => 200 +643ms
zombie GET http://studio.code.org/shared/js/initApp.js => 200 +101ms
Code.org - Click 'Run' to see my program in action
zombie GET http://studio.code.org/assets/application-de3a2cfca2d3211ac5ee15e95b385684.js => 200 +3s
zombie Fired setTimeout after 1ms delay +181ms
zombie GET http://www.google-analytics.com/analytics.js => 200 +895ms
zombie GET http://js-agent.newrelic.com/nr-632.min.js => 200 +29ms
Unhandled rejection Error: Timeout: did not get to load all resources on this page
at timeout (node_modules/zombie/lib/eventloop.js:543:36)
at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)
zombie GET http://studio.code.org/blockly/js/blockly.js => 200 +5s
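Rather than waiting for every resource on a heavy page like this (analytics, New Relic, Blockly), waiting on a page condition can sometimes get the script further. A sketch, assuming Zombie's wait() accepts a completion function evaluated against the window; the '#codeWorkspace' selector is a made-up example, not something from the actual page:

```javascript
// Sketch: instead of waiting for every resource, wait until a condition on
// the page holds. The selector below ('#codeWorkspace') is hypothetical.
function workspaceReady(window) {
  return !!(window.document &&
            window.document.querySelector('#codeWorkspace'));
}

// Hypothetical usage, assuming wait() takes a completion function:
//
//   browser.visit(url, function () {
//     browser.wait(workspaceReady, function () {
//       // the element exists; resources still loading are ignored
//     });
//   });
```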
:+1:
Edit: I just increased the waitDuration to make it go away for me:
var zombie = require('zombie');
zombie.waitDuration = '30s';
Yep, having the same error. My page loads a Google Tag Manager container; I'm wondering if it's hanging waiting to retrieve that?
Same here; increasing the wait time doesn't resolve the issue either. Major letdown for me :unamused:
Just a suggestion, try to open your page in Chrome with the console open to see if there are any errors popping up. I know that zombie reports an error if anything goes wrong in the page.
@mikegleasonjr But there should be a way to ignore such errors if I'm not interested in them!
something like zombie.ignoreDroppedRequests = true;
There are no errors. Like I said, this works fine when you browse from a normal browser, but when you automate the process through Zombie, this happens.
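As far as I know there is no built-in ignoreDroppedRequests option like the one suggested above, but a similar effect is possible by catching and filtering the rejection yourself. A sketch, assuming the promise-based visit() of Zombie 3+; the predicate matches the exact message reported in this thread:

```javascript
// Sketch: filter the resource-timeout rejection rather than letting it kill
// the run. The message matched below is the one reported in this thread.
function isResourceTimeout(err) {
  return err instanceof Error &&
    /did not get to load all resources/.test(err.message);
}

// Hypothetical usage with the promise-based visit() of Zombie 3+:
//
//   browser.visit(url)
//     .catch(function (err) {
//       if (!isResourceTimeout(err)) throw err;   // real failures still throw
//       // otherwise ignore: the document loaded, some resources timed out
//     })
//     .then(function () {
//       // continue with browser.document, browser.html(), etc.
//     });
```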
I've had this problem, not in a long running process, but because of a tag in google tag manager. It was loading a javascript file, that included another js file, which finally loaded an iframe. Something about that code was causing zombie to reliably throw this error.
Here's the actual tag if it's helpful:
<script>
__reach_config = {
pid: '<removed>',
url: window.location.protocol + "//" + window.location.hostname+window.location.pathname,
reach_tracking: false
};
(function(){
var s = document.createElement('script');
s.async = true;
s.type = 'text/javascript';
s.src = document.location.protocol + '//d8rk54i4mohrb.cloudfront.net/js/reach.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(s);
})();
</script>
@sjparkinson that's my exact scenario (using GTM) and I get the same resources error that is being talked about here. Did you ever solve or get around it?
Unfortunately not. We're currently reviewing the test suite as it's a bit flaky. My "fix" is to disable GTM on the development environment.
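Besides disabling GTM, another workaround for tags like the one above is stubbing the third-party script so Zombie never fetches it. A sketch, assuming the Zombie 3.x resources API (browser.resources.mock(url, result)); the resources layer changed between major versions, so treat this as illustrative rather than the definitive API:

```javascript
// Sketch: stub third-party scripts so Zombie answers them locally instead of
// fetching them, assuming Zombie 3.x's browser.resources.mock(url, result).
function stubScripts(browser, urls) {
  for (const url of urls) {
    browser.resources.mock(url, {
      statusCode: 200,
      headers: { 'Content-Type': 'application/javascript' },
      body: '/* stubbed out for tests */'
    });
  }
}

// Hypothetical usage, stubbing the CloudFront tag from the comment above:
//
//   stubScripts(browser, [
//     'http://d8rk54i4mohrb.cloudfront.net/js/reach.js'
//   ]);
//   browser.visit(url, done);
```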
I experience the same problem, and am unable to fix it by removing google analytics code from the webpage.
Okay, so by searching the repo for 'maxWait' I found a piece of the changelog which points out that the option is now called waitDuration. By setting waitDuration to 30 * 1000, as in var browser = new zombie({waitDuration: 30*1000}), I was able to fix this issue.
I must point out that the documentation of this project is terrible.
I had this problem when trying to use 127.0.0.1 instead of localhost in the line:
zombie.localhost('localhost', 5000);
even with waitDuration: 30*1000
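For anyone hitting the localhost variant above: Zombie appears to map the registered hostname to the local port, so visit() URLs must use that exact hostname, and 127.0.0.1 is not the same name as 'localhost'. A sketch of that check; the usage comments assume the static localhost() API from the comment above:

```javascript
// Sketch: Zombie maps a registered hostname to a local port, so visit() URLs
// must use that exact hostname; 127.0.0.1 is not the same name as 'localhost'.
function matchesRegisteredHost(registeredHost, url) {
  return new URL(url).hostname === registeredHost;
}

// Hypothetical usage, assuming the static localhost() API:
//
//   Browser.localhost('localhost', 5000);
//   browser.visit('http://localhost/', done);   // mapped to port 5000
//   browser.visit('http://127.0.0.1/', done);   // not mapped, likely times out
```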
+1
zombie.waitDuration = '30s';
fixed it for me... Before, it was maxWait, and then waitFor, an int in ms... now there's a new one and it expects a string???
This is working for me too.
Same problem here, setting the waitDuration doesn't solve it in my case.
I've since switched to serializable views (i.e. React) and Jest snapshots
Had the same problem; setting a longer waitDuration solved it. Thanks.
When using Zombie in a long-running process (24+ hours) I occasionally run into this error:
Once this error appears, all subsequent Zombie requests fail. Restarting the server resolves the issue.
Here's our implementation of Zombie:
Zombie 3.1.0, Node 0.10.31
Any ideas?