Closed: Hiswe closed this issue 9 years ago.
@Hiswe glad to hear you're migrating to nano :)
Could you provide more detailed information about the error?
@jo thanks for answering so fast :)
I use node v0.12.0 and Apache CouchDB 1.6.1
I have this error in my console during a `db.get`:

```
{ [Error: error happened in your connection]
  name: 'Error',
  scope: 'socket',
  errid: 'request',
  code: 'ECONNRESET',
  description: 'socket hang up',
  stacktrace:
   [ 'Error: socket hang up',
     '    at createHangUpError (_http_client.js:215:15)',
     '    at Socket.socketCloseListener (_http_client.js:247:23)',
     '    at Socket.emit (events.js:129:20)',
     '    at TCP.close (net.js:476:12)' ] }
```
And nothing in the CouchDB logs. The last operation was:

```
[info] [<0.6299.1>] 127.0.0.1 - - POST /etherpadlite/_bulk_docs 201
```
Have you tried increasing `maxSockets`?
From the request options docs:

- `pool` - An object describing which agents to use for the request. If this option is omitted the request will use the global agent (as long as your options allow for it). Otherwise, request will search the pool for your custom agent. If no custom agent is found, a new agent will be created and added to the pool.
- A `maxSockets` property can also be provided on the `pool` object to set the max number of sockets for all agents created (ex: `pool: {maxSockets: Infinity}`).
- Note that if you are sending multiple requests in a loop and creating multiple new `pool` objects, `maxSockets` will not work as intended. To work around this, either use `request.defaults` with your pool options or create the pool object with the `maxSockets` property outside of the loop.
With nano:

```js
var db = require('nano')({
  url: 'http://localhost:5984/mydb',
  requestDefaults: {
    pool: {
      maxSockets: Infinity
    }
  }
})
```
Yes, I tried it, but it was even worse. Just after the bulk operations, on the first get I had this in my app:
```
{ [Error: error happened in your connection]
  name: 'Error',
  scope: 'socket',
  errid: 'request',
  code: 'ENFILE',
  errno: 'ENFILE',
  syscall: 'connect',
  description: 'connect ENFILE',
  stacktrace:
   [ 'Error: connect ENFILE',
     '    at exports._errnoException (util.js:746:11)',
     '    at connect (net.js:833:19)',
     '    at net.js:928:9',
     '    at GetAddrInfoReqWrap.asyncCallback [as callback] (dns.js:81:16)',
     '    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:99:10)' ] }
```
And in CouchDB:

```
[error] [<0.12051.0>] {error_report,<0.30.0>,
    {<0.12051.0>,std_error,
        [{application,mochiweb},
         "Accept failed error",
         "{error,emfile}"]}}

=ERROR REPORT==== 27-Apr-2015::18:19:01 ===
application: mochiweb
"Accept failed error"
"{error,emfile}"

[error] [<0.12051.0>] {error_report,<0.30.0>,
    {<0.12051.0>,crash_report,
        [[{initial_call,
              {mochiweb_acceptor,init,
                  ['Argument__1','Argument__2','Argument__3']}},
          {pid,<0.12051.0>},
          {registered_name,[]},
          {error_info,
              {exit,
                  {error,accept_failed},
                  [{mochiweb_acceptor,init,3,
                       [{file,"mochiweb_acceptor.erl"},{line,34}]},
                   {proc_lib,init_p_do_apply,3,
                       [{file,"proc_lib.erl"},{line,237}]}]}},
          {ancestors,
              [couch_httpd,couch_secondary_services,
               couch_server_sup,<0.31.0>]},
          {messages,[]},
          {links,[<0.101.0>]},
          {dictionary,[]},
          {trap_exit,false},
          {status,running},
          {heap_size,376},
          {stack_size,27},
          {reductions,258}],
         []]}}

[error] [<0.101.0>] {error_report,<0.30.0>,
    {<0.101.0>,std_error,
        {mochiweb_socket_server,297,
            {acceptor_error,{error,accept_failed}}}}}
```
Looks like you've reached a system limit of too many open files (sockets). How many requests are you firing at one time? Can you batch them?
You can try increasing the number of open files your system allows; see PAM and ulimit.
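For example (a sketch; exact limits and how to persist them vary by OS and shell):

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Show the hard limit (the ceiling the soft limit can be raised to)
ulimit -Hn

# Raise the soft limit for the current session, e.g.:
#   ulimit -n 4096
# Making this permanent on Linux usually involves /etc/security/limits.conf (PAM).
```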
I think after each bulk, it makes ±10000 async gets…
Trying to follow this doesn't solve the issue… Still, it's kind of beyond my knowledge ^^
Two things you could try:

- `_all_docs` queries (via `db.fetch`)
- `eachLimit`, when using async

Again thanks for all the answers :smile:
In fact UeberDB aims to be a common interface to a lot of DBs.
So it's true that it would be better to use `db.fetch`, but in this case I think I can't.
As for `eachLimit`, isn't that what `maxSockets` is for? Handling the requests should be on the DB side, not in my backend code, no?
I don't know... firing 10000 requests at once just feels wrong. I think every part of a system needs to act appropriately ;)
As for `maxSockets` and `eachLimit`, I don't know. Have you tried decreasing the number of sockets?
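For illustration, the throttling idea behind `eachLimit` can be sketched with a minimal stand-in (this is an illustrative reimplementation, not the real async API; the `db.get` usage in the comment is hypothetical):

```javascript
// Minimal eachLimit-style throttle: run `worker` over `items` with at most
// `limit` tasks in flight at once, then call `done` when all finish or one fails.
function eachLimit(items, limit, worker, done) {
  let index = 0;      // next item to launch
  let active = 0;     // tasks currently in flight
  let finished = 0;   // tasks completed successfully
  let failed = false; // stop launching after the first error

  function next() {
    if (failed) return;
    if (finished === items.length) return done(null);
    while (active < limit && index < items.length) {
      const item = items[index++];
      active++;
      worker(item, (err) => {
        active--;
        if (err && !failed) { failed = true; return done(err); }
        finished++;
        next();
      });
    }
  }
  next();
}

// Hypothetical usage with nano: cap 10000 gets at 50 concurrent requests.
// eachLimit(keys, 50, (key, cb) => db.get(key, cb), (err) => { /* ... */ });
```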
Ok, following your advice I tested with 1 max socket, and then with 50. All went well, even though 50 was the previous setup where I had my ECONNRESET. So I think that tweaking the ulimit fixed this issue one way or another.
Thanks again! I can now make my pull request to ueberDB!
:+1:
Hi,
I'm trying to migrate ueberDB's couch plugin to nano (this is my fork here). Unfortunately, during the tests I always get an ECONNRESET before the end, after roughly 15 min of intense operations.
I've tried the same tests with their old CouchDB lib (felix-couchdb, which I also had to fork) and I don't get such an error.
So I'm not sure whether it's really a nano issue or not, but could you give me a hint about this?