nodemailer / smtp-server

Create custom SMTP servers on the fly

How to prevent out of memory errors? #127

Closed niftylettuce closed 5 years ago

niftylettuce commented 5 years ago

This simple test crashes the server (note I'm sending a 1gb file attachment):

test('prevents out of memory', async t => {
  const transporter = nodemailer.createTransport({
    streamTransport: true
  });
  const filePath = path.join(os.tmpdir(), uuid());
  const size = bytes('1gb');
  const { port } = t.context.forwardEmail.server.address();
  const tls = { rejectUnauthorized: false };
  const connection = new Client({ port, tls });
  // writing 'ok' at offset `size` produces a ~1 GB sparse file
  const fh = fs.openSync(filePath, 'w');
  fs.writeSync(fh, 'ok', size);
  fs.closeSync(fh);
  const info = await transporter.sendMail({
    from: 'foo@forwardemail.net',
    to: 'Baz <baz@forwardemail.net>',
    subject: 'test',
    text: 'test text',
    html: '<strong>test text</strong>',
    attachments: [{ path: filePath }]
  });
  return new Promise(resolve => {
    connection.once('end', resolve);
    connection.connect(() => {
      connection.send(info.envelope, info.message, err => {
        console.log('err', err);
        t.is(err.responseCode, 450);
        t.regex(err.message, /Message size exceeds maximum/);
        fs.unlinkSync(filePath);
        connection.close();
      });
    });
  });
});
[2019-05-09 21:08:46] INFO  SMTP Server listening on [::]:64843

<--- Last few GCs --->
 marking 169 ms) (average mu = 1.000, current mu = 1.000) finalize increment
[53323:0x102802200]     3331 ms: Mark-sweep 797.3 (832.2) -> 777.2 (817.2) MB, 9.3 / 0.1 ms  (+ 4.4 ms in 4 steps since start of marking, biggest step 4.0 ms, walltime since start of marking 2344 ms) (average mu = 0.995, current mu = 0.995) finalize incre
[53323:0x102802200]     5320 ms: Mark-sweep 777.2 (817.2) -> 405.0 (446.6) MB, 241.4 / 0.0 ms  (average mu = 0.945, current mu = 0.879) allocation failure GC in old space requested

<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x175492ddbe3d]
    1: StubFrame [pc: 0x175492e08de2]
Security context: 0x0729a7f9e6e9 <JSObject>
    2: fill [0x729a7f87099] [native array.js:1] [bytecode=0x7297792fc81 offset=134](this=0x072947473109 <JSArray[1073741824]>,F=0x07292f5873c9 <String[1]: 0>,ap=0x07292f5826f1 <undefined>,aq=1073741824)
    3: arguments adaptor frame: 1->3
    4: /* anonymous */(aka /* anonymous */) [0x729bee691c1] [/Users/jack/Projects/...

FATAL ERROR: invalid table size Allocation failed - JavaScript heap out of memory
 1: 0x10003c597 node::Abort() [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 2: 0x10003c7a1 node::OnFatalError(char const*, char const*) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 3: 0x1001ad575 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 4: 0x100579242 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 5: 0x1006c015e v8::internal::HashTable<v8::internal::NumberDictionary, v8::internal::NumberDictionaryShape>::EnsureCapacity(v8::internal::Handle<v8::internal::NumberDictionary>, int, v8::internal::PretenureFlag) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 6: 0x1006c8e05 v8::internal::Dictionary<v8::internal::NumberDictionary, v8::internal::NumberDictionaryShape>::Add(v8::internal::Handle<v8::internal::NumberDictionary>, unsigned int, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyDetails, int*) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 7: 0x10050874f v8::internal::(anonymous namespace)::DictionaryElementsAccessor::AddImpl(v8::internal::Handle<v8::internal::JSObject>, unsigned int, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, unsigned int) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 8: 0x1006919cc v8::internal::JSObject::AddDataElement(v8::internal::Handle<v8::internal::JSObject>, unsigned int, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::ShouldThrow) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
 9: 0x1007f7949 v8::internal::Runtime::SetObjectProperty(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::LanguageMode) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
10: 0x1005fa3cc v8::internal::Runtime_KeyedStoreIC_Slow(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/jack/.nvm/versions/node/v10.15.3/bin/node]
niftylettuce commented 5 years ago

I updated the test above.

niftylettuce commented 5 years ago

I'm looking at using messageSplitter.bodySize right now.

https://github.com/niftylettuce/forward-email/blob/master/message-splitter.js#L22

If you have any thoughts let me know, thank you as always.

niftylettuce commented 5 years ago

I feel like there has to be a better way to end the transaction / send a response over SMTP when the message exceeds the max byte length. Surely it can't just be if (!exceeded) chunks.push(chunk)?

niftylettuce commented 5 years ago

@andris9 here's a test run specifically against wildduck.email:

const path = require('path');
const os = require('os');
const fs = require('fs');

const bytes = require('bytes');
const nodemailer = require('nodemailer');
const uuid = require('uuid');
const Client = require('nodemailer/lib/smtp-connection');

const transporter = nodemailer.createTransport({
  streamTransport: true
});
const size = bytes('1gb');
const tls = { rejectUnauthorized: false };

(async () => {
  try {
    const connection = new Client({
      port: 25,
      host: 'mail.wildduck.email',
      tls,
      debug: true
    });
    const filePath = path.join(os.tmpdir(), uuid());
    // writing 'ok' at offset `size` produces a ~1 GB sparse file
    const fh = fs.openSync(filePath, 'w');
    fs.writeSync(fh, 'ok', size);
    fs.closeSync(fh);

    const info = await transporter.sendMail({
      from: 'foo@forwardemail.net',
      to: 'yipyip123123@wildduck.email',
      subject: 'test',
      text: 'test text',
      html: '<strong>test text</strong>',
      attachments: [{ path: filePath }]
    });
    console.log('info', info);
    connection.once('end', () => {
      console.log('connection ended');
    });
    connection.once('error', err => {
      console.error('connection error', err);
    });
    connection.connect(() => {
      connection.login(
        { credentials: { user: 'yipyip123123', pass: 'yipyip123123' } },
        () => {
          console.log('logged in');
          connection.send(info.envelope, info.message, err => {
            console.log('err', err);
            fs.unlinkSync(filePath);
            connection.close();
          });
        }
      );
    });
    console.log('info', info);
  } catch (err) {
    console.error(err);
  }
})();
niftylettuce commented 5 years ago

On a related note, I saw that you wrapped this in a try/catch, but no error is ever thrown by that function (or by any of the functions it calls recursively). https://github.com/niftylettuce/forward-email/blob/master/message-splitter.js#L127-L131

In any case, if an error were thrown there, it would not be passed to the callback; it would actually cause an uncaught exception.

niftylettuce commented 5 years ago

To elaborate on my previous comment:

If you change this code from:

    try {
      headersFound = this._checkHeaders(chunk);
    } catch (err) {
      return callback(err);
    }

to

    try {
      headersFound = this._checkHeaders(chunk);
    } catch (err) {
      return callback(err);
    }
    // <--- add a random callback error here
    return callback(new Error('UH OH! OOPS!'));

The added line causes an uncaught exception.

andris9 commented 5 years ago

I extracted this into a standalone smtp-server + message-splitter test. I can't reproduce these errors:

niftylettuce commented 5 years ago

on line 199 of your example at https://gist.github.com/andris9/179f3c685ddb161856a5b2ebc090fc14, you wrote stream.sizeExceeded. Throughout that entire readable event listener function, and throughout the while loop in particular, the value of stream.sizeExceeded is undefined.

niftylettuce commented 5 years ago

What I'm seeing is that the moment the size limit is reached, the SMTP connection stays connected and the client continues to send along the rest of the file; it is not until the file has been completely transferred that an error response is returned.

My hope is that we can find a solution where the moment the size limit in bytes is reached, an error occurs and the SMTP connection disconnects.

andris9 commented 5 years ago

Oh, you're right, stream.sizeExceeded is not set until the input stream has ended. This is why your app ran out of memory. I updated the example to count the size in place, see lines 200 to 210.

You can close the socket once the max size has been reached, but it is not advisable. Per SMTP, until the client gets a 5xx response, any transaction problem is a soft bounce, which means that if you do not respond with a 5xx the sender keeps retrying to send the message, and that may go on for several days. Sending a 5xx immediately before closing the connection does not work either (at least not for all clients), as per SMTP the server can send its response after the client has sent the message-terminating \r\n.\r\n sequence.

andris9 commented 5 years ago

Just as an example, here's how Gmail treats large messages. I tried to send a message with a 100 MB attachment, which is way beyond Gmail's limit, and it waited for the terminating dot before replying with the error:

C: ...100MB attachment streamed...
C: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
C: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
C: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
C: AG9r
C: ----_NmP-c77a5c1c94feceb4-Part_1--
C: .
<143490292 bytes encoded mime message (source size 141650649 bytes)>
S: 552-5.2.3 Your message exceeded Google's message size limits. Please visit
S: 552-5.2.3  https://support.google.com/mail/?p=MaxSizeError to view our size
S: 552 5.2.3 guidelines. z11si4008837lji.68 - gsmtp
niftylettuce commented 5 years ago

The Node process is consumed by the file upload though, and until the stream is finished the thread is not freed up for another SMTP request. Imagine you have a 4-CPU server, 4 threads running with pm2, and you get four simultaneous 1 TB file attachment uploads. Your server would basically be frozen and unresponsive to other valid clients trying to connect. What can be done here?

andris9 commented 5 years ago

What makes you think that? That's the entire point of Node: the thread is not occupied, because everything runs on the event loop. That TB attachment is not sent as a single large entity but as a series of small TCP chunks. For the uploader it takes a while to upload each chunk; for the Node process it takes almost no time to process it (as nothing is done with the chunk).

andris9 commented 5 years ago

And don't test the TB uploads locally, where you have no network latency; it does not give you realistic results.

niftylettuce commented 5 years ago

Not sure if you saw this, but if I try to open 500+ connections at once to the SMTP server (with default options), the server has an uncaught exception.

I narrowed the uncaught exception down to this line (based on my current version of Node and the output):

https://github.com/nodejs/node/blob/v10.15.3/lib/net.js#L1097

niftylettuce commented 5 years ago

I figured out why it was throwing an uncaught exception: it's because I did not have a connection.on('error', err => ...); listener. Full error object:

{ Error: connect ECONNRESET 127.0.0.1:52638
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)
  errno: 'ECONNRESET',
  code: 'ESOCKET',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 52638,
  command: 'CONN' }
andris9 commented 5 years ago

Was it a handled exception that can be caught by smtpserverinstance.on('error'), or was it an unhandled one that kills your process?

At that count you might be running out of file descriptors, etc. If you can catch the error then I wouldn't worry too much.

smtp-server also has a maxClients option to limit active connections. Also, if you have 4 CPUs then I would use the Node.js cluster module with 8 workers, each running its own smtp-server instance. That way, if one instance explodes, it limits the damage to the other open connections.

niftylettuce commented 5 years ago

Any reason for two workers per CPU in particular? Just a 50/50 split for efficiency?

niftylettuce commented 5 years ago

Usually I just do 1 thread per CPU using PM2 which handles clustering

andris9 commented 5 years ago

Yeah, don't worry about these ECONNRESET errors under load. That's most probably not legit traffic anyway.

andris9 commented 5 years ago

Having more workers than CPU threads is not so much about efficiency as about damage control. You start as many workers as the system can reasonably handle, and if one dies it does not affect the other connections.

andris9 commented 5 years ago

If you start having resource issues during a normal run, then you should consider upgrading hardware anyway. Having more workers than CPU threads is analogous to web-hosting overselling: normally it should not affect anything, and if it does, then upgrade.

niftylettuce commented 5 years ago

Thank you as always.