For example, if an application calls `SignalFxClient.send()` periodically with 400 datapoints and the default `batchSize` of 300, it will leak memory because at most one HTTP POST is made per call to `send()`:
- 400 items get added to `signalFxClient.queue`
- `send()` leads to a single call to `startAsyncSend()`, which removes 300 (`batchSize`) items from `signalFxClient.queue`, leaving 100 items in `signalFxClient.queue`
- When this is repeated, the queue grows by 100 elements each time, leaking memory
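The arithmetic above can be reproduced with a small standalone simulation (a hypothetical sketch of the queue behaviour, not the signalfx-nodejs source):

```javascript
// Simulates the leak: each send() enqueues datapoints but drains
// at most batchSize of them, mirroring the behaviour described above.
const batchSize = 300; // default value from lib/client/conf.js
const queue = [];

function send(datapoints) {
  queue.push(...datapoints);
  // startAsyncSend() runs once per send() and removes at most batchSize items
  queue.splice(0, batchSize);
}

for (let i = 1; i <= 5; i++) {
  send(new Array(400).fill(0));
  console.log(`after send #${i}: queue.length = ${queue.length}`);
}
// queue.length grows by 100 per call: 100, 200, 300, 400, 500
```

Sending 400 datapoints per call against a `batchSize` of 300 leaves a net 100 items behind on every call, so the queue grows without bound.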
`send()` makes at most one HTTP POST per call:
https://github.com/signalfx/signalfx-nodejs/blob/48ce0587390ecfe3145c148339b070bb56d92476/lib/client/ingest/signal_fx_client.js#L231-L243

The default `batchSize` is 300:
https://github.com/signalfx/signalfx-nodejs/blob/99d8fb3cd8ca2396ecf42a6a209ae2655bd8611e/lib/client/conf.js#L8

The relevant check, `this.queue.length >= this.batchSize`:
https://github.com/signalfx/signalfx-nodejs/blob/48ce0587390ecfe3145c148339b070bb56d92476/lib/client/ingest/signal_fx_client.js#L292

## Possible Solutions

- Introduce a max queue size and log a warning if it is exceeded
- Chunk calls into `batchSize`-sized batches and call SignalFx concurrently
- Allow HTTP requests larger than `batchSize` (this may cause problems if the SignalFx endpoints reject HTTP requests whose body size in bytes is too large; I haven't tried this)

This affects the 7.x and 8.0.0beta releases (I didn't check earlier release lines).