bestlong / node-red-contrib-mssql-plus

A Node-RED node to read and write to Microsoft MS SQL Databases
MIT License

Using split in combination with MSSQL-plus and azureSQL #51

Closed Marklundarn closed 2 years ago

Marklundarn commented 3 years ago

First of all: great work with this node. Easy to use and still versatile. I might, however, have pushed it to its limit in this scenario and would like to discuss it.

This is the scenario that I'm testing (flow screenshot attached).

I want to be able to run a query for each row (each object in my array), but I'm hitting this error: `Resource ID : 1. The request limit for the database is 30 and has been reached. See 'http://go.microsoft.com/fwlink/?LinkId=267637' for assistance.`

From what I can understand, the node opens a connection for each query that runs, which would totally make sense. In this case, however, it creates problems with the limits towards Azure.

What I think could solve this problem would be to actually open the connection --> run all queries --> close the connection. Something like this (second flow screenshot attached).

I have found a couple of workarounds, such as concatenating everything into one complete query and then running it once. That technically solves the issue, but it makes the flow harder to follow and maintain.
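For anyone landing here, the concatenation workaround can be sketched as the body of a Node-RED function node placed before the mssql node. The table and column names (`myTable`, `id`, `name`) are made up for illustration; adjust them to your own schema.

```javascript
// Sketch of the concatenation workaround: instead of one mssql-node run per
// row (one connection/request each), build a single batched query string and
// run it through the mssql node once. Table/column names are hypothetical.
function buildBatch(rows) {
  return rows
    // NOTE: a real flow should parameterise or escape values; inlining them
    // like this is for illustration only.
    .map(row => `UPDATE myTable SET name = '${row.name}' WHERE id = ${row.id};`)
    .join("\n");
}

// Inside a function node this would simply be:
//   msg.payload = buildBatch(msg.payload); return msg;
console.log(buildBatch([{ id: 1, name: "a" }, { id: 2, name: "b" }]));
```

The trade-off is exactly the one described above: one connection and one request towards Azure, at the cost of a flow that is harder to follow.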

Is there a way to control the opening and closing of the connection manually?

An example flow to recreate the issue: flows.zip

Steve-Mcl commented 3 years ago

Hi @Marklundarn, how many queries are executed from your object? I assume more than 30?

Reading up on the issue, it's to do with concurrent requests, and the limit is set by your Azure service tier. For example, if you moved up from the Basic tier you would get more concurrent requests.

Are these queries quite large, and do they take some time to respond? If so, and you are in no major hurry to get the answers back, you could use a delay node set to rate-limit queries to 1 per second.

Also, I haven't had a chance to look at your flow; could you post an example of a couple of queries? Perhaps it's just a case of building a dynamic WHERE clause to perform just one query? Maybe a stored procedure and a TVP (table-valued parameter) would be more suitable?
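The "dynamic WHERE clause" suggestion could look like this sketch (again with made-up table/column names), collapsing N per-row lookups into a single query:

```javascript
// Sketch of collapsing many per-row queries into one dynamic WHERE clause.
// `myTable` and `id` are hypothetical; the ids are assumed to be trusted
// integers, since they are inlined rather than parameterised.
function buildInClause(ids) {
  const safe = ids.filter(id => Number.isInteger(id)); // guard inlined values
  return `SELECT * FROM myTable WHERE id IN (${safe.join(", ")})`;
}

// One query instead of one per id -- one request towards Azure.
console.log(buildInClause([1, 2, 3]));
```

For non-numeric keys or very large id lists, a stored procedure with a TVP (as suggested above) is the safer option, since it avoids inlining values into the SQL text.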

Marklundarn commented 3 years ago

Hi,

Yeah, I'm running about 250 as a test case. I actually thought a "request" was the same as an "opened connection", but that doesn't seem to be the case. With the limitations of Azure SQL, I think the only viable solution is to concatenate into one big request. Thanks for your input! :)

/Jens

Marklundarn commented 3 years ago

Hi,

After reading up a bit more on how mssql seems to handle requests/workers, it does indeed seem like one connection instance will always use at least one worker (request), maybe more depending on the MAXDOP setting on the database server.

That means that if we open a connection for each "run" of the mssql node and close the connection instance after the query is done, we risk ending up with multiple connection instances open at the same time, each using a minimum of one request/worker slot. This would max out the worker/request slots almost instantly, and Azure would reject our connections.

But if we open a single connection instance, re-use it to execute our queries, and close it when all queries are done, we would only be using one connection, which means fewer workers/requests towards Azure even with the same number of queries. I also think this approach would remove some of the overhead created by opening and closing a connection for each query.
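As a sketch of that idea: the mssql-plus node wraps the `mssql` npm package, whose connection pool would normally come from `sql.connect(config)`; here the pool is mocked so the sequencing logic is runnable without a database.

```javascript
// Sketch of "open once, run all queries, close once". Awaiting each query
// keeps at most one request in flight on the single pool, so Azure only
// ever sees one worker for the whole batch.
async function runSequential(pool, queries) {
  const results = [];
  for (const q of queries) {
    results.push(await pool.request().query(q));
  }
  return results;
}

// Mock pool standing in for a real `mssql` ConnectionPool.
const mockPool = {
  request: () => ({ query: async q => `ran: ${q}` }),
  close: async () => {},
};

runSequential(mockPool, ["SELECT 1", "SELECT 2"]).then(async results => {
  console.log(results);   // → [ 'ran: SELECT 1', 'ran: SELECT 2' ]
  await mockPool.close(); // close only after every query has finished
});
```

With a real pool, the same shape applies: connect once before the loop, close once after it.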

I do understand that this isn't some minor thing to do, but I just wanted to contribute my thoughts on the issue :)

UNOPARATOR commented 2 years ago

@Marklundarn; since the limitation is regarding your Azure SQL Database Resource Limits, you might consider adding a delay node (set as a Rate Limiter) or maybe try the node-red-contrib-semaphore node instead.
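For reference, the effect of putting a semaphore in front of the mssql node can be sketched with a minimal promise-based semaphore (illustrative only; the max of 2 here stands in for staying well under the Azure limit of 30):

```javascript
// Minimal promise-based semaphore, sketching what node-red-contrib-semaphore
// does in front of the mssql node: cap the number of concurrent requests.
class Semaphore {
  constructor(max) {
    this.free = max;   // available slots
    this.waiting = []; // resolvers for queued acquirers
  }
  async acquire() {
    if (this.free > 0) { this.free--; return; }
    await new Promise(resolve => this.waiting.push(resolve));
  }
  release() {
    const next = this.waiting.shift();
    if (next) next();  // hand the slot directly to the next waiter
    else this.free++;
  }
}

const sem = new Semaphore(2);
let inFlight = 0;
let peak = 0;

async function fakeQuery(i) {
  await sem.acquire();
  inFlight++;
  peak = Math.max(peak, inFlight);
  await new Promise(r => setTimeout(r, 10)); // stand-in for the SQL round trip
  inFlight--;
  sem.release();
}

const done = Promise.all([0, 1, 2, 3, 4, 5].map(fakeQuery));
done.then(() => console.log(`peak concurrency: ${peak}`)); // → peak concurrency: 2
```

Unlike a fixed 1-per-second delay node, this keeps throughput as high as the limit allows while never exceeding it.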

Marklundarn commented 2 years ago

> @Marklundarn; since the limitation is regarding your Azure SQL Database Resource Limits, you might consider adding a delay node (set as a Rate Limiter) or maybe try the node-red-contrib-semaphore node instead.

Thanks! I'll look into that :)