MicrosoftDocs / azure-docs

Open source documentation of Microsoft Azure
https://docs.microsoft.com/azure
Creative Commons Attribution 4.0 International

Rate of scaling up and down #42667

Closed: dmarlow closed this issue 4 years ago

dmarlow commented 4 years ago

We're considering using the serverless feature, and we're trying to understand how quickly resources can be made available and whether that is affected by the size of the database. We're not considering any form of auto pausing/resuming; we just want to scale between the minimum and maximum cores to stay cost optimized. Does the database size make any difference in how quickly a database can scale up to more cores?



SumanthMarigowda-MSFT-zz commented 4 years ago

@dmarlow Thanks for your question. We are checking on this and will respond to you soon.

NavtejSaini-MSFT commented 4 years ago

@dmarlow This link in the document explains this behavior.

The best way to mitigate connection drops is to always have retry logic in your application. When a connection is dropped momentarily, a retry will recover from the error.
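As a minimal sketch of that retry advice: the helper below retries an operation with exponential backoff. The function name, parameters, and the use of Python's built-in `ConnectionError` are my own illustrative assumptions; real code would catch the specific transient exceptions raised by your database driver (for example, the error surfaced for SQL error 40613, "database is not currently available", during a serverless scale event).

```python
import time

def run_with_retry(operation, retries=5, base_delay=1.0):
    """Run operation(), retrying with exponential backoff on transient failures.

    Azure SQL can briefly drop connections during a serverless scale event;
    a short retry loop like this rides out the reconfiguration.
    """
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError:  # stand-in for your driver's transient error type
            if attempt == retries - 1:
                raise  # retries exhausted; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s, ...
```

In practice you would wrap each database call (or a unit of work) in `run_with_retry`, so a momentary drop during scaling is retried instead of failing the request.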

dmarlow commented 4 years ago

I'm not so concerned about the connection drop as I am about a large database causing auto-scaling to take a long time (i.e., being unable to respond quickly to scale up/down requests).

NavtejSaini-MSFT commented 4 years ago

@oslake Please confirm whether the timing is affected by the size of the DB. Will a large DB take a long time to auto-scale, or does the time depend only on the number of vCores added?

NavtejSaini-MSFT commented 4 years ago

@dmarlow We have checked with our product group and they have confirmed that the storage/DB size doesn't affect auto-scaling latency.

dmarlow commented 4 years ago

I'm curious how that works. It must be doing something different from what scaling a DB does today, right? Today, when you change the size/scale of a DB, it copies the data to a new cluster and then points things over to it. Serverless must be doing something different under the covers. How does it work?

NavtejSaini-MSFT commented 4 years ago

@dmarlow Compute and storage are separated onto different machines, so there is no data copy involved in rescaling.

dmarlow commented 4 years ago

Very cool. So compute can be provisioned ahead of time, and the disconnect happens when traffic is pointed to the new compute nodes. Data doesn't need to move, since both compute clusters use the same underlying storage. Makes sense. Thanks for the clarifications.