orientechnologies / orientdb

OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text and Geospatial models in one Multi-Model product. OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries.
https://orientdb.dev
Apache License 2.0

Scalability of Many Databases #8015

Open · mmacfadden opened this issue 6 years ago

mmacfadden commented 6 years ago

This is not exactly a bug report or a feature request, but more of a discussion.

We will have a fairly large set of databases in a multi-tenant situation. Let's say, for the sake of argument, that we will have 5000 databases of around 20GB each, so we need 100TB of storage to host them. Now assume that, due to load, I need to start adding nodes to the cluster to scale horizontally. For each node I want to add, I need to add another 100TB of storage to my infrastructure. This is not really practical: with 10 nodes, I wind up needing 1000TB of storage. Yikes.
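To make the math concrete, here is a quick back-of-envelope sketch (the figures are the assumptions above, not measurements):

```java
public class StorageMath {
  public static void main(String[] args) {
    // Assumed figures from the scenario above.
    long databases = 5_000, dbSizeGb = 20, nodes = 10;

    long perNodeTb = databases * dbSizeGb / 1_000; // 100 TB on EVERY node today
    long clusterTb = perNodeTb * nodes;            // 1,000 TB across 10 nodes

    System.out.printf("per node: %d TB, cluster-wide: %d TB%n", perNodeTb, clusterTb);
  }
}
```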

Interestingly, no individual database needs 10 nodes. Let's say each user imposes only enough load to consume a tiny fraction of a single node. In that case we need to scale the system to support all the databases concurrently, but for any single database we are mainly looking at HA. So I really only need each database on 2 nodes (or 3 if I want a majority write quorum that still succeeds while one node is down).
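For reference, a majority write quorum over n copies is floor(n/2) + 1, which is why 3 copies is the usual HA sweet spot; and once the number of copies per database is fixed, total storage stops growing with the node count. A minimal sketch of both calculations, using the assumed figures from above:

```java
public class ReplicaMath {

  // Majority write quorum over n copies: floor(n / 2) + 1.
  static int writeQuorum(int copies) {
    return copies / 2 + 1;
  }

  public static void main(String[] args) {
    System.out.println(writeQuorum(2)); // 2 -> any single node failure blocks writes
    System.out.println(writeQuorum(3)); // 2 -> writes still succeed with one node down

    // Storage with a fixed 3 copies per database, independent of cluster size:
    long databases = 5_000, dbSizeGb = 20, copies = 3;
    System.out.println(databases * dbSizeGb * copies / 1_000 + " TB total"); // 300 TB
  }
}
```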

It would be interesting to be able to specify how many nodes each database should be present on, rather than requiring every node to hold a full copy of ALL databases. This is somewhat like the DHT Auto-Sharding feature in #6256, but at the database level. When I add a node to the cluster, I do NOT want to have to transfer 100TB of data to it to get it up and running. I would much rather add the node and have a certain number of databases migrate to it (essentially auto-balancing); one way that placement could work is sketched below. It's entirely possible that DHT Auto-Sharding could solve this, if implemented properly, by sharding all data from all databases across the whole cluster. The only concern is that presently an EMPTY database, containing just the schema and indices, seems to take up about 500MB of storage. Replicated to 10 nodes, that is 5GB per database; multiplied by 5000 databases, it comes to 25TB just to store empty databases.
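As an illustration of the auto-balancing idea, here is a minimal sketch (hypothetical, not an OrientDB API) using rendezvous (highest-random-weight) hashing to deterministically pick which R nodes host each database. When a node joins, it becomes an owner for only a proportional share of the databases, so only those need to migrate; everything else stays put.

```java
import java.nio.charset.StandardCharsets;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DatabasePlacement {

  // Deterministic score for a (database, node) pair; FNV-1a, for simplicity.
  private static long score(String database, String node) {
    long h = 0xcbf29ce484222325L;
    for (byte b : (database + "/" + node).getBytes(StandardCharsets.UTF_8)) {
      h ^= b & 0xff;
      h *= 0x100000001b3L;
    }
    return h;
  }

  // The R nodes scoring highest for this database are the ones that host it.
  static List<String> ownersOf(String database, List<String> nodes, int replicas) {
    return nodes.stream()
        .sorted(Comparator.comparingLong((String n) -> score(database, n)).reversed())
        .limit(replicas)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> nodes = List.of("node1", "node2", "node3", "node4");
    System.out.println(ownersOf("tenant-42", nodes, 3)); // 3 owners out of 4 nodes
  }
}
```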

I am not entirely sure how this would interact with node roles (MASTER vs. REPLICA). I suppose the configuration could specify how many masters and how many replicas are required for each database.
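Purely as a strawman (the `placement` block below does not exist today; `autoDeploy` and `writeQuorum` are existing keys in default-distributed-db-config.json), the per-database distributed config could grow placement fields like:

```json
{
  "autoDeploy": false,
  "writeQuorum": "majority",
  "placement": {
    "masters": 2,
    "replicas": 1
  }
}
```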

In general, I think it would be ideal to separate processing scalability from storage scalability/reliability. If I add a node with storage to the system, the effective storage capacity of the cluster should go up; today it stays the same. It would also be fantastic if the database could work off of shared storage: we can RAID the storage ourselves for the desired amount of reliability and performance at the storage level. That might be technically challenging for now, but it does seem like all read replicas could share the same storage.

luigidellaquila commented 6 years ago

Hi @mmacfadden

Thank you very much for opening this discussion. In principle I like this idea: it is a common use case, and having a standard solution that makes it scale would be a big plus.

In v3.x we will mainly concentrate on the distributed architecture, so we will definitely take this into consideration.

As a side note, we will have to work out the best strategy for managing connections and routing (i.e. whether the client has to do the routing and re-routing to specific nodes, or whether the cluster has to do pass-through communication).
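For example, under the client-side routing option this could look roughly like the following (a hypothetical sketch, not the current driver behaviour; `ClusterDirectory` is an invented name): the client asks any node which nodes own a database, then connects to one of them directly.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class RoutingSketch {

  // Hypothetical directory service that any cluster node could expose.
  interface ClusterDirectory {
    List<String> ownersOf(String database); // e.g. ["node3:2424", "node7:2424"]
  }

  // Client-side routing: look up the owners once, then talk to one directly.
  static String pickEndpoint(ClusterDirectory directory, String database) {
    List<String> owners = directory.ownersOf(database);
    return owners.get(ThreadLocalRandom.current().nextInt(owners.size()));
  }

  public static void main(String[] args) {
    ClusterDirectory stub = db -> List.of("node3:2424", "node7:2424");
    System.out.println(pickEndpoint(stub, "tenant-42"));
  }
}
```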

I'm flagging this as an enhancement and tentatively assigning it to @tglman

Thanks

Luigi

mmacfadden commented 6 years ago

@luigidellaquila Perhaps we should add the distributed label to this issue.

luigidellaquila commented 6 years ago

Done, thanks!