We are preparing to move this project to a GitLab account due to planned changes in UX and CI/CD pipelines. This repository will still be used for two main purposes:
• Public representation of the Albero Project.
• Helping to choose the right data backend technology on Azure.
Here is the main representation of the Decision Tree for Data Backend Technologies on Azure. Please use this HTML file for simple navigation. Click on a drill-down to be redirected to the subsequent Decision Trees.
Below are some explanations and our comments on why we created it, how to use it, and how to submit requests for changes. Enjoy!
Disclaimer: this article represents the personal experience and understanding of the authors. Please use it for reference only. It does not represent the official position of Microsoft.
Simplicity is the ultimate sophistication. -- Leonardo da Vinci
In this article we talk a lot about different methods of comparing and selecting databases, and we present an alternative approach to considering the options. At the same time, we would like to highlight that this is just one viewpoint. Please use what follows as a reference rather than as prescriptive guidance.
This Decision Tree is:
• A map of the Azure Data Services, whose main goal is to help you navigate among them and understand their strengths and weaknesses.
• Supplementary material to the officially published Microsoft documentation, helping you define and shape your thought process around selecting certain data technologies and using them together in your solutions.
This Decision Tree is not:
• A definitive guide to the selection of data technologies.
• A business- or politics-related document. All the criteria we used are purely technical.
• A pattern- or use-case-focused document.
• A competitive analysis of any kind.
We take responsibility for maintaining this document for as long as we can, but we still recommend verifying its points against official Microsoft guidance and documentation. Also, do not hesitate to apply common sense, and please check things before putting them into production. Not all situations are the same or even similar.
The article has four chapters:
Chapter 1: Read and Write Profiles – explains the premise of the decision tree.
Chapter 2: Navigating Through the Decision Tree – a guide to navigating the decision tree.
Chapter 3: Mapping Use Cases to the Decision Tree – examples of how the decision tree is used for different use cases.
Chapter 4: Getting Access and Providing Feedback – finally, do not hesitate to share your experience and feedback; this chapter covers how to do so.
Our data technologies were developed mainly for two major purposes. And guess what: they are not encryption and obfuscation, but rather reading and writing data. Actually, mainly reading, as (and we hope you agree) there is no point in writing data you cannot read later on. Surprisingly, we never compare data technologies based on their actual read and write behavior. Typically, while comparing data technologies, we are (pick all that apply):
Basically, we craft the design of our data estate based on experience, preferences, and beliefs. When our group first faced the need to compare different technologies and recommend one, our first thought was that it is impossible. How would you compare a NoSQL database to a ledger database? Very simply: by using their fundamental read and write goals as the foundation for such a comparison. The essence of a technology remains the same, as does the goal of its creation. A sheep cannot become a tiger 😉
Intuitively (and, hopefully, obviously), if some data has a write path, it should also have a read path, and it may or may not have one or more processing capabilities / tools / approaches.
Of course, plenty of technologies and vendors claim that one single solution can solve every possible issue, but the entire history of the rise of data technologies over the last decade shows that this is surely no longer the case.
Well, it seems that we have finished with the WHY and already started on the WHAT. Let's move on and show you one of these Decision Trees in more detail.
So, to help you navigate the ever-changing and rather complex Azure data landscape, we have built a set of decision trees based on the concept of read and write profiles. Conceptually, a Decision Tree looks very simple.
Well, it is obviously not that simple. The good thing is that it covers almost the entire Azure data portfolio in one diagram (more than 20 services, dozens of SKUs, integrations, and important features). So it just cannot be super simple. But we are trying 😉
To guide you through it, let's paste a small example (a subset) of this decision tree here and demonstrate some of the main features and the ways to navigate through them.
It is comprised of two main paths: write and read patterns. The write pattern runs from the top to the middle and is marked with blue boxes and lines; the read pattern runs from the bottom to the middle and is marked with greenish lines and boxes. This represents some of the fundamental differences in the behavior of various technologies. In the grey boxes you will find either questions or workload descriptions. As mentioned, this approach is not strictly defined in the mathematical sense; rather, it follows industry practices and includes the specific features and technical aspects that differentiate each technology from the others. When in doubt, simply follow the yes / no path. When you have to choose among descriptions, pick the one that fits best. Below are the components of simple navigation, followed by a small code sketch of the yes / no logic.
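For readers who like to think in code, here is a minimal sketch (our illustration, not part of the project) that models the yes / no navigation as a plain Python data structure. The questions and service names in it are simplified placeholders, not actual tree content.

```python
# A tiny, illustrative model of yes / no navigation through a tree fragment.
# Questions and leaf services below are placeholders, not the real tree.
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    question: str              # the text of a grey box
    yes: Union["Node", str]    # next node, or a service name if a leaf
    no: Union["Node", str]

tree = Node(
    question="Are writes append-only events?",
    yes=Node(
        question="Hundreds of thousands of events per second?",
        yes="Azure Event Hubs",
        no="Azure Queue Storage",
    ),
    no="Azure SQL Database",
)

def navigate(node: Union[Node, str], answers: dict) -> str:
    """Follow yes / no answers from the root until a leaf is reached."""
    while isinstance(node, Node):
        node = node.yes if answers[node.question] else node.no
    return node

print(navigate(tree, {
    "Are writes append-only events?": True,
    "Hundreds of thousands of events per second?": True,
}))  # -> Azure Event Hubs
```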
There are also some trickier parts, where you cannot say with certainty which workload will be a better fit. In such cases we use wide blue arrows representing the “leaning” concept, much like in one of the examples below.
There is one more style of “leaning”, represented by the so-called “paradigm”. In some cases, certain technologies are preferred when you are using a particular programming language or stack. In our decision tree this is represented by the notion of a “paradigm”, as shown in the picture below.
Typically, more than one product is available within a single paradigm. To distinguish the main goal of a product within a certain paradigm, we use code wording like in the example above. This goal is represented by a single word shown above the box with the service, in the same color as the paradigm.
In most of the technology patterns we also have a “default” path for reads and writes. Typically, for a greenfield project this is the easiest and richest path (in terms of functionality, new features, and, possibly, the overall happiness of the users).
In some cases, we have also implemented a drill-down approach to simplify navigation. Drill-downs lead to a different diagram explaining details of the service offerings or SKUs for a particular product / service.
A drill-down brings you to a new Decision Tree specific to a particular technology (such as SQL Database on Azure, PostgreSQL, or others). These Decision Trees follow the same or similar patterns, with a reduced number of possible read and write profiles (as shown in the diagram below). On these Decision Trees, SLAs and high-availability options, as well as storage and RAM limits, are defined on a per-SKU basis.
Another cool feature of the Decision Tree is the depiction of the maximum achievable SLA, high-availability options, and storage / RAM limits (where it makes sense). These are implemented as shown below. Please remember that these may differ from SKU to SKU; only the maximum achievable values are shown on the main Decision Tree.
Please note that all (or most, in case we forgot something) of the icons with limitations, HA, and SLA are clickable, redirecting you to the official Microsoft documentation.
One of the newest features is the Developer View. In this view we list all the procedural languages supported by the technology, as well as the SDKs and some important limitations on the size of items or resultsets, where applicable. We also depict the supported file types and formats. We plan to turn these into references to the official Microsoft documentation (much as was done with SLAs, storage, etc.).
With two separate profiles for reads and writes, there is a very important and frequently asked question: “What if the read and write profiles do not match?” Let's answer with a question: what do you typically do when the technology you use for writes is not suitable for reads with the required pattern / functionality? The answer is quite obvious: you introduce one more technology into your solution. To help you find which components can be directly integrated with each other, we have introduced the concepts of “Integration In” and “Integration Out”. An example of the notation is shown below.
In this example we can see that Azure Synapse Analytics can accept data from:
• Azure Cosmos DB, using Azure Synapse Link
• Azure Data Lake Storage Gen2 / Blob Storage, using the CTAS functionality of PolyBase
• Azure Stream Analytics, directly via an output
• Azure Databricks, using the Azure Databricks Connector for Azure Synapse
It can also export data to ADLS Gen2 using the CETAS statements of PolyBase (sketched below). As you can see, the Decision Tree itself only shows that such an integration is possible; it does not specify the exact mechanism or its limitations. If you click on the icon, you will be redirected to the official Microsoft documentation. One more important note: we do not show Azure Data Factory on this diagram, as this service is meant to be used across the entire Azure portfolio, and adding it would make the diagram even messier. So, we implicitly assume that Azure Data Factory can be used to integrate with most of the services mentioned on the Decision Tree. OK, let's take a look at how to apply this in practice. In the next chapter we will cover some examples of using the Decision Tree to craft an architecture and select the appropriate technology for your workload.
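Before we do, here is a quick, hedged illustration of one of those integration mechanisms: a CETAS export from a Synapse dedicated SQL pool to ADLS Gen2, driven from Python. The connection string, external data source, file format, and table names are placeholders we made up for this sketch; please check the official CETAS documentation for the exact syntax and prerequisites in your environment.

```python
# Sketch: exporting a table from a Synapse dedicated SQL pool to ADLS Gen2
# via a PolyBase CETAS statement. All names below are illustrative placeholders;
# the external data source and file format must already exist in the pool.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-workspace>.sql.azuresynapse.net;"
    "DATABASE=<your-dedicated-pool>;UID=<user>;PWD=<password>"
)

cetas = """
CREATE EXTERNAL TABLE dbo.SalesExport
WITH (
    LOCATION    = '/exports/sales/',     -- folder in the ADLS Gen2 container
    DATA_SOURCE = MyAdlsDataSource,      -- pre-created external data source
    FILE_FORMAT = MyParquetFormat        -- pre-created external file format
)
AS
SELECT order_id, store_id, amount
FROM dbo.Sales;
"""

cursor = conn.cursor()
cursor.execute(cetas)
conn.commit()
conn.close()
```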
Why and How to Map Your Use Case to the Decision Tree
As you can see, these Decision Trees can be pretty complex, but at the same time they represent an almost complete set of data technologies. Industrial and technological use cases may still be very relevant, especially when combined with the Decision Tree as a frame for discussion. In such a case, one can clearly see not only the choices made but also the choices omitted. It can also immediately give you an idea of which alternatives you may consider, and when. HOW? Just shade out everything that is not needed and add the relevant metrics for the decisions made (for instance, predicted throughput, data size, latency, etc.). Let's take a closer look at how to do this. We will start with a small example.
In this example, your business specializes in the retail industry, and you're building a retail management system to track the inventory of the products in each store. You also want to add some basic reporting capabilities based on the geospatial data from the stores. To decide which database best fits these requirements, let's take our uber tree and start from the write pattern.
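Which service the tree actually recommends depends on the answers you give while walking it. Purely to make the geospatial-reporting requirement concrete, here is a small sketch assuming the path ended at a store with native GeoJSON support, such as Azure Cosmos DB; the account, database, container, and field names are all hypothetical.

```python
# Sketch: a geospatial inventory query against Azure Cosmos DB (SQL API).
# Endpoint, key, and all names are hypothetical placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<primary-key>")
container = client.get_database_client("retail").get_container_client("stores")

# Find stores within 5 km of a point; GeoJSON uses [longitude, latitude].
query = """
SELECT s.id, s.inventory
FROM stores s
WHERE ST_DISTANCE(s.location,
                  {'type': 'Point', 'coordinates': [-122.33, 47.61]}) < 5000
"""

for store in container.query_items(query=query,
                                   enable_cross_partition_query=True):
    print(store["id"], store["inventory"])
```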
The next example of how the Uber Tree can be used as a tool to produce a data architecture comes from the gaming space. Your team is building new features for a massively multiplayer online game, and they need to collect and store all the actions of the players, analyze those that matter in near-real time, and archive the history. As usual, we will start with the write profile:
• The events are captured and stored and are never updated.
• High throughput is expected, with hundreds of thousands of events per second.
For this specific use case, it seems that there is a single path for the writes; Event Hubs answers those requirements. But the way we will process and read the data is not in sequential order. More specifically:
• The data needs to be read in a time-series manner, prioritizing the most recent events and aggregating based on time.
• We need to narrow down the analysis to the metrics that are relevant for a particular game, and also enrich the data with data coming from different sources; so, basically, you need control over the schema.
On the read pattern, it looks like Azure Data Explorer would be the most suitable store. In this case, where two different profiles for writes and reads are identified, we will leverage two solutions that are integrated: Azure Data Explorer natively supports ingestion from Event Hubs. So, we can provide a queue interface to the event producer and an analytical interface to the team that will run the analysis on those events. A small sketch of both sides follows below.
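Here is a minimal sketch of both interfaces under that architecture: writing game events to Event Hubs and reading them back from Azure Data Explorer with a time-series aggregation. Connection strings, hub, cluster, database, and table names are placeholders of ours, not values from the article.

```python
# Sketch: the write side (Event Hubs) and the read side (Azure Data Explorer)
# of the gaming architecture. All names and credentials are placeholders.
import json
from datetime import datetime, timezone

from azure.eventhub import EventHubProducerClient, EventData
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# --- Write profile: append-only, high-throughput event ingestion ---
producer = EventHubProducerClient.from_connection_string(
    "<event-hubs-connection-string>", eventhub_name="player-events")
batch = producer.create_batch()
batch.add(EventData(json.dumps({
    "player": "p-42",
    "action": "loot_pickup",
    "ts": datetime.now(timezone.utc).isoformat(),
})))
producer.send_batch(batch)
producer.close()

# --- Read profile: time-series analysis, most recent data first ---
# ADX natively ingests from Event Hubs; here we only query the result.
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://<cluster>.kusto.windows.net")
kusto = KustoClient(kcsb)
response = kusto.execute(
    "GameAnalytics",  # hypothetical database name
    "PlayerEvents "
    "| where todatetime(ts) > ago(1h) "
    "| summarize events = count() by bin(todatetime(ts), 1m)")
for row in response.primary_results[0]:
    print(row)
```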
In this example, your business specializes in the energy industry, and you're building an analytics platform for power plant operation and asset management. It would include all the necessary pieces, from condition monitoring and performance management to maintenance planning, risk management, and market scheduling. To decide on the best approach for these requirements, let's start with the write patterns of our uber tree.
In this example, your business specializes in the healthcare industry, and you're building a platform for patient outreach and engagement. You are trying to build an advanced analytics solution that identifies chronically unwell patients with high utilization of emergency department / unplanned inpatient services and, through a more coordinated provision of ambulatory services, keeps them well at home. To decide on the best approach for these requirements, let's start with the write patterns of our uber tree.
Here we are. Thank you for staying with us up to this moment. This will be the shortest chapter – we promise 😉 You can find the interactive Decision Tree on GitHub Pages by following this link: http://albero.cloud/ All the materials can be found in the public GitHub repository here: https://github.com/albero-azure/albero You can provide feedback, submit questions, and propose materials via the Issues section of the GitHub repository. Thank you, and have a very pleasant day! BTW, we just tested it from a smartphone, and it also looks pretty nice 😉