anitsh / til

Today I Learned (til) - GitHub `Issues` used as a daily learning management system for taking notes and storing resource links.
https://anitshrestha.com.np
MIT License

Architecture Tradeoff Analysis Method Collection ATAM - Carnegie Mellon University #693

Open anitsh opened 3 years ago

anitsh commented 3 years ago

A method for evaluating software architectures relative to quality attribute goals. A leading method in the area of software architecture evaluation.

It aids in eliciting sets of quality requirements along multiple dimensions, analyzing the effects of each requirement in isolation, and then understanding the interactions of these requirements.

It exposes architectural risks that potentially inhibit the achievement of an organization's business goals.

It reveals how well the architecture satisfies particular quality goals, and how those goals interact with and trade off against one another.

Benefits

Many people have a stake in a system's architecture, and all of them exert whatever influence they can on the architect(s) to make sure that their goals are addressed. For example, the users want a system that is easy to use and has rich functionality. The maintenance organization wants a system that is easy to modify. The developing organization (as represented by management) wants a system that is easy to build and that will employ the existing work force to good advantage. The customer (who pays the bill) wants the system to be built on time and within budget. All of these stakeholders will benefit from applying the ATAM. And needless to say, the architect is also a primary beneficiary.

Output is an outbrief presentation and/or a written report that includes the major findings of the evaluation. These are typically the documented architectural approaches, the prioritized set of scenarios, the utility tree, and the risks, sensitivity points, and tradeoff points discovered.

It takes three to four days and gathers together a trained evaluation team, architects, and representatives of the architecture's various stakeholders.

The most important results are improved architectures.

Challenges

Most complex software systems are required to be modifiable and have good performance. They may also need to be secure, interoperable, portable, and reliable. But for any particular system these quality attributes compete; they cannot all be maximized at once, and the tradeoffs among them must be understood.

Description

Business drivers and the software architecture are elicited from project decision makers. These are refined into scenarios and the architectural decisions made in support of each one. Analysis of scenarios and decisions results in identification of risks, non-risks, sensitivity points, and tradeoff points in the architecture. Risks are synthesized into a set of risk themes, showing how each one threatens a business driver.

The ATAM consists of nine steps: present the ATAM; present business drivers; present architecture; identify architectural approaches; generate the quality attribute utility tree; analyze architectural approaches; brainstorm and prioritize scenarios; analyze architectural approaches against the prioritized scenarios; and present results. Each step is detailed below.

Resource

462 #176

anitsh commented 2 years ago

The ATAM gets its name because it not only reveals how well an architecture satisfies particular quality goals (such as performance or modifiability), but it also provides insight into how those quality goals interact with each other, how they trade off against each other. Such design decisions are critical. They have the most far-reaching consequences and are the most difficult to change after a system has been implemented.

When evaluating an architecture using the ATAM, the goal is to understand the consequences of architectural decisions with respect to the quality attribute requirements of the system. Why do we bother? Quite simply, an architecture is the key ingredient in a business or an organization’s technological success. A system is motivated by a set of functional and quality goals, and to be successful it must meet those goals within strict performance, availability, modifiability, and cost parameters.

The architecture is the key to achieving, or failing to achieve, these goals. The ATAM is a means of determining whether these goals are achievable by the architecture as it has been conceived, before enormous organizational resources have been committed to it.

The ATAM is a structured architecture analysis method, so the analysis is repeatable. Having a structured method helps ensure that the right questions regarding an architecture will be asked early, during the requirements and design stages when discovered problems can be solved relatively cheaply. It guides users of the method, the stakeholders, to look for conflicts and for resolutions to these conflicts in the software architecture.

This method has also been used to analyze legacy systems. This frequently occurs when the legacy system needs to support major modifications, integration with other systems, porting, or other significant upgrades. Assuming that an accurate architecture of the legacy system is available (which frequently must be acquired and verified using architecture extraction and conformance testing methods), applying the ATAM results in increased understanding of the quality attributes of the system.

ATAM draws its inspiration and techniques from three areas: the notion of architectural styles, the quality attribute analysis communities, and the Software Architecture Analysis Method (SAAM).

The ATAM is intended for analysis of an architecture with respect to its quality attributes. Although this is the ATAM’s focus, there is a problem in operationalizing this focus. We (and the software engineering community in general) do not understand quality attributes well: what it means to be “open” or “interoperable” or “secure” or “high performance” changes from system to system, from stakeholder to stakeholder, and from community to community.

Efforts on cataloguing the implications of using design patterns and architectural styles contribute, frequently in an informal way, to ensuring the quality of a design. More formal efforts also exist to ensure that quality attributes are addressed. These consist of formal analyses in areas such as performance evaluation [Klein 93], Markov modeling for availability, and inspection and review methods for modifiability.

But these techniques, if they are applied at all, are typically applied in isolation and their implications are considered in isolation. This is dangerous because all design involves tradeoffs: if we simply optimize for a single quality attribute, we stand the chance of ignoring other attributes of importance. Even more significantly, if we do not analyze for multiple attributes, we have no way of understanding the tradeoffs made in the architecture: places where improving one attribute causes another one to be compromised.

It is important to clearly state what the ATAM is and is not:

Implications:

Goals

Aim

Risks are architecturally important decisions that have not been made (e.g., the architecture team has not decided what scheduling discipline it will use, or has not decided whether it will use a relational or object-oriented database), or decisions that have been made but whose consequences are not fully understood (e.g., the architecture team has decided to include an operating system portability layer, but is not sure what functions need to go into this layer).

Sensitivity points are parameters in the architecture to which some measurable quality attribute response is highly correlated. For example, it might be determined that overall throughput in the system is highly correlated to the throughput of one particular communication channel, and availability in the system is highly correlated to the reliability of that same communication channel.

A tradeoff point is found in the architecture when a parameter of an architectural construct is host to more than one sensitivity point where the measurable quality attributes are affected differently by changing that parameter. For example, if increasing the speed of the communication channel mentioned above improves throughput but reduces its reliability, then the speed of that channel is a tradeoff point.

Risks, sensitivity points, and tradeoff points are areas of potential future concern with the architecture. These areas can be made the focus of future effort in terms of prototyping, design, and analysis.
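To make these findings concrete, here is a minimal sketch (our own representation, not notation from the ATAM itself) of how they might be recorded, using the communication-channel example above; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class SensitivityPoint:
    parameter: str   # architectural parameter the response is correlated with
    response: str    # measurable quality attribute response

@dataclass
class TradeoffPoint:
    parameter: str   # one parameter hosting two or more sensitivity points
    improves: str    # response that improves as the parameter increases
    degrades: str    # response that degrades as the parameter increases

# The two sensitivity points from the example above:
throughput_sens = SensitivityPoint("communication channel speed", "overall throughput")
availability_sens = SensitivityPoint("communication channel reliability", "system availability")

# Channel speed affects two responses in opposite directions, so it is a tradeoff point:
channel_speed = TradeoffPoint("communication channel speed",
                              improves="throughput", degrades="reliability")
```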

A prerequisite of an evaluation is to have a statement of quality attribute requirements and a specification of the architecture with a clear articulation of the architectural design decisions. However, it is not uncommon for quality attribute requirement specifications and architecture renderings to be vague and ambiguous. Therefore, two of the major goals of ATAM are to (1) elicit and refine a precise statement of the architecture’s driving quality attribute requirements, and (2) elicit and refine a precise statement of the architectural design decisions.

Given the attribute requirements and the design decisions, the third major goal of ATAM is to evaluate the architectural design decisions to determine if they satisfactorily address the quality attribute requirements.

The notion of a quality attribute characterization is a key concept upon which ATAM is founded.

Quality attribute characterizations answer the following questions about each attribute: What are the stimuli to which the architecture must respond? What is the measurable or observable manifestation of the quality attribute by which its achievement is judged? What are the key architectural decisions that impact achieving the attribute requirement?

One of the positive consequences of using the ATAM that we have observed is a clarification and concretization of quality attribute requirements. This is achieved in part by eliciting scenarios from the stakeholders that clearly state the quality attribute requirements in terms of stimuli and responses. The process of brainstorming scenarios also fosters stakeholder communication and consensus regarding quality attribute requirements. Scenarios are the second key concept upon which ATAM is built.

To elicit design decisions we start by asking what architectural approaches are being used to achieve quality attribute requirements. Our goal in asking this question is to elicit the architectural approaches, styles, or patterns used that contribute to achieving a quality attribute requirement. You can think of an architectural style as a template for a coordinated set of architectural decisions aimed at satisfying some quality attribute requirements.

Once we have identified a set of architectural styles or approaches we ask a set of attribute-specific questions (for example, a set of performance questions or a set of availability questions) to further refine our knowledge of the architecture. The questions we use are suggested by the attribute characterizations. Armed with knowledge of the attribute requirements and the architectural approaches, we are able to analyze the architectural decisions.

Attribute-based architectural styles (ABASs) [Klein 99a] help with this analysis. Attribute-based architectural styles offer attribute-specific reasoning frameworks that illustrate how each architectural decision embodied by an architectural style affects the achievement of a quality attribute. For example, a modifiability ABAS would help in assessing whether a publisher/subscriber architectural style would be well-suited for a set of anticipated modifications. The third concept upon which ATAM is founded is, thus, the notion of attribute-based architectural styles.

The following sections briefly introduce the ATAM, explain its foundations, discuss the steps of the ATAM in detail, and conclude with an extended example of applying the ATAM to a real system.

The ATAM is an analysis method organized around the idea that architectural styles are the main determiners of architectural quality attributes. The method focuses on the identification of business goals which lead to quality attribute goals. Based upon the quality attribute goals, we use the ATAM to analyze how architectural styles aid in the achievement of these goals.

The steps of the method are as follows:

Presentation

  1. Present the ATAM. The method is described to the assembled stakeholders (typically customer representatives, the architect or architecture team, user representatives, maintainers, administrators, managers, testers, integrators, etc.).
  2. Present business drivers. The project manager describes what business goals are motivating the development effort and hence what will be the primary architectural drivers (e.g., high availability or time to market or high security).
  3. Present architecture. The architect will describe the proposed architecture, focusing on how it addresses the business drivers.

Investigation and Analysis

  4. Identify architectural approaches. Architectural approaches are identified by the architect, but are not analyzed.
  5. Generate quality attribute utility tree. The quality factors that comprise system “utility” (performance, availability, security, modifiability, etc.) are elicited, specified down to the level of scenarios, annotated with stimuli and responses, and prioritized (a toy sketch of the resulting utility tree follows this list).
  6. Analyze architectural approaches. Based upon the high-priority factors identified in Step 5, the architectural approaches that address those factors are elicited and analyzed (for example, an architectural approach aimed at meeting performance goals will be subjected to a performance analysis). During this step architectural risks, sensitivity points, and tradeoff points are identified.
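As a toy illustration of Step 5's output (our own sketch; the field names and H/M/L ratings are assumptions, though rating utility-tree leaves by importance and difficulty is common ATAM practice):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    text: str        # stimulus, environment, and measurable response
    importance: str  # "H", "M", or "L", judged by the stakeholders
    difficulty: str  # "H", "M", or "L", judged by the architects

@dataclass
class QualityFactor:
    name: str                  # e.g., "performance", "modifiability"
    scenarios: list[Scenario]  # concrete leaves refining this factor

utility_tree = [
    QualityFactor("performance", [
        Scenario("graph is redrawn within one second of a layout change", "H", "M"),
    ]),
    QualityFactor("modifiability", [
        Scenario("operating system is upgraded in a particular way", "M", "H"),
    ]),
]

# The highest-priority leaves are the ones analyzed first in Step 6.
high_priority = [s for factor in utility_tree for s in factor.scenarios
                 if s.importance == "H"]
```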

Testing

  7. Brainstorm and prioritize scenarios. Based upon the exemplar scenarios generated in the utility tree step, a larger set of scenarios is elicited from the entire group of stakeholders. This set of scenarios is prioritized via a voting process involving the entire stakeholder group (a sketch of such a tally follows this list).
  8. Analyze architectural approaches. This step reiterates Step 6, but here the highly ranked scenarios from Step 7 are considered to be test cases for the analysis of the architectural approaches determined thus far. These test case scenarios may uncover additional architectural approaches, risks, sensitivity points, and tradeoff points, which are then documented.
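A minimal sketch of the Step 7 voting process (illustrative only; the ATAM does not prescribe a particular tallying scheme): each stakeholder spreads a fixed budget of votes over the brainstormed scenarios, and the totals produce the ranking.

```python
from collections import Counter

def prioritize(ballots: list[dict[str, int]]) -> list[tuple[str, int]]:
    """Tally one {scenario: votes} ballot per stakeholder; highest total first."""
    tally: Counter = Counter()
    for ballot in ballots:
        tally.update(ballot)  # Counter.update adds the vote counts
    return tally.most_common()

ballots = [
    {"unlock all doors within 1 s": 3, "switch caching to backup processor": 2},
    {"unlock all doors within 1 s": 1, "Web report within 5 s at peak": 4},
]
print(prioritize(ballots))
# [('unlock all doors within 1 s', 4), ('Web report within 5 s at peak', 4),
#  ('switch caching to backup processor', 2)]
```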

Reporting

  9. Present results. Based upon the information collected in the ATAM (styles, scenarios, attribute-specific questions, the utility tree, risks, sensitivity points, tradeoffs), the ATAM team presents the findings to the assembled stakeholders and potentially writes a report detailing this information along with any proposed mitigation strategies.
anitsh commented 2 years ago

Quality Attribute Characterizations

Evaluating an architectural design against quality attribute requirements necessitates a precise characterization of the quality attributes of concern. For example, understanding an architecture from the point of view of modifiability requires an understanding of how to measure or observe modifiability and an understanding of how various types of architectural decisions impact this measure. To use the wealth of knowledge that already exists in the various quality attribute communities, we have created characterizations for the quality attributes of performance, modifiability, and availability, and are working on characterizations for usability and security. These characterizations serve as starting points, which can be fleshed out further in preparation for or while conducting an ATAM.

Each quality attribute characterization is divided into three categories: external stimuli, architectural decisions, and responses.

External stimuli (or just stimuli for short) are the events that cause the architecture to respond or change.

To analyze an architecture for adherence to quality requirements, those requirements need to be expressed in terms that are concrete and measurable or observable. These measurable/observable quantities are described in the responses section of the attribute characterization.

Architectural decisions are those aspects of an architecture (components, connectors, and their properties) that have a direct impact on achieving attribute responses.

For example, the external stimuli for performance are events such as messages, interrupts, or user keystrokes that result in computation being initiated. Performance architectural decisions include processor and network arbitration mechanisms; concurrency structures including processes, threads, and processors; and properties including process priorities and execution times. Responses are characterized by measurable quantities such as latency and throughput.

For modifiability, the external stimuli are change requests to the system’s software. Architectural decisions include encapsulation and indirection mechanisms, and the response is measured in terms of the number of affected components, connectors, and interfaces and the amount of effort involved in changing these affected elements. Characterizations for performance, availability, and modifiability are given in Appendix A.
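Summarizing the two characterizations above as data (a sketch in our own notation, not the SEI's):

```python
# Performance and modifiability characterizations, restated from the text above.
PERFORMANCE = {
    "stimuli": ["messages", "interrupts", "user keystrokes"],
    "architectural_decisions": [
        "processor and network arbitration mechanisms",
        "concurrency structures: processes, threads, processors",
        "properties such as process priorities and execution times",
    ],
    "responses": ["latency", "throughput"],
}

MODIFIABILITY = {
    "stimuli": ["change requests to the system's software"],
    "architectural_decisions": ["encapsulation mechanisms", "indirection mechanisms"],
    "responses": [
        "number of affected components, connectors, and interfaces",
        "effort to change the affected elements",
    ],
}
```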

Our goal in presenting these attribute characterizations is not to claim that we have created an exhaustive taxonomy for each of the attributes, but rather to suggest a framework for thinking about quality attributes, one that we have found facilitates a reasoned and efficient inquiry to elicit the appropriate attribute-related information.

The attribute characterizations help to ensure attribute coverage as well as offering a rationale for asking elicitation questions. For example, irrespective of the style being analyzed we know that latency (a measure of response) is at least a function of the resources involved (e.g., CPUs, networks), how those resources are arbitrated (e.g., scheduling policies and priorities), and how heavily they are consumed (e.g., execution times).
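As a back-of-the-envelope decomposition (our own, not the paper's notation), for a single event served by one resource the latency splits along exactly these lines:

$$
L \;=\; \underbrace{W}_{\text{arbitration: time queued awaiting the resource}} \;+\; \underbrace{C}_{\text{consumption: execution time once scheduled}}
$$

with the waiting time $W$ growing sharply as the resource's utilization approaches 1.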

We know that these architectural resources must be designed so that they can ensure the appropriate response to a stimulus. Therefore, given a scenario such as “Unlock all of the car doors within one second of pressing the correct key sequence,” the performance characterization inspires questions such as

  - Is the one-second deadline a hard deadline (response)?
  - What are the consequences of not meeting the one-second requirement (response)?
  - What components are involved in responding to the event that initiates unlocking the door (architectural decisions)?
  - What are the execution times of those components (architectural decisions)?
  - Do the components reside on the same or different processors (architectural decisions)?
  - What happens if several “unlock the door” events occur quickly in succession (stimuli)?
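The answers to those questions enable exactly this kind of rough check (a sketch with hypothetical component names and execution times; only the scenario's one-second deadline comes from the text):

```python
# Worst-case execution times (ms) of the hypothetical components on the
# path that responds to the key-sequence event, assumed to run serially.
path_worst_case_ms = {
    "decode_key_sequence": 120,
    "authorization_check": 250,
    "door_actuator_commands": 300,
}
DEADLINE_MS = 1000  # "unlock all of the car doors within one second"

latency = sum(path_worst_case_ms.values())
verdict = "meets" if latency <= DEADLINE_MS else "misses"
print(f"worst-case latency {latency} ms {verdict} the {DEADLINE_MS} ms deadline")
```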

Examples of Performance-Related Questions

  - Are the servers single- or multi-threaded?
  - What is the location of firewalls and their impact on performance?
  - How are priorities assigned to processes?
  - What information is cached versus regenerated? Based upon what principles?
  - How are processes allocated to hardware?
  - What is the physical location of the hardware and its connectivity?
  - What are the bandwidth characteristics of the network?
  - How is queuing and prioritization done in the network?
  - Do you use a synchronous or an asynchronous protocol?
  - What is the impact of unicast or multicast broadcast protocols?
  - What is the performance impact of a thin versus a thick client?
  - How are resources allocated to service requests?
  - How do we characterize client loading (e.g., how many concurrent sessions, how many users)?
  - What are the performance characteristics of the middleware: load balancing, monitoring, reconfiguring services to resources?

These questions are inspired by the attribute characterizations and result from applying the characterization to the architecture being evaluated. For example, see the performance questions in Figure 1 and consider how these questions might have been inspired by the performance characterization in Figure 16 of Appendix A.

Other Examples of Attribute-Specific Questions

Modifiability:

Performance:

Availability:

anitsh commented 2 years ago

Scenarios

In a perfect world, the quality requirements for a system would be completely and unambiguously specified in a requirements document that is evolving ahead of or in concert with the architecture specification. In reality, requirements documents are not written, or are written poorly, or do not properly address quality attributes. In particular, we have found that quality attribute requirements for both existing and planned systems are missing, vague, or incomplete. Typically the first job of an architecture analysis is to precisely elicit the specific quality goals against which the architecture will be judged. The mechanism that we use for this elicitation is the scenario.

A scenario is a short statement describing an interaction of one of the stakeholders with the system. A user would describe using the system to perform some task; his scenarios would very much resemble use cases in object-oriented parlance. A maintainer would describe making a change to the system, such as upgrading the operating system in a particular way or adding a specific new function. A developer’s scenario might talk about using the architecture to build the system or predict its performance. A customer’s scenario might describe how the architecture is to be re-used for a second product in a product line.

Scenarios provide a vehicle for concretizing vague development-time qualities such as modifiability; they represent specific examples of current and future uses of a system. Scenarios are also useful in understanding run-time qualities such as performance or availability. This is because scenarios specify the kinds of operations over which performance needs to be measured, or the kinds of failures the system will have to withstand.

Different types of scenarios must be used to probe a system from different angles, maximizing the chances of surfacing architectural decisions at risk. In the ATAM we use three types of scenarios:

  1. Use case scenarios. These involve typical uses of the existing system and are used for information elicitation.
  2. Growth scenarios. These cover anticipated changes to the system.
  3. Exploratory scenarios. These cover extreme changes that are expected to “stress” the system.

Use case scenarios describe a user’s intended interaction with the completed, running system. For example,

  1. There is a radical course adjustment during weapon release (e.g., loft) that the software computes in 100 ms. (performance)
  2. The user wants to examine budgetary and actual data under different fiscal years without re-entering project data. (usability)
  3. A data exception occurs and the system notifies a defined list of recipients by e-mail and displays the offending conditions in red on data screens. (reliability)
  4. User changes graph layout from horizontal to vertical and graph is redrawn in one second. (performance)
  5. Remote user requests a database report via the Web during peak period and receives it within five seconds. (performance)
  6. The caching system will be switched to another processor when its processor fails, and will do so within one second. (reliability)

The above use case scenarios each express a specific stakeholder’s desires. Also, the stimulus and the response associated with the attribute are easily identifiable.

For example, in the first scenario, the radical course adjustment during weapon release is the stimulus, and the latency goal of 100 ms is called out as the important response measure.

For a scenario to be well-formed, it must be clear what the stimulus is, what the environmental conditions are, and what the measurable or observable manifestation of the response is.
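A minimal sketch (field names are our own) that enforces this well-formedness rule on scenario records:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    kind: str         # "use case", "growth", or "exploratory"
    stimulus: str     # e.g., "remote user requests a database report via the Web"
    environment: str  # e.g., "peak period"
    response: str     # measurable/observable, e.g., "report received within five seconds"

    def is_well_formed(self) -> bool:
        # Every field must be present and non-empty.
        return all((self.kind, self.stimulus, self.environment, self.response))

s = Scenario(kind="use case",
             stimulus="remote user requests a database report via the Web",
             environment="peak period",
             response="report received within five seconds")
assert s.is_well_formed()
```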