Statement Coverage: Use this type of coverage to determine whether every statement in the program has been executed at least once.
Condition Coverage: Condition coverage analyzes the boolean sub-expressions of conditional statements in the source code, checking whether each has evaluated to both true and false.
Boundary Coverage: Boundary coverage examines whether code has been exercised at the boundary values of its conditions (for example, the minimum and maximum of a range).
Function Coverage: Function coverage determines whether all the functions of your code have been called during simulation.
Branch Coverage: Has each branch (also called a DD-path) of each control structure (such as an if or case statement) been executed? For example, given an if statement, have both the true and false branches been executed? Note that branch coverage is a subset of edge coverage.
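The coverage types above can be illustrated on a small example. The function below is made up for illustration; the comments note which calls are needed for statement, branch, and condition coverage.

```python
# Hypothetical function used to illustrate statement, branch and condition coverage.
def classify(x, y):
    if x > 0 and y > 0:            # condition coverage: x > 0 and y > 0 must each
        return "both positive"     # evaluate to true and to false across the tests
    else:
        return "not both positive"

# One call executes every statement on the path it takes, but full statement
# and branch coverage needs at least one call per branch:
print(classify(1, 1))    # true branch  -> both positive
print(classify(-1, 1))   # false branch -> not both positive
# Full condition coverage also needs y > 0 to be false on its own:
print(classify(1, -1))   # -> not both positive
```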
Traditionally web applications are big. You write one piece of software that runs on a server and answers requests in form of HTML, XML or JSON. If you want your web application to do something new, you add that functionality to the existing application. Such big systems are called "monolithic" (a monolith is a very big rock).
Monoliths are problematic because they usually grow in size and complexity over time. This becomes a problem when developing in a team. Developers keep adding new code to the system but can't change or re-use the existing code, because there are many dependencies between the code pieces. They are also afraid to remove old code because it might still be used somewhere.
Delivering such code to clients, e.g. by putting it on the internet, is called "deploying". Deploying, and the usual testing after deployment, is difficult because within a big system there are a lot of things that can break. Finding out what is going wrong and who should fix it is very difficult and requires people who know the whole system.
Another disadvantage is scalability. By that we mean: how can we serve more users at the same time? A single web server can only handle a certain number of users accessing it in parallel. Upgrading that server to better hardware lets it serve more users, but you will soon hit the limits of what is possible with hardware. This kind of upgrading is called vertical scaling. We could also put our web application on two or more servers, so that we can handle more users. This is called horizontal scaling. Monolithic applications are traditionally built only with vertical scaling in mind.
To simplify working with big applications, we can split them into smaller parts. Each part serves one particular purpose. We call such a part a "(web) service". These web services are very flexible to use. You can use them from within your existing monolithic application, either on the server side or on the client side. You can also have a web service that uses other web services.
The split into single web services allows you to loosely couple your application. This means that as a user of the service you only depend on the service being up, available and working. You no longer need to take care of its dependencies, its compilation, deployment or testing.
You can give that responsibility to a different developer or team. You can't break their web service because you do not access it through the source code. They can even use a different programming language and you could still use their service.
This independence is made possible by agreeing on a common format and common protocols (a protocol is a way of communicating). For web services the most popular formats are JSON and XML. The most widely used protocol is HTTP, because it is simple, well supported by all existing software, and your browser uses it too.
The word "micro" in "microservices" just emphasises the idea to make these web services as small as possible. If you need a more complex service, it is usually better to create a new service that depends on one or more others.
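As a minimal sketch of these ideas, here is a tiny JSON-over-HTTP service and a client that depends only on the protocol, never on the service's source code. It uses only Python's standard library; the service's purpose, route, and payload are made up for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "greeting" microservice: one small service, one purpose.
class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence request logging
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client could be written in any language: it only needs HTTP and JSON.
url = f"http://127.0.0.1:{server.server_port}/"
data = json.loads(urlopen(url).read())
print(data["greeting"])
server.shutdown()
```

Because the contract is just "HTTP in, JSON out", the service's implementation language, dependencies, and deployment are invisible to its callers.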
Developers practicing continuous integration merge their changes back to the main branch as often as possible. The developer's changes are validated by creating a build and running automated tests against the build. By doing so, you avoid the integration hell that usually happens when people wait for release day to merge their changes into the release branch. Continuous integration puts a great emphasis on testing automation to check that the application is not broken whenever new commits are integrated into the main branch.
Continuous delivery is an extension of continuous integration that makes sure you can release new changes to your customers quickly and in a sustainable way. This means that on top of having automated your testing, you have also automated your release process, and you can deploy your application at any point in time by clicking a button. In theory, with continuous delivery, you can decide to release daily, weekly, fortnightly, or whatever suits your business requirements. However, if you truly want to get the benefits of continuous delivery, you should deploy to production as early as possible, to make sure that you release small batches that are easy to troubleshoot in case of a problem.
Continuous deployment goes one step further than continuous delivery. With this practice, every change that passes all stages of your production pipeline is released to your customers. There is no human intervention, and only a failed test will prevent a new change from being deployed to production. Continuous deployment is an excellent way to accelerate the feedback loop with your customers and take pressure off the team, as there isn't a Release Day anymore. Developers can focus on building software, and they see their work go live minutes after they've finished working on it.
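The three practices differ mainly in how far the pipeline goes without a human. A hypothetical pipeline configuration (GitHub-Actions-style syntax; job names and the make targets are illustrative, not a real project's setup) might look like:

```yaml
# Hypothetical CI/CD pipeline: build and test run on every merge to main
# (continuous integration); the automatic deploy job makes it continuous
# deployment. For continuous delivery, the deploy job would instead wait
# for a manual button click.
name: pipeline
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build        # create a build from the merged changes
      - run: make test         # automated tests guard the main branch
  deploy:
    needs: build-and-test      # only a failed build or test blocks the release
    runs-on: ubuntu-latest
    steps:
      - run: make deploy       # no human intervention: every green build ships
```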
Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.
Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.
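A minimal sketch of the packaging idea Docker solves (the base image, file names, and entry point are hypothetical):

```dockerfile
# Hypothetical Dockerfile for a small Python web service: the image bundles
# the runtime, the dependencies, and the application code, so the result
# runs the same everywhere.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The command the container runs; app.py is a made-up entry point
CMD ["python", "app.py"]
```

Kubernetes would then take many such container images and decide where and how many copies of each to run.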
SOLID Principles
Single Responsibility Principle
Open Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle
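As a small sketch of the last principle in the list, Dependency Inversion: high-level code depends on an abstraction rather than on a concrete class. All class names here are made up for illustration.

```python
from abc import ABC, abstractmethod

# The abstraction: high-level code will depend only on this interface.
class MessageSender(ABC):
    @abstractmethod
    def send(self, text: str) -> str: ...

# Two interchangeable low-level implementations.
class EmailSender(MessageSender):
    def send(self, text: str) -> str:
        return f"email: {text}"

class SmsSender(MessageSender):
    def send(self, text: str) -> str:
        return f"sms: {text}"

class Notifier:
    # Notifier never names a concrete sender, so any implementation works
    # and new ones can be added without touching this class.
    def __init__(self, sender: MessageSender):
        self.sender = sender

    def notify(self, text: str) -> str:
        return self.sender.send(text)

print(Notifier(EmailSender()).notify("hi"))   # email: hi
print(Notifier(SmsSender()).notify("hi"))     # sms: hi
```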