krizhanovsky opened this issue 6 years ago
~~In a typical service mesh there are huge communication overheads: microservice <-> Envoy <-> Envoy <-> microservice. All the communications are done over TLS, and there are 6 context switches and full TCP/IP processing. Envoy provides UNIX sockets, but the communications are still very expensive. Given the trend of replacing monolithic applications with microservices, the performance penalty is unacceptable for many applications, so the microservice architecture could be even more sensitive to performance than the typical edge case for Tempesta FW.~~
The recent questionnaire revealed that most companies, even those with a service mesh, don't experience issues with network I/O. Most of the respondents balance between microservices and a monolith, with about 5 services involved in processing a request. On the other hand, Kubernetes relies on third-party network I/O plugins and provides poor HTTP/2 load balancing. Several respondents mentioned network I/O issues in Kubernetes, but all the cases were about the K8S infrastructure, not about Linux network I/O, data copies, TLS, or data serialization.
The defence in depth principle requires a protection layer between containers running separate microservices, even when they are deployed in the same security perimeter and inside the same hardware server in a private cloud.
Need to implement a functional test and run it in CI with 3 containers running microservices communicating via HTTP. Tempesta FW must run in the host system and, in collaboration with nft, enforce HTTP communication rules for the containers:

1. URI prefix `/foo`,
2. or PUT with URI prefix `/bar` and a header `X-Bar: bar`.

If HTTP tables can't express the 2nd rule, then an enhancement issue must be created.
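A rough sketch of what the Tempesta FW side of such a test setup could look like, assuming `http_chain` rules in the documented HTTP tables syntax. The container addresses, group and vhost names are hypothetical, and the 2nd rule is exactly the questionable case: as documented, each `http_chain` rule matches a single field, so the compound condition (PUT + URI prefix + header) may not be expressible in one rule and would then justify the enhancement issue:

```
listen 80;

# Hypothetical backend containers for the test.
srv_group foo_svc { server 172.17.0.2:8000; }
srv_group bar_svc { server 172.17.0.3:8000; }

vhost foo_vh { proxy_pass foo_svc; }
vhost bar_vh { proxy_pass bar_svc; }

http_chain {
    # Rule 1: requests with URI prefix /foo.
    uri == "/foo*" -> foo_vh;
    # Rule 2 would need to combine method, URI prefix and the
    # X-Bar header in a single condition; if HTTP tables only
    # match one field per rule, this is the enhancement case.
    uri == "/bar*" -> bar_vh;
    -> block;
}
```

The nft part of the test would then only permit container-to-container TCP traffic that passes through Tempesta FW on the host, so the containers cannot bypass the HTTP rules.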
Please create a new Wiki page for microservices covering protection, communication optimization scenarios, microservice caching, and load balancing, with appropriate configuration examples.