Closed ThuF closed 3 years ago
Use cases
- Cloud Messaging
  - Services (microservices, service mesh)
  - Event/Data Streaming (observability, analytics, ML/AI)
- Command and Control
  - IoT and Edge
  - Telemetry / Sensor Data / Command and Control
- Augmenting or Replacing Legacy Messaging Systems
Scenario
NATS is used as a publish/subscribe (pub/sub) message broker. The aim is to allow microservices to communicate via messages.
Sub / Pub example (NATS text protocol; the number after the subject in `PUB` and `MSG` lines is the payload size in bytes):

```
SUB foo 1      # Host 1 subscribes to subject "foo" with subscription id 1
PUB foo 11     # Host 2 publishes an 11-byte payload on "foo"
Hello World
MSG foo 1 11   # Host 1 receives the payload for subscription id 1
Hello World
```
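To make the framing above concrete, here is a minimal sketch (not a real NATS client) that builds the same protocol frames in Python. The function names are illustrative; a production system would use an official client library instead.

```python
# Sketch of the NATS text-protocol frames shown above.
# The numeric field before the payload is its length in bytes.

def pub_frame(subject: str, payload: bytes) -> bytes:
    """Client -> server: publish `payload` on `subject`."""
    return b"PUB %s %d\r\n%s\r\n" % (subject.encode(), len(payload), payload)

def msg_frame(subject: str, sid: str, payload: bytes) -> bytes:
    """Server -> subscriber: deliver `payload` for subscription id `sid`."""
    return b"MSG %s %s %d\r\n%s\r\n" % (subject.encode(), sid.encode(),
                                        len(payload), payload)

payload = b"Hello World"          # 11 bytes, matching "PUB foo 11"
print(pub_frame("foo", payload))
print(msg_frame("foo", "1", payload))
```

This is only meant to show why the example lines carry the number 11: it is `len(b"Hello World")`, not an identifier.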
Alternatives
Cilium, Consul, Linkerd, Conduit, Kuma
Project | Client Languages and Platforms |
---|---|
NATS | Core NATS: 48 known client types, 11 supported by maintainers, 18 contributed by the community. NATS Streaming: 7 client types supported by maintainers, 4 contributed by the community. NATS servers can be compiled on architectures supported by Golang. NATS provides binary distributions. |
gRPC | 13 client languages. |
Kafka | 18 client types supported across the community and by Confluent. Kafka servers can run on platforms supporting java; very wide support. |
Pulsar | 7 client languages, 5 third-party clients - tested on macOS and Linux. |
Rabbit | At least 10 client platforms that are maintainer-supported, with over 50 community-supported client types. Servers are supported on Linux and Windows. |
Project | Supported Patterns |
---|---|
NATS | Streams and Services through built-in publish/subscribe, request/reply, and load-balanced queue subscriber patterns. Dynamic request permissioning and request subject obfuscation is supported. |
gRPC | One service, which may have streaming semantics, per channel. Load Balancing for a service can be done either client-side or by using a proxy. |
Kafka | Streams through publish/subscribe. Load balancing can be achieved with consumer groups. Application code must correlate requests with replies over multiple topics for a service (request/reply) pattern. |
Pulsar | Streams through publish/subscribe. Multiple competing consumer patterns support load balancing. Application code must correlate requests with replies over multiple topics for a service (request/reply) pattern. |
Rabbit | Streams through publish/subscribe, and services with a direct reply-to feature. Load balancing can be achieved with a Work Queue. Applications must correlate requests with replies over multiple topics for a service (request/reply) pattern. |
Project | Quality of Service / Guarantees |
---|---|
NATS | At most once and at least once; exactly once is available in JetStream. |
gRPC | At most once. |
Kafka | At least once, exactly once. |
Pulsar | At most once, at least once, and exactly once. |
Rabbit | At most once, at least once. |
Project | Multi-tenancy Support |
---|---|
NATS | NATS supports true multi-tenancy and decentralized security through accounts and defining shared streams and services. |
gRPC | N/A |
Kafka | Multi-tenancy is not supported. |
Pulsar | Multi-tenancy is implemented through tenants; built-in data sharing across tenants is not supported. Each tenant can have its own authentication and authorization scheme. |
Rabbit | Multi-tenancy is supported with vhosts; data sharing is not supported. |
Project | Authentication |
---|---|
NATS | NATS supports TLS, NATS credentials, NKEYS (NATS ED25519 keys), username and password, or simple token. |
gRPC | TLS, ALT, Token, channel and call credentials, and a plug-in mechanism. |
Kafka | Supports Kerberos and TLS. Supports JAAS and an out-of-box authorizer implementation that uses ZooKeeper to store connection and subject. |
Pulsar | TLS Authentication, Athenz, Kerberos, JSON Web Token Authentication. |
Rabbit | TLS, SASL, username and password, and pluggable authorization. |
Project | Authorization |
---|---|
NATS | Account limits including number of connections, message size, number of imports and exports. User-level publish and subscribe permissions, connection restrictions, CIDR address restrictions, and time of day restrictions. |
gRPC | Users can configure call credentials to authorize fine-grained individual calls on a service. |
Kafka | Supports JAAS, ACLs for a rich set of Kafka resources including topics, clusters, groups, and others. |
Pulsar | Permissions may be granted to specific roles for lists of operations such as produce and consume. |
Rabbit | ACLs dictate permissions for configure, write, and read operations on resources like exchanges, queues, transactions, and others. Authentication is pluggable. |
Project | Message Retention and Persistence Support |
---|---|
NATS | Supports memory, file, and database persistence. Messages can be replayed by time, count, or sequence number, and durable subscriptions are supported. With NATS streaming, scripts can archive old log segments to cold storage. |
gRPC | N/A |
Kafka | Supports file-based persistence. Messages can be replayed by specifying an offset, and durable subscriptions are supported. Log compaction is supported as well as KSQL. |
Pulsar | Supports tiered storage including file, Amazon S3 or Google Cloud Storage (GCS). Pulsar can replay messages from a specific position and supports durable subscriptions. Pulsar SQL and topic compaction is supported, as well as Pulsar functions. |
Rabbit | Supports file-based persistence. Rabbit has queue-based semantics (vs. log), so no message replay is available. |
Project | HA and FT Support |
---|---|
NATS | Core NATS supports full mesh clustering with self-healing features to provide high availability to clients. NATS streaming has warm failover backup servers with two modes (FT and full clustering). JetStream supports horizontal scalability with built-in mirroring. |
gRPC | N/A. gRPC relies on external resources for HA/FT. |
Kafka | Fully replicated cluster members are coordinated via Zookeeper. |
Pulsar | Pulsar supports clustered brokers with geo-replication. |
Rabbit | Clustering Support with full data replication via federation plugins. Clusters require low-latency networks where network partitions are rare. |
Project | Supported Deployment Models |
---|---|
NATS | The NATS network element (server) is a small static binary that can be deployed anywhere from large instances in the cloud to resource constrained devices like a Raspberry PI. NATS supports the Adaptive Edge architecture which allows for large, flexible deployments. Single servers, leaf nodes, clusters, and superclusters (cluster of clusters) can be combined in any fashion for an extremely flexible deployment amenable to cloud, on-premise, edge and IoT. Clients are unaware of topology and can connect to any NATS server in a deployment. |
gRPC | gRPC is point to point and does not have a server or broker to deploy or manage, but always requires additional pieces for production deployments. |
Kafka | Kafka supports clustering with mirroring to loosely coupled remote clusters. Clients are tied to partitions defined within clusters. Kafka servers require a JVM, eight cores, 64 GB to 128 GB of RAM, two or more 8-TB SAS/SSD disks, and a 10-Gig NIC. |
Pulsar | Pulsar supports clustering and built-in geo-replication between clusters. Clients may connect to any cluster with an appropriately configured tenant and namespace. Pulsar requires a JVM and at least 6 Linux machines or VMs: 3 running ZooKeeper and 3 running a Pulsar broker and a BookKeeper bookie. |
Rabbit | Rabbit supports clusters and cross-cluster message propagation through a federation plugin. Clients are unaware of topology and may connect to any cluster. The server requires the Erlang VM and dependencies. |
Project | Monitoring Tooling |
---|---|
NATS | NATS supports exporting monitoring data to Prometheus and has Grafana dashboards to monitor and configure alerts. There are also development monitoring tools such as nats-top. Robust side car deployment or a simple connect-and-view model with NATS surveyor is supported. |
gRPC | External components such as a service mesh are required to monitor gRPC. |
Kafka | Kafka has a number of management tools and consoles including Confluent Control Center, Kafka, Kafka Web Console, Kafka Offset Monitor. |
Pulsar | CLI tools, per-topic dashboards, and third-party tools. |
Rabbit | CLI tools, a plugin-based management system with dashboards and third-party tools. |
Project | Management Tooling |
---|---|
NATS | NATS separates operations from security. User and Account management in a deployment may be decentralized and managed through a CLI. Server (network element) configuration is separated from security with a command line and configuration file which can be reloaded with changes at runtime. |
gRPC | External components such as a service mesh are required to manage gRPC. |
Kafka | Kafka has a number of management tools and consoles including Confluent Control Center, Kafka, Kafka Web Console, Kafka Offset Monitor. |
Pulsar | CLI tools, per-topic dashboards, and third-party tools. |
Rabbit | CLI tools, a plugin-based management system with dashboards and third-party tools. |
Project | Built-in and Third Party Integrations |
---|---|
NATS | NATS supports WebSockets, a Kafka bridge, an IBM MQ Bridge, a Redis Connector, Apache Spark, Apache Flink, CoreOS, Elastic, Elasticsearch, Prometheus, Telegraf, Logrus, Fluent Bit, Fluentd, OpenFAAS, HTTP, and MQTT (coming soon), and more. |
gRPC | There are a number of third-party integrations including HTTP, JSON, Prometheus, Grift, and others. |
Kafka | Kafka has a large number of integrations in its ecosystem, including stream processing (Storm, Samza, Flink), Hadoop, database (JDBC, Oracle Golden Gate), Search and Query (ElasticSearch, Hive), and a variety of logging and other integrations. |
Pulsar | Pulsar has many integrations, including ActiveMQ, Cassandra, Debezium, Flume, Elasticsearch, Kafka, Redis, and others. |
Rabbit | RabbitMQ has many plugins, including protocols (MQTT, STOMP), WebSockets, and various authorization and authentication plugins. |
Mature

As modern systems continue to evolve, utilize more components, and process more data, supporting patterns beyond 1:1 communication, with addressing and discovery not tied to DNS, is critical. Foundational technologies like NATS promise the most return on investment. Incumbent technologies will not work as modern systems unify cloud, edge, and IoT; NATS does.
Installation Methods
Installing via Docker

```shell
> docker pull nats:latest
latest: Pulling from library/nats
Digest: sha256:0c98cdfc4332c0de539a064bfab502a24aae18ef7475ddcc7081331502327354
Status: Image is up to date for nats:latest
docker.io/library/nats:latest
```
Run NATS on Docker:

```shell
> docker run -p 4222:4222 -ti nats:latest
[1] 2019/05/24 15:42:58.228063 [INF] Starting nats-server version #.#.#
[1] 2019/05/24 15:42:58.228115 [INF] Git commit [#######]
[1] 2019/05/24 15:42:58.228201 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/05/24 15:42:58.228740 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/05/24 15:42:58.228765 [INF] Server is ready
[1] 2019/05/24 15:42:58.229003 [INF] Listening for route connections on 0.0.0.0:6222
```
Installing via a Package Manager

On Windows:

```shell
> choco install nats-server
```

On macOS:

```shell
> brew install nats-server
```
Test the installation:

```shell
> nats-server
[41634] 2019/05/13 09:42:11.745919 [INF] Starting nats-server version 2.0.0
[41634] 2019/05/13 09:42:11.746240 [INF] Listening for client connections on 0.0.0.0:4222
...
[41634] 2019/05/13 09:42:11.746249 [INF] Server id is NBNYNR4ZNTH4N2UQKSAAKBAFLDV3PZO4OUYONSUIQASTQT7BT4ZF6WX7
[41634] 2019/05/13 09:42:11.746252 [INF] Server is ready
```
Installing From the Source

```shell
> GO111MODULE=on go get github.com/nats-io/nats-server/v2
```
Installing on Kubernetes with NATS Operator

```yaml
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "2.1.8"
```

```shell
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml
$ kubectl create ns nats-io
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-operator
  namespace: nats-io
spec:
  (...)
    spec:
      containers:
        - name: nats-operator
          (...)
          args:
            - nats-operator
            - --feature-gates=ClusterScoped=true
          (...)
```

```shell
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml
```
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "1.3.0"
EOF
```
```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  version: "1.3.0"
  tls:
    serverSecret: "nats-clients-tls"
    routesSecret: "nats-routes-tls"
```
Installing on Kubernetes with script

```shell
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh
```
Install NATS on Kyma
When you install Kyma, you can add the eventing component.

```yaml
apiVersion: "installer.kyma-project.io/v1alpha1"
kind: Installation
metadata:
  name: kyma-installation
  namespace: default
  labels:
    action: install
    kyma-project.io/installation: ""
spec:
  version: "__VERSION__"
  url: "__URL__"
  profile: ""
  components:
    - name: "cluster-essentials"
      namespace: "kyma-system"
    - name: "eventing"
      namespace: "kyma-system"
```
NATS Controllers for Kubernetes (NACK)

JetStream Controller

The JetStream controller allows you to manage NATS JetStream Streams and Consumers via Kubernetes CRDs.
```shell
# Creates cluster of NATS Servers that are not JetStream enabled
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/simple-nats.yml
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/nats-js-leaf.yml
```
Now install the JetStream CRDs and Controller:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/crds.yml
customresourcedefinition.apiextensions.k8s.io/streams.jetstream.nats.io configured
customresourcedefinition.apiextensions.k8s.io/consumers.jetstream.nats.io configured
customresourcedefinition.apiextensions.k8s.io/streamtemplates.jetstream.nats.io configured

$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/rbac.yml
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/deployment.yml
```
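Once the CRDs and controller are installed, a JetStream stream can be declared as a Kubernetes object. The following is a minimal sketch: the `orders` name and subject are illustrative, and the exact `apiVersion` depends on the nack release in use.

```yaml
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: orders
spec:
  name: orders
  subjects: ["orders.*"]   # subjects captured by this stream
  storage: file            # persist messages on disk
  maxAge: 1h               # illustrative retention limit
```

Applying this with `kubectl apply -f` lets the controller create and reconcile the stream on the NATS cluster.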
### Knative Eventing Kafka
Knative/Eventing offers a consistent, Kubernetes-based platform for reliable, secure and scalable asynchronous event delivery from packaged or app-created event sources to on-cluster and off-cluster consumers.
The Knative Eventing mission focuses on three areas: developer experience, event delivery, and event sourcing. The primary focus is the developer experience around event-driven applications; event delivery and sourcing provide the infrastructure that enables that experience to scale to complex applications.
![diagram-Knative Eventing Functionality](https://user-images.githubusercontent.com/82384876/118359720-78678b00-b58d-11eb-90b2-18c67911511d.png)
Knative event broker (Controller)
A Broker works like a controller and provides a bucket of events which can be selected by attribute. It receives events and forwards them to subscribers defined by one or more matching Triggers. Since a Broker implements Addressable, event senders can submit events to the Broker by POSTing the event to the Broker's `status.address.url`.
![diagram-Page-8](https://user-images.githubusercontent.com/82384876/118360528-d2b61b00-b590-11eb-89a1-72c7594ab7d1.png)
![diagram-Page-8 (1)](https://user-images.githubusercontent.com/82384876/118361199-e5315400-b592-11eb-8df5-7ed8b709d0fb.png)
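The POST described above carries the event in CloudEvents "binary" content mode: the event context travels in `ce-*` HTTP headers and the data in the body. As a sketch (the broker URL below is a typical in-cluster address for the multi-tenant channel-based Broker and is an assumption, as are the event type and ids), an event submission can be built like this:

```python
# Sketch: build a CloudEvents binary-mode POST request for a Knative Broker.
import json
import urllib.request

def make_broker_request(broker_url, event_id, event_type, source, data):
    """Build a POST request whose ce-* headers carry the CloudEvents context."""
    headers = {
        "Ce-Id": event_id,            # unique id of this event
        "Ce-Specversion": "1.0",      # CloudEvents spec version
        "Ce-Type": event_type,        # consumer Triggers filter on this
        "Ce-Source": source,          # logical origin of the event
        "Content-Type": "application/json",
    }
    body = json.dumps(data).encode("utf-8")
    return urllib.request.Request(broker_url, data=body, headers=headers,
                                  method="POST")

req = make_broker_request(
    "http://broker-ingress.knative-eventing.svc.cluster.local/default/default",
    "say-hello-1", "greeting", "/sender", {"msg": "Hello Knative!"})
# urllib.request.urlopen(req)  # would deliver the event from inside the cluster
```

The actual send is commented out because the URL is only reachable from within the cluster; the point is the shape of the request that Triggers match against.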
**Emitting Events using Ping Source**
```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
  name: default
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-default-channel
    namespace: knative-eventing
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dlq-service
```
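A PingSource is one way to emit events into the Broker above. The following is a sketch only: the name, schedule, and payload are illustrative, and the `apiVersion` depends on the Knative release.

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"            # cron schedule: once per minute
  contentType: "application/json"
  data: '{"message": "Hello from PingSource!"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default                  # the Broker that receives the events
```

The `sink` reference routes every generated event to the `default` Broker, where matching Triggers pick it up.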
@krasimirdermendzhiev should we consider the research finished and close the issue?
I pre-selected and further researched three different event management systems: Kafka, NATS, and RabbitMQ. Below is a brief comparison of the three. Based on it, I find that NATS suits our needs best.
Features | Apache Kafka | RabbitMQ | NATS |
---|---|---|---|
In Kyma | No | No | Yes |
Messaging models supported | Message queue, Pub/sub | Message queue, Pub/sub | Message queue, Request-reply, Pub/sub |
Quality of Service | At least once, exactly once | At most once, at least once | At most once, at least once; exactly once is available in JetStream |
Multi-tenancy | Not supported | Supported with vhosts; data sharing is not supported | Supported, with decentralized security through accounts and defining shared streams and services |
Authentication | Kerberos and TLS; supports JAAS and an out-of-box authorizer implementation that uses ZooKeeper to store connections and subjects | TLS, SASL, username and password, and pluggable authorization | TLS, NATS credentials, NKEYS, username and password, or simple token |
Message Storage | Disk | In-memory/disk | In-memory/disk |
Distributed Units | Topics | Queues | Channels |
Throughput | High | Medium to High | High |
Research the Eventing component and potential usage in Kyma.
Components: