Mirror of https://github.com/taigrr/nats.docs, synced 2025-01-18 04:03:23 -08:00

GitBook: [master] 178 pages modified

Parent: 30e3cdc1fa
Commit: d21aed4887
@ -3,6 +3,7 @@
* [Introduction](README.md)
* [What's New!](whats_new.md)
* [NATS 2.0](whats_new_20.md)
* [Compare NATS](compare-nats.md)
* [FAQ](faq.md)

## NATS Concepts

compare-nats.md (new file, 136 lines)
@ -0,0 +1,136 @@
---
description: 'NATS Comparison to Kafka, Rabbit, gRPC, and others'
---

# Compare NATS
This feature comparison is a summary of a few of the major components in several of the popular messaging technologies of today. This is by no means an exhaustive list and each technology should be investigated thoroughly to decide which will work best for your implementation.

In this comparison, we will be featuring NATS, Apache Kafka, RabbitMQ, Apache Pulsar, and gRPC.
### Language and Platform Coverage

| Project | Client Languages and Platforms |
| :--- | :--- |
| **NATS** | Core NATS: 48 known client types, 11 supported by maintainers, 18 contributed by the community. NATS Streaming: 7 client types supported by maintainers, 4 contributed by the community. NATS servers can be compiled on architectures supported by Golang. NATS provides binary distributions. |
| **gRPC** | 13 client languages. |
| **Kafka** | 18 client types supported across the community and by Confluent. Kafka servers can run on platforms supporting Java; very wide support. |
| **Pulsar** | 7 client languages, 5 third-party clients - tested on macOS and Linux. |
| **Rabbit** | At least 10 client platforms that are maintainer-supported, with over 50 community-supported client types. Servers are supported on the following platforms: Linux, Windows NT. |
### Built-in Patterns

| Project | Supported Patterns |
| :--- | :--- |
| **NATS** | Streams and Services through built-in publish/subscribe, request/reply, and load-balanced queue subscriber patterns. Dynamic request permissioning and request subject obfuscation are supported. |
| **gRPC** | One service, which may have streaming semantics, per channel. Load balancing for a service can be done either client-side or by using a proxy. |
| **Kafka** | Streams through publish/subscribe. Load balancing can be achieved with consumer groups. Application code must correlate requests with replies over multiple topics for a service \(request/reply\) pattern. |
| **Pulsar** | Streams through publish/subscribe. Multiple competing consumer patterns support load balancing. Application code must correlate requests with replies over multiple topics for a service \(request/reply\) pattern. |
| **Rabbit** | Streams through publish/subscribe, and services with a direct reply-to feature. Load balancing can be achieved with a Work Queue. Applications must correlate requests with replies over multiple topics for a service \(request/reply\) pattern. |
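Several rows above note that on systems without built-in request/reply, application code must correlate requests with replies over multiple topics. A minimal, broker-agnostic sketch of that correlation-ID bookkeeping (in-memory queues stand in for topics; all names here are hypothetical, not any broker's API):

```python
import queue
import uuid

# Stand-ins for a "requests" topic and a "replies" topic on a broker
# that lacks a built-in request/reply pattern.
requests = queue.Queue()
replies = queue.Queue()

def send_request(payload):
    # Tag the request with a correlation ID so its reply can be matched later.
    corr_id = str(uuid.uuid4())
    requests.put({"corr_id": corr_id, "payload": payload})
    return corr_id

def serve_one():
    # A responder must echo the correlation ID back on the reply topic.
    req = requests.get()
    replies.put({"corr_id": req["corr_id"], "payload": req["payload"].upper()})

def await_reply(corr_id):
    # Consume replies until ours arrives; in a real system, replies for
    # other requesters would be dispatched to their own waiters.
    while True:
        rep = replies.get()
        if rep["corr_id"] == corr_id:
            return rep["payload"]

cid = send_request("hello")
serve_one()
print(await_reply(cid))  # HELLO
```

With NATS or RabbitMQ's direct reply-to, this bookkeeping is handled by the messaging layer itself.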
### Delivery Guarantees

| Project | Quality of Service / Guarantees |
| :--- | :--- |
| **NATS** | At most once; at least once and exactly once are available in JetStream. |
| **gRPC** | At most once. |
| **Kafka** | At least once, exactly once. |
| **Pulsar** | At most once, at least once, and exactly once. |
| **Rabbit** | At most once, at least once. |
### Multi-tenancy and Sharing

| Project | Multi-tenancy Support |
| :--- | :--- |
| **NATS** | NATS supports true multi-tenancy and decentralized security through accounts and defining shared streams and services. |
| **gRPC** | N/A |
| **Kafka** | Multi-tenancy is not supported. |
| **Pulsar** | Multi-tenancy is implemented through tenants; built-in data sharing across tenants is not supported. Each tenant can have its own authentication and authorization scheme. |
| **Rabbit** | Multi-tenancy is supported with vhosts; data sharing is not supported. |
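The account-based multi-tenancy and sharing described for NATS can be expressed directly in server configuration. A hedged sketch (account names, user credentials, and the `telemetry.>` subject are invented for illustration):

```
accounts {
  TENANT_A: {
    users: [ { user: a, password: a } ]
    # TENANT_A offers a stream of messages to other accounts.
    exports: [ { stream: "telemetry.>" } ]
  }
  TENANT_B: {
    users: [ { user: b, password: b } ]
    # TENANT_B subscribes to TENANT_A's exported stream.
    imports: [ { stream: { account: TENANT_A, subject: "telemetry.>" } } ]
  }
}
```

Each account is an isolated subject namespace; only the explicitly exported/imported subjects cross the boundary.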
### AuthN

| Project | Authentication |
| :--- | :--- |
| **NATS** | NATS supports TLS, NATS credentials, NKEYS \(NATS ED25519 keys\), username and password, or simple token. |
| **gRPC** | TLS, ALTS, token, channel and call credentials, and a plug-in mechanism. |
| **Kafka** | Supports Kerberos and TLS. Supports JAAS and an out-of-box authorizer implementation that uses ZooKeeper to store connection and subject information. |
| **Pulsar** | TLS Authentication, Athenz, Kerberos, JSON Web Token Authentication. |
| **Rabbit** | TLS, SASL, username and password, and pluggable authorization. |
### AuthZ

| Project | Authorization |
| :--- | :--- |
| **NATS** | Account limits including number of connections, message size, number of imports and exports. User-level publish and subscribe permissions, connection restrictions, CIDR address restrictions, and time of day restrictions. |
| **gRPC** | Users can configure call credentials to authorize fine-grained individual calls on a service. |
| **Kafka** | Supports JAAS and ACLs for a rich set of Kafka resources including topics, clusters, groups, and others. |
| **Pulsar** | Permissions may be granted to specific roles for lists of operations such as produce and consume. |
| **Rabbit** | ACLs dictate permissions for configure, write, and read operations on resources like exchanges, queues, transactions, and others. Authentication is pluggable. |
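The user-level publish and subscribe permissions mentioned for NATS look like the following in server configuration (user name, password, and subjects are placeholders, not a recommended setup):

```
authorization {
  users = [
    {
      user: "svc"
      password: "s3cret"          # placeholder credentials
      permissions: {
        # This user may only publish to the orders subject hierarchy
        publish: [ "orders.>" ]
        # ...and may only subscribe to its response subjects.
        subscribe: [ "orders.responses.*" ]
      }
    }
  ]
}
```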
### Message Retention and Persistence

| Project | Message Retention and Persistence Support |
| :--- | :--- |
| **NATS** | Supports memory, file, and database persistence. Messages can be replayed by time, count, or sequence number, and durable subscriptions are supported. With NATS Streaming, scripts can archive old log segments to cold storage. |
| **gRPC** | N/A |
| **Kafka** | Supports file-based persistence. Messages can be replayed by specifying an offset, and durable subscriptions are supported. Log compaction is supported, as is KSQL. |
| **Pulsar** | Supports tiered storage including file, Amazon S3 or Google Cloud Storage \(GCS\). Pulsar can replay messages from a specific position and supports durable subscriptions. Pulsar SQL and topic compaction are supported, as are Pulsar functions. |
| **Rabbit** | Supports file-based persistence. Rabbit supports queue-based semantics \(vs. log\), so no message replay is available. |
### High Availability and Fault Tolerance

| Project | HA and FT Support |
| :--- | :--- |
| **NATS** | Core NATS supports full mesh clustering with self-healing features to provide high availability to clients. NATS Streaming has warm failover backup servers with two modes \(FT and full clustering\). JetStream will support horizontal scalability with built-in mirroring. |
| **gRPC** | N/A. gRPC relies on external resources for HA/FT. |
| **Kafka** | Fully replicated cluster members are coordinated via ZooKeeper. |
| **Pulsar** | Pulsar supports clustered brokers with geo-replication. |
| **Rabbit** | Clustering support with full data replication via federation plugins. Clusters require low-latency networks where network partitions are rare. |
### Deployment

| Project | Supported Deployment Models |
| :--- | :--- |
| **NATS** | The NATS network element \(server\) is a small static binary that can be deployed anywhere from large instances in the cloud to resource constrained devices like a Raspberry Pi. NATS supports the Adaptive Edge architecture which allows for large, flexible deployments. Single servers, leaf nodes, clusters, and superclusters \(clusters of clusters\) can be combined in any fashion for an extremely flexible deployment amenable to cloud, on-premise, edge and IoT. Clients are unaware of topology and can connect to any NATS server in a deployment. |
| **gRPC** | gRPC is point to point and does not have a server or broker to deploy or manage, but always requires additional pieces for production deployments. |
| **Kafka** | Kafka supports clustering with mirroring to loosely coupled remote clusters. Clients are tied to partitions defined within clusters. Kafka servers require a JVM, eight cores, 64 GB to 128 GB of RAM, two or more 8-TB SAS/SSD disks, and a 10-Gig NIC. [_\(1\)_](compare-nats.md#references) |
| **Pulsar** | Pulsar supports clustering and built-in geo-replication between clusters. Clients may connect to any cluster with an appropriately configured tenant and namespace. Pulsar requires a JVM and at least 6 Linux machines or VMs: 3 running ZooKeeper, and 3 running a Pulsar broker and a BookKeeper bookie. [_\(2\)_](compare-nats.md#references) |
| **Rabbit** | Rabbit supports clusters and cross-cluster message propagation through a federation plugin. Clients are unaware of topology and may connect to any cluster. The server requires the Erlang VM and dependencies. |
### Monitoring

| Project | Monitoring Tooling |
| :--- | :--- |
| **NATS** | NATS supports exporting monitoring data to Prometheus and has Grafana dashboards to monitor and configure alerts. There are also development monitoring tools such as nats-top. Robust sidecar deployment or a simple connect-and-view model with NATS surveyor is supported. |
| **gRPC** | External components such as a service mesh are required to monitor gRPC. |
| **Kafka** | Kafka has a number of management tools and consoles including Confluent Control Center, Kafka, Kafka Web Console, and Kafka Offset Monitor. |
| **Pulsar** | CLI tools, per-topic dashboards, and third-party tools. |
| **Rabbit** | CLI tools, a plugin-based management system with dashboards, and third-party tools. |
### Management

| Project | Management Tooling |
| :--- | :--- |
| **NATS** | NATS separates operations from security. User and account management in a deployment may be decentralized and managed through a CLI. Server \(network element\) configuration is separated from security with a command line and configuration file which can be reloaded with changes at runtime. |
| **gRPC** | External components such as a service mesh are required to manage gRPC. |
| **Kafka** | Kafka has a number of management tools and consoles including Confluent Control Center, Kafka, Kafka Web Console, and Kafka Offset Monitor. |
| **Pulsar** | CLI tools, per-topic dashboards, and third-party tools. |
| **Rabbit** | CLI tools, a plugin-based management system with dashboards, and third-party tools. |
### Integrations

| Project | Built-in and Third Party Integrations |
| :--- | :--- |
| **NATS** | NATS supports WebSockets, a Kafka bridge, an IBM MQ Bridge, a Redis Connector, Apache Spark, Apache Flink, CoreOS, Elastic, Elasticsearch, Prometheus, Telegraf, Logrus, Fluent Bit, Fluentd, OpenFAAS, HTTP, MQTT \(coming soon\), and [more](https://nats.io/download/#connectors-and-utilities). |
| **gRPC** | There are a number of third-party integrations including HTTP, JSON, Prometheus, Grift, and others. [_\(3\)_](compare-nats.md#references) |
| **Kafka** | Kafka has a large number of integrations in its ecosystem, including stream processing \(Storm, Samza, Flink\), Hadoop, database \(JDBC, Oracle Golden Gate\), search and query \(ElasticSearch, Hive\), and a variety of logging and other integrations. |
| **Pulsar** | Pulsar has many integrations, including ActiveMQ, Cassandra, Debezium, Flume, Elasticsearch, Kafka, Redis, and others. |
| **Rabbit** | RabbitMQ has many plugins, including protocols \(MQTT, STOMP\), WebSockets, and various authorization and authentication plugins. |
### References

1. [https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.1.0/bk\_planning-your-deployment/content/ch\_hardware-sizing.html](https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.1.0/bk_planning-your-deployment/content/ch_hardware-sizing.html#:~:text=Kafka%20Broker%20Node%3A%20eight%20cores,and%20a%2010%2D%20Gige%20Nic%20.&text=75%20MB%2Fsec%20per%20node,therefore%2010GB%20Nic%20is%20required%20)
2. [https://pulsar.apache.org/docs/v1.21.0-incubating/deployment/cluster/](https://pulsar.apache.org/docs/v1.21.0-incubating/deployment/cluster/)
3. [https://github.com/grpc-ecosystem](https://github.com/grpc-ecosystem)
@ -91,16 +91,16 @@ let nc = await connect({

natsConnection *conn = NULL;
natsOptions *opts = NULL;
natsStatus s = NATS_OK;

s = natsOptions_Create(&opts);
if (s == NATS_OK)
    // Set the timeout to 10 seconds (10,000 milliseconds)
    s = natsOptions_SetTimeout(opts, 10000);
if (s == NATS_OK)
    s = natsConnection_Connect(&conn, opts);

(...)

// Destroy objects that were created
natsConnection_Destroy(conn);
natsOptions_Destroy(opts);
@ -272,7 +272,7 @@ public class SlowConsumerListener {

static void
errorCB(natsConnection *conn, natsSubscription *sub, natsStatus s, void *closure)
{
    // Do something
    printf("Error: %d - %s", s, natsStatus_GetText(s));
}
@ -268,7 +268,6 @@ natsConnection_Destroy(conn);

natsOptions_Destroy(opts);
```
{% endtab %}

{% endtabs %}

The mechanics of drain for a subscription are simpler:
@ -90,6 +90,7 @@ let nc = await connect({

nc.close();
```
{% endtab %}

{% tab title="C" %}
```c
natsConnection *conn = NULL;
@ -172,6 +172,5 @@ if (s == NATS_OK)

natsConnection_Destroy(conn);
```
{% endtab %}

{% endtabs %}
@ -58,8 +58,6 @@ For NATS Streaming, it is actually recommended to use the Fault Tolerance mode a

Both NATS and NATS Streaming have officially supported Helm charts as well:

* [NATS Helm Chart](https://github.com/nats-io/k8s/tree/master/helm/charts/nats)
* [NATS Streaming Helm Chart](https://github.com/nats-io/k8s/tree/master/helm/charts/stan)
@ -1,4 +1,4 @@
# NATS Streaming Cluster with FT Mode

## Preparation
@ -381,3 +381,4 @@ $ kubectl logs stan-0 -c stan

[1] 2019/12/04 20:40:41.671541 [INF] STREAM: ----------------------------------
[1] 2019/12/04 20:40:41.671546 [INF] STREAM: Streaming Server is ready
```
@ -90,7 +90,7 @@ If other forms of credentials are used \(jwt, nkey or other\), then the server wi

| :--- | :--- |
| `url` | Leafnode URL \(URL protocol should be `nats-leaf`\). |
| `urls` | Leafnode URL array. Supports multiple URLs for discovery, e.g., `urls: [ "nats-leaf://host1:7422", "nats-leaf://host2:7422" ]` |
| `account` | [Account](../securing_nats/accounts.md) name or JWT public key identifying the local account to bind to this remote server. Any traffic locally on this account will be forwarded to the remote server. |
| `credentials` | Credential file for connecting to the leafnode server. |
| `tls` | A [TLS configuration](leafnode_conf.md#tls-configuration-block) block. Leafnode client will use specified TLS certificates when connecting/authenticating. |
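Putting the table's parameters together, a hedged example of a `remotes` entry (the host names, account name, and file paths are placeholders):

```
leafnodes {
  remotes = [
    {
      # Two URLs for discovery, as described for `urls` above.
      urls: [ "nats-leaf://host1:7422", "nats-leaf://host2:7422" ]
      # Local account bound to this remote; its traffic is forwarded.
      account: "ACME"
      credentials: "/path/to/leafnode.creds"
      tls: {
        cert_file: "/path/to/client-cert.pem"
        key_file: "/path/to/client-key.pem"
      }
    }
  ]
}
```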
@ -124,7 +124,6 @@ services:

networks:
  nats:
    name: nats
```

Now we use Docker Compose to create the cluster that will be using the `nats` network:
@ -8,19 +8,19 @@ It is also used to resolve the issue of not having direct client connections to

## Note - Important Things to Know About Reconnections

A common misunderstanding from users moving from NATS to NATS Streaming has to do with how reconnection works. It is important to understand how NATS Streaming relates to NATS "core". You can find some information in [NATS Streaming Concepts/Relation to NATS](relation-to-nats.md).

The NATS Streaming library uses the NATS library to connect to a NATS Server and indirectly communicates with the NATS Streaming "server". To better understand the issues, you should assume that the server has no direct connection with the client, and that the client may never lose its TCP connection to a NATS Server and yet not have access to a streaming server \(a streaming client connected to one NATS Server that is clustered to another, to which the NATS Streaming "server" is connected\).

When the low-level NATS TCP connection is broken, the reconnection logic triggers in the core NATS library. Once the connection is re-established, the low-level NATS subscriptions used by the Streaming library to communicate with the streaming servers will be resent. All of that activity could have happened without the Streaming server knowing \(see the topology example described above\). For the server that is not a problem: if a message is delivered while the client is disconnected, the server won't receive an ACK and will redeliver that message after the AckWait interval.

It is worth noting that a frequent mistake made by new users is to run the NATS Streaming server with the memory store, which is the default if no persistence mode is specified, storing its state in memory. This means that if the streaming server is stopped, all state is lost. On server restart, since no connection information is recovered, running applications will stop receiving messages and newly published messages will be rejected with an `invalid publish request` error. Client libraries that support and set the `Connection Lost` handler \(refer to [connection status](https://github.com/nats-io/stan.go#connection-status) for more information\) will be notified that the connection is lost with the error `client has been replaced or is no longer registered`.

To maintain the streaming connection \(a better name may have been "session"\), both server and clients send heartbeats/PINGs. If the server misses a configured number of heartbeats from the client, it will close the connection, which also means deleting all non-durable subscriptions. If the client was "network partitioned" from the server when that happened, even after the partition is resolved, the client would not know what happened. Again, to understand how that is possible, see the topology example above: the network partition happened between the two clustered NATS Servers, and no TCP connection between the streaming client and/or streaming server was ever lost.

To solve that, the client sends PINGs to the server and, if enough of them go unanswered, will close the connection and report it as lost through the ConnectionLost handler. See [connection status](https://github.com/nats-io/stan.go#connection-status) for more details. In the case of the network partition example above, even if the client's number of PINGs has not reached its maximum count when the partition is resolved, the server will respond with an error to the next PING going through because it will detect that this specific client instance has been closed due to timeouts.

When that point is reached, the connection **and its subscriptions** are no longer valid and need to be recreated by the user.

The client-to-server PINGs are by default set to pretty aggressive values and should likely be increased in a normal setup. That is, with the default values, the client would only tolerate a server being down/not responding for 15 seconds or so. Users should adjust the `Pings()` option, deciding how long they are willing to have their application unable to communicate with a server without "knowing", versus declaring the connection lost too soon \(and having to recreate the state: connection and all subscriptions\).
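The client-side PING accounting described above can be modeled in a few lines. This is a hypothetical sketch, not the stan.go implementation; the interval of 5 seconds and maximum of 3 unanswered PINGs are assumed values chosen to match the roughly 15-second window mentioned above:

```python
# Hypothetical model of the client-side PING bookkeeping described above:
# the client counts unanswered PINGs and declares the connection lost
# once the configured maximum is exceeded.
class PingTracker:
    def __init__(self, interval_secs, max_out):
        self.interval_secs = interval_secs
        self.max_out = max_out
        self.outstanding = 0

    def on_ping_sent(self):
        self.outstanding += 1
        # True means the connection should be reported as lost
        # through the ConnectionLost handler.
        return self.outstanding > self.max_out

    def on_pong_received(self):
        # Any server response clears the outstanding count.
        self.outstanding = 0

    def seconds_until_lost(self):
        # Worst-case silent window before the connection is declared lost.
        return self.max_out * self.interval_secs

t = PingTracker(5, 3)
print(t.seconds_until_lost())  # 15
```

Raising either value widens the window the application tolerates before recreating the connection and all subscriptions.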
@ -60,16 +60,16 @@ In general the configuration parameters are the same as the command line argumen

| sv | Enable trace logging | `true` or `false` | `sv: true` | `false` |
| nats\_server\_url | If specified, connects to an external NATS Server, otherwise starts an embedded one | NATS URL | `nats_server_url: "nats://localhost:4222"` | N/A |
| secure | If true, creates a TLS connection to the server but without the need to use TLS configuration \(no NATS Server certificate verification\) | `true` or `false` | `secure: true` | `false` |
| tls | TLS Configuration | Map: `tls: { ... }` | [**See details below**](cfgfile.md#tls-configuration) | |
| store\_limits | Store Limits | Map: `store_limits: { ... }` | [**See details below**](cfgfile.md#store-limits-configuration) | |
| file\_options | File Store specific options | Map: `file_options: { ... }` | [**See details below**](cfgfile.md#file-options-configuration) | |
| sql\_options | SQL Store specific options | Map: `sql_options: { ... }` | [**See details below**](cfgfile.md#sql-options-configuration) | |
| hb\_interval | Interval at which the server sends a heartbeat to a client | Duration | `hb_interval: "10s"` | `30s` |
| hb\_timeout | How long the server waits for a heartbeat response from the client before considering it a failed heartbeat | Duration | `hb_timeout: "10s"` | `10s` |
| hb\_fail\_count | Count of failed heartbeats before the server closes the client connection. The actual total wait is: \(fail count + 1\) \* \(hb interval + hb timeout\) | Number | `hb_fail_count: 2` | `10` |
| ft\_group | In Fault Tolerance mode, you can start a group of streaming servers with only one server being active while others are running in standby mode. This is the name of this FT group | String | `ft_group: "my_ft_group"` | N/A |
| partitioning | If set to true, a list of channels must be defined in the store\_limits/channels section. This section then serves two purposes: overriding limits for a given channel, or adding it to the partition | `true` or `false` | `partitioning: true` | `false` |
| cluster | Cluster Configuration | Map: `cluster: { ... }` | [**See details below**](cfgfile.md#cluster-configuration) | |
| encrypt | Specify if the server should encrypt messages \(only the payload\) when storing them | `true` or `false` | `encrypt: true` | `false` |
| encryption\_cipher | Cipher to use for encryption. Currently supports AES and CHACHA \(ChaChaPoly\). Defaults to AES | `AES` or `CHACHA` | `encryption_cipher: "AES"` | Depends on platform |
| encryption\_key | Encryption key. It is recommended to specify the key through the `NATS_STREAMING_ENCRYPTION_KEY` environment variable instead | String | `encryption_key: "mykey"` | N/A |
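The `hb_fail_count` row above gives the detection formula: total wait = \(fail count + 1\) \* \(hb interval + hb timeout\). A quick worked check using the documented defaults \(interval 30s, timeout 10s, fail count 10\):

```python
# Total time before the server closes a silent client connection,
# per the formula in the hb_fail_count row above.
def heartbeat_wait_seconds(hb_interval, hb_timeout, hb_fail_count):
    return (hb_fail_count + 1) * (hb_interval + hb_timeout)

# Documented defaults: (10 + 1) * (30 + 10) = 440 seconds (~7.3 minutes).
print(heartbeat_wait_seconds(30, 10, 10))  # 440
```

Lowering `hb_fail_count` or the interval shortens this window at the cost of more heartbeat traffic.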
@ -97,7 +97,7 @@ Note that the Streaming Server uses a connection to a NATS Server, and so the NA

| max\_bytes | Total size of messages per channel, 0 means unlimited | Number >= 0 | `max_bytes: 1GB` | 1GB |
| max\_age | How long messages can stay in the log | Duration | `max_age: "24h"` | Unlimited |
| max\_inactivity | How long without any subscription and any new message before a channel can be automatically deleted | Duration | `max_inactivity: "24h"` | Unlimited |
| channels | A map of channel names with specific limits | Map: `channels: { ... }` | [**See details below**](cfgfile.md#channels) | |

## Channels
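As a worked sketch of the per-channel override that the `channels` map provides (the channel name `orders` and its limits are invented for illustration):

```
store_limits: {
  # Global limits applying to every channel...
  max_age: "24h"
  channels: {
    # ...overridden for this one channel, which keeps messages longer.
    "orders": {
      max_msgs: 100000
      max_age: "48h"
    }
  }
}
```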
@ -166,6 +166,6 @@ For a given channel, the possible parameters are:

| :--- | :--- | :--- | :--- | :--- |
| driver | Name of the SQL driver to use | `mysql` or `postgres` | `driver: "mysql"` | N/A |
| source | How to connect to the database. This is driver specific | String | `source: "ivan:pwd@/nss_db"` | N/A |
| no\_caching | Enable/Disable caching for messages and subscriptions operations. | `true` or `false` | `no_caching: false` | `false` \(caching enabled\) |
| max\_open\_conns | Maximum number of opened connections to the database. Value <= 0 means no limit. | Number | `max_open_conns: 5` | unlimited |