mirror of https://github.com/taigrr/nats.docs synced 2025-01-18 04:03:23 -08:00

GitBook: [master] 60 pages and 12 assets modified

Ginger Collison 2021-03-15 14:08:37 +00:00 committed by gitbook-bot
parent 3429cc74ff
commit 0f1d9e01a8
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
59 changed files with 714 additions and 661 deletions

Six image assets modified (dimensions and sizes unchanged: 16 KiB, 19 KiB, 18 KiB, 15 KiB, 19 KiB, 53 KiB).

View File

@@ -110,42 +110,42 @@
 ## JetStream
-* [About Jetstream](jetstream/about_jetstream/jetstream.md)
+* [About Jetstream](jetstream/jetstream.md)
-* [Concepts](jetstream/concepts/concepts.md)
+* [Concepts](jetstream/concepts/README.md)
 * [Streams](jetstream/concepts/streams.md)
 * [Consumes](jetstream/concepts/consumers.md)
 * [Configuration](jetstream/concepts/configuration.md)
-* [Getting Started](jetstream/getting_started/getting_started.md)
+* [Getting Started](jetstream/getting_started/README.md)
 * [Using Docker](jetstream/getting_started/using_docker.md)
-* [Using Docker with NGS](jetstream/getting_started/using_docker.md#using-docker-with-ngs)
+* [Using Docker with NGS](jetstream/getting_started/using-docker-with-ngs.md)
 * [Using Source](jetstream/getting_started/using_source.md)
-* [Administration & Usage from CLI](jetstream/administration/administration.md)
+* [Administration & Usage from CLI](jetstream/administration/README.md)
 * [Account Information](jetstream/administration/account.md)
 * [Streams](jetstream/administration/streams.md)
 * [Consumers](jetstream/administration/consumers.md)
-* [Monitoring](jetstream/monitoring/monitoring.md)
+* [Monitoring](jetstream/monitoring.md)
-* [Data Replication](jetstream/data_replication/replication.md)
+* [Data Replication](jetstream/replication.md)
-* [Clustering](jetstream/clustering/clustering.md)
+* [Clustering](jetstream/clustering/README.md)
 * [Administration](jetstream/clustering/administration.md)
-* [Configuration Management](jetstream/configuration_mgmt/configuration_mgmt.md)
+* [Configuration Management](jetstream/configuration_mgmt/README.md)
-* [NATS Admin CLI](jetstream/configuration_mgmt/configuration_mgmt.md#nats-admin-cli)
+* [NATS Admin CLI](jetstream/configuration_mgmt/nats-admin-cli.md)
-* [Terraform](jetstream/configuration_mgmt/configuration_mgmt.md#terraform)
+* [Terraform](jetstream/configuration_mgmt/terraform.md)
 * [GitHub Actions](jetstream/configuration_mgmt/github_actions.md)
 * [Kubernetes Controller](jetstream/configuration_mgmt/kubernetes_controller.md)
-* [Disaser Recovery](jetstream/disaster_recovery/disaster_recovery.md)
+* [Disaser Recovery](jetstream/disaster_recovery.md)
-* [Model Deep Dive](jetstream/model_deep_dive/model_deep_dive.md)
+* [Model Deep Dive](jetstream/model_deep_dive/README.md)
-* [Stream Limits, Retention Modes and Discard Policy](jetstream/model_deep_dive/model_deep_dive.md#stream-limits-retention-modes-and-discard-policy)
+* [Stream Limits, Retention Modes and Discard Policy](jetstream/model_deep_dive/stream-limits-retention-modes-and-discard-policy.md)
-* [Message Deduplication](jetstream/model_deep_dive/model_deep_dive.md#message-deduplication)
+* [Message Deduplication](jetstream/model_deep_dive/message-deduplication.md)
-* [Acknowledgment Models](jetstream/model_deep_dive/model_deep_dive.md#acknowledgement-models)
+* [Acknowledgment Models](jetstream/model_deep_dive/acknowledgment-models.md)
-* [Exactly Once Delivery](jetstream/model_deep_dive/model_deep_dive.md#exactly-once-delivery)
+* [Exactly Once Delivery](jetstream/model_deep_dive/exactly-once-delivery.md)
-* [Consumer Starting Position](jetstream/model_deep_dive/model_deep_dive.md#consumer-starting-position)
+* [Consumer Starting Position](jetstream/model_deep_dive/consumer-starting-position.md)
-* [Ephemeral Consumers](jetstream/model_deep_dive/model_deep_dive.md#ephemeral-consumers)
+* [Ephemeral Consumers](jetstream/model_deep_dive/ephemeral-consumers.md)
-* [Consumer Message Rates](jetstream/model_deep_dive/model_deep_dive.md#consumer-message-rates)
+* [Consumer Message Rates](jetstream/model_deep_dive/consumer-message-rates.md)
-* [Stream Templates](jetstream/model_deep_dive/model_deep_dive.md#stream-templates)
+* [Stream Templates](jetstream/model_deep_dive/stream-templates.md)
-* [Ack Sampling](jetstream/model_deep_dive/model_deep_dive.md#ack-sampling)
+* [Ack Sampling](jetstream/model_deep_dive/ack-sampling.md)
-* [Storage Overhead](jetstream/model_deep_dive/model_deep_dive.md#storage-overhead)
+* [Storage Overhead](jetstream/model_deep_dive/storage-overhead.md)
-* [NATS API Reference](jetstream/nats_api_reference/nats_api_reference.md)
+* [NATS API Reference](jetstream/nats_api_reference.md)
-* [Multi-tenancy & Resource Mgmt](jetstream/multi-tenancy/resource_management.md)
+* [Multi-tenancy & Resource Mgmt](jetstream/resource_management.md)
 ## NATS Tools
@@ -244,3 +244,4 @@
 * [Using a Load Balancer for External Access to NATS](nats-on-kubernetes/nats-external-nlb.md)
 * [Creating a NATS Super Cluster in Digital Ocean with Helm](nats-on-kubernetes/super-cluster-on-digital-ocean.md)
 * [From Zero to K8S to Leafnodes using Helm](nats-on-kubernetes/from-zero-to-leafnodes.md)

View File

@@ -6,7 +6,7 @@ NATS Streaming is a service on top of NATS. To connect to the service you first
 Connecting to a streaming server requires a cluster id, defined by the server configuration, and a client ID defined by the client.
-_Client ID should contain only alphanumeric characters, `-` or `_`_
+_Client ID should contain only alphanumeric characters, `-` or \`_\`\_
 Connecting to a server running locally on the default port is as simple as this:
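The client ID rule quoted in this hunk can be checked before connecting; here is a minimal sketch (the helper is hypothetical, not part of any NATS tooling):

```shell
# Hypothetical pre-flight check: accept only the characters NATS Streaming
# allows in a client ID (alphanumerics, '-' and '_').
valid_client_id() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]+$'
}

valid_client_id "order-processor_1" && echo "ok"
valid_client_id "bad id!" || echo "rejected"
```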

faq.md
View File

@@ -161,7 +161,7 @@ The default setting for a single server is 65,536. Although there is no specifie
 Most systems can handle several thousand NATS connections per server without any changes although some have a very low default such as OS X. You'll want to look at kernel/OS settings to increase that limit. You'll also want to look at default TCP buffer sizes to best optimize your machine for your traffic characteristics.
-If you are using TLS you'll want to be sure the hardware can handle the CPU load created by TLS negotiation when there is the thundering herd of inbound connections after an outage or network partition event. This often overlooked factor is usually the constraint limiting the number of connections a single server should support. Choosing a cipher suite that is supported by TLS acceleration can mitigate this (e.g. AES with x86). Thinking of the entire system, you'll also want to look at a range of reconnect delay times or add reconnect jitter to the NATS clients to even out the distribution of connection attempts over time and reduce CPU spikes.
+If you are using TLS you'll want to be sure the hardware can handle the CPU load created by TLS negotiation when there is the thundering herd of inbound connections after an outage or network partition event. This often overlooked factor is usually the constraint limiting the number of connections a single server should support. Choosing a cipher suite that is supported by TLS acceleration can mitigate this \(e.g. AES with x86\). Thinking of the entire system, you'll also want to look at a range of reconnect delay times or add reconnect jitter to the NATS clients to even out the distribution of connection attempts over time and reduce CPU spikes.
 All said, each server can be tuned to handle a large number of clients, and given the flexibility and scalability of NATS with clusters, superclusters, and leaf nodes one can build a NATS deployment supporting many millions of connections.
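The reconnect-jitter idea mentioned in this FAQ hunk can be sketched as follows; this illustrates only the concept (clients typically expose it as connection options), and the values are illustrative:

```shell
# Illustration of jittered reconnect backoff: each client waits a base
# delay plus a random extra amount, spreading a thundering herd of
# reconnects out over time.
base_ms=2000    # base reconnect wait
jitter_ms=1000  # maximum random extra delay
delay_ms=$(( base_ms + RANDOM % jitter_ms ))
echo "reconnecting in ${delay_ms} ms"
```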

View File

@@ -1,31 +0,0 @@
-# JetStream
-JetStream was created to solve the problems identified with streaming in technology today - complexity, fragility, and a lack of scalability. Some technologies address these better than others, but no current streaming technology is truly multi-tenant, horizontally scalable, and supports multiple deployment models. No technology we are aware of can scale from edge to cloud under the same security context while having complete deployment observability for operations.
-## Goals
-JetStream was developed with the following goals in mind:
-- The system must be easy to configure and operate and be observable.
-- The system must be secure and operate well with NATS 2.0 security models.
-- The system must scale horizontally and be applicable to a high ingestion rate.
-- The system must support multiple use cases.
-- The system must self heal and always be available.
-- The system must have an API that is closer to core NATS.
-- The system must allow NATS messages to be part of a stream as desired.
-- The system must display payload agnostic behavior.
-- The system must not have third party dependencies.
-## High-Level Design and Features
-In terms of deployment, a JetStream server is simply a NATS server with the JetStream subsystem enabled, launched with the `-js` flag with a configured server name and cluster name. From a client perspective, it does not matter which servers are running JetStream so long as there is some route to a JetStream enabled server or servers. This allows for a flexible deployment which to optimize resources for particular servers that will store streams versus very low overhead stateless servers, reducing OpEx and ultimately creating a scalable and manageable system.
-## Feature List
-- At-least-once delivery; exactly once within a window
-- Store messages and replay by time or sequence
-- Wildcard support
-- Account aware
-- Data at rest encryption
-- Cleanse specific messages (GDPR)
-- Horizontal scalability
-- Persist Streams and replay via Consumers
-JetStream is designed to bifurcate ingestion and consumption of messages to provide multiple ways to consume data from the same stream. To that end, JetStream functionality is composed of server streams and server consumers.
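The page removed in this commit mentions enabling JetStream with the `-js` flag; the same can be done in a server configuration file. A sketch (the `jetstream` block and its options are from the nats-server config; the values here are illustrative, not a recommendation):

```text
# nats-server config fragment enabling JetStream (values are illustrative)
jetstream {
  store_dir: /data/jetstream
  max_memory_store: 1G
  max_file_store: 100G
}
```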

View File

@@ -1,8 +1,8 @@
-## Administration and Usage from the CLI
+# Administration & Usage from CLI
 Once the server is running it's time to use the management tool. This can be downloaded from the [GitHub Release Page](https://github.com/nats-io/natscli/releases/) or you can use the `natsio/nats-box:latest` docker image. On OS X homebrew can be used to install the latest version:
-```nohighlight
+```text
 $ brew tap nats-io/nats-tools
 $ brew install nats-io/nats-tools/nats
 $ nats --help
@@ -30,3 +30,4 @@ We'll walk through the above scenario and introduce features of the CLI and of J
 Throughout this example, we'll show other commands like `nats pub` and `nats sub` to interact with the system. These are normal existing core NATS commands and JetStream is fully usable by only using core NATS.
 We'll touch on some additional features but please review the section on the design model to understand all possible permutations.

View File

@@ -1,8 +1,10 @@
-### Account Information
+# Account Information
+## Account Information
 JetStream is multi-tenant so you will need to check that your account is enabled for JetStream and is not limited. You can view your limits as follows:
-```nohighlight
+```text
 $ nats account info
 Connection Information:
 Client ID: 8
@@ -20,13 +22,13 @@ JetStream Account Information:
 Max Consumers: unlimited
 ```
-### Streams
+## Streams
 The first step is to set up storage for our `ORDERS` related messages, these arrive on a wildcard of subjects all flowing into the same Stream and they are kept for 1 year.
-#### Creating
+### Creating
-```nohighlight
+```text
 $ nats str add ORDERS
 ? Subjects to consume ORDERS.*
 ? Storage backend file
@@ -64,31 +66,32 @@ Statistics:
 You can get prompted interactively for missing information as above, or do it all on one command. Pressing `?` in the CLI will help you map prompts to CLI options:
-```
-$ nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --replicas 3```
+```text
+$ nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --replicas 3
+```
 Additionally, one can store the configuration in a JSON file, the format of this is the same as `$ nats str info ORDERS -j | jq .config`:
-```
+```text
 $ nats str add ORDERS --config orders.json
 ```
-#### Listing
+### Listing
 We can confirm our Stream was created:
-```nohighlight
+```text
 $ nats str ls
 Streams:
 ORDERS
 ```
-#### Querying
+### Querying
 Information about the configuration of the Stream can be seen, and if you did not specify the Stream like below, it will prompt you based on all known ones:
-```nohighlight
+```text
 $ nats str info ORDERS
 Information for Stream ORDERS
@@ -114,7 +117,7 @@ Statistics:
 Most commands that show data as above support `-j` to show the results as JSON:
-```nohighlight
+```text
 $ nats str info ORDERS -j
 {
 "config": {
@@ -144,7 +147,7 @@ This is the general pattern for the entire `nats` utility as it relates to JetSt
 In clustered mode additional information will be included:
-```nohighlight
+```text
 $ nats str info ORDERS
 ...
 Cluster Information:
@@ -156,11 +159,11 @@ Cluster Information:
 Here the cluster name is configured as `JSC`, there is a server `S1` that's the current leader with `S3` and `S2` are replicas. Both replicas are current and have been seen recently.
-#### Copying
+### Copying
 A stream can be copied into another, which also allows the configuration of the new one to be adjusted via CLI flags:
-```nohighlight
+```text
 $ nats str cp ORDERS ARCHIVE --subjects "ORDERS_ARCVHIVE.*" --max-age 2y
 Stream ORDERS was created
@@ -174,11 +177,11 @@ Configuration:
 ...
 ```
-#### Editing
+### Editing
 A stream configuration can be edited, which allows the configuration to be adjusted via CLI flags. Here I have a incorrectly created ORDERS stream that I fix:
-```nohighlight
+```text
 $ nats str info ORDERS -j | jq .config.subjects
 [
 "ORDERS.new"
@@ -197,24 +200,24 @@ Configuration:
 Additionally one can store the configuration in a JSON file, the format of this is the same as `$ nats str info ORDERS -j | jq .config`:
-```
+```text
 $ nats str edit ORDERS --config orders.json
 ```
-#### Publishing Into a Stream
+### Publishing Into a Stream
 Now let's add in some messages to our Stream. You can use `nats pub` to add messages, pass the `--wait` flag to see the publish ack being returned.
 You can publish without waiting for acknowledgement:
-```nohighlight
+```text
 $ nats pub ORDERS.scratch hello
 Published [sub1] : 'hello'
 ```
 But if you want to be sure your messages got to JetStream and were persisted you can make a request:
-```nohighlight
+```text
 $ nats req ORDERS.scratch hello
 13:45:03 Sending request on [ORDERS.scratch]
 13:45:03 Received on [_INBOX.M8drJkd8O5otORAo0sMNkg.scHnSafY]: '+OK'
@@ -222,7 +225,7 @@ $ nats req ORDERS.scratch hello
 Keep checking the status of the Stream while doing this and you'll see it's stored messages increase.
-```nohighlight
+```text
 $ nats str info ORDERS
 Information for Stream ORDERS
 ...
@@ -237,11 +240,11 @@ Statistics:
 After putting some throw away data into the Stream, we can purge all the data out - while keeping the Stream active:
-#### Deleting All Data
+### Deleting All Data
 To delete all data in a stream use `purge`:
-```nohighlight
+```text
 $ nats str purge ORDERS -f
 ...
 Statistics:
@@ -253,19 +256,20 @@ Statistics:
 Active Consumers: 0
 ```
-#### Deleting A Message
+### Deleting A Message
 A single message can be securely removed from the stream:
-```nohighlight
+```text
 $ nats str rmm ORDERS 1 -f
 ```
-#### Deleting Sets
+### Deleting Sets
 Finally for demonstration purposes, you can also delete the whole Stream and recreate it so then we're ready for creating the Consumers:
-```
+```text
 $ nats str rm ORDERS -f
 $ nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1
 ```

View File

@@ -1,14 +1,14 @@
-### Consumers
+# Consumers
 Consumers is how messages are read or consumed from the Stream. We support pull and push-based Consumers and the example scenario has both, lets walk through that.
-#### Creating Pull-Based Consumers
+## Creating Pull-Based Consumers
 The `NEW` and `DISPATCH` Consumers are pull-based, meaning the services consuming data from them have to ask the system for the next available message. This means you can easily scale your services up by adding more workers and the messages will get spread across the workers based on their availability.
 Pull-based Consumers are created the same as push-based Consumers, just don't specify a delivery target.
-```
+```text
 $ nats con ls ORDERS
 No Consumers defined
 ```
@@ -17,7 +17,7 @@ We have no Consumers, lets add the `NEW` one:
 I supply the `--sample` options on the CLI as this is not prompted for at present, everything else is prompted. The help in the CLI explains each:
-```
+```text
 $ nats con add --sample 100
 ? Select a Stream ORDERS
 ? Consumer name NEW
@@ -48,7 +48,7 @@ State:
 Redelivered Messages: 0
 ```
-This is a pull-based Consumer (empty Delivery Target), it gets messages from the first available message and requires specific acknowledgement of each and every message.
+This is a pull-based Consumer \(empty Delivery Target\), it gets messages from the first available message and requires specific acknowledgement of each and every message.
 It only received messages that originally entered the Stream on `ORDERS.received`. Remember the Stream subscribes to `ORDERS.*`, this lets us select a subset of messages from the Stream.
@@ -56,21 +56,21 @@ A Maximum Delivery limit of 20 is set, this means if the message is not acknowle
 Again this can all be done in a single CLI call, lets make the `DISPATCH` Consumer:
-```
+```text
 $ nats con add ORDERS DISPATCH --filter ORDERS.processed --ack explicit --pull --deliver all --sample 100 --max-deliver 20
 ```
 Additionally, one can store the configuration in a JSON file, the format of this is the same as `$ nats con info ORDERS DISPATCH -j | jq .config`:
-```
+```text
 $ nats con add ORDERS MONITOR --config monitor.json
 ```
-#### Creating Push-Based Consumers
+## Creating Push-Based Consumers
 Our `MONITOR` Consumer is push-based, has no ack and will only get new messages and is not sampled:
-```
+```text
 $ nats con add
 ? Select a Stream ORDERS
 ? Consumer name MONITOR
@@ -101,21 +101,21 @@ State:
 Again you can do this with a single non interactive command:
-```
+```text
 $ nats con add ORDERS MONITOR --ack none --target monitor.ORDERS --deliver last --replay instant --filter ''
 ```
 Additionally one can store the configuration in a JSON file, the format of this is the same as `$ nats con info ORDERS MONITOR -j | jq .config`:
-```
+```text
 $ nats con add ORDERS --config monitor.json
 ```
-#### Listing
+## Listing
 You can get a quick list of all the Consumers for a specific Stream:
-```
+```text
 $ nats con ls ORDERS
 Consumers for Stream ORDERS:
@@ -124,11 +124,11 @@ Consumers for Stream ORDERS:
 NEW
 ```
-#### Querying
+## Querying
 All details for an Consumer can be queried, lets first look at a pull-based Consumer:
-```
+```text
 $ nats con info ORDERS DISPATCH
 Information for Consumer ORDERS > DISPATCH
@@ -154,13 +154,13 @@ State:
 More details about the `State` section will be shown later when discussing the ack models in depth.
-#### Consuming Pull-Based Consumers
+## Consuming Pull-Based Consumers
 Pull-based Consumers require you to specifically ask for messages and ack them, typically you would do this with the client library `Request()` feature, but the `nats` utility has a helper:
 First we ensure we have a message:
-```
+```text
 $ nats pub ORDERS.processed "order 1"
 $ nats pub ORDERS.processed "order 2"
 $ nats pub ORDERS.processed "order 3"
@@ -168,7 +168,7 @@ $ nats pub ORDERS.processed "order 3"
 We can now read them using `nats`:
-```
+```text
 $ nats con next ORDERS DISPATCH
 --- received on ORDERS.processed
 order 1
@@ -186,7 +186,7 @@ You can prevent ACKs by supplying `--no-ack`.
 To do this from code you'd send a `Request()` to `$JS.API.CONSUMER.MSG.NEXT.ORDERS.DISPATCH`:
-```
+```text
 $ nats req '$JS.API.CONSUMER.MSG.NEXT.ORDERS.DISPATCH' ''
 Published [$JS.API.CONSUMER.MSG.NEXT.ORDERS.DISPATCH] : ''
 Received [ORDERS.processed] : 'order 3'
@@ -194,11 +194,11 @@ Received [ORDERS.processed] : 'order 3'
 Here `nats req` cannot ack, but in your code you'd respond to the received message with a nil payload as an Ack to JetStream.
-#### Consuming Push-Based Consumers
+## Consuming Push-Based Consumers
 Push-based Consumers will publish messages to a subject and anyone who subscribes to the subject will get them, they support different Acknowledgement models covered later, but here on the `MONITOR` Consumer we have no Acknowledgement.
-```
+```text
 $ nats con info ORDERS MONITOR
 ...
 Delivery Subject: monitor.ORDERS
@@ -207,7 +207,7 @@ $ nats con info ORDERS MONITOR
 The Consumer is publishing to that subject, so lets listen there:
-```
+```text
 $ nats sub monitor.ORDERS
 Listening on [monitor.ORDERS]
 [#3] Received on [ORDERS.processed]: 'order 3'
@@ -217,3 +217,4 @@ Listening on [monitor.ORDERS]
 Note the subject here of the received message is reported as `ORDERS.processed` this helps you distinguish what you're seeing in a Stream covering a wildcard, or multiple subject, subject space.
 This Consumer needs no ack, so any new message into the ORDERS system will show up here in real time.
View File

@ -1,10 +1,10 @@
# Streams

The first step is to set up storage for our `ORDERS` related messages; these arrive on a wildcard of subjects, all flowing into the same Stream, and they are kept for 1 year.

## Creating

```text
$ nats str add ORDERS
? Subjects to consume ORDERS.*
? Storage backend file
...
```

@ -41,32 +41,32 @@ Statistics:

You can get prompted interactively for missing information as above, or do it all in one command. Pressing `?` in the CLI will help you map prompts to CLI options:

```text
nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --replicas 1
```

Additionally, one can store the configuration in a JSON file; the format is the same as `$ nats str info ORDERS -j | jq .config`:

```text
$ nats str add ORDERS --config orders.json
```
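For reference, such an `orders.json` might look like the sketch below. The field names follow the `nats str info ORDERS -j | jq .config` output, but they can vary by server version, so generate an authoritative template with that command rather than copying this:

```json
{
  "name": "ORDERS",
  "subjects": ["ORDERS.*"],
  "retention": "limits",
  "max_consumers": -1,
  "max_msgs": -1,
  "max_bytes": -1,
  "max_age": 31536000000000000,
  "max_msg_size": -1,
  "storage": "file",
  "discard": "old",
  "num_replicas": 1,
  "duplicate_window": 0
}
```

Note that `max_age` is expressed in nanoseconds, so `1y` becomes `31536000000000000`.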
## Listing

We can confirm our Stream was created:

```text
$ nats str ls
Streams:

        ORDERS
```

## Querying

Information about the configuration of the Stream can be seen, and if you did not specify the Stream like below, it will prompt you based on all known ones:

```text
$ nats str info ORDERS
Information for Stream ORDERS created 2021-02-27T16:49:36-07:00
...
```

@ -95,7 +95,7 @@ State:

Most commands that show data as above support `-j` to show the results as JSON:

```text
$ nats str info ORDERS -j
{
  "config": {
...
```

@ -129,11 +129,11 @@ $ nats str info ORDERS -j

This is the general pattern for the entire `nats` utility as it relates to JetStream: prompting for needed information, while every action can be run non-interactively, making it usable as a CLI API. All information output like seen above can be turned into JSON using `-j`.
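Because of this, scripting against JetStream needs no special tooling; any language with a JSON parser can consume the `-j` output. A sketch in Python, using an abbreviated, illustrative payload in the shape of `nats str info ORDERS -j`:

```python
import json

# Abbreviated, illustrative payload in the shape of `nats str info ORDERS -j`
payload = """
{
  "config": {"name": "ORDERS", "subjects": ["ORDERS.*"], "storage": "file"},
  "state": {"messages": 0, "bytes": 0}
}
"""

info = json.loads(payload)
print(info["config"]["subjects"])  # ['ORDERS.*']
print(info["state"]["messages"])   # 0
```

In practice you would pipe the real command output into such a script instead of embedding a sample payload.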
## Copying

A stream can be copied into another, which also allows the configuration of the new one to be adjusted via CLI flags:

```text
$ nats str cp ORDERS ARCHIVE --subjects "ORDERS_ARCHIVE.*" --max-age 2y
Stream ORDERS was created
...
```

@ -162,11 +162,11 @@ State:

```text
     Active Consumers: 0
```

## Editing

A stream configuration can be edited, which allows the configuration to be adjusted via CLI flags. Here I have an incorrectly created ORDERS stream that I fix:

```text
$ nats str info ORDERS -j | jq .config.subjects
[
  "ORDERS.new"
]
...
```

@ -185,24 +185,24 @@ Configuration:

Additionally, one can store the configuration in a JSON file; the format is the same as `$ nats str info ORDERS -j | jq .config`:

```text
$ nats str edit ORDERS --config orders.json
```
## Publishing Into a Stream

Now let's add some messages to our Stream. You can use `nats pub` to add messages; pass the `--wait` flag to see the publish ack being returned.

You can publish without waiting for acknowledgement:

```text
$ nats pub ORDERS.scratch hello
Published [sub1] : 'hello'
```

But if you want to be sure your messages got to JetStream and were persisted you can make a request:

```text
$ nats req ORDERS.scratch hello
13:45:03 Sending request on [ORDERS.scratch]
13:45:03 Received on [_INBOX.M8drJkd8O5otORAo0sMNkg.scHnSafY]: '+OK'
...
```

@ -210,7 +210,7 @@ $ nats req ORDERS.scratch hello

Keep checking the status of the Stream while doing this and you'll see its stored messages increase.

```text
$ nats str info ORDERS
Information for Stream ORDERS
...
```

@ -225,11 +225,11 @@ Statistics:

After putting some throwaway data into the Stream, we can purge all the data out, while keeping the Stream active:

## Deleting All Data

To delete all data in a stream use `purge`:

```text
$ nats str purge ORDERS -f
...
State:
...
```

@ -241,19 +241,20 @@ State:

```text
     Active Consumers: 0
```

## Deleting A Message

A single message can be securely removed from the stream:

```text
$ nats str rmm ORDERS 1 -f
```

## Deleting Sets

Finally, for demonstration purposes, you can also delete the whole Stream and recreate it so that we're ready for creating the Consumers:

```text
$ nats str rm ORDERS -f
$ nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --replicas 1
```
@ -0,0 +1,111 @@
# Clustering
Clustering in Jetstream is required for a highly available and scalable system. Behind clustering is RAFT. There's no need to understand RAFT in depth to use clustering, but knowing a little explains some of the requirements behind setting up Jetstream clusters.
## RAFT
JetStream uses a NATS-optimized RAFT algorithm for clustering. Typically RAFT generates a lot of traffic, but the NATS server optimizes this by combining the data plane for replicating messages with the messages RAFT would normally use to ensure consensus.
### Raft groups
The RAFT groups include the API handlers, streams, and consumers; an internal algorithm designates which servers handle which streams and consumers.

The RAFT algorithm has a few requirements:

* A log to persist state
* A quorum for consensus
### The Quorum
In order to ensure data consistency across complete restarts, a quorum of servers is required. A quorum is ½ cluster size + 1. This is the minimum number of nodes to ensure at least one node has the most recent data and state after a catastrophic failure. So for a cluster size of 3, youll need at least two Jetstream enabled NATS servers available to store new messages. For a cluster size of 5, youll need at least 3 NATS servers, and so forth.
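The arithmetic is worth making concrete; a small sketch (not NATS code):

```python
def quorum(cluster_size: int) -> int:
    """Minimum number of servers that must agree: floor(n/2) + 1."""
    return cluster_size // 2 + 1

for n in (3, 5, 7):
    print(f"cluster of {n}: quorum is {quorum(n)}")  # 2, 3, 4
```

This is also why even-sized clusters buy you little: a cluster of 4 has a quorum of 3, tolerating the same single failure as a cluster of 3.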
### RAFT Groups
**Meta Group** - all servers join the Meta Group and the JetStream API is managed by this group. A leader is elected and this owns the API and takes care of server placement.
![Meta Group](../../.gitbook/assets/meta-group.png)
**Stream Group** - each Stream creates a RAFT group, this group synchronizes state and data between its members. The elected leader handles ACKs and so forth, if there is no leader the stream will not accept messages.
![Stream Groups](../../.gitbook/assets/stream-groups.png)
**Consumer Group** - each Consumer creates a RAFT group, this group synchronizes consumer state between its members. The group will live on the machines where the Stream Group is and handle consumption ACKs etc. Each Consumer will have its own group.
![Consumer Groups](../../.gitbook/assets/consumer-groups.png)
### Cluster Size
Generally we recommend 3 or 5 JetStream enabled servers in a NATS cluster. This balances scalability with a tolerance for failure. For example, if 5 servers are JetStream enabled, you would want two servers in one “zone”, two servers in another, and the remaining server in a third. This means you can lose any one “zone” at any time and continue operating.
### Mixing Jetstream enabled servers with standard NATS servers
This is possible, and even recommended in some cases. By mixing server types you can dedicate certain machines optimized for storage for Jetstream and others optimized solely for compute for standard NATS servers, reducing operational expense. With the right configuration, the standard servers would handle non-persistent NATS traffic and the Jetstream enabled servers would handle Jetstream traffic.
## Configuration
To configure Jetstream clusters, just configure clusters as you normally would by specifying a cluster block in the configuration. Any Jetstream enabled servers in the list of clusters will automatically chatter and set themselves up. Unlike core NATS clustering though, each Jetstream node **must specify** a server name and cluster name.
Below are explicitly listed server configurations for a three-node cluster across three machines, `n1-c1`, `n2-c1`, and `n3-c1`.
### Server 1 \(host\_a\)
```text
server_name=n1-c1
listen=4222
jetstream {
store_dir=/nats/storage
}
cluster {
name: C1
listen: localhost:6222
routes: [
nats-route://host_b:6222
nats-route://host_c:6222
]
}
```
### Server 2 \(host\_b\)
```text
server_name=n2-c1
listen=4222
jetstream {
store_dir=/nats/storage
}
cluster {
name: C1
listen: localhost:6222
routes: [
nats-route://host_a:6222
nats-route://host_c:6222
]
}
```
### Server 3 \(host\_c\)
```text
server_name=n3-c1
listen=4222
jetstream {
store_dir=/nats/storage
}
cluster {
name: C1
listen: localhost:6222
routes: [
nats-route://host_a:6222
nats-route://host_b:6222
]
}
```
Add nodes as necessary. Choose a data directory that makes sense for your environment, ideally on a fast SSD, and launch each server. After two servers are running you'll be ready to use JetStream.
@ -1,4 +1,4 @@
# Administration

Once a JetStream cluster is operating, interactions with it via the `nats` CLI are the same as before. For these examples, let's assume we have a 5 server cluster, n1-n5, in a cluster named C1.

@ -8,9 +8,9 @@ Within an account there are operations and reports that show where users data is

## Creating clustered streams

When adding a stream using the `nats` CLI the number of replicas will be asked; when you choose a number greater than 1 \(we suggest 1, 3 or 5\), the data will be stored on multiple nodes in your cluster using the RAFT protocol as above. The replica count must be less than the maximum number of servers.

```text
$ nats str add ORDERS --replicas 3
....
Information for Stream ORDERS created 2021-02-05T12:07:34+01:00
...
```

@ -25,7 +25,6 @@ Cluster Information:

```text
                 Leader: n1-c1
                Replica: n4-c1, current, seen 0.07s ago
                Replica: n3-c1, current, seen 0.07s ago
```

Above you can see that the cluster information will be reported in all cases where Stream info is shown, such as after add or when using `nats stream info`.

@ -40,7 +39,7 @@ The replica count cannot be edited once configured.

Users can get overall statistics about their streams and also where these streams are placed:

```text
$ nats stream report
Obtaining Stream stats

+----------+-----------+----------+--------+---------+------+---------+----------------------+----------+
...
```

@ -61,7 +60,7 @@ Every RAFT group has a leader that's elected by the group when needed. Generally

Moving leadership away from a node does not remove it from the cluster and does not prevent it from becoming a leader again; this is merely a triggered leader election.

```text
$ nats stream cluster step-down ORDERS

14:32:17 Requesting leader step down of "n1-c1" in a 3 peer RAFT group
14:32:18 New leader elected "n4-c1"
...
```

@ -86,7 +85,7 @@ Systems users can view state of the Meta Group - but not individual Stream or Co

We have a high level report of cluster state:

```text
$ nats server report jetstream --user system
+--------------------------------------------------------------------------------------------------+
|                                        JetStream Summary                                         |
...
```

@ -124,7 +123,7 @@ In the Meta Group report the server `n2-c1` is not current and has not been seen

This report is built using raw data that can be obtained from the monitor port on the `/jsz` url, or over NATS using:

```text
$ nats server req jetstream --help
...
      --name=NAME              Limit to servers matching a server name
...
```

@ -147,7 +146,7 @@ This will produce a wealth of raw information about the current state of your cl

Similar to Streams and Consumers above, the Meta Group allows leader stand down. The Meta Group is cluster wide and spans all accounts, therefore to manage the meta group you have to use a `SYSTEM` user.

```text
$ nats server raft step-down --user system
17:44:24 Current leader: n2-c2
17:44:24 New leader: n1-c2
...
```

@ -161,7 +160,7 @@ There might be a case though where you know a machine will never return, and you

After the node is removed the cluster will notice that the replica count is not honored anymore and will immediately pick a new node and start replicating data to it. The new node will be selected using the same placement rules as the existing stream.

```text
$ nats s cluster peer-remove ORDERS
? Select a Peer n4-c1
14:38:50 Removing peer "n4-c1"
...
```

@ -170,7 +169,7 @@ $ nats s cluster peer-remove ORDERS

At this point the stream and all consumers will have removed `n4-c1` from the group; they will all start new peer selection and data replication.

```text
$ nats stream info ORDERS
....
Cluster Information:
...
```
@ -0,0 +1,20 @@
# Concepts
In JetStream the configuration for storing messages is defined separately from how they are consumed. Storage is defined in a _Stream_ and consuming messages is defined by multiple _Consumers_.
We'll discuss these 2 subjects in the context of this architecture.
![Orders](../../.gitbook/assets/streams-and-consumers-75p.png)
While this is an incomplete architecture it does show a number of key points:
* Many related subjects are stored in a Stream
* Consumers can have different modes of operation and receive just subsets of the messages
* Multiple Acknowledgement modes are supported
A new order arrives on `ORDERS.received`, gets sent to the `NEW` Consumer who, on success, will create a new message on `ORDERS.processed`. The `ORDERS.processed` message again enters the Stream where a `DISPATCH` Consumer receives it and once processed it will create an `ORDERS.completed` message which will again enter the Stream. These operations are all `pull` based meaning they are work queues and can scale horizontally. All require acknowledged delivery ensuring no order is missed.
All messages are delivered to a `MONITOR` Consumer without any acknowledgement and using Pub/Sub semantics - they are pushed to the monitor.
As messages are acknowledged to the `NEW` and `DISPATCH` Consumers, a percentage of them are Sampled and messages indicating redelivery counts, ack delays and more, are delivered to the monitoring system.
@ -1,4 +1,4 @@
# Configuration

The rest of this document introduces the `nats` utility, but for completeness and reference this is how you'd create the ORDERS scenario. We'll configure a 1 year retention for order related messages:

@ -8,3 +8,4 @@ $ nats con add ORDERS NEW --filter ORDERS.received --ack explicit --pull --deliv

```text
$ nats con add ORDERS DISPATCH --filter ORDERS.processed --ack explicit --pull --deliver all --max-deliver=-1 --sample 100
$ nats con add ORDERS MONITOR --filter '' --ack none --target monitor.ORDERS --deliver last --replay instant
```
@ -1,4 +1,4 @@
# Consumers

Each Consumer, or related group of Consumers, of a Stream will need a Consumer defined. It's ok to define thousands of these pointing at the same Stream.

@ -6,7 +6,7 @@ Consumers can either be `push` based where JetStream will deliver the messages a

In the orders example above we have 3 Consumers. The first two select a subset of the messages from the Stream by specifying a specific subject like `ORDERS.processed`. The Stream consumes `ORDERS.*` and this allows you to receive just what you need. The final Consumer receives all messages in a `push` fashion.

Consumers track their progress; they know what messages were delivered, acknowledged, etc., and will redeliver messages they sent that were not acknowledged. When first created, the Consumer has to know what message to send as the first one. You can configure either a specific message in the set \(`StreamSeq`\), a specific time \(`StartTime`\), all \(`DeliverAll`\) or last \(`DeliverLast`\). This is the starting point and from there, they all behave the same, delivering all of the following messages with optional Acknowledgement.

Acknowledgements default to `AckExplicit`, the only supported mode for pull-based Consumers, meaning every message requires a distinct acknowledgement. But for push-based Consumers, you can set `AckNone`, which does not require any acknowledgement, or `AckAll`, which quite interestingly allows you to acknowledge a specific message, like message `100`, which will also acknowledge messages `1` through `99`. The `AckAll` mode can be a great performance boost.
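The cumulative behaviour of `AckAll` can be sketched as follows (an illustrative model, not NATS client code):

```python
def ack_all(pending: set, acked_seq: int) -> set:
    """Cumulative ack: acknowledging sequence N also acknowledges everything up to N."""
    return {seq for seq in pending if seq > acked_seq}

pending = set(range(1, 101))        # messages 1..100 awaiting acknowledgement
remaining = ack_all(pending, 100)   # a single AckAll of message 100
print(len(remaining))               # 0 -- all 100 messages are now acknowledged
```

One round trip acknowledges the whole batch, which is where the performance boost comes from.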
@ -16,18 +16,19 @@ To assist with creating monitoring applications, one can set a `SampleFrequency`
When defining Consumers the items below make up the entire configuration of the Consumer: When defining Consumers the items below make up the entire configuration of the Consumer:
|Item|Description| | Item | Description |
|----|-----------| | :--- | :--- |
|AckPolicy|How messages should be acknowledged, `AckNone`, `AckAll` or `AckExplicit`| | AckPolicy | How messages should be acknowledged, `AckNone`, `AckAll` or `AckExplicit` |
| AckWait | How long to allow messages to remain un-acknowledged before attempting redelivery |
| DeliverPolicy | The initial starting mode of the consumer, `DeliverAll`, `DeliverLast`, `DeliverNew`, `DeliverByStartSequence` or `DeliverByStartTime` |
| DeliverySubject | The subject to deliver observed messages to; when not set, a pull-based Consumer is created |
| Durable | The name of the Consumer |
| FilterSubject | When consuming from a Stream with many subjects, or wildcards, select only specific incoming subjects; supports wildcards |
| MaxDeliver | Maximum number of times a specific message will be delivered. Use this to avoid poison pills crashing all your services forever |
| OptStartSeq | When first consuming messages from the Stream, start at this particular message in the set |
| ReplayPolicy | How messages are sent, `ReplayInstant` or `ReplayOriginal` |
| SampleFrequency | What percentage of acknowledgements should be sampled for observability, 0-100 |
| OptStartTime | When first consuming messages from the Stream, start with messages on or after this time |
| RateLimit | The rate of message delivery in bits per second |
| MaxAckPending | The maximum number of messages without acknowledgement that can be outstanding; once this limit is reached message delivery will be suspended |
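To make these fields concrete, below is a hypothetical consumer configuration in JSON. The snake\_case field names assume the JetStream API's serialized form and the values are purely illustrative:

```text
{
  "durable_name": "NEW",
  "deliver_subject": "ORDERS.processed",
  "deliver_policy": "all",
  "filter_subject": "ORDERS.received",
  "ack_wait": 30000000000,
  "max_deliver": 20,
  "replay_policy": "instant"
}
```

Because `deliver_subject` is set, this sketch describes a push-based Consumer; leaving it out would create a pull-based one.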

View File

@ -1,4 +1,4 @@
# Streams

Streams define how messages are stored and how long they are retained. Streams consume normal NATS subjects; any message found on those subjects will be delivered to the defined storage system. You can do a normal publish to the subject for unacknowledged delivery, or if you send a Request to the subject the JetStream server will reply with an acknowledgement that it was stored.
@ -6,26 +6,27 @@ As of January 2020, in the tech preview we have `file` and `memory` based storag
In the diagram above we show the concept of storing all `ORDERS.*` in the Stream even though there are many types of order related messages. We'll show how you can selectively consume subsets of messages later. Relatively speaking the Stream is the most resource consuming component so being able to combine related data in this manner is important to consider.

Streams can consume many subjects. Here we have `ORDERS.*` but we could also consume `SHIPPING.state` into the same Stream should that make sense \(not shown here\).

Streams support various retention policies - they can be kept based on limits like max count, size or age but also more novel methods like keeping them as long as any Consumers have them unacknowledged, or work queue like behavior where a message is removed after first ack.

Streams support deduplication using a `Nats-Msg-Id` header and a sliding window within which to track duplicate messages. See the [Message Deduplication](../model_deep_dive/#message-deduplication) section.

When defining Streams the items below make up the entire configuration of the set.
| Item | Description |
| :--- | :--- |
| MaxAge | Maximum age of any message in the stream, expressed in microseconds |
| MaxBytes | How big the Stream may be; when the combined stream size exceeds this, old messages are removed |
| MaxMsgSize | The largest message that will be accepted by the Stream |
| MaxMsgs | How many messages may be in a Stream; oldest messages will be removed if the Stream exceeds this size |
| MaxConsumers | How many Consumers can be defined for a given Stream, `-1` for unlimited |
| Name | A name for the Stream that may not have spaces, tabs or `.` |
| NoAck | Disables acknowledging messages that are received by the Stream |
| Replicas | How many replicas to keep for each message in a clustered JetStream, maximum 5 |
| Retention | How message retention is considered, `LimitsPolicy` \(default\), `InterestPolicy` or `WorkQueuePolicy` |
| Discard | When a Stream reaches its limits, `DiscardNew` refuses new messages while `DiscardOld` \(default\) deletes old messages |
| Storage | The type of storage backend, `file` and `memory` as of January 2020 |
| Subjects | A list of subjects to consume, supports wildcards |
| Duplicates | The window within which to track duplicate messages |
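As an illustration, a stream configuration covering most of these items might serialize to JSON along these lines - the snake\_case field names assume the JetStream API's serialized form and the values are hypothetical, with `-1` meaning unlimited and durations stored as integers:

```text
{
  "name": "ORDERS",
  "subjects": ["ORDERS.*"],
  "retention": "limits",
  "max_consumers": -1,
  "max_msgs": -1,
  "max_bytes": -1,
  "max_age": 31536000000000000,
  "max_msg_size": -1,
  "storage": "file",
  "num_replicas": 1,
  "discard": "old",
  "duplicate_window": 120000000000
}
```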

View File

@ -1,50 +1,51 @@
# Configuration Management

In many cases managing the configuration in your application code is the best model; many teams, though, wish to pre-create Streams and Consumers.

We support a number of tools to assist with this:

* `nats` CLI with configuration files
* [Terraform](https://www.terraform.io/)
* [GitHub Actions](https://github.com/features/actions)
* [Kubernetes JetStream Controller](https://github.com/nats-io/nack#jetstream-controller)

## nats Admin CLI

The `nats` CLI can be used to manage Streams and Consumers easily using its `--config` flag, for example:

## Add a new Stream

This creates a new Stream based on `orders.json`. The `orders.json` file can be extracted from an existing stream using `nats stream info ORDERS -j | jq .config`
```text
$ nats str add ORDERS --config orders.json
```
## Edit an existing Stream

This edits an existing stream ensuring it complies with the configuration in `orders.json`

```text
$ nats str edit ORDERS --config orders.json
```
## Add a New Consumer

This creates a new Consumer based on `orders_new.json`. The `orders_new.json` file can be extracted from an existing consumer using `nats con info ORDERS NEW -j | jq .config`

```text
$ nats con add ORDERS NEW --config orders_new.json
```
## Terraform

Terraform is a Cloud configuration tool from Hashicorp found at [terraform.io](https://www.terraform.io/). We maintain a Provider for Terraform called [terraform-provider-jetstream](https://github.com/nats-io/terraform-provider-jetstream/) that can maintain JetStream using Terraform.

### Setup

Our provider is not hosted by Hashicorp so installation is a bit more complex than typical. Browse to the [Release Page](https://github.com/nats-io/terraform-provider-jetstream/releases), download the release for your platform and extract it into your Terraform plugins directory.
```text
$ unzip -l terraform-provider-jetstream_0.0.2_darwin_amd64.zip
Archive:  terraform-provider-jetstream_0.0.2_darwin_amd64.zip
  Length      Date    Time    Name
```
@ -58,7 +59,7 @@ Place the `terraform-provider-jetstream_v0.0.2` file in `~/.terraform.d/plugins/
In your project you can configure the Provider like this:

```text
provider "jetstream" {
  servers     = "connect.ngs.global"
  credentials = "ngs_jetstream_admin.creds"
```
@ -67,7 +68,7 @@ provider "jetstream" {
And start using it. Here's an example that creates the `ORDERS` example. Review the [Project README](https://github.com/nats-io/terraform-provider-jetstream#readme) for full details.

```text
resource "jetstream_stream" "ORDERS" {
  name     = "ORDERS"
  subjects = ["ORDERS.*"]
```
@ -103,3 +104,4 @@ output "ORDERS_SUBJECTS" {
  value = jetstream_stream.ORDERS.subjects
}
```
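With the provider installed and resources like the above defined, the standard Terraform workflow applies - for example:

```text
$ terraform init
$ terraform plan
$ terraform apply
```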

View File

@ -1,4 +1,4 @@
# GitHub Actions

We have a pack of GitHub Actions that let you manage an already running JetStream Server, useful for managing releases or standing up test infrastructure.
@ -53,3 +53,4 @@ jobs:
          message: Published new deployment via "${{ github.event_name }}" in "${{ github.repository }}"
          server: js.example.net
```

View File

@ -1,4 +1,4 @@
# Kubernetes Controller

The JetStream controller allows you to manage NATS JetStream Streams and Consumers via K8S CRDs. You can find more info on how to deploy and use it [here](https://github.com/nats-io/nack#getting-started). Below you can find an example of how to create a stream and a couple of consumers:
@ -41,7 +41,7 @@ spec:
Once the CRDs are installed you can use `kubectl` to manage the streams and consumers as follows:

```bash
$ kubectl get streams
NAME       STATE     STREAM NAME   SUBJECTS
mystream   Created   mystream      [orders.*]
@ -55,3 +55,4 @@ my-push-consumer Created mystream my-push-consumer none
# kubectl describe streams mystream
# kubectl describe consumers my-pull-consumer
```
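For reference, a Stream CRD of the sort listed above might look roughly like this - the `apiVersion` and spec fields here are assumptions based on the nack project and may differ between releases, so consult its README for the authoritative schema:

```text
apiVersion: jetstream.nats.io/v1beta1
kind: Stream
metadata:
  name: mystream
spec:
  name: mystream
  subjects: ["orders.*"]
  storage: memory
  maxAge: 1h
```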

View File

@ -0,0 +1,2 @@
# NATS Admin CLI

View File

@ -0,0 +1,2 @@
# Terraform

View File

@ -1,23 +1,23 @@
# Disaster Recovery

Disaster Recovery of the JetStream system is a topic we are still exploring and fleshing out, and one that will be impacted by the clustering work. For example, replication will extend the options available to you.
Today we have a few approaches to consider:

* `nats` CLI + Configuration Backups + Data Snapshots
* Configuration Management + Data Snapshots
## Data Backup

In all scenarios you can perform data snapshots and restores over the NATS protocol. This is good if you do not manage the NATS servers hosting your data, and you wish to do a backup of your data.

The backup includes:

* Stream configuration and state
* Stream Consumer configuration and state
* All data including metadata like timestamps and headers
```text
$ nats stream backup ORDERS /data/js-backup/ORDERS.tgz
Starting backup of Stream "ORDERS" with 13 data blocks
```
@ -30,11 +30,11 @@ During the backup the Stream is in a state where it's configuration cannot chang
Progress using the terminal bar can be disabled using `--no-progress`; it will then issue log lines instead.

## Restoring Data

The backup made above can be restored into another server - but into the same Stream name.
```text
$ nats str restore ORDERS /data/js-backup/ORDERS.tgz
Starting restore of Stream "ORDERS" from file "/data/js-backup/ORDERS.tgz"
```
@ -54,13 +54,13 @@ The `/data/js-backup/ORDERS.tgz` file can also be extracted into the data dir of
Progress using the terminal bar can be disabled using `--no-progress`; it will then issue log lines instead.

## Interactive CLI

In environments where the `nats` CLI is used interactively to configure the server, you do not have a desired state to recreate the server from. This is not the ideal way to administer the server; we recommend Configuration Management, but many will use this approach.

Here you can back up the configuration into a directory from where you can recover the configuration later. The data for File backed stores can also be backed up.
```text
$ nats backup /data/js-backup
15:56:11 Creating JetStream backup into /data/js-backup
15:56:11 Stream ORDERS to /data/js-backup/stream_ORDERS.json
```
@ -74,7 +74,7 @@ During the same process the data can also be backed up by passing `--data`, this
Later the data can be restored; for Streams we support editing the Stream configuration in place to match what was in the backup.

```text
$ nats restore /tmp/backup --update-streams
15:57:42 Reading file /tmp/backup/stream_ORDERS.json
15:57:42 Reading file /tmp/backup/stream_ORDERS_consumer_NEW.json
@ -83,3 +83,4 @@ $ nats restore /tmp/backup --update-streams
```

The `nats restore` tool does not support restoring data; the `nats stream restore` process outlined earlier can be used instead, which will also restore Stream and Consumer configurations and state.

View File

@ -0,0 +1,27 @@
# Getting Started
Getting started with JetStream is straightforward. While we speak of JetStream as if it is a separate component, it's actually a subsystem built into the NATS server that needs to be enabled.
## Command line
Enable JetStream by specifying the `-js` flag when starting the NATS server.
`$ nats-server -js`
## Configuration File
Enable JetStream through a configuration file. By default, the JetStream subsystem will store data in the /tmp directory. Here's a minimal file that will store data in a local "nats" directory, suitable for development and local testing.
`$ nats-server -c js.conf`
```text
# js.conf
jetstream {
store_dir=nats
}
```
Normally JetStream will be run in clustered mode and will replicate data, so the best place to store JetStream data would be locally on a fast SSD. One should specifically avoid NAS or NFS storage for JetStream.
See [Using Docker](using_docker.md) and [Using Source](using_source.md) for more information.


View File

@ -0,0 +1,2 @@
# Using Docker with NGS

View File

@ -4,13 +4,13 @@ The `natsio/nats-box:latest` docker image contains the `nats` utility this guide
In one window start a JetStream enabled nats server:
```text
$ docker run --network host -p 4222:4222 nats -js
```
And in another, log into the utilities:
```text
$ docker run -ti --network host natsio/nats-box
```
@ -18,18 +18,19 @@ This shell has the `nats` utility and all other NATS cli tools used in the rest
Now skip to the `Administer JetStream` section.

## Using Docker with NGS

You can join a JetStream instance to your [NGS](https://synadia.com/ngs/pricing) account; first we need a credential for testing JetStream:

You'll want to do this outside of docker to keep the credentials that are generated.
```text
$ nsc add user -a YourAccount --name leafnode --expiry 1M
```
You'll get a credential file somewhere like `~/.nkeys/creds/synadia/YourAccount/leafnode.creds`; mount this file into the docker container for JetStream using `-v ~/.nkeys/creds/synadia/YourAccount/leafnode.creds:/leafnode.creds`.
```text
$ docker run -ti -v ~/.nkeys/creds/synadia/YourAccount/leafnode.creds:/leafnode.creds --name jetstream synadia/jsm:latest server
[1] 2020/01/20 12:44:11.752465 [INF] Starting nats-server version 2.2.0
...
@ -37,3 +38,4 @@ $ docker run -ti -v ~/.nkeys/creds/synadia/YourAccount/leafnode.creds:/leafnode.
```

Your JSM shell will still connect locally; other connections in your NGS account can use JetStream at this point.

View File

@ -1,10 +1,10 @@
# Using Source

You will also want to install the examples/tools from the nats.go repo, such as nats-pub, nats-sub, nats-req and possibly nats-bench. One of the design goals of JetStream was to be native to core NATS, so even though we will most certainly add in syntactic sugar to clients to make them more appealing, for this tech preview we will be using plain old NATS.

You will need a copy of the nats-server source locally and will need to be in the jetstream branch.
```text
$ git clone https://github.com/nats-io/nats-server.git
$ cd nats-server
$ git checkout master
```
@ -14,7 +14,7 @@ $ ls -l nats-server
Starting the server you can use the `-js` flag. This will set up the server to reasonably use memory and disk. This is a sample run on my machine. JetStream will default to 1TB of disk and 75% of available memory for now.
```text
$ ./nats-server -js
[16928] 2019/12/04 19:16:29.596968 [INF] Starting nats-server version 2.2.0
```
@ -32,7 +32,7 @@ $ ./nats-server -js
You can override the storage directory if you want.
```text
$ ./nats-server -js -sd /tmp/test
[16943] 2019/12/04 19:20:00.874148 [INF] Starting nats-server version 2.2.0
```
@ -50,7 +50,7 @@ $ ./nats-server -js -sd /tmp/test
These options can also be set in your configuration file:
```text
// enables jetstream, an empty block will enable and use defaults
jetstream {
  // jetstream data will be in /data/nats-server/jetstream
@ -63,3 +63,4 @@ jetstream {
  max_file_store: 10737418240
}
```

jetstream/jetstream.md Normal file
View File

@ -0,0 +1,35 @@
# About JetStream
JetStream was created to solve the problems identified with streaming in technology today - complexity, fragility, and a lack of scalability. Some technologies address these better than others, but no current streaming technology is truly multi-tenant, horizontally scalable, and supports multiple deployment models. No technology we are aware of can scale from edge to cloud under the same security context while having complete deployment observability for operations.
## Goals
JetStream was developed with the following goals in mind:
* The system must be easy to configure and operate and be observable.
* The system must be secure and operate well with NATS 2.0 security models.
* The system must scale horizontally and be applicable to a high ingestion rate.
* The system must support multiple use cases.
* The system must self heal and always be available.
* The system must have an API that is closer to core NATS.
* The system must allow NATS messages to be part of a stream as desired.
* The system must display payload agnostic behavior.
* The system must not have third party dependencies.
## High-Level Design and Features
In terms of deployment, a JetStream server is simply a NATS server with the JetStream subsystem enabled, launched with the `-js` flag with a configured server name and cluster name. From a client perspective, it does not matter which servers are running JetStream so long as there is some route to a JetStream enabled server or servers. This allows for a flexible deployment in which you optimize resources for the particular servers that will store streams versus very low overhead stateless servers, reducing OpEx and ultimately creating a scalable and manageable system.
## Feature List
* At-least-once delivery; exactly once within a window
* Store messages and replay by time or sequence
* Wildcard support
* Account aware
* Data at rest encryption
* Cleanse specific messages \(GDPR\)
* Horizontal scalability
* Persist Streams and replay via Consumers
JetStream is designed to bifurcate ingestion and consumption of messages to provide multiple ways to consume data from the same stream. To that end, JetStream functionality is composed of server streams and server consumers.
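In practice that separation shows up directly in the tooling: one definition controls ingestion into the stream, while separate definitions control each consumption path, as in this sketch using the `nats` CLI commands covered elsewhere in this guide (the second consumer and its config file are illustrative):

```text
# define how messages are ingested and stored
$ nats str add ORDERS --config orders.json

# define two independent ways of consuming the same data
$ nats con add ORDERS NEW --config orders_new.json
$ nats con add ORDERS DISPATCH --config orders_dispatch.json
```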

View File

@ -1,8 +1,8 @@
# Model Deep Dive

The Orders example touched on a lot of features, but some, like different Ack models and message limits, need a bit more detail. This section will expand on the above and fill in some blanks.

## Stream Limits, Retention Modes and Discard Policy

Streams store data on disk, but we cannot store all data forever, so we need ways to control their size automatically.
@ -10,11 +10,11 @@ There are 3 features that come into play when Streams decide how long they store
The `Retention Policy` describes the criteria by which a set will evict messages from its storage:

| Retention Policy | Description |
| :--- | :--- |
| `LimitsPolicy` | Limits are set for how many messages, how big the storage and how old messages may be |
| `WorkQueuePolicy` | Messages are kept until they were consumed by any one single observer and then removed |
| `InterestPolicy` | Messages are kept as long as there are Consumers active for them |

In all Retention Policies the basic limits apply as upper bounds; these are `MaxMsgs` for how many messages are kept in total, `MaxBytes` for how big the set can be in total and `MaxAge` for what is the oldest message that will be kept. These are the only limits in play with `LimitsPolicy` retention.
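Putting those together, a hypothetical `LimitsPolicy` stream capped at 10,000 messages, 100MB and 24 hours would carry settings along these lines (values purely illustrative):

```text
Retention: LimitsPolicy
MaxMsgs:   10,000
MaxBytes:  100 MB
MaxAge:    24h
```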
@ -22,17 +22,17 @@ One can then define additional ways a message may be removed from the Stream ear
In both `WorkQueuePolicy` and `InterestPolicy` the age, size and count limits will still apply as upper bounds.

A final control is the maximum size any single message may have. NATS has its own limit for maximum size \(1 MiB by default\), but you can say a Stream will only accept messages up to 1024 bytes using `MaxMsgSize`.

The `Discard Policy` sets how messages are discarded when limits set by `LimitsPolicy` are reached. The `DiscardOld` option removes old messages making space for new, while `DiscardNew` refuses any new messages.
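The two discard behaviours can be sketched in a few lines of illustrative Python - this is a toy model of the policy for a count-limited stream, not JetStream code:

```python
from collections import deque

def publish(buffer, max_msgs, msg, discard="old"):
    """Toy model of a count-limited stream with a discard policy."""
    if len(buffer) < max_msgs:
        buffer.append(msg)
        return True
    if discard == "old":   # DiscardOld: evict the oldest message to make room
        buffer.popleft()
        buffer.append(msg)
        return True
    return False           # DiscardNew: refuse the new message

stream = deque()
for i in range(5):
    publish(stream, 3, i, discard="old")
print(list(stream))  # -> [2, 3, 4]
```

With `discard="new"` the same loop would instead keep the first three messages and refuse the rest.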
The `WorkQueuePolicy` mode is a specialized mode where a message, once consumed and acknowledged, is discarded from the Stream. In this mode there are a few limits on consumers. Inherently it is about one message going to one consumer, which means you cannot have overlapping consumers defined on the Stream - each consumer needs a unique filter subject.
## Message Deduplication

JetStream supports idempotent message writes by ignoring duplicate messages as indicated by the `Nats-Msg-Id` header.
```text
% nats req -H Nats-Msg-Id:1 ORDERS.new hello1
% nats req -H Nats-Msg-Id:1 ORDERS.new hello2
% nats req -H Nats-Msg-Id:1 ORDERS.new hello3
```
Here we set a `Nats-Msg-Id:1` header which tells JetStream to ensure we do not have duplicates of this message - we only consult the message ID, not the body.

```text
$ nats str info ORDERS
...
State:
...
```
The default window to track duplicates in is 2 minutes; this can be set on the command line using `--dupe-window` when creating a stream, though we would caution against large windows.
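The window behaviour can be sketched in Python. This is a toy model of a time-based deduplication window, not the server's code:

```python
class DedupWindow:
    """Accept a message only if its ID was not seen within the window."""

    def __init__(self, window_seconds: float = 120.0):
        self.window = window_seconds
        self.seen: dict[str, float] = {}   # msg_id -> time first seen

    def accept(self, msg_id: str, now: float) -> bool:
        # Forget IDs that have aged out of the window
        self.seen = {i: t for i, t in self.seen.items() if now - t < self.window}
        if msg_id in self.seen:
            return False                   # duplicate: ignored
        self.seen[msg_id] = now
        return True

w = DedupWindow()
print(w.accept("1", now=0.0))    # True  - first time this ID is seen
print(w.accept("1", now=10.0))   # False - duplicate inside the window
print(w.accept("1", now=130.0))  # True  - the 2 minute window has expired
```

The larger the window, the more IDs must be tracked, which is why large windows are discouraged.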
## Acknowledgement Models

Streams support acknowledging receipt of a message: if you send a `Request()` to a subject covered by the configuration of the Stream, the service will reply to you once it has stored the message. If you just publish, it will not. A Stream can be set to disable acknowledgements by setting `NoAck` to `true` in its configuration.

Consumers have 3 acknowledgement modes:
| Mode | Description |
| :--- | :--- |
| `AckExplicit` | This requires every message to be specifically acknowledged, it's the only supported option for pull-based Consumers |
| `AckAll` | In this mode if you acknowledge message `100` it will also acknowledge message `1`-`99`, this is good for processing batches and to reduce ack overhead |
| `AckNone` | No acknowledgements are supported |
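The difference between `AckExplicit` and `AckAll` in terms of the acknowledgement floor can be sketched as follows. This is a hypothetical model, not client-library code:

```python
def ack_floor(acked: set[int], mode: str, highest_acked: int) -> int:
    """Return the highest contiguously acknowledged sequence.

    AckAll: acking sequence N implicitly acks everything up to N.
    AckExplicit: the floor only advances over contiguously acked sequences.
    """
    if mode == "AckAll":
        return highest_acked
    floor = 0
    while floor + 1 in acked:
        floor += 1
    return floor

# AckExplicit: messages 1, 2 and 5 acked -> the floor stops at 2
print(ack_floor({1, 2, 5}, "AckExplicit", highest_acked=5))  # 2
# AckAll: a single ack of 100 covers everything below it
print(ack_floor(set(), "AckAll", highest_acked=100))         # 100
```

This is why `AckAll` suits batch processing: one acknowledgement advances the floor over the whole batch.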
To understand how Consumers track messages we will start with a clean `ORDERS` Stream and `DISPATCH` Consumer.

```text
$ nats str info ORDERS
...
Statistics:
...
```

The Set is entirely empty.
```text
$ nats con info ORDERS DISPATCH
...
State:
...
  Redelivered Messages: 0
```

The Consumer has no messages outstanding and has never had any \(Consumer sequence is 1\).
We publish one message to the Stream and see that the Stream received it:

```text
$ nats pub ORDERS.processed "order 4"
Published 7 bytes to ORDERS.processed

$ nats str info ORDERS
...
Statistics:
...
```
As the Consumer is pull-based, we can fetch the message, ack it, and check the Consumer state:

```text
$ nats con next ORDERS DISPATCH
--- received on ORDERS.processed
order 4

Acknowledged message
```

The message got delivered and acknowledged - the `Acknowledgement floor` is now `1`.
We'll publish another message, fetch it but not Ack it this time, and see the status:

```text
$ nats pub ORDERS.processed "order 5"
Published 7 bytes to ORDERS.processed
...
State:
...
  Redelivered Messages: 0
```

Now we can see the Consumer has processed 2 messages \(the Consumer sequence is 3, so the next message will be 3\) but the Ack floor is still 1 - thus 1 message is pending acknowledgement. Indeed, this is confirmed by the `Pending messages` count.
If I fetch it again, and again do not ack it:

```text
$ nats con next ORDERS DISPATCH --no-ack
--- received on ORDERS.processed
order 5
```

The Consumer sequence increases - each delivery attempt increases the sequence - while the Ack floor stays where it was.
Finally, if I then fetch it again and ack it this time:

```text
$ nats con next ORDERS DISPATCH
--- received on ORDERS.processed
order 5

Acknowledged message
```

Having now Acked the message, there are no more pending.
Additionally, there are a few types of acknowledgements:
| Type | Bytes | Description |
| :--- | :--- | :--- |
| `AckAck` | nil, `+ACK` | Acknowledges a message was completely handled |
| `AckNak` | `-NAK` | Signals that the message will not be processed now and processing can move onto the next message, NAK'd message will be retried |
| `AckProgress` | `+WPI` | When sent before the AckWait period indicates that work is ongoing and the period should be extended by another equal to `AckWait` |
| `AckNext` | `+NXT` | Acknowledges the message was handled and requests delivery of the next message to the reply subject. Only applies to Pull-mode. |
| `AckTerm` | `+TERM` | Instructs the server to stop redelivery of a message without acknowledging it as successfully processed |
So far all the examples used the `AckAck` type of acknowledgement; by replying to the Ack with the body as indicated in `Bytes` you can pick which mode of acknowledgement you want.

All of these acknowledgement modes, except `AckNext`, support double acknowledgement.

The `+NXT` acknowledgement can have a few formats: `+NXT 10` requests 10 messages, and `+NXT {"no_wait": true}` which is the same data that can be sent in a Pull request.
## Exactly Once Delivery

JetStream supports Exactly Once delivery by combining Message Deduplication and double acks.

On the publishing side you can avoid duplicate message ingestion using the [Message Deduplication](./#message-deduplication) feature.

Consumers can be 100% sure a message was correctly processed by requesting that the server acknowledge having received your acknowledgement - done by setting a reply subject on the Ack. If you receive this response you will never receive that message again.
## Consumer Starting Position

When setting up a Consumer you can decide where to start; the system supports the following for the `DeliverPolicy`:
| Policy | Description |
| :--- | :--- |
| `all` | Delivers all messages that are available |
| `last` | Delivers the latest message, like a `tail -n 1 -f` |
| `new` | Delivers only new messages that arrive after subscribe time |
| `by_start_time` | Delivers from a specific time onward. Requires `OptStartTime` to be set |
| `by_start_sequence` | Delivers from a specific stream sequence. Requires `OptStartSeq` to be set |
Regardless of what mode you set, this is only the starting point. Once started it will always give you what you have not seen or acknowledged. So this is merely how it picks the very first message.
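Picking that very first message can be sketched as follows, with stream messages modelled as `(sequence, timestamp)` tuples. This is an illustrative model of the policies in the table, not server code:

```python
def starting_sequence(msgs, policy, start_seq=None, start_time=None):
    """Pick the first sequence a new Consumer would receive.

    msgs: list of (seq, ts) tuples already in the stream, in order.
    Returns None when there is nothing to deliver yet.
    """
    if policy == "new":
        return None                         # nothing until a new message arrives
    if not msgs:
        return None
    if policy == "all":
        return msgs[0][0]                   # oldest available message
    if policy == "last":
        return msgs[-1][0]                  # most recent message only
    if policy == "by_start_sequence":
        return next(s for s, _ in msgs if s >= start_seq)
    if policy == "by_start_time":
        return next(s for s, t in msgs if t >= start_time)
    raise ValueError(f"unknown policy {policy}")

stream = [(1, 100), (2, 160), (3, 220)]     # assumed example data
print(starting_sequence(stream, "all"))                              # 1
print(starting_sequence(stream, "last"))                             # 3
print(starting_sequence(stream, "by_start_sequence", start_seq=2))   # 2
print(starting_sequence(stream, "by_start_time", start_time=200))    # 3
```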
Let's look at each of these. First we make a new Stream `ORDERS` and add 100 messages to it.

Now create a `DeliverAll` pull-based Consumer:
```text
$ nats con add ORDERS ALL --pull --filter ORDERS.processed --ack none --replay instant --deliver all
$ nats con next ORDERS ALL
--- received on ORDERS.processed
order 1

Acknowledged message
```
Now create a `DeliverLast` pull-based Consumer:

```text
$ nats con add ORDERS LAST --pull --filter ORDERS.processed --ack none --replay instant --deliver last
$ nats con next ORDERS LAST
--- received on ORDERS.processed
order 100

Acknowledged message
```
Now create a `MsgSetSeq` pull-based Consumer:

```text
$ nats con add ORDERS TEN --pull --filter ORDERS.processed --ack none --replay instant --deliver 10
$ nats con next ORDERS TEN
--- received on ORDERS.processed
order 10

Acknowledged message
```
And finally a time-based Consumer. Let's add some messages a minute apart:

```text
$ nats str purge ORDERS
$ for i in 1 2 3
do
  nats pub ORDERS.processed "order ${i}"
  sleep 60
done
```
Then create a Consumer that starts 2 minutes ago:

```text
$ nats con add ORDERS 2MIN --pull --filter ORDERS.processed --ack none --replay instant --deliver 2m
$ nats con next ORDERS 2MIN
--- received on ORDERS.processed
order 2

Acknowledged message
```
## Ephemeral Consumers

So far, all the Consumers you have seen were Durable, meaning they exist even after you disconnect from JetStream. In our Orders scenario, though, the `MONITOR` Consumer could very well be a short-lived thing, there just while an operator is debugging the system; there is no need to remember the last seen position if all you want to do is observe the real-time state.

Ephemeral Consumers can only be push-based.
Terminal 1:

```text
$ nats sub my.monitor
```
Terminal 2:

```text
$ nats con add ORDERS --filter '' --ack none --target 'my.monitor' --deliver last --replay instant --ephemeral
```

The `--ephemeral` switch tells the system to make an Ephemeral Consumer.
## Consumer Message Rates

Typically what you want is that when a new Consumer is made, the selected messages are delivered to you as quickly as possible. You might, though, want to replay messages at the rate they arrived, meaning if messages first arrived 1 minute apart and you make a new Consumer, it will get the messages a minute apart.

This is useful in load testing scenarios and the like. This is called the `ReplayPolicy`.

You can only set `ReplayPolicy` on push-based Consumers.
```text
$ nats con add ORDERS REPLAY --target out.original --filter ORDERS.processed --ack none --deliver all --sample 100 --replay original
...
     Replay Policy: original
...
```
Now let's publish messages into the Set 10 seconds apart:

```text
$ for i in 1 2 3
do
  nats pub ORDERS.processed "order ${i}"
  sleep 10
done
Published [ORDERS.processed] : 'order 1'
Published [ORDERS.processed] : 'order 2'
Published [ORDERS.processed] : 'order 3'
```
And when we consume them they will come to us 10 seconds apart:

```text
$ nats sub -t out.original
Listening on [out.original]
2020/01/03 15:17:26 [#1] Received on [ORDERS.processed]: 'order 1'
...
^C
```
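The difference between the two replay policies can be sketched as a pacing calculation. This is a toy model with assumed timestamps, not the server implementation:

```python
def replay_delays(timestamps, policy):
    """Delay to wait before delivering each message during replay.

    "instant" delivers as fast as possible; "original" preserves the
    gaps between the original message arrival times.
    """
    if policy == "instant":
        return [0.0] * len(timestamps)
    # "original": sleep the same gap that separated the messages originally
    return [0.0] + [b - a for a, b in zip(timestamps, timestamps[1:])]

arrivals = [0.0, 10.0, 20.0]                # messages published 10 seconds apart
print(replay_delays(arrivals, "instant"))   # [0.0, 0.0, 0.0]
print(replay_delays(arrivals, "original"))  # [0.0, 10.0, 10.0]
```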
## Stream Templates

When you have many similar streams it can be helpful to auto-create them. Let's say you have a service per client, on subjects `CLIENT.*`; you can construct a template that will auto-generate streams for any matching traffic.
```text
$ nats str template add CLIENTS --subjects "CLIENT.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size 2048 --max-streams 1024 --discard old
Stream Template CLIENTS was created
...
Managed Streams:
...
```
You can see no streams currently exist; let's publish some data:

```text
$ nats pub CLIENT.acme hello
```
And we'll have 1 new Stream:

```text
$ nats str ls
Streams:
...
```
When the template is deleted all the streams it created will be deleted too.
## Ack Sampling

In the earlier sections we saw that samples are being sent to a monitoring system. Let's look at that in depth: how the monitoring system works and what it contains.

As messages pass through a Consumer you'd be interested in knowing how many are being acknowledged and how quickly.

Consumers can sample Ack'ed messages for you and publish samples so your monitoring system can observe the health of a Consumer. We will add support for this to [NATS Surveyor](https://github.com/nats-io/nats-surveyor).
### Configuration

You can configure a Consumer for sampling by passing the `--sample 80` option to `nats consumer add`; this tells the system to sample 80% of Acknowledgements.

When viewing info of a Consumer you can tell if it's sampled or not:
```text
$ nats con info ORDERS NEW
...
     Sampling Rate: 100
...
```
### Consuming

Samples are published to `$JS.EVENT.METRIC.CONSUMER_ACK.<stream>.<consumer>` in JSON format containing `api.ConsumerAckMetric`. Use the `nats con events` command to view samples:
```text
$ nats con events ORDERS NEW
Listening for Advisories on $JS.EVENT.ADVISORY.*.ORDERS.NEW
Listening for Metrics on $JS.EVENT.METRIC.*.ORDERS.NEW
...
            Delay: 1.009ms
```
```text
$ nats con events ORDERS NEW --json
{
  "stream": "ORDERS",
  ...
}
```
## Storage Overhead

JetStream file storage is very efficient, storing as little extra information about the message as possible.

We do store some message data with each message, namely:
* Message headers
* The subject it was received on
* The time it was received
* The message payload
* A hash of the message
* The message sequence
* A few other bits like the length of the subject and the length of headers
Without any headers the size is:

```text
length of the message record (4bytes) + seq(8) + ts(8) + subj_len(2) + subj + msg + hash(8)
```

A 5 byte `hello` message without headers will take 39 bytes.
With headers:

```text
length of the message record (4bytes) + seq(8) + ts(8) + subj_len(2) + subj + hdr_len(4) + hdr + msg + hash(8)
```
So if you are publishing many small messages the overhead will be, relatively speaking, quite large, but for larger messages the overhead is very small. If you publish many small messages it's worth trying to optimise the subject length.
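The formulas above translate directly into a small calculator. The subject name below is an assumed example; a 4-character subject is what makes the 5-byte `hello` come out at 39 bytes:

```python
def record_size(subject: str, msg_len: int, hdr_len: int = 0) -> int:
    """File-storage size of one message record, per the layout above."""
    # record length(4) + seq(8) + ts(8) + subj_len(2) + subject + payload + hash(8)
    size = 4 + 8 + 8 + 2 + len(subject) + msg_len + 8
    if hdr_len:
        size += 4 + hdr_len   # hdr_len field(4) + the headers themselves
    return size

# 5 byte "hello" on an assumed 4-character subject -> 39 bytes
print(record_size("test", msg_len=5))  # 39
```

Running the same calculation for a 1 KiB payload shows the fixed 30-byte overhead becoming negligible, which is the point made above.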

jetstream/monitoring.md Normal file
# Monitoring
## Server Metrics
Typically, NATS is monitored via HTTP endpoints like `/varz`. We do not at this moment have a JetStream equivalent, but it is planned that server and account level metrics will be made available.
## Advisories
JetStream publishes a number of advisories that can inform operations about the health and state of the Streams. These advisories are published to normal NATS subjects below `$JS.EVENT.ADVISORY.>`, and one can store these advisories in JetStream Streams if desired.

The command `nats event --js-advisory` can view all these events on your console. The Golang package [jsm.go](https://github.com/nats-io/jsm.go) can consume and render these events and has data types for each of these events.
All these events have JSON Schemas that describe them, schemas can be viewed on the CLI using the `nats schema show <schema kind>` command.
| Description | Subject | Kind |
| :--- | :--- | :--- |
| API interactions | `$JS.EVENT.ADVISORY.API` | `io.nats.jetstream.advisory.v1.api_audit` |
| Stream CRUD operations | `$JS.EVENT.ADVISORY.STREAM.CREATED.<STREAM>` | `io.nats.jetstream.advisory.v1.stream_action` |
| Consumer CRUD operations | `$JS.EVENT.ADVISORY.CONSUMER.CREATED.<STREAM>.<CONSUMER>` | `io.nats.jetstream.advisory.v1.consumer_action` |
| Snapshot started using `nats stream backup` | `$JS.EVENT.ADVISORY.STREAM.SNAPSHOT_CREATE.<STREAM>` | `io.nats.jetstream.advisory.v1.snapshot_create` |
| Snapshot completed | `$JS.EVENT.ADVISORY.STREAM.SNAPSHOT_COMPLETE.<STREAM>` | `io.nats.jetstream.advisory.v1.snapshot_complete` |
| Restore started using `nats stream restore` | `$JS.EVENT.ADVISORY.STREAM.RESTORE_CREATE.<STREAM>` | `io.nats.jetstream.advisory.v1.restore_create` |
| Restore completed | `$JS.EVENT.ADVISORY.STREAM.RESTORE_COMPLETE.<STREAM>` | `io.nats.jetstream.advisory.v1.restore_complete` |
| Consumer maximum delivery reached | `$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.<STREAM>.<CONSUMER>` | `io.nats.jetstream.advisory.v1.max_deliver` |
| Message delivery terminated using AckTerm | `$JS.EVENT.ADVISORY.CONSUMER.MSG_TERMINATED.<STREAM>.<CONSUMER>` | `io.nats.jetstream.advisory.v1.terminated` |
| Message acknowledged in a sampled Consumer | `$JS.EVENT.METRIC.CONSUMER.ACK.<STREAM>.<CONSUMER>` | `io.nats.jetstream.metric.v1.consumer_ack` |
| Clustered Stream elected a new leader | `$JS.EVENT.ADVISORY.STREAM.LEADER_ELECTED.<STREAM>` | `io.nats.jetstream.advisory.v1.stream_leader_elected` |
| Clustered Stream lost quorum | `$JS.EVENT.ADVISORY.STREAM.QUORUM_LOST.<STREAM>` | `io.nats.jetstream.advisory.v1.stream_quorum_lost` |
| Clustered Consumer elected a new leader | `$JS.EVENT.ADVISORY.CONSUMER.LEADER_ELECTED.<STREAM>.<CONSUMER>` | `io.nats.jetstream.advisory.v1.consumer_leader_elected` |
| Clustered Consumer lost quorum | `$JS.EVENT.ADVISORY.CONSUMER.QUORUM_LOST.<STREAM>.<CONSUMER>` | `io.nats.jetstream.advisory.v1.consumer_quorum_lost` |
## Dashboards
The [NATS Surveyor](https://github.com/nats-io/nats-surveyor) system has initial support for passing JetStream metrics to Prometheus, dashboards and more will be added towards final release.

# NATS API Reference

Thus far we saw a lot of CLI interactions. The CLI works by sending and receiving specially crafted messages over core NATS to configure the JetStream system. In time we will look to add file-based configuration, but for now the only method is the NATS API.

**NOTE:** Some NATS client libraries may need to enable an option to use old-style requests when interacting with the JetStream server. Consult the library's README for more information.
## Reference

All of these subjects are found as constants in the NATS Server source; for example, `$JS.API.STREAM.LIST` is the constant `api.JetStreamListStreams` in the nats-server source. The tables below will reference these constants and likewise the data structures in the server for payloads.

## Error Handling

The APIs used for administrative tools all respond with standardised JSON, and these include errors.
```text
$ nats req '$JS.API.STREAM.INFO.nonexisting' ''
Published 11 bytes to $JS.API.STREAM.INFO.nonexisting
Received [_INBOX.lcWgjX2WgJLxqepU0K9pNf.mpBW9tHK] : {
...
}
```
```nohighlight ```text
$ nats req '$JS.STREAM.INFO.ORDERS' '' $ nats req '$JS.STREAM.INFO.ORDERS' ''
Published 6 bytes to $JS.STREAM.INFO.ORDERS Published 6 bytes to $JS.STREAM.INFO.ORDERS
Received [_INBOX.fwqdpoWtG8XFXHKfqhQDVA.vBecyWmF] : '{ Received [_INBOX.fwqdpoWtG8XFXHKfqhQDVA.vBecyWmF] : '{
@ -39,7 +39,7 @@ Here the responses include a `type` which can be used to find the JSON Schema fo
Non-admin APIs - like those for adding a message to the stream - will respond with `-ERR` or `+OK` with an optional reason after.
## Admin API

All the admin actions the `nats` CLI can do fall into the sections below. The API structures are kept in the `api` package in the `jsm.go` repository.

@@ -49,61 +49,61 @@ The command `nats events` will show you an audit log of all API access events wh

The API uses JSON for inputs and outputs, all the responses are typed using a `type` field which indicates their Schema. A JSON Schema repository can be found in `nats-io/jetstream/schemas`.
### General Info

| Subject | Constant | Description | Request Payload | Response Payload |
| :--- | :--- | :--- | :--- | :--- |
| `$JS.API.INFO` | `api.JSApiAccountInfo` | Retrieves stats and limits about your account | empty payload | `api.JetStreamAccountStats` |
### Streams

| Subject | Constant | Description | Request Payload | Response Payload |
| :--- | :--- | :--- | :--- | :--- |
| `$JS.API.STREAM.LIST` | `api.JSApiStreamList` | Paged list of known Streams including all their current information | `api.JSApiStreamListRequest` | `api.JSApiStreamListResponse` |
| `$JS.API.STREAM.NAMES` | `api.JSApiStreamNames` | Paged list of Streams | `api.JSApiStreamNamesRequest` | `api.JSApiStreamNamesResponse` |
| `$JS.API.STREAM.CREATE.*` | `api.JSApiStreamCreateT` | Creates a new Stream | `api.StreamConfig` | `api.JSApiStreamCreateResponse` |
| `$JS.API.STREAM.UPDATE.*` | `api.JSApiStreamUpdateT` | Updates an existing Stream with new config | `api.StreamConfig` | `api.JSApiStreamUpdateResponse` |
| `$JS.API.STREAM.INFO.*` | `api.JSApiStreamInfoT` | Information about config and state of a Stream | empty payload, Stream name in subject | `api.JSApiStreamInfoResponse` |
| `$JS.API.STREAM.DELETE.*` | `api.JSApiStreamDeleteT` | Deletes a Stream and all its data | empty payload, Stream name in subject | `api.JSApiStreamDeleteResponse` |
| `$JS.API.STREAM.PURGE.*` | `api.JSApiStreamPurgeT` | Purges all of the data in a Stream, leaves the Stream | empty payload, Stream name in subject | `api.JSApiStreamPurgeResponse` |
| `$JS.API.STREAM.MSG.DELETE.*` | `api.JSApiMsgDeleteT` | Deletes a specific message in the Stream by sequence, useful for GDPR compliance | `api.JSApiMsgDeleteRequest` | `api.JSApiMsgDeleteResponse` |
| `$JS.API.STREAM.MSG.GET.*` | `api.JSApiMsgGetT` | Retrieves a specific message from the stream | `api.JSApiMsgGetRequest` | `api.JSApiMsgGetResponse` |
| `$JS.API.STREAM.SNAPSHOT.*` | `api.JSApiStreamSnapshotT` | Initiates a streaming backup of a stream's data | `api.JSApiStreamSnapshotRequest` | `api.JSApiStreamSnapshotResponse` |
| `$JS.API.STREAM.RESTORE.*` | `api.JSApiStreamRestoreT` | Initiates a streaming restore of a stream | `{}` | `api.JSApiStreamRestoreResponse` |
### Stream Templates

| Subject | Constant | Description | Request Payload | Response Payload |
| :--- | :--- | :--- | :--- | :--- |
| `$JS.API.STREAM.TEMPLATE.CREATE.*` | `api.JSApiTemplateCreateT` | Creates a Stream Template | `api.StreamTemplateConfig` | `api.JSApiStreamTemplateCreateResponse` |
| `$JS.API.STREAM.TEMPLATE.NAMES` | `api.JSApiTemplateNames` | Paged list of all known templates | `api.JSApiStreamTemplateNamesRequest` | `api.JSApiStreamTemplateNamesResponse` |
| `$JS.API.STREAM.TEMPLATE.INFO.*` | `api.JSApiTemplateInfoT` | Information about the config and state of a Stream Template | empty payload, Template name in subject | `api.JSApiStreamTemplateInfoResponse` |
| `$JS.API.STREAM.TEMPLATE.DELETE.*` | `api.JSApiTemplateDeleteT` | Delete a specific Stream Template **and all streams created by this template** | empty payload, Template name in subject | `api.JSApiStreamTemplateDeleteResponse` |
### Consumers

| Subject | Constant | Description | Request Payload | Response Payload |
| :--- | :--- | :--- | :--- | :--- |
| `$JS.API.CONSUMER.CREATE.*` | `api.JSApiConsumerCreateT` | Create an ephemeral Consumer | `api.ConsumerConfig`, Stream name in subject | `api.JSApiConsumerCreateResponse` |
| `$JS.API.CONSUMER.DURABLE.CREATE.*` | `api.JSApiDurableCreateT` | Create a durable Consumer | `api.ConsumerConfig`, Stream name in subject | `api.JSApiConsumerCreateResponse` |
| `$JS.API.CONSUMER.LIST.*` | `api.JSApiConsumerListT` | Paged list of known Consumers including their current info | `api.JSApiConsumerListRequest` | `api.JSApiConsumerListResponse` |
| `$JS.API.CONSUMER.NAMES.*` | `api.JSApiConsumerNamesT` | Paged list of known Consumer names | `api.JSApiConsumerNamesRequest` | `api.JSApiConsumerNamesResponse` |
| `$JS.API.CONSUMER.INFO.*.*` | `api.JSApiConsumerInfoT` | Information about a Consumer | empty payload, Stream and Consumer names in subject | `api.JSApiConsumerInfoResponse` |
| `$JS.API.CONSUMER.DELETE.*.*` | `api.JSApiConsumerDeleteT` | Deletes a Consumer | empty payload, Stream and Consumer names in subject | `api.JSApiConsumerDeleteResponse` |
### ACLs

It's hard to notice here but there is a clear pattern in these subjects; let's look at the various JetStream related subjects:

General information

```text
$JS.API.INFO
```

Stream and Consumer Admin

```text
$JS.API.STREAM.CREATE.<stream>
$JS.API.STREAM.UPDATE.<stream>
$JS.API.STREAM.DELETE.<stream>
```

@@ -130,7 +130,7 @@ $JS.API.STREAM.TEMPLATE.NAMES

Stream and Consumer Use

```text
$JS.API.CONSUMER.MSG.NEXT.<stream>.<consumer>
$JS.ACK.<stream>.<consumer>.x.x.x
$JS.SNAPSHOT.ACK.<stream>.<msg id>
```

@@ -139,7 +139,7 @@ $JS.SNAPSHOT.RESTORE.<stream>.<msg id>

Events and Advisories:

```text
$JS.EVENT.METRIC.CONSUMER_ACK.<stream>.<consumer>
$JS.EVENT.ADVISORY.MAX_DELIVERIES.<stream>.<consumer>
$JS.EVENT.ADVISORY.CONSUMER.MSG_TERMINATED.<stream>.<consumer>
```

@@ -161,17 +161,17 @@ $JS.EVENT.ADVISORY.API
This design allows you to easily create ACL rules that limit users to a specific Stream or Consumer and to specific verbs for administration purposes. To ensure only the receiver of a message can Ack it, response permissions ensure you can only publish to the Reply subject of messages you received.
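As an illustrative sketch of such a rule - the account, user, stream and consumer names here are made up - a server configuration fragment limiting a user to fetching and acknowledging from one Consumer might look like this:

```text
authorization {
  users = [
    {
      user: orders_consumer
      permissions: {
        publish: [
          "$JS.API.CONSUMER.MSG.NEXT.ORDERS.NEW",
          "$JS.API.CONSUMER.INFO.ORDERS.NEW",
          "$JS.ACK.ORDERS.NEW.>"
        ]
        subscribe: "_INBOX.>"
      }
    }
  ]
}
```

The `$JS.ACK.ORDERS.NEW.>` entry permits publishing Acks for that Consumer only; response permissions, as mentioned above, can tighten this further to just the Reply subjects of messages actually received.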
## Acknowledging Messages

Messages that need acknowledgement will have a Reply subject set, something like `$JS.ACK.ORDERS.test.1.2.2`; this is the prefix defined in `api.JetStreamAckPre` followed by `<stream>.<consumer>.<delivered count>.<stream sequence>.<consumer sequence>.<timestamp>.<pending messages>`.

In all the Synadia maintained API's you can simply do `msg.Respond(nil)` \(or language equivalent\) which will send nil to the reply subject.
## Fetching The Next Message From a Pull-based Consumer

If you have a pull-based Consumer you can send a standard NATS Request to `$JS.API.CONSUMER.MSG.NEXT.<stream>.<consumer>`; here the format is defined in `api.JetStreamRequestNextT` and requires populating using `fmt.Sprintf()`.

```text
$ nats req '$JS.API.CONSUMER.MSG.NEXT.ORDERS.test' '1'
Published 1 bytes to $JS.API.CONSUMER.MSG.NEXT.ORDERS.test
Received [js.1] : 'message 1'
```

@@ -183,7 +183,7 @@ The above request for the next message will stay in the server for as long as th
This is often not desired; pull consumers support a mode where a JSON document is sent describing the pull request.

```javascript
{
  "expires": "2020-11-10T12:41:00.075933464Z",
  "batch": 10,
```

@@ -192,7 +192,7 @@ This is often not desired, pull consumers support a mode where a JSON document i

This requests 10 messages and asks the server to keep this request until the specific `expires` time. This is useful when you poll the server frequently and do not want the pull requests to accumulate on the server; set the expire time to now + your poll frequency.
```javascript
{
  "batch": 10,
  "no_wait": true
```

@@ -201,7 +201,7 @@ This requests 10 messages and asks the server to keep this request until the spe

Here we see a second format of the Pull request that will not store the request on the queue at all; when there are no messages to deliver it will send a nil bytes message with a `Status` header of `404`, so you can know when you have reached the end of the stream for example. A `409` is returned if the Consumer has reached `MaxAckPending` limits.

```text
[rip@dev1]% nats req '$JS.API.CONSUMER.MSG.NEXT.ORDERS.NEW' '{"no_wait": true, "batch": 10}'
test --password test
13:45:30 Sending request on "$JS.API.CONSUMER.MSG.NEXT.ORDERS.NEW"
@@ -210,11 +210,11 @@ Here we see a second format of the Pull request that will not store the request
13:45:30 Description: No Messages
```
## Fetching From a Stream By Sequence

If you know the Stream sequence of a message you can fetch it directly; this does not support acks. Do a Request\(\) to `$JS.API.STREAM.MSG.GET.ORDERS` sending it the message sequence as payload. Here the prefix is defined in `api.JetStreamMsgBySeqT` which also requires populating using `fmt.Sprintf()`.

```text
$ nats req '$JS.API.STREAM.MSG.GET.ORDERS' '{"seq": 1}'
Published 1 bytes to $JS.STREAM.ORDERS.MSG.BYSEQ
Received [_INBOX.cJrbzPJfZrq8NrFm1DsZuH.k91Gb4xM] : '{
```

@@ -230,6 +230,7 @@ Received [_INBOX.cJrbzPJfZrq8NrFm1DsZuH.k91Gb4xM] : '{

The Subject shows where the message was received, Data is base64 encoded and Time is when it was received.
## Consumer Samples

Samples are published to a specific subject per Consumer, something like `$JS.EVENT.METRIC.CONSUMER_ACK.<stream>.<consumer>`; you can just subscribe to that and get `api.ConsumerAckMetric` messages in JSON format. The prefix is defined in `api.JetStreamMetricConsumerAckPre`.


@@ -1,8 +1,8 @@
# Data Replication

Replication allows you to move data between streams in either a 1:1 mirror style or by multiplexing multiple source streams into a new stream. In future builds this will allow data to be replicated between accounts as well, ideal for sending data from a Leafnode into a central store.

![](../.gitbook/assets/replication.png)

Here we have 2 main streams - _ORDERS_ and _RETURNS_ - these streams are clustered across 3 nodes. These Streams have short retention periods and are memory based.

@@ -10,23 +10,23 @@ We create a _ARCHIVE_ stream that has 2 _sources_ set, the _ARCHIVE_ will pull d

Finally, we create a _REPORT_ stream mirrored from _ARCHIVE_ that is not clustered and retains data for a month. The _REPORT_ Stream does not listen for any incoming messages; it can only consume data from _ARCHIVE_.
## Mirrors

A _mirror_ copies data from 1 other stream; as far as possible, IDs and ordering will match the source exactly. A _mirror_ does not listen on a subject for any data to be added. The Start Sequence and Start Time can be set, but no subject filter. A stream can only have 1 _mirror_, and if it is a mirror it cannot also have any _source_.
## Sources

A _source_ is a stream where data is copied from; one stream can have multiple sources and will read data in from them all. The stream will also listen for messages on its own subject. We can therefore not maintain absolute ordering: data from any single source will be in the correct order, but mixed in with other streams. You might also find the timestamps of streams can be older and newer mixed in together as a result.

A Stream with sources may also listen on subjects, but could have no listening subject. When using the `nats` CLI to create sourced streams, use `--subjects` to supply subjects to listen on.

A source can have Start Time or Start Sequence and can filter by a subject.
## Configuration

The ORDERS and RETURNS streams are created as normal; I will not show how to create them.

```text
$ nats s report
Obtaining Stream stats
```

@@ -40,7 +40,7 @@ Obtaining Stream stats

We now add the ARCHIVE:

```text
$ nats s add ARCHIVE --source ORDERS --source RETURNS
? Storage backend file
? Retention Policy Limits
```

@@ -61,7 +61,7 @@ $ nats s add ARCHIVE --source ORDERS --source RETURNS

And we add the REPORT:

```text
$ nats s add REPORT --mirror ARCHIVE
? Storage backend file
? Retention Policy Limits
```

@@ -79,7 +79,7 @@ $ nats s add REPORT --mirror ARCHIVE

When configured we'll see some additional information in a `nats stream info` output:

```text
$ nats stream info ARCHIVE
...
Source Information:
```

@@ -107,7 +107,7 @@ Here the `Lag` is how far behind we were reported as being last time we saw a me

We can confirm all our setup using a `nats stream report`:

```text
$ nats s report
+-------------------------------------------------------------------------------------------------------------------+
|                                                   Stream Report                                                   |
```

@@ -133,14 +133,14 @@ $ nats s report

We then create some data in both ORDERS and RETURNS:

```text
$ nats req ORDERS.new "ORDER {{Count}}" --count 100
$ nats req RETURNS.new "RETURN {{Count}}" --count 100
```

We can now see from a Stream Report that the data has been replicated:

```text
$ nats s report --dot replication.dot
Obtaining Stream stats
```

@@ -166,4 +166,5 @@ Obtaining Stream stats

Here we also pass the `--dot replication.dot` argument that writes a GraphViz format map of the replication setup.

![](../.gitbook/assets/replication-setup.png)


@@ -1,16 +1,18 @@

# Multi-tenancy & Resource Mgmt

## Multi Tenancy and Resource Management

JetStream is compatible with NATS 2.0 Multi Tenancy using Accounts. A JetStream enabled server supports creating fully isolated JetStream environments for different accounts.

To enable JetStream in a server we have to configure it at the top level first:

```text
jetstream: enabled
```

This will dynamically determine the available resources. It's recommended that you set specific limits though:
```text
jetstream {
  store_dir: /data/jetstream
  max_mem: 1G
```

@@ -20,7 +22,7 @@ jetstream {

At this point JetStream will be enabled, and if you have a server that does not have accounts enabled all users in the server would have access to JetStream.

```text
jetstream {
  store_dir: /data/jetstream
  max_mem: 1G
```

@@ -36,7 +38,7 @@ accounts {

Here the `HR` account would have access to all the resources configured on the server; we can restrict it:

```text
jetstream {
  store_dir: /data/jetstream
  max_mem: 1G
```

@@ -63,14 +65,13 @@ If you try to configure JetStream for an account without enabling it globally yo
As part of the JetStream efforts a new `nats` CLI is being developed to act as a single point of access to the NATS ecosystem.

This CLI has been seen throughout the guide; it's available in the Docker containers today and downloadable on the [Releases](https://github.com/nats-io/jetstream/releases) page.
### Configuration Contexts

The CLI has a number of environment configuration settings - where your NATS server is, credentials, TLS keys and more:

```text
$ nats --help
...
  -s, --server=NATS_URL         NATS servers
@@ -86,14 +87,13 @@ $ nats --help
...
```

You can set these using the CLI flag, the environment variable - like **NATS\_URL** - or using our context feature.

A context is a named configuration that stores all these settings; you can switch between access configurations and designate a default.
Creating one is easy, just specify the same settings to `nats context save`:

```text
$ nats context save example --server nats://nats.example.net:4222 --description 'Example.Net Server'
$ nats context save local --server nats://localhost:4222 --description 'Local Host' --select
$ nats context ls
```

@@ -105,7 +105,7 @@ Known contexts:

We passed `--select` to the `local` one meaning it will be the default when nothing is set.

```text
$ nats rtt
nats://localhost:4222:
```

@@ -126,8 +126,9 @@ All `nats` commands are context aware and the `nats context` command has various

Server URLs and Credential paths can be resolved via the `nsc` command by specifying a URL, for example to find user `new` within the `orders` account of the `acme` operator you can use this:

```text
$ nats context save example --description 'Example.Net Server' --nsc nsc://acme/orders/new
```

The server list and credentials path will now be resolved via `nsc`; if these are specifically set in the context, the specific context configuration will take precedence.


@@ -1,3 +1,5 @@

# Using a Load Balancer for External Access to NATS

## Using a Load Balancer for External Access to NATS

In the example below, you can find how to use an [AWS Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) to connect externally to a cluster that has TLS setup.

@@ -42,7 +44,7 @@ Also, it would be recommended to set [no\_advertise](../nats-server/configuratio

With the following, you can create a 3-node NATS Server cluster:

```bash
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/b55687a97a5fd55485e1af302fbdbe43d2d3b968/nats-server/leafnodes/nats-cluster.yaml
```
@@ -85,13 +87,13 @@ data:

Now let's expose the NATS Server by creating an L4 load balancer on Azure:

```bash
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/b55687a97a5fd55485e1af302fbdbe43d2d3b968/nats-server/leafnodes/lb.yaml
```

Confirm the public IP that was allocated to the `nats-lb` service that was created, in this case it is `52.155.49.45`:

```text
$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   81d   <none>
```

@@ -113,13 +115,13 @@ leaf {

You can also add a NATS Streaming cluster into the cluster connecting to the port 4222:

```bash
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/b55687a97a5fd55485e1af302fbdbe43d2d3b968/nats-server/leafnodes/stan-server.yaml
```

Now if you create two NATS Servers that connect to the same leafnode port, they will be able to exchange messages with each other:

```bash
nats-server -c leafnodes/leaf.conf -p 4222 &
nats-server -c leafnodes/leaf.conf -p 4223 &
```
@@ -461,3 +461,4 @@ Subscribe to get all the messages:

```bash
stan-sub -c stan -all foo
```
@@ -274,7 +274,7 @@ As noted above, the `routez` endpoint does support the `subs` argument from the

#### Example

* Get JetStream information: [http://host:port/jsz?accounts=1&streams=1&consumers=1&config=1](http://host:port/jsz?accounts=1&streams=1&consumers=1&config=1)

#### Response
@@ -1,9 +1,6 @@

# Lame Duck Mode

In production we recommend that a server is shut down with lame duck mode as a graceful way to slowly evict clients. With large deployments this mitigates the "thundering herd" situation that will place CPU pressure on servers as TLS enabled clients reconnect.

## Server

@@ -13,16 +10,9 @@ Lame duck mode is initiated by signaling the server:

```bash
nats-server --signal ldm
```

After entering lame duck mode, the server will stop accepting new connections, wait for a 10 second grace period, then begin to evict clients over a period of time configurable by the [lame\_duck\_duration](https://docs.nats.io/nats-server/configuration#runtime-configuration) configuration option. This period defaults to 2 minutes.
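The grace period and the eviction window map to two settings in the server configuration file. A minimal sketch showing the defaults \(values here are illustrative; see the runtime configuration reference for details\):

```text
# nats-server configuration fragment (defaults shown)
lame_duck_grace_period: "10s"   # wait before evictions begin
lame_duck_duration: "2m"        # window over which clients are evicted
```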
## Clients

When entering lame duck mode, the server will send a message to clients. Some maintainer-supported clients will invoke an optional callback indicating that a server is entering lame duck mode. This is used for cases where an application can benefit from preparing for the short outage between the time it is evicted and automatically reconnected to another server.
@@ -38,7 +38,7 @@ nats-sub -s nats://localhost:4222 ">"

`nats-sub` is a subscriber sample included with all NATS clients. `nats-sub` subscribes to a subject and prints out any messages received. You can find the source code to the go version of `nats-sub` in [GitHub](https://github.com/nats-io/nats.go/tree/master/examples). After starting the subscriber you should see a message on 'A' that a new client connected.

We have two servers and a client. Time to simulate our rolling upgrade. But wait, before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to provide an additional place where clients can go other than 'A' and ensure we don't end up with a single server serving all the clients after the upgrade procedure. Clients will randomly select a server when connecting unless a special option is provided that disables that functionality \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](upgrading_cluster.md). Suffice it to say that clients redistribute themselves about evenly between all servers in the cluster. In our case 1/2 of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.

Let's start our temporary server:
@@ -68,11 +68,11 @@ Note: You may need to scroll horizontally to see all columns.

| hb\_interval | Interval at which the server sends a heartbeat to a client | Duration | `hb_interval: "10s"` | `30s` | v0.3.6 |
| hb\_timeout | How long the server waits for a heartbeat response from the client before considering it a failed heartbeat | Duration | `hb_timeout: "10s"` | `10s` | v0.3.6 |
| hb\_fail\_count | Count of failed heartbeats before the server closes the client connection. The actual total wait is: \(fail count + 1\) \* \(hb interval + hb timeout\) | Number | `hb_fail_count: 2` | `10` | v0.3.6 |
| ft\_group | In Fault Tolerance mode, you can start a group of streaming servers with only one server being active while others are running in standby mode. This is the name of this FT group | String | `ft_group: "my_ft_group"` | N/A | v0.4.0 |
| partitioning | If set to true, a list of channels must be defined in the store\_limits/channels section. This section then serves two purposes: overriding limits for a given channel or adding it to the partition | `true` or `false` | `partitioning: true` | `false` | v0.5.0 |
| sql\_options | SQL Store specific options | Map: `sql_options: { ... }` | [**See details below**](cfgfile.md#sql-options-configuration) | | v0.7.0 |
| cluster | Cluster Configuration | Map: `cluster: { ... }` | [**See details below**](cfgfile.md#cluster-configuration) | | v0.9.0 |
| syslog\_name | On Windows, when running several servers as a service, use this name for the event source | String | | | v0.11.0 |
| encrypt | Specify if the server should encrypt messages \(only the payload\) when storing them | `true` or `false` | `encrypt: true` | `false` | v0.12.0 |
| encryption\_cipher | Cipher to use for encryption. Currently supports AES and CHACHA \(ChaChaPoly\) | `AES` or `CHACHA` | `encryption_cipher: "AES"` | Depends on platform | v0.12.0 |
| encryption\_key | Encryption key. It is recommended to specify the key through the `NATS_STREAMING_ENCRYPTION_KEY` environment variable instead | String | `encryption_key: "mykey"` | N/A | v0.12.0 |
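The `hb_fail_count` row above gives the worst-case wait before an unresponsive client is closed. A quick sketch of that arithmetic using the defaults \(`hb_interval: 30s`, `hb_timeout: 10s`, `hb_fail_count: 10`\):

```shell
# Worst-case wait before the server closes an unresponsive client:
#   (fail count + 1) * (hb interval + hb timeout)
hb_interval=30    # seconds (default)
hb_timeout=10     # seconds (default)
hb_fail_count=10  # default
total=$(( (hb_fail_count + 1) * (hb_interval + hb_timeout) ))
echo "${total}s"  # 440s, i.e. a little over 7 minutes
```

Lowering `hb_fail_count` or `hb_interval` shrinks this window at the cost of more heartbeat traffic.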
@@ -81,7 +81,7 @@ Note: You may need to scroll horizontally to see all columns.

| password | Password used with above `username` | String | `password: "password"` | N/A | v0.19.0 |
| token | Authentication token if the NATS Server requires a token | String | `token: "some_token"` | N/A | v0.19.0 |
| nkey\_seed\_file | Path to an NKey seed file \(1\) if NKey authentication is used | File Path | `nkey_seed_file: "/path/to/some/seedfile"` | N/A | v0.19.0 |
| replace\_durable | Replace the existing durable subscription instead of reporting a duplicate durable error | `true` or `false` | `replace_durable: true` | `false` | v0.20.0 |

Notes:
@@ -196,10 +196,10 @@ For a given channel, the possible parameters are:

| raft\_lease\_timeout | Specifies how long a leader waits without being able to contact a quorum of nodes before stepping down as leader | Duration | `raft_lease_timeout: "1s"` | `1s` | v0.11.2 |
| raft\_commit\_timeout | Specifies the time without an Apply\(\) operation before sending a heartbeat to ensure timely commit. Due to random staggering, may be delayed as much as 2x this value | Duration | `raft_commit_timeout: "100ms"` | `100ms` | v0.11.2 |
| proceed\_on\_restore\_failure | Allow a non-leader node to proceed with restore failures; do not use unless you understand the risks! | `true` or `false` | `proceed_on_restore_failure: true` | `false` | v0.17.0 |
| allow\_add\_remove\_node | Enable the ability to send NATS requests to the leader to add/remove cluster nodes | `true` or `false` | `allow_add_remove_node: true` | `false` | v0.19.0 |
| bolt\_free\_list\_sync | Causes the RAFT log to synchronize the free list on write operations. Reduces performance at runtime, but makes the recovery faster | `true` or `false` | `bolt_free_list_sync: true` | `false` | v0.21.0 |
| bolt\_free\_list\_map | Sets the backend freelist type to use a map instead of the default array type. Improves performance for large RAFT logs with a fragmented free list | `true` or `false` | `bolt_free_list_map: true` | `false` | v0.21.0 |
| nodes\_connections | Enable creation of dedicated NATS connections to communicate with other nodes | `true` or `false` | `nodes_connections: true` | `false` | v0.21.0 |
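Putting a few of the options above together, a clustered section in the configuration file might look like the following sketch \(the node and peer names are hypothetical, and only a handful of options are shown\):

```text
cluster {
  node_id: "a"
  peers: ["b", "c"]
  raft_lease_timeout: "1s"
  raft_commit_timeout: "100ms"
  allow_add_remove_node: true
}
```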
## SQL Options Configuration

@@ -209,5 +209,5 @@ For a given channel, the possible parameters are:

| source | How to connect to the database. This is driver specific | String | `source: "ivan:pwd@/nss_db"` | N/A | v0.7.0 |
| no\_caching | Enable/Disable caching for messages and subscriptions operations | `true` or `false` | `no_caching: false` | `false` \(caching enabled\) | v0.7.0 |
| max\_open\_conns | Maximum number of opened connections to the database. Value &lt;= 0 means no limit | Number | `max_open_conns: 5` | unlimited | v0.7.0 |
| bulk\_insert\_limit | Maximum number of messages stored with a single SQL "INSERT" statement. The default behavior is to send individual insert commands as part of a SQL transaction | Number | `bulk_insert_limit: 1000` | `0` \(not enabled\) | v0.20.0 |
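As an illustration, an `sql_options` block combining the settings above might look like the following sketch \(the MySQL driver choice and the credentials in `source` are assumptions for the example\):

```text
sql_options {
  driver: "mysql"
  source: "ivan:pwd@/nss_db"
  no_caching: false
  max_open_conns: 5
  bulk_insert_limit: 1000
}
```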
@@ -60,7 +60,7 @@ Generated account key - private key stored "~/.nkeys/AAA/accounts/B/B.nk"

Success! - added account "B"
```

With the account and a couple of users in place, let's push all the accounts to the nats-account-server. If the account JWT server URL is not set on the operator, you may want to set it. Note that account servers typically require the path `/jwt/v1` in addition to the protocol and hostport \(or you can specify the `--account-jwt-server-url` flag to nsc's `push` command\).

```text
nsc edit operator --account-jwt-server-url http://localhost:9090/jwt/v1
```
@@ -1,4 +1,4 @@

# nats

A command line utility to interact with and manage NATS.

@@ -10,20 +10,20 @@ Check out the repo for more details: [github.com/nats-io/natscli](https://github

For macOS:

```text
> brew tap nats-io/nats-tools
> brew install nats-io/nats-tools/nats
```

For Arch Linux:

```text
> yay natscli
```

For Docker:

```text
docker pull synadia/nats-box:latest
docker run -ti synadia/nats-box
```
@@ -39,7 +39,7 @@ The `nats` utility has a command for creating `bcrypt` hashes. This can be used

With `nats` installed:

```text
> nats server passwd
? Enter password [? for help] **********************
? Reenter password [? for help] **********************

@@ -49,7 +49,7 @@ $2a$11$3kIDaCxw.Glsl1.u5nKa6eUnNDLV5HV9tIuUp7EHhMt6Nm9myW1aS

To use the password on the server, add the hash into the server configuration file's authorization section.

```text
authorization {
  user: derek
  password: $2a$11$3kIDaCxw.Glsl1.u5nKa6eUnNDLV5HV9tIuUp7EHhMt6Nm9myW1aS
@@ -57,3 +57,4 @@ To use the password on the server, add the hash into the server configuration fi

```

Note that the client will still have to provide the plain-text version of the password; the server, however, will only store the hash to verify that the password is correct when supplied.