mirror of https://github.com/taigrr/nats.docs, synced 2025-01-18 04:03:23 -08:00

updates

Signed-off-by: Colin Sullivan <colin@synadia.com>

This commit is contained in: parent 70a9b61e4e, commit d3ebabbeb0
@ -206,6 +206,7 @@

## JetStream

* [About JetStream](jetstream/about_jetstream/jetstream.md)
* [Concepts](jetstream/concepts/concepts.md)
* [Streams](jetstream/concepts/streams.md)
* [Consumers](jetstream/concepts/consumers.md)

@ -219,8 +220,10 @@

* [Streams](jetstream/administration/streams.md)
* [Consumers](jetstream/administration/consumers.md)
* [Monitoring](jetstream/monitoring/monitoring.md)
* [Clustering](jetstream/clustering/clustering.md)
* [Administration](jetstream/clustering/administration.md)
* [Configuration Management](jetstream/configuration_mgmt/configuration_mgmt.md)
* [NATS Admin CLI](jetstream/configuration_mgmt/configuration_mgmt.md#nats-admin-cli)
* [Terraform](jetstream/configuration_mgmt/configuration_mgmt.md#terraform)
* [GitHub Actions](jetstream/configuration_mgmt/github_actions.md)
* [Kubernetes Controller](jetstream/configuration_mgmt/kubernetes_controller.md)
@ -1,4 +1,4 @@

# JetStream

JetStream was created to solve the problems identified with streaming technology today: complexity, fragility, and a lack of scalability. Some technologies address these better than others, but no current streaming technology is truly multi-tenant, horizontally scalable, and supports multiple deployment models. No technology we are aware of can scale from edge to cloud under the same security context while having complete deployment observability for operations.
@ -20,15 +20,15 @@ In order to ensure data consistency across complete restarts, a quorum of server

**Meta Group** - all servers join the Meta Group, and the JetStream API is managed by this group. A leader is elected, and this leader owns the API and takes care of server placement.

FIXME

**Stream Group** - each Stream creates a RAFT group; this group synchronizes state and data between its members. The elected leader handles ACKs and so forth; if there is no leader, the stream will not accept messages.

FIXME

**Consumer Group** - each Consumer creates a RAFT group; this group synchronizes consumer state between its members. The group lives on the same machines as the Stream Group and handles consumption ACKs and so on. Each Consumer has its own group.

FIXME

### Cluster Size

Generally we recommend 3 or 5 JetStream-enabled servers in a NATS cluster. This balances scalability with a tolerance for failure. For example, if 5 servers are JetStream-enabled, you would want two servers in one "zone", two servers in another, and the remaining server in a third. This means you can lose any one "zone" at any time and continue operating.
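A minimal sketch of what a JetStream-enabled, clustered server configuration could look like; the server names, ports, and paths here are illustrative assumptions, not taken from this document:

```nohighlight
# illustrative node config (names/paths are assumptions)
server_name: n1-c1

jetstream {
  store_dir: /data/jetstream
}

cluster {
  name: C1
  listen: 0.0.0.0:6222
  routes: [
    nats-route://n2-c1:6222
    nats-route://n3-c1:6222
  ]
}
```

Each node gets its own `server_name` and `routes` entries pointing at its peers.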
@ -103,90 +103,4 @@ cluster {

}
```

Add nodes as necessary. Choose a data directory that makes sense for your environment, ideally a fast SSD, and launch each server. After two servers are running you'll be ready to use JetStream.
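With a configuration like the above on each node, every server can then be started pointing at its own config file (the file name `n1.conf` here is a hypothetical example):

```nohighlight
$ nats-server -c n1.conf
```

The `jetstream` block in the configuration enables JetStream for that server; the `-js` flag can alternatively enable it with default settings.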
## Administration

Once a JetStream cluster is operating, interactions with the `nats` CLI are the same as before. For these examples, let's assume we have a 5-server cluster, n1-n5, in a cluster named C1.

### Creating clustered streams

When adding a stream using the `nats` CLI you will be asked for the number of replicas. When you choose a number greater than 1 (we suggest 1, 3 or 5), the data will be stored on multiple nodes in your cluster using the RAFT protocol described above.
```nohighlight
$ nats str add ORDERS --replicas 3
....
Information for Stream ORDERS_4 created 2021-02-05T12:07:34+01:00
....
Configuration:
....
Replicas: 3

Cluster Information:

Name: C1
Leader: n1-c1
Replica: n4-c1, current, seen 0.07s ago
Replica: n3-c1, current, seen 0.07s ago
```
Above you can see that the cluster information will be reported in all cases where Stream info is shown, such as after add or when using `nats stream info`.

Here we have a stream in the NATS cluster `C1`; its current leader is node `n1-c1` and it has 2 followers - `n4-c1` and `n3-c1`.

`current` indicates that the followers are up to date and have all the messages; here both cluster peers were seen very recently.

The replica count cannot be edited once configured.
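Besides inspecting a single stream, the `nats` CLI can also summarize state across all streams; the exact columns and output vary by CLI version, so none are shown here:

```nohighlight
$ nats stream report
```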
### Forcing leader election

Every RAFT group has a leader that's elected by the group when needed. Generally there is no reason to interfere with this process, but you might want to trigger a leader change at a convenient time. Leader elections represent short interruptions to the stream, so if you know you will work on a node later it might be worth moving leadership away from it ahead of time.

Moving leadership away from a node does not remove it from the cluster and does not prevent it from becoming a leader again; this is merely a triggered leader election.
```nohighlight
$ nats stream cluster step-down ORDERS
14:32:17 Requesting leader step down of "n1-c1" in a 3 peer RAFT group
14:32:18 New leader elected "n4-c1"

Information for Stream ORDERS created 2021-02-05T12:07:34+01:00
...
Cluster Information:

Name: c1
Leader: n4-c1
Replica: n1-c1, current, seen 0.12s ago
Replica: n3-c1, current, seen 0.12s ago
```
### Evicting a peer

Generally when shutting down NATS, including via Lame Duck Mode, the cluster will notice this and continue to function. A 5-node cluster can withstand 2 nodes being down.
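Lame Duck Mode is entered by signalling the server; a sketch, assuming the server's PID file is at the path shown (the path is an assumption):

```nohighlight
$ nats-server --signal ldm=/var/run/nats-server.pid
```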
There might be a case, though, where you know a machine will never return, and you want to signal to JetStream that it will not come back. This removes the machine from the Stream in question and from all of its Consumers.

After the node is removed, the cluster will notice that the replica count is no longer honored and will immediately pick a new node and start replicating data to it. The new node is selected using the same placement rules as the existing stream.
```nohighlight
$ nats s cluster peer-remove ORDERS
? Select a Peer n4-c1
14:38:50 Removing peer "n4-c1"
14:38:50 Requested removal of peer "n4-c1"
```
At this point the stream and all its consumers will have removed `n4-c1` from the group; they will all start new peer selection and data replication.
```nohighlight
|
||||
$ nats stream info ORDERS
|
||||
....
|
||||
Cluster Information:
|
||||
|
||||
Name: c1
|
||||
Leader: n3-c1
|
||||
Replica: n1-c1, current, seen 0.02s ago
|
||||
Replica: n2-c1, outdated, seen 0.42s ago
|
||||
```
We can see a new replica was picked, the stream is back to a replication level of 3, and `n4-c1` is no longer active in this Stream or any of its Consumers.
@ -4,7 +4,7 @@ In JetStream the configuration for storing messages is defined separately from h

We'll discuss these two subjects in the context of this architecture.



While this is an incomplete architecture, it does show a number of key points:
Loading…
x
Reference in New Issue
Block a user