mirror of https://github.com/taigrr/nats.docs synced 2025-01-18 04:03:23 -08:00

GitBook: [master] 14 pages modified

Ginger Collison 2021-03-15 14:58:32 +00:00 committed by gitbook-bot
parent 45ab567ab3
commit 9791a4d12c
GPG Key ID: 07D2180C7B12D0FF
13 changed files with 5 additions and 35 deletions


@@ -132,17 +132,7 @@
 * [GitHub Actions](jetstream/configuration_mgmt/github_actions.md)
 * [Kubernetes Controller](jetstream/configuration_mgmt/kubernetes_controller.md)
 * [Disaster Recovery](jetstream/disaster_recovery.md)
-* [Model Deep Dive](jetstream/model_deep_dive/README.md)
-  * [Stream Limits, Retention Modes and Discard Policy](jetstream/model_deep_dive/stream-limits-retention-modes-and-discard-policy.md)
-  * [Message Deduplication](jetstream/model_deep_dive/message-deduplication.md)
-  * [Acknowledgment Models](jetstream/model_deep_dive/acknowledgment-models.md)
-  * [Exactly Once Delivery](jetstream/model_deep_dive/exactly-once-delivery.md)
-  * [Consumer Starting Position](jetstream/model_deep_dive/consumer-starting-position.md)
-  * [Ephemeral Consumers](jetstream/model_deep_dive/ephemeral-consumers.md)
-  * [Consumer Message Rates](jetstream/model_deep_dive/consumer-message-rates.md)
-  * [Stream Templates](jetstream/model_deep_dive/stream-templates.md)
-  * [Ack Sampling](jetstream/model_deep_dive/ack-sampling.md)
-  * [Storage Overhead](jetstream/model_deep_dive/storage-overhead.md)
+* [Model Deep Dive](jetstream/model_deep_dive.md)
 * [NATS API Reference](jetstream/nats_api_reference.md)
 * [Multi-tenancy & Resource Mgmt](jetstream/resource_management.md)


@@ -10,7 +10,7 @@ Streams can consume many subjects. Here we have `ORDERS.*` but we could also con
 Streams support various retention policies - they can be kept based on limits like max count, size or age but also more novel methods like keeping them as long as any Consumers have them unacknowledged, or work queue like behavior where a message is removed after first ack.
-Streams support deduplication using a `Nats-Msg-Id` header and a sliding window within which to track duplicate messages. See the [Message Deduplication](../model_deep_dive/#message-deduplication) section.
+Streams support deduplication using a `Nats-Msg-Id` header and a sliding window within which to track duplicate messages. See the [Message Deduplication](../model_deep_dive.md#message-deduplication) section.
 When defining Streams the items below make up the entire configuration of the set.
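As an illustration of such a configuration, the sketch below shows a stream definition for the `ORDERS.*` example in JSON form. Field names follow the JetStream API; the concrete values (one-year `max_age`, 1024-byte `max_msg_size`, two-minute `duplicate_window`) are illustrative assumptions, not taken from this commit:

```json
{
  "name": "ORDERS",
  "subjects": ["ORDERS.*"],
  "retention": "limits",
  "storage": "file",
  "max_msgs": -1,
  "max_bytes": -1,
  "max_age": 31536000000000000,
  "max_msg_size": 1024,
  "discard": "old",
  "duplicate_window": 120000000000
}
```

Durations such as `max_age` and `duplicate_window` are expressed in nanoseconds, and `-1` means unlimited.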


@@ -24,9 +24,9 @@ In both `WorkQueuePolicy` and `InterestPolicy` the age, size and count limits wi
 A final control is the Maximum Size any single message may have. NATS has its own limit for maximum size \(1 MiB by default\), but you can say a Stream will only accept messages up to 1024 bytes using `MaxMsgSize`.
-The `Discard Policy` sets how messages are discard when limits set by `LimitsPolicy` are reached. The `DiscardOld` option removes old messages making space for new, while `DiscardNew` refuses any new messages.
+The `Discard Policy` sets how messages are discarded when limits set by `LimitsPolicy` are reached. The `DiscardOld` option removes old messages making space for new, while `DiscardNew` refuses any new messages.
-The `WorkQueuePolicy` mode is a specialized mode where a message, once consumed and acknowledged, is discarded from the Stream. In this mode there are a few limits on consumers. Inherently it's about 1 message to one consumer, this means you cannot have overlapping consumers defined on the Stream - needs unique filter subjects.
+The `WorkQueuePolicy` mode is a specialized mode where a message, once consumed and acknowledged, is discarded from the Stream. In this mode, there are a few limits on consumers. Inherently it's about one message to one consumer; this means you cannot have overlapping consumers defined on the Stream - each consumer needs a unique filter subject.
 ## Message Deduplication
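The `DiscardOld`/`DiscardNew` distinction can be illustrated with a toy model, a plain-Python sketch of a bounded buffer that is purely hypothetical and unrelated to the actual server implementation:

```python
from collections import deque

def make_stream(max_msgs, discard="old"):
    # Toy model of a limits-based stream: a bounded message buffer
    # with DiscardOld vs DiscardNew semantics.
    return {"msgs": deque(), "max": max_msgs, "discard": discard}

def publish(stream, msg):
    msgs = stream["msgs"]
    if len(msgs) >= stream["max"]:
        if stream["discard"] == "old":
            msgs.popleft()   # DiscardOld: drop the oldest to make room
        else:
            return False     # DiscardNew: refuse the new message
    msgs.append(msg)
    return True

s_old = make_stream(2, discard="old")
s_new = make_stream(2, discard="new")
for m in ("a", "b", "c"):
    publish(s_old, m)
    accepted = publish(s_new, m)
print(list(s_old["msgs"]), accepted)  # ['b', 'c'] False
```

With a limit of two messages, the `DiscardOld` stream ends up holding the two newest messages, while the `DiscardNew` stream keeps the two oldest and refuses the third.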
@@ -208,7 +208,7 @@ The `+NXT` acknowledgement can have a few formats: `+NXT 10` requests 10 message
 JetStream supports Exactly Once delivery by combining Message Deduplication and double acks.
-On the publishing side you can avoid duplicate message ingestion using the [Message Deduplication](./#message-deduplication) feature.
+On the publishing side you can avoid duplicate message ingestion using the [Message Deduplication](model_deep_dive.md#message-deduplication) feature.
 Consumers can be 100% sure a message was correctly processed by requesting that the server acknowledge having received your acknowledgement, done by setting a reply subject on the Ack. If you receive this response you will never receive that message again.
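On the publish side, the sliding deduplication window amounts to remembering message IDs (the value of the `Nats-Msg-Id` header) for a bounded time. A minimal sketch in plain Python with an injectable clock; `DedupWindow` and `order-1` are hypothetical names for illustration, not part of any NATS API:

```python
import time

class DedupWindow:
    # Toy sketch of publisher-side deduplication: track message IDs
    # inside a sliding time window and ignore repeats within it.
    def __init__(self, window_secs, clock=time.monotonic):
        self.window = window_secs
        self.clock = clock
        self.seen = {}  # msg_id -> first-seen timestamp

    def ingest(self, msg_id):
        now = self.clock()
        # Expire IDs that have fallen out of the window.
        self.seen = {i: t for i, t in self.seen.items() if now - t < self.window}
        if msg_id in self.seen:
            return False  # duplicate: ignored inside the window
        self.seen[msg_id] = now
        return True

fake_now = [0.0]
w = DedupWindow(120, clock=lambda: fake_now[0])
first = w.ingest("order-1")   # True: new ID
dup = w.ingest("order-1")     # False: duplicate within the window
fake_now[0] = 200.0           # advance past the 120 s window
again = w.ingest("order-1")   # True: the window expired, so the ID is accepted again
print(first, dup, again)  # True False True
```

The key property is the one the section describes: a resent message with the same ID is only rejected while the window is open, which is why the publisher must retry within the configured duplicate window.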


@@ -1,2 +0,0 @@
-# Ack Sampling


@@ -1,2 +0,0 @@
-# Acknowledgment Models


@@ -1,2 +0,0 @@
-# Consumer Message Rates


@@ -1,2 +0,0 @@
-# Consumer Starting Position


@@ -1,2 +0,0 @@
-# Ephemeral Consumers


@@ -1,2 +0,0 @@
-# Exactly Once Delivery


@@ -1,2 +0,0 @@
-# Message Deduplication


@@ -1,2 +0,0 @@
-# Storage Overhead


@@ -1,2 +0,0 @@
-# Stream Limits, Retention Modes and Discard Policy


@@ -1,2 +0,0 @@
-# Stream Templates