
GitBook: [master] 82 pages modified

Ginger Collison
2019-12-18 22:09:45 +00:00
committed by gitbook-bot
parent 7e27f03c98
commit b082996143
71 changed files with 865 additions and 930 deletions


@@ -16,7 +16,7 @@ When detected at the client, the application is notified and messages are droppe
## Slow consumers identified in the client
-A [client can detect it is a slow consumer](../../developing-with-nats/intro-5/slow.md) on a local connection and notify the application through the asynchronous error callback. It is better to catch a slow consumer locally in the client than to let the server detect the condition. This example demonstrates how to define and register an asynchronous error handler that handles slow consumer errors.
+A [client can detect it is a slow consumer]() on a local connection and notify the application through the asynchronous error callback. It is better to catch a slow consumer locally in the client than to let the server detect the condition. This example demonstrates how to define and register an asynchronous error handler that handles slow consumer errors.
```go
func natsErrHandler(nc *nats.Conn, sub *nats.Subscription, natsErr error) {
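  // The diff hunk cuts this example short. What follows is a sketch of a
  // plausible completion using the public nats.go API (nats.ErrSlowConsumer,
  // Subscription.Pending, and the nats.ErrorHandler connect option), not the
  // literal continuation from the original file.
  if natsErr == nats.ErrSlowConsumer {
    pendingMsgs, _, err := sub.Pending()
    if err != nil {
      fmt.Printf("couldn't get pending messages: %v\n", err)
      return
    }
    fmt.Printf("falling behind with %d pending messages on subject %q\n",
      pendingMsgs, sub.Subject)
    // Alert operations, shed load, or scale out consumers here.
  }
}

// Register the handler when connecting (the URL is illustrative):
nc, err := nats.Connect("nats://localhost:4222",
  nats.ErrorHandler(natsErrHandler))
```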
@@ -66,7 +66,7 @@ Apart from using [NATS streaming](../../nats-streaming-concepts/intro.md) or opt
**Scaling with queue subscribers**
-This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices: each instance of your microservice receives a portion of the messages to process, and you scale by simply adding more instances of your service. No code changes, configuration changes, or downtime whatsoever.
+This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../nats-concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices: each instance of your microservice receives a portion of the messages to process, and you scale by simply adding more instances of your service. No code changes, configuration changes, or downtime whatsoever.
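As a sketch of what that looks like in code, here is a minimal queue subscriber in Go; the subject `updates` and queue name `workers` are illustrative, not from the original text:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222") // URL is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Every instance of the service issues the same call. NATS delivers each
	// message on "updates" to exactly one member of the "workers" queue
	// group, so starting more instances spreads the load automatically.
	if _, err := nc.QueueSubscribe("updates", "workers", func(m *nats.Msg) {
		log.Printf("processing %s", string(m.Data))
	}); err != nil {
		log.Fatal(err)
	}

	select {} // block forever so the subscriber keeps running
}
```

Running a second copy of this program immediately halves each instance's share of the traffic; NATS balances delivery within the queue group on its own.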
**Create a subject namespace that can scale**


@@ -38,7 +38,7 @@ nats-sub -s nats://localhost:4222 ">"
`nats-sub` is a subscriber sample included with all NATS clients. `nats-sub` subscribes to a subject and prints out any messages received. You can find the source code of the Go version of `nats-sub` [here](https://github.com/nats-io/nats.go/tree/master/examples). After starting the subscriber, you should see a message on 'A' that a new client connected.
-We have two servers and a client. Time to simulate our rolling upgrade. But wait: before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to give clients an additional place to go other than 'A', ensuring we don't end up with a single server serving all clients after the upgrade. Clients randomly select a server when connecting unless a special option disables that behavior \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](../../developing-with-nats/intro-1/random.md). Suffice it to say that clients redistribute themselves about evenly among all servers in the cluster; in our case, half of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
+We have two servers and a client. Time to simulate our rolling upgrade. But wait: before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to give clients an additional place to go other than 'A', ensuring we don't end up with a single server serving all clients after the upgrade. Clients randomly select a server when connecting unless a special option disables that behavior \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](). Suffice it to say that clients redistribute themselves about evenly among all servers in the cluster; in our case, half of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
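For reference, nats.go spells this option `nats.DontRandomize()`; the default \(randomization on\) is what produces the even redistribution just described. A minimal sketch with illustrative URLs, error handling elided:

```go
// Default: the client shuffles this server list on connect and reconnect,
// which spreads clients roughly evenly across 'A', 'B', and 'C'.
nc, _ := nats.Connect("nats://hostA:4222,nats://hostB:4222,nats://hostC:4222")

// Disabling randomization preserves the listed order: every client tries
// hostA first, recreating the thundering-herd problem during an upgrade.
nc, _ = nats.Connect("nats://hostA:4222,nats://hostB:4222,nats://hostC:4222",
	nats.DontRandomize())
```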
Let's start our temporary server:
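The concrete command follows in the original page; a representative invocation \(the ports are assumptions: client port 4224, cluster port 6224, seeded from an existing route on 6222\) would look like:

```bash
nats-server -p 4224 -cluster nats://localhost:6224 -routes nats://localhost:6222
```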