Mirror of https://github.com/taigrr/nats.docs (synced 2025-01-18 04:03:23 -08:00)

GitBook: [master] 82 pages modified — committed by gitbook-bot (parent 7e27f03c98, commit b082996143)
| `max_traced_msg_len` | Set a limit to the trace of the payload of a message. |
| `disable_sublist_cache` | Disable sublist cache globally for accounts. |
| [`operator`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Path to an operator JWT. |
| [`ping_interval`](../../developing-with-nats/intro/pingpong.md) | Interval in seconds in which the server checks if a connection is active. |
| `port` | Port for client connections. |
| `reconnect_error_reports` | Number of failed attempts to reconnect a route, gateway or leaf node connection before an error is reported. Default is to report every attempt. |
| [`resolver`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Resolver type `MEMORY` or `URL` for account JWTs. |
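As a sketch, a configuration file exercising a few of the options above might look like this (the values are illustrative, not defaults):

```text
# Illustrative values only
port: 4222                    # client connections
ping_interval: 120            # seconds between server PINGs
max_traced_msg_len: 256       # truncate traced payloads to 256 bytes
disable_sublist_cache: false
reconnect_error_reports: 10   # report every 10th failed reconnect attempt
```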
* Subjects that the user is allowed to publish are exported to the cluster.
* Subjects the user is allowed to subscribe to are imported into the leaf node.
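The direction of these mappings can be sketched with a hypothetical remote whose connecting user carries permissions (names and credentials are illustrative):

```text
# Leaf node side: connect to the cluster as user "leaf_user"
leafnodes {
    remotes = [
        { url: "nats://leaf_user:s3cr3t@cluster-host:7422" }
    ]
}
```

Here, whatever `leaf_user` may publish is exported to the cluster, and whatever it may subscribe to is imported into the leaf node.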
> Leaf Nodes are an important component as a way to bridge traffic between local NATS servers you control and servers that are managed by a third party. Synadia's [NATS Global Service (NGS)](https://www.synadia.com/) allows accounts to use leaf nodes, but gain accessibility to the global network to inexpensively connect geographically distributed servers or small clusters.

[LeafNode Configuration Options](leafnode_conf.md)

## LeafNode Configuration Tutorial
The main server is just a standard NATS server. Clients to the main cluster are just using token authentication, but any kind of authentication can be used. The server allows leaf node connections at port 7422 (the default port):

```text
leafnodes {
    port: 7422
}
authorization {
    token: "s3cr3t"
}
```
Start the server:

```bash
nats-server -c /tmp/server.conf
...
```
We create a replier on the server to listen for requests on 'q', to which it will aptly respond with '42':

```bash
nats-rply -s nats://s3cr3t@localhost q 42
```
The leaf node allows local clients to connect through port 4111, and doesn't require any kind of authentication. The configuration specifies where the remote cluster is located and how to connect to it (just a simple token in this case):

```text
listen: "127.0.0.1:4111"
leafnodes {
    remotes = [
        {
            # token auth; the main server's default leafnode port (7422) applies
            url: "nats://s3cr3t@localhost"
        }
    ]
}
```

Check your email, verify it, and specify a credit card; after that:

```text
╰───────────────────────────┴──────────────────────────────────────────────────────────╯
...
```
Note the limits on the account specify that the account can have up to 2 leaf node connections. Let's use them:
Let's craft a leaf node connection much like we did earlier:

```text
leafnodes {
    remotes = [
        ...
    ]
}
```

Sending a request on 'q' through the leaf connection then produces:

```text
Published [q] : ''
Received [_INBOX.hgG0zVcVcyr4G5KBwOuyJw.uUYkEyKr] : '42'
```
## Leaf Authorization

In some cases you may want to restrict what messages can be exported from the leaf node or imported from the leaf connection. You can specify restrictions by limiting what the leaf connection client can publish and subscribe to. See [NATS Authorization](../securing_nats/authorization.md) for how you can do this.
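As a hedged sketch, such restrictions are placed on the user the leaf connection authenticates as, using the ordinary permissions syntax (the user name and subjects here are illustrative):

```text
# Accepting server: limit what the leaf connection may exchange
authorization {
    users = [
        {
            user: leaf_user    # hypothetical user
            password: s3cr3t
            permissions: {
                publish: { allow: ["sensors.>"] }     # allowed out of the leaf
                subscribe: { allow: ["commands.>"] }  # allowed into the leaf
            }
        }
    ]
}
```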
# Configuration

## `leafnodes` Configuration Block
| Property | Description |
| :--- | :--- |
| `advertise` | Hostport `<host>:<port>` to advertise to other servers. |
| `authorization` | Authorization block. [**See Authorization Block section below**](#authorization-block). |
| `host` | Interface where the server will listen for incoming leafnode connections. |
| `listen` | Combines `host` and `port` as `<host>:<port>`. |
| `no_advertise` | If `true`, the leafnode shouldn't be advertised. |
| `port` | Port where the server will listen for incoming leafnode connections (default is 7422). |
| `remotes` | List of `remote` entries specifying servers where leafnode client connections can be made. |
| `tls` | TLS configuration block (same as other nats-server `tls` configuration). |
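Putting several of these properties together, a sketch of a `leafnodes` block might look like the following (hostnames and paths are illustrative):

```text
leafnodes {
    listen: "0.0.0.0:7422"               # host and port combined
    advertise: "leaf.example.com:7422"   # what other servers are told
    no_advertise: false
    tls {
        cert_file: "./server-cert.pem"
        key_file: "./server-key.pem"
    }
}
```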
## Authorization Block
| Property | Description |
| :--- | :--- |
| `user` | Username for the leaf node connection. |
| `password` | Password for the user entry. |
| `account` | Account this leaf node connection should be bound to. |
| `timeout` | Maximum number of seconds to wait for leaf node authentication. |
| `users` | List of credentials and accounts to bind to leaf node connections. [**See Users Block section below**](#users-block). |
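For illustration, a singleton authorization block using these properties could look like this (credentials are placeholders):

```text
leafnodes {
    port: 7422
    authorization {
        user: leaf
        password: secret
        account: TheAccount
        timeout: 2    # wait up to 2 seconds for authentication
    }
}
```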
### Users Block
| Property | Description |
| :--- | :--- |
| `user` | Username for the leaf node connection. |
| `password` | Password for the user entry. |
| `account` | Account this leaf node connection should be bound to. |
Here are some examples of using basic user/password authentication for leaf nodes (note that while this uses accounts, it is not using JWTs).
Singleton mode:

```text
leafnodes {
    port: ...
    authorization {
        user: leaf
        password: secret
        account: TheAccount
    }
}
```
With the above configuration, if a soliciting server creates a leafnode connection with the URL `nats://leaf:secret@host:port`, then the accepting server will bind the leafnode connection to the account "TheAccount". This account needs to exist, otherwise the connection will be rejected.
Multi-users mode:

```text
leafnodes {
    port: ...
    authorization {
        users = [
            {user: leaf1, password: secret, account: account1}
            {user: leaf2, password: secret, account: account2}
        ]
    }
}
```
With the above, if a server connects using `leaf1:secret@host:port`, then the accepting server will bind the connection to account `account1`. If using the `leaf2` user, then the accepting server will bind the connection to `account2`.

If a username/password (either singleton or multi-users) is defined, then the connecting server MUST provide the proper credentials, otherwise the connection will be rejected.
If no username/password is provided, it is still possible to provide the account the connection should be associated with:
```text
leafnodes {
    port: ...
    authorization {
        account: TheAccount
    }
}
```
With the above, a connection without credentials will be bound to the account "TheAccount".
If other forms of credentials are used (JWT, NKey, or other), then the server will attempt to authenticate and, if successful, associate the connection with the account for that specific user. If the user authentication fails (wrong password, no such user, etc.), the connection will also be rejected.
## LeafNode `remotes` Entry Block
| Property | Description |
| :--- | :--- |
| `url` | Leafnode URL (URL protocol should be `nats-leaf`). |
| `urls` | Leafnode URL array. Supports multiple URLs for discovery, e.g., `urls: [ "nats-leaf://host1:7422", "nats-leaf://host2:7422" ]`. |
| `account` | Account public key identifying the leafnode. The account must be defined locally. |
| `credentials` | Credential file for connecting to the leafnode server. |
| `tls` | A TLS configuration block. The leafnode client will use the specified TLS certificates when connecting/authenticating. |
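A sketch of a `remotes` entry combining these properties (the URLs and file paths are illustrative):

```text
leafnodes {
    remotes = [
        {
            urls: [
                "nats-leaf://host1:7422",
                "nats-leaf://host2:7422"
            ]
            credentials: "/etc/nats/leaf.creds"   # hypothetical path
            tls {
                ca_file: "./ca.pem"
            }
        }
    ]
}
```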
## `tls` Configuration Block
| Property | Description |
| :--- | :--- |
| `cert_file` | TLS certificate file. |
| `key_file` | TLS certificate key file. |
| `ca_file` | TLS certificate authority file. |
| `insecure` | Skip certificate verification. |
| `verify` | If `true`, require and verify client certificates. |
| `verify_and_map` | If `true`, require and verify client certificates and map certificate values for authentication purposes. |
| `cipher_suites` | When set, only the specified TLS cipher suites will be allowed. Values must match the Go version used to build the server. |
| `curve_preferences` | List of TLS cipher curves to use, in order. |
| `timeout` | TLS handshake timeout in fractional seconds. |
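For example, a hedged sketch of a leafnode `tls` block (file paths are illustrative):

```text
leafnodes {
    port: 7422
    tls {
        cert_file: "./server-cert.pem"
        key_file: "./server-key.pem"
        ca_file: "./ca.pem"
        verify: true      # require and verify client certificates
        timeout: 0.75     # handshake timeout in fractional seconds
    }
}
```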
Here's another example, where the `allow` and `deny` options are specified:
```text
authorization: {
  users = [
    {
      user: admin
      password: secret
      permissions: {
        publish: ">"
        subscribe: ">"
      }
    }
    {
      user: test
      password: test
      permissions: {
        publish: {
          deny: ">"
        },
        subscribe: {
          allow: "client.>"
        }
      }
    }
  ]
}
```
## Slow consumers identified in the client
A [client can detect it is a slow consumer](../../developing-with-nats/intro-5/slow.md) on a local connection and notify the application through the asynchronous error callback. It is better to catch a slow consumer locally in the client rather than to allow the server to detect this condition. This example demonstrates how to define and register an asynchronous error handler that will handle slow consumer errors.
```go
// Completed from the truncated excerpt; the handler checks for
// nats.ErrSlowConsumer and reports the pending message count.
func natsErrHandler(nc *nats.Conn, sub *nats.Subscription, natsErr error) {
	if natsErr == nats.ErrSlowConsumer {
		pendingMsgs, _, err := sub.Pending()
		if err != nil {
			fmt.Printf("couldn't get pending messages: %v", err)
			return
		}
		fmt.Printf("Falling behind with %d pending messages on subject %q.\n",
			pendingMsgs, sub.Subject)
		// Log error, notify operations...
	}
	// check for other errors
}

// Register the handler when connecting:
nc, err := nats.Connect("nats://localhost:4222",
	nats.ErrorHandler(natsErrHandler))
```
**Scaling with queue subscribers**
This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../nats-concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices: each instance of your microservice will receive a portion of the messages to process, and you simply add more instances of your service to scale. No code changes, configuration changes, or downtime whatsoever.
**Create a subject namespace that can scale**

```bash
nats-sub -s nats://localhost:4222 ">"
```
`nats-sub` is a subscriber sample included with all NATS clients. `nats-sub` subscribes to a subject and prints out any messages received. You can find the source code to the Go version of `nats-sub` [here](https://github.com/nats-io/nats.go/tree/master/examples). After starting the subscriber you should see a message on 'A' that a new client connected.
We have two servers and a client. Time to simulate our rolling upgrade. But wait, before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to provide an additional place where clients can go other than 'A' and ensure we don't end up with a single server serving all the clients after the upgrade procedure. Clients will randomly select a server when connecting unless a special option is provided that disables that functionality (usually called 'DontRandomize' or 'noRandomize'). You can read more about ["Avoiding the Thundering Herd"](../../developing-with-nats/intro-1/random.md). Suffice it to say that clients redistribute themselves about evenly between all servers in the cluster. In our case half of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
Let's start our temporary server:
# NATS and Docker

## NATS Server Containerization

The NATS server is provided as a Docker image on [Docker Hub](https://hub.docker.com/_/nats).
To use the Docker container image, install Docker and pull the public image:

```bash
docker pull nats
```
Run the NATS server image:

```bash
docker run nats
```
By default the NATS server exposes multiple ports:

* 4222 is for clients.
* 8222 is an HTTP management port for information reporting.
* 6222 is a routing port for clustering.
* Use `-p` or `-P` to customize.
### Creating a NATS Cluster

First run a server with the ports exposed on a `docker network`:

```bash
$ docker network create nats
```
```bash
docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats
[INF] Starting nats-server version 2.1.0
[INF] Git commit [1cc5ae0]
...
```
Next, start another couple of servers and point them to the seed server to make them form a cluster:

```bash
docker run --name nats-1 --network nats --rm nats --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
docker run --name nats-2 --network nats --rm nats --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
```
To verify the routes are connected, you can make a request to the monitoring endpoint on `/routez` as follows and confirm that there are now 2 routes:

```bash
curl http://127.0.0.1:8222/routez
{
  "server_id": "ND34PZ64QLLJKSU5SLSWRS5EUXEKNHW5BUVLCNFWA56R4D7XKDYWJFP7",
  ...
}
```
### Creating a NATS Cluster with Docker Compose

It is also straightforward to create a cluster using Docker Compose. Below is a simple example that uses a network named `nats` to create a full mesh cluster.
```yaml
# Reconstructed from the truncated excerpt; the cluster/route flags
# match the docker run commands shown above.
version: "3"
services:
  nats:
    image: nats
    ports:
      - "8222:8222"
    command: "--cluster nats://0.0.0.0:6222 --http_port 8222"
    networks: ["nats"]
  nats-1:
    image: nats
    command: "--cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222"
    networks: ["nats"]
    depends_on: ["nats"]
  nats-2:
    image: nats
    command: "--cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222"
    networks: ["nats"]
    depends_on: ["nats"]

networks:
  nats:
    name: nats
```
Now we use Docker Compose to create the cluster that will be using the `nats` network:

```bash
$ docker network create nats

$ docker-compose -f nats-cluster.yaml up
...
nats-1_1 | [1] 2019/10/19 06:41:27.153078 [INF] 172.18.0.4:6222 - rid:3 - Route ...
```
Now the following should work: make a subscription on one of the nodes and publish from another node. You should receive the message without problems.

```bash
$ docker run --network nats --rm -it synadia/nats-box
~ # nats-sub -s nats://nats:4222 hello &
Listening on [hello]

~ # nats-pub -s "nats://nats-1:4222" hello first
~ # nats-pub -s "nats://nats-2:4222" hello second
[#1] Received on [hello]: 'first'
[#2] Received on [hello]: 'second'
```
Also, stopping the seed node to which the subscription was made should trigger an automatic failover to the other nodes:

```bash
$ docker stop nats

...
Reconnected [nats://172.17.0.4:4222]
```
Publishing again will continue to work after the reconnection:

```bash
~ # nats-pub -s "nats://nats-1:4222" hello again
~ # nats-pub -s "nats://nats-2:4222" hello again
```
## Tutorial

See the [NATS Docker tutorial](nats-docker-tutorial.md) for more instructions on using the NATS server Docker image.