nats-server/clients.md
# Clients

The nats-server doesn't come bundled with any clients, but most client libraries come with tools that allow you to publish, subscribe, send requests, and reply to messages.

If you have a client library installed, you can try using a bundled client. Otherwise, you can easily install some clients.

## If you have Go installed:

```text
> go get github.com/nats-io/go-nats-examples/tools/nats-pub
> go get github.com/nats-io/go-nats-examples/tools/nats-sub
```

## Or download a zip file

You can install pre-built binaries from the [go-nats-examples repo](https://github.com/nats-io/go-nats-examples/releases/tag/0.0.50)

## Testing your setup

Open a terminal and [start a nats-server](running/):

```text
> nats-server
[29670] 2019/05/16 08:45:59.836809 [INF] Starting nats-server version 2.0.0
[29670] 2019/05/16 08:45:59.836889 [INF] Git commit [not set]
[29670] 2019/05/16 08:45:59.837161 [INF] Listening for client connections on 0.0.0.0:4222
[29670] 2019/05/16 08:45:59.837168 [INF] Server id is NAYH35Q7ROQHLQ3K565JR4OPTJGO5EK4FJX6KX5IHHEPLQBRSYVWI2NO
[29670] 2019/05/16 08:45:59.837170 [INF] Server is ready
```

On another terminal session start a subscriber:

```text
> nats-sub ">"
Listening on [>]
```

Note that when the client connected, the server didn't log anything interesting; server output is relatively quiet unless something notable happens.

To make the server output more lively, you can specify the `-V` flag to enable logging of server protocol tracing messages. Go ahead and `<ctrl>+c` the process running the server, and restart it with the `-V` flag:

```text
> nats-server -V
[29785] 2019/05/16 08:46:12.731278 [INF] Starting nats-server version 2.0.0
[29785] 2019/05/16 08:46:12.731347 [INF] Git commit [not set]
[29785] 2019/05/16 08:46:12.731488 [INF] Listening for client connections on 0.0.0.0:4222
[29785] 2019/05/16 08:46:12.731493 [INF] Server id is NCEOJJ5SBJKUTMZEDNU3NBPJSLJPCMQJUIQIWKFHWE5DPETJKHX2CO2Y
[29785] 2019/05/16 08:46:12.731495 [INF] Server is ready
[29785] 2019/05/16 08:46:13.467099 [TRC] 127.0.0.1:49805 - cid:1 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS Sample Subscriber","lang":"go","version":"1.7.0","protocol":1,"echo":true}]
[29785] 2019/05/16 08:46:13.467200 [TRC] 127.0.0.1:49805 - cid:1 - <<- [PING]
[29785] 2019/05/16 08:46:13.467206 [TRC] 127.0.0.1:49805 - cid:1 - ->> [PONG]
```

If you had created a subscriber, you should notice output on the subscriber telling you that it disconnected and reconnected. The server output above is more interesting: you can see the subscriber send a `CONNECT` protocol message and a `PING`, which the server answered with a `PONG`.

> You can learn more about the [NATS protocol here](../nats-protocol/nats-protocol/), but more interesting than the protocol description is [an interactive demo](../nats-protocol/nats-protocol-demo.md).

On a third terminal, publish your first message:

```text
> nats-pub hello world
Published [hello] : 'world'
```

On the subscriber window you should see:

```text
[#1] Received on [hello] : 'world'
```

## Testing Against a Remote Server

If the NATS server were running on a different machine or a different port, you'd have to point the client at it by specifying a _NATS URL_. NATS URLs take the form `nats://<server>:<port>` or `tls://<server>:<port>`; URLs with the `tls` protocol use a TLS-secured connection.

```text
> nats-sub -s nats://server:port ">"
```

If you want to try against a remote server, the NATS team maintains a demo server you can reach at `demo.nats.io`.

```text
> nats-sub -s nats://demo.nats.io ">"
```
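If you'd rather exercise the same publish/subscribe flow from code, here is a minimal Go sketch using the [nats.go](https://github.com/nats-io/nats.go) client (an assumption of this example; install it with `go get github.com/nats-io/nats.go`). The demo server URL is used in place of your own:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect with an explicit NATS URL; swap in your own server here.
	nc, err := nats.Connect("nats://demo.nats.io:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscribe to "hello" and print anything received on it.
	sub, err := nc.Subscribe("hello", func(m *nats.Msg) {
		fmt.Printf("Received on [%s] : '%s'\n", m.Subject, string(m.Data))
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// Publish a message to the same subject, then flush the buffer
	// so the message actually leaves the client.
	if err := nc.Publish("hello", []byte("world")); err != nil {
		log.Fatal(err)
	}
	nc.Flush()

	// Give the subscription callback a moment to fire.
	time.Sleep(time.Second)
}
```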
nats-server/configuration/README.md
# Configuration

While the NATS server has many flags that allow for simple testing of features, the NATS server provides a flexible configuration format that combines the best of traditional formats and newer styles such as JSON and YAML.

The NATS configuration file supports the following syntax:

* Lines can be commented with `#` and `//`
* Values can be assigned to properties with:
  * Equals sign: `foo = 2`
  * Colon: `foo: 2`
  * Whitespace: `foo 2`
* Arrays are enclosed in brackets: `["a", "b", "c"]`
* Maps are enclosed in braces: `{foo: 2}`
* Maps can be assigned with no key separator
* Semicolons can be used as terminators
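For example, the snippet below exercises each of these forms in one file (the property names are real server options; the values are purely illustrative):

```text
# comments can use '#'
// or '//'
port = 4222             # assignment with '='
http_port: 8222         # assignment with ':'
debug false             # assignment with whitespace

# a map assigned with no key separator, containing an array
cluster {
  routes: ["nats://localhost:5222", "nats://localhost:5223"]
}

max_payload: 1048576;   # semicolons may terminate a line
```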
## Strings and Numbers

The configuration parser is very forgiving, as you have seen:

* values can be a primitive, or a list, or a map
* strings and numbers typically do the right thing

String values that start with a digit _can_ create issues. To force such values to be parsed as strings, quote them.

_BAD Config_:

```text
listen: 127.0.0.1:4222
authorization: {
    # BAD!
    token: 3secret
}
```

Fixed Config:

```text
listen: 127.0.0.1:4222
authorization: {
    token: "3secret"
}
```
## Variables

Server configurations can specify variables. Variables allow you to reference a value from one or more sections in the configuration.

Variables:

* Are block-scoped
* Are referenced with a `$` prefix
* Can be resolved from environment variables having the same name

> If the environment variable value begins with a number you may have trouble resolving it depending on the server version you are running.

```text
# Define a variable in the config
TOKEN: "secret"

# Reference the variable
authorization {
    token: $TOKEN
}
```

A similar configuration, but this time, the value is in the environment:

```text
# TOKEN is defined in the environment
authorization {
    token: $TOKEN
}
```

```bash
export TOKEN="hello"; nats-server -c /config/file
```
## Include Directive

The `include` directive allows you to split a server configuration into several files. This is useful for separating configuration into chunks that you can easily reuse between different servers.

Includes _must_ use relative paths, and are relative to the main configuration (the one specified via the `-c` option):

server.conf:

```text
listen: 127.0.0.1:4222
include ./auth.conf
```

> Note that `include` is not followed by `=` or `:`, as it is a _directive_.

auth.conf:

```text
authorization: {
    token: "f0oBar"
}
```

```text
> nats-server -c server.conf
```
## Configuration Properties

| Property | Description |
| :--- | :--- |
| [`authorization`](securing_nats/auth_intro/) | Configuration map for client authentication/authorization |
| [`cluster`](clustering/cluster_config.md) | Configuration map for clustering |
| `connect_error_reports` | Number of attempts at which a repeated failed route, gateway, or leaf node connection is reported. Default is 3600, approximately once per hour. |
| `debug` | If `true`, enable debug log messages |
| [`gateway`](gateways/gateway.md) | Gateway configuration map |
| `host` | Host for client connections |
| [`http_port`](monitoring.md) | HTTP port for server monitoring |
| [`https_port`](monitoring.md) | HTTPS port for server monitoring |
| [`leafnode`](leafnodes/leafnode_conf.md) | Leafnode configuration map |
| `listen` | Host/port for client connections |
| `max_connections` | Maximum number of active client connections |
| `max_control_line` | Maximum length of a protocol line (including subject length) |
| `max_payload` | Maximum number of bytes in a message payload |
| `max_pending` | Maximum number of bytes buffered for a connection |
| `max_subscriptions` | Maximum number of subscriptions for a client connection |
| `max_traced_msg_len` | Maximum length of the message payload included in protocol trace logs |
| `disable_sublist_cache` | Disable the sublist cache globally for accounts |
| [`operator`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Path to an operator JWT |
| [`ping_interval`](../../developing-with-nats/intro/pingpong.md) | Interval in seconds at which the server checks if a connection is active |
| `port` | Port for client connections |
| `reconnect_error_reports` | Number of failed attempts to reconnect a route, gateway, or leaf node connection. Default is to report every attempt. |
| [`resolver`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Resolver type `MEMORY` or `URL` for account JWTs |
| [`tls`](securing_nats/tls.md#tls-configuration) | Configuration map for TLS for client connections and HTTPS monitoring |
| `trace` | If `true`, enable protocol trace log messages |
| `write_deadline` | Maximum number of seconds the server will block when writing to a client (slow consumer) |
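As a quick reference, here is a sketch of a minimal server configuration pulling together a few of the properties above (the values are illustrative, not recommendations):

```text
listen: 0.0.0.0:4222     # host/port for client connections
http_port: 8222          # HTTP monitoring port
max_connections: 100
max_payload: 1048576     # 1 MiB
max_control_line: 4096
write_deadline: "2s"
debug: false
trace: false
```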
## Configuration Reloading

A server can reload most configuration changes without requiring a server restart or disconnecting clients by sending the nats-server a [signal](../nats_admin/signals.md):

```text
> nats-server --signal reload
```
nats-server/configuration/clustering/README.md
# Clustering

## NATS Server Clustering

NATS supports running each server in clustered mode. You can cluster servers together for high-volume messaging systems, resiliency, and high availability. Clients are cluster-aware.

Note that NATS clustered servers have a forwarding limit of one hop. This means that each `nats-server` instance will **only** forward messages that it has received **from a client** to the immediately adjacent `nats-server` instances to which it has routes. Messages received **from** a route will only be distributed to local clients. Therefore a full mesh cluster, or complete graph, is recommended for NATS to function as intended and as described throughout the documentation.

## Cluster URLs

In addition to a port for listening for clients, `nats-server` can listen on a "cluster" URL (the `-cluster` option). Additional `nats-server` servers can then add that URL to their `-routes` argument to join the cluster. These options can also be specified in a config file, but only the command-line version is shown in this overview for simplicity.

## Running a Simple Cluster

Here is a simple cluster running on the same machine:

```bash
# Server A - the 'seed server'
> nats-server -p 4222 -cluster nats://0.0.0.0:5222

# Server B
> nats-server -p -1 -cluster nats://0.0.0.0:-1 -routes nats://localhost:5222
# Check the output of the server for the selected client and route ports.

# Server C
> nats-server -p -1 -cluster nats://0.0.0.0:-1 -routes nats://localhost:5222
# Check the output of the server for the selected client and route ports.
```

The _seed server_ simply declares its client and clustering ports. The other servers delegate to the nats-server to auto-select a port that is not in use for both client and cluster connections, and route to the seed server. Because the clustering protocol gossips members of the cluster, all servers are able to discover other servers in the cluster. When a server is discovered, the discovering server will automatically attempt to connect to it in order to form a _full mesh_. Typically only one instance of the server will run per machine, so you can reuse the client port (4222) and the cluster port (5222), and simply route to the host/port of the seed server.

Similarly, clients connecting to any server in the cluster will discover other servers in the cluster. If the connection to the server is interrupted, the client will attempt to connect to all other known servers.
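On the client side, that discovery shows up in the connection's known-server pool. Here is a small Go sketch (using the [nats.go](https://github.com/nats-io/nats.go) client, with illustrative URLs) that connects to one cluster member and logs failovers:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a single member; cluster gossip fills in the rest.
	nc, err := nats.Connect("nats://localhost:4222",
		// Log when the client fails over to another known server.
		nats.ReconnectHandler(func(c *nats.Conn) {
			log.Printf("reconnected to %s", c.ConnectedUrl())
		}),
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			log.Printf("disconnected: %v", err)
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// After connecting, this includes servers discovered via gossip,
	// not just the URL passed to Connect.
	log.Printf("known servers: %v", nc.Servers())
}
```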
## Command Line Options

The following cluster options are supported:

```text
--routes [rurl-1, rurl-2]     Routes to solicit and connect
--cluster nats://host:port    Cluster URL for solicited routes
```

When a NATS server routes to a specified URL, it will advertise its own cluster URL to all other servers in the route, effectively creating a routing mesh to all other servers.

**Note:** when using the `-routes` option, you must also specify a `-cluster` option.

Clustering can also be configured using the server [config file](cluster_config.md).

## Three Server Cluster Example

The following example demonstrates how to run a cluster of 3 servers on the same host. We will start with the seed server and use the `-D` command line parameter to produce debug information.

```bash
nats-server -p 4222 -cluster nats://localhost:4248 -D
```

Alternatively, you could use a configuration file, let's call it `seed.conf`, with content similar to this:

```text
# Cluster Seed Node

listen: 127.0.0.1:4222
http: 8222

cluster {
    listen: 127.0.0.1:4248
}
```

And start the server like this:

```bash
nats-server -config ./seed.conf -D
```
This will produce an output similar to:

```bash
[75653] 2016/04/26 15:14:47.339321 [INF] Listening for route connections on 127.0.0.1:4248
[75653] 2016/04/26 15:14:47.340787 [INF] Listening for client connections on 127.0.0.1:4222
[75653] 2016/04/26 15:14:47.340822 [DBG] server id is xZfu3u7usAPWkuThomoGzM
[75653] 2016/04/26 15:14:47.340825 [INF] server is ready
```

It is also possible to specify the hostname and port independently. At a minimum, the port is required. If you leave the hostname off, it will bind to all interfaces ('0.0.0.0').

```text
cluster {
    host: 127.0.0.1
    port: 4248
}
```

Now let's start two more servers, each one connecting to the seed server.

```bash
nats-server -p 5222 -cluster nats://localhost:5248 -routes nats://localhost:4248 -D
```

When running on the same host, we need to pick different ports for the client connections `-p`, and for the port used to accept other routes `-cluster`. Note that `-routes` points to the `-cluster` address of the seed server (`localhost:4248`).

Here is the log produced. See how it connects and registers a route to the seed server (`...GzM`).

```bash
[75665] 2016/04/26 15:14:59.970014 [INF] Listening for route connections on localhost:5248
[75665] 2016/04/26 15:14:59.971150 [INF] Listening for client connections on 0.0.0.0:5222
[75665] 2016/04/26 15:14:59.971176 [DBG] server id is 53Yi78q96t52QdyyWLKIyE
[75665] 2016/04/26 15:14:59.971179 [INF] server is ready
[75665] 2016/04/26 15:14:59.971199 [DBG] Trying to connect to route on localhost:4248
[75665] 2016/04/26 15:14:59.971551 [DBG] 127.0.0.1:4248 - rid:1 - Route connection created
[75665] 2016/04/26 15:14:59.971559 [DBG] 127.0.0.1:4248 - rid:1 - Route connect msg sent
[75665] 2016/04/26 15:14:59.971720 [DBG] 127.0.0.1:4248 - rid:1 - Registering remote route "xZfu3u7usAPWkuThomoGzM"
[75665] 2016/04/26 15:14:59.971731 [DBG] 127.0.0.1:4248 - rid:1 - Route sent local subscriptions
```

From the seed server's log, we see that the route is indeed accepted:

```bash
[75653] 2016/04/26 15:14:59.971602 [DBG] 127.0.0.1:52679 - rid:1 - Route connection created
[75653] 2016/04/26 15:14:59.971733 [DBG] 127.0.0.1:52679 - rid:1 - Registering remote route "53Yi78q96t52QdyyWLKIyE"
[75653] 2016/04/26 15:14:59.971739 [DBG] 127.0.0.1:52679 - rid:1 - Route sent local subscriptions
```

Finally, let's start the third server:

```bash
nats-server -p 6222 -cluster nats://localhost:6248 -routes nats://localhost:4248 -D
```

Again, notice that we use a different client port and cluster address, but still point to the same seed server at the address `nats://localhost:4248`:

```bash
[75764] 2016/04/26 15:19:11.528185 [INF] Listening for route connections on localhost:6248
[75764] 2016/04/26 15:19:11.529787 [INF] Listening for client connections on 0.0.0.0:6222
[75764] 2016/04/26 15:19:11.529829 [DBG] server id is IRepas80TBwJByULX1ulAp
[75764] 2016/04/26 15:19:11.529842 [INF] server is ready
[75764] 2016/04/26 15:19:11.529872 [DBG] Trying to connect to route on localhost:4248
[75764] 2016/04/26 15:19:11.530272 [DBG] 127.0.0.1:4248 - rid:1 - Route connection created
[75764] 2016/04/26 15:19:11.530281 [DBG] 127.0.0.1:4248 - rid:1 - Route connect msg sent
[75764] 2016/04/26 15:19:11.530408 [DBG] 127.0.0.1:4248 - rid:1 - Registering remote route "xZfu3u7usAPWkuThomoGzM"
[75764] 2016/04/26 15:19:11.530414 [DBG] 127.0.0.1:4248 - rid:1 - Route sent local subscriptions
[75764] 2016/04/26 15:19:11.530595 [DBG] 127.0.0.1:52727 - rid:2 - Route connection created
[75764] 2016/04/26 15:19:11.530659 [DBG] 127.0.0.1:52727 - rid:2 - Registering remote route "53Yi78q96t52QdyyWLKIyE"
[75764] 2016/04/26 15:19:11.530664 [DBG] 127.0.0.1:52727 - rid:2 - Route sent local subscriptions
```

First a route is created to the seed server (`...GzM`) and after that, a route from `...IyE` - which is the ID of the second server - is accepted.

The log from the seed server shows that it accepted the route from the third server:

```bash
[75653] 2016/04/26 15:19:11.530308 [DBG] 127.0.0.1:52726 - rid:2 - Route connection created
[75653] 2016/04/26 15:19:11.530384 [DBG] 127.0.0.1:52726 - rid:2 - Registering remote route "IRepas80TBwJByULX1ulAp"
[75653] 2016/04/26 15:19:11.530389 [DBG] 127.0.0.1:52726 - rid:2 - Route sent local subscriptions
```

And the log from the second server shows that it connected to the third:

```bash
[75665] 2016/04/26 15:19:11.530469 [DBG] Trying to connect to route on 127.0.0.1:6248
[75665] 2016/04/26 15:19:11.530565 [DBG] 127.0.0.1:6248 - rid:2 - Route connection created
[75665] 2016/04/26 15:19:11.530570 [DBG] 127.0.0.1:6248 - rid:2 - Route connect msg sent
[75665] 2016/04/26 15:19:11.530644 [DBG] 127.0.0.1:6248 - rid:2 - Registering remote route "IRepas80TBwJByULX1ulAp"
[75665] 2016/04/26 15:19:11.530650 [DBG] 127.0.0.1:6248 - rid:2 - Route sent local subscriptions
```

At this point, there is a full mesh cluster of NATS servers.

### Testing the Cluster

Now, the following should work: make a subscription on Node A, then publish on Node C. You should be able to receive the message without problems.

```bash
nats-sub -s "nats://192.168.59.103:7222" hello &

nats-pub -s "nats://192.168.59.105:7222" hello world

[#1] Received on [hello] : 'world'

# nats-server on Node C logs:
[1] 2015/06/23 05:20:31.100032 [TRC] 192.168.59.103:7244 - rid:2 - <<- [MSG hello RSID:8:2 5]

# nats-server on Node A logs:
[1] 2015/06/23 05:20:31.100600 [TRC] 10.0.2.2:51007 - cid:8 - <<- [MSG hello 2 5]
```
nats-server/configuration/clustering/cluster_config.md
# Configuration

The `cluster` configuration map has the following configuration options:

| Property | Description |
| :--- | :--- |
| `listen` | Host/port for inbound route connections |
| `authorization` | [Authorization](../securing_nats/authorization.md) map for configuring cluster routes. Supports `token`, `username`/`password`, and TLS authentication. `permissions` are ignored. |
| `timeout` | Maximum amount of time (in seconds) to wait for a clustering connection to complete |
| `tls` | A [`tls` configuration map](../securing_nats/tls.md#tls-configuration) for securing the clustering connection |
| `routes` | A list of other servers (URLs) to cluster with. Self-routes are ignored. |

```text
cluster {
  listen: localhost:4244 # host/port for inbound route connections

  # Authorization for route connections
  authorization {
    user: route_user
    # ./util/mkpasswd -p T0pS3cr3tT00!
    password: $2a$11$xH8dkGrty1cBNtZjhPeWJewu/YPbSU.rXJWmS6SFilOBXzmZoMk9m
    timeout: 0.5
  }

  # Routes are actively solicited and connected to from this server.
  # Other servers can connect to us if they supply the correct credentials
  # in their routes definitions from above.
  routes = [
    nats-route://user1:pass1@127.0.0.1:4245
    nats-route://user2:pass2@127.0.0.1:4246
  ]
}
```
nats-server/configuration/clustering/cluster_tls.md
# TLS Authentication

When setting up clusters with TLS, all servers in the cluster verify both the connecting endpoint and the server response, so certificates are checked in both directions. Certificates can be configured solely for the server's cluster identity, keeping client and server certificates separate from cluster formation.

TLS mutual authentication _is the recommended way_ of securing routes.

```text
cluster {
  listen: 127.0.0.1:4244

  tls {
    # Route cert
    cert_file: "./configs/certs/srva-cert.pem"
    # Private key
    key_file: "./configs/certs/srva-key.pem"
    # Optional certificate authority verifying connected routes
    # Required when we have self-signed CA, etc.
    ca_file: "./configs/certs/ca.pem"
  }
  # Routes are actively solicited and connected to from this server.
  # Other servers can connect to us if they supply the correct credentials
  # in their routes definitions from above.
  routes = [
    nats-route://127.0.0.1:4246
  ]
}
```
nats-server/configuration/gateways/README.md
# Gateways

## Gateways

Gateways enable connecting one or more clusters together; they allow the formation of super clusters from smaller clusters. The cluster and gateway protocols listen on different ports. Clustering is used for adjacent servers; gateways are for joining clusters together. Typically all cluster nodes will also be gateway nodes, but this is not a requirement.

Gateway configuration is similar to clustering:

* gateways have a dedicated port where they listen for gateway requests
* gateways gossip gateway members and remote discovered gateways

Unlike clusters, gateways:

* don't form a full mesh
* are bound by uni-directional connections

Gateways exist to:

* reduce the number of connections required between servers
* optimize the interest graph propagation

## Gateway Connections

A nats-server in a gateway role will specify a port where it will accept gateway connections. If the configuration specifies other _external_ `gateways`, the gateway will create one outbound gateway connection for each gateway in its configuration. It will also gossip other gateways it knows of or has discovered.

If the local cluster has three gateway nodes, this means there will be three outbound connections to each external gateway.

> In the example above, cluster _A_ has configured gateway connections for _B_ (solid lines). _B_ has discovered gateway connections to _A_ (dotted lines). Note that the number of outgoing connections always matches the number of gateways with the same name.

> In this second example, again configured connections are shown with solid lines and discovered gateway connections are shown using dotted lines. Gateways _A_ and _C_ were both discovered via gossiping; _B_ discovered _A_ and _A_ discovered _C_.

A key point in the description above is that each node in the cluster will make a connection to a single node in the remote cluster. This differs from the clustering protocol, where every node is directly connected to all other nodes.

For those mathematically inclined, cluster connections are `N(N-1)/2`, where _N_ is the number of nodes in the cluster. In a gateway configuration, outbound connections are the summation of `Ni(M-1)`, where _Ni_ is the number of nodes in gateway _i_ and _M_ is the total number of gateways. Inbound connections are the summation of `U-Ni`, where _U_ is the total number of gateway nodes across all gateways and _Ni_ is the number of nodes in gateway _i_. It works out that both inbound and outbound connection counts are the same.

The savings from joining clusters with gateways rather than a full mesh become apparent very quickly. For 3 clusters with N nodes each:

| Nodes per Cluster | Full Mesh Conns | Gateway Conns |
| ---: | ---: | ---: |
| 1 | 3 | 6 |
| 2 | 15 | 12 |
| 3 | 36 | 18 |
| 4 | 66 | 24 |
| 5 | 105 | 30 |
| 30 | 4005 | 180 |
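To make the arithmetic concrete, here is a short Go sketch that reproduces the table from the two formulas above:

```go
package main

import "fmt"

// fullMeshConns is the route count for a single full mesh of n nodes: n(n-1)/2.
func fullMeshConns(n int) int { return n * (n - 1) / 2 }

// gatewayConns is the total outbound gateway connection count: each of the
// Ni nodes in a gateway opens one connection per remote gateway (M-1 of them).
func gatewayConns(nodes []int) int {
	m := len(nodes)
	total := 0
	for _, n := range nodes {
		total += n * (m - 1)
	}
	return total
}

func main() {
	for _, n := range []int{1, 2, 3, 4, 5, 30} {
		// Three clusters of n nodes each: one big full mesh vs. gateways.
		fmt.Printf("%2d nodes/cluster: full mesh=%4d  gateways=%3d\n",
			n, fullMeshConns(3*n), gatewayConns([]int{n, n, n}))
	}
}
```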
## Interest Propagation

Gateways propagate interest using three different mechanisms:

* Optimistic Mode
* Interest-only Mode
* Queue Subscriptions

### Optimistic Mode

When a publisher in _A_ publishes "foo", the _A_ gateway checks whether cluster _B_ has registered _no interest_ in "foo". If it hasn't, the message is forwarded to _B_. If, upon receiving "foo", _B_ has no subscribers on "foo", _B_ will send a gateway protocol message back to _A_ expressing that it has no interest in "foo", preventing future messages on "foo" from being forwarded.

Should a subscriber on _B_ later create a subscription on "foo", _B_, knowing that it had previously rejected interest in "foo", will send a gateway protocol message to _A_ cancelling its previous _no interest_ in "foo".

### Interest-only Mode

When a gateway on _A_ sends many messages on various subjects for which _B_ has no interest, _B_ sends a gateway protocol message telling _A_ to stop sending optimistically and instead to send only when there is known interest in the subject. As subscriptions come and go on _B_, _B_ will update its subject interest with _A_.

### Queue Subscriptions

When a queue subscriber creates a new subscription, the gateway propagates the subscription interest to other gateways. The subscription interest is only propagated _once_ per _Account_ and subject. When the last queue subscriber is gone, the cluster interest is removed.

Queue subscriptions work in _Interest-only Mode_ to honor NATS' queue semantics across the _Super Cluster_. For each queue group, a message is only delivered to a single queue subscriber. Only when a local queue group member is not found is a message forwarded to a different interested cluster; gateways will always try to serve local queue subscribers first and only fail over when a local queue subscriber is not found.

### Gateway Configuration

The [Gateway Configuration](gateway.md) document describes all the options available to gateways.
nats-server/configuration/gateways/gateway.md
# Configuration

The `gateway` configuration block is similar to a `cluster` block:

```text
gateway {
    name: "A"
    listen: "localhost:7222"
    authorization {
        user: gwu
        password: gwp
    }

    gateways: [
        {name: "A", url: "nats://gwu:gwp@localhost:7222"},
        {name: "B", url: "nats://gwu:gwp@localhost:7333"},
        {name: "C", url: "nats://gwu:gwp@localhost:7444"},
    ]
}
```

One difference is that instead of `routes` you specify `gateways`. As expected, _self-gateway_ connections are ignored, so you can share gateway configurations with minimal fuss.

Starting a server:

```text
> nats-server -c A.conf
[85803] 2019/05/07 10:50:55.902474 [INF] Starting nats-server version 2.0.0
[85803] 2019/05/07 10:50:55.903669 [INF] Gateway name is A
[85803] 2019/05/07 10:50:55.903684 [INF] Listening for gateways connections on localhost:7222
[85803] 2019/05/07 10:50:55.903696 [INF] Address for gateway "A" is localhost:7222
[85803] 2019/05/07 10:50:55.903909 [INF] Listening for client connections on 0.0.0.0:4222
[85803] 2019/05/07 10:50:55.903914 [INF] Server id is NBHUDBF3TVJSWCDPG2HSKI4I2SBSPDTNYEXEMOFAZUZYXVA2IYRUGPZU
[85803] 2019/05/07 10:50:55.903917 [INF] Server is ready
[85803] 2019/05/07 10:50:56.830669 [INF] 127.0.0.1:50892 - gid:2 - Processing inbound gateway connection
[85803] 2019/05/07 10:50:56.830673 [INF] 127.0.0.1:50891 - gid:1 - Processing inbound gateway connection
[85803] 2019/05/07 10:50:56.831079 [INF] 127.0.0.1:50892 - gid:2 - Inbound gateway connection from "C" (NBHWDFO3KHANNI6UCEUL27VNWL7NWD2MC4BI4L2C7VVLFBSMZ3CRD7HE) registered
[85803] 2019/05/07 10:50:56.831211 [INF] 127.0.0.1:50891 - gid:1 - Inbound gateway connection from "B" (ND2UJB3GFUHXOQ2UUMZQGOCL4QVR2LRJODPZH7MIPGLWCQRARJBU27C3) registered
[85803] 2019/05/07 10:50:56.906103 [INF] Connecting to explicit gateway "B" (localhost:7333) at 127.0.0.1:7333
[85803] 2019/05/07 10:50:56.906104 [INF] Connecting to explicit gateway "C" (localhost:7444) at 127.0.0.1:7444
[85803] 2019/05/07 10:50:56.906404 [INF] 127.0.0.1:7333 - gid:3 - Creating outbound gateway connection to "B"
[85803] 2019/05/07 10:50:56.906444 [INF] 127.0.0.1:7444 - gid:4 - Creating outbound gateway connection to "C"
[85803] 2019/05/07 10:50:56.906647 [INF] 127.0.0.1:7444 - gid:4 - Outbound gateway connection to "C" (NBHWDFO3KHANNI6UCEUL27VNWL7NWD2MC4BI4L2C7VVLFBSMZ3CRD7HE) registered
[85803] 2019/05/07 10:50:56.906772 [INF] 127.0.0.1:7333 - gid:3 - Outbound gateway connection to "B" (ND2UJB3GFUHXOQ2UUMZQGOCL4QVR2LRJODPZH7MIPGLWCQRARJBU27C3) registered
```

Once all the gateways are up, these single-server clusters will forward messages as expected:

```text
# In one session, start a subscriber...
> nats-sub -s localhost:4333 ">"
Listening on [>]

# ...and in another session, publish through a different cluster.
> nats-pub -s localhost:4444 foo bar
Published [foo] : 'bar'
```

## `Gateway` Configuration Block

| Property | Description |
| :--- | :--- |
| `advertise` | Hostport `<host>:<port>` to advertise to other gateways. |
| `authorization` | Authorization block (same as other nats-server `authorization` configuration). |
| `connect_retries` | Number of times the server will try to connect to a discovered gateway. |
| `gateways` | List of gateway entries - see below. |
| `host` | Interface where the gateway will listen for incoming gateway connections. |
| `listen` | Combines `host` and `port` as `<host>:<port>`. |
| `name` | Name for this cluster. All gateways belonging to the same cluster should specify the same name. |
| `port` | Port where the gateway will listen for incoming gateway connections. |
| `reject_unknown` | If `true`, the gateway will reject connections from gateways that are not configured in `gateways`. |
| `tls` | TLS configuration block (same as other [nats-server `tls` configuration](../securing_nats/tls.md#tls-configuration)). |

### `Gateway` Entry

The `gateways` configuration block is a list of gateway entries with the following properties:

| Property | Description |
| :--- | :--- |
| `name` | Gateway name. |
| `url` | Hostport `<host>:<port>` describing where the remote gateway can be reached. If multiple IPs are returned, one is randomly selected. |
| `urls` | A list of `url`s. |
By using `urls` and an array, you can specify a list of endpoints that form part of a cluster, as below. A NATS server will pick one of those addresses randomly and only establish a single outbound gateway connection to one of the members of another cluster:

```text
gateway {
    name: "DC-A"
    listen: "localhost:7222"

    gateways: [
        {name: "DC-A", urls: ["nats://localhost:7222", "nats://localhost:7223", "nats://localhost:7224"]},
        {name: "DC-B", urls: ["nats://localhost:7332", "nats://localhost:7333", "nats://localhost:7334"]},
        {name: "DC-C", urls: ["nats://localhost:7442", "nats://localhost:7443", "nats://localhost:7444"]}
    ]
}
```
nats-server/configuration/leafnodes/README.md
# Leaf Nodes

A _Leaf Node_ allows an extension to a cluster or supercluster that bridges accounts and security domains. This is useful in IoT and edge scenarios, and when local server traffic should be low-RTT and local unless routed to the super cluster.

Leaf nodes leverage [accounts](../securing_nats/auth_intro/jwt_auth.md) and JWTs to enable a server to connect to another and filter messages as per the leaf node's account user configuration.

This effectively means that the leaf node clusters with the other server at an account level:

* Leaf node clients authenticate locally (or just connect if authentication is not required)
* Traffic between the leaf node and the cluster assumes the restrictions of the user configuration used to create the leaf connection:
  * Subjects that the user is allowed to publish are exported to the cluster.
  * Subjects the user is allowed to subscribe to are imported into the leaf node.

> Leaf nodes are an important component as a way to bridge traffic between local NATS servers you control and servers that are managed by a third party. Synadia's [NATS Global Service (NGS)](https://www.synadia.com/) allows accounts to use leaf nodes, but gain accessibility to the global network to inexpensively connect geographically distributed servers or small clusters.

[LeafNode Configuration Options](leafnode_conf.md)
## LeafNode Configuration Tutorial

Create a new operator called "O":

```text
> nsc add operator -n O
Generated operator key - private key stored "~/.nkeys/O/O.nk"
Success! - added operator "O"
```

Create an account called "A":

```text
> nsc add account -n A
Generated account key - private key stored "~/.nkeys/O/accounts/A/A.nk"
Success! - added account "A"
```

Create a user called "leaf":

```text
> nsc add user -n leaf
Generated user key - private key stored "~/.nkeys/O/accounts/A/users/leaf.nk"
Generated user creds file "~/.nkeys/O/accounts/A/users/leaf.creds"
Success! - added user "leaf" to "A"
```

Let's create a second user called "nolimit":

```text
> nsc add user -n nolimit
Generated user key - private key stored "~/.nkeys/O/accounts/A/users/nolimit.nk"
Generated user creds file "~/.nkeys/O/accounts/A/users/nolimit.creds"
Success! - added user "nolimit" to "A"
```

Start a nats-account-server:

```text
> nats-account-server -nsc ~/.nsc/nats/O
```

Create the server configuration file (server.conf) with the following contents:

```text
operator: /Users/synadia/.nsc/nats/O/O.jwt
resolver: URL(http://localhost:9090/jwt/v1/accounts/)
leafnodes {
    listen: "127.0.0.1:4000"
}
```

The server configuration naturally requires an `operator` and `resolver` to deal with the JWT authentication and accounts. In addition, the `leafnodes` configuration exposes a `listen` address where the server will receive leaf nodes - in this case, on localhost port 4000.

Start the nats-server:

```text
> nats-server -c server.conf
```

Create a subscriber on the server:

```text
> nats-sub -creds ~/.nkeys/O/accounts/A/users/nolimit.creds ">"
Listening on [>]
```

Create the leaf server configuration (leaf.conf) with the following contents:

```text
port: 4111
leafnodes {
    remotes = [
        {
            url: "nats-leaf://localhost:4000"
            credentials: "/Users/synadia/.nkeys/O/accounts/A/users/leaf.creds"
        },
    ]
}
```

Note the leaf node configuration lists a number of `remotes`. The `url` specifies the port on the server where leaf node connections are allowed. The `credentials` configuration specifies the path to a user's credentials file.

The leaf server configuration (leaf.conf) also supports multiple URLs with `urls`, such as the following:

```text
port: 4111
leafnodes {
    remotes = [
        {
            urls: ["nats-leaf://host1:4000", "nats-leaf://host2:4000"]
            credentials: "/Users/synadia/.nkeys/O/accounts/A/users/leaf.creds"
        },
    ]
}
```

Create a subscriber on the leaf:

```text
> nats-sub -s localhost:4111 ">"
Listening on [>]
```

Publish a message on the server:

```text
> nats-pub -creds ~/.nkeys/O/accounts/A/users/leaf.creds foo foo
Published [foo] : 'foo'
```

Both the server and leaf subscribers print:

```text
[#1] Received on [foo] : 'foo'
```
Publish a message on the leaf:

```text
> nats-pub -s localhost:4111 bar bar
Published [bar] : 'bar'
```

Both the server and leaf subscribers print:

```text
[#2] Received on [bar] : 'bar'
```

The leaf forwards all local messages to the server, where members of the account are able to receive them. Messages published on the server by the account are forwarded to the leaf, where subscribers are able to receive them.

## Leaf Authorization

In some cases you may want to restrict what messages can be exported from the leaf node or imported from the account. For leaf servers this is simply a user account configuration, as users can have specific permissions on what subjects to publish and/or subscribe to.

Let's put some restrictions on the `leaf` user so that it can only publish to `foo` and subscribe to `bar`:

```text
> nsc edit user -n leaf --allow-pub foo --allow-sub bar
Updated user creds file "~/.nkeys/O/accounts/A/users/leaf.creds"
Success! - edited user "leaf" in account "A"

-----BEGIN NATS ACCOUNT JWT-----
eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJVSk9RTFVSTUVFTVZXQVpVT0E2VlE1UVQ0UEdIV081WktDWlBLVFBJQVpLSldaSTJGNVpRIiwiaWF0IjoxNTU2ODM1MzU4LCJpc3MiOiJBRDU3TUZOQklLTzNBRFU2VktMRkVYQlBVQjdFWlpLU0tVUDdZTzNWVUFJTUlBWUpVNE1EM0NDUiIsIm5hbWUiOiJsZWFmIiwic3ViIjoiVUNEMlpSVUs1UE8yMk02MlNWVTZITzZJS01BVERDUlJYVVVGWDRRU1VTWFdRSDRHU1Y3RDdXVzMiLCJ0eXBlIjoidXNlciIsIm5hdHMiOnsicHViIjp7ImFsbG93IjpbImZvbyJdfSwic3ViIjp7ImFsbG93IjpbImJhciJdfX19.IeqSylTaisMQMH3Ih_0G8LLxoxe0gIClpxTm3B_ys_XwL9TtPIW-M2qdaYQZ_ZmR2glMvYK4EJ6J8RQ1UZdGAg
------END NATS ACCOUNT JWT------

> nsc describe user -n leaf
╭───────────────────────────────────────────╮
│                   User                    │
├─────────────────┬─────────────────────────┤
│ Name            │ leaf                    │
│ User ID         │ UCD2ZRUK5PO2            │
│ Issuer ID       │ AD57MFNBIKO3            │
│ Issued          │ 2019-05-02 22:15:58 UTC │
│ Expires         │                         │
├─────────────────┼─────────────────────────┤
│ Pub Allow       │ foo                     │
│ Sub Allow       │ bar                     │
├─────────────────┼─────────────────────────┤
│ Max Messages    │ Unlimited               │
│ Max Msg Payload │ Unlimited               │
│ Network Src     │ Any                     │
│ Time            │ Any                     │
╰─────────────────┴─────────────────────────╯
```

As we can see from inspecting the user, the restrictions have been applied.

Let's repeat the experiment. This time we'll restart the leaf server so that the new user configuration is applied:

```text
> nats-server -c leaf.conf
```

You should see a new message on the leaf subscriber:

```text
Reconnected [nats://localhost:4111]
```

Let's publish a message on the leaf:

```text
> nats-pub -s localhost:4111 foo foo
Published [foo] : 'foo'
```

You should see a new message in all your subscriber windows:

```text
[#3] Received on [foo] : 'foo'
```

Now publish a new message on the leaf, but this time with the subject `bar`:

```text
> nats-pub -s localhost:4111 bar bar
Published [bar] : 'bar'
```

This time only the leaf subscriber will print `[#4] Received on [bar]: 'bar'`; the account subscriber won't print it, because the leaf user doesn't have permission to publish on 'bar'.

Let's try the flow of messages from the server to the leaf node:

```text
> nats-pub -creds ~/.nkeys/O/accounts/A/users/leaf.creds foo foo
Published [foo] : 'foo'
```

Only the server subscriber will receive the message, as expected.

Repeat the publish, this time with 'bar':

```text
> nats-pub -creds ~/.nkeys/O/accounts/A/users/leaf.creds bar bar
Published [bar] : 'bar'
```

Both subscribers will receive the message, as expected.

As you can see:

* Messages to and from the leaf node to the server are limited by the user associated with the leaf node connection.
* Messages within the leaf node are as per the server's authentication and authorization configuration.
nats-server/configuration/leafnodes/leafnode_conf.md
# Configuration

| Property | Description |
| :--- | :--- |
| `advertise` | Hostport `<host>:<port>` to advertise to other servers. |
| `authorization` | Authorization block (same as other nats-server `authorization` configuration). |
| `host` | Interface where the server will listen for incoming leafnode connections. |
| `listen` | Combines `host` and `port` as `<host>:<port>`. |
| `no_advertise` | If `true`, the leafnode shouldn't be advertised. |
| `port` | Port where the server will listen for incoming leafnode connections. |
| `remotes` | List of `remote` entries specifying servers where leafnode client connections can be made. |
| `tls` | TLS configuration block (same as other nats-server `tls` configuration). |

## LeafNode `remotes` Entry Block

| Property | Description |
| :--- | :--- |
| `url` | Leafnode URL (URL protocol should be `nats-leaf`). |
| `urls` | Leafnode URL array. Supports multiple URLs for discovery, e.g., `urls: ["nats-leaf://host1:7422", "nats-leaf://host2:7422"]` |
| `account` | Account public key identifying the leafnode. Account must be defined locally. |
| `credentials` | Credential file for connecting to the leafnode server. |
| `tls` | A TLS configuration block. The leafnode client will use the specified TLS certificates when connecting/authenticating. |

## `tls` Configuration Block

| Property | Description |
| :--- | :--- |
| `cert_file` | TLS certificate file. |
| `key_file` | TLS certificate key file. |
| `ca_file` | TLS certificate authority file. |
| `insecure` | Skip certificate verification. |
| `verify` | If `true`, require and verify client certificates. |
| `verify_and_map` | If `true`, require and verify client certificates and map certificate values for authentication purposes. |
| `cipher_suites` | When set, only the specified TLS cipher suites will be allowed. Values must match those of the Go version used to build the server. |
| `curve_preferences` | List of TLS cipher curves to use, in order. |
| `timeout` | TLS handshake timeout in fractional seconds. |
nats-server/configuration/logging.md
# Logging

## Configuring Logging

The NATS server provides various logging options that you can set via the command line or the configuration file.

### Command Line Options

The following logging options are supported:

```text
-l, --log FILE                   File to redirect log output.
-T, --logtime                    Timestamp log entries (default is true).
-s, --syslog                     Enable syslog as log method.
-r, --remote_syslog              Syslog server address.
-D, --debug                      Enable debugging output.
-V, --trace                      Trace the raw protocol.
-DV                              Debug and Trace.
```

#### Debug and trace

The `-DV` flag enables trace and debug for the server.

```bash
nats-server -DV -m 8222 -user foo -pass bar
```

#### Log file redirect

```bash
nats-server -DV -m 8222 -l nats.log
```

#### Timestamp

If `-T false`, then log entries are not timestamped. Default is true.

#### Syslog

You can configure syslog with `UDP`:

```bash
nats-server -s udp://localhost:514
```

or `syslog:`

```bash
nats-server -r syslog://<hostname>:<port>
```

For example:

```bash
syslog://logs.papertrailapp.com:26900
```

### Using the Configuration File

All of these settings are available in the configuration file as well.

```text
debug: false
trace: true
logtime: false
log_file: "/tmp/nats-server.log"
```

### Log Rotation with logrotate

The NATS server does not provide tools to manage log files, but it does include mechanisms that make log rotation simple. We can use this mechanism with [logrotate](https://github.com/logrotate/logrotate), a simple standard Linux utility to rotate logs that is available on most distributions such as Debian, Ubuntu, and RedHat (CentOS).

For example, you could configure `logrotate` with:

```text
/path/to/nats-server.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
    postrotate
        kill -SIGUSR1 `cat /var/run/nats-server.pid`
    endscript
}
```

The first line specifies the location that the subsequent lines will apply to.

The rest of the file specifies that the logs will rotate daily ("daily" option) and that 30 older copies will be preserved ("rotate" option). Other options are described in the [logrotate documentation](https://linux.die.net/man/8/logrotate).

The "postrotate" section tells the NATS server to reload the log files once the rotation is complete. The command `` kill -SIGUSR1 `cat /var/run/nats-server.pid` `` does not kill the NATS server process; instead it sends the server a signal causing it to reload its log files. New requests will then be logged to the refreshed log file.

The `/var/run/nats-server.pid` file is where the NATS server stores the master process's pid.
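The same log reopen can also be triggered through the server's own signal helper rather than a raw `kill`; a quick sketch, assuming the pid file is in its default location:

```bash
# Send SIGUSR1 by hand...
kill -SIGUSR1 "$(cat /var/run/nats-server.pid)"

# ...or let nats-server locate the process and deliver the
# equivalent log-reopen signal for you.
nats-server --signal reopen
```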
## Some Logging Notes

* The NATS server, in verbose mode, will log the receipt of `UNSUB` messages, but this does not indicate the subscription is gone, only that the message was received. The `DELSUB` message in the log can be used to determine when the actual subscription removal has taken place.
nats-server/configuration/monitoring.md
# Monitoring

## Monitoring NATS

To monitor the NATS messaging system, `nats-server` provides a lightweight HTTP server on a dedicated monitoring port. The monitoring server provides several endpoints, providing statistics and other information about the following:

* [General Server Information](monitoring.md#general-information)
* [Connections](monitoring.md#connection-information)
* [Routing](monitoring.md#route-information)
* [Gateways](monitoring.md#gateway-information)
* [Leaf Nodes](monitoring.md#leaf-nodes-information)
* [Subscription Routing](monitoring.md#subscription-routing-information)

All endpoints return a JSON object.

The NATS monitoring endpoints support JSONP and CORS, making it easy to create single-page monitoring web applications.

### Enabling monitoring from the command line

To enable the monitoring server, start the NATS server with the monitoring flag `-m` and the monitoring port, or turn it on in the [configuration file](./#configuration-properties).

```text
-m, --http_port PORT             HTTP PORT for monitoring
-ms, --https_port PORT           Use HTTPS PORT for monitoring
```

Example:

```bash
$ nats-server -m 8222
[4528] 2019/06/01 20:09:58.572939 [INF] Starting nats-server version 2.0.0
[4528] 2019/06/01 20:09:58.573007 [INF] Starting http monitor on port 8222
[4528] 2019/06/01 20:09:58.573071 [INF] Listening for client connections on 0.0.0.0:4222
[4528] 2019/06/01 20:09:58.573090 [INF] nats-server is ready
```

To test, run `nats-server -m 8222`, then go to [http://demo.nats.io:8222/](http://demo.nats.io:8222/)

### Enable monitoring from the configuration file

You can also enable monitoring using the configuration file as follows:

```yaml
http_port: 8222
```

For example, the endpoint [http://demo.nats.io:8222/varz](http://demo.nats.io:8222/varz) reports various general statistics.

## Monitoring endpoints

The following sections describe each supported monitoring endpoint: `varz`, `connz`, `routez`, `subsz`, and `gatewayz`. No arguments are required; however, arguments let you tailor monitoring to your environment and tooling.

### General Information

The `/varz` endpoint returns general information about the server state and configuration.

**Endpoint:** `http://server:port/varz`

| Result | Return Code |
| :--- | :--- |
| Success | 200 (OK) |
| Error | 400 (Bad Request) |

#### Arguments

N/A

#### Example

[http://demo.nats.io:8222/varz](http://demo.nats.io:8222/varz)

#### Response

```javascript
{
  "server_id": "NACDVKFBUW4C4XA24OOT6L4MDP56MW76J5RJDFXG7HLABSB46DCMWCOW",
  "version": "2.0.0",
  "proto": 1,
  "go": "go1.12",
  "host": "0.0.0.0",
  "port": 4222,
  "max_connections": 65536,
  "ping_interval": 120000000000,
  "ping_max": 2,
  "http_host": "0.0.0.0",
  "http_port": 8222,
  "https_port": 0,
  "auth_timeout": 1,
  "max_control_line": 4096,
  "max_payload": 1048576,
  "max_pending": 67108864,
  "cluster": {},
  "gateway": {},
  "leaf": {},
  "tls_timeout": 0.5,
  "write_deadline": 2000000000,
  "start": "2019-06-24T14:24:43.928582-07:00",
  "now": "2019-06-24T14:24:46.894852-07:00",
  "uptime": "2s",
  "mem": 9617408,
  "cores": 4,
  "cpu": 0,
  "connections": 0,
  "total_connections": 0,
  "routes": 0,
  "remotes": 0,
  "leafnodes": 0,
  "in_msgs": 0,
  "out_msgs": 0,
  "in_bytes": 0,
  "out_bytes": 0,
  "slow_consumers": 0,
  "subscriptions": 0,
  "http_req_stats": {
    "/": 0,
    "/connz": 0,
    "/gatewayz": 0,
    "/routez": 0,
    "/subsz": 0,
    "/varz": 1
  },
  "config_load_time": "2019-06-24T14:24:43.928582-07:00"
}
```
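Because every endpoint returns plain JSON, it is easy to scrape from code as well as from a browser. Here is a minimal Go sketch that fetches `/varz` and decodes a handful of its fields (the struct deliberately covers only the fields used here):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// varz models a few fields of the /varz response; the endpoint
// returns many more, which the decoder simply ignores.
type varz struct {
	ServerID    string `json:"server_id"`
	Version     string `json:"version"`
	Connections int    `json:"connections"`
	InMsgs      int64  `json:"in_msgs"`
	OutMsgs     int64  `json:"out_msgs"`
}

func main() {
	resp, err := http.Get("http://demo.nats.io:8222/varz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var v varz
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s (v%s): %d connections, %d msgs in, %d msgs out\n",
		v.ServerID, v.Version, v.Connections, v.InMsgs, v.OutMsgs)
}
```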
|
||||
### Connection Information
|
||||
|
||||
The `/connz` endpoint reports more detailed information on current and recently closed connections. It uses a paging mechanism which defaults to 1024 connections.
|
||||
|
||||
**Endpoint:** `http://server:port/connz`
|
||||
|
||||
| Result | Return Code |
|
||||
| :--- | :--- |
|
||||
| Success | 200 \(OK\) |
|
||||
| Error | 400 \(Bad Request\) |
|
||||
|
||||
#### Arguments
|
||||
|
||||
| Argument | Values | Description |
|
||||
| :--- | :--- | :--- |
|
||||
| sort | \(_see sort options_\) | Sorts the results. Default is connection ID. |
|
||||
| auth | true, 1, false, 0 | Include username. Default is false. |
|
||||
| subs | true, 1, false, 0 | Include subscriptions. Default is false. |
|
||||
| offset | number > 0 | Pagination offset. Default is 0. |
|
||||
| limit | number > 0 | Number of results to return. Default is 1024. |
|
||||
| cid | number, valid id | Return a connection by it's id |
|
||||
| state | open, \*closed, any | Return connections of partular state. Default is open. |
|
||||
|
||||
_The server will default to holding the last 10,000 closed connections._
|
||||
|
||||
**Sort Options**
|
||||
|
||||
| Option | Sort by |
|
||||
| :--- | :--- |
|
||||
| cid | Connection ID |
|
||||
| start | Connection start time, same as CID |
|
||||
| subs | Number of subscriptions |
|
||||
| pending | Amount of data in bytes waiting to be sent to client |
|
||||
| msgs\_to | Number of messages sent |
|
||||
| msgs\_from | Number of messages received |
|
||||
| bytes\_to | Number of bytes sent |
|
||||
| bytes\_from | Number of bytes received |
|
||||
| last | Last activity |
|
||||
| idle | Amount of inactivity |
|
||||
| uptime | Lifetime of the connection |
|
||||
| stop | Stop time for a closed connection |
|
||||
| reason | Reason for a closed connection |
|
||||
|
||||
#### Examples
|
||||
|
||||
Get up to 1024 connections: [http://demo.nats.io:8222/connz](http://demo.nats.io:8222/connz)
|
||||
|
||||
Control limit and offset: [http://demo.nats.io:8222/connz?limit=16&offset=128](http://demo.nats.io:8222/connz?limit=16&offset=128).
|
||||
|
||||
Get closed connection information: [http://demo.nats.io:8222/connz?state=closed](http://demo.nats.io:8222/connz?state=closed).
|
||||
|
||||
You can also report detailed subscription information on a per connection basis using subs=1. For example: [http://demo.nats.io:8222/connz?limit=1&offset=1&subs=1](http://demo.nats.io:8222/connz?limit=1&offset=1&subs=1).
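Since the endpoint is paged, a monitoring script typically walks pages with `offset` and `limit` until the reported `total` is exhausted. Here is a minimal sketch of that loop in Go \(standard library only\); the demo server URL and the page size of 16 are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// connz mirrors just the paging fields and a few connection fields
// from the /connz response shown below.
type connz struct {
	Total       int `json:"total"`
	Connections []struct {
		Cid  uint64 `json:"cid"`
		Name string `json:"name"`
	} `json:"connections"`
}

func main() {
	for offset := 0; ; {
		url := fmt.Sprintf("http://demo.nats.io:8222/connz?limit=16&offset=%d", offset)
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		var page connz
		if err := json.NewDecoder(resp.Body).Decode(&page); err != nil {
			panic(err)
		}
		resp.Body.Close()
		for _, c := range page.Connections {
			fmt.Println(c.Cid, c.Name)
		}
		offset += len(page.Connections)
		// Stop when all connections have been listed.
		if len(page.Connections) == 0 || offset >= page.Total {
			break
		}
	}
}
```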
#### Response

```javascript
{
  "server_id": "NACDVKFBUW4C4XA24OOT6L4MDP56MW76J5RJDFXG7HLABSB46DCMWCOW",
  "now": "2019-06-24T14:28:16.520365-07:00",
  "num_connections": 2,
  "total": 2,
  "offset": 0,
  "limit": 1024,
  "connections": [
    {
      "cid": 1,
      "ip": "127.0.0.1",
      "port": 49764,
      "start": "2019-06-24T14:27:25.94611-07:00",
      "last_activity": "2019-06-24T14:27:25.954046-07:00",
      "rtt": "275µs",
      "uptime": "50s",
      "idle": "50s",
      "pending_bytes": 0,
      "in_msgs": 0,
      "out_msgs": 0,
      "in_bytes": 0,
      "out_bytes": 0,
      "subscriptions": 1,
      "name": "NATS Sample Subscriber",
      "lang": "go",
      "version": "1.8.1",
      "subscriptions_list": [
        "hello.world"
      ]
    },
    {
      "cid": 2,
      "ip": "127.0.0.1",
      "port": 49767,
      "start": "2019-06-24T14:27:43.403923-07:00",
      "last_activity": "2019-06-24T14:27:43.406568-07:00",
      "rtt": "96µs",
      "uptime": "33s",
      "idle": "33s",
      "pending_bytes": 0,
      "in_msgs": 0,
      "out_msgs": 0,
      "in_bytes": 0,
      "out_bytes": 0,
      "subscriptions": 1,
      "name": "NATS Sample Subscriber",
      "lang": "go",
      "version": "1.8.1",
      "subscriptions_list": [
        "foo.bar"
      ]
    }
  ]
}
```
### Route Information

The `/routez` endpoint reports information on active routes for a cluster. The number of routes is expected to be low, so there is no paging mechanism for this endpoint.

**Endpoint:** `http://server:port/routez`

| Result | Return Code |
| :--- | :--- |
| Success | 200 \(OK\) |
| Error | 400 \(Bad Request\) |

#### Arguments

| Argument | Values | Description |
| :--- | :--- | :--- |
| subs | true, 1, false, 0 | Include internal subscriptions. Default is false. |

As noted above, the `routez` endpoint supports the same `subs` argument as the `/connz` endpoint. For example: [http://demo.nats.io:8222/routez?subs=1](http://demo.nats.io:8222/routez?subs=1)

#### Example

* Get route information: [http://demo.nats.io:8222/routez?subs=1](http://demo.nats.io:8222/routez?subs=1)

#### Response

```javascript
{
  "server_id": "NACDVKFBUW4C4XA24OOT6L4MDP56MW76J5RJDFXG7HLABSB46DCMWCOW",
  "now": "2019-06-24T14:29:16.046656-07:00",
  "num_routes": 1,
  "routes": [
    {
      "rid": 1,
      "remote_id": "de475c0041418afc799bccf0fdd61b47",
      "did_solicit": true,
      "ip": "127.0.0.1",
      "port": 61791,
      "pending_size": 0,
      "in_msgs": 0,
      "out_msgs": 0,
      "in_bytes": 0,
      "out_bytes": 0,
      "subscriptions": 0
    }
  ]
}
```
### Gateway Information

The `/gatewayz` endpoint reports information about gateways used to create a NATS supercluster. Like routes, the number of gateways is expected to be low, so there is no paging mechanism for this endpoint.

**Endpoint:** `http://server:port/gatewayz`

| Result | Return Code |
| :--- | :--- |
| Success | 200 \(OK\) |
| Error | 400 \(Bad Request\) |

#### Arguments

| Argument | Values | Description |
| :--- | :--- | :--- |
| accs | true, 1, false, 0 | Include account information. Default is false. |
| gw\_name | string | Return only remote gateways with this name. |
| acc\_name | string | Limit the list of accounts to this account name. |

#### Examples

* Retrieve Gateway Information: [http://demo.nats.io:8222/gatewayz](http://demo.nats.io:8222/gatewayz)

#### Response

```javascript
{
  "server_id": "NANVBOU62MDUWTXWRQ5KH3PSMYNCHCEUHQV3TW3YH7WZLS7FMJE6END6",
  "now": "2019-07-24T18:02:55.597398-06:00",
  "name": "region1",
  "host": "2601:283:4601:1350:1895:efda:2010:95a1",
  "port": 4501,
  "outbound_gateways": {
    "region2": {
      "configured": true,
      "connection": {
        "cid": 7,
        "ip": "127.0.0.1",
        "port": 5500,
        "start": "2019-07-24T18:02:48.765621-06:00",
        "last_activity": "2019-07-24T18:02:48.765621-06:00",
        "uptime": "6s",
        "idle": "6s",
        "pending_bytes": 0,
        "in_msgs": 0,
        "out_msgs": 0,
        "in_bytes": 0,
        "out_bytes": 0,
        "subscriptions": 0,
        "name": "NCXBIYWT7MV7OAQTCR4QTKBN3X3HDFGSFWTURTCQ22ZZB6NKKJPO7MN4"
      }
    },
    "region3": {
      "configured": true,
      "connection": {
        "cid": 5,
        "ip": "::1",
        "port": 6500,
        "start": "2019-07-24T18:02:48.764685-06:00",
        "last_activity": "2019-07-24T18:02:48.764685-06:00",
        "uptime": "6s",
        "idle": "6s",
        "pending_bytes": 0,
        "in_msgs": 0,
        "out_msgs": 0,
        "in_bytes": 0,
        "out_bytes": 0,
        "subscriptions": 0,
        "name": "NCVS7Q65WX3FGIL2YQRLI77CE6MQRWO2Y453HYVLNMBMTVLOKMPW7R6K"
      }
    }
  },
  "inbound_gateways": {
    "region2": [
      {
        "configured": false,
        "connection": {
          "cid": 9,
          "ip": "::1",
          "port": 52029,
          "start": "2019-07-24T18:02:48.76677-06:00",
          "last_activity": "2019-07-24T18:02:48.767096-06:00",
          "uptime": "6s",
          "idle": "6s",
          "pending_bytes": 0,
          "in_msgs": 0,
          "out_msgs": 0,
          "in_bytes": 0,
          "out_bytes": 0,
          "subscriptions": 0,
          "name": "NCXBIYWT7MV7OAQTCR4QTKBN3X3HDFGSFWTURTCQ22ZZB6NKKJPO7MN4"
        }
      }
    ],
    "region3": [
      {
        "configured": false,
        "connection": {
          "cid": 4,
          "ip": "::1",
          "port": 52025,
          "start": "2019-07-24T18:02:48.764577-06:00",
          "last_activity": "2019-07-24T18:02:48.764994-06:00",
          "uptime": "6s",
          "idle": "6s",
          "pending_bytes": 0,
          "in_msgs": 0,
          "out_msgs": 0,
          "in_bytes": 0,
          "out_bytes": 0,
          "subscriptions": 0,
          "name": "NCVS7Q65WX3FGIL2YQRLI77CE6MQRWO2Y453HYVLNMBMTVLOKMPW7R6K"
        }
      },
      {
        "configured": false,
        "connection": {
          "cid": 8,
          "ip": "127.0.0.1",
          "port": 52026,
          "start": "2019-07-24T18:02:48.766173-06:00",
          "last_activity": "2019-07-24T18:02:48.766999-06:00",
          "uptime": "6s",
          "idle": "6s",
          "pending_bytes": 0,
          "in_msgs": 0,
          "out_msgs": 0,
          "in_bytes": 0,
          "out_bytes": 0,
          "subscriptions": 0,
          "name": "NCKCYK5LE3VVGOJQ66F65KA27UFPCLBPX4N4YOPOXO3KHGMW24USPCKN"
        }
      }
    ]
  }
}
```
### Leaf Nodes Information

The `/leafz` endpoint reports detailed information about leaf node connections.

**Endpoint:** `http://server:port/leafz`

| Result | Return Code |
| :--- | :--- |
| Success | 200 \(OK\) |
| Error | 400 \(Bad Request\) |

#### Arguments

| Argument | Values | Description |
| :--- | :--- | :--- |
| subs | true, 1, false, 0 | Include internal subscriptions. Default is false. |

As noted above, the `leafz` endpoint supports the same `subs` argument as the `/connz` endpoint. For example: [http://demo.nats.io:8222/leafz?subs=1](http://demo.nats.io:8222/leafz?subs=1)

#### Example

* Get leaf nodes information: [http://demo.nats.io:8222/leafz?subs=1](http://demo.nats.io:8222/leafz?subs=1)

#### Response

```javascript
{
  "server_id": "NC2FJCRMPBE5RI5OSRN7TKUCWQONCKNXHKJXCJIDVSAZ6727M7MQFVT3",
  "now": "2019-08-27T09:07:05.841132-06:00",
  "leafnodes": 1,
  "leafs": [
    {
      "account": "$G",
      "ip": "127.0.0.1",
      "port": 6223,
      "rtt": "200µs",
      "in_msgs": 0,
      "out_msgs": 10000,
      "in_bytes": 0,
      "out_bytes": 1280000,
      "subscriptions": 1,
      "subscriptions_list": [
        "foo"
      ]
    }
  ]
}
```
### Subscription Routing Information

The `/subsz` endpoint reports detailed information about the current subscriptions and the routing data structure. It is not normally used.

**Endpoint:** `http://server:port/subsz`

| Result | Return Code |
| :--- | :--- |
| Success | 200 \(OK\) |
| Error | 400 \(Bad Request\) |

#### Arguments

| Argument | Values | Description |
| :--- | :--- | :--- |
| subs | true, 1, false, 0 | Include subscriptions. Default is false. |
| offset | integer > 0 | Pagination offset. Default is 0. |
| limit | integer > 0 | Number of results to return. Default is 1024. |
| test | subject | Test whether a subscription exists. |

#### Example

* Get subscription routing information: [http://demo.nats.io:8222/subsz](http://demo.nats.io:8222/subsz)

#### Response

```javascript
{
  "num_subscriptions": 2,
  "num_cache": 0,
  "num_inserts": 2,
  "num_removes": 0,
  "num_matches": 0,
  "cache_hit_rate": 0,
  "max_fanout": 0,
  "avg_fanout": 0
}
```
## Creating Monitoring Applications

NATS monitoring endpoints support [JSONP](https://en.wikipedia.org/wiki/JSONP) and [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing#How_CORS_works), so you can easily create single-page web applications for monitoring. To do this, you simply pass the `callback` query parameter to any endpoint.

For example:

```bash
http://demo.nats.io:8222/connz?callback=cb
```

Here is a JQuery example implementation:

```javascript
$.getJSON('http://demo.nats.io:8222/connz?callback=?', function(data) {
  console.log(data);
});
```
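Outside of a browser you don't need JSONP at all; the endpoints serve plain JSON. As a hypothetical starting point for a custom monitoring tool, this small Go poller reads a few `/varz` fields \(the URL and polling interval are assumptions\):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// varz picks out a handful of fields from the /varz response shown earlier.
type varz struct {
	ServerID      string `json:"server_id"`
	Connections   int    `json:"connections"`
	InMsgs        int64  `json:"in_msgs"`
	OutMsgs       int64  `json:"out_msgs"`
	SlowConsumers int64  `json:"slow_consumers"`
}

func main() {
	for {
		resp, err := http.Get("http://demo.nats.io:8222/varz")
		if err != nil {
			panic(err)
		}
		var v varz
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Printf("%s conns=%d in=%d out=%d slow=%d\n",
			v.ServerID, v.Connections, v.InMsgs, v.OutMsgs, v.SlowConsumers)
		time.Sleep(5 * time.Second)
	}
}
```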
## Monitoring Tools

In addition to writing custom monitoring tools, you can monitor nats-server with Prometheus. The [Prometheus NATS Exporter](https://github.com/nats-io/prometheus-nats-exporter) allows you to configure the metrics you want to observe and store in Prometheus. There's a sample [Grafana](https://grafana.com) dashboard that you can use to visualize the server metrics.
8
nats-server/configuration/securing_nats/README.md
Normal file
@@ -0,0 +1,8 @@
# Securing NATS

The NATS server provides several forms of security:

* Connections can be [_encrypted_ with TLS](tls.md)
* Client connections can require [_authentication_](auth_intro/)
* Clients can require [_authorization_](authorization.md) for the subjects they publish or subscribe to
38
nats-server/configuration/securing_nats/auth_intro/README.md
Normal file
@@ -0,0 +1,38 @@
# Authentication

The NATS server provides various ways of authenticating clients:

* [Token Authentication](tokens.md)
* [Username/Password credentials](username_password.md)
* [TLS Certificate](tls_mutual_auth.md)
* [NKEY with Challenge](nkey_auth.md)
* [Accounts](accounts.md)
* [JWTs](jwt_auth.md)

Authentication deals with allowing a NATS client to connect to the server. Except for JWT authentication, authentication and authorization are configured in the `authorization` section of the configuration.

## Authorization Map

The `authorization` block provides _authentication_ configuration as well as _authorization_:

| Property | Description |
| :--- | :--- |
| [`token`](tokens.md) | Specifies a global token that can be used to authenticate to the server \(exclusive of user and password\) |
| [`user`](username_password.md) | Specifies a single _global_ user name for clients to the server \(exclusive of token\) |
| [`password`](username_password.md) | Specifies a single _global_ password for clients to the server \(exclusive of `token`\) |
| `users` | A list of user configuration maps |
| `timeout` | Maximum number of seconds to wait for client authentication |

For multiple username and password credentials, specify a `users` list.

## User Configuration Map

A `user` configuration map specifies credentials and permissions options for a single user:

| Property | Description |
| :--- | :--- |
| [`user`](username_password.md) | username for client authentication |
| [`password`](username_password.md) | password for the user entry |
| [`nkey`](nkey_auth.md) | public nkey identifying a user |
| [`permissions`](../authorization.md) | permissions map configuring subjects accessible to the user |
169
nats-server/configuration/securing_nats/auth_intro/accounts.md
Normal file
@@ -0,0 +1,169 @@
# Accounts

## Accounts

_Accounts_ expand on the authentication foundation. With traditional authentication \(except for JWT authentication\), all clients can publish and subscribe to anything unless explicitly configured otherwise. To protect clients and information, you have to carefully carve up the subject space and set client permissions.

_Accounts_ allow the grouping of clients, _isolating_ them from clients in other accounts, thus enabling _multi-tenancy_ in the server. With accounts, the subject space is not globally shared, greatly simplifying the messaging environment. Instead of devising complicated subject name carving patterns, clients can use short subjects without explicit authorization rules.

Accounts are configured in the `accounts` map. The contents of an account entry include:

| Property | Description |
| :--- | :--- |
| `users` | a list of [user configuration maps](./#user-configuration-map) |
| `exports` | a list of export maps |
| `imports` | a list of import maps |

The `accounts` entry is a map, where each key is an account name.

```text
accounts: {
  A: {
    users: [
      {user: a, password: a}
    ]
  },
  B: {
    users: [
      {user: b, password: b}
    ]
  },
}
```

> In the most straightforward configuration above you have an account named `A` which has a single user identified by the username `a` and the password `a`, and an account named `B` with a user identified by the username `b` and the password `b`.
>
> These two accounts are isolated from each other. Messages published by users in `A` are not visible to users in `B`.
>
> The user configuration map is the same as any other NATS [user configuration map](./#user-configuration-map). You can use:

* username/password
* nkeys
* and add permissions

> While the name _account_ implies one or more users, it is much simpler and enlightening to think of one account as a messaging container for one application. Users in the account are simply the minimum number of services that must work together to provide some functionality. In simpler terms, more accounts with few \(even one\) clients is a better design topology than a large account with many users and a complex authorization configuration.
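As a sketch of that isolation in practice, the following hypothetical Go program \(assuming the `github.com/nats-io/nats.go` client and a local server started with the two-account configuration above\) publishes as user `a` while subscribed as user `b` on the same subject; the subscriber never receives the message:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	ncA, err := nats.Connect("nats://localhost:4222", nats.UserInfo("a", "a"))
	if err != nil {
		log.Fatal(err)
	}
	defer ncA.Close()
	ncB, err := nats.Connect("nats://localhost:4222", nats.UserInfo("b", "b"))
	if err != nil {
		log.Fatal(err)
	}
	defer ncB.Close()

	// Both accounts can use the same subject without clashing.
	ncB.Subscribe("greeting", func(m *nats.Msg) {
		fmt.Println("B received:", string(m.Data)) // never fires for A's message
	})
	ncB.Flush()

	ncA.Publish("greeting", []byte("hello from A"))
	ncA.Flush()

	time.Sleep(time.Second) // nothing is printed: accounts A and B are isolated
}
```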
### Exporting and Importing

Messaging exchange between different accounts is enabled by _exporting_ streams and services from one account and _importing_ them into another. Each account controls what is exported and imported.

The `exports` configuration list enables you to define the services and streams that others can import. Services and streams are expressed as an [Export configuration map](accounts.md#export-configuration-map).

### Streams

Streams are messages your application publishes. Importing applications won't be able to make requests from your applications but will be able to consume messages you generate.

### Services

Services are messages your application can consume and act on, enabling other accounts to make requests that are fulfilled by your account.

### Export Configuration Map

The export configuration map binds a subject for use as a `service` or `stream` and optionally defines specific accounts that can import the stream or service. Here are the supported configuration properties:

| Property | Description |
| :--- | :--- |
| `stream` | A subject or subject with wildcards that the account will publish. \(exclusive of `service`\) |
| `service` | A subject or subject with wildcards that the account will subscribe to. \(exclusive of `stream`\) |
| `accounts` | A list of account names that can import the stream or service. If not specified, the service or stream is public and any account can import it. |

Here are some example exports:

```text
accounts: {
  A: {
    users: [
      {user: a, password: a}
    ]
    exports: [
      {stream: puba.>}
      {service: pubq.>}
      {stream: b.>, accounts: [B]}
      {service: q.b, accounts: [B]}
    ]
  }
  ...
}
```

Here's what `A` is exporting:

* a public stream on the wildcard subject `puba.>`
* a public service on the wildcard subject `pubq.>`
* a stream to account `B` on the wildcard subject `b.>`
* a service to account `B` on the subject `q.b`
## Source Configuration Map

The _source configuration map_ describes an export from a remote account by specifying the `account` and `subject` of the export being imported. This map is embedded in the [import configuration map](accounts.md#import-configuration-map):

| Property | Description |
| :--- | :--- |
| `account` | Account name owning the export. |
| `subject` | The subject under which the stream or service is made accessible to the importing account |

### Import Configuration Map

An import enables an account to consume streams published by another account or make requests to services implemented by another account. All imports require a corresponding export on the exporting account. Accounts cannot do self-imports.

| Property | Description |
| :--- | :--- |
| `stream` | Stream import source configuration. \(exclusive of `service`\) |
| `service` | Service import source configuration \(exclusive of `stream`\) |
| `prefix` | A local subject prefix mapping for the imported stream. |
| `to` | A local subject mapping for the imported service. |

The `prefix` and `to` options allow you to remap the subject that is used locally to receive stream messages or publish service requests.

```text
accounts: {
  A: {
    users: [
      {user: a, password: a}
    ]
    exports: [
      {stream: puba.>}
      {service: pubq.>}
      {stream: b.>, accounts: [B]}
      {service: q.b, accounts: [B]}
    ]
  },
  B: {
    users: [
      {user: b, password: b}
    ]
    imports: [
      {stream: {account: A, subject: b.>}}
      {service: {account: A, subject: q.b}}
    ]
  }
  C: {
    users: [
      {user: c, password: c}
    ]
    imports: [
      {stream: {account: A, subject: puba.>}, prefix: from_a}
      {service: {account: A, subject: pubq.C}, to: Q}
    ]
  }
}
```

Account `B` imports:

* the private stream from `A` that only `B` can receive on `b.>`
* the private service from `A` that only `B` can send requests to on `q.b`

Account `C` imports the public service and stream from `A`, but also:

* remaps the `puba.>` stream to be locally available under `from_a.puba.>`. The messages will have their original subjects prefixed by `from_a`.
* remaps the `pubq.C` service to be locally available under `Q`. Account `C` only needs to publish to `Q` locally.

It is important to reiterate that:

* stream `puba.>` from `A` is visible to all external accounts that import the stream.
* service `pubq.>` from `A` is available to all external accounts so long as they know the full subject of where to send the request. Typically an account will export a wildcard service but then coordinate with a client account on specific subjects where requests will be answered. In our example, account `C` accesses the service on `pubq.C` \(but has mapped it for simplicity to `Q`\).
* stream `b.>` is private; only account `B` can receive messages from the stream.
* service `q.b` is private; only account `B` can send requests to the service.
* When `C` publishes a request to `Q`, local `C` clients will see `Q` messages. However, the server will remap `Q` to `pubq.C` and forward the requests to account `A`.
@@ -0,0 +1,20 @@
# Authentication Timeout

Much like the [`tls timeout` option](../tls.md#tls-timeout), authentication can specify a `timeout` value.

If a client doesn't authenticate to the server within the specified time, the server disconnects the client to prevent abuse.

Timeouts are specified in seconds \(and can be fractional\).

As with TLS timeouts, long timeouts can be an opportunity for abuse. If setting the authentication timeout, it is important to note that it should be longer than the `tls timeout` option, as the authentication timeout includes the TLS upgrade time.

```text
authorization: {
  timeout: 3
  users: [
    {user: a, password: b},
    {user: b, password: a}
  ]
}
```
@@ -0,0 +1,81 @@
# JWTs

JWT authentication builds on the [Accounts](accounts.md) and [NKeys](nkey_auth.md) authentication foundation to create a decentralized authentication and authorization model.

With other authentication mechanisms, configuration for identifying a user or an account is in the server configuration file. JWT authentication leverages [JSON Web Tokens \(JWT\)](https://jwt.io/) to describe the various entities supported. When a client connects, servers query for account JWTs and validate a trust chain. Users are not directly tracked by the server, but rather verified as belonging to an account. This enables the management of users without requiring server configuration updates.

Effectively, accounts provide for a distributed configuration paradigm. Previously each user \(or client\) needed to be known and authorized a priori in the server's configuration, requiring an administrator to modify and update server configurations. Accounts eliminate these chores.

## JSON Web Tokens

[JSON Web Tokens \(JWT\)](https://jwt.io/) are an open and industry standard [RFC7519](https://tools.ietf.org/html/rfc7519) method for representing claims securely between two parties.

Claims are a fancy way of asserting information on a _subject_. In this context, a _subject_ is the entity being described \(not a messaging subject\). Standard JWT claims are typically digitally signed and verified.

NATS further restricts JWTs:

* JWTs must _always_ be digitally signed, and only using [Ed25519](https://ed25519.cr.yp.to/).
* By convention, all _Issuer_ and _Subject_ fields in a JWT claim must be a public [NKEY](nkey_auth.md).
* _Issuer_ and _Subject_ must match specific [NKey](https://github.com/nats-io/nkeys) roles depending on the claim.

### NKey Roles

NKey Roles are:

* Operators
* Accounts
* Users

Roles are hierarchical and form a chain of trust. Operators issue Accounts which in turn issue Users. Servers trust specific Operators. If an account is issued by an operator that is trusted, account users are trusted.

## The Authentication Process

When a _User_ connects to a server, it presents a JWT issued by its _Account_. The user proves its identity by signing a server-issued cryptographic challenge with its private key. The signature verification validates that the signature is attributable to the user's public key. Next, the server retrieves the associated account JWT that issued the user. It verifies the _User_ issuer matches the referenced account. Finally, the server checks that a trusted _Operator_ issued the _Account_, completing the trust chain verification.
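To make the challenge mechanics concrete, here is a small, hypothetical sketch using the `github.com/nats-io/nkeys` library. The server side is simulated; a real server also walks the JWT issuer chain described above:

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nkeys"
)

func main() {
	// The client's keypair; the seed (private key) never leaves the client.
	user, err := nkeys.CreateUser()
	if err != nil {
		log.Fatal(err)
	}
	pub, _ := user.PublicKey()

	// The server issues a random nonce; the client signs it.
	nonce := []byte("server-issued-random-nonce")
	sig, _ := user.Sign(nonce)

	// The server only ever sees the public key and the signature.
	verifier, _ := nkeys.FromPublicKey(pub)
	if err := verifier.Verify(nonce, sig); err == nil {
		fmt.Println("signature is attributable to", pub)
	}
}
```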
## The Authorization Process

From an authorization point of view, the account provides information on messaging subjects that are imported from other accounts \(including any ancillary related authorization\) as well as messaging subjects exported to other accounts. Accounts can also bear limits, such as the maximum number of connections they may have. A user JWT can express restrictions on the messaging subjects to which it can publish or subscribe.

When a new user is added to an account, the account configuration need not change, as each user can and should have its own user JWT that can be verified by simply resolving its parent account.

## JWTs and Privacy

One crucial detail to keep in mind is that while in other systems JWTs are used as sessions or proof of authentication, NATS JWTs are only used as configuration describing:

* the public ID of the entity
* the public ID of the entity that issued it
* capabilities of the entity

Authentication is a public key cryptographic process: a client signs a nonce proving identity, while the trust chain and configuration provide the authorization.

The server is never aware of private keys but can verify that a signer or issuer indeed matches a specified or known public key.

Lastly, all NATS JWTs \(Operators, Accounts, Users and others\) are expected to be signed using the [Ed25519](https://ed25519.cr.yp.to/) algorithm. If they are not, they are rejected by the system.

## Sharing Between Accounts

While accounts provide isolation, there are many cases where you want to be able to consume messages produced by one account in another. There are two kinds of shares an account can _export_:

* Streams
* Services

Streams are messages published by a foreign account; subscribers in an _importing_ account can receive messages from a stream _exported_ by another.

Services are endpoints exported by a foreign account; requesters _importing_ the service can publish requests to the _exported_ endpoint.

Streams and services can be public; public exports can be imported by any account. Or they can be private: private streams and services require an authorization token from the exporting account that authorizes the foreign account to import the stream or service.

An importing account can remap the subject where a stream subscriber will receive messages or where a service requester can make requests. This enables the importing account to simplify its subject space.

Exports and imports from an account are explicit, and they are visible in the account's JWT. For private exports, the import will embed an authorization token or a URL storing the token. Imports and exports make it easy to audit where data is coming from or going to.

## Configuration

Entity JWT configuration is done using the [`nsc` tool](../../../../nats-tools/nsc/). The basic steps include:

* [Creation of an operator JWT](../../../../nats-tools/nsc/nsc.md#creating-an-operator)
* [Configuring an Account Server](../../../../nats-tools/nsc/nsc.md#account-server-configuration)
* [Setting up the NATS server to resolve Accounts](../../../../nats-tools/nsc/nsc.md#nats-server-configuration)

After that, `nsc` is used to create and edit accounts and users.
@@ -0,0 +1,61 @@
# NKeys

NKeys are a new, highly secure public-key signature system based on [Ed25519](https://ed25519.cr.yp.to/).

With NKeys the server can verify identities without ever storing or ever seeing private keys. The authentication system works by requiring a connecting client to provide its public key and digitally sign a challenge with its private key. The server generates a random challenge with every connection request, making it immune to replay attacks. The generated signature is validated against the provided public key, thus proving the identity of the client. If the public key is known to the server, authentication succeeds.

> NKey is an excellent replacement for token authentication because a connecting client will have to prove it controls the private key for the authorized public key.

To generate nkeys, you'll need the [`nk` tool](../../../../nats-tools/nk.md).

## Generating NKeys and Configuring the Server

To generate a _User_ NKEY:

```text
> nk -gen user -pubout
SUACSSL3UAHUDXKFSNVUZRF5UHPMWZ6BFDTJ7M6USDXIEDNPPQYYYCU3VY
UDXU4RCSJNZOIQHZNWXHXORDPRTGNJAHAHFRGZNEEJCPQTT2M7NLCNF4
```

The first output line starts with the letter `S` for _Seed_. The second letter, `U`, stands for _User_. Seeds are private keys; you should treat them as secrets and guard them with care.

The second line starts with the letter `U` for _User_ and is a public key which can be safely shared.

To use nkey authentication, add a user and set the `nkey` property to the public key of the user you want to authenticate:

```text
authorization: {
  users: [
    { nkey: UDXU4RCSJNZOIQHZNWXHXORDPRTGNJAHAHFRGZNEEJCPQTT2M7NLCNF4 }
  ]
}
```

Note that the user section sets the `nkey` property \(user/password/token properties are not needed\). Add `permission` sections as required.

## Client Configuration

Now that you have a user nkey, let's configure a client to use it for authentication. As an example, here are the connect options for the node client:

```javascript
const NATS = require('nats');
const nkeys = require('ts-nkeys');

const nkey_seed = 'SUACSSL3UAHUDXKFSNVUZRF5UHPMWZ6BFDTJ7M6USDXIEDNPPQYYYCU3VY';
const nc = NATS.connect({
    port: PORT,
    nkey: 'UDXU4RCSJNZOIQHZNWXHXORDPRTGNJAHAHFRGZNEEJCPQTT2M7NLCNF4',
    sigCB: function (nonce) {
        // client loads seed safely from a file
        // or some constant like `nkey_seed` defined in
        // the program
        const sk = nkeys.fromSeed(Buffer.from(nkey_seed));
        return sk.sign(nonce);
    }
});
...
```

The client provides a function that it uses to parse the seed \(the private key\) and sign the connection challenge.
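For comparison, a hedged equivalent with the Go client \(`github.com/nats-io/nats.go`\): a helper reads the seed from a file and wires up the signing callback for you. The seed file path is an assumption:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// user.nk is assumed to contain the seed (the line starting with SU...).
	opt, err := nats.NkeyOptionFromSeed("user.nk")
	if err != nil {
		log.Fatal(err)
	}
	nc, err := nats.Connect("nats://localhost:4222", opt)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```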
@@ -0,0 +1,97 @@
# TLS Authentication

The server can require TLS certificates from a client. When needed, you can use the certificates to:

* Validate the client certificate matches a known or trusted CA
* Extract information from a trusted certificate to provide authentication

## Validating a Client Certificate

The server can verify a client certificate using a CA certificate. To require verification, add the option `verify` to the TLS configuration section as follows:

```text
tls {
  cert_file: "./configs/certs/server-cert.pem"
  key_file: "./configs/certs/server-key.pem"
  ca_file: "./configs/certs/ca.pem"
  verify: true
}
```

Or via the command line:

```bash
> ./nats-server --tlsverify --tlscert=./test/configs/certs/server-cert.pem --tlskey=./test/configs/certs/server-key.pem --tlscacert=./test/configs/certs/ca.pem
```

This option verifies that the client's certificate is signed by the CA specified in the `ca_file` option.

## Mapping Client Certificates To A User

In addition to verifying that a specified CA issued a client certificate, you can use information encoded in the certificate to authenticate a client. The client wouldn't have to provide or track usernames or passwords.

To have TLS Mutual Authentication map certificate attributes to the user's identity, use `verify_and_map` as follows:

```text
tls {
  cert_file: "./configs/certs/server-cert.pem"
  key_file: "./configs/certs/server-key.pem"
  ca_file: "./configs/certs/ca.pem"
  # Require a client certificate and map user id from certificate
  verify_and_map: true
}
```

> Note that `verify` was changed to `verify_and_map`.

There are two options for certificate attributes that can be mapped to user names. The first is the email address in the Subject Alternative Name \(SAN\) field of the certificate. While generating a certificate with this attribute is outside the scope of this document, you can view one with `openssl`:

```text
$ openssl x509 -noout -text -in test/configs/certs/client-id-auth-cert.pem
Certificate:
    ...
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:localhost, IP Address:127.0.0.1, email:derek@nats.io
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
    ...
```

The configuration to authorize this user would be as follows:

```text
authorization {
  users = [
    {user: "derek@nats.io"}
  ]
}
```

> Note: This configuration only works for the first email address if there are multiple emails in the SAN field.

The second option is to use the RFC 2253 Distinguished Names syntax from the certificate subject as follows:

```text
$ openssl x509 -noout -text -in test/configs/certs/tlsauth/client2.pem
Certificate:
    Data:
    ...
        Subject: OU=CNCF, CN=example.com
    ...
```

The configuration to authorize this user would be as follows:

```text
authorization {
  users = [
    {user: "CN=example.com,OU=CNCF"}
  ]
}
```
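On the client side, a hedged Go sketch \(`github.com/nats-io/nats.go`\) of connecting to a server configured with `verify` or `verify_and_map` might look like this; the certificate file paths are assumptions:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("tls://localhost:4222",
		nats.ClientCert("./client-cert.pem", "./client-key.pem"), // presented to the server
		nats.RootCAs("./ca.pem"))                                 // used to verify the server's certificate
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```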
## TLS Timeout

[TLS timeout](../tls.md#tls-timeout) is described here.
54
nats-server/configuration/securing_nats/auth_intro/tokens.md
Normal file
@@ -0,0 +1,54 @@
# Tokens

Token authentication uses a string that, if provided by a client, allows it to connect. It is the most straightforward authentication provided by the NATS server.

To use token authentication, you can specify an `authorization` section with the `token` property set:

```text
authorization {
  token: "s3cr3t"
}
```

Token authentication can be used in the authorization section for clients and clusters.

Or start the server with the `--auth` flag:

```text
> nats-server --auth s3cr3t
```

A client can easily connect by specifying the server URL:

```text
> nats-sub -s nats://s3cr3t@localhost:4222 ">"
Listening on [>]
```
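The same connection can be made from application code; as a hedged Go sketch \(`github.com/nats-io/nats.go`\), the token can be embedded in the URL or passed as an option:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Token in the URL, as with nats-sub above:
	nc, err := nats.Connect("nats://s3cr3t@localhost:4222")
	// ...or passed explicitly:
	// nc, err := nats.Connect("nats://localhost:4222", nats.Token("s3cr3t"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```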
## Bcrypted Tokens

Tokens can be bcrypted, enabling an additional layer of security, as the clear-text version of the token would not be persisted in the server configuration file.

You can generate bcrypted tokens and passwords using the [`mkpasswd`](../../../../nats-tools/mkpasswd.md) tool:

```text
> mkpasswd
pass: dag0HTXl4RGg7dXdaJwbC8
bcrypt hash: $2a$11$PWIFAL8RsWyGI3jVZtO9Nu8.6jOxzxfZo7c/W0eLk017hjgUKWrhy
```

Here's a simple configuration file:

```text
authorization {
  token: "$2a$11$PWIFAL8RsWyGI3jVZtO9Nu8.6jOxzxfZo7c/W0eLk017hjgUKWrhy"
}
```

The client will still require the clear-text token to connect:

```text
nats-sub -s nats://dag0HTXl4RGg7dXdaJwbC8@localhost:4222 ">"
Listening on [>]
```
@@ -0,0 +1,59 @@
# Username/Password

You can authenticate one or more clients using usernames and passwords; this enables you to have greater control over the management and issuance of credential secrets.

For a single user:

```text
authorization: {
  user: a,
  password: b
}
```

You can also specify a single username/password on the command line:

```text
> nats-server --user a --pass b
```

For multiple users:

```text
authorization: {
  users: [
    {user: a, password: b},
    {user: b, password: a}
  ]
}
```
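From application code, a hedged Go sketch \(`github.com/nats-io/nats.go`\) authenticating as one of the users above:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222", nats.UserInfo("a", "b"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```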
## Bcrypted Passwords

Username/password also supports bcrypted passwords using the [`mkpasswd`](../../../../nats-tools/mkpasswd.md) tool. Simply replace the clear-text password with the bcrypted entries:

```text
> mkpasswd
pass: (Uffs#rG42PAu#Oxi^BNng
bcrypt hash: $2a$11$V1qrpBt8/SLfEBr4NJq4T.2mg8chx8.MTblUiTBOLV3MKDeAy.f7u
```

And in the configuration file:

```text
authorization: {
  users: [
    {user: a, password: "$2a$11$V1qrpBt8/SLfEBr4NJq4T.2mg8chx8.MTblUiTBOLV3MKDeAy.f7u"},
    ...
  ]
}
```

## Reloading a Configuration

As you add/remove passwords from the server configuration file, you'll want your changes to take effect. To reload without restarting the server and disconnecting clients, do:

```text
> nats-server --signal reload
```
68
nats-server/configuration/securing_nats/authorization.md
Normal file
@@ -0,0 +1,68 @@
# Authorization

The NATS server supports authorization using subject-level permissions on a per-user basis. Permission-based authorization is available with multi-user authentication via the `users` list.

Each permission specifies the subjects the user can publish to and subscribe to. The parser is generous at understanding what the intent is, so both arrays and singletons are processed. For more complex configurations, you can specify a `permission` object which explicitly allows or denies subjects. The specified subjects can include wildcards. Permissions can make use of [variables](../#variables).

You configure authorization by creating a `permissions` entry in the `authorization` object.

## Permissions Configuration Map

The `permissions` map specifies the subjects that can be subscribed to or published by the specified client.

| Property | Description |
| :--- | :--- |
| `publish` | subject, list of subjects, or permission map the client can publish |
| `subscribe` | subject, list of subjects, or permission map the client can subscribe |

## Permission Map

The `permission` map provides additional properties for configuring a `permissions` map. Instead of providing a list of subjects that are allowed, the `permission` map allows you to explicitly list subjects you want to `allow` or `deny`:

| Property | Description |
| :--- | :--- |
| `allow` | List of subject names that are allowed to the client |
| `deny` | List of subjects that are denied to the client |

**Important Note:** NATS authorizations can be _allow lists_, _deny lists_, or both. It is important not to break request/reply patterns. In some cases \(as shown in the example below\) you need to add rules for the `_INBOX.>` pattern. If an unauthorized client publishes or attempts to subscribe to a subject that has not been _allow listed_, the action fails and is logged at the server, and an error message is returned to the client.

## Example

Here is an example authorization configuration that uses _variables_ and defines four users, three of whom are assigned explicit permissions.

```text
authorization {
  ADMIN = {
    publish = ">"
    subscribe = ">"
  }
  REQUESTOR = {
    publish = ["req.a", "req.b"]
    subscribe = "_INBOX.>"
  }
  RESPONDER = {
    subscribe = ["req.a", "req.b"]
    publish = "_INBOX.>"
  }
  DEFAULT_PERMISSIONS = {
    publish = "SANDBOX.*"
    subscribe = ["PUBLIC.>", "_INBOX.>"]
  }
  users = [
    {user: admin, password: $ADMIN_PASS, permissions: $ADMIN}
    {user: client, password: $CLIENT_PASS, permissions: $REQUESTOR}
    {user: service, password: $SERVICE_PASS, permissions: $RESPONDER}
    {user: other, password: $OTHER_PASS}
  ]
}
```

> _DEFAULT\_PERMISSIONS_ is a special permissions name. If defined, it applies to all users that don't have specific permissions set.

* _admin_ has `ADMIN` permissions and can publish/subscribe on any subject. We use the wildcard `>` to match any subject.
* _client_ is a `REQUESTOR` and can publish requests on subjects `req.a` or `req.b`, and subscribe to anything that is a response \(`_INBOX.>`\).
* _service_ is a `RESPONDER` to `req.a` and `req.b` requests, so it needs to be able to subscribe to the request subjects and respond to clients that can publish requests to `req.a` and `req.b`. The reply subject is an inbox. Typically inboxes start with the prefix `_INBOX.` followed by a generated string. The `_INBOX.>` subject matches all subjects that begin with `_INBOX.`.
* _other_ has no permissions granted and therefore inherits the default permission set. You set the inherited default permissions by assigning them to the `default_permissions` entry inside of the authorization configuration block.

> Note that in the above example, any client with permissions to subscribe to `_INBOX.>` can receive _all_ responses published. More sensitive installations will want to add or subset the prefix to further limit the subjects that a client can subscribe to. Alternatively, [_Accounts_](auth_intro/accounts.md) allow complete isolation limiting what members of an account can see.
61
nats-server/configuration/securing_nats/tls.md
Normal file
@@ -0,0 +1,61 @@
# Enabling TLS

The NATS server uses modern TLS semantics to encrypt client, route, and monitoring connections. Server configuration revolves around a `tls` map, which has the following properties:

| Property | Description |
| :--- | :--- |
| `ca_file` | TLS certificate authority file. |
| `cert_file` | TLS certificate file. |
| `cipher_suites` | When set, only the specified TLS cipher suites will be allowed. Values must match the golang version used to build the server. |
| `curve_preferences` | List of TLS cipher curves to use in order. |
| `insecure` | Skip certificate verification. |
| `key_file` | TLS certificate key file. |
| `timeout` | TLS handshake timeout in fractional seconds. Default is 0.5 seconds. |
| `verify_and_map` | If `true`, require and verify client certificates and map certificate values for authentication purposes. |
| `verify` | If `true`, require and verify client certificates. |

The simplest configuration:

```text
tls: {
  cert_file: "./server-cert.pem"
  key_file: "./server-key.pem"
}
```

Or by using [server options](../../flags.md#tls-options):

```text
> nats-server --tls --tlscert=./server-cert.pem --tlskey=./server-key.pem
[21417] 2019/05/16 11:21:19.801539 [INF] Starting nats-server version 2.0.0
[21417] 2019/05/16 11:21:19.801621 [INF] Git commit [not set]
[21417] 2019/05/16 11:21:19.801777 [INF] Listening for client connections on 0.0.0.0:4222
[21417] 2019/05/16 11:21:19.801782 [INF] TLS required for client connections
[21417] 2019/05/16 11:21:19.801785 [INF] Server id is ND6ZZDQQDGKYQGDD6QN2Y26YEGLTH6BMMOJZ2XJB2VASPVII3XD6RFOQ
[21417] 2019/05/16 11:21:19.801787 [INF] Server is ready
```

Notice that the log indicates that client connections will be required to use TLS. If you run the server in Debug mode with `-D` or `-DV`, the logs will show the cipher suite selection for each connected client:

```text
[22242] 2019/05/16 11:22:20.216322 [DBG] 127.0.0.1:51383 - cid:1 - Client connection created
[22242] 2019/05/16 11:22:20.216539 [DBG] 127.0.0.1:51383 - cid:1 - Starting TLS client connection handshake
[22242] 2019/05/16 11:22:20.367275 [DBG] 127.0.0.1:51383 - cid:1 - TLS handshake complete
[22242] 2019/05/16 11:22:20.367291 [DBG] 127.0.0.1:51383 - cid:1 - TLS version 1.2, cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```

When a `tls` section is specified at the root of the configuration, it also affects the monitoring port if the `https_port` option is specified. Other sections such as `cluster` can specify a `tls` block.
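On the client side, a hedged Go sketch \(`github.com/nats-io/nats.go`\) of connecting to this server over TLS; the CA file path is an assumption matching the server configuration above:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// The tls:// scheme forces a TLS connection; RootCAs is needed when the
	// server certificate isn't signed by a CA the system already trusts.
	nc, err := nats.Connect("tls://localhost:4222", nats.RootCAs("./ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```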
## TLS Timeout

The `timeout` setting enables you to control the amount of time that a client is allowed to upgrade its connection to TLS. If your clients are experiencing disconnects during the TLS handshake, you'll want to increase the value; however, be aware that an extended `timeout` exposes your server to attacks where a client doesn't upgrade to TLS and thus consumes resources. Conversely, if you reduce the TLS `timeout` too much, you are likely to experience handshake errors.

```text
tls: {
  cert_file: "./server-cert.pem"
  key_file: "./server-key.pem"
  # clients will fail to connect (value is too low)
  timeout: 0.0001
}
```
88
nats-server/flags.md
Normal file
@@ -0,0 +1,88 @@
# Flags

The NATS server has many flags to customize its behavior without having to write a configuration file.

The configuration flags revolve around:

* Server Options
* Logging
* Authorization
* TLS Security
* Clustering
* Information

## Server Options

| Flag | Description |
| :--- | :--- |
| `-a`, `--addr` | Host address to bind to \(default: `0.0.0.0` - all interfaces\). |
| `-p`, `--port` | NATS client port \(default: 4222\). |
| `-P`, `--pid` | File to store the process ID \(PID\). |
| `-m`, `--http_port` | HTTP port for monitoring dashboard \(exclusive of `--https_port`\). |
| `-ms`, `--https_port` | HTTPS port for monitoring dashboard \(exclusive of `--http_port`\). |
| `-c`, `--config` | Path to NATS server configuration file. |
| `-sl`, `--signal` | Send a signal to a nats-server process. See [process signaling](nats_admin/signals.md). |
| `--client_advertise` | Client HostPort to advertise to other servers. |
| `-t` | Test configuration and exit. |

## Authentication Options

The following options control straightforward authentication:

| Flag | Description |
| :--- | :--- |
| `--user` | Required _username_ for connections \(exclusive of `--auth`\). |
| `--pass` | Required _password_ for connections \(exclusive of `--auth`\). |
| `--auth` | Required _authorization token_ for connections \(exclusive of `--user` and `--pass`\). |

See [token authentication](configuration/securing_nats/auth_intro/tokens.md) and [username/password](configuration/securing_nats/auth_intro/username_password.md) for more information.

## Logging Options

The following flags are available on the server to configure logging:

| Flag | Description |
| :--- | :--- |
| `-l`, `--log` | File to redirect log output |
| `-T`, `--logtime` | Specify `-T=false` to disable timestamping log entries |
| `-s`, `--syslog` | Log to syslog or windows event log |
| `-r`, `--remote_syslog` | The syslog server address, like `udp://localhost:514` |
| `-D`, `--debug` | Enable debugging output |
| `-V`, `--trace` | Enable protocol trace log messages |
| `-DV` | Enable both debug and protocol trace messages |
| `--max_traced_msg_len` | Maximum printable length for traced messages. 0 for unlimited |

You can read more about [logging configuration here](configuration/logging.md).

## TLS Options

| Flag | Description |
| :--- | :--- |
| `--tls` | Enable TLS, do not verify clients |
| `--tlscert` | Server certificate file |
| `--tlskey` | Private key for server certificate |
| `--tlsverify` | Enable client TLS certificate verification |
| `--tlscacert` | Client certificate CA for verification |

## Cluster Options

The following flags are available on the server to configure clustering:

| Flag | Description |
| :--- | :--- |
| `--routes` | Comma-separated list of cluster URLs to solicit and connect |
| `--cluster` | Cluster URL for clustering requests |
| `--no_advertise` | Do not advertise known cluster information to clients |
| `--cluster_advertise` | Cluster URL to advertise to other servers |
| `--connect_retries` | For implicit routes, number of connect retries |

You can read more about [clustering configuration here](configuration/clustering/).

## Common Options

| Flag | Description |
| :--- | :--- |
| `-h`, `--help` | Show this message |
| `-v`, `--version` | Show version |
| `--help_tls` | TLS help |
105
nats-server/installation.md
Normal file
@@ -0,0 +1,105 @@
# Installing

The NATS philosophy is simplicity. Installation is just decompressing a zip file and copying the binary to an appropriate directory; you can also use your favorite package manager. Here's a list of different ways you can install or run NATS:

* [Docker](installation.md#installing-via-docker)
* [Kubernetes](installation.md#installing-on-kubernetes-with-nats-operator)
* [Package Manager](installation.md#installing-via-a-package-manager)
* [Release Zip](installation.md#downloading-a-release-build)
* [Development Build](installation.md#installing-from-the-source)

## Installing via Docker

With Docker you can install the server easily without scattering binaries and other artifacts on your system. The only prerequisite is to [install docker](https://docs.docker.com/install).

```text
> docker pull nats:latest
latest: Pulling from library/nats
Digest: sha256:0c98cdfc4332c0de539a064bfab502a24aae18ef7475ddcc7081331502327354
Status: Image is up to date for nats:latest
docker.io/library/nats:latest
```

To run NATS on Docker:

```text
> docker run -p 4222:4222 -ti nats:latest
[1] 2019/05/24 15:42:58.228063 [INF] Starting nats-server version #.#.#
[1] 2019/05/24 15:42:58.228115 [INF] Git commit [#######]
[1] 2019/05/24 15:42:58.228201 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/05/24 15:42:58.228740 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/05/24 15:42:58.228765 [INF] Server is ready
[1] 2019/05/24 15:42:58.229003 [INF] Listening for route connections on 0.0.0.0:6222
```

More information on [containerized NATS is available here](nats_docker/).

## Installing on Kubernetes with NATS Operator

Installation via the NATS Operator is beyond the scope of this tutorial. You can read about the [NATS Operator](https://github.com/nats-io/nats-operator) here.

## Installing via a Package Manager

On Windows:

```text
> choco install nats-server
```

On Mac OS:

```text
> brew install nats-server
```

To test your installation \(provided the executable is visible to your shell\):

```text
> nats-server
[41634] 2019/05/13 09:42:11.745919 [INF] Starting nats-server version 2.0.0
[41634] 2019/05/13 09:42:11.746240 [INF] Listening for client connections on 0.0.0.0:4222
...
[41634] 2019/05/13 09:42:11.746249 [INF] Server id is NBNYNR4ZNTH4N2UQKSAAKBAFLDV3PZO4OUYONSUIQASTQT7BT4ZF6WX7
[41634] 2019/05/13 09:42:11.746252 [INF] Server is ready
```

## Downloading a Release Build

You can find the latest release of nats-server [here](https://github.com/nats-io/nats-server/releases/latest).

Download the zip file matching your system's architecture and unzip it. This example assumes version 2.0.0 of the server on Linux AMD64:

```text
> curl -L https://github.com/nats-io/nats-server/releases/download/v2.0.0/nats-server-v2.0.0-linux-amd64.zip -o nats-server.zip

> unzip nats-server.zip -d nats-server
Archive: nats-server.zip
   creating: nats-server-v2.0.0-linux-amd64/
  inflating: nats-server-v2.0.0-linux-amd64/README.md
  inflating: nats-server-v2.0.0-linux-amd64/LICENSE
  inflating: nats-server-v2.0.0-linux-amd64/nats-server

> cp nats-server-v2.0.0-linux-amd64/nats-server /usr/local/bin
```

## Installing From the Source

If you have Go installed, installing the binary is easy:

```text
> GO111MODULE=on go get github.com/nats-io/nats-server/v2
```

This mechanism will install a build of [master](https://github.com/nats-io/nats-server), which almost certainly will not be a released version. If you are a developer and want to play with the latest, this is the easiest way.

To test your installation \(provided `$GOPATH/bin` is in your path\):

```text
> nats-server
[41634] 2019/05/13 09:42:11.745919 [INF] Starting nats-server version 2.0.0
[41634] 2019/05/13 09:42:11.746240 [INF] Listening for client connections on 0.0.0.0:4222
...
[41634] 2019/05/13 09:42:11.746249 [INF] Server id is NBNYNR4ZNTH4N2UQKSAAKBAFLDV3PZO4OUYONSUIQASTQT7BT4ZF6WX7
[41634] 2019/05/13 09:42:11.746252 [INF] Server is ready
```
8
nats-server/nats_admin/README.md
Normal file
@@ -0,0 +1,8 @@
|
||||
# Managing A NATS Server
|
||||
|
||||
Managing a NATS server is simple; typical lifecycle operations include:
|
||||
|
||||
* [Sending signals](signals.md) to a server to reload a configuration or rotate log files
|
||||
* [Upgrading](upgrading_cluster.md) a server \(or cluster\)
|
||||
* Understanding [slow consumers](slow_consumers.md)
|
||||
|
||||
43
nats-server/nats_admin/signals.md
Normal file
@@ -0,0 +1,43 @@
|
||||
# Signals
|
||||
|
||||
On Unix systems, the NATS server responds to the following signals:
|
||||
|
||||
| Signal | Result |
|
||||
| :--- | :--- |
|
||||
| `SIGKILL` | Kills the process immediately |
|
||||
| `SIGINT` | Stops the server gracefully |
|
||||
| `SIGUSR1` | Reopens the log file for log rotation |
|
||||
| `SIGHUP` | Reloads server configuration file |
|
||||
| `SIGUSR2` | Stops the server after evicting all clients \(lame duck mode\) |
|
||||
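Because these are ordinary Unix signals, you can also deliver them directly with `kill`. A minimal sketch, assuming a single `nats-server` process is running:

```bash
# Reload the server configuration
kill -SIGHUP $(pgrep -x nats-server)

# Reopen the log file for rotation
kill -SIGUSR1 $(pgrep -x nats-server)

# Stop the server after evicting all clients (lame duck mode)
kill -SIGUSR2 $(pgrep -x nats-server)
```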
|
||||
The `nats-server` binary can be used to send these signals to running NATS servers using the `-sl` \(long form `--signal`\) flag:
|
||||
|
||||
```bash
|
||||
# Quit the server
|
||||
nats-server --signal quit
|
||||
|
||||
# Stop the server
|
||||
nats-server --signal stop
|
||||
|
||||
# Reopen log file for log rotation
|
||||
nats-server --signal reopen
|
||||
|
||||
# Reload server configuration
|
||||
nats-server --signal reload
|
||||
|
||||
# Stop the server after evicting all clients (lame duck mode)
|
||||
nats-server --signal ldm
|
||||
```
|
||||
|
||||
If there are multiple `nats-server` processes running, or if `pgrep` isn't available, you must either specify a PID or the absolute path to a PID file:
|
||||
|
||||
```bash
|
||||
nats-server --signal stop=<pid>
|
||||
```
|
||||
|
||||
```bash
|
||||
nats-server --signal stop=/path/to/pidfile
|
||||
```
|
||||
|
||||
See the [Windows Service](../running/windows_srv.md) section for information on signaling the NATS server on Windows.
|
||||
|
||||
112
nats-server/nats_admin/slow_consumers.md
Normal file
@@ -0,0 +1,112 @@
|
||||
# Slow Consumers
|
||||
|
||||
To support resiliency and high availability, NATS provides built-in mechanisms to automatically prune the registered listener interest graph that is used to keep track of subscribers, including slow consumers and lazy listeners. NATS automatically handles a slow consumer: if a client is not processing messages quickly enough, the NATS server cuts it off. To support scaling, NATS also auto-prunes client connections: if a subscriber does not respond to ping requests from the server within the [ping-pong interval](../../nats-protocol/nats-protocol/#PINGPONG), the client is cut off \(disconnected\). The client will need reconnect logic to reconnect with the server.
|
||||
|
||||
In core NATS, consumers that cannot keep up are handled differently from many other messaging systems: NATS favors the approach of protecting the system as a whole over accommodating a particular consumer to ensure message delivery.
|
||||
|
||||
**What is a slow consumer?**
|
||||
|
||||
A slow consumer is a subscriber that cannot keep up with the message flow delivered from the NATS server. This is a common case in distributed systems because it is often easier to generate data than it is to process it. When consumers cannot process data fast enough, back pressure is applied to the rest of the system. NATS has mechanisms to reduce this back pressure.
|
||||
|
||||
NATS identifies slow consumers in the client or the server, providing notification through registered callbacks, log messages, and statistics in the server's monitoring endpoints.
|
||||
|
||||
**What happens to slow consumers?**
|
||||
|
||||
When detected at the client, the application is notified and messages are dropped to allow the consumer to continue and to reduce potential back pressure. When detected in the server, the server will close the connection to the slow consumer to protect itself and the integrity of the messaging system.
|
||||
|
||||
## Slow consumers identified in the client
|
||||
|
||||
A [client can detect it is a slow consumer](../../developing-with-nats/intro-5/slow.md) on a local connection and notify the application through use of the asynchronous error callback. It is better to catch a slow consumer locally in the client rather than to allow the server to detect this condition. This example demonstrates how to define and register an asynchronous error handler that will handle slow consumer errors.
|
||||
|
||||
```go
|
||||
func natsErrHandler(nc *nats.Conn, sub *nats.Subscription, natsErr error) {
|
||||
fmt.Printf("error: %v\n", natsErr)
|
||||
if natsErr == nats.ErrSlowConsumer {
|
||||
pendingMsgs, _, err := sub.Pending()
|
||||
if err != nil {
|
||||
fmt.Printf("couldn't get pending messages: %v", err)
|
||||
return
|
||||
}
|
||||
fmt.Printf("Falling behind with %d pending messages on subject %q.\n",
|
||||
pendingMsgs, sub.Subject)
|
||||
// Log error, notify operations...
|
||||
}
|
||||
// check for other errors
|
||||
}
|
||||
|
||||
// Set the error handler when creating a connection.
|
||||
nc, err := nats.Connect("nats://localhost:4222",
|
||||
nats.ErrorHandler(natsErrHandler))
|
||||
```
|
||||
|
||||
With this example code and default settings, a slow consumer error would generate output something like this:
|
||||
|
||||
```bash
|
||||
error: nats: slow consumer, messages dropped
|
||||
Falling behind with 65536 pending messages on subject "foo".
|
||||
```
|
||||
|
||||
Note that if you are using a synchronous subscriber, `Subscription.NextMsg(timeout time.Duration)` will also return an error indicating there was a slow consumer and messages have been dropped.
|
||||
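A minimal sketch of handling that case with a synchronous subscription, assuming `nc` is an established `*nats.Conn` and the subject `updates` is illustrative:

```go
sub, err := nc.SubscribeSync("updates")
if err != nil {
    log.Fatal(err)
}
for {
    msg, err := sub.NextMsg(time.Second)
    if err == nats.ErrSlowConsumer {
        // Messages were dropped on this subscription; recover as needed.
        log.Println("slow consumer: messages were dropped")
        continue
    } else if err != nil {
        continue // e.g. nats.ErrTimeout when nothing arrived in time
    }
    log.Printf("received: %s", string(msg.Data))
}
```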
|
||||
## Slow consumers identified by the server
|
||||
|
||||
When a client does not process messages fast enough, the server will buffer messages in the outbound connection to the client. When this happens and the server cannot write data fast enough to the client, in order to protect itself, it will designate a subscriber as a "slow consumer" and may drop the associated connection.
|
||||
|
||||
When the server initiates a slow consumer error, you'll see the following in the server output:
|
||||
|
||||
```bash
|
||||
[54083] 2017/09/28 14:45:18.001357 [INF] ::1:63283 - cid:7 - Slow Consumer Detected
|
||||
```
|
||||
|
||||
The server will also keep count of the number of slow consumer errors encountered, available through the monitoring `varz` endpoint in the `slow_consumers` field.
|
||||
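For example, assuming the server was started with the HTTP monitoring port enabled \(`-m 8222`\), you can check the counter like this:

```bash
> curl -s http://localhost:8222/varz | grep slow_consumers
  "slow_consumers": 0,
```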
|
||||
## Handling slow consumers
|
||||
|
||||
Apart from using [NATS streaming](../../nats-streaming-concepts/intro.md) or optimizing your consuming application, there are a few options available: scale, meter, or tune NATS to your environment.
|
||||
|
||||
**Scaling with queue subscribers**
|
||||
|
||||
This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices - each instance of your microservice will receive a portion of the messages to process, and simply add more instances of your service to scale. No code changes, configuration changes, or downtime whatsoever.
|
||||
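A minimal sketch with the Go client, where the subject `updates` and the queue group `workers` are illustrative:

```go
// Every instance running this code joins the "workers" queue group;
// each message published to "updates" is delivered to only one member.
_, err := nc.QueueSubscribe("updates", "workers", func(m *nats.Msg) {
    log.Printf("processing: %s", string(m.Data))
})
if err != nil {
    log.Fatal(err)
}
```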
|
||||
**Create a subject namespace that can scale**
|
||||
|
||||
You can distribute work further through the subject namespace, with some forethought in design. This approach is useful if you need to preserve message order. The general idea is to publish to a deep subject namespace, and consume with wildcard subscriptions while giving yourself room to expand and distribute work in the future.
|
||||
|
||||
For a simple example, if you have a service that receives telemetry data from IoT devices located throughout a city, you can publish to a subject namespace like `Sensors.North`, `Sensors.South`, `Sensors.East` and `Sensors.West`. Initially, you'll subscribe to `Sensors.>` to process everything in one consumer. As your enterprise grows and data rates exceed what one consumer can handle, you can replace your single consumer with four consuming applications to subscribe to each subject representing a smaller segment of your data. Note that your publishing applications remain untouched.
|
||||
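A sketch of that evolution with the Go client; `handleReading` stands in for your message handler:

```go
// Phase 1: a single consumer processes every region.
nc.Subscribe("Sensors.>", handleReading)

// Phase 2: swap in one consumer per region (each typically running in
// its own process); the publishers are untouched.
nc.Subscribe("Sensors.North", handleReading)
nc.Subscribe("Sensors.South", handleReading)
nc.Subscribe("Sensors.East", handleReading)
nc.Subscribe("Sensors.West", handleReading)
```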
|
||||
**Meter the publisher**
|
||||
|
||||
A less favorable option may be to meter the publisher. There are several ways to do this varying from simply slowing down your publisher to a more complex approach periodically issuing a blocking request/reply to match subscriber rates.
|
||||
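A rough sketch of the request/reply variant, assuming the consumer replies on the illustrative `data.sync` subject once it has caught up:

```go
for i, payload := range payloads { // payloads is your outbound data
    if err := nc.Publish("data", payload); err != nil {
        log.Fatal(err)
    }
    // Every 1000 messages, block until the consumer acknowledges, so the
    // publisher cannot outrun it indefinitely.
    if i%1000 == 999 {
        if _, err := nc.Request("data.sync", nil, 2*time.Second); err != nil {
            log.Printf("flow control request failed: %v", err)
        }
    }
}
```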
|
||||
**Tune NATS through configuration**
|
||||
|
||||
The NATS server can be tuned to determine how much data can be buffered before a consumer is considered slow, and some officially supported clients allow buffer sizes to be adjusted. Decreasing buffer sizes will let you identify slow consumers more quickly. Increasing buffer sizes is not typically recommended unless you are handling temporary bursts of data. Often, increasing buffer capacity will only _postpone_ slow consumer problems.
|
||||
|
||||
### Server Configuration
|
||||
|
||||
The NATS server has a write deadline it uses to write to a connection. When this write deadline is exceeded, a client is considered to have a slow consumer. If you are encountering slow consumer errors in the server, you can increase the write deadline to buffer more data.
|
||||
|
||||
The `write_deadline` configuration option in the NATS server configuration file will tune this:
|
||||
|
||||
```text
|
||||
write_deadline: 2s
|
||||
```
|
||||
|
||||
Tuning this parameter is ideal when you have bursts of data to accommodate. _**Be sure you are not just postponing a slow consumer error.**_
|
||||
|
||||
### Client Configuration
|
||||
|
||||
Most officially supported clients have an internal buffer of pending messages and will notify your application through an asynchronous error callback if a local subscription is not catching up. Receiving an error locally does not necessarily mean that the server will have identified a subscription as a slow consumer.
|
||||
|
||||
This buffer can be configured by setting the pending limits after a subscription has been created:
|
||||
|
||||
```go
|
||||
if err := sub.SetPendingLimits(1024*500, 1024*5000); err != nil {
|
||||
log.Fatalf("Unable to set pending limits: %v", err)
|
||||
}
|
||||
```
|
||||
|
||||
The default subscriber pending message limit is `65536`, and the default subscriber pending byte limit is `65536*1024` \(64 MB\).
|
||||
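In the Go client, passing `-1` for a limit disables it entirely \(an assumption worth verifying for your client library, and note that an unbounded buffer only moves the problem into your process\):

```go
// Disable both the pending message and pending byte limits (Go client).
if err := sub.SetPendingLimits(-1, -1); err != nil {
    log.Fatalf("unable to set pending limits: %v", err)
}
```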
|
||||
If the client reaches this internal limit, it will drop messages and continue to process new messages. This is aligned with NATS's at-most-once delivery. It is up to your application to detect the missing messages and recover from this condition.
|
||||
|
||||
17
nats-server/nats_admin/sys_accounts/README.md
Normal file
@@ -0,0 +1,17 @@
|
||||
# System Accounts
|
||||
|
||||
NATS servers leverage [Account](../../configuration/securing_nats/auth_intro/jwt_auth.md) support and generate events such as:
|
||||
|
||||
* account connect/disconnect
|
||||
* authentication errors
|
||||
* server shutdown
|
||||
* server stat summary
|
||||
|
||||
In addition, the server supports a limited number of requests that can be used to query for account connections, server stat summaries, and to ping servers in the cluster.
|
||||
|
||||
These events are only accepted and visible to _system account_ users.
|
||||
|
||||
## The System Events Tutorial
|
||||
|
||||
You can learn more about System Accounts in the [Tutorial](sys_accounts.md).
|
||||
|
||||
250
nats-server/nats_admin/sys_accounts/sys_accounts.md
Normal file
@@ -0,0 +1,250 @@
|
||||
# Configuration
|
||||
|
||||
The following is a short tutorial on how you can activate a system account to:
|
||||
|
||||
* receive periodic updates from the server
|
||||
* send requests to the server
|
||||
* send an account update to the server
|
||||
|
||||
## Events and Services
|
||||
|
||||
The system account publishes messages under well-known subject patterns.
|
||||
|
||||
Server initiated events:
|
||||
|
||||
* `$SYS.ACCOUNT.<id>.CONNECT` \(client connects\)
|
||||
* `$SYS.ACCOUNT.<id>.DISCONNECT` \(client disconnects\)
|
||||
* `$SYS.SERVER.ACCOUNT.<id>.CONNS` \(connections for an account changed\)
|
||||
* `$SYS.SERVER.<id>.CLIENT.AUTH.ERR` \(authentication error\)
|
||||
* `$SYS.ACCOUNT.<id>.LEAFNODE.CONNECT` \(leaf node connects\)
|
||||
* `$SYS.ACCOUNT.<id>.LEAFNODE.DISCONNECT` \(leaf node disconnects\)
|
||||
* `$SYS.SERVER.<id>.STATSZ` \(stats summary\)
|
||||
|
||||
In addition, other tools with system account privileges can initiate requests:
|
||||
|
||||
* `$SYS.REQ.SERVER.<id>.STATSZ` \(request server stat summary\)
|
||||
* `$SYS.REQ.SERVER.PING` \(discover servers - will return multiple messages\)
|
||||
|
||||
Servers like `nats-account-server` publish system account messages when a claim is updated; the nats-server listens for them and updates its account information accordingly:
|
||||
|
||||
* `$SYS.ACCOUNT.<id>.CLAIMS.UPDATE`
|
||||
|
||||
With these few messages you can build surprisingly useful monitoring tools:
|
||||
|
||||
* health/load of your servers
|
||||
* client connects/disconnects
|
||||
* account connections
|
||||
* authentication errors
|
||||
|
||||
## Enabling System Events
|
||||
|
||||
To enable and access system events, you'll have to:
|
||||
|
||||
* Create an Operator, Account and User
|
||||
* Run a NATS Account Server \(or Memory Resolver\)
|
||||
|
||||
### Create an Operator, Account, User
|
||||
|
||||
Let's create an operator, system account and system account user:
|
||||
|
||||
```text
|
||||
# Create an operator
|
||||
> nsc add operator -n SAOP
|
||||
Generated operator key - private key stored "~/.nkeys/SAOP/SAOP.nk"
|
||||
Success! - added operator "SAOP"
|
||||
|
||||
# Add the system account
|
||||
> nsc add account -n SYS
|
||||
Generated account key - private key stored "~/.nkeys/SAOP/accounts/SYS/SYS.nk"
|
||||
Success! - added account "SYS"
|
||||
|
||||
# Add a system account user
|
||||
> nsc add user -n SYSU
|
||||
Generated user key - private key stored "~/.nkeys/SAOP/accounts/SYS/users/SYSU.nk"
|
||||
Generated user creds file "~/.nkeys/SAOP/accounts/SYS/users/SYSU.creds"
|
||||
Success! - added user "SYSU" to "SYS"
|
||||
```
|
||||
|
||||
By default, the operator JWT can be found in `~/.nsc/nats/<operator_name>/<operator_name>.jwt`.
|
||||
|
||||
### Nats-Account-Server
|
||||
|
||||
To vend the credentials to the nats-server, we'll use a [nats-account-server](../../../nats-tools/nas/). Let's start a nats-account-server to serve the JWT credentials:
|
||||
|
||||
```text
|
||||
> nats-account-server -nsc ~/.nsc/nats/SAOP
|
||||
```
|
||||
|
||||
By default, the server will vend JWT configurations at the endpoint `http(s)://<server_url>/jwt/v1/accounts/`.
|
||||
|
||||
### NATS Server Configuration
|
||||
|
||||
The server configuration will need:
|
||||
|
||||
* The operator JWT \(`~/.nsc/nats/<operator_name>/<operator_name>.jwt`\)
|
||||
* The URL where the server can resolve accounts \(`http://localhost:9090/jwt/v1/accounts/`\)
|
||||
* The public key of the `system_account`
|
||||
|
||||
The only thing we don't have handy is the public key for the system account. We can get it easily enough:
|
||||
|
||||
```text
|
||||
> nsc list accounts -W
|
||||
╭─────────────────────────────────────────────────────────────────╮
|
||||
│ Accounts │
|
||||
├──────┬──────────────────────────────────────────────────────────┤
|
||||
│ Name │ Public Key │
|
||||
├──────┼──────────────────────────────────────────────────────────┤
|
||||
│ SYS │ ADWJVSUSEVC2GHL5GRATN2LOEOQOY2E6Z2VXNU3JEIK6BDGPWNIW3AXF │
|
||||
╰──────┴──────────────────────────────────────────────────────────╯
|
||||
```
|
||||
|
||||
Because the server supports additional resolver implementations, you need to enclose the server URL like this: `URL(<url>)`.
|
||||
|
||||
Let's create a server config with the following contents and save it to `server.conf`:
|
||||
|
||||
```text
|
||||
operator: /Users/synadia/.nsc/nats/SAOP/SAOP.jwt
|
||||
system_account: ADWJVSUSEVC2GHL5GRATN2LOEOQOY2E6Z2VXNU3JEIK6BDGPWNIW3AXF
|
||||
resolver: URL(http://localhost:9090/jwt/v1/accounts/)
|
||||
```
|
||||
|
||||
Let's start the nats-server:
|
||||
|
||||
```text
|
||||
> nats-server -c server.conf
|
||||
```
|
||||
|
||||
## Inspecting Server Events
|
||||
|
||||
Let's add a subscriber for all the events published by the system account:
|
||||
|
||||
```text
|
||||
> nats-sub -creds ~/.nkeys/SAOP/accounts/SYS/users/SYSU.creds ">"
|
||||
```
|
||||
|
||||
Very quickly we'll start seeing messages published by the NATS server. As expected, the messages are just JSON, so they can easily be inspected even when using a simple `nats-sub` to read them.
|
||||
|
||||
To see an account update:
|
||||
|
||||
```text
|
||||
> nats-pub -creds ~/.nkeys/SAOP/accounts/SYS/users/SYSU.creds foo bar
|
||||
```
|
||||
|
||||
The subscriber will print the connect and disconnect:
|
||||
|
||||
```text
|
||||
"server": {
|
||||
"host": "0.0.0.0",
|
||||
"id": "NBTGVY3OKDKEAJPUXRHZLKBCRH3LWCKZ6ZXTAJRS2RMYN3PMDRMUZWPR",
|
||||
"ver": "2.0.0-RC5",
|
||||
"seq": 32,
|
||||
"time": "2019-05-03T14:53:15.455266-05:00"
|
||||
},
|
||||
"acc": "ADWJVSUSEVC2GHL5GRATN2LOEOQOY2E6Z2VXNU3JEIK6BDGPWNIW3AXF",
|
||||
"conns": 1,
|
||||
"total_conns": 1
|
||||
}'
|
||||
"server": {
|
||||
"host": "0.0.0.0",
|
||||
"id": "NBTGVY3OKDKEAJPUXRHZLKBCRH3LWCKZ6ZXTAJRS2RMYN3PMDRMUZWPR",
|
||||
"ver": "2.0.0-RC5",
|
||||
"seq": 33,
|
||||
"time": "2019-05-03T14:53:15.455304-05:00"
|
||||
},
|
||||
"client": {
|
||||
"start": "2019-05-03T14:53:15.453824-05:00",
|
||||
"host": "127.0.0.1",
|
||||
"id": 6,
|
||||
"acc": "ADWJVSUSEVC2GHL5GRATN2LOEOQOY2E6Z2VXNU3JEIK6BDGPWNIW3AXF",
|
||||
"user": "UACPEXCAZEYWZK4O52MEGWGK4BH3OSGYM3P3C3F3LF2NGNZUS24IVG36",
|
||||
"name": "NATS Sample Publisher",
|
||||
"lang": "go",
|
||||
"ver": "1.7.0",
|
||||
"stop": "2019-05-03T14:53:15.45526-05:00"
|
||||
},
|
||||
"sent": {
|
||||
"msgs": 1,
|
||||
"bytes": 3
|
||||
},
|
||||
"received": {
|
||||
"msgs": 0,
|
||||
"bytes": 0
|
||||
},
|
||||
"reason": "Client Closed"
|
||||
}'
|
||||
```
|
||||
|
||||
## `$SYS.REQ.SERVER.PING` - Discovering Servers
|
||||
|
||||
To discover servers in the cluster, and get a small health summary, publish a request to `$SYS.REQ.SERVER.PING`. Note that while the example below uses `nats-req`, only the first answer for the request will be printed. You can easily modify the example to wait until no additional responses are received for a specific amount of time, allowing all responses to be collected.
|
||||
|
||||
```text
|
||||
> nats-req -creds ~/.nkeys/SAOP/accounts/SYS/users/SYSU.creds \$SYS.REQ.SERVER.PING ""
|
||||
Published [$SYS.REQ.SERVER.PING] : ''
|
||||
Received [_INBOX.G5mbsf0k7l7nb4eWHa7GTT.omklmvnm] : '{
|
||||
"server": {
|
||||
"host": "0.0.0.0",
|
||||
"id": "NCZQDUX77OSSTGN2ESEOCP4X7GISMARX3H4DBGZBY34VLAI4TQEPK6P6",
|
||||
"ver": "2.0.0-RC9",
|
||||
"seq": 47,
|
||||
"time": "2019-05-02T14:02:46.402166-05:00"
|
||||
},
|
||||
"statsz": {
|
||||
"start": "2019-05-02T13:41:01.113179-05:00",
|
||||
"mem": 12922880,
|
||||
"cores": 20,
|
||||
"cpu": 0,
|
||||
"connections": 2,
|
||||
"total_connections": 2,
|
||||
"active_accounts": 1,
|
||||
"subscriptions": 10,
|
||||
"sent": {
|
||||
"msgs": 7,
|
||||
"bytes": 2761
|
||||
},
|
||||
"received": {
|
||||
"msgs": 0,
|
||||
"bytes": 0
|
||||
},
|
||||
"slow_consumers": 0
|
||||
}
|
||||
}'
|
||||
```
|
||||
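A sketch of collecting every response with the Go client, assuming `nc` is connected with system account credentials:

```go
inbox := nats.NewInbox()
sub, err := nc.SubscribeSync(inbox)
if err != nil {
    log.Fatal(err)
}
// Publish the request ourselves so we can keep reading replies.
if err := nc.PublishRequest("$SYS.REQ.SERVER.PING", inbox, nil); err != nil {
    log.Fatal(err)
}
for {
    msg, err := sub.NextMsg(2 * time.Second)
    if err != nil {
        break // no further responses within the window
    }
    fmt.Printf("%s\n", string(msg.Data))
}
```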
|
||||
## `$SYS.REQ.SERVER.<id>.STATSZ` - Requesting Server Stats Summary
|
||||
|
||||
If you know the server id for a particular server \(such as from a response to `$SYS.REQ.SERVER.PING`\), you can query the specific server for its health information:
|
||||
|
||||
```text
|
||||
nats-req -creds ~/.nkeys/SAOP/accounts/SYS/users/SYSU.creds \$SYS.REQ.SERVER.NC7AKPQRC6CIZGWRJOTVFIGVSL7VW7WXTQCTUJFNG7HTCMCKQTGE5PUL.STATSZ ""
|
||||
Published [$SYS.REQ.SERVER.NC7AKPQRC6CIZGWRJOTVFIGVSL7VW7WXTQCTUJFNG7HTCMCKQTGE5PUL.STATSZ] : ''
|
||||
Received [_INBOX.DQD44ugVt0O4Ur3pWIOOD1.WQOBevoq] : '{
|
||||
"server": {
|
||||
"host": "0.0.0.0",
|
||||
"id": "NC7AKPQRC6CIZGWRJOTVFIGVSL7VW7WXTQCTUJFNG7HTCMCKQTGE5PUL",
|
||||
"ver": "2.0.0-RC5",
|
||||
"seq": 25,
|
||||
"time": "2019-05-03T14:34:02.066077-05:00"
|
||||
},
|
||||
"statsz": {
|
||||
"start": "2019-05-03T14:32:19.969037-05:00",
|
||||
"mem": 11874304,
|
||||
"cores": 20,
|
||||
"cpu": 0,
|
||||
"connections": 2,
|
||||
"total_connections": 4,
|
||||
"active_accounts": 1,
|
||||
"subscriptions": 10,
|
||||
"sent": {
|
||||
"msgs": 26,
|
||||
"bytes": 9096
|
||||
},
|
||||
"received": {
|
||||
"msgs": 2,
|
||||
"bytes": 0
|
||||
},
|
||||
"slow_consumers": 0
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
90
nats-server/nats_admin/upgrading_cluster.md
Normal file
@@ -0,0 +1,90 @@
|
||||
# Upgrading a Cluster
|
||||
|
||||
The basic strategy for upgrading a cluster revolves around the server's ability to gossip cluster configuration to clients and other servers. When cluster configuration changes, clients become aware of new servers automatically. In the case of a disconnect, a client has a list of servers that joined the cluster in addition to the ones it knew about from its connection settings.
|
||||
|
||||
Note that since each server stores its own permission and authentication configuration, new servers added to a cluster should provide the same users and authorization to prevent clients from being rejected or gaining unexpected privileges.
|
||||
|
||||
For purposes of describing the scenario, let's get some fingers on keyboards and go through the motions. Let's consider a cluster of two servers, 'A' and 'B'. Yes, clusters should be _three_ to _five_ servers, but for describing the behavior and the cluster upgrade process, a cluster of two will suffice.
|
||||
|
||||
Let's build this cluster:
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4222 -cluster nats://localhost:6222 -routes nats://localhost:6222,nats://localhost:6333
|
||||
```
|
||||
|
||||
The command above starts nats-server with debug output enabled, listening for clients on port 4222, and accepting cluster connections on port 6222. The `-routes` option specifies a list of NATS URLs where the server will attempt to connect to other servers. These URLs define the cluster ports enabled on the cluster peers.
|
||||
|
||||
Keen readers will notice a self-route. The NATS server will ignore the self-route, but it makes for a single consistent configuration for all servers.
|
||||
|
||||
You will see the server start, and notice that it emits warnings because it cannot connect to 'localhost:6333'. The message reads:
|
||||
|
||||
```text
|
||||
Error trying to connect to route: dial tcp localhost:6333: connect: connection refused
|
||||
```
|
||||
|
||||
Let's fix that, by starting the second server:
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4333 -cluster nats://localhost:6333 -routes nats://localhost:6222,nats://localhost:6333
|
||||
```
|
||||
|
||||
The second server was started on port 4333 with its cluster port on 6333. Otherwise the same as 'A'.
|
||||
|
||||
Let's get one client, so we can observe it moving between servers as servers get removed:
|
||||
|
||||
```bash
|
||||
nats-sub -s nats://localhost:4222 ">"
|
||||
```
|
||||
|
||||
`nats-sub` is a subscriber sample included with all NATS clients. `nats-sub` subscribes to a subject and prints out any messages received. You can find the source code to the Go version of `nats-sub` [here](https://github.com/nats-io/nats.go/tree/master/examples). After starting the subscriber you should see a message on 'A' that a new client connected.
|
||||
|
||||
We have two servers and a client. Time to simulate our rolling upgrade. But wait, before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to provide an additional place where clients can go other than 'A' and ensure we don't end up with a single server serving all the clients after the upgrade procedure. Clients will randomly select a server when connecting unless a special option is provided that disables that functionality \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](../../developing-with-nats/intro-1/random.md). Suffice it to say that clients redistribute themselves about evenly between all servers in the cluster. In our case 1/2 of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
|
||||
|
||||
Let's start our temporary server:
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4444 -cluster nats://localhost:6444 -routes nats://localhost:6222,nats://localhost:6333
|
||||
```
|
||||
|
||||
After an instant or so, clients on 'A' learn of the new cluster member that joined. In our hands-on tutorial, `nats-sub` is now aware of 3 possible servers: 'A' \(specified when we started the tool\), plus 'B' and 'C', learned from the cluster gossip.
|
||||
|
||||
We invoke our admin powers and turn off 'A' by issuing a `CTRL+C` to the terminal on 'A' and observe that either 'B' or 'C' reports that a new client connected. That is our `nats-sub` client.
|
||||
|
||||
We perform the upgrade process, update the binary for 'A', and restart 'A':
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4222 -cluster nats://localhost:6222 -routes nats://localhost:6222,nats://localhost:6333
|
||||
```
|
||||
|
||||
We move on to upgrade 'B'. Notice that clients from 'B' reconnect to 'A' and 'C'. We upgrade and restart 'B':
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4333 -cluster nats://localhost:6333 -routes nats://localhost:6222,nats://localhost:6333
|
||||
```
|
||||
|
||||
If we had more servers, we would continue the stop, update, restart rotation as we did for 'A' and 'B'. After restarting the last server, we can go ahead and turn off 'C'. Any clients on 'C' will redistribute to our permanent cluster members.
|
||||
|
||||
## Seed Servers
|
||||
|
||||
In the examples above we started nats-server specifying two clustering routes. It is possible to let the server gossip protocol drive it and reduce the amount of configuration. You could, for example, start A, B and C as follows:
|
||||
|
||||
### A - Seed Server
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4222 -cluster nats://localhost:6222
|
||||
```
|
||||
|
||||
### B
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4333 -cluster nats://localhost:6333 -routes nats://localhost:6222
|
||||
```
|
||||
|
||||
### C
|
||||
|
||||
```bash
|
||||
nats-server -D -p 4444 -cluster nats://localhost:6444 -routes nats://localhost:6222
|
||||
```
|
||||
|
||||
Once they connect to the 'seed server', they will learn about all the other servers and connect to each other, forming the full mesh.
|
||||
|
||||
260
nats-server/nats_docker/README.md
Normal file
@@ -0,0 +1,260 @@
|
||||
# NATS and Docker
|
||||
|
||||
## NATS Server Containerization
|
||||
|
||||
The NATS server is provided as a Docker image on [Docker Hub](https://hub.docker.com/_/nats/) that you can run using the Docker daemon. The NATS server Docker image is extremely lightweight, coming in under 10 MB in size.
|
||||
|
||||
[Synadia](https://synadia.com) actively maintains and supports the NATS server Docker image.
|
||||
|
||||
### Usage
|
||||
|
||||
To use the Docker container image, install Docker and pull the public image:
|
||||
|
||||
```bash
|
||||
> docker pull nats
|
||||
```
|
||||
|
||||
Run the NATS server image:
|
||||
|
||||
```bash
|
||||
> docker run -d --name nats-main nats
|
||||
```
|
||||
|
||||
By default the NATS server exposes multiple ports:
|
||||
|
||||
* 4222 is for clients.
|
||||
* 8222 is an HTTP management port for information reporting.
|
||||
* 6222 is a routing port for clustering.
|
||||
* Use -p or -P to customize.
|
||||
|
||||
For example:
|
||||
|
||||
```bash
|
||||
$ docker run -d --name nats-main nats
|
||||
[INF] Starting nats-server version 0.6.6
|
||||
[INF] Starting http monitor on port 8222
|
||||
[INF] Listening for route connections on 0.0.0.0:6222
|
||||
[INF] Listening for client connections on 0.0.0.0:4222
|
||||
[INF] nats-server is ready
|
||||
```
|
||||
|
||||
To run with the ports exposed on the host:
|
||||
|
||||
```bash
|
||||
> docker run -d -p 4222:4222 -p 6222:6222 -p 8222:8222 --name nats-main nats
|
||||
```
|
||||
|
||||
To run a second server and cluster them together:
|
||||
|
||||
```bash
|
||||
> docker run -d --name=nats-2 --link nats-main nats --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222
|
||||
```
|
||||
|
||||
**NOTE** Since the Docker image protects routes using credentials, we need to provide them above. The relevant section is extracted [from the Docker image configuration](https://github.com/nats-io/nats-docker/blob/master/amd64/nats-server.conf#L16-L20):
|
||||
|
||||
```text
|
||||
# Routes are protected, so need to use them with --routes flag
|
||||
# e.g. --routes=nats-route://ruser:T0pS3cr3t@otherdockerhost:6222
|
||||
authorization {
|
||||
user: ruser
|
||||
password: T0pS3cr3t
|
||||
timeout: 2
|
||||
}
|
||||
```
|
||||
|
||||
To verify the routes are connected:
|
||||
|
||||
```bash
|
||||
$ docker run -d --name=nats-2 --link nats-main nats --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222 -DV
|
||||
[INF] Starting nats-server version 2.0.0
|
||||
[INF] Starting http monitor on port 8222
|
||||
[INF] Listening for route connections on :6222
|
||||
[INF] Listening for client connections on 0.0.0.0:4222
|
||||
[INF] nats-server is ready
|
||||
[DBG] Trying to connect to route on nats-main:6222
|
||||
[DBG] 172.17.0.52:6222 - rid:1 - Route connection created
|
||||
[DBG] 172.17.0.52:6222 - rid:1 - Route connect msg sent
|
||||
[DBG] 172.17.0.52:6222 - rid:1 - Registering remote route "ee35d227433a738c729f9422a59667bb"
|
||||
[DBG] 172.17.0.52:6222 - rid:1 - Route sent local subscriptions
|
||||
```
|
||||
|
||||
## Clustering With Docker
|
||||
|
||||
Below are a couple of examples of how to set up a nats-server cluster using Docker. We put 3 different configurations \(one per nats-server\) under a folder named `conf` as follows:
|
||||
|
||||
```text
|
||||
|-- conf
|
||||
|-- nats-server-A.conf
|
||||
|-- nats-server-B.conf
|
||||
|-- nats-server-C.conf
|
||||
```
|
||||
|
||||
Each of those files has the content shown below. \(Here IP 192.168.59.103 is used as an example; replace it with the proper IP of your server.\)
|
||||
|
||||
### Example 1: Setting up a cluster on 3 different servers provisioned beforehand
|
||||
|
||||
In this example, the three servers are started with config files that know about the other servers.
|
||||
|
||||
#### nats-server-A
|
||||
|
||||
```text
|
||||
# Cluster Server A
|
||||
|
||||
port: 7222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7244
|
||||
|
||||
routes = [
|
||||
nats-route://192.168.59.103:7246
|
||||
nats-route://192.168.59.103:7248
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### nats-server-B
|
||||
|
||||
```text
|
||||
# Cluster Server B
|
||||
|
||||
port: 8222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7246
|
||||
|
||||
routes = [
|
||||
nats-route://192.168.59.103:7244
|
||||
nats-route://192.168.59.103:7248
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### nats-server-C
|
||||
|
||||
```text
|
||||
# Cluster Server C
|
||||
|
||||
port: 9222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7248
|
||||
|
||||
routes = [
|
||||
nats-route://192.168.59.103:7244
|
||||
nats-route://192.168.59.103:7246
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
To start the containers on each one of your servers, you should be able to start the nats-server image as follows:
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:7222:7222 -p 0.0.0.0:7244:7244 --rm -v $(pwd)/conf/nats-server-A.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 7222 -D -V
|
||||
```
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:8222:8222 -p 0.0.0.0:7246:7246 --rm -v $(pwd)/conf/nats-server-B.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 8222 -D -V
|
||||
```
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:9222:9222 -p 0.0.0.0:7248:7248 --rm -v $(pwd)/conf/nats-server-C.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 9222 -D -V
|
||||
```
|
||||
|
||||
### Example 2: Setting up a nats-server cluster one by one
|
||||
|
||||
In this scenario:
|
||||
|
||||
* We bring up A and get its IP \(nats-route://192.168.59.103:7244\).
|
||||
* Then we create B, using the address of A in its configuration.
|
||||
* We get the address of B \(nats-route://192.168.59.104:7246\), then create C using the addresses of A and B.
|
||||
|
||||
First, we create Node A and start up a nats-server with the following config:
|
||||
|
||||
```text
|
||||
# Cluster Server A
|
||||
|
||||
port: 4222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7244
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:4222:4222 -p 0.0.0.0:7244:7244 --rm -v $(pwd)/conf/nats-server-A.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 4222 -D -V
|
||||
```
|
||||
|
||||
Then we proceed to create the next node. The first node has the ip:port `192.168.59.103:7244`, so we add this to the routes configuration as follows:
|
||||
|
||||
```text
|
||||
# Cluster Server B
|
||||
|
||||
port: 4222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7244
|
||||
|
||||
routes = [
|
||||
nats-route://192.168.59.103:7244
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Then start server B:
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:4222:4222 -p 0.0.0.0:7244:7244 --rm -v $(pwd)/conf/nats-server-B.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 4222 -D -V
|
||||
```
|
||||
|
||||
Finally, we create another node, C. We now know the routes of A and B, so we can add them to its configuration:
|
||||
|
||||
```text
|
||||
# Cluster Server C
|
||||
|
||||
port: 4222
|
||||
|
||||
cluster {
|
||||
host: '0.0.0.0'
|
||||
port: 7244
|
||||
|
||||
routes = [
|
||||
nats-route://192.168.59.103:7244
|
||||
nats-route://192.168.59.104:7244
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Then start it:
|
||||
|
||||
```bash
|
||||
docker run -it -p 0.0.0.0:4222:4222 -p 0.0.0.0:7244:7244 --rm -v $(pwd)/conf/nats-server-C.conf:/tmp/cluster.conf nats -c /tmp/cluster.conf -p 4222 -D -V
|
||||
```
|
||||
|
||||
### Testing the Clusters
|
||||
|
||||
Now, the following should work: make a subscription on Node A, then publish to Node C. You should be able to receive the message without problems.
|
||||
|
||||
```bash
|
||||
nats-sub -s "nats://192.168.59.103:7222" hello &
|
||||
|
||||
nats-pub -s "nats://192.168.59.105:7222" hello world
|
||||
|
||||
[#1] Received on [hello] : 'world'
|
||||
|
||||
# nats-server on Node C logs:
|
||||
[1] 2015/06/23 05:20:31.100032 [TRC] 192.168.59.103:7244 - rid:2 - <<- [MSG hello RSID:8:2 5]
|
||||
|
||||
# nats-server on Node A logs:
|
||||
[1] 2015/06/23 05:20:31.100600 [TRC] 10.0.2.2:51007 - cid:8 - <<- [MSG hello 2 5]
|
||||
```
|
||||
|
||||
## Tutorial
|
||||
|
||||
See the [NATS Docker tutorial](https://github.com/nats-io/nats.docs/tree/51fc56e3090645f7cedb242415e2d5361e1807e7/nats_docker/tutorial.md) for more instructions on using the NATS server Docker image.
|
||||
|
||||
100
nats-server/nats_docker/docker_swarm.md
Normal file
@@ -0,0 +1,100 @@
|
||||
# Docker Swarm
|
||||
|
||||
### Step 1:
|
||||
|
||||
Create an overlay network for the cluster \(in this example, `nats-cluster-example`\), and instantiate an initial NATS server.
|
||||
|
||||
First create an overlay network:
|
||||
|
||||
```bash
|
||||
% docker network create --driver overlay nats-cluster-example
|
||||
```
|
||||
|
||||
Next, instantiate an initial "seed" server for a NATS cluster, listening on port 6222 for other servers to route to it:
|
||||
|
||||
```bash
|
||||
% docker service create --network nats-cluster-example --name nats-cluster-node-1 nats:1.0.0 -cluster nats://0.0.0.0:6222 -DV
|
||||
```
|
||||
|
||||
### Step 2:
|
||||
|
||||
The second step is to create another service which connects to the NATS server within the overlay network. Note that we connect to the server at `nats-cluster-node-1`:
|
||||
|
||||
```bash
|
||||
% docker service create --name ruby-nats --network nats-cluster-example wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0 -e '
|
||||
NATS.on_error do |e|
|
||||
puts "ERROR: #{e}"
|
||||
end
|
||||
NATS.start(:servers => ["nats://nats-cluster-node-1:4222"]) do |nc|
|
||||
inbox = NATS.create_inbox
|
||||
puts "[#{Time.now}] Connected to NATS at #{nc.connected_server}, inbox: #{inbox}"
|
||||
|
||||
nc.subscribe(inbox) do |msg, reply|
|
||||
puts "[#{Time.now}] Received reply - #{msg}"
|
||||
end
|
||||
|
||||
nc.subscribe("hello") do |msg, reply|
|
||||
next if reply == inbox
|
||||
puts "[#{Time.now}] Received greeting - #{msg} - #{reply}"
|
||||
nc.publish(reply, "world")
|
||||
end
|
||||
|
||||
EM.add_periodic_timer(1) do
|
||||
puts "[#{Time.now}] Saying hi (servers in pool: #{nc.server_pool}"
|
||||
nc.publish("hello", "hi", inbox)
|
||||
end
|
||||
end'
|
||||
```
|
||||
|
||||
### Step 3:
|
||||
|
||||
Now you can add more nodes to the Swarm cluster via more docker services, referencing the seed server in the `-routes` parameter:
|
||||
|
||||
```bash
|
||||
% docker service create --network nats-cluster-example --name nats-cluster-node-2 nats:1.0.0 -cluster nats://0.0.0.0:6222 -routes nats://nats-cluster-node-1:6222 -DV
|
||||
```
|
||||
|
||||
In this case, `nats-cluster-node-1` is seeding the rest of the cluster through the autodiscovery feature. Now NATS servers `nats-cluster-node-1` and `nats-cluster-node-2` are clustered together.
|
||||
|
||||
Add in more replicas of the subscriber:
|
||||
|
||||
```bash
|
||||
% docker service scale ruby-nats=3
|
||||
```
|
||||
|
||||
Then confirm the distribution on the Docker Swarm cluster:
|
||||
|
||||
```bash
|
||||
% docker service ps ruby-nats
|
||||
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
|
||||
25skxso8honyhuznu15e4989m ruby-nats.1 wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0 node-1 Running Running 2 minutes ago
|
||||
0017lut0u3wj153yvp0uxr8yo ruby-nats.2 wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0 node-1 Running Running 2 minutes ago
|
||||
2sxl8rw6vm99x622efbdmkb96 ruby-nats.3 wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0 node-2 Running Running 2 minutes ago
|
||||
```
|
||||
|
||||
The sample output after adding more NATS server nodes to the cluster is below. Notice that the client is _dynamically_ aware of more nodes being part of the cluster via auto-discovery!
|
||||
|
||||
```bash
|
||||
[2016-08-15 12:51:52 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
|
||||
[2016-08-15 12:51:53 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
|
||||
[2016-08-15 12:51:54 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
|
||||
[2016-08-15 12:51:55 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}, {:uri=>#<URI::Generic nats://10.0.1.7:4222>, :reconnect_attempts=>0}, {:uri=>#<URI::Generic nats://10.0.1.6:4222>, :reconnect_attempts=>0}]
|
||||
```
|
||||
|
||||
Sample output after adding more workers that can reply back \(each worker ignores its own responses\):
|
||||
|
||||
```bash
|
||||
[2016-08-15 16:06:26 +0000] Received reply - world
|
||||
[2016-08-15 16:06:26 +0000] Received reply - world
|
||||
[2016-08-15 16:06:27 +0000] Received greeting - hi - _INBOX.b8d8c01753d78e562e4dc561f1
|
||||
[2016-08-15 16:06:27 +0000] Received greeting - hi - _INBOX.4c35d18701979f8c8ed7e5f6ea
|
||||
```
|
||||
|
||||
## And so forth...
|
||||
|
||||
From here you can experiment with growing the NATS cluster by simply adding servers with new service names that route to the seed server `nats-cluster-node-1`. As you've seen above, clients will automatically learn that new servers are available in the cluster.
|
||||
|
||||
```bash
|
||||
% docker service create --network nats-cluster-example --name nats-cluster-node-3 nats:1.0.0 -cluster nats://0.0.0.0:6222 -routes nats://nats-cluster-node-1:6222 -DV
|
||||
```
|
||||
|
||||
60
nats-server/nats_docker/nats-docker-tutorial.md
Normal file
@@ -0,0 +1,60 @@
|
||||
# Tutorial
|
||||
|
||||
In this tutorial you run the [NATS server Docker image](https://hub.docker.com/_/nats/). The Docker image provides an instance of the [NATS Server](../../). Synadia actively maintains and supports the nats-server Docker image. The NATS image is only 6 MB in size.
|
||||
|
||||
**1. Set up Docker.**
|
||||
|
||||
See [Get Started with Docker](http://docs.docker.com/mac/started/) for guidance.
|
||||
|
||||
The easiest way to run Docker is to use the [Docker Toolbox](http://docs.docker.com/mac/step_one/).
|
||||
|
||||
**2. Run the nats-server Docker image.**
|
||||
|
||||
```bash
|
||||
> docker run -p 4222:4222 -p 8222:8222 -p 6222:6222 --name nats-server -ti nats:latest
|
||||
```
|
||||
|
||||
**3. Verify that the NATS server is running.**
|
||||
|
||||
You should see the following:
|
||||
|
||||
```bash
|
||||
Unable to find image 'nats:latest' locally
|
||||
latest: Pulling from library/nats
|
||||
2d3d00b0941f: Pull complete
|
||||
24bc6bd33ea7: Pull complete
|
||||
Digest: sha256:47b825feb34e545317c4ad122bd1a752a3172bbbc72104fc7fb5e57cf90f79e4
|
||||
Status: Downloaded newer image for nats:latest
|
||||
```
|
||||
|
||||
Followed by this, indicating that the NATS server is running:
|
||||
|
||||
```bash
|
||||
[1] 2019/06/01 18:34:19.605144 [INF] Starting nats-server version 2.0.0
|
||||
[1] 2019/06/01 18:34:19.605191 [INF] Starting http monitor on 0.0.0.0:8222
|
||||
[1] 2019/06/01 18:34:19.605286 [INF] Listening for client connections on 0.0.0.0:4222
|
||||
[1] 2019/06/01 18:34:19.605312 [INF] Server is ready
|
||||
[1] 2019/06/01 18:34:19.608756 [INF] Listening for route connections on 0.0.0.0:6222
|
||||
```
|
||||
|
||||
Notice how quickly the NATS server Docker image downloads. It is a mere 6 MB in size.
|
||||
|
||||
**4. Test the NATS server to verify it is running.**
|
||||
|
||||
An easy way to test the client connection port is using telnet.
|
||||
|
||||
```bash
|
||||
> telnet localhost 4222
|
||||
```
|
||||
|
||||
Expected result:
|
||||
|
||||
```bash
|
||||
Trying ::1...
|
||||
Connected to localhost.
|
||||
Escape character is '^]'.
|
||||
INFO {"server_id":"NDP7NP2P2KADDDUUBUDG6VSSWKCW4IC5BQHAYVMLVAJEGZITE5XP7O5J","version":"2.0.0","proto":1,"go":"go1.11.10","host":"0.0.0.0","port":4222,"max_payload":1048576,"client_id":13249}
|
||||
```
|
||||
|
||||
You can also test the monitoring endpoint, viewing `http://localhost:8222` with a browser.
|
||||
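From the command line, `curl` works too; for example, the `varz` route returns general server information:

```bash
> curl http://localhost:8222/varz
```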
|
||||
34
nats-server/running/README.md
Normal file
@@ -0,0 +1,34 @@
|
||||
# Running
|
||||
|
||||
The nats-server has many command line options. To get started, you don't have to specify anything. In the absence of any flags, the NATS server will start listening for NATS client connections on port 4222. By default, security is disabled.
|
||||
|
||||
## Standalone
|
||||
|
||||
When the server starts it will print some information including where the server is listening for client connections:
|
||||
|
||||
```text
|
||||
> nats-server
|
||||
[41634] 2019/05/13 09:42:11.745919 [INF] Starting nats-server version 2.0.0
|
||||
[41634] 2019/05/13 09:42:11.746240 [INF] Listening for client connections on 0.0.0.0:4222
|
||||
...
|
||||
[41634] 2019/05/13 09:42:11.746249 [INF] Server id is NBNYNR4ZNTH4N2UQKSAAKBAFLDV3PZO4OUYONSUIQASTQT7BT4ZF6WX7
|
||||
[41634] 2019/05/13 09:42:11.746252 [INF] Server is ready
|
||||
```
|
||||
|
||||
## Docker
|
||||
|
||||
If you are running your NATS server in a Docker container:
|
||||
|
||||
```text
|
||||
> docker run -p 4222:4222 -ti nats:latest
|
||||
[1] 2019/05/13 14:55:11.981434 [INF] Starting nats-server version 2.0.0
|
||||
...
|
||||
[1] 2019/05/13 14:55:11.981545 [INF] Starting http monitor on 0.0.0.0:8222
|
||||
[1] 2019/05/13 14:55:11.981560 [INF] Listening for client connections on 0.0.0.0:4222
|
||||
[1] 2019/05/13 14:55:11.981565 [INF] Server is ready
|
||||
[1] 2019/05/13 14:55:11.982492 [INF] Listening for route connections on 0.0.0.0:6222
|
||||
...
|
||||
```
|
||||
|
||||
More information on [containerized NATS is available here](../nats_docker/).
|
||||
|
||||
30
nats-server/running/windows_srv.md
Normal file
@@ -0,0 +1,30 @@
|
||||
# Windows Service
|
||||
|
||||
The NATS server supports running as a Windows service. In fact, this is the recommended way of running NATS on Windows. There is currently no installer; users should use `sc.exe` to install the service:
|
||||
|
||||
```text
|
||||
sc.exe create nats-server binPath= "%NATS_PATH%\nats-server.exe [nats-server flags]"
|
||||
sc.exe start nats-server
|
||||
```
|
||||
|
||||
The above will create and start a `nats-server` service. Note that the nats-server flags should be provided when creating the service. This allows for running multiple NATS server configurations on a single Windows server, using one service instance per installed NATS server. Once the service is running, it can be controlled using `sc.exe` or `nats-server.exe --signal`:
|
||||
|
||||
```text
|
||||
REM Reload server configuration
|
||||
nats-server.exe --signal reload
|
||||
|
||||
REM Reopen log file for log rotation
|
||||
nats-server.exe --signal reopen
|
||||
|
||||
REM Stop the server
|
||||
nats-server.exe --signal stop
|
||||
```
|
||||
|
||||
The above commands will default to controlling the `nats-server` service. If the service has another name, it can be specified:
|
||||
|
||||
```text
|
||||
nats-server.exe --signal stop=<service name>
|
||||
```
|
||||
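For example, a second service running its own configuration file could be installed alongside the first \(the service name and paths here are illustrative\):

```text
sc.exe create nats-server-acme binPath= "c:\nats\nats-server.exe -c c:\nats\acme.conf"
sc.exe start nats-server-acme
```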
|
||||
For a complete list of signals, see [process signaling](../nats_admin/signals.md).
|
||||
|
||||