mirror of https://github.com/taigrr/nats.docs (synced 2025-01-18 04:03:23 -08:00)

commit b082996143 (parent 7e27f03c98)
GitBook: [master] 82 pages modified
SUMMARY.md (29 changed lines)
@@ -6,13 +6,13 @@
## NATS Concepts

-* [What is NATS](concepts/intro.md)
-* [Subject-Based Messaging](concepts/subjects.md)
-* [Publish-Subscribe](concepts/pubsub.md)
-* [Request-Reply](concepts/reqreply.md)
-* [Queue Groups](concepts/queue.md)
-* [Acknowledgements](concepts/acks.md)
-* [Sequence Numbers](concepts/seq_num.md)
+* [What is NATS](nats-concepts/intro.md)
+* [Subject-Based Messaging](nats-concepts/subjects.md)
+* [Publish-Subscribe](nats-concepts/pubsub.md)
+* [Request-Reply](nats-concepts/reqreply.md)
+* [Queue Groups](nats-concepts/queue.md)
+* [Acknowledgements](nats-concepts/acks.md)
+* [Sequence Numbers](nats-concepts/seq_num.md)

## Developing With NATS
@@ -102,7 +102,7 @@
## NATS Tools

-* [Introduction](nats-tools/README.md)
+* [Introduction](nats-tools/nats-tools.md)
* [mkpasswd](nats-tools/mkpasswd.md)
* [nk](nats-tools/nk.md)
* [nsc](nats-tools/nsc/README.md)
@@ -173,7 +173,7 @@
* [Persistence](nats-streaming-server/configuring/persistence/README.md)
* [File Store](nats-streaming-server/configuring/persistence/file_store.md)
* [SQL Store](nats-streaming-server/configuring/persistence/sql_store.md)
-* [Securing](nats-streaming-server/configuring/tls/README.md)
+* [Securing](nats-streaming-server/configuring/tls.md)
* [Process Signaling](nats-streaming-server/process-signaling.md)
* [Windows Service](nats-streaming-server/windows-service.md)
* [Embedding NATS Streaming Server](nats-streaming-server/embedding.md)
@@ -188,8 +188,9 @@
## NATS on Kubernetes

-* [Introduction](nats-kubernetes/README.md)
-* [NATS Streaming Cluster with FT Mode](nats-kubernetes/stan-ft-k8s-aws.md)
-* [NATS + Prometheus Operator](nats-kubernetes/prometheus-and-nats-operator.md)
-* [NATS + Cert Manager](nats-kubernetes/nats-cluster-and-cert-manager.md)
-* [Securing a NATS Cluster with cfssl](nats-kubernetes/operator-tls-setup-with-cfssl.md)
+* [Introduction](nats-on-kubernetes/nats-kubernetes.md)
+* [NATS Streaming Cluster with FT Mode](nats-on-kubernetes/stan-ft-k8s-aws.md)
+* [NATS and Prometheus Operator](nats-on-kubernetes/prometheus-and-nats-operator.md)
+* [NATS Cluster and Cert Manager](nats-on-kubernetes/nats-cluster-and-cert-manager.md)
+* [Securing a NATS Cluster with cfssl](nats-on-kubernetes/operator-tls-setup-with-cfssl.md)
@@ -12,7 +12,7 @@ Just be aware that using an at least once guarantee is the facet of messaging wi
NATS streaming is ideal when:

* A historical record of a stream is required. This is when a replay of data

  is required by a consumer.
@@ -22,9 +22,9 @@ NATS streaming is ideal when:
* A-priori knowledge of consumers is not available, but consumers must receive

  messages. This is often a false assumption.

* Data producers and consumers are highly decoupled. They may be online at

  different times and consumers must receive messages.
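To ground the replay case, a minimal sketch with the stan.go streaming client; the cluster ID, client ID, and the `orders` channel are assumptions for illustration:

```go
// Assumes a NATS Streaming server reachable locally with cluster ID
// "test-cluster" (import "github.com/nats-io/stan.go").
sc, err := stan.Connect("test-cluster", "replay-client")
if err != nil {
    log.Fatal(err)
}
defer sc.Close()

// DeliverAllAvailable replays the channel's full historical record
// to this consumer before delivering new messages.
if _, err := sc.Subscribe("orders", func(m *stan.Msg) {
    log.Printf("seq=%d: %s", m.Sequence, m.Data)
}, stan.DeliverAllAvailable()); err != nil {
    log.Fatal(err)
}
```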
@@ -45,7 +45,7 @@ These include:
* Service patterns where there is a tightly coupled request/reply
* A request is made, and the application handles error cases upon timeout

  \(resends, errors, etc\). \_\_Relying on a messaging system to resend here is
  considered an anti-pattern.\_\_
* Where only the last message received is important and new messages will
@@ -62,7 +62,7 @@ These include:
* The expected consumer set for a message is available a-priori and consumers

  are expected to be live. The request/reply pattern works well here or
  consumers can send an application level acknowledgement.
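For the tightly coupled case above, a sketch of handling the timeout in the application rather than relying on the messaging system to resend; the subject, attempt count, and timeout are assumptions for illustration:

```go
// Application-level retry around a core NATS request; "svc.echo" and
// the retry policy are illustrative choices, not a prescribed pattern.
var resp *nats.Msg
for attempt := 1; attempt <= 3; attempt++ {
    msg, err := nc.Request("svc.echo", []byte("ping"), 250*time.Millisecond)
    if err == nil {
        resp = msg
        break
    }
    // Timed out or failed: the application decides whether to resend.
    log.Printf("attempt %d failed: %v", attempt, err)
}
if resp == nil {
    log.Fatal("no reply after 3 attempts")
}
```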
@@ -24,7 +24,7 @@ servers := []string{"nats://127.0.0.1:1222", "nats://127.0.0.1:1223", "nats://12
nc, err := nats.Connect(strings.Join(servers, ","))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -7,7 +7,7 @@ Each library has its own, language preferred way, to pass connection options. On

```go
nc, err := nats.Connect("demo.nats.io", nats.Name("API Options Example"), nats.Timeout(10*time.Second))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -7,7 +7,7 @@ Some libraries also provide a special way to connect to a _default_ url, which i

```go
nc, err := nats.Connect(nats.DefaultURL)
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -14,7 +14,7 @@ Keep in mind that each connection will have to turn off echo, and that it is per
// Turn off echo
nc, err := nats.Connect("demo.nats.io", nats.Name("API NoEcho Example"), nats.NoEcho())
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -14,7 +14,7 @@ If you have a connection that is going to be open a long time with few messages
// Set Ping Interval to 20 seconds
nc, err := nats.Connect("demo.nats.io", nats.Name("API Ping Example"), nats.PingInterval(20*time.Second))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -99,14 +99,14 @@ For example, to set the maximum number of outgoing pings to 5:
{% tab title="Go" %}
```go
// Set maximum number of PINGs out without getting a PONG back
// before the connection will be disconnected as a stale connection.
nc, err := nats.Connect("demo.nats.io", nats.Name("API MaxPing Example"), nats.MaxPingsOutstanding(5))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

// Do something with the connection
```
{% endtab %}
@@ -67,7 +67,7 @@ While the client can't control the maximum payload size, clients may provide a w

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
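// A small sketch of the check itself: nats.go exposes the maximum
// payload the server advertised for this connection (log wording
// is illustrative).
mp := nc.MaxPayload()
log.Printf("Maximum payload is %v bytes", mp)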
@@ -160,7 +160,7 @@ opts.Url = "demo.nats.io"
opts.Pedantic = true
nc, err := opts.Connect()
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -245,7 +245,7 @@ opts.Url = "demo.nats.io"
opts.Verbose = true
nc, err := opts.Connect()
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -15,7 +15,7 @@ For example, to connect to the demo server with a URL you can use:
// nats.Connect("nats://demo.nats.io:4222")
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -9,19 +9,19 @@ For example, the client library may provide a mechanism to get the connection's

```go
nc, err := nats.Connect("demo.nats.io", nats.Name("API Example"))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

getStatusTxt := func(nc *nats.Conn) string {
    switch nc.Status() {
    case nats.CONNECTED:
        return "Connected"
    case nats.CLOSED:
        return "Closed"
    default:
        return "Other"
    }
}
log.Printf("The connection is %v\n", getStatusTxt(nc))
@@ -166,12 +166,12 @@ When working with a cluster, servers may be added or changed. Some of the client
// Be notified if a new server joins the cluster.
// Print all the known servers and only the ones that were discovered.
nc, err := nats.Connect("demo.nats.io",
    nats.DiscoveredServersHandler(func(nc *nats.Conn) {
        log.Printf("Known servers: %v\n", nc.Servers())
        log.Printf("Discovered servers: %v\n", nc.DiscoveredServers())
    }))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -251,11 +251,11 @@ The client library may separate server-to-client errors from events. Many server

```go
// Set the callback that will be invoked when an asynchronous error occurs.
nc, err := nats.Connect("demo.nats.io",
    nats.ErrorHandler(func(_ *nats.Conn, _ *nats.Subscription, err error) {
        log.Printf("Error: %v", err)
    }))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -25,14 +25,14 @@ The first way that the incoming queue can be limited is by message count. The se

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

// Subscribe
sub1, err := nc.Subscribe("updates", func(m *nats.Msg) {})
if err != nil {
    log.Fatal(err)
}

// Set limits of 1000 messages or 5MB, whichever comes first
@@ -41,7 +41,7 @@ sub1.SetPendingLimits(1000, 5*1024*1024)
// Subscribe
sub2, err := nc.Subscribe("updates", func(m *nats.Msg) {})
if err != nil {
    log.Fatal(err)
}

// Set no limits for this subscription
@@ -124,7 +124,7 @@ Some libraries, like Java, will not send this notification on every dropped mess
// Set the callback that will be invoked when an asynchronous error occurs.
nc, err := nats.Connect("demo.nats.io", nats.ErrorHandler(logSlowConsumer))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -173,13 +173,11 @@ public class SlowConsumerListener {
{% tab title="JavaScript" %}
```javascript
// slow consumer detection is not configurable on NATS JavaScript client.
```
{% endtab %}

{% tab title="Python" %}
```python
nc = NATS()

async def error_cb(e):
@@ -202,7 +200,7 @@ public class SlowConsumerListener {
  if len(msgs) == 3:
      # Head of line blocking on other messages caused
      # by single message processing taking long...
      await asyncio.sleep(1)

await nc.subscribe("updates", cb=cb, pending_msgs_limit=5)
@@ -235,3 +233,4 @@ public class SlowConsumerListener {
```
{% endtab %}
{% endtabs %}
@@ -9,7 +9,7 @@ The following example subscribes to the subject `updates` and handles the incomi

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -19,9 +19,9 @@ wg.Add(1)

// Subscribe
if _, err := nc.Subscribe("updates", func(m *nats.Msg) {
    wg.Done()
}); err != nil {
    log.Fatal(err)
}

// Wait for a message to come in
@@ -30,15 +30,15 @@ errCh := make(chan error, 1)
// so say: nats.DrainTimeout(10*time.Millisecond).

nc, err := nats.Connect("demo.nats.io",
    nats.DrainTimeout(10*time.Second),
    nats.ErrorHandler(func(_ *nats.Conn, _ *nats.Subscription, err error) {
        errCh <- err
    }),
    nats.ClosedHandler(func(_ *nats.Conn) {
        wg.Done()
    }))
if err != nil {
    log.Fatal(err)
}

// Just to not collide using the demo server with other users.
@@ -46,19 +46,19 @@ subject := nats.NewInbox()

// Subscribe, but add some delay while processing.
if _, err := nc.Subscribe(subject, func(_ *nats.Msg) {
    time.Sleep(200 * time.Millisecond)
}); err != nil {
    log.Fatal(err)
}

// Publish a message
if err := nc.Publish(subject, []byte("hello")); err != nil {
    log.Fatal(err)
}

// Drain the connection, which will close it when done.
if err := nc.Drain(); err != nil {
    log.Fatal(err)
}

// Wait for the connection to be closed.
@@ -67,7 +67,7 @@ wg.Wait()
// Check if there was an error
select {
case e := <-errCh:
    log.Fatal(e)
default:
}
```
@@ -215,61 +215,60 @@ The API for drain can generally be used instead of unsubscribe:
{% tabs %}
{% tab title="Go" %}
```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
done := sync.WaitGroup{}
done.Add(1)

count := 0
errCh := make(chan error, 1)
msgAfterDrain := "not this one"

// Just to not collide using the demo server with other users.
subject := nats.NewInbox()
// This callback will process each message slowly
sub, err := nc.Subscribe(subject, func(m *nats.Msg) {
    if string(m.Data) == msgAfterDrain {
        errCh <- fmt.Errorf("Should not have received this message")
        return
    }
    time.Sleep(100 * time.Millisecond)
    count++
    if count == 2 {
        done.Done()
    }
})
// Send 2 messages
for i := 0; i < 2; i++ {
    nc.Publish(subject, []byte("hello"))
}
// Call Drain on the subscription. It unsubscribes but
// waits for all pending messages to be processed.
if err := sub.Drain(); err != nil {
    log.Fatal(err)
}
// Send one more message, this message should not be received
nc.Publish(subject, []byte(msgAfterDrain))

// Wait for the subscription to have processed the 2 messages.
done.Wait()

// Now check that the 3rd message was not received
select {
case e := <-errCh:
    log.Fatal(e)
case <-time.After(200 * time.Millisecond):
    // OK!
}
```
{% endtab %}
@@ -13,7 +13,7 @@ As an example, to subscribe to the queue `workers` with the subject `updates`:

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -23,9 +23,9 @@ wg.Add(10)

// Create a queue subscription on "updates" with queue name "workers"
if _, err := nc.QueueSubscribe("updates", "worker", func(m *nats.Msg) {
    wg.Done()
}); err != nil {
    log.Fatal(err)
}

// Wait for messages to come in
@@ -125,17 +125,17 @@ If you run this example with the publish examples that send to `updates`, you wi

## Queue Permissions

Added in NATS Server v2.1.2, Queue Permissions allow you to express authorization for queue groups. As queue groups are integral to implementing horizontally scalable microservices, control of who is allowed to join a specific queue group is important to the overall security model.

-A Queue Permission can be defined with the syntax `<subject> <queue>`, where the name of the queue can also use wildcards, for example the following would allow clients to join queue groups v1 and v2.*, but won't allow plain subscriptions:
+A Queue Permission can be defined with the syntax `<subject> <queue>`, where the name of the queue can also use wildcards, for example the following would allow clients to join queue groups v1 and v2.\*, but won't allow plain subscriptions:

-```hcl
+```text
allow = ["foo v1", "foo v2.*"]
```

The full wildcard can also be used, for example the following would prevent plain subscriptions on `bar` but allow the client to join any queue:

-```
+```text
allow = ["bar >"]
```
@@ -155,3 +155,4 @@ users = [
  }
]
```
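Putting the pieces together, a sketch of a full server authorization block that grants a user queue-group-only subscribe permissions; the user name, password, and subjects are assumptions for illustration:

```text
# Sketch of a nats-server authorization config using queue permissions;
# user name, password, and subjects are illustrative.
authorization: {
  users = [
    {
      user: "worker", password: "s3cret",
      permissions: {
        publish: ["results.>"]
        # May join queue groups v1 and v2.* on "updates",
        # but may not create a plain subscription.
        subscribe: ["updates v1", "updates v2.*"]
      }
    }
  ]
}
```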
|
@ -9,20 +9,20 @@ For example, the following code will listen for that request and respond with th
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
// Subscribe
|
||||
sub, err := nc.SubscribeSync("time")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Read a message
|
||||
msg, err := sub.NextMsg(10 * time.Second)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Get the time
|
||||
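// A sketch of the reply step that follows the hunk above; the time
// formatting is illustrative.
if err := nc.Publish(msg.Reply, []byte(time.Now().String())); err != nil {
    log.Fatal(err)
}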
@@ -69,7 +69,6 @@ nc.subscribe('time', (msg, reply) => {
    nc.publish(msg.reply, new Date().toLocaleTimeString());
  }
});
```
{% endtab %}
|
@ -9,19 +9,19 @@ For example, to receive JSON you could do:
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
ec, err := nats.NewEncodedConn(nc, nats.JSON_ENCODER)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer ec.Close()
|
||||
|
||||
// Define the object
|
||||
type stock struct {
|
||||
Symbol string
|
||||
Price int
|
||||
Symbol string
|
||||
Price int
|
||||
}
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
@@ -29,10 +29,10 @@ wg.Add(1)

// Subscribe
if _, err := ec.Subscribe("updates", func(s *stock) {
    log.Printf("Stock: %s - Price: %v", s.Symbol, s.Price)
    wg.Done()
}); err != nil {
    log.Fatal(err)
}

// Wait for a message to come in
@@ -66,7 +66,7 @@ public class SubscribeJSON {
String json = new String(msg.getData(), StandardCharsets.UTF_8);
StockForJsonSub stk = gson.fromJson(json, StockForJsonSub.class);

// Use the object
System.out.println(stk);
|
@ -9,20 +9,20 @@ For example, to subscribe to the subject `updates` and receive a single message
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
// Subscribe
|
||||
sub, err := nc.SubscribeSync("updates")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Wait for a message
|
||||
msg, err := sub.NextMsg(10 * time.Second)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Use the response
|
||||
|
@ -17,26 +17,26 @@ The following example shows unsubscribe after a single message:
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
// Sync Subscription
|
||||
sub, err := nc.SubscribeSync("updates")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
if err := sub.AutoUnsubscribe(1); err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Async Subscription
|
||||
sub, err = nc.Subscribe("updates", func(_ *nats.Msg) {})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
if err := sub.AutoUnsubscribe(1); err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
@@ -126,7 +126,6 @@ NATS.start(servers:["nats://127.0.0.1:4222"]) do |nc|
  end.resume
end
```
{% endtab %}
|
@ -9,26 +9,26 @@ This process requires an interaction with the server, so for an asynchronous sub
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
// Sync Subscription
|
||||
sub, err := nc.SubscribeSync("updates")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
if err := sub.Unsubscribe(); err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Async Subscription
|
||||
sub, err = nc.Subscribe("updates", func(_ *nats.Msg) {})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
if err := sub.Unsubscribe(); err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
@@ -92,7 +92,6 @@ await nc.unsubscribe(sid)

# Won't be received...
await nc.publish("updates", b'...')
```
{% endtab %}
@@ -11,7 +11,7 @@ For example, you can subscribe using `*` and then act based on the actual subjec

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -21,10 +21,10 @@ wg.Add(2)

// Subscribe
if _, err := nc.Subscribe("time.*.east", func(m *nats.Msg) {
    log.Printf("%s: %s", m.Subject, m.Data)
    wg.Done()
}); err != nil {
    log.Fatal(err)
}

// Wait for the 2 messages to come in
@@ -176,7 +176,7 @@ or do something similar with `>`:

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -186,10 +186,10 @@ wg.Add(4)

// Subscribe
if _, err := nc.Subscribe("time.>", func(m *nats.Msg) {
    log.Printf("%s: %s", m.Subject, m.Data)
    wg.Done()
}); err != nil {
    log.Fatal(err)
}

// Wait for the 4 messages to come in
@@ -346,13 +346,13 @@ The following example can be used to test these two subscribers. The `*` subscri

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

zoneID, err := time.LoadLocation("America/New_York")
if err != nil {
    log.Fatal(err)
}
now := time.Now()
zoneDateTime := now.In(zoneID)
@@ -363,14 +363,13 @@ nc.Publish("time.us.east.atlanta", []byte(formatted))

zoneID, err = time.LoadLocation("Europe/Warsaw")
if err != nil {
    log.Fatal(err)
}
zoneDateTime = now.In(zoneID)
formatted = zoneDateTime.String()

-nc.Publish("time.eu.east", []byte(formatted))
+nc.Publish("time.eu.east.warsaw", []byte(formatted))
```
{% endtab %}
@@ -392,7 +391,6 @@ nc.publish("time.eu.east.warsaw", formatted.getBytes(StandardCharsets.UTF_8));

nc.flush(Duration.ZERO);
nc.close();
```
{% endtab %}
@@ -402,7 +400,6 @@ nc.publish('time.us.east');
nc.publish('time.us.central');
nc.publish('time.us.mountain');
nc.publish('time.us.west');
```
{% endtab %}
@@ -419,7 +416,6 @@ await nc.publish("time.eu.east", b'...')
await nc.publish("time.eu.east.warsaw", b'...')

await nc.close()
```
{% endtab %}
@@ -434,7 +430,6 @@ NATS.start do |nc|

  nc.drain
end
```
{% endtab %}
@@ -444,7 +439,6 @@ nc.publish('time.us.east');
nc.publish('time.us.central');
nc.publish('time.us.mountain');
nc.publish('time.us.west');
```
{% endtab %}
{% endtabs %}
@@ -14,7 +14,7 @@ For clients that support this feature, you are able to configure the size of thi
// Set reconnect buffer size in bytes (5 MB)
nc, err := nats.Connect("demo.nats.io", nats.ReconnectBufSize(5*1024*1024))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -57,7 +57,6 @@ nc.close();
{% tab title="TypeScript" %}
```typescript
// Reconnect buffer size is not configurable on NATS Typescript client
```
{% endtab %}
{% endtabs %}
@@ -8,7 +8,7 @@ You can disable automatic reconnect with connection options:
// Disable reconnect attempts
nc, err := nats.Connect("demo.nats.io", nats.NoReconnect())
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -9,14 +9,14 @@ Because reconnect is primarily under the covers many libraries provide an event
// and the state of the connection may have changed when
// the callback is invoked.
nc, err := nats.Connect("demo.nats.io",
    nats.DisconnectHandler(func(nc *nats.Conn) {
        // handle disconnect event
    }),
    nats.ReconnectHandler(func(nc *nats.Conn) {
        // handle reconnect event
    }))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -8,7 +8,7 @@ Applications can set the maximum reconnect attempts. Generally, this will limit
// Set max reconnects attempts
nc, err := nats.Connect("demo.nats.io", nats.MaxReconnects(10))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -8,13 +8,13 @@ However, if you want to disable the randomization process, so that servers are a
{% tab title="Go" %}
```go
servers := []string{"nats://127.0.0.1:1222",
    "nats://127.0.0.1:1223",
    "nats://127.0.0.1:1224",
}

nc, err := nats.Connect(strings.Join(servers, ","), nats.DontRandomize())
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -8,7 +8,7 @@ It doesn’t make much sense to try to connect to the same server over and over.
// Set reconnect interval to 10 seconds
nc, err := nats.Connect("demo.nats.io", nats.ReconnectWait(10*time.Second))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -25,7 +25,7 @@ Given a creds file, a client can authenticate as a specific user belonging to a

```go
nc, err := nats.Connect("127.0.0.1", nats.UserCredentials("path_to_creds_file"))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -9,11 +9,11 @@ Handling challenge response may require more than just a setting in the connecti

```go
opt, err := nats.NkeyOptionFromSeed("seed.txt")
if err != nil {
    log.Fatal(err)
}
nc, err := nats.Connect("127.0.0.1", opt)
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -18,10 +18,10 @@ Connecting to a server with TLS is straightforward. Most clients will automatica
{% tab title="Go" %}
```go
nc, err := nats.Connect("localhost",
    nats.ClientCert("resources/certs/cert.pem", "resources/certs/key.pem"),
    nats.RootCAs("resources/certs/ca.pem"))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -84,7 +84,7 @@ public class ConnectTLS {
    sslContext(ctx). // Set the SSL context
    build();
Connection nc = Nats.connect(options);

// Do something with the connection

nc.close();
@@ -206,7 +206,7 @@ Some clients may support the `tls` protocol as well as a manual setting to turn

```go
nc, err := nats.Connect("tls://localhost", nats.RootCAs("resources/certs/ca.pem")) // May need this if server is using self-signed certificate
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -269,7 +269,7 @@ public class ConnectTLS {
    sslContext(ctx). // Set the SSL context
    build();
Connection nc = Nats.connect(options);

// Do something with the connection

nc.close();
|
@ -18,7 +18,7 @@ The code uses localhost:4222 so that you can start the server on your machine to
|
||||
// Set a token
|
||||
nc, err := nats.Connect("127.0.0.1", nats.Name("API Token Example"), nats.Token("mytoken"))
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
@@ -85,7 +85,7 @@ Again, once you construct this URL you can connect as if this was a normal URL.
// Token in URL
nc, err := nats.Connect("mytoken@localhost")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -28,7 +28,7 @@ When logging in with a password `nats-server` will take either a plain text pass
// Set a user and plain text password
nc, err := nats.Connect("127.0.0.1", nats.UserInfo("myname", "password"))
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -109,7 +109,7 @@ Using this format, you can connect to a server using authentication as easily as
// Set a user and plain text password
nc, err := nats.Connect("myname:password@127.0.0.1")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
|
@ -9,12 +9,12 @@ All of the NATS clients are designed to make sending a message simple. For examp
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io", nats.Name("API PublishBytes Example"))
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
if err := nc.Publish("updates", []byte("All is Well")); err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
|
@ -11,7 +11,7 @@ It is the libraries job to make sure messages flow in a high performance manner.
|
||||
```go
|
||||
nc, err := nats.Connect("demo.nats.io")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer nc.Close()
|
||||
|
||||
@@ -19,12 +19,12 @@ defer nc.Close()
subject := nats.NewInbox()

if err := nc.Publish(subject, []byte("All is Well")); err != nil {
    log.Fatal(err)
}
// Sends a PING and waits for a PONG from the server, up to the given timeout.
// This gives a guarantee that the server has processed the above message.
if err := nc.FlushTimeout(time.Second); err != nil {
    log.Fatal(err)
}
```
{% endtab %}
@@ -7,7 +7,7 @@ The optional reply-to field when publishing a message can be used on the receivi

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()
@@ -17,19 +17,19 @@ uniqueReplyTo := nats.NewInbox()

// Listen for a single response
sub, err := nc.SubscribeSync(uniqueReplyTo)
if err != nil {
    log.Fatal(err)
}

// Send the request.
// If processing is synchronous, use Request() which returns the response message.
if err := nc.PublishRequest("time", uniqueReplyTo, nil); err != nil {
    log.Fatal(err)
}

// Read the reply
msg, err := sub.NextMsg(time.Second)
if err != nil {
    log.Fatal(err)
}

// Use the response
@@ -13,14 +13,14 @@ For example, updating the previous publish example we may request `time` with a

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

// Send the request
msg, err := nc.Request("time", nil, time.Second)
if err != nil {
    log.Fatal(err)
}

// Use the response
@@ -9,25 +9,25 @@ Take a simple _stock ticker_ that sends the symbol and price of each stock:

```go
nc, err := nats.Connect("demo.nats.io")
if err != nil {
    log.Fatal(err)
}
defer nc.Close()

ec, err := nats.NewEncodedConn(nc, nats.JSON_ENCODER)
if err != nil {
    log.Fatal(err)
}
defer ec.Close()

// Define the object
type stock struct {
    Symbol string
    Price  int
}

// Publish the message
if err := ec.Publish("updates", &stock{Symbol: "GOOG", Price: 1200}); err != nil {
    log.Fatal(err)
}
```
{% endtab %}
faq.md (8 changed lines)
@@ -24,7 +24,7 @@
* [Does NATS support replay/redelivery of historical data?](faq.md#does-nats-support-replayredelivery-of-historical-data)
* [How do I gracefully shut down an asynchronous subscriber?](faq.md#how-do-i-gracefully-shut-down-an-asynchronous-subscriber)
* [How do I create subjects?](faq.md#how-do-i-create-subjects)
-* [Does adding a “max_age” to a “channel” for NATS streaming server connected to a SQL store, retroactively delete messages?](faq.md#does-adding-a-max_age-to-a-channel-for-nats-streaming-server-connected-to-a-sql-store-retroactively-delete-messages)
+* [Does adding a “max\_age” to a “channel” for NATS streaming server connected to a SQL store, retroactively delete messages?](faq.md#does-adding-a-max_age-to-a-channel-for-nats-streaming-server-connected-to-a-sql-store-retroactively-delete-messages)
* [What is the upgrade path from NATS 1.x to 2.x?](faq.md#what-is-the-upgrade-path-from-nats-1-x-to-2-x)

## General
@@ -141,11 +141,11 @@ To gracefully shutdown an asynchronous subscriber so that any outstanding MsgHan

Subjects are created and pruned \(deleted\) dynamically based on interest \(subscriptions\). This means that a subject does not exist in a NATS cluster until a client subscribes to it, and the subject goes away after the last subscribing client unsubscribes from that subject.
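A small sketch of this lifecycle with the Go client; the subject name is an assumption for illustration:

```go
nc, _ := nats.Connect("demo.nats.io")
// No interest yet: this publish is simply dropped by the server.
nc.Publish("orders.created", []byte("dropped"))

// Subscribing creates the subject's interest...
sub, _ := nc.SubscribeSync("orders.created")
nc.Publish("orders.created", []byte("delivered"))
msg, _ := sub.NextMsg(time.Second)
log.Printf("got: %s", msg.Data)

// ...and unsubscribing removes it again.
sub.Unsubscribe()
```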
-### Does adding a “max_age” to a “channel” for NATS streaming server connected to a SQL store, retroactively delete messages?
+### Does adding a “max\_age” to a “channel” for NATS streaming server connected to a SQL store, retroactively delete messages?

Yes, any channel limit will be applied on startup. For more information, see [Store Limits](nats-streaming-server/configuring/storelimits.md#limits-are-retroactive).
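As a sketch, such a limit sits in the streaming server's store limits configuration roughly like this; the channel name and age value are assumptions for illustration:

```text
store_limits: {
    channels: {
        "foo": {
            # Applied retroactively on startup: older messages are removed.
            max_age: "24h"
        }
    }
}
```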
### What is the upgrade path from NATS 1.x to 2.x?

NATS 2.0 is completely backwards compatible with NATS < 2.x configuration files and clients. Just run the new server. Be sure to review [What's New in 2.0](whats_new_20.md) for great new features.
@@ -1,58 +0,0 @@
# NATS on Kubernetes

In this section you can find several examples of how to deploy NATS, NATS Streaming
and other tools from the NATS ecosystem on Kubernetes.

* [Getting Started](README.md#getting-started)
* [Creating a NATS Streaming Cluster in k8s with FT mode](stan-ft-k8s-aws.md)
* [NATS + Prometheus Operator](prometheus-and-nats-operator)
* [NATS + Cert Manager in k8s](nats-cluster-and-cert-manager.md)
* [Securing a NATS Cluster using cfssl](operator-tls-setup-with-cfssl.md)

## Running NATS on K8S

### Getting started

The fastest and easiest way to get started is with just one shell command:

```sh
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh
```

*In case you don't have a cluster already, you can find some notes on how to create a small cluster using one of the hosted Kubernetes providers [here](https://github.com/nats-io/k8s/docs/create-k8s-cluster.md).*

This will run a `nats-setup` container with the [required policy](https://github.com/nats-io/k8s/blob/master/setup/bootstrap-policy.yml)
and deploy a NATS cluster on Kubernetes with external access, TLS and
decentralized authorization.

[](https://asciinema.org/a/282135)

By default, the installer will deploy the [Prometheus Operator](https://github.com/coreos/prometheus-operator) and the
[Cert Manager](https://github.com/jetstack/cert-manager) for metrics and TLS support, and the NATS instances will
also bind the 4222 host port for external access.

You can customize the installer to install without TLS or without Auth
to have a simpler setup as follows:

```sh
# Disable TLS
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls

# Disable Auth and TLS (also disables NATS surveyor and NATS Streaming)
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls --without-auth
```

**Note**: Since [NATS Streaming](https://github.com/nats-io/nats-streaming-server) will be running as a [leafnode](https://github.com/nats-io/docs/tree/master/leafnodes) to NATS
(under the STAN account) and that [NATS Surveyor](https://github.com/nats-io/nats-surveyor)
requires the [system account](,,/nats-server/nats_admin/sys_accounts) to monitor events, disabling auth also means that NATS Streaming and NATS Surveyor based monitoring will be disabled.

The monitoring dashboard setup using NATS Surveyor can be accessed by using port-forward:

    kubectl port-forward deployments/nats-surveyor-grafana 3000:3000

Next, open the following URL in your browser:

    http://127.0.0.1:3000/d/nats/nats-surveyor?refresh=5s&orgId=1
@@ -1,422 +0,0 @@
# Secure NATS Cluster in Kubernetes using the NATS Operator

## Features

- Clients TLS setup
- TLS based auth certs via secret
  + Reloading supported by only updating secret
- Routes TLS setup
- Advertising public IP per NATS server for external access

### Creating the Certificates

#### Generating the Root CA Certs

```js
{
  "CN": "nats.io",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "OU": "nats.io"
    }
  ]
}
```

```sh
(
  cd certs

  # CA certs
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
)
```

Setup the profiles for the Root CA; we will have 3 main profiles: one
for the clients connecting, one for the servers, and another one for
the full mesh routing connections between the servers.

```js :tangle certs/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "43800h"
    },
    "profiles": {
      "server": {
        "expiry": "43800h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "client": {
        "expiry": "43800h",
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ]
      },
      "route": {
        "expiry": "43800h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
```

#### Generating the NATS server certs

First we generate the certificates for the server.

```js
{
  "CN": "nats.io",
  "hosts": [
    "localhost",
    "*.nats-cluster.default.svc",
    "*.nats-cluster-mgmt.default.svc",
    "nats-cluster",
    "nats-cluster-mgmt",
    "nats-cluster.default.svc",
    "nats-cluster-mgmt.default.svc",
    "nats-cluster.default.svc.cluster.local",
    "nats-cluster-mgmt.default.svc.cluster.local",
    "*.nats-cluster.default.svc.cluster.local",
    "*.nats-cluster-mgmt.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "OU": "Operator"
    }
  ]
}
```

```sh
(
  # Generating the peer certificates
  cd certs
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
)
```

#### Generating the NATS server routes certs

We will also be setting up TLS for the full mesh routes.

```js
{
  "CN": "nats.io",
  "hosts": [
    "localhost",
    "*.nats-cluster.default.svc",
    "*.nats-cluster-mgmt.default.svc",
    "nats-cluster",
    "nats-cluster-mgmt",
    "nats-cluster.default.svc",
    "nats-cluster-mgmt.default.svc",
    "nats-cluster.default.svc.cluster.local",
    "nats-cluster-mgmt.default.svc.cluster.local",
    "*.nats-cluster.default.svc.cluster.local",
    "*.nats-cluster-mgmt.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "OU": "Operator"
    }
  ]
}
```

```sh
# Generating the peer certificates
(
  cd certs
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=route route.json | cfssljson -bare route
)
```

#### Generating the certs for the clients (CNCF && ACME)

```js
{
  "CN": "nats.io",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "OU": "CNCF"
    }
  ]
}
```

```sh
(
  cd certs
  # Generating NATS client certs
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
)
```

#### Kubectl create

```sh :results output
cd certs
kubectl create secret generic nats-tls-example --from-file=ca.pem --from-file=server-key.pem --from-file=server.pem
kubectl create secret generic nats-tls-routes-example --from-file=ca.pem --from-file=route-key.pem --from-file=route.pem
kubectl create secret generic nats-tls-client-example --from-file=ca.pem --from-file=client-key.pem --from-file=client.pem
```

### Create the Auth secret

```js
{
  "users": [
    { "username": "CN=nats.io,OU=ACME" },
    { "username": "CN=nats.io,OU=CNCF",
      "permissions": {
        "publish": ["hello.*"],
        "subscribe": ["hello.world"]
      }
    }
  ],
  "default_permissions": {
    "publish": ["SANDBOX.*"],
    "subscribe": ["PUBLIC.>"]
  }
}
```

```sh
kubectl create secret generic nats-tls-users --from-file=users.json
```

### Create a cluster with TLS

```sh
echo '
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-cluster"
spec:
  size: 3

  # Using custom edge nats server image for TLS verify and map support.
  serverImage: "wallyqs/nats-server"
  version: "edge-2.0.0-RC5"

  tls:
    enableHttps: true

    # Certificates to secure the NATS client connections:
    serverSecret: "nats-tls-example"

    # Certificates to secure the routes.
    routesSecret: "nats-tls-routes-example"

  auth:
    tlsVerifyAndMap: true
    clientsAuthSecret: "nats-tls-users"

    # How long to wait for authentication
    clientsAuthTimeout: 5

  pod:
    # To be able to reload the secret changes
    enableConfigReload: true
    reloaderImage: connecteverything/nats-server-config-reloader

    # Bind the port 4222 as the host port to allow external access.
    enableClientsHostPort: true

    # Initializer container that resolves the external IP from the
    # container where it is running.
    advertiseExternalIP: true

    # Image of container that resolves external IP from K8S API
    bootconfigImage: "wallyqs/nats-boot-config"
    bootconfigImageTag: "0.5.0"

  # Service account required to be able to find the external IP
  template:
    spec:
      serviceAccountName: "nats-server"
' | kubectl apply -f -
```

### Create APP using certs

#### Adding a new pod which uses the certificates

Development

```go
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/go-nats"
	"github.com/nats-io/nuid"
)

func main() {
	var (
		serverList     string
		rootCACertFile string
		clientCertFile string
		clientKeyFile  string
	)
	flag.StringVar(&serverList, "s", "tls://nats-1.nats-cluster.default.svc:4222", "List of NATS of servers available")
	flag.StringVar(&rootCACertFile, "cacert", "./certs/ca.pem", "Root CA Certificate File")
	flag.StringVar(&clientCertFile, "cert", "./certs/client.pem", "Client Certificate File")
	flag.StringVar(&clientKeyFile, "key", "./certs/client-key.pem", "Client Private key")
	flag.Parse()

	log.Println("NATS endpoint:", serverList)
	log.Println("Root CA:", rootCACertFile)
	log.Println("Client Cert:", clientCertFile)
	log.Println("Client Key:", clientKeyFile)

	// Connect options
	rootCA := nats.RootCAs(rootCACertFile)
	clientCert := nats.ClientCert(clientCertFile, clientKeyFile)
	alwaysReconnect := nats.MaxReconnects(-1)

	var nc *nats.Conn
	var err error
	for {
		nc, err = nats.Connect(serverList, rootCA, clientCert, alwaysReconnect)
		if err != nil {
			log.Printf("Error while connecting to NATS, backing off for a sec... (error: %s)", err)
			time.Sleep(1 * time.Second)
			continue
		}
		break
	}

	nc.Subscribe("discovery.*.status", func(m *nats.Msg) {
		log.Printf("[Received on %q] %s", m.Subject, string(m.Data))
	})

	discoverySubject := fmt.Sprintf("discovery.%s.status", nuid.Next())
	info := struct {
		InMsgs        uint64   `json:"in_msgs"`
		OutMsgs       uint64   `json:"out_msgs"`
		Reconnects    uint64   `json:"reconnects"`
		CurrentServer string   `json:"current_server"`
		Servers       []string `json:"servers"`
	}{}

	for range time.NewTicker(1 * time.Second).C {
		stats := nc.Stats()
		info.InMsgs = stats.InMsgs
		info.OutMsgs = stats.OutMsgs
		info.Reconnects = stats.Reconnects
		info.CurrentServer = nc.ConnectedUrl()
		info.Servers = nc.Servers()
		payload, err := json.Marshal(info)
		if err != nil {
			log.Printf("Error marshalling data: %s", err)
		}
		err = nc.Publish(discoverySubject, payload)
		if err != nil {
			log.Printf("Error during publishing: %s", err)
		}
		nc.Flush()
	}
}
```

```text
FROM golang:1.11.0-alpine3.8 AS builder
COPY . /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app
WORKDIR /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app
RUN apk add --update git
RUN go get -u github.com/nats-io/go-nats
RUN go get -u github.com/nats-io/nuid
RUN CGO_ENABLED=0 go build -o nats-client-app -v -a ./client.go

FROM scratch
COPY --from=builder /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app/nats-client-app /nats-client-app
ENTRYPOINT ["/nats-client-app"]
```

```sh
docker build . -t wallyqs/nats-client-app
docker run wallyqs/nats-client-app
docker push wallyqs/nats-client-app
```

Pod spec

```sh :results output
echo '
apiVersion: apps/v1beta2
kind: Deployment

# The name of the deployment
metadata:
  name: nats-client-app

spec:
  # This selector has to match the template.metadata.labels section
  # which is below in the PodSpec
  selector:
    matchLabels:
      name: nats-client-app

  # Number of instances
  replicas: 1

  # PodSpec
  template:
    metadata:
      labels:
        name: nats-client-app
    spec:
      volumes:
      - name: "client-tls-certs"
        secret:
          secretName: "nats-tls-client-example"
      containers:
      - name: nats-client-app
        command: ["/nats-client-app", "-s", "tls://nats-cluster.default.svc:4222", "-cacert", '/etc/nats-client-tls-certs/ca.pem', '-cert', '/etc/nats-client-tls-certs/client.pem', '-key', '/etc/nats-client-tls-certs/client-key.pem']
        image: wallyqs/nats-client-app:latest
        imagePullPolicy: Always
        volumeMounts:
        - name: "client-tls-certs"
          mountPath: "/etc/nats-client-tls-certs/"
' | kubectl apply -f -
```
@@ -23,7 +23,7 @@ clusterissuer.certmanager.k8s.io/selfsigning unchanged

Next, let's create the CA for the certs:

-``` yaml
+```yaml
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate

@@ -50,9 +50,9 @@ spec:
  secretName: nats-ca
```

-Now create the certs that will match the DNS name used by the clients to connect, in this case traffic is within Kubernetes so we are using the name `nats` which is backed up by a headless service (here is an [example](https://github.com/nats-io/k8s/blob/master/nats-server/nats-server-plain.yml#L24-L47) of sample deployment)
+Now create the certs that will match the DNS name used by the clients to connect, in this case traffic is within Kubernetes so we are using the name `nats` which is backed up by a headless service \(here is an [example](https://github.com/nats-io/k8s/blob/master/nats-server/nats-server-plain.yml#L24-L47) of sample deployment\)

-``` yaml
+```yaml
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate

@@ -72,7 +72,7 @@ spec:
  - nats.default.svc
```

-In case of using the NATS operator, the Routes use a service named `$YOUR_CLUSTER-mgmt` (this may change in the future)
+In case of using the NATS operator, the Routes use a service named `$YOUR_CLUSTER-mgmt` \(this may change in the future\)

```yaml
---

@@ -96,7 +96,7 @@ spec:

Now let's create an example NATS cluster with the operator:

-``` yaml
+```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:

@@ -134,11 +134,11 @@ spec:

Confirm that the pods were deployed:

-``` sh
+```bash
kubectl get pods -o wide
```

-``` sh
+```bash
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nats-1   1/1     Running   0          4s    172.17.0.8    minikube   <none>
nats-2   1/1     Running   0          3s    172.17.0.9    minikube   <none>

@@ -147,7 +147,7 @@ nats-3 1/1 Running 0 2s 172.17.0.10 minikube <none>

Follow the logs:

-``` sh
+```bash
kubectl logs nats-1
```

@@ -158,3 +158,4 @@ kubectl logs nats-1
[1] 2019/12/18 12:27:23.921047 [INF] Server id is NDA6JC3TGEADLLBEPFAQ4BN4PM3WBN237KIXVTFCY3JSTDOSRRVOJCXN
[1] 2019/12/18 12:27:23.921055 [INF] Server is ready
```
54
nats-on-kubernetes/nats-kubernetes.md
Normal file
54
nats-on-kubernetes/nats-kubernetes.md
Normal file
@ -0,0 +1,54 @@
|
||||
# Introduction
|
||||
|
||||
In this section you can find several examples of how to deploy NATS, NATS Streaming and other tools from the NATS ecosystem on Kubernetes.
|
||||
|
||||
* [Getting Started](nats-kubernetes.md#getting-started)
|
||||
* [Creating a NATS Streaming Cluster in k8s with FT mode](stan-ft-k8s-aws.md)
|
||||
* [NATS + Prometheus Operator](https://github.com/nats-io/nats.docs/tree/ccb05cdf9225a46fc872a6deab55dca4e072e902/nats-kubernetes/prometheus-and-nats-operator/README.md)
|
||||
* [NATS + Cert Manager in k8s](nats-cluster-and-cert-manager.md)
|
||||
* [Securing a NATS Cluster using cfssl](operator-tls-setup-with-cfssl.md)
|
||||
|
||||
## Running NATS on K8S
|
||||
|
||||
### Getting started
|
||||
|
||||
The fastest and easiest way to get started is with just one shell command:
|
||||
|
||||
```bash
|
||||
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh
|
||||
```
|
||||
|
||||
_In case you don't have a cluster already, you can find some notes on how to create a small cluster using one of the hosted Kubernetes providers_ [_here_](https://github.com/nats-io/k8s/docs/create-k8s-cluster.md)_._
|
||||
|
||||
This will run a `nats-setup` container with the [required policy](https://github.com/nats-io/k8s/blob/master/setup/bootstrap-policy.yml) and deploy a NATS cluster on Kubernetes with external access, TLS and decentralized authorization.
|
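A quick way to confirm that the installer finished is to list what it deployed; treat this as a rough sanity check, since the exact pod and service names depend on the installer defaults:

```bash
# All pods created by the setup should eventually reach the Running state
kubectl get pods

# The client service should expose port 4222
kubectl get svc
```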
||||
|
||||
[Watch the installer demo on asciinema](https://asciinema.org/a/282135)
|
||||
|
||||
By default, the installer will deploy the [Prometheus Operator](https://github.com/coreos/prometheus-operator) and the [Cert Manager](https://github.com/jetstack/cert-manager) for metrics and TLS support, and the NATS instances will also bind the 4222 host port for external access.
|
||||
|
||||
You can customize the installer to install without TLS or without Auth to have a simpler setup as follows:
|
||||
|
||||
```bash
|
||||
# Disable TLS
|
||||
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls
|
||||
|
||||
# Disable Auth and TLS (also disables NATS surveyor and NATS Streaming)
|
||||
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls --without-auth
|
||||
```
|
||||
|
||||
**Note**: Since [NATS Streaming](https://github.com/nats-io/nats-streaming-server) will be running as a [leafnode](https://github.com/nats-io/docs/tree/master/leafnodes) to NATS \(under the STAN account\) and [NATS Surveyor](https://github.com/nats-io/nats-surveyor) requires the [system account](https://github.com/nats-io/nats.docs/tree/ccb05cdf9225a46fc872a6deab55dca4e072e902/nats-kubernetes/,,/nats-server/nats_admin/sys_accounts/README.md) to monitor events, disabling auth also means that NATS Streaming and NATS Surveyor-based monitoring will be disabled.
|
||||
|
||||
The monitoring dashboard setup using NATS Surveyor can be accessed by using port-forward:
|
||||
|
||||
```text
|
||||
kubectl port-forward deployments/nats-surveyor-grafana 3000:3000
|
||||
```
|
||||
|
||||
Next, open the following URL in your browser:
|
||||
|
||||
```text
|
||||
http://127.0.0.1:3000/d/nats/nats-surveyor?refresh=5s&orgId=1
|
||||
```
|
||||
|
||||

|
||||
|
393
nats-on-kubernetes/operator-tls-setup-with-cfssl.md
Normal file
393
nats-on-kubernetes/operator-tls-setup-with-cfssl.md
Normal file
@ -0,0 +1,393 @@
|
||||
# Securing a NATS Cluster with cfssl
|
||||
|
||||
## Secure NATS Cluster in Kubernetes using the NATS Operator
|
||||
|
||||
### Features
|
||||
|
||||
* Clients TLS setup
|
||||
* TLS based auth certs via secret
|
||||
* Reloading supported by only updating secret
|
||||
* Routes TLS setup
|
||||
* Advertising public IP per NATS server for external access
|
||||
|
||||
### Creating the Certificates
|
||||
|
||||
### **Generating the Root CA Certs**
|
||||
|
||||
```javascript
|
||||
{
|
||||
"CN": "nats.io",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"OU": "nats.io"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
(
|
||||
cd certs
|
||||
|
||||
# CA certs
|
||||
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
|
||||
)
|
||||
```
|
||||
|
||||
Set up the profiles for the Root CA. We will have three main profiles: one for the clients connecting, one for the servers, and another one for the full mesh routing connections between the servers.
|
||||
|
||||
```javascript
|
||||
{ "signing": { "default": { "expiry": "43800h" }, "profiles": { "server": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] }, "client": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "client auth" ] }, "route": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } } }
|
||||
```
|
||||
|
||||
### **Generating the NATS server certs**
|
||||
|
||||
First we generate the certificates for the server.
|
||||
|
||||
```text
|
||||
{
|
||||
"CN": "nats.io",
|
||||
"hosts": [
|
||||
"localhost",
|
||||
"*.nats-cluster.default.svc",
|
||||
"*.nats-cluster-mgmt.default.svc",
|
||||
"nats-cluster",
|
||||
"nats-cluster-mgmt",
|
||||
"nats-cluster.default.svc",
|
||||
"nats-cluster-mgmt.default.svc",
|
||||
"nats-cluster.default.svc.cluster.local",
|
||||
"nats-cluster-mgmt.default.svc.cluster.local",
|
||||
"*.nats-cluster.default.svc.cluster.local",
|
||||
"*.nats-cluster-mgmt.default.svc.cluster.local"
|
||||
],
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"OU": "Operator"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
(
|
||||
# Generating the peer certificates
|
||||
cd certs
|
||||
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
|
||||
)
|
||||
```
|
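To double-check that the generated certificate actually carries the expected DNS names, a quick inspection with `openssl` \(assuming it is available locally\) can help:

```bash
# Print the Subject Alternative Names embedded in the server certificate
openssl x509 -in certs/server.pem -noout -text | grep -A 1 "Subject Alternative Name"
```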
||||
|
||||
### **Generating the NATS server routes certs**
|
||||
|
||||
We will also be setting up TLS for the full mesh routes.
|
||||
|
||||
```javascript
|
||||
{
|
||||
"CN": "nats.io",
|
||||
"hosts": [
|
||||
"localhost",
|
||||
"*.nats-cluster.default.svc",
|
||||
"*.nats-cluster-mgmt.default.svc",
|
||||
"nats-cluster",
|
||||
"nats-cluster-mgmt",
|
||||
"nats-cluster.default.svc",
|
||||
"nats-cluster-mgmt.default.svc",
|
||||
"nats-cluster.default.svc.cluster.local",
|
||||
"nats-cluster-mgmt.default.svc.cluster.local",
|
||||
"*.nats-cluster.default.svc.cluster.local",
|
||||
"*.nats-cluster-mgmt.default.svc.cluster.local"
|
||||
],
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"OU": "Operator"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
# Generating the peer certificates
|
||||
(
|
||||
cd certs
|
||||
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=route route.json | cfssljson -bare route
|
||||
)
|
||||
```
|
||||
|
||||
**Generating the certs for the clients \(CNCF && ACME\)**
|
||||
|
||||
```javascript
|
||||
{
|
||||
"CN": "nats.io",
|
||||
"hosts": [""],
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"OU": "CNCF"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
(
|
||||
cd certs
|
||||
# Generating NATS client certs
|
||||
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
|
||||
)
|
||||
```
|
||||
|
||||
**Kubectl create**
|
||||
|
||||
```bash
cd certs
kubectl create secret generic nats-tls-example --from-file=ca.pem --from-file=server-key.pem --from-file=server.pem
kubectl create secret generic nats-tls-routes-example --from-file=ca.pem --from-file=route-key.pem --from-file=route.pem
kubectl create secret generic nats-tls-client-example --from-file=ca.pem --from-file=client-key.pem --from-file=client.pem
```
|
||||
|
||||
|
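As a quick sanity check \(a sketch, not part of the original setup\), confirm that the three secrets now exist:

```bash
# Each secret should list the files it was created from
kubectl get secrets nats-tls-example nats-tls-routes-example nats-tls-client-example
```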
||||
### Create the Auth secret
|
||||
|
||||
```js
|
||||
{
|
||||
"users": [
|
||||
{ "username": "CN=nats.io,OU=ACME" },
|
||||
{ "username": "CN=nats.io,OU=CNCF",
|
||||
"permissions": {
|
||||
"publish": ["hello.*"],
|
||||
"subscribe": ["hello.world"]
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_permissions": {
|
||||
"publish": ["SANDBOX.*"],
|
||||
"subscribe": ["PUBLIC.>"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl create secret generic nats-tls-users --from-file=users.json
|
||||
```
|
||||
|
||||
#### Create a cluster with TLS
|
||||
|
||||
```bash
|
||||
echo '
|
||||
apiVersion: "nats.io/v1alpha2"
|
||||
kind: "NatsCluster"
|
||||
metadata:
|
||||
name: "nats-cluster"
|
||||
spec:
|
||||
size: 3
|
||||
|
||||
# Using custom edge nats server image for TLS verify and map support.
|
||||
serverImage: "wallyqs/nats-server"
|
||||
version: "edge-2.0.0-RC5"
|
||||
|
||||
tls:
|
||||
enableHttps: true
|
||||
|
||||
# Certificates to secure the NATS client connections:
|
||||
serverSecret: "nats-tls-example"
|
||||
|
||||
# Certificates to secure the routes.
|
||||
routesSecret: "nats-tls-routes-example"
|
||||
|
||||
auth:
|
||||
tlsVerifyAndMap: true
|
||||
clientsAuthSecret: "nats-tls-users"
|
||||
|
||||
# How long to wait for authentication
|
||||
clientsAuthTimeout: 5
|
||||
|
||||
pod:
|
||||
# To be able to reload the secret changes
|
||||
enableConfigReload: true
|
||||
reloaderImage: connecteverything/nats-server-config-reloader
|
||||
|
||||
# Bind the port 4222 as the host port to allow external access.
|
||||
enableClientsHostPort: true
|
||||
|
||||
# Initializer container that resolves the external IP from the
|
||||
# container where it is running.
|
||||
advertiseExternalIP: true
|
||||
|
||||
# Image of container that resolves external IP from K8S API
|
||||
bootconfigImage: "wallyqs/nats-boot-config"
|
||||
bootconfigImageTag: "0.5.0"
|
||||
|
||||
# Service account required to be able to find the external IP
|
||||
template:
|
||||
spec:
|
||||
serviceAccountName: "nats-server"
|
||||
' | kubectl apply -f -
|
||||
```
|
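Once applied, the operator should bring up the requested pods. A minimal check \(the pod names are an assumption based on the cluster name above\):

```bash
# Expect three pods, e.g. nats-cluster-1 ... nats-cluster-3, in Running state
kubectl get pods
```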
||||
|
||||
#### Create APP using certs
|
||||
|
||||
**Adding a new pod which uses the certificates**
|
||||
|
||||
Development
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"flag"
|
||||
"fmt"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"github.com/nats-io/go-nats"
|
||||
"github.com/nats-io/nuid"
|
||||
)
|
||||
|
||||
func main() {
|
||||
var (
|
||||
serverList string
|
||||
rootCACertFile string
|
||||
clientCertFile string
|
||||
clientKeyFile string
|
||||
)
|
||||
flag.StringVar(&serverList, "s", "tls://nats-1.nats-cluster.default.svc:4222", "List of NATS of servers available")
|
||||
flag.StringVar(&rootCACertFile, "cacert", "./certs/ca.pem", "Root CA Certificate File")
|
||||
flag.StringVar(&clientCertFile, "cert", "./certs/client.pem", "Client Certificate File")
|
||||
flag.StringVar(&clientKeyFile, "key", "./certs/client-key.pem", "Client Private key")
|
||||
flag.Parse()
|
||||
|
||||
log.Println("NATS endpoint:", serverList)
|
||||
log.Println("Root CA:", rootCACertFile)
|
||||
log.Println("Client Cert:", clientCertFile)
|
||||
log.Println("Client Key:", clientKeyFile)
|
||||
|
||||
// Connect options
|
||||
rootCA := nats.RootCAs(rootCACertFile)
|
||||
clientCert := nats.ClientCert(clientCertFile, clientKeyFile)
|
||||
alwaysReconnect := nats.MaxReconnects(-1)
|
||||
|
||||
var nc *nats.Conn
|
||||
var err error
|
||||
for {
|
||||
nc, err = nats.Connect(serverList, rootCA, clientCert, alwaysReconnect)
|
||||
if err != nil {
|
||||
log.Printf("Error while connecting to NATS, backing off for a sec... (error: %s)", err)
|
||||
time.Sleep(1 * time.Second)
|
||||
continue
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
nc.Subscribe("discovery.*.status", func(m *nats.Msg) {
|
||||
log.Printf("[Received on %q] %s", m.Subject, string(m.Data))
|
||||
})
|
||||
|
||||
discoverySubject := fmt.Sprintf("discovery.%s.status", nuid.Next())
|
||||
info := struct {
|
||||
InMsgs uint64 `json:"in_msgs"`
|
||||
OutMsgs uint64 `json:"out_msgs"`
|
||||
Reconnects uint64 `json:"reconnects"`
|
||||
CurrentServer string `json:"current_server"`
|
||||
Servers []string `json:"servers"`
|
||||
}{}
|
||||
|
||||
for range time.NewTicker(1 * time.Second).C {
|
||||
stats := nc.Stats()
|
||||
info.InMsgs = stats.InMsgs
|
||||
info.OutMsgs = stats.OutMsgs
|
||||
info.Reconnects = stats.Reconnects
|
||||
info.CurrentServer = nc.ConnectedUrl()
|
||||
info.Servers = nc.Servers()
|
||||
payload, err := json.Marshal(info)
|
||||
if err != nil {
|
||||
log.Printf("Error marshalling data: %s", err)
|
||||
}
|
||||
err = nc.Publish(discoverySubject, payload)
|
||||
if err != nil {
|
||||
log.Printf("Error during publishing: %s", err)
|
||||
}
|
||||
nc.Flush()
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```text
|
||||
FROM golang:1.11.0-alpine3.8 AS builder
|
||||
COPY . /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app
|
||||
WORKDIR /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app
|
||||
RUN apk add --update git
|
||||
RUN go get -u github.com/nats-io/go-nats
|
||||
RUN go get -u github.com/nats-io/nuid
|
||||
RUN CGO_ENABLED=0 go build -o nats-client-app -v -a ./client.go
|
||||
|
||||
FROM scratch
|
||||
COPY --from=builder /go/src/github.com/nats-io/nats-kubernetes/examples/nats-cluster-routes-tls/app/nats-client-app /nats-client-app
|
||||
ENTRYPOINT ["/nats-client-app"]
|
||||
```
|
||||
|
||||
```bash
|
||||
docker build . -t wallyqs/nats-client-app
|
||||
docker run wallyqs/nats-client-app
|
||||
docker push wallyqs/nats-client-app
|
||||
```
|
||||
|
||||
Pod spec
|
||||
|
||||
```bash
echo '
apiVersion: apps/v1beta2
kind: Deployment

# The name of the deployment
metadata:
  name: nats-client-app

spec:
  # This selector has to match the template.metadata.labels section
  # which is below in the PodSpec
  selector:
    matchLabels:
      name: nats-client-app

  # Number of instances
  replicas: 1

  # PodSpec
  template:
    metadata:
      labels:
        name: nats-client-app
    spec:
      volumes:
      - name: "client-tls-certs"
        secret:
          secretName: "nats-tls-client-example"
      containers:
      - name: nats-client-app
        command: ["/nats-client-app", "-s", "tls://nats-cluster.default.svc:4222", "-cacert", "/etc/nats-client-tls-certs/ca.pem", "-cert", "/etc/nats-client-tls-certs/client.pem", "-key", "/etc/nats-client-tls-certs/client-key.pem"]
        image: wallyqs/nats-client-app:latest
        imagePullPolicy: Always
        volumeMounts:
        - name: "client-tls-certs"
          mountPath: "/etc/nats-client-tls-certs/"
' | kubectl apply -f -
```
|
||||
|
15
nats-kubernetes/prometheus-and-nats-operator.md → nats-on-kubernetes/prometheus-and-nats-operator.md
Executable file → Normal file
15
nats-kubernetes/prometheus-and-nats-operator.md → nats-on-kubernetes/prometheus-and-nats-operator.md
Executable file → Normal file
@ -1,16 +1,15 @@
|
||||
|
||||
# Prometheus Operator + NATS Operator
|
||||
# NATS and Prometheus Operator
|
||||
|
||||
## Installing the Operators
|
||||
|
||||
Install the NATS Operator:
|
||||
|
||||
``` sh
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nats-operator/master/deploy/00-prereqs.yaml
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nats-operator/master/deploy/10-deployment.yaml
|
||||
```
|
||||
|
||||
Install the Prometheus Operator along with its RBAC definition (prometheus-operator service account):
|
||||
Install the Prometheus Operator along with its RBAC definition \(prometheus-operator service account\):
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
@ -285,13 +284,11 @@ spec:
|
||||
|
||||
## Confirm
|
||||
|
||||
```
|
||||
```text
|
||||
kubectl port-forward prometheus-prometheus-0 9090:9090
|
||||
```
|
||||
|
||||
Results:
|
||||
|
||||
<img height="400" width="1000" src="https://user-images.githubusercontent.com/26195/59470419-2066fd80-8e27-11e9-9e3e-250296a091da.png">
|
||||
|
||||
### Results
|
||||
|
||||

|
||||
|
@ -1,12 +1,10 @@
|
||||
# Creating a NATS Streaming cluster in K8S with FT mode
|
||||
# NATS Streaming Cluster with FT Mode
|
||||
|
||||
## Preparation
|
||||
|
||||
First, we need a Kubernetes cluster with a provider that offers a
|
||||
service with a `ReadWriteMany` filesystem available. In this short guide,
|
||||
we will create the cluster on AWS and then use EFS for the filesystem:
|
||||
First, we need a Kubernetes cluster with a provider that offers a service with a `ReadWriteMany` filesystem available. In this short guide, we will create the cluster on AWS and then use EFS for the filesystem:
|
||||
|
||||
```
|
||||
```text
|
||||
# Create a 3-node Kubernetes cluster
|
||||
eksctl create cluster --name nats-eks-cluster \
|
||||
--nodes 3 \
|
||||
@ -17,18 +15,17 @@ eksctl create cluster --name nats-eks-cluster \
|
||||
eksctl utils write-kubeconfig --name nats-eks-cluster --region us-east-2
|
||||
```
|
||||
|
||||
For the FT mode to work, we will need to create an EFS volume which
|
||||
can be shared by more than one pod. Go into the [AWS console](https://us-east-2.console.aws.amazon.com/efs/home?region=us-east-2#/wizard/1) and create one, and make sure that it is in a security group where the k8s nodes will have access to it. In case of clusters created via eksctl, this will be a security group named `ClusterSharedNodeSecurityGroup`:
|
||||
For the FT mode to work, we will need to create an EFS volume which can be shared by more than one pod. Go into the [AWS console](https://us-east-2.console.aws.amazon.com/efs/home?region=us-east-2#/wizard/1) and create one, and make sure that it is in a security group where the k8s nodes will have access to it. In case of clusters created via eksctl, this will be a security group named `ClusterSharedNodeSecurityGroup`:
|
||||
|
||||
<img width="1063" alt="Screen Shot 2019-12-04 at 11 25 08 AM" src="https://user-images.githubusercontent.com/26195/70177488-5ef0bd00-16d2-11ea-9cf3-e0c3196bc7da.png">
|
||||

|
||||
|
||||
<img width="1177" alt="Screen Shot 2019-12-04 at 12 40 13 PM" src="https://user-images.githubusercontent.com/26195/70179769-9497a500-16d6-11ea-9e18-2a8588a71819.png">
|
||||

|
||||
|
||||
### Creating the EFS provisioner
|
||||
|
||||
Confirm the FilesystemID and the DNS name from the cluster; we will use those values to create an EFS provisioner controller within the K8S cluster:
|
||||
|
||||
<img width="852" alt="Screen Shot 2019-12-04 at 12 08 35 PM" src="https://user-images.githubusercontent.com/26195/70177502-657f3480-16d2-11ea-9d00-b9a8c2f5320b.png">
|
||||

|
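If you prefer the CLI over the console, the same information can be looked up with the AWS CLI \(assuming it is configured for the region used above\):

```bash
# List the FileSystemId and lifecycle state of the EFS volumes in the region
aws efs describe-file-systems --region us-east-2 \
  --query "FileSystems[*].[FileSystemId,LifeCycleState]" --output table
```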
||||
|
||||
```yaml
|
||||
---
|
||||
@ -172,7 +169,7 @@ spec:
|
||||
|
||||
Result of deploying the manifest:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
serviceaccount/efs-provisioner created
|
||||
clusterrole.rbac.authorization.k8s.io/efs-provisioner-runner created
|
||||
clusterrolebinding.rbac.authorization.k8s.io/run-efs-provisioner created
|
||||
@ -181,13 +178,12 @@ rolebinding.rbac.authorization.k8s.io/leader-locking-efs-provisioner created
|
||||
configmap/efs-provisioner created
|
||||
deployment.extensions/efs-provisioner created
|
||||
storageclass.storage.k8s.io/aws-efs created
|
||||
persistentvolumeclaim/efs created
|
||||
persistentvolumeclaim/efs created
|
||||
```
|
||||
|
||||
### Setting up the NATS Streaming cluster
|
||||
|
||||
Now create a NATS Streaming cluster with FT mode enabled and using NATS embedded mode
|
||||
that is mounting the EFS volume:
|
||||
Now create a NATS Streaming cluster with FT mode enabled, using NATS embedded mode and mounting the EFS volume:
|
||||
|
||||
```yaml
|
||||
---
|
||||
@ -218,7 +214,7 @@ metadata:
|
||||
data:
|
||||
stan.conf: |
|
||||
http: 8222
|
||||
|
||||
|
||||
cluster {
|
||||
port: 6222
|
||||
routes [
|
||||
@ -342,7 +338,7 @@ spec:
|
||||
|
||||
Your cluster will now look something like this:
|
||||
|
||||
```
|
||||
```text
|
||||
kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
efs-provisioner-6b7866dd4-4k5wx 1/1 Running 0 21m
|
||||
@ -353,7 +349,7 @@ stan-2 2/2 Running 0 4m42s
|
||||
|
||||
If everything was set up properly, one of the servers will be the active node.
|
||||
|
||||
```
|
||||
```text
|
||||
$ kubectl logs stan-0 -c stan
|
||||
[1] 2019/12/04 20:40:40.429359 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.16.2
|
||||
[1] 2019/12/04 20:40:40.429385 [INF] STREAM: ServerID: 7j3t3Ii7e2tifWqanYKwFX
|
||||
@ -388,3 +384,4 @@ $ kubectl logs stan-0 -c stan
|
||||
[1] 2019/12/04 20:40:41.671541 [INF] STREAM: ----------------------------------
|
||||
[1] 2019/12/04 20:40:41.671546 [INF] STREAM: Streaming Server is ready
|
||||
```
|
||||
|
@ -124,7 +124,7 @@ authorization: {
|
||||
| `max_traced_msg_len` | Set a limit to the trace of the payload of a message |
|
||||
| `disable_sublist_cache` | Disable sublist cache globally for accounts. |
|
||||
| [`operator`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Path to an operator JWT |
|
||||
| [`ping_interval`](../../developing-with-nats/intro/pingpong.md) | Interval in seconds in which the server checks if a connection is active |
|
||||
| [`ping_interval`](../../developing-with-nats/intro/pingpong.md) | Interval in seconds in which the server checks if a connection is active |
|
||||
| `port` | Port for client connections |
|
||||
| `reconnect_error_reports` | Number of failed attempts to reconnect a route, gateway or leaf node connection. Default is to report every attempt. |
|
||||
| [`resolver`](../../nats-tools/nsc/nsc.md#nats-server-configuration) | Resolver type `MEMORY` or `URL` for account JWTs |
|
||||
|
@ -9,14 +9,13 @@ Leaf nodes are useful in IoT and edge scenarios and when the local server traffi
|
||||
* Subjects that the user is allowed to publish are exported to the cluster.
|
||||
* Subjects the user is allowed to subscribe to are imported into the leaf node.
|
||||
|
||||
|
||||
> Leaf Nodes are an important component for bridging traffic between local NATS servers you control and servers that are managed by a third party. Synadia's [NATS Global Service \(NGS\)](https://www.synadia.com/) allows accounts to use leaf nodes to gain access to the global network and to inexpensively connect geographically distributed servers or small clusters.
|
||||
|
||||
[LeafNode Configuration Options](leafnode_conf.md)
|
||||
|
||||
## LeafNode Configuration Tutorial
|
||||
|
||||
The main server is just a standard NATS server. Clients to the main cluster are just using token authentication, but any kind of authentication can be used. The server allows leaf node connections at port 7422 (default port):
|
||||
The main server is just a standard NATS server. Clients to the main cluster are just using token authentication, but any kind of authentication can be used. The server allows leaf node connections at port 7422 \(default port\):
|
||||
|
||||
```text
|
||||
leafnodes {
|
||||
@ -28,6 +27,7 @@ authorization {
|
||||
```
|
||||
|
||||
Start the server:
|
||||
|
||||
```bash
|
||||
nats-server -c /tmp/server.conf
|
||||
...
|
||||
@ -35,14 +35,13 @@ nats-server -c /tmp/server.conf
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
We create a replier on the server to listen for requests on 'q', to which it will aptly respond with '42':
|
||||
|
||||
```bash
|
||||
nats-rply -s nats://s3cr3t@localhost q 42
|
||||
```
|
||||
|
||||
|
||||
The leaf node allows local clients to connect through port 4111 and doesn't require any kind of authentication. The configuration specifies where the remote cluster is located and how to connect to it (just a simple token in this case):
|
||||
The leaf node allows local clients to connect through port 4111 and doesn't require any kind of authentication. The configuration specifies where the remote cluster is located and how to connect to it \(just a simple token in this case\):
|
||||
|
||||
```text
|
||||
listen: "127.0.0.1:4111"
|
||||
Check your email, verify it, and specify a credit card; after that:
|
||||
╰───────────────────────────┴──────────────────────────────────────────────────────────╯
|
||||
|
||||
....
|
||||
|
||||
```
|
||||
|
||||
Note the limits on the account specify that the account can have up to 2 leaf node connections. Let's use them:
|
||||
@ -172,7 +170,6 @@ Note the limits on the account, specify that the account can have up-to 2 leaf n
|
||||
|
||||
Let's craft a leaf node connection much like we did earlier:
|
||||
|
||||
|
||||
```text
|
||||
leafnodes {
|
||||
remotes = [
|
||||
@ -210,10 +207,7 @@ Published [q] : ''
|
||||
Received [_INBOX.hgG0zVcVcyr4G5KBwOuyJw.uUYkEyKr] : '42'
|
||||
```
|
||||
|
||||
|
||||
## Leaf Authorization
|
||||
|
||||
In some cases you may want to restrict what messages can be exported from the leaf node or imported from the leaf connection. You can specify restrictions by limiting what the leaf connection client can publish and subscribe to. See [NATS Authorization](../securing_nats/authorization.md) for how you can do this.
|
||||
|
||||
|
||||
|
||||
|
@ -1,38 +1,41 @@
|
||||
# Configuration
|
||||
|
||||
## `leafnodes` Configuration Block
|
||||
|
||||
| Property | Description |
|
||||
| :------ | :---- |
|
||||
| :--- | :--- |
|
||||
| `advertise` | Hostport `<host>:<port>` to advertise to other servers. |
|
||||
| `authorization` | Authorization block. [**See Authorization Block section below**](#authorization-block). |
|
||||
| `authorization` | Authorization block. [**See Authorization Block section below**](leafnode_conf.md#authorization-block). |
|
||||
| `host` | Interface where the server will listen for incoming leafnode connections. |
|
||||
| `listen` | Combines `host` and `port` as `<host>:<port>` |
|
||||
| `no_advertise` | If `true`, the leafnode shouldn't be advertised. |
|
||||
| `port` | Port where the server will listen for incoming leafnode connections (default is 7422). |
|
||||
| `port` | Port where the server will listen for incoming leafnode connections \(default is 7422\). |
|
||||
| `remotes` | List of `remote` entries specifying servers where leafnode client connection can be made. |
|
||||
| `tls` | TLS configuration block (same as other nats-server `tls` configuration). |
|
||||
| `tls` | TLS configuration block \(same as other nats-server `tls` configuration\). |
|
||||
|
||||
## Authorization Block
|
||||
|
||||
| Property | Description |
|
||||
| :------ | :---- |
|
||||
| :--- | :--- |
|
||||
| `user` | Username for the leaf node connection. |
|
||||
| `password` | Password for the user entry. |
|
||||
| `account` | Account this leaf node connection should be bound to. |
|
||||
| `timeout` | Maximum number of seconds to wait for leaf node authentication. |
|
||||
| `users` | List of credentials and account to bind to leaf node connections. [**See User Block section below**](#users-block). |
|
||||
| `users` | List of credentials and account to bind to leaf node connections. [**See User Block section below**](leafnode_conf.md#users-block). |
|
||||
|
||||
### Users Block
|
||||
|
||||
| Property | Description |
|
||||
| :------ | :---- |
|
||||
| :--- | :--- |
|
||||
| `user` | Username for the leaf node connection. |
|
||||
| `password` | Password for the user entry. |
|
||||
| `account` | Account this leaf node connection should be bound to. |
|
||||
|
||||
Here are some examples of using basic user/password authentication for leaf nodes (note that while this uses accounts, it is not using JWTs)
|
||||
Here are some examples of using basic user/password authentication for leaf nodes \(note that while this uses accounts, it is not using JWTs\)
|
||||
|
||||
Singleton mode:
|
||||
```
|
||||
|
||||
```text
|
||||
leafnodes {
|
||||
port: ...
|
||||
authorization {
|
||||
@ -42,10 +45,12 @@ leafnodes {
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
With the above configuration, if a soliciting server creates a leafnode connection with the URL `nats://leaf:secret@host:port`, then the accepting server will bind the leafnode connection to the account "TheAccount". This account needs to exist, otherwise the connection will be rejected.
|
||||
|
||||
Multi-users mode:
|
||||
```
|
||||
|
||||
```text
|
||||
leafnodes {
|
||||
port: ...
|
||||
authorization {
|
||||
@ -56,13 +61,14 @@ leafnodes {
|
||||
}
|
||||
}
|
||||
```
|
||||
With the above, if a server connects using `leaf1:secret@host:port`, then the accepting server will bind the connection to account `account1`.
|
||||
If using the `leaf2` user, then the accepting server will bind the connection to `account2`.
|
||||
|
||||
If username/password (either singleton or multi-users) is defined, then the connecting server MUST provide the proper credentials otherwise the connection will be rejected.
|
||||
With the above, if a server connects using `leaf1:secret@host:port`, then the accepting server will bind the connection to account `account1`. If using the `leaf2` user, then the accepting server will bind the connection to `account2`.
|
||||
|
||||
If username/password \(either singleton or multi-users\) is defined, then the connecting server MUST provide the proper credentials, otherwise the connection will be rejected.
|
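As a sketch of the soliciting side \(host and port here are placeholders\), the credentials are embedded in the remote URL:

```bash
# Write a minimal config for the soliciting server and start it;
# with these credentials it will be bound to account1 on the accepting server
cat > /tmp/leaf.conf <<'EOF'
leafnodes {
  remotes = [
    { url: "nats-leaf://leaf1:secret@host:7422" }
  ]
}
EOF
nats-server -c /tmp/leaf.conf
```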
||||
|
||||
If no username/password is provided, it is still possible to provide the account the connection should be associated with:
|
||||
```
|
||||
|
||||
```text
|
||||
leafnodes {
|
||||
port: ...
|
||||
authorization {
|
||||
@ -70,17 +76,17 @@ leafnodes {
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
With the above, a connection without credentials will be bound to the account "TheAccount".
|
||||
|
||||
If other forms of credentials are used (JWT, nkey or other), then the server will attempt to authenticate and, if successful, associate the connection with the account for that specific user. If the user authentication fails (wrong password, no such user, etc.), the connection will also be rejected.
|
||||
|
||||
If other forms of credentials are used \(JWT, nkey or other\), then the server will attempt to authenticate and, if successful, associate the connection with the account for that specific user. If the user authentication fails \(wrong password, no such user, etc.\), the connection will also be rejected.
|
||||
|
||||
## LeafNode `remotes` Entry Block
|
||||
|
||||
| Property | Description |
|
||||
| :------ | :---- |
|
||||
| `url` | Leafnode URL (URL protocol should be `nats-leaf`). |
|
||||
| `urls` | Leafnode URL array. Supports multiple URLs for discovery, e.g., urls: [ "nats-leaf://host1:7422", "nats-leaf://host2:7422" ]|
|
||||
| :--- | :--- |
|
||||
| `url` | Leafnode URL \(URL protocol should be `nats-leaf`\). |
|
||||
| `urls` | Leafnode URL array. Supports multiple URLs for discovery, e.g., urls: \[ "nats-leaf://host1:7422", "nats-leaf://host2:7422" \] |
|
||||
| `account` | Account public key identifying the leafnode. Account must be defined locally. |
|
||||
| `credentials` | Credential file for connecting to the leafnode server. |
|
||||
| `tls` | A TLS configuration block. Leafnode client will use specified TLS certificates when connecting/authenticating. |
|
||||
@ -88,14 +94,14 @@ If other form of credentials are used (jwt, nkey or other), then the server will
|
||||
## `tls` Configuration Block
|
||||
|
||||
| Property | Description |
|
||||
| :------ | :---- |
|
||||
| :--- | :--- |
|
||||
| `cert_file` | TLS certificate file. |
|
||||
| `key_file` | TLS certificate key file. |
|
||||
| `ca_file` | TLS certificate authority file. |
|
||||
| `insecure` | Skip certificate verification. |
|
||||
| `verify` | If `true`, require and verify client certificates. |
|
||||
| `verify_and_map` | If `true`, require and verify client certificates and map certificate values for authentication purposes. |
|
||||
| `cipher_suites` | When set, only the specified TLS cipher suites will be allowed. Values must match golang version used to build the server. |
|
||||
| `cipher_suites` | When set, only the specified TLS cipher suites will be allowed. Values must match golang version used to build the server. |
|
||||
| `curve_preferences` | List of TLS cipher curves to use in order. |
|
||||
| `timeout` | TLS handshake timeout in fractional seconds. |
|
||||
|
||||
|
@ -68,30 +68,30 @@ authorization {
|
||||
|
||||
Here's another example, where the `allow` and `deny` options are specified:
|
||||
|
||||
```
|
||||
```text
|
||||
authorization: {
|
||||
users = [
|
||||
{
|
||||
user: admin
|
||||
password: secret
|
||||
permissions: {
|
||||
publish: ">"
|
||||
subscribe: ">"
|
||||
}
|
||||
}
|
||||
{
|
||||
user: test
|
||||
password: test
|
||||
permissions: {
|
||||
publish: {
|
||||
deny: ">"
|
||||
},
|
||||
subscribe: {
|
||||
allow: "client.>"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
users = [
|
||||
{
|
||||
user: admin
|
||||
password: secret
|
||||
permissions: {
|
||||
publish: ">"
|
||||
subscribe: ">"
|
||||
}
|
||||
}
|
||||
{
|
||||
user: test
|
||||
password: test
|
||||
permissions: {
|
||||
publish: {
|
||||
deny: ">"
|
||||
},
|
||||
subscribe: {
|
||||
allow: "client.>"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -16,7 +16,7 @@ When detected at the client, the application is notified and messages are droppe
|
||||
|
||||
## Slow consumers identified in the client
|
||||
|
||||
A [client can detect it is a slow consumer](../../developing-with-nats/intro-5/slow.md) on a local connection and notify the application through use of the asynchronous error callback. It is better to catch a slow consumer locally in the client rather than to allow the server to detect this condition. This example demonstrates how to define and register an asynchronous error handler that will handle slow consumer errors.
|
||||
A [client can detect it is a slow consumer](../../developing-with-nats/intro-5/slow.md) on a local connection and notify the application through use of the asynchronous error callback. It is better to catch a slow consumer locally in the client rather than to allow the server to detect this condition. This example demonstrates how to define and register an asynchronous error handler that will handle slow consumer errors.
|
||||
|
||||
```go
|
||||
func natsErrHandler(nc *nats.Conn, sub *nats.Subscription, natsErr error) {
|
||||
@ -66,7 +66,7 @@ Apart from using [NATS streaming](../../nats-streaming-concepts/intro.md) or opt
|
||||
|
||||
**Scaling with queue subscribers**
|
||||
|
||||
This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices - each instance of your microservice will receive a portion of the messages to process, and simply add more instances of your service to scale. No code changes, configuration changes, or downtime whatsoever.
|
||||
This is ideal if you do not rely on message order. Ensure your NATS subscription belongs to a [queue group](../../nats-concepts/queue.md), then scale as required by creating more instances of your service or application. This is a great approach for microservices - each instance of your microservice will receive a portion of the messages to process; simply add more instances of your service to scale. No code changes, configuration changes, or downtime whatsoever.
|
||||
|
||||
**Create a subject namespace that can scale**
|
||||
|
||||
|
@ -38,7 +38,7 @@ nats-sub -s nats://localhost:4222 ">"
|
||||
|
||||
`nats-sub` is a subscriber sample included with all NATS clients. It subscribes to a subject and prints out any messages received. You can find the source code to the Go version of `nats-sub` [here](https://github.com/nats-io/nats.go/tree/master/examples). After starting the subscriber you should see a message on 'A' that a new client connected.
|
||||
|
||||
We have two servers and a client. Time to simulate our rolling upgrade. But wait, before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to provide an additional place where clients can go other than 'A' and ensure we don't end up with a single server serving all the clients after the upgrade procedure. Clients will randomly select a server when connecting unless a special option is provided that disables that functionality \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](../../developing-with-nats/intro-1/random.md). Suffice it to say that clients redistribute themselves about evenly between all servers in the cluster. In our case 1/2 of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
|
||||
We have two servers and a client. Time to simulate our rolling upgrade. But wait, before we upgrade 'A', let's introduce a new server 'C'. Server 'C' will join the existing cluster while we perform the upgrade. Its sole purpose is to provide an additional place where clients can go other than 'A' and ensure we don't end up with a single server serving all the clients after the upgrade procedure. Clients will randomly select a server when connecting unless a special option is provided that disables that functionality \(usually called 'DontRandomize' or 'noRandomize'\). You can read more about ["Avoiding the Thundering Herd"](../../developing-with-nats/intro-1/random.md). Suffice it to say that clients redistribute themselves about evenly between all servers in the cluster. In our case 1/2 of the clients on 'A' will jump over to 'B' and the remaining half to 'C'.
|
||||
|
||||
Let's start our temporary server:
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
# NATS and Docker
|
||||
|
||||
## NATS Server Containerization
|
||||
|
||||
@ -9,32 +10,32 @@ The NATS server is provided as a Docker image on [Docker Hub](https://hub.docker
|
||||
|
||||
To use the Docker container image, install Docker and pull the public image:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
docker pull nats
|
||||
```
|
||||
|
||||
Run the NATS server image:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
docker run nats
|
||||
```
|
||||
|
||||
By default the NATS server exposes multiple ports:
|
||||
|
||||
- 4222 is for clients.
|
||||
- 8222 is an HTTP management port for information reporting.
|
||||
- 6222 is a routing port for clustering.
|
||||
- Use -p or -P to customize.
|
||||
* 4222 is for clients.
|
||||
* 8222 is an HTTP management port for information reporting.
|
||||
* 6222 is a routing port for clustering.
|
||||
* Use -p or -P to customize, as shown in the sketch below.
|
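For example, a minimal sketch of customizing the port mappings, where the host-side ports are arbitrary choices:

```bash
# Publish the client port and the HTTP monitoring port on the host
docker run -p 4222:4222 -p 8222:8222 nats
```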
||||
|
||||
### Creating a NATS Cluster
|
||||
|
||||
First run a server with the ports exposed on a `docker network`:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
$ docker network create nats
|
||||
```
|
||||
|
||||
```sh
|
||||
```bash
|
||||
docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats
|
||||
[INF] Starting nats-server version 2.1.0
|
||||
[INF] Git commit [1cc5ae0]
|
||||
@ -47,7 +48,7 @@ docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats
|
||||
|
||||
Next, start another couple of servers and point them to the seed server to make them form a cluster:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
docker run --name nats-1 --network nats --rm nats --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
|
||||
docker run --name nats-2 --network nats --rm nats --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
|
||||
```
|
||||
@ -56,7 +57,7 @@ docker run --name nats-2 --network nats --rm nats --cluster nats://0.0.0.0:6222
|
||||
|
||||
To verify the routes are connected, you can make a request to the monitoring endpoint on `/routez` as follows and confirm that there are now 2 routes:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
curl http://127.0.0.1:8222/routez
|
||||
{
|
||||
"server_id": "ND34PZ64QLLJKSU5SLSWRS5EUXEKNHW5BUVLCNFWA56R4D7XKDYWJFP7",
|
||||
@ -99,7 +100,7 @@ curl http://127.0.0.1:8222/routez
|
||||
|
||||
### Creating a NATS Cluster with Docker Compose
|
||||
|
||||
It is also straightforward to create a cluster using Docker Compose. Below is a simple example that uses a network named `nats` to create a full mesh cluster.
|
||||
It is also straightforward to create a cluster using Docker Compose. Below is a simple example that uses a network named `nats` to create a full mesh cluster.
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
@ -122,7 +123,7 @@ networks:
|
||||
|
||||
Now we use Docker Compose to create the cluster that will be using the `nats` network:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
$ docker network create nats
|
||||
|
||||
$ docker-compose -f nats-cluster.yaml up
|
||||
@ -146,20 +147,18 @@ nats-1_1 | [1] 2019/10/19 06:41:27.153078 [INF] 172.18.0.4:6222 - rid:3 - Route
|
||||
|
||||
Now, the following should work: make a subscription on one of the nodes and publish a message from another node. You should be able to receive the message without problems.
|
||||
|
||||
```sh
|
||||
```bash
|
||||
$ docker run --network nats --rm -it synadia/nats-box
|
||||
~ # nats-sub -s nats://nats:4222 hello &
|
||||
Listening on [hello]
|
||||
|
||||
~ # nats-pub -s "nats://nats-1:4222" hello first
|
||||
~ # nats-pub -s "nats://nats-2:4222" hello second
|
||||
[#1] Received on [hello]: 'first'
|
||||
[#2] Received on [hello]: 'second'
|
||||
```
|
||||
|
||||
Also, stopping the seed node to which the subscription was made should trigger an automatic failover to the other nodes:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
$ docker stop nats
|
||||
|
||||
...
|
||||
@ -169,7 +168,7 @@ Reconnected [nats://172.17.0.4:4222]
|
||||
|
||||
Publishing again will continue to work after the reconnection:
|
||||
|
||||
```sh
|
||||
```bash
|
||||
~ # nats-pub -s "nats://nats-1:4222" hello again
|
||||
~ # nats-pub -s "nats://nats-2:4222" hello again
|
||||
```
|
||||
@ -177,3 +176,4 @@ Publishing again will continue to work after the reconnection:
|
||||
## Tutorial
|
||||
|
||||
See the [NATS Docker tutorial](nats-docker-tutorial.md) for more instructions on using the NATS server Docker image.
|
||||
|
||||
|
@ -4,7 +4,7 @@ NATS Streaming provides a rich set of commands and parameters to configure all a
|
||||
|
||||
* [Command Line Arguments](cmdline.md)
|
||||
* [Configuration File](cfgfile.md)
|
||||
* [Store Limits](storelimits.md/)
|
||||
* [Store Limits](storelimits.md)
|
||||
* [Persistence](persistence/)
|
||||
* [Securing](tls/)
|
||||
* [Securing](tls.md)
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Configuration file
|
||||
# Configuration File
|
||||
|
||||
You can use a configuration file to configure the options specific to the NATS Streaming Server.
|
||||
|
||||
@ -73,7 +73,7 @@ In general the configuration parameters are the same as the command line argumen
|
||||
| encrypt | Specify if server should encrypt messages \(only the payload\) when storing them | `true` or `false` | `encrypt: true` |
|
||||
| encryption\_cipher | Cipher to use for encryption. Currently support AES and CHAHA \(ChaChaPoly\). Defaults to AES | `AES` or `CHACHA` | `encryption_cipher: "AES"` |
|
||||
| encryption\_key | Encryption key. It is recommended to specify the key through the `NATS_STREAMING_ENCRYPTION_KEY` environment variable instead | String | `encryption_key: "mykey"` |
|
||||
| credentials | Credentials file to connect to external NATS 2.0+ Server | String | `credentials: "streaming_server.creds"` |
|
||||
| credentials | Credentials file to connect to external NATS 2.0+ Server | String | `credentials: "streaming_server.creds"` |
|
||||
|
||||
## TLS Configuration
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Command line arguments
|
||||
# Command Line Arguments
|
||||
|
||||
The NATS Streaming Server accepts command line arguments to control its behavior. There is a set of parameters specific to the NATS Streaming Server and some to the embedded NATS Server.
|
||||
|
||||
|
@ -134,6 +134,7 @@ Below is what would be displayed with the above store limits configuration. Noti
|
||||
|
||||
Suppose you have set a channel limit to hold at most 100 messages, and the channel currently holds 72 messages. The server is stopped and the message limit for this channel is lowered to 50 messages, then the server is restarted.
|
||||
|
||||
On startup, the server will apply the store limits, which means that this channel will now hold the maximum number of messages (50) and the 22 oldest messages will be removed due to the channel limit.
|
||||
On startup, the server will apply the store limits, which means that this channel will now hold the maximum number of messages \(50\) and the 22 oldest messages will be removed due to the channel limit.
|
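As a sketch of how such a limit might be applied \(flag names per `nats-streaming-server --help`; the store path is arbitrary\):

```bash
# Restart the server with the lowered per-channel limit of 50 messages
nats-streaming-server -store file -dir datastore -max_msgs 50
```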
||||
|
||||
We strongly recommend not raising the limit back to the higher limit if messages have been removed in the previous step because those removed messages may or may not become available again depending on the store implementation or if running in clustering mode or not.
|
||||
|
||||
|
@ -32,7 +32,7 @@ The embedded NATS server specifies TLS server certificates with these:
|
||||
--tlscacert <file> Client certificate CA for verification
|
||||
```
|
||||
|
||||
The server parameters are used the same way you'd [secure a typical NATS server](../../../nats-server/configuration/securing_nats/tls.md).
|
||||
The server parameters are used the same way you'd [secure a typical NATS server](../../nats-server/configuration/securing_nats/tls.md).
|
||||
|
||||
Proper usage of the NATS Streaming Server requires the use of both client and server parameters.
|
||||
|
||||
@ -42,7 +42,7 @@ For example:
|
||||
% nats-streaming-server -tls_client_cert client-cert.pem -tls_client_key client-key.pem -tls_client_cacert ca.pem -tlscert server-cert.pem -tlskey server-key.pem -tlscacert ca.pem
|
||||
```
|
||||
|
||||
Further TLS related functionality can be found in [Securing NATS > TLS](../../../nats-server/configuration/securing_nats/tls.md). Note that if specifying cipher suites is required, a configuration file for the embedded NATS server can be passed through the `-config` command line parameter.
|
||||
Further TLS related functionality can be found in [Securing NATS > TLS](../../nats-server/configuration/securing_nats/tls.md). Note that if specifying cipher suites is required, a configuration file for the embedded NATS server can be passed through the `-config` command line parameter.
|
||||
|
||||
### Connecting to Remote NATS Server with TLS Enabled
|
||||
|
@ -1,11 +0,0 @@
|
||||
## NATS Tools
|
||||
|
||||
The NATS Ecosystem has many tools to support server configuration, enhance monitoring or tune performance:
|
||||
|
||||
- [mkpasswd](mkpasswd.md) - Generates or bcrypts passwords
|
||||
- [nk](nk.md) - Generate NKeys
|
||||
- [nsc](nsc/README.md) - Configure Operators, Accounts and Users
|
||||
- [nats account server](nas/README.md) - Serve Account JWTs
|
||||
- [nats top](nats_top/README.md) - Monitor NATS Server
|
||||
- [nats-bench](natsbench.md) - Benchmark NATS Server
|
||||
- [prometheus-nats-exporter](https://github.com/nats-io/prometheus-nats-exporter) - Export NATS server metrics to [Prometheus](https://prometheus.io/) and a [Grafana](https://grafana.com) dashboard.
|
12
nats-tools/nats-tools.md
Normal file
12
nats-tools/nats-tools.md
Normal file
@ -0,0 +1,12 @@
|
||||
# Introduction
|
||||
|
||||
The NATS Ecosystem has many tools to support server configuration, enhance monitoring or tune performance:
|
||||
|
||||
* [mkpasswd](mkpasswd.md) - Generates or bcrypts passwords
|
||||
* [nk](nk.md) - Generate NKeys
|
||||
* [nsc](nsc/) - Configure Operators, Accounts and Users
|
||||
* [nats account server](nas/) - Serve Account JWTs
|
||||
* [nats top](nats_top/) - Monitor NATS Server
|
||||
* [nats-bench](natsbench.md) - Benchmark NATS Server
|
||||
* [prometheus-nats-exporter](https://github.com/nats-io/prometheus-nats-exporter) - Export NATS server metrics to [Prometheus](https://prometheus.io/) and a [Grafana](https://grafana.com) dashboard.
|
||||
|
@ -1,15 +1,15 @@
|
||||
# NATS Account Configuration
|
||||
# nsc
|
||||
|
||||
NATS account configurations are built using the `nsc` tool. The NSC tool allows you to:
|
||||
|
||||
- Create and edit Operators, Accounts, Users
|
||||
- Manage publish and subscribe permissions for Users
|
||||
- Define Service and Stream exports from an account
|
||||
- Reference Service and Streams from another account
|
||||
- Generate Activation tokens that grant access to a private service or stream
|
||||
- Generate User credential files
|
||||
- Describe Operators, Accounts, Users, and Activations
|
||||
- Push and pull account JWTs to an account JWTs server
|
||||
* Create and edit Operators, Accounts, Users
|
||||
* Manage publish and subscribe permissions for Users
|
||||
* Define Service and Stream exports from an account
|
||||
* Reference Service and Streams from another account
|
||||
* Generate Activation tokens that grant access to a private service or stream
|
||||
* Generate User credential files
|
||||
* Describe Operators, Accounts, Users, and Activations
|
||||
* Push and pull account JWTs to an account JWTs server
|
||||
|
||||
## Installation
|
||||
|
||||
@ -19,26 +19,26 @@ Installing `nsc` is easy:
|
||||
curl -L https://raw.githubusercontent.com/nats-io/nsc/master/install.py | python
|
||||
```
|
||||
|
||||
The script will download the latest version of `nsc` and install it into your system.
|
||||
The script will download the latest version of `nsc` and install it into your system.
|
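A quick way to verify the installation \(output will vary by release\):

```bash
# Confirm the binary is on the PATH and report its version
nsc --version
```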
||||
|
||||
## Tutorials
|
||||
|
||||
You can find various task-oriented tutorials for working with the tool here:
|
||||
|
||||
- [Basic Usage](nsc.md)
|
||||
- [Configuring Streams](streams.md)
|
||||
- [Configuring Services](services.md)
|
||||
- [Signing Keys](signing_keys.md)
|
||||
- [Revoking Users or Activations](revocation.md)
|
||||
- [Working with Managed Operators](managed.md)
|
||||
* [Basic Usage](nsc.md)
|
||||
* [Configuring Streams](streams.md)
|
||||
* [Configuring Services](services.md)
|
||||
* [Signing Keys](signing_keys.md)
|
||||
* [Revoking Users or Activations](revocation.md)
|
||||
* [Working with Managed Operators](managed.md)
|
||||
|
||||
## Tool Documentation
|
||||
|
||||
For more specific browsing of the tool syntax, check out the `nsc` tool documentation.
|
||||
It can be found within the tool itself:
|
||||
For more specific browsing of the tool syntax, check out the `nsc` tool documentation. It can be found within the tool itself:
|
||||
|
||||
```text
|
||||
> nsc help
|
||||
```
|
||||
|
||||
Or an online version [here](https://nats-io.github.io/nsc).
|
||||
Or an online version [here](https://nats-io.github.io/nsc).
|
||||
|
||||
|
@ -1,30 +1,28 @@
|
||||
# NSC
|
||||
# Basics
|
||||
|
||||
NSC allows you to manage identities. Identities take the form of _nkeys_. Nkeys are a public-key signature system based on Ed25519 for the NATS ecosystem.
|
||||
|
||||
The nkey identities are associated with NATS configuration in the form of a JSON Web Token (JWT). The JWT is digitally signed by the private key of an issuer forming a chain of trust. The `nsc` tool creates and manages these identities and allows you to deploy them to a JWT account server, which in turn makes the configurations available to nats-servers.
|
||||
The nkey identities are associated with NATS configuration in the form of a JSON Web Token \(JWT\). The JWT is digitally signed by the private key of an issuer forming a chain of trust. The `nsc` tool creates and manages these identities and allows you to deploy them to a JWT account server, which in turn makes the configurations available to nats-servers.
|
||||
|
||||
There’s a logical hierarchy to the entities:
|
||||
|
||||
- `Operators` are responsible for running nats-servers, and issuing account JWTs. Operators set the limits on what an account can do, such as the number of connections, data limits, etc.
|
||||
|
||||
- `Accounts` are responsible for issuing user JWTs. An account defines streams and services that can be exported to other accounts. Likewise, they import streams and services from other accounts.
|
||||
|
||||
- `Users` are issued by an account, and encode limits regarding usage and authorization over the account's subject space.
|
||||
* `Operators` are responsible for running nats-servers, and issuing account JWTs. Operators set the limits on what an account can do, such as the number of connections, data limits, etc.
|
||||
* `Accounts` are responsible for issuing user JWTs. An account defines streams and services that can be exported to other accounts. Likewise, they import streams and services from other accounts.
|
||||
* `Users` are issued by an account, and encode limits regarding usage and authorization over the account's subject space.
|
||||
|
||||
NSC allows you to create, edit, and delete these entities, and will be central to all account-based configuration.
|
||||
|
||||
In this guide, you’ll run end-to-end on some of the configuration scenarios:
|
||||
|
||||
- Generate NKey identities and their associated JWTs
|
||||
- Make JWTs accessible to a nats-server
|
||||
- Configure a nats-server to use JWTs
|
||||
* Generate NKey identities and their associated JWTs
|
||||
* Make JWTs accessible to a nats-server
|
||||
* Configure a nats-server to use JWTs
|
||||
|
||||
Let’s run through the process of creating some identities and JWTs and work through the process.
|
||||
|
||||
## Creating an Operator, Account and User
|
||||
|
||||
Let’s create an operator called `O` (oh):
|
||||
Let’s create an operator called `O` \(oh\):
|
||||
|
||||
```bash
|
||||
> nsc add operator O
|
||||
@ -42,7 +40,6 @@ Lets add a service URL to the operator. Service URLs specify where the nats-serv
|
||||
[ OK ] edited operator "O"
|
||||
```
|
||||
|
||||
|
||||
Creating an account is just as easy:
|
||||
|
||||
```bash
|
||||
@ -64,14 +61,13 @@ Finally, let's create a user:
|
||||
|
||||
As expected, the tool generated an NKEY representing the user, and stored the private key safely in the keystore. In addition, the tool generated a _credentials_ file. A credentials file contains the JWT for the user and the private key for the user. Credential files are used by NATS clients to identify themselves to the system. The client will extract and present the JWT to the nats-server and use the private key to verify its identity.
|
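As a sketch \(the sample clients bundled with the NATS libraries typically accept a `-creds` flag; the server URL here is a placeholder\):

```bash
# Connect using the generated credentials file and subscribe to everything
nats-sub -s nats://localhost:4222 -creds ~/.nkeys/creds/O/A/U.creds ">"
```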
||||
|
||||
|
||||
### NSC Assets
|
||||
|
||||
NSC manages three different directories:
|
||||
|
||||
- The nsc home directory which stores nsc related data. By default nsc home lives in `~/.nsc` and can be changed via the `$NSC_HOME` environment variable.
|
||||
- An _nkeys_ directory, which stores all the private keys. This directory by default lives in `~/.nkeys` and can be changed via the `$NKEYS_PATH` environment variable. The contents of the nkeys directory should be treated as secrets.
|
||||
- A _stores_ directory, which contains JWTs representing the various entities. This directory lives in `$NSC_HOME/nats`, and can be changed using the command `nsc env -s <dir>`. The stores directory can be stored under revision control. The JWTs themselves do not contain any secrets.
|
||||
* The nsc home directory which stores nsc related data. By default nsc home lives in `~/.nsc` and can be changed via the `$NSC_HOME` environment variable.
|
||||
* An _nkeys_ directory, which stores all the private keys. This directory by default lives in `~/.nkeys` and can be changed via the `$NKEYS_PATH` environment variable. The contents of the nkeys directory should be treated as secrets.
|
||||
* A _stores_ directory, which contains JWTs representing the various entities. This directory lives in `$NSC_HOME/nats`, and can be changed using the command `nsc env -s <dir>`. The stores directory can be stored under revision control. The JWTs themselves do not contain any secrets.
|
||||
|
||||
#### The NSC Stores Directory
|
||||
|
||||
@ -97,26 +93,27 @@ The nkeys directory contains all the private keys and credential files. As menti
|
||||
|
||||
The structure keys directory is machine friendly. All keys are sharded by their kind `O` for operators, `A` for accounts, `U` for users. These prefixes are also part of the public key. The second and third letters in the public key are used to create directories where other like-named keys are stored.
|
||||
|
||||
```text
tree ~/.nkeys
/Users/aricart/.nkeys
├── creds
│   └── O
│       └── A
│           └── U.creds
└── keys
    ├── A
    │   └── DE
    │       └── ADETPT36WBIBUKM3IBCVM4A5YUSDXFEJPW4M6GGVBYCBW7RRNFTV5NGE.nk
    ├── O
    │   └── AF
    │       └── OAFEEYZSYYVI4FXLRXJTMM32PQEI3RGOWZJT7Y3YFM4HB7ACPE4RTJPG.nk
    └── U
        └── DB
            └── UDBD5FNQPSLIO6CDMIS5D4EBNFKYWVDNULQTFTUZJXWFNYLGFF52VZN7.nk
```
The `nk` files themselves are named after the complete public key, and contain a single string: the private key in question:

```bash
cat ~/.nkeys/keys/U/DB/UDBD5FNQPSLIO6CDMIS5D4EBNFKYWVDNULQTFTUZJXWFNYLGFF52VZN7.nk
SUAG35IAY2EF5DOZRV6MUSOFDGJ6O2BQCZHSRPLIK6J3GVCX366BFAYSNA
```

The private keys are encoded into a string, and always begin with an `S` for _seed_. The second letter encodes the kind of key: `O` for operators, `A` for accounts, `U` for users.
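To see these prefixes in action, you can mint a throwaway user nkey; the printed seed will start with `SU` and the public key with `U` (generated values will differ on your machine):

```bash
# generate a standalone user nkey; add --store to file it under ~/.nkeys
nsc generate nkey --user
```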
In addition to containing keys, the nkeys directory contains a `creds` directory. This directory is organized in a way that is friendly to humans: it stores user credential files, or `creds` files for short. A credentials file contains a copy of the user JWT and the private key for the user. These files are used by NATS clients to connect to a NATS server:

```bash
> cat ~/.nkeys/creds/O/A/U.creds
-----BEGIN NATS USER JWT-----
...
```

You can list the keys known to `nsc` with `nsc list keys`. The different entity names are listed along with their public key, and whether the key is stored.
In some cases you may want to view the private keys:

```text
> nsc list keys --show-seeds
╭───────────────────────────────────────────────────────────────────────────────────╮
│                                     Seeds Keys                                     │
...
[ERR] error reading seed
```
If you don't have the seed (perhaps you don't control the operator), nsc will decorate the row with a `!`. If you have more than one account, you can show them all by specifying the `--all` flag.
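Both flags compose, so a quick way to audit every key under the current operator is:

```bash
# list each entity's keys and seeds across all accounts
nsc list keys --all --show-seeds
```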
## The Operator JWT

You can view a human-readable version of the JWT by using `nsc`:

```text
> nsc describe operator
...
```

Since the operator JWT is just a JWT, you can use other tools, such as jwt.io, to decode it and inspect its contents. All JWTs have a header, payload, and signature. The header looks like this:

```json
{
  "typ": "jwt",
  "alg": "ed25519"
}
```
All NATS JWTs will use the `ed25519` algorithm for the signature. The payload will list different things; on our basically empty operator, we will only have standard JWT `claim` fields:

* `jti` - a JWT id
* `iat` - the timestamp when the JWT was issued, in UNIX time
* `iss` - the issuer of the JWT, in this case the operator's public key
* `sub` - the subject or identity represented by the JWT, in this case the same operator
* `type` - since this is an operator JWT, `operator` is the type

Specific to NATS is the `nats` object, which is where NATS-specific configuration is added to the JWT claims.
The user id is the public key for the user; the issuer is the account.

When a user connects to a nats-server, it presents its user JWT and signs a nonce using its private key. The server verifies that the user is who they say they are by validating that the nonce was signed using the private key associated with the public key representing the identity of the user. Next, the server fetches the issuer account and validates that the account was issued by a trusted operator, completing the chain of trust verification.

Let’s put all of this together, and create a simple server configuration that accepts sessions from `U`.
## Account Server Configuration

The account server has options to enable you to use an nsc directory directly. Let's start one:

```bash
> nats-account-server -nsc ~/.nsc/nats/O
```

Above, we pointed the account server to our nsc data directory (more specifically, to the `O` operator that we created earlier). By default, the server listens on localhost at port 9090.

You can also run the account server with a data directory that is not your nsc folder. In this mode, you can upload account JWTs to the server. See the help for `nsc push` for more information about how to push JWTs to the account server.
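With the account server running, the nats-server needs to know which operator to trust and where to resolve account JWTs. A minimal configuration sketch, assuming the stores layout and default account server port described above:

```text
# server.conf: a sketch; adjust the operator JWT path for your machine
operator: /Users/aricart/.nsc/nats/O/O.jwt
resolver: URL(http://localhost:9090/jwt/v1/accounts/)
```

You would then start the server with `nats-server -c server.conf`.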
After connecting clients with the generated credentials, the publisher shows:

```text
Published [hello] : 'NATS'
```

Subscriber shows:

```text
[#1] Received on [hello]: 'NATS'
```
### NSC Embeds NATS tooling

To make it easier to work, you can use the NATS clients built right into NSC. These tools know how to find the credential files in the keyring. For convenience, the tools are aliased to `sub`, `pub`, `req`, `reply`:
```bash
nsc sub --user U ">"
...

nsc pub --user U hello NATS
...
```
See `nsc tool -h` for more detailed information.
## User Authorization

User authorization, as expected, also works with JWT authentication. With `nsc` you can specify authorization for specific subjects to which the user can or cannot publish or subscribe. By default a user doesn't have any limits on the subjects that it can publish or subscribe to: any message stream or message published in the account is subscribable by the user, and the user can publish to any subject or imported service. Note that authorization, if configured, must be specified on a per-user basis.
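As an illustration, permissions are edited onto an existing user with flags along these lines (the subject names are hypothetical):

```bash
# let U publish requests on q and receive replies on its inboxes
nsc edit user U --allow-pub "q" --allow-sub "_INBOX.>"
```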
Similarly, we can limit a client. The client has the opposite permissions of the service: it can publish on the request subject `q`, and receive replies on an inbox.
## The NSC Environment

As your projects become more involved, you may work with one or more accounts. NSC tracks your current operator and account. If you are not in a directory containing an operator, account or user, it will use the last operator/account context.

To view your current environment:

```text
> nsc env
...
```
If you have multiple accounts, you can use `nsc env --account <account name>` to set the account as the current default. If you have defined `NKEYS_PATH` or `NSC_HOME` in the environment, you'll also see their current effective values. Finally, if you want to set the stores directory to anything other than the default, you can do `nsc env --store <dir containing an operator>`. When working with multiple accounts, it can also help to use multiple terminals, each in a directory for a different account.
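For example (the stores directory shown is hypothetical):

```bash
# make account A the default for subsequent commands
nsc env --account A

# point nsc at a different stores directory
nsc env --store ~/ops/nats/stores
```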
To review the service export, you can run `nsc describe account` (output elided here).

## Importing a Service

Importing a service enables you to send requests to the remote _Account_. To import a Service, you have to create an _Import_. To create an import you need to know (see the sketch after this list):

* The exporting account’s public key
* The subject the service is listening on
* You can map the service’s subject to a different subject
* Self-imports are not valid; you can only import services from other accounts.
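A sketch of the corresponding command follows; the key shown is a placeholder for the exporting account's public key, and the `--service` flag assumes a reasonably recent `nsc` release:

```bash
# import the exported "help" service into account B
nsc add import --account B --src-account <public key of account A> --remote-subject help --service
```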
To learn how to inspect a JWT from an account server, [check this article](../nas/inspecting_jwts.md).
Let's also add a user to make requests from the service:

```bash
> nsc add user --account B b
[ OK ] added user "b" to account "B"
```
### Testing the Service

To test the service, we can install the `nats-req` and `nats-rply` tools. Set up a process to handle the request; this process will run from account 'A' using user 'U':

```text
> go get github.com/nats-io/nats.go/examples/nats-rply

> nsc reply --account A --user U help "I will help"
...
```
Send the request:

```text
> go get github.com/nats-io/nats.go/examples/nats-req
> nats-req -creds ~/.nkeys/creds/O/B/b.creds help me
Published [help] : 'me'
```
The service receives the request:

```text
Received on [help]: 'me'
```
And the response is received by the requestor:

```text
Received [_INBOX.v6KAX0v1bu87k49hbg3dgn.StIGJF0D] : 'I will help'
```
## Securing Services

As before, we declared an export, but this time we added the `--private` flag. The export list now shows the service as not public:

```text
...
│ help           │ Service │ help           │ Yes    │ 0           │ -        │
│ private.help.* │ Service │ private.help.* │ No     │ 0           │ -        │
╰────────────────┴─────────┴────────────────┴────────┴─────────────┴──────────╯
```
### Generating an Activation Token

For the foreign account to _import_ a private service and be able to send requests, you have to generate an activation token. In addition to granting permission to the account, the activation token also allows you to subset the service’s subject.

To generate a token, you’ll need to know the public key of the account importing the service. We can easily find the public key for account B by running:
```bash
> nsc list keys --account B
...
╰────────┴──────────────────────────────────────────────────────────┴─────────────┴────────╯
```
```text
> nsc generate activation --account A --target-account AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H --subject private.help.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H -o /tmp/activation.jwt
[ OK ] generated "private.help.*" activation for account "AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H"
[ OK ] wrote account description to "/tmp/activation.jwt"
```
The command took the account that has the export ('A'), the public key of account B, the subject where requests from account B will be handled, and an output file where the token can be stored. The subject for the export allows the service to handle all requests coming in on `private.help.*`, but account B can only request from a specific subject.
For completeness, the JWT file contains the encoded activation token (elided here). When decoded it looks like this:

```text
...
╰─────────────────┴───────────────────────────────────────────────────────────────────────╯
```
The token can be shared directly with the client account.
> If you manage many tokens for many accounts, you may want to host activation tokens on a web server and share the URL with the account. The benefit to the hosted approach is that any updates to the token would be available to the importing account whenever their account is updated, provided the URL you host them in is stable. When using a JWT account server, the tokens can be stored right on the server and shared by a URL that is printed when the token is generated.
## Importing a Private Service
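The import itself can be created straight from the token file. A sketch, reusing the path from the step above:

```bash
# import the private service into account B using the activation token
nsc add import --account B --token /tmp/activation.jwt
```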
Testing a private service is no different than a public one:

```bash
> nsc reply --account A --user U "private.help.*" "help is here"
listening on [private.help.*]
[#1] received on [private.help.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H]: 'help_me'

> nsc req --account B --user b private.help help_me
published request: [private.help] : 'help_me'
received reply: [_INBOX.3MhS0iCHfqO8wUl1x59bHB.jpE2jvEj] : 'help is here'
```
## Signing Keys

As previously discussed, NKEYs are identities, and if someone gets a hold of an account or user nkey, they can act as you. NATS has strategies to let you deal with scenarios where your private keys escape into the wild.

The first and most important line of defense is _Signing Keys_. _Signing Keys_ allow you to have multiple NKEY identities of the same kind (Operator or Account) that have the same degree of trust as the standard _Issuer_ nkey.
The concept behind the signing key is that you can issue a JWT for an operator or an account that lists multiple nkeys. Typically the issuer will match the _Subject_ of the entity issuing the JWT. With signing keys, a JWT is considered valid if it is signed by the _Subject_ of the _Issuer_ or one of its signing keys. This enables guarding the private key of the Operator or Account more closely while allowing _Accounts_, _Users_ or _Activation Tokens_ to be signed using alternate private keys.
If an issue should arise where somehow a signing key escapes into the wild, you would remove the compromised signing key from the entity, add a new one, and reissue the entity. When a JWT is validated, if the signing key is missing, the operation is rejected. You are also on the hook to re-issue all JWTs (accounts, users, activation tokens) that were signed with the compromised signing key.
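A sketch of such a rotation (both key values are placeholders):

```bash
# drop the compromised signing key from the operator and register a fresh one
nsc edit operator --rm-sk <compromised signing key>
nsc edit operator --sk <new signing key>
```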
This is effectively a large hammer. You can mitigate the process a bit by having a larger number of signing keys and rotating them to get a distribution you can easily handle in case of a compromise. In a future release, we’ll have a revocation process where you can invalidate a single JWT by its unique JWT ID (JTI). For now, the sledgehammer is what you have.
With a greater security process comes greater complexity. With that said, `nsc` doesn’t track public or private signing keys; these are identities whose use is presumed to be manual. That means that you, the user, will have to track and manage your private keys more closely.
Let’s get a feel for the workflow. We are going to:

* Create an operator with a signing key
* Create an account with a signing key
* The account will be signed using the operator’s signing key
* Create a user with the account’s signing key
All signing key operations revolve around the global `nsc` flag `-K` or `--private-key`. Whenever you want to modify an entity, you have to supply the parent key so that the JWT is signed. Normally this happens automatically, but in the case of signing keys you’ll have to supply the flag by hand.
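For instance, to sign a new account with the operator's signing key rather than the operator's identity key, you would pass the stored seed file to `-K` (the path matches the keystore layout shown in the next step):

```bash
# sign account A with the operator signing key instead of the identity key
nsc add account A -K ~/.nkeys/keys/O/AZ/OAZBRNE7DQGDYT5CSAGWDMI5ENGKOEJ57BXVU6WUTHFEAO3CU5GLQYF5.nk
```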
First, generate an operator signing key and store it in the keystore with `nsc generate nkey --operator --store`. The tool prints the generated seed and public key, and reports where the key was filed: `operator key stored ~/.nkeys/keys/O/AZ/OAZBRNE7DQGDYT5CSAGWDMI5ENGKOEJ57BXVU6WUTHFEAO3CU5GLQYF5.nk`.
> In a production environment, private keys should be saved to a file and always referenced from the secured file.
Now we are going to edit the operator, adding a signing key with the `--sk` flag and providing the generated operator public key (the one starting with `O`):
```text
> nsc edit operator --sk OAZBRNE7DQGDYT5CSAGWDMI5ENGKOEJ57BXVU6WUTHFEAO3CU5GLQYF5
[ OK ] edited operator "O"
```

Let’s add the signing key to the account, and remember to sign the account with the operator’s signing key (placeholders below stand in for the actual key values):

```text
> nsc edit account --sk <account signing key> -K <operator signing key>
...
╰───────────────────────────┴──────────────────────────────────────────────────────────╯
```
We can see that the signing key `ADUQTJD4TF4O6LTTHCKDKSHKGBN2NECCHHMWFREPKNO6MPA7ZETFEEF7` was added to the account. Also, the issuer is the operator signing key (specified by the `-K` flag).
Now let’s create a user, signing it with the account signing key starting with `ABHYL27UAHHQ` (the command below uses a placeholder for the full key):

```text
> nsc add user U -K <account signing key>
...
```
As expected, the issuer is now the signing key we generated earlier. To map the user to the actual account, an `Issuer Account` field was added to the JWT that identifies the public key of account _A_.
Messages this account publishes on `a.b.c.>` will be forwarded to all accounts that import the stream.

## Importing a Stream

Importing a stream enables you to receive messages that are published by a different _Account_. To import a Stream, you have to create an _Import_. To create an _Import_ you need to know (see the sketch after this list):

* The exporting account’s public key
* The subject where the stream is published
* You can map the stream’s subject to a different subject
* Self-imports are not valid; you can only import streams from other accounts.
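A command-line sketch (the source account key is a placeholder), using the `--local-subject` prefix described in the note further below:

```bash
# import the public stream into account B, prefixing it locally under abc.>
nsc add import --account B --src-account <public key of account A> --remote-subject "a.b.c.>" --local-subject abc
```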
To learn how to inspect a JWT from an account server, [check this article](../nas/inspecting_jwts.md).
With the required information, we can add an import to the public stream:

```text
...
[ OK ] added stream import "a.b.c.>"
```
> Notice that messages published by the remote account will be received on the same subject as they were originally published on. Sometimes you would like to prefix messages received from a stream. To add a prefix, specify `--local-subject`. Subscribers in our account can listen on `abc.>`; for example, if `--local-subject abc`, the message will be received as `abc.a.b.c.>`.
And verifying it with `nsc describe account` shows the new stream import (output elided here).

Let’s also add a user, `b` in account `B`, to subscribe to the stream.

### Testing the Stream
```bash
> nsc sub --account B --user b "a.b.c.>"
Listening on [a.b.c.>]
...
> nsc pub --account A --user U a.b.c.hello world
Published [a.b.c.hello] : "world"
...
[#1] received on [a.b.c.hello]: 'world'
```
## Securing Streams

Similarly, we defined an export, but this time we added the `--private` flag. The export list shows the stream as not public:

```text
...
╰───────────────┴────────┴───────────────┴────────┴─────────────┴──────────╯
```
### Generating an Activation Token

For a foreign account to _import_ a private stream, you have to generate an activation token. In addition to granting permissions to the account, the activation token also allows you to subset the exported stream’s subject.

To generate a token, you’ll need to know the public key of the account importing the stream. We can easily find the public key for account B by running:
```bash
> nsc list keys --account B
...

> nsc generate activation --account A --target-account AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H --subject private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H -o /tmp/activation.jwt
...
[ OK ] wrote account description to "/tmp/activation.jwt"
```
The command took the account that has the export ('A'), the public key of account B, and the subject where the stream will publish to account B.
For completeness, the JWT file contains the encoded activation token (elided here). When decoded it looks like this:

```text
...
╰─────────────────┴──────────────────────────────────────────────────────────────────────╯
```
The token can be shared directly with the client account.
> If you manage many tokens for many accounts, you may want to host activation tokens on a web server and share the URL with the account. The benefit to the hosted approach is that any updates to the token would be available to the importing account whenever their account is updated, provided the URL you host them in is stable.
## Importing a Private Stream

Importing a private stream is more natural than a public one, as the activation token already carries all of the details required. Testing a private stream is no different than a public one:
```bash
> nsc sub --account B --user b private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H
Listening on [private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H]
...
> nsc pub --account A --user U private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H hello
Published [private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H] : "hello"
...
[#1] received on [private.abc.AAM46E3YF5WOZSE5WNYWHN3YYISVZOSI6XHTF2Q64ECPXSFQZROJMP2H]: 'hello'
```