Commit Graph

2111 Commits

Author SHA1 Message Date
Ivan Kozlovic
b048b6b3de Merge pull request #1754 from nats-io/mqtt
[ADDED] MQTT Support
2020-12-07 09:06:12 -07:00
Ivan Kozlovic
1d7c4712a5 Increase Pub performance
Essentially makes publish a zero-alloc operation. Use c.mqtt.pp as the
parser's publish packet structure. Messages were initially copied because
MQTT messages don't carry a CR_LF, and we were adding one so that the
same buffer worked for both NATS pub/subs and MQTT pub/subs.
Now an MQTT producer sending to a NATS sub will queue the CR_LF after
the payload instead.

Here is the result of benchcmp for MQTT pub runs only:
```
benchmark                                     old ns/op     new ns/op     delta
BenchmarkMQTT_QoS0_Pub_______0b_Payload-8     157           55.6          -64.59%
BenchmarkMQTT_QoS0_Pub_______8b_Payload-8     167           61.0          -63.47%
BenchmarkMQTT_QoS0_Pub______32b_Payload-8     181           65.3          -63.92%
BenchmarkMQTT_QoS0_Pub_____128b_Payload-8     243           78.1          -67.86%
BenchmarkMQTT_QoS0_Pub_____256b_Payload-8     298           95.0          -68.12%
BenchmarkMQTT_QoS0_Pub_______1K_Payload-8     604           224           -62.91%
BenchmarkMQTT_QoS1_Pub_______0b_Payload-8     1713          1314          -23.29%
BenchmarkMQTT_QoS1_Pub_______8b_Payload-8     1703          1311          -23.02%
BenchmarkMQTT_QoS1_Pub______32b_Payload-8     1722          1345          -21.89%
BenchmarkMQTT_QoS1_Pub_____128b_Payload-8     2105          1432          -31.97%
BenchmarkMQTT_QoS1_Pub_____256b_Payload-8     2148          1503          -30.03%
BenchmarkMQTT_QoS1_Pub_______1K_Payload-8     3024          1889          -37.53%

benchmark                                     old MB/s     new MB/s     speedup
BenchmarkMQTT_QoS0_Pub_______0b_Payload-8     31.76        89.91        2.83x
BenchmarkMQTT_QoS0_Pub_______8b_Payload-8     77.79        213.01       2.74x
BenchmarkMQTT_QoS0_Pub______32b_Payload-8     204.52       566.26       2.77x
BenchmarkMQTT_QoS0_Pub_____128b_Payload-8     550.65       1715.96      3.12x
BenchmarkMQTT_QoS0_Pub_____256b_Payload-8     877.77       2757.16      3.14x
BenchmarkMQTT_QoS0_Pub_______1K_Payload-8     1705.02      4607.72      2.70x
BenchmarkMQTT_QoS1_Pub_______0b_Payload-8     6.42         8.37         1.30x
BenchmarkMQTT_QoS1_Pub_______8b_Payload-8     11.16        14.49        1.30x
BenchmarkMQTT_QoS1_Pub______32b_Payload-8     24.97        31.97        1.28x
BenchmarkMQTT_QoS1_Pub_____128b_Payload-8     66.52        97.74        1.47x
BenchmarkMQTT_QoS1_Pub_____256b_Payload-8     124.78       178.27       1.43x
BenchmarkMQTT_QoS1_Pub_______1K_Payload-8     342.64       548.32       1.60x

benchmark                                     old allocs     new allocs     delta
BenchmarkMQTT_QoS0_Pub_______0b_Payload-8     3              0              -100.00%
BenchmarkMQTT_QoS0_Pub_______8b_Payload-8     3              0              -100.00%
BenchmarkMQTT_QoS0_Pub______32b_Payload-8     3              0              -100.00%
BenchmarkMQTT_QoS0_Pub_____128b_Payload-8     4              0              -100.00%
BenchmarkMQTT_QoS0_Pub_____256b_Payload-8     4              0              -100.00%
BenchmarkMQTT_QoS0_Pub_______1K_Payload-8     4              0              -100.00%
BenchmarkMQTT_QoS1_Pub_______0b_Payload-8     5              2              -60.00%
BenchmarkMQTT_QoS1_Pub_______8b_Payload-8     5              2              -60.00%
BenchmarkMQTT_QoS1_Pub______32b_Payload-8     5              2              -60.00%
BenchmarkMQTT_QoS1_Pub_____128b_Payload-8     7              3              -57.14%
BenchmarkMQTT_QoS1_Pub_____256b_Payload-8     7              3              -57.14%
BenchmarkMQTT_QoS1_Pub_______1K_Payload-8     7              3              -57.14%

benchmark                                     old bytes     new bytes     delta
BenchmarkMQTT_QoS0_Pub_______0b_Payload-8     80            0             -100.00%
BenchmarkMQTT_QoS0_Pub_______8b_Payload-8     88            0             -100.00%
BenchmarkMQTT_QoS0_Pub______32b_Payload-8     120           0             -100.00%
BenchmarkMQTT_QoS0_Pub_____128b_Payload-8     224           0             -100.00%
BenchmarkMQTT_QoS0_Pub_____256b_Payload-8     369           1             -99.73%
BenchmarkMQTT_QoS0_Pub_______1K_Payload-8     1250          31            -97.52%
BenchmarkMQTT_QoS1_Pub_______0b_Payload-8     106           28            -73.58%
BenchmarkMQTT_QoS1_Pub_______8b_Payload-8     122           28            -77.05%
BenchmarkMQTT_QoS1_Pub______32b_Payload-8     154           28            -81.82%
BenchmarkMQTT_QoS1_Pub_____128b_Payload-8     381           157           -58.79%
BenchmarkMQTT_QoS1_Pub_____256b_Payload-8     655           287           -56.18%
BenchmarkMQTT_QoS1_Pub_______1K_Payload-8     2312          1078          -53.37%
```

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-04 14:42:37 -07:00
Matthias Hanel
dc2eebcd85 removing t.Errorf 2020-12-03 21:40:06 -05:00
Matthias Hanel
f5fd5e4f40 fix test timing issue and flapper caused by unnecessary pop/push 2020-12-03 21:14:04 -05:00
Ivan Kozlovic
415a7071a7 Tweaks to mqttProcessConnect()
The test TestMQTTPersistedSession() flapped once on GA. It turns
out that as soon as the server sent the CONNACK, the test immediately
used a NATS publisher to send a message that was then not received by
the MQTT subscription for the recovered session.

Sending the CONNACK before restoring subscriptions allowed for a
window where a different connection could publish and messages
would be missed. That is technically OK, I think, and the test could
have been easily fixed to ensure that we don't publish from NATS before
the session is fully restored.

However, I have changed the order to first restore subscriptions and
then send the CONNACK. Given the way locking works for MQTT subscriptions,
we are sure that the CONNACK will be sent first: even if
there are inflight messages, the MQTT callbacks have to wait
for the session lock, under which the subscriptions are restored
and the CONNACK is sent.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-03 17:57:51 -07:00
Derek Collison
7564768027 Added Compact to store interface for WAL functionality
Signed-off-by: Derek Collison <derek@nats.io>
2020-12-03 16:18:58 -08:00
Ivan Kozlovic
035cffae37 Add clientType(), which returns NATS/MQTT/WS for CLIENT connections.
It returns NON_CLIENT if invoked on a non-CLIENT connection.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-03 14:23:57 -07:00
Derek Collison
a97e84d8b9 Merge pull request #1760 from nats-io/jsbug
[FIXES] https://github.com/nats-io/jetstream/issues/396
2020-12-02 16:29:39 -08:00
Derek Collison
0f7d18d6e8 Fixes https://github.com/nats-io/jetstream/issues/396
Had a deadlock with the new preconditions. We need to hold the lock across the Store() call, but that call could call back into storeUpdate(), which may need to acquire the lock. We can also enter this callback from the storage layer itself, where the lock would not be held, so added an atomic to track whether the lock is held.

Signed-off-by: Derek Collison <derek@nats.io>
2020-12-02 16:18:00 -08:00
Ivan Kozlovic
cf9ba928ca Fixed some MQTT tests
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-02 17:00:47 -07:00
Ivan Kozlovic
67425d23c8 Add c.isMqtt() and c.isWebsocket()
This hides the check on "c.mqtt != nil" or "c.ws != nil".
Added some tests.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-02 15:52:06 -07:00
Ivan Kozlovic
41fac39f8e Split createClient() into versions for normal, WS and MQTT clients.
This duplicates quite a bit of code, but reduces the conditionals
in the createClient() function.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-02 13:50:50 -07:00
Derek Collison
cddf23c200 Limit search depth for account cycles for imports
Signed-off-by: Derek Collison <derek@nats.io>
2020-12-02 11:44:27 -08:00
Derek Collison
9b107c0f4b Merge pull request #1759 from nats-io/acc_cycles
Better implementation to detect various cycles from account imports/exports.
2020-12-02 10:02:24 -08:00
Waldemar Quevedo
a9a6bdc04f Merge pull request #1732 from nats-io/rdn-ordering
Match DNs regardless of order when using TLS auth
2020-12-02 09:25:36 -08:00
Derek Collison
705cc0f5ea Better impl for detecting cycles between accounts
Signed-off-by: Derek Collison <derek@nats.io>
2020-12-02 08:56:19 -08:00
Ivan Kozlovic
4fc04d3f55 Revert changes to processSub()
Based on how the MQTT callback operates, it is safe to finish setting up
the MQTT subscriptions after processSub() returns. So I have
reverted the changes to processSub(), which minimizes changes
to non-MQTT related code.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-01 15:38:47 -07:00
Ivan Kozlovic
3e91ef75ab Some updates based on code review
- Added non-public stream and consumer configuration options to
achieve the "no subject" and "no interest" capabilities. Had
to implement custom FileStreamInfo and FileConsumerInfo
marshal/unmarshal methods so that those non-public fields can be
persisted/recovered properly.
- Restored some of JS original code (since now can use config
instead of passing booleans to the functions).
- Use RLock for deliveryFormsCycle() check (unrelated to MQTT).
- Removed restriction on creating streams with MQTT prefix.
- Preventing API deletion of internal streams and their consumers.
- Added comment on Sublist's ReverseMatch method.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-12-01 14:05:54 -07:00
Ivan Kozlovic
718c995914 Allow "nats" utility to display internal MQTT streams
MQTT streams are special in that we do not set subjects in the config,
since they capture all subjects. Otherwise, we would have been forced
to create a stream on, say, "MQTT.>", but then all publishes would have
had to be prefixed with "MQTT." in order to be captured.

However, if one uses the "nats" tool to inspect those streams, the tool
would fail with:

```
server response is not a valid "io.nats.jetstream.api.v1.stream_info_response" message:
(root): Must validate one and only one schema (oneOf)
config: subjects is required
config: Must validate all the schemas (allOf)
```

To solve that, if we detect that the user asks for the MQTT streams, we
artificially set the returned config's subjects to ">".

Alternatively, we may want to not return those streams at all, although
there may be value in seeing the info for MQTT streams/consumers.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-30 20:08:44 -07:00
Ivan Kozlovic
ac4890acba Fixed flapper
Tests dealing with the MQTT "will" needed to wait for the server to
process the MQTT client's close of the connection. Only then do we
have the guarantee that the server produced the "will" message.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-30 20:08:44 -07:00
Ivan Kozlovic
1dba6418ed [ADDED] MQTT Support
This PR introduces native support for MQTT clients. It requires the use
of accounts with JetStream enabled. Since clustering is not available
as of now, MQTT is limited to a single instance.

Only QoS 0 and 1 are supported at the moment. MQTT clients can
exchange messages with NATS clients and vice versa.

Since JetStream is required, accounts with JetStream enabled must
exist in order for an MQTT client to connect to the NATS Server.
The administrator can limit the users that can use MQTT with the
allowed_connection_types option in the user section. For instance:
```
accounts {
  mqtt {
    users [
      {user: all, password: pwd, allowed_connection_types: ["STANDARD", "WEBSOCKET", "MQTT"]}
      {user: mqtt_only, password: pwd, allowed_connection_types: "MQTT"}
    ]
    jetstream: enabled
  }
}
```
The "mqtt_only" user can only be used for MQTT connections, while the
user "all" accepts standard, websocket and MQTT clients.

Here is what a configuration to enable MQTT looks like:
```
mqtt {
  # Specify a host and port to listen for MQTT connections
  #
  # listen: "host:port"

  # It can also be configured with individual parameters,
  # namely host and port.
  #
  # host: "hostname"
  port: 1883

  # TLS configuration section
  #
  # tls {
  #  cert_file: "/path/to/cert.pem"
  #  key_file: "/path/to/key.pem"
  #  ca_file: "/path/to/ca.pem"
  #
  #  # Time allowed for the TLS handshake to complete
  #  timeout: 2.0
  #
  #  # Takes the user name from the certificate
  #  #
  #  # verify_and_map: true
  #}

  # Authentication override. Here are possible options.
  #
  # authorization {
  #   # Simple username/password
  #   #
  #   user: "some_user_name"
  #   password: "some_password"
  #
  #   # Token. The server will check the MQTT client's password in the connect
  #   # protocol against this token.
  #   #
  #   # token: "some_token"
  #
  #   # Time allowed for the client to send the MQTT connect protocol
  #   # after the TCP connection is established.
  #   #
  #   timeout: 2.0
  #}

  # If an MQTT client connects and does not provide a username/password and
  # this option is set, the server will use this user (and therefore account).
  #
  # no_auth_user: "some_user_name"

  # This is the time after which the server will redeliver a QoS 1 message
  # sent to a subscription that has not acknowledged (PUBACK) the message.
  # The default is 30 seconds.
  #
  # ack_wait: "1m"

  # This limits the number of QoS 1 messages sent to a session without receiving
  # an acknowledgement (PUBACK) from that session. The MQTT specification defines
  # the packet identifier as an unsigned 16-bit integer, which means that the
  # maximum value is 65535. The default value is 1024.
  #
  # max_ack_pending: 100
}
```

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-30 20:08:44 -07:00
Derek Collison
af81a2013d Log errors
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-30 18:21:19 -08:00
Derek Collison
bfb726e8e9 Make sure to clear JS resources on reload
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-30 17:18:33 -08:00
Derek Collison
3ec2b6d00f Version bump
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-30 16:27:07 -08:00
Derek Collison
4e6d600ecc Also make sure account works after reload
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-30 16:18:36 -08:00
Derek Collison
7e27042e6e Fix for #1736
When a system account was configured and was not the default, doing a reload would lose the JetStream service exports.

Signed-off-by: Derek Collison <derek@nats.io>
2020-11-30 16:11:50 -08:00
Ivan Kozlovic
77aead807c Send LS- without origin to route
When the cluster origin code was added, a server could send an LS+ with
an origin cluster name in the protocol. The parsing code for a ROUTER
connection was adjusted to understand this LS+ protocol.
However, the server was also sending an LS- with the origin, which the
parsing code was not able to understand. When the unsub was
for a queue subscription, this would cause the parser to error out
and close the route connection.

This PR sends an LS- without the origin in this case (so that tracing
makes sense in terms of LS+/LS- sent to a route). The receiving side
then traces the appropriate LS- but processes it as a normal RS-.

Resolves #1751

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-30 13:31:32 -07:00
Derek Collison
4532447908 Remove limitation on ackall for filtered consumers
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-28 07:18:17 -08:00
R.I.Pienaar
5e5b2e4dfd ensure the stream originating a pub error is reported
Signed-off-by: R.I.Pienaar <rip@devco.net>
2020-11-27 12:24:41 +01:00
Ivan Kozlovic
cceab9a46f System account was not properly tracking GW routed replies
In some cases, the reply subject of a request message is prefixed when
going over a gateway so that if the reply comes back to a different
server than the one where the request originated, it can be routed back.

For system accounts, this routed reply subject was not tracked,
so the server would reply to the inbox and might reach a server
that had not yet processed (through the route) the interest
on that inbox. If the reply came with the GW routed info, that
server would know to route it to the original server.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-25 15:51:12 -07:00
Derek Collison
5d8b9eb608 Merge pull request #1748 from nats-io/fs_consumer_bug
Fixed bug restoring consumer state
2020-11-25 14:17:02 -08:00
Derek Collison
f69b199e0f Fixed bug restoring consumer state.
We were not properly restoring our state for consumers and we also had a bug where we would not properly encode and write redelivered state.

Signed-off-by: Derek Collison <derek@nats.io>
2020-11-25 13:31:46 -08:00
Derek Collison
bcf295dd51 Changed dcount -> dc
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-25 13:30:29 -08:00
Derek Collison
de0c992ca6 Merge pull request #1747 from nats-io/jsupdates
JetStream changes.
2020-11-25 08:07:48 -08:00
Derek Collison
954f5a9093 Flattened filters for stream names API
Signed-off-by: Derek Collison <derek@nats.io>
2020-11-25 07:46:56 -08:00
Derek Collison
44a1373f89 JetStream changes.
Made several changes based on feedback.

1. Made PubAckResponse only optionally include an ApiError and not force an API type.
2. Allow FilterSubject to be set on a consumer config and cleared if it matches the only stream subject.
3. Remove LookupStream by subject, and add in filters for stream names API.

Signed-off-by: Derek Collison <derek@nats.io>
2020-11-25 06:50:25 -08:00
Phil Pennock
bc6c433142 FreeBSD fixes: %cpu scale; no-cgo for amd64
It would be convenient if we did not need to use cgo to compile for
FreeBSD for common architectures, allowing builds to be generated in
Linux CI flows.

The sysctl interface uses offsets which can vary by build architecture,
so doing this in the general case without cgo is not realistic.  But we
can front-load the C work to get the offsets for a given architecture,
then use encoding/binary at run-time.

While doing this, I saw that the existing FreeBSD code was
mis-calculating `%cpu` by converting the `fixpt_t` scaled int straight
to a double without dividing by the scaling factor, so we also fix for
all other architectures by introducing a division by `FSCALE`.

The offsets-emitting code is in `freebsd.txt`, with the filename chosen
to keep the Go toolchain from picking it up and trying to compile.

The result is unsafe-free cgo-free builds for FreeBSD/amd64.
2020-11-24 23:04:50 -05:00
Ivan Kozlovic
406dc7ee56 Fixed data race on leafnode check for remote cluster
A newly introduced test (TestLeafNodeTwoRemotesBindToSameAccount)
had a server creating two remotes to the same server/account.
This test quite often shows the data race:
```
go test -race -v -run=TestLeafNodeTwoRemotesBindToSameAccount ./server -count 100 --failfast
=== RUN   TestLeafNodeTwoRemotesBindToSameAccount
==================
WARNING: DATA RACE
Write at 0x00c000168790 by goroutine 34:
  github.com/nats-io/nats-server/v2/server.(*client).processLeafNodeConnect()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:1177 +0x314
  github.com/nats-io/nats-server/v2/server.(*client).processConnect()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/client.go:1719 +0x9e4
  github.com/nats-io/nats-server/v2/server.(*client).parse()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/parser.go:870 +0xf88
  github.com/nats-io/nats-server/v2/server.(*client).readLoop()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/client.go:1052 +0x7a5
  github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode.func4()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:872 +0x52

Previous read at 0x00c000168790 by goroutine 32:
  github.com/nats-io/nats-server/v2/server.(*client).remoteCluster()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:1203 +0x42d
  github.com/nats-io/nats-server/v2/server.(*Server).updateLeafNodes()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:1375 +0x2cf
  github.com/nats-io/nats-server/v2/server.(*client).processLeafSub()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:1619 +0x858
  github.com/nats-io/nats-server/v2/server.(*client).parse()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/parser.go:624 +0x5031
  github.com/nats-io/nats-server/v2/server.(*client).readLoop()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/client.go:1052 +0x7a5
  github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode.func4()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:872 +0x52

Goroutine 34 (running) created at:
  github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/server.go:2627 +0xc7
  github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:872 +0xf7a
  github.com/nats-io/nats-server/v2/server.(*Server).startLeafNodeAcceptLoop.func1()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:474 +0x5e
  github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/server.go:1784 +0x57

Goroutine 32 (running) created at:
  github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/server.go:2627 +0xc7
  github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:872 +0xf7a
  github.com/nats-io/nats-server/v2/server.(*Server).startLeafNodeAcceptLoop.func1()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/leafnode.go:474 +0x5e
  github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
      /Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/server.go:1784 +0x57
==================
    testing.go:965: race detected during execution of test
--- FAIL: TestLeafNodeTwoRemotesBindToSameAccount (0.05s)
```

This is because as soon as a LEAF is registered with the account, it is available
in the account's lleafs map, even before the CONNECT for this connection is processed.
If another LEAF connection is processing an LSUB, the code goes over all leaf connections
for the account and may find the new connection that is still in the process of connecting.
The check accesses c.leaf.remoteCluster unlocked, and that field is also set unlocked
during the CONNECT. The fix is to have both the set and the check at that particular
location use the client's lock.

Ideally, I believe that the connection should not be in the account's lleafs,
or at least should not be used, until the CONNECT for this leaf connection is fully processed.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-24 15:42:30 -07:00
Ivan Kozlovic
88475014ef [FIXED] Split LMSG across routes
If a LeafNode message is sent across a route and the message
does not fit in the read buffer, the parser would incorrectly process
the "pub args" as if it were a ROUTED message, not a LEAF message.
This caused clonePubArg() to return an error that would make
the parser end with a protocol violation.

Keep track that we are processing an LMSG so that we can pass
that information to clonePubArg() and do the proper parsing in the
split scenario.

Resolves #1743

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-24 14:55:50 -07:00
Ivan Kozlovic
120b031ffd Merge pull request #1739 from nats-io/leaf-warning
[Added] account name checks for leaf nodes in operator mode
2020-11-24 12:35:31 -07:00
Matthias Hanel
b0461e3921 Fixed comment
Signed-off-by: Matthias Hanel <mh@synadia.com>
2020-11-24 12:47:41 -05:00
Matthias Hanel
a0dc9ea3e3 Reducing complexity of lookup
Signed-off-by: Matthias Hanel <mh@synadia.com>
2020-11-24 12:31:44 -05:00
Ivan Kozlovic
637717a9f3 Merge pull request #1738 from nats-io/fix_1730
[FIXED] LeafNode reject duplicate remote
2020-11-24 09:22:11 -07:00
Matthias Hanel
a8390b7432 Incorporating comments and moving code
Signed-off-by: Matthias Hanel <mh@synadia.com>
2020-11-23 23:27:44 -05:00
Matthias Hanel
e2e69b6daf [Added] account name checks for leaf nodes in operator mode
Rules out implausible ones.

Signed-off-by: Matthias Hanel <mh@synadia.com>
2020-11-23 15:38:41 -05:00
Ivan Kozlovic
f155c75da7 [FIXED] LeafNode reject duplicate remote
There was a check to prevent an erroneous loop detection when a
remote would reconnect (due to a stale connection) while the accepting
side had not yet detected the bad connection.

However, this check was racy because it was done prior to adding
the connection to the map.

In the case of a misconfiguration where the remote creates 2 different
remote connections that end up binding to the same account on the
accepting side, it was possible that this would not be detected.
And when it was, the remote side would be unaware, since the
disconnect/reconnect attempts would not show up if not running in debug mode.

This change makes sure that the detection is no longer racy and returns
an error to the remote so at least the log/console of the remote will
show the "duplicate connection" error messages.

Resolves #1730

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2020-11-23 13:28:18 -07:00
Waldemar Quevedo
a766b52c47 Allow matching DNs regardless of order
Signed-off-by: Waldemar Quevedo <wally@synadia.com>
2020-11-23 12:16:49 -08:00
Derek Collison
c0bc788c6d Merge pull request #1735 from nats-io/ehdrs
Stream publish changes
2020-11-23 09:44:37 -08:00
Derek Collison
18108be374 Merge pull request #1731 from nats-io/cycle
[FIXED] Detect service import cycles.
2020-11-23 09:43:51 -08:00
Derek Collison
2e7b47f692 Merge pull request #1733 from nats-io/jsimport
Allow complete $JS.API to be imported from another account.
2020-11-23 07:35:24 -08:00