This is similar to PR #4115 but for LeafNodes.
Compression mode can be set on both sides (on the accept side and in remotes).
```
leafnodes {
  port: 7422
  compression: s2_best
  remotes [
    {
      url: "nats://host2:7422"
      compression: s2_better
    }
  ]
}
```
Possible modes are similar to those for routes (described in PR #4115),
except that when not defined we default to `s2_auto`.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Added a leafnode lock to allow better traversal without copying large leafnode lists in a single hub account.
Signed-off-by: Derek Collison <derek@nats.io>
One should not access s.opts directly but instead use s.getOpts().
Also, server lock needs to be released when performing an account
lookup (since this may result in server lock being acquired).
A function was calling s.LookupAccount under the client lock, which
technically creates a lock inversion situation.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The new field `compression` in the `cluster{}` block allows
specifying which compression mode to use between servers.
It can be specified simply as a boolean or a string for the
simple modes, or as an object for the "s2_auto" mode, where
a list of RTT thresholds can be specified.
By default, if no compression field is specified, the server
uses the s2_auto mode with default RTT thresholds of
10ms, 50ms and 100ms for the "uncompressed", "fast", "better"
and "best" modes.
```
cluster {
  ...
  # Possible values are "disabled", "off", "enabled", "on",
  # "accept", "s2_fast", "s2_better", "s2_best" or "s2_auto"
  compression: s2_fast
}
```
To specify a different list of thresholds for `s2_auto`,
here is how it would look:
```
cluster {
  ...
  compression: {
    mode: s2_auto
    # This means that for RTT up to 5ms (inclusive), the compression
    # level will be "uncompressed"; from above 5ms up to 15ms, the
    # mode switches to "s2_fast"; from above 15ms up to 50ms, it
    # switches to "s2_better"; and anything above 50ms results in
    # the "s2_best" compression mode.
    rtt_thresholds: [5ms, 15ms, 50ms]
  }
}
```
Note that the "accept" mode means that a server will accept
compression from a remote and switch to that same compression
mode, but will otherwise not initiate compression. That is,
if 2 servers are configured with "accept", then compression
will actually be "off". If one of the server had say s2_fast
then they would both use this mode.
If a server has compression mode set (other than "off") but
connects to an older server, there will be no compression between
those 2 routes.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
New configuration fields:
```
cluster {
  ...
  pool_size: 5
  accounts: ["A", "B"]
}
```
The `pool_size` configuration in the example above means that this
server will create 5 routes to a remote server, assuming that
server has the same `pool_size` setting.
Accounts that are not part of the `accounts` list are assigned
a specific route in this pool, and it will be the same route
on all servers in the cluster.
Accounts that are listed in the `accounts` field each have
a dedicated route connection. This allows suppression of the
account name in some of the route protocols, reducing the bytes
transmitted, which may increase performance.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
- Save the TLS name only if not already set
- Use the passed URLs slice instead of using s.getOpts().Routes
- Enhanced the test
- Fixed an unrelated DATA RACE report
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The server was not setting "server name" in the TLS configuration
for route connections, which may lead to failed (re)connect attempts
if the certificate does not allow for the IP and the URL does not
contain a hostname, which can happen with the gossip protocol.
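A minimal sketch of the idea, using a hypothetical helper (the actual
server code differs):
```
package main

import (
	"crypto/tls"
	"net/url"
)

// tlsConfigForRoute is a hypothetical helper illustrating the fix: set
// ServerName (only if not already set) so certificate verification uses
// the hostname from the configured route URL rather than the dialed IP.
func tlsConfigForRoute(base *tls.Config, routeURL *url.URL) *tls.Config {
	tc := base.Clone()
	if tc.ServerName == "" {
		tc.ServerName = routeURL.Hostname()
	}
	return tc
}

func main() {
	u, _ := url.Parse("nats://route-host.example.com:6222")
	_ = tlsConfigForRoute(&tls.Config{}, u)
}
```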
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
A request to `$SYS.REQ.SERVER.PING.JSZ` would now return something
like this:
```
...
"meta_cluster": {
  "name": "local",
  "leader": "A",
  "peer": "NUmM6cRx",
  "replicas": [
    {
      "name": "B",
      "current": true,
      "active": 690369000,
      "peer": "b2oh2L6w"
    },
    {
      "name": "Server name unknown at this time (peerID: jZ6RvVRH)",
      "current": false,
      "offline": true,
      "active": 0,
      "peer": "jZ6RvVRH"
    }
  ],
  "cluster_size": 3
}
```
Note the "peer" field following the "leader" field that contains
the server name. The new field is the node ID, which is a hash of
the server name.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
A server maintains a map keyed by subject+queue to know the number
of members in the same group. However, on unsubscribe, when we got
to the last member being unsubscribed, we were removing the entry
from the map but then unfortunately adding it back with a value
of 0, which caused a leak. If the same subscription came back,
the map entry would be reused, but for a queue sub that never
comes back, memory could increase continuously.
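A simplified sketch of the pattern (names hypothetical; the real code
uses the server's internal structures):
```
package main

import "fmt"

func main() {
	// Hypothetical stand-in for the server's subject+queue member map.
	qmembers := map[string]int{"foo.bar qgroup": 1}

	key := "foo.bar qgroup"
	// The bug: after deleting the last member's entry, an unconditional
	// store put the key back with a value of 0, leaking it. The fix is
	// to store only when members remain:
	if n := qmembers[key] - 1; n <= 0 {
		delete(qmembers, key)
	} else {
		qmembers[key] = n
	}
	fmt.Println(len(qmembers)) // 0: the entry is truly gone
}
```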
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Added the Start, LastActivity, Uptime and Idle fields that we normally
have in Connz for non-route connections. This info can be useful
to determine if a route is recent, etc.
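Assuming the new fields mirror their Connz counterparts, a route entry
in the Routez output might look like this (values illustrative):
```
"routes": [
  {
    "rid": 1,
    "remote_id": "de0921c497",
    "start": "2023-04-03T12:00:00Z",
    "last_activity": "2023-04-03T12:05:42Z",
    "uptime": "5m42s",
    "idle": "0s"
  }
]
```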
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
A gateway connection will now be closed and an error reported if a
remote gateway has a name that is a duplicate of the local cluster's name.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This was introduced when fixing #2881. The call to setFirstPingTimer
needed to be done under the client's lock.
Moved setFirstPingTimer from a server receiver to a client receiver.
The only reason it was a server receiver is that we need the
server options, but c.srv is always set when invoking this function,
so we now get the server from c.srv in that function.
Related to #2881
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
We will only send if all peers in our group are >= 2.7.1 and we will check for updates.
When a consumer follower takes over it will notify all pending requests that those requests are invalid now.
Signed-off-by: Derek Collison <derek@nats.io>
This allows stream placement to overflow to adjacent clusters.
We also do more balanced placement based on resources (store or mem), and we can continue to expand this.
We also introduce an account requirement that stream configs contain a MaxBytes value.
We now track account limits and server limits more distinctly, and do not reserve server resources based on the account limits themselves.
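For example, under such a requirement an account's stream configuration
must carry an explicit MaxBytes (illustrative JSON):
```
{
  "name": "ORDERS",
  "subjects": ["orders.>"],
  "storage": "file",
  "max_bytes": 1073741824
}
```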
Signed-off-by: Derek Collison <derek@nats.io>
- When detecting a duplicate route, it was possible for a server
to lose track of the peer's gateway URL, which would prevent
it from gossiping that URL to inbound gateway connections.
- When a server has gateways enabled and has its own gateway
as a remote, the monitoring endpoint `/varz` would include it
but without the "urls" array.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is the reverse of the earlier work to have LNs extend a non-JS cluster.
Also adds mixed mode tests.
Signed-off-by: Derek Collison <derek@nats.io>
* [fixed] jetstream unique server name requirement across domains
including the domain in server info
adding a check for the cluster name in the duplicate leaf node connection check
This does not address non-unique server names within the same domain, say
within a super cluster.
Signed-off-by: Matthias Hanel <mh@synadia.com>
* [changed] pinned certs now also check the server being connected to;
on reload, clients with removed pinned certs will be disconnected.
The check now happens only on TLS handshake.
Signed-off-by: Matthias Hanel <mh@synadia.com>
This structure is used in ClientAuthentication, an interface
designed to let 3rd parties extend the authentication mechanisms
of the server.
In order to allow those 3rd parties to create unit tests, mocks, etc.,
we need to export this structure so it's accessible externally.
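A minimal sketch of such a mock, assuming the interface exposes GetOpts,
GetTLSConnectionState, RegisterUser and RemoteAddress (the exact method
set and import path may differ between server versions):
```
package mockauth

import (
	"crypto/tls"
	"net"

	"github.com/nats-io/nats-server/v2/server"
)

// mockClient is a test double satisfying server.ClientAuthentication.
type mockClient struct {
	opts *server.ClientOpts // the now-exported structure
	user *server.User
}

func (m *mockClient) GetOpts() *server.ClientOpts                 { return m.opts }
func (m *mockClient) GetTLSConnectionState() *tls.ConnectionState { return nil }
func (m *mockClient) RegisterUser(u *server.User)                 { m.user = u }
func (m *mockClient) RemoteAddress() net.Addr                     { return nil }
```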
Signed-off-by: R.I.Pienaar <rip@devco.net>
In a setup with a cluster of servers to which 2 different leaf nodes
attach, with queue subs attached to one of the leafs, if that
leaf server is restarted and reconnects to another server in the
cluster, there was a risk of an infinite message loop between
some servers in the "hub" cluster.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This could happen when a leafnode has permissions set and another
connection (client, etc.) is about to deliver a message to the
leafnode while the leafnode itself is receiving messages, and
both check permissions at the same time.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Currently, we use ReadyForConnections in server tests to wait for the
server to be ready. However, when this fails we don't get a clue about
why it failed.
This change adds a new unexported method called readyForConnections that
returns an error describing which check failed. The exported
ReadyForConnections version works exactly as before. The unexported
version gets used in internal tests only.
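For illustration, a self-contained sketch of the pattern with a made-up
readiness check (the real method inspects the server's actual state):
```
package main

import (
	"errors"
	"fmt"
	"time"
)

type Server struct{ listening bool }

// ReadyForConnections keeps its original boolean signature.
func (s *Server) ReadyForConnections(wait time.Duration) bool {
	return s.readyForConnections(wait) == nil
}

// readyForConnections reports which check failed, for use in internal tests.
func (s *Server) readyForConnections(wait time.Duration) error {
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		if s.listening {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return errors.New("server not yet listening for client connections")
}

func main() {
	s := &Server{}
	fmt.Println(s.ReadyForConnections(50 * time.Millisecond)) // false
	fmt.Println(s.readyForConnections(50 * time.Millisecond)) // explains why
}
```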
1. When in mixed mode and only running the global account, we will now check the account for JS.
2. Added code to decrease the cluster set size if we guessed wrong in a mixed mode setup.
Signed-off-by: Derek Collison <derek@nats.io>
If leafnodes from a cluster were to reconnect to a server in
a different cluster, it was possible for that server to send
the leafnodes some of their own subscriptions, which could cause
an improper loop detection error.
There was also a defect that would cause subscriptions over routes
for leafnode subscriptions to be registered under the wrong key,
which would lead to those subscriptions not being properly removed
on route disconnect.
Finally, during route disconnect, the leafnodes map was not updated.
This PR fixes that too.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Took the approach of storing a struct instead of a pointer. Of course,
when changing the offline boolean from false to true, this means
we need to call Store again (with the same key).
This is based on the assumption that those Load/Store calls are not too
frequent. Otherwise, we may need to use locking (and keep *nodeInfo).
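A sketch of the trade-off, with a hypothetical nodeInfo:
```
package main

import "sync"

// nodeInfo is a hypothetical stand-in for the value stored per peer.
type nodeInfo struct {
	name    string
	offline bool
}

var nodes sync.Map // peer ID -> nodeInfo (struct value, not *nodeInfo)

// setOffline flips the offline flag. Since a struct copy is stored,
// mutating the loaded copy has no effect on the map: we must call
// Store again with the same key. Keeping *nodeInfo instead would
// allow in-place mutation but would require locking to be safe.
func setOffline(peer string) {
	if v, ok := nodes.Load(peer); ok {
		ni := v.(nodeInfo)
		ni.offline = true
		nodes.Store(peer, ni)
	}
}

func main() {
	nodes.Store("jZ6RvVRH", nodeInfo{name: "C"})
	setOffline("jZ6RvVRH")
}
```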
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This allows metacontrollers to span superclusters. Also includes placement directives for streams. By default they select the request origin cluster.
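For example, a stream configuration can pin placement to a given cluster
(illustrative; when omitted, the request origin cluster is selected):
```
{
  "name": "ORDERS",
  "subjects": ["orders.>"],
  "placement": { "cluster": "east" }
}
```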
Signed-off-by: Derek Collison <derek@nats.io>
Added two options in the remote leaf node configuration (see the
sketch below):
- compress, for websocket only at the moment
- ws_masking, to force remote leafnode connections to mask websocket
frames (the default is no masking since this is server-to-server
communication)
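A config sketch using the option names described above (exact keys may
vary by server version; the URL is hypothetical):
```
leafnodes {
  remotes [
    {
      url: "ws://hub.example.com:8080"
      # websocket compression
      compress: true
      # force masking of websocket frames (default is no masking
      # for server-to-server connections)
      ws_masking: true
    }
  ]
}
```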
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Since we were creating subs on the fly, sub.im would always be nil.
We were passing a client because it was needed in sendRouteSubOrUnSubProtos().
This PR simply fills the buffer with each account's subscriptions.
There is also no need to have subs sent from a different go routine
based on some threshold: routes are no longer subject to max pending.
Some code has been moved into a function so that it can be shared
by sendSubsToRoute() and sendRouteSubOrUnSubProtos(). The function
simply adds the RS+/RS- protocols to the given buffer.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Those changes are required to maintain backward compatibility.
Since the replies are "_G_.<gateway name hash>.<server ID hash>"
and the hashes were 6 characters long, changing the hash
function to 8 characters would break things.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>