Constrain the server's auth nonce selection. This is a no-op for the
current server code-base, but it establishes a guarantee that we
expect clients to check for, buying us future-proofing.
Leafnodes that formed clusters were only partially supported. This adds proper support for the origin cluster: subscription suppression, and no echo of data messages back to the origin cluster.
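A minimal illustrative config for the scenario (hostnames and the
cluster name are made up): two servers that form a cluster and each
solicit a leafnode connection to the same hub. With this change the
hub treats "spoke" as the origin cluster, suppresses duplicate
subscriptions coming from it, and does not echo a data message back
into the cluster it came from.

    # On each spoke server
    cluster {
      name: "spoke"
      port: 6222
      routes: [ "nats-route://spoke-1:6222", "nats-route://spoke-2:6222" ]
    }
    leafnodes {
      remotes: [ { url: "nats-leaf://hub:7422" } ]
    }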
Signed-off-by: Derek Collison <derek@nats.io>
When a leafnode connected with credentials that carried permissions,
the spoke had no way of knowing what those permissions were.
This could lead to the spoke being disconnected when it sent
subscriptions or messages to the hub that were not allowed.
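For illustration, permissions of the kind such credentials may carry,
shown here as a hedged sketch using the regular users syntax with
made-up names:

    authorization {
      users: [
        { user: "spoke", password: "s3cr3t",
          permissions: { publish: ["orders.>"], subscribe: ["orders.>"] } }
      ]
    }

With this change the spoke learns those permissions, so it can avoid
sending disallowed subscriptions or messages in the first place.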
Signed-off-by: Derek Collison <derek@nats.io>
Added cluster names, required as prep work for clustered JetStream. The system can dynamically pick a cluster name and settle on one even in large clusters.
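For example, a name can also be set explicitly in the cluster block:

    cluster {
      name: "C1"
      port: 6222
    }

If no name is configured, the servers in the cluster dynamically pick
one and settle on it.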
Signed-off-by: Derek Collison <derek@nats.io>
This was found due to a recent test that was flapping. The test
was not checking the correct server for the leafnode connection,
but that uncovered the following bug:
When a leafnode connection is solicited, the read/write loops are
started. Then the connection lock is released and several
functions are invoked to register the connection with an account
and add it to the server's leafs map.
The problem is that the readloop (for instance) could get a read
error and close the connection *before* that registration code
executes, which would leave a closed connection incorrectly
registered.
This could be fixed either by delaying the start of the read/write
loops until after registration is done, or, as in this PR, by
checking the connection's closed status after registration and,
if closed, manually undoing the registration with the account and
leafs map.
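A hedged sketch of the chosen approach (illustrative types and names,
not the server's actual code):

    package main

    import "sync"

    type conn struct {
        mu     sync.Mutex
        id     uint64
        closed bool // set by the read/write loops on error
    }

    type srv struct {
        mu    sync.Mutex
        leafs map[uint64]*conn
    }

    // registerLeaf registers the connection, then re-checks the
    // closed flag: the readloop may have failed and closed the
    // connection between the registration and this check, in which
    // case the registration is manually undone.
    func (s *srv) registerLeaf(c *conn) {
        s.mu.Lock()
        s.leafs[c.id] = c
        s.mu.Unlock()

        c.mu.Lock()
        closed := c.closed
        c.mu.Unlock()

        if closed {
            s.mu.Lock()
            delete(s.leafs, c.id)
            s.mu.Unlock()
        }
    }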
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The test was not checking that PINGs were sent. However, without
disabling the first short ping, and with a very low interval, there
was a chance on Travis that the first short ping was sent before
the client had connected, which would break the NATS client protocol
expectation of receiving a PONG in response to the initial PING
(after CONNECT).
The client library could arguably be updated to accept a PING during
the CONNECT phase.
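For reference, the initial exchange the client expects (an
illustrative trace of the NATS protocol):

    S: INFO {...}
    C: CONNECT {...}
    C: PING
    S: PONG   <- required before the client considers itself connected
    S: PING   <- server-initiated pings are only expected from here on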
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
We added an authentication override block for the websocket
configuration in PRs #1463 and #1465, which somehow introduced a drop
in performance as reported by the bench tests.
This PR refactors a bit to restore the performance numbers.
This change also fixes the override behavior for websocket auth
(see the sketch after this list):
- If the websocket's NoAuthUser is configured, the websocket's auth
block MUST define Users, and that user must be present.
- If there is any override (username/password, token, etc.), then the
whole block config is used when authenticating a websocket
client, which means that if the websocket's NoAuthUser is empty we
do not fall back to the regular client's NoAuthUser config.
- TLSMap always overrides the regular client's config. That is,
whatever TLSMap value is specified in the websocket's tls{} block
will be used.
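A hedged configuration sketch of the override behavior described
above (values are illustrative):

    websocket {
      port: 8080
      no_auth_user: "guest"
      # Because no_auth_user is set here, this authorization block
      # MUST define users, and "guest" must be among them.
      authorization {
        users: [
          { user: "guest", password: "guest" },
          { user: "admin", password: "s3cr3t" }
        ]
      }
      tls {
        cert_file: "./server-cert.pem"
        key_file: "./server-key.pem"
        # Used regardless of the regular client's tls{} block.
        verify_and_map: true
      }
    }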
The TLSMap configuration was not used for LeafNodes. The behavior
now is (see the sketch after this list):
- If the LeafNode's auth block contains users and TLSMap is true,
the user is looked up based on the cert's info. If not found,
authentication fails. If found, the connection is authenticated
and bound to the associated account.
- If no users are specified in the LeafNode's auth block and TLSMap
is true, then the cert's info is checked against the global
users map.
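A hedged sketch of the two cases (field values are illustrative; with
verify_and_map the user name is taken from the client certificate's
info):

    leafnodes {
      port: 7422
      tls {
        cert_file: "./server-cert.pem"
        key_file: "./server-key.pem"
        ca_file: "./ca.pem"
        verify_and_map: true
      }
      # Case 1: users are defined, so the cert's info must match one
      # of them, which binds the connection to the given account.
      authorization {
        users: [ { user: "CN=leaf.example.com", account: "A" } ]
      }
    }
    # Case 2: omit the authorization users entirely and the cert's
    # info is looked up in the global users map instead.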
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Building the server's nkeys and users maps out of the slices from
the options has been made a function so it can be used for the server
and websocket (and in the future for mqtt), as sketched below.
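A minimal sketch of the idea (illustrative names, not the server's
actual identifiers): fold the option slices into lookup maps once, in
one helper shared by the server and websocket auth paths.

    package main

    // Illustrative stand-ins; the server's types carry more fields.
    type User struct{ Username, Password string }
    type NkeyUser struct{ Nkey string }

    // buildUserMaps converts the slices coming from the options into
    // maps keyed by username and by nkey public key.
    func buildUserMaps(users []*User, nkeys []*NkeyUser) (map[string]*User, map[string]*NkeyUser) {
        um := make(map[string]*User, len(users))
        for _, u := range users {
            um[u.Username] = u
        }
        nm := make(map[string]*NkeyUser, len(nkeys))
        for _, n := range nkeys {
            nm[n.Nkey] = n
        }
        return um, nm
    }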
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Websocket can now override:
- Username/password
- Token
- Users
- NKeys
- no_auth_user
- auth_timeout
For TLS, this adds support for verify and verify_and_map. We used to
set the tls config's ClientAuth to NoClientCert. It now depends on
whether the config requires client certificate verification, which
is needed if TLSMap is enabled, as in the sketch below.
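A sketch of that decision (hedged, with an illustrative helper name):

    package main

    import "crypto/tls"

    // clientAuthMode returns the ClientAuth value to put in the
    // tls.Config: require and verify a client certificate when the
    // config asks for verification (verify or verify_and_map/TLSMap),
    // otherwise keep the previous NoClientCert behavior.
    func clientAuthMode(verify, tlsMap bool) tls.ClientAuthType {
        if verify || tlsMap {
            return tls.RequireAndVerifyClientCert
        }
        return tls.NoClientCert
    }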
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The grace period used to be hardcoded at 10 seconds.
This option allows the user to configure the amount of time the
server will wait before initiating the closing of client connections.
Note that the grace period must be strictly lower than the overall
lame_duck_duration. The server deducts the grace period from that
overall duration and spreads the closing of connections over the
remaining time.
For instance, with 1000 connections, a lame duck duration of
30 seconds, and a grace period of 10 seconds, the server will use
30-10 = 20 seconds to spread the closing of those 1000 connections,
i.e. roughly 50 connections per second.
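In config form, that example reads:

    lame_duck_duration: "30s"
    lame_duck_grace_period: "10s"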
Resolves #1459.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>