* [added] pinned_cert option to the tls block; values are hex(sha256(SPKI))
When read from the config file, the values are automatically lowercased.
When setting the values programmatically, the check requires
lowercase to avoid having to alter the map at this point.
Signed-off-by: Matthias Hanel <mh@synadia.com>
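The pinned value can be computed with openssl; a sketch under the assumption that the option takes the lowercase hex SHA-256 of the certificate's SubjectPublicKeyInfo (SPKI), as the changelog line suggests (file paths here are illustrative):

```shell
# Generate a throwaway self-signed cert for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout /tmp/pinned_demo_key.pem -out /tmp/pinned_demo_cert.pem 2>/dev/null

# hex(sha256(spki)): extract the SubjectPublicKeyInfo in DER form and hash it.
# openssl dgst emits lowercase hex, matching the server's lowercasing of config values.
openssl x509 -in /tmp/pinned_demo_cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform DER 2>/dev/null \
    | openssl dgst -sha256 \
    | cut -d' ' -f2
```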
Say, with a cluster of 3, all MQTT assets are created with a replica
count of 3. However, when a server is shut down, any new MQTT client
will fail to connect because we try to create a session stream
with R(3), which leads to "insufficient resources".
The longer-term solution should be for the server to allow the
creation of an asset with an R() value bigger than the
current number of running servers, as long as there is quorum.
For now, we will reduce the R() value for the sessions if we get
an "insufficient resources" error.
Note that the other assets will still use the computed R() based
on cluster size. So the first time that a client on a given
account is started, we will still need to have R() == cluster size
(at least for R(3)).
Partially resolves #2226
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
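The workaround can be sketched as follows; `createSessionStream` and the error value are hypothetical stand-ins for the server's internals, and the "running servers" parameter only simulates the cluster state:

```go
package main

import (
	"errors"
	"fmt"
)

// errInsufficientResources stands in for the server's "insufficient resources" error.
var errInsufficientResources = errors.New("insufficient resources")

// createSessionStream is a hypothetical stand-in for session stream creation.
// Here it succeeds only when the requested replica count fits the running servers.
func createSessionStream(replicas, runningServers int) error {
	if replicas > runningServers {
		return errInsufficientResources
	}
	return nil
}

// createSessionWithFallback reduces the R() value until creation succeeds,
// mirroring the workaround: only the "insufficient resources" error triggers
// a retry with a lower replica count.
func createSessionWithFallback(replicas, runningServers int) (int, error) {
	for r := replicas; r >= 1; r-- {
		err := createSessionStream(r, runningServers)
		if err == nil {
			return r, nil
		}
		if !errors.Is(err, errInsufficientResources) {
			return 0, err
		}
	}
	return 0, errInsufficientResources
}

func main() {
	// With one of three servers down, the session falls back from R(3) to R(2).
	r, err := createSessionWithFallback(3, 2)
	fmt.Println(r, err) // 2 <nil>
}
```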
If you attempted to use the server HTTP handlers, it would panic unless you explicitly called StartMonitoring.
Having to run a secondary HTTP server is not ideal for those embedding NATS who only want to host the HTTP handlers on a pre-existing HTTP server.
Talked with @kozlovic via Slack about this
- Changed to be case-sensitive by default
- Removed all references to the Go implementation
- Clarified how append in a case-insensitive context should behave
[ci skip]
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
JetStream data movement can fail, so return that error and abort start-up if there is
a failure in moving precious data, rather than serving without it.
Create the JetStream directory if needed.
Create directories for private data with mode 0750, not 0755.
This does not handle a directory layout made with 2.2.3, but does support a
2.2.2 to 2.2.4 migration. The empty directories made under 2.2.3 will still
hinder the renames we do here.
When a response was needed from a leafnode cluster back to a hub, we had rules to disallow it.
That rule was a bit dated; since we now have cluster origin for leafnode clusters, and that
is checked before the message is actually sent, we could remove the old rule.
Signed-off-by: Derek Collison <derek@nats.io>
There are two options, same_origin and allowed_origins, that should
apply only to web browsers, which set the Origin HTTP header. If
the header is not present, the server should not fail direct
clients using the websocket protocol, or leafnodes.
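In config form, the options might look like the sketch below; option names follow the commit description, and port and origin values are illustrative:

```
websocket {
    port: 8443
    # Reject browser connections whose Origin host differs from the server's host.
    same_origin: true
    # Alternatively, explicitly list acceptable origins.
    allowed_origins: ["https://app.example.com"]
    # Clients that do not send an Origin header (non-browser clients, leafnodes)
    # are not subject to these checks.
}
```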
From spec:
https://datatracker.ietf.org/doc/html/rfc6455#section-1.6
The WebSocket Protocol uses the origin model used by web browsers to
restrict which web pages can contact a WebSocket server when the
WebSocket Protocol is used from a web page. Naturally, when the
WebSocket Protocol is used by a dedicated client directly (i.e., not
from a web page through a web browser), the origin model is not
useful, as the client can provide any arbitrary origin string.
Resolves #2207
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
When using multiple source streams from either different accounts or domains, the stream names could be the same, which would cause bad behavior.
Signed-off-by: Derek Collison <derek@nats.io>
Only log when actually moving an account, in case other files start polluting the directory.
When failing to look up an account and we have a resolver, check whether it is a valid account name before attempting the lookup.
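The name check can be sketched as below. The real server presumably uses nkeys validation; this sketch only checks the textual shape of an NKey account public key (base32, 56 characters, 'A' prefix), which is enough to skip lookups for file names that clearly are not accounts:

```go
package main

import (
	"fmt"
	"regexp"
)

// accountNameRe matches the shape of an NKey account public key:
// 'A' followed by 55 base32 characters (RFC 4648 alphabet, no padding).
// This is an illustrative check, not the server's actual validation.
var accountNameRe = regexp.MustCompile(`^A[A-Z2-7]{55}$`)

func looksLikeAccountName(name string) bool {
	return accountNameRe.MatchString(name)
}

func main() {
	fmt.Println(looksLikeAccountName("ACZSWBJ4SYILK7QVDELO64VX3EFWB6CXCPMEBUKA36MJJQRPXGEEQ2WJ"))
	fmt.Println(looksLikeAccountName("not-an-account"))
}
```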
Signed-off-by: Derek Collison <derek@nats.io>