The issue was that the subscription created for the MQTT client
was resulting in the creation of a shadow subscription which did
not have the MQTT-specific object attached, causing a panic when
accessing it in the subscription's icb.
After that, it was discovered that the wrong subject was passed
to deliverMsg(), so that was fixed too so that the icb callback
gets the proper transformed subject.
Resolves #2265
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The addStream() call can return an ApiErr, but we did not handle
that, leading to errors like 'stream name already in use (10058)' instead
of just 'stream name already in use' with the correct error code 10058 set
Signed-off-by: R.I.Pienaar <rip@devco.net>
We were incorrectly shutting things down via deny clauses when detecting the remote side/hub had JetStream capabilities.
This change moves that logic to the remote side and is signalled off the connect message, which lets the remote side know
whether the local leafnode account has JetStream enabled.
Signed-off-by: Derek Collison <derek@nats.io>
* [changed] pinned certs to also check the server connected to.
On reload, clients with removed pinned certs will be disconnected.
The check now happens only on TLS handshake.
Signed-off-by: Matthias Hanel <mh@synadia.com>
Fixed by moving the setting of the mappings into a common function that is
also called when the JWT is updated
Signed-off-by: Matthias Hanel <mh@synadia.com>
When we detected a duplicate in R>1 mode we set the skip sequence indicator but were not using it when dealing with the underlying store.
Signed-off-by: Derek Collison <derek@nats.io>
We had a report of corrupt message payloads when going across leafnodes between streams that were sourced from one another.
We were incorrectly using the underlying buffer when a header already existed.
Signed-off-by: Derek Collison <derek@nats.io>
* [added] pinned_cert option to the tls block: hex(sha256(spki)).
When read from config, the values are automatically lower-cased.
The check requires lower case when the values are set
programmatically, to avoid having to alter the map at this point.
Signed-off-by: Matthias Hanel <mh@synadia.com>
Say with a cluster of 3, all MQTT assets are created with a replica
count of 3. However, when a server is shut down, any new MQTT client
will fail to connect because we try to create a session stream
with R(3), which fails with insufficient resources.
The longer-term solution should be for the server to allow the
creation of an asset with an R() value that is bigger than the
current number of running servers as long as there is quorum.
For now, we will reduce the R() value for the sessions if we get
an "insufficient resources" error.
Note that the other assets will still use the computed R() based
on cluster size. So the first time that a client on a given
account is started, we will still need to have R() == cluster size
(at least for R(3)).
Partially resolves #2226
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
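The interim fix above can be sketched as a fallback loop that steps the replica count down on an "insufficient resources" error. The helper names and the stubbed creation call are assumptions for illustration, not the server's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

var errNoResources = errors.New("insufficient resources")

// createSessionStream stands in for the JetStream stream creation
// call; in this sketch the cluster can only satisfy a replica count
// of at most max (e.g. one server of three is down).
func createSessionStream(replicas, max int) error {
	if replicas > max {
		return errNoResources
	}
	return nil
}

// createWithFallback tries the computed replica count first and, on
// an "insufficient resources" error, steps it down until creation
// succeeds, mirroring the interim fix for MQTT session streams.
func createWithFallback(replicas, max int) (int, error) {
	for r := replicas; r >= 1; r-- {
		err := createSessionStream(r, max)
		if err == nil {
			return r, nil
		}
		if !errors.Is(err, errNoResources) {
			return 0, err // any other error is fatal
		}
	}
	return 0, errNoResources
}

func main() {
	// Cluster of 3 with one server down: R(3) fails, R(2) works.
	r, err := createWithFallback(3, 2)
	fmt.Println(r, err) // 2 <nil>
}
```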
If you attempt to use the server HTTP handlers, it would panic unless you explicitly called StartMonitoring.
This isn't ideal, since it forces a secondary HTTP server to run for those who are embedding NATS and only want to host the HTTP handlers on a pre-existing HTTP server.
Talked with @kozlovic via Slack about this
JetStream movement can fail, so return that error and abort start-up if there's
a failure in moving precious data, rather than serving without it.
Create the JetStream directory if needed.
Create directories for private data with mode 0750, not 0755.
This does not handle a directory layout made with 2.2.3, but does support a
2.2.2 to 2.2.4 migration. The empty directories made under 2.2.3 will still
hinder the renames we do here.
When a response was needed from a leafnode cluster back to a hub, we had rules to disallow it.
That rule was a bit dated, and since we now have cluster origin for leafnode clusters, which
is checked before the message is actually sent, we could remove the old rule.
Signed-off-by: Derek Collison <derek@nats.io>