- Change finger_prints to cert_sha256 and use hex.EncodeToString
- Add spki_sha256 for RawSubjectPublicKeyInfo with hex.EncodeToString
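For reference, a minimal sketch of the fingerprint computation (the `Raw` and `RawSubjectPublicKeyInfo` fields are from `crypto/x509`; the helper name is illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fingerprint returns the lowercase hex-encoded SHA-256 of raw DER bytes.
// For cert_sha256 pass cert.Raw; for spki_sha256 pass
// cert.RawSubjectPublicKeyInfo (both fields of *x509.Certificate).
func fingerprint(raw []byte) string {
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(fingerprint([]byte("hello")))
}
```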
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
After processing the import response, c.pa was not reset to the
appropriate state, which led to an unintended recursion
Signed-off-by: Matthias Hanel <mh@synadia.com>
Scoped signing keys allow for optional values in allow rules.
If an allow rule therefore gets removed because a tag is not present,
the removal needs to be compensated by adding in a `deny >`
Signed-off-by: Matthias Hanel <mh@synadia.com>
This fixes an issue introduced in #3080
The consumer filter subject check was skipped on recovery.
The intent was to bypass the upstream stream subjects.
But it also filtered the downstream stream subject.
This became a problem when the downstream was itself an upstream.
Then during recovery, the stream subject was not checked, which
led to delivery of filtered messages that should never have been
delivered.
Signed-off-by: Matthias Hanel <mh@synadia.com>
Since PR #3381, the 2 tests modified here would take twice as
long (around 245 seconds) to complete.
Matthias suggested using a variable instead of
a const and setting it to 0 for those two tests, since they don't
really need it to be set.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Add basic peer certificates information in /connz endpoint when
the "auth" option is provided.
Resolves #3317
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Saw this panic in code coverage run:
```
=== RUN TestJetStreamClusterPeerExclusionTag
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x88 pc=0x8acd55]
goroutine 97850 [running]:
github.com/nats-io/nats-server/v2/server.(*jetStream).monitorStream(0xc002b94780, 0xc001ecb500, 0xc003229b00, 0x0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1653 +0x495
github.com/nats-io/nats-server/v2/server.(*jetStream).processClusterCreateStream.func1()
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:2953 +0x3b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/server.go:3063 +0xa7
```
Was able to reproduce it; the cause was that `meta` was nil.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
When a request for a system service like $SYS.REQ.ACCOUNT.*.CONNZ
is imported/exported, we ensured that the requesting account is identical
to the account referenced in the subject.
In #3250 this check was extended from CONNZ to all $SYS.REQ.ACCOUNT.*.*
requests.
In general this check interferes with monitoring accounts that need
to query all other accounts, not just themselves.
The use case is that account A sends a request with account B
in the subject; the check for equal accounts prevents this.
This change removes the check to support these use cases.
Instead of the check, the default export now uses `exportAuth` with
`tokenPos` to ensure that the 4th token is the importer's account id.
This guarantees that an explicit export (done by the user) can only be
imported for the importer's own account.
This change also ensures that an explicit export is not overwritten
by the system.
This is not a problem when the export is public.
Automatic imports set the account id correctly and do not use wildcards.
To cover cases where the export is private, automatically added imports
are not subject to a token check.
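A rough sketch of what a token-position check verifies (the helper name and the 1-based `pos` convention are assumptions for illustration, not the server's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// tokenMatches reports whether the 1-based token at pos in subject
// equals want. This loosely mirrors what a tokenPos check on the
// $SYS.REQ.ACCOUNT.<account-id>.<endpoint> export would verify:
// that the 4th token is the importer's own account id.
func tokenMatches(subject string, pos int, want string) bool {
	tokens := strings.Split(subject, ".")
	if pos < 1 || pos > len(tokens) {
		return false
	}
	return tokens[pos-1] == want
}

func main() {
	fmt.Println(tokenMatches("$SYS.REQ.ACCOUNT.ABC.CONNZ", 4, "ABC"))
	fmt.Println(tokenMatches("$SYS.REQ.ACCOUNT.XYZ.CONNZ", 4, "ABC"))
}
```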
Signed-off-by: Matthias Hanel <mh@synadia.com>
We are phasing out the optimistic-only mode. Servers accepting
inbound gateway connections will switch the accounts to interest-only
mode.
Servers with an outbound gateway connection will check interest
and ignore the "optimistic" mode if it is known that the corresponding
inbound is going to switch the account to interest-only. This is
done using a boolean in the gateway INFO protocol.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is to avoid a narrow race between adding servers and them catching
up, where they also register as current.
Also wait for all peers to be caught up.
This also avoids clearing the catchup marker once a catchup has stalled.
A stalled catchup would remove the marker, causing the peer to
register as current.
Signed-off-by: Matthias Hanel <mh@synadia.com>
If an origin stream contains 1M msgs with subject foo and then
1M msgs with subject bar, and the source consumer changes its
filter from foo to bar, it would have received the messages for
subject bar.
This happens because that tail was filtered out and its
respective seqnos were not communicated to the consumer.
This is somewhat unexpected, and also coincidental:
had the last message in the stream had subject foo,
this wouldn't have happened.
Therefore, when completely changing the subject, say
from foo to bar, we only receive messages received
after the time the change was made.
However, if the old and new subjects overlap in any way,
we go by sequence number, meaning in these cases the
outlined behavior remains, in order to not induce artificial
message loss for the part of the subject space that is
covered by both the old and the new filter.
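A simplified overlap check under the usual NATS wildcard rules (`*` matches one token, `>` matches the rest of the subject); this is a sketch, not the server's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// subjectsOverlap reports whether two subject filters can match a
// common subject. Literal tokens must be equal, '*' matches any one
// token, and '>' matches everything that follows.
func subjectsOverlap(a, b string) bool {
	at := strings.Split(a, ".")
	bt := strings.Split(b, ".")
	for i := 0; ; i++ {
		if i == len(at) || i == len(bt) {
			// Overlap only if both filters end at the same depth.
			return i == len(at) && i == len(bt)
		}
		if at[i] == ">" || bt[i] == ">" {
			return true
		}
		if at[i] != bt[i] && at[i] != "*" && bt[i] != "*" {
			return false
		}
	}
}

func main() {
	fmt.Println(subjectsOverlap("foo", "bar"))     // disjoint: restart by time
	fmt.Println(subjectsOverlap("foo.*", "foo.bar")) // overlap: go by seqno
}
```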
Signed-off-by: Matthias Hanel <mh@synadia.com>
The monitoring HTTP server is started early, and the gateway setup
(when configured) may not be fully ready when the `/gatewayz`
endpoint is inspected, which could cause a panic.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Ordering of templates got messed up by a map (now removed).
Also improved the error message when template generation fails.
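The underlying pitfall: Go deliberately randomizes map iteration order, so any ordering built by ranging over a map directly can differ between runs. A minimal sketch of the usual fix, sorting the keys explicitly:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns a map's keys in sorted order, giving a stable
// iteration order that a bare range over the map does not provide.
func sortedKeys(m map[string]string) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	templates := map[string]string{"b": "...", "a": "...", "c": "..."}
	fmt.Println(sortedKeys(templates)) // [a b c]
}
```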
Signed-off-by: Matthias Hanel <mh@synadia.com>
Test TestNoRaceJetStreamClusterCorruptWAL() would start to flap
because of the snapshot on cluster shutdown. Disable the snapshot
on exit for this test by stopping the raft node before shutdown.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
- didRemove in applyMetaEntries() could be reset when processing
multiple entries
- change "no race" test names to include JetStream
- separate raft node leader stepdown and stop in the server
shutdown process
- in InstallSnapshot, call wal.Compact() with lastIndex+1
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is a regression introduced in v2.8.3. If a message reaches
the max redeliver count, it stops being delivered to the consumer.
However, after a server or cluster restart, those messages would
be redelivered again.
Resolves #3361
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>