This test can take slightly longer in Travis when running close to the
deadline, so bump the timeout for it:
```
=== RUN TestAccountReloadServiceImportPanic
--- PASS: TestAccountReloadServiceImportPanic (10.60s)
=== RUN TestAccountReloadServiceImportPanic
accounts_test.go:3621: Have not received all responses, want 187876 got 182649
--- FAIL: TestAccountReloadServiceImportPanic (14.09s)
```
Track deleted messages with a single avl.SeqSet dmap for now, vs the old
method, for the memory store.
For fileStore, we were trying to be too smart about saving space at the
expense of encoding time, so revert back to the simple version that is
~100x faster.
The encoding may be a bit bigger than we wanted, but we prefer speed
over size.
Signed-off-by: Derek Collison <derek@nats.io>
Add distinction between create and update to consumer API
Since the server exposes only one API for consumer management, covering
both create and update, clients that want to guard users against
overriding an existing consumer with a create operation, or accidentally
creating one with an update, have to rely on calling `Info` first.
That adds latency, traffic, and load on the server, and is still racy,
as state on the server can change between the info and create calls.
This PR adds `Action` to CreateConsumerRequest, a non-breaking change
that allows clients to present their intent without splitting the
Consumer API into create and update.
This is not a perfect solution, but unlike such a split it is
non-breaking and does not require a new API version.
TODO:
- [x] Add concrete error types to errors.json and use them
- [ ] Add ADR (after LGTM)
Signed-off-by: Tomasz Pietrek <tomasz@nats.io>
If the failed state of clfs drifts between leaders and followers,
replicas could incorrectly discard or skip messages. This will force a
sync if we have a non-zero clfs state when a leader takes over.
Signed-off-by: Derek Collison <derek@nats.io>
This PR backports the OCSP Peer feature option (as in 2.10 train) and
includes two fixes for the existing OCSP Staple feature.
OCSP Staple:
1. Fixed and clarified how NATS Server determines its own issuer CA when
obtaining and validating an OCSP Response for a subsequent staple
2. Eliminated the problematic assumption that all node peers are issued
by the same CA when NATS Server validates ROUTE and GATEWAY peer nodes
3. Added OCSP Response effectivity checks on ROUTE and GATEWAY
peer-presented staples
Note for item 3: the allowed host clock skew between node peers is set
at 30 seconds. If the OCSP Response contains an empty assertion for
NextUpdate, NATS Server will default to a 1-hour validity window (after
ThisUpdate). It is recommended that the CA's OCSP responder assert
NextUpdate.
Some imports from the system account were going missing on reload; this
adds them back after a reload:
```
$SYS.REQ.SERVER.PING.CONNZ
$SYS.REQ.ACCOUNT.PING.STATZ
$SYS.REQ.ACCOUNT.PING.CONNZ
```
This makes configuration files that are empty, or that are read and
processed by the parser without any detected values, return an error.
Fixes #4343
Backport from dev branch
(https://github.com/nats-io/nats-server/pull/4347)
Signed-off-by: Waldemar Quevedo <wally@nats.io>
Do not hold onto no-interest subjects from a client in the unlocked cache.
If a client sends many different subjects, all with no interest, performance could be affected.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4341
Three issues were found and resolved.
1. Some purge replays after recovery could execute a full purge.
2. A callback was registered without the lock held, which could lead to skew.
3. A cluster reset could stop the stream store and recreate it, which
could lead to double accounting.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves problems of [issue
#3773](https://github.com/nats-io/nats-server/issues/3773).
With this fix, NATS Server will locally determine its own certificate's
issuer from either the configured server certificate (a bundle of the
leaf cert plus optional intermediate CA certs) or from the configured
server CA trust store, as follows:
1. The operator may provide the server's certificate issuer in the
second position of the server's certificate configuration (typically
`cert_file`, but may be `cert_store` on the Windows platform). If a
candidate issuer is found here, it is PKI-validated as the actual issuer
of the server's cert; otherwise it is a hard error.
2. If not found in [1], NATS Server will seek to create at least one
verified chain with its configured trust store (typically `ca_file`, but
could be the system trust store if not configured). It will derive the
issuer from the first verified chain. If no verified chain can be
formed, it is a hard error.