This is an initial optimization; more work remains for streams with a very large (> 200k) number of message blocks.
Signed-off-by: Derek Collison <derek@nats.io>
For NATS Server on Windows, provide option for TLS certificate and
handshake signature to be provided by the Windows Certificate Store
instead of PEM files.
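As an illustration, selecting a certificate from the store might look like the following in the server configuration. The option names shown here (`cert_store`, `cert_match_by`, `cert_match`) are assumptions for the sketch; check the released documentation for the final syntax.

```
# Hedged example; option names are assumptions, not confirmed syntax.
tls {
  # Pull the certificate from the Windows store instead of PEM files.
  cert_store:    "WindowsLocalMachine"
  cert_match_by: "Subject"
  cert_match:    "my-nats-server"
}
```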
- [ ] Link to issue, e.g. `Resolves #NNN`
- [ ] Documentation added (if applicable)
- [x] Tests added
- [x] Branch rebased on top of current main (`git pull --rebase origin
main`)
- [x] Changes squashed to a single commit (described
[here](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
- [x] Build is green in Travis CI
- [x] You have certified that the contribution is your original work and
that you license the work to the project under the [Apache 2
license](https://github.com/nats-io/nats-server/blob/main/LICENSE)
### Changes proposed in this pull request:
- New benchmark for NATS JetStream Object Store
This fixes #4252 by ensuring that `tls_available`, `tls_required`,
`host` and `port` are populated based on the WebSocket listener rather
than the standard listeners.
Signed-off-by: Neil Twigg <neil@nats.io>
This unit test is modelled around issue #4247 and proves that the
`MaxMsgs` and `MaxMsgsPer` limits are correctly enforced together with
`DiscardNew` and `DiscardNewPer`.
Signed-off-by: Neil Twigg <neil@nats.io>
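The interplay the test exercises can be modeled roughly as follows. This is an illustrative sketch, not the server's actual enforcement code: with `DiscardNew`, a new message is rejected when the stream-wide limit is reached, and with `DiscardNewPer` additionally when the per-subject limit is reached.

```go
package main

import "fmt"

// acceptNew models (illustratively) whether a new message is accepted
// under DiscardNew semantics given stream-wide and per-subject limits.
func acceptNew(total, maxMsgs, perSubject, maxMsgsPer int64, discardNewPer bool) bool {
	if maxMsgs > 0 && total >= maxMsgs {
		return false // stream-wide limit hit: discard the new message
	}
	if discardNewPer && maxMsgsPer > 0 && perSubject >= maxMsgsPer {
		return false // per-subject limit hit: discard the new message
	}
	return true
}

func main() {
	fmt.Println(acceptNew(10, 10, 1, 5, true))  // stream full -> false
	fmt.Println(acceptNew(3, 10, 5, 5, true))   // subject full -> false
	fmt.Println(acceptNew(3, 10, 5, 5, false))  // per-subject discard off -> true
}
```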
This test has multiple leafnode connections to different accounts and to
a shared account to make sure behavior is correct.
Signed-off-by: Derek Collison <derek@nats.io>
When creating replicated mirrors where the source stream had a very
large starting sequence number, the server would use excessive CPU and
memory.
This is due to the mirroring functionality trying to skip messages when
it detects a gap. In a replicated stream this puts excessive stress on
the raft system.
This step is not needed at all if the mirror stream has no messages; we
can simply jump ahead.
Signed-off-by: Derek Collison <derek@nats.io>
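The skip-ahead idea can be sketched like this. Names are illustrative, not the server's internals: when the mirror is empty, its sequence floor moves straight to the source's starting sequence in one step instead of proposing one skip record per missing message through raft.

```go
package main

import "fmt"

// mirrorFloor returns the sequence floor to use for a mirror. An empty
// mirror jumps directly to the source's starting sequence, avoiding a
// per-message skip loop that would flood the raft layer.
func mirrorFloor(mirrorMsgs, mirrorLastSeq, sourceStartSeq uint64) uint64 {
	if mirrorMsgs == 0 && sourceStartSeq > mirrorLastSeq {
		// Empty mirror: jump ahead in one step.
		return sourceStartSeq
	}
	// Otherwise keep the normal gap-skipping path.
	return mirrorLastSeq
}

func main() {
	fmt.Println(mirrorFloor(0, 0, 5000000)) // 5000000: one jump, no per-message skips
	fmt.Println(mirrorFloor(10, 42, 100))   // 42: mirror has data, normal path
}
```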
If we know we are in standalone mode, only send out statsz updates if we
know we have external interest.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves: #4234
If we know we are in standalone mode, we do not need to run the
JetStream account resource updates.
Signed-off-by: Derek Collison <derek@nats.io>
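Both standalone-mode changes above reduce to simple gating conditions. The function names below are illustrative, not the server's actual API: statsz updates are only worth sending when something external is listening, and the JetStream account-resource update loop can be skipped entirely.

```go
package main

import "fmt"

// sendStatsz reports whether a statsz update should go out: always when
// clustered, but only with external interest when standalone.
func sendStatsz(standalone, externalInterest bool) bool {
	return !standalone || externalInterest
}

// runJSAccountUpdates reports whether the JetStream account resource
// update loop needs to run at all.
func runJSAccountUpdates(standalone bool) bool {
	return !standalone
}

func main() {
	fmt.Println(sendStatsz(true, false)) // false: standalone, nobody listening
	fmt.Println(sendStatsz(true, true))  // true: external interest present
	fmt.Println(runJSAccountUpdates(true))
}
```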
Resolves #4227 (partial)
When messages were very small and the key space was very large, the
performance of last-message lookups in the store layer (both file and
memory) would degrade.
If the subject is literal we can optimize and avoid sequence scans that
are needed when multiple subject states need to be considered.
Signed-off-by: Derek Collison <derek@nats.io>
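The literal-subject fast path can be sketched as follows. Function names are illustrative: a subject with no wildcards can be answered from a per-subject index directly, while wildcard subjects still need the slower scan across multiple subject states.

```go
package main

import (
	"fmt"
	"strings"
)

// subjectIsLiteral reports whether a subject contains no NATS wildcards.
func subjectIsLiteral(subject string) bool {
	return !strings.ContainsAny(subject, "*>")
}

// lastSeqFor returns the last sequence for a subject: a direct index
// lookup for literal subjects, a scan callback for wildcard ones.
func lastSeqFor(subject string, index map[string]uint64, scan func(string) uint64) uint64 {
	if subjectIsLiteral(subject) {
		return index[subject] // O(1) lookup, no sequence scan
	}
	return scan(subject) // wildcard: must consider multiple subject states
}

func main() {
	idx := map[string]uint64{"orders.1": 7, "orders.2": 9}
	scan := func(string) uint64 { return 9 } // stand-in for the slow path
	fmt.Println(lastSeqFor("orders.1", idx, scan)) // 7 via the fast path
	fmt.Println(lastSeqFor("orders.>", idx, scan)) // 9 via the scan
}
```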
Resolves #4221
Fix to properly distribute queue requests over multiple leafnode
connections.
When a leafnode server joins two accounts in a supercluster, we want to
make sure that each connection properly takes into account the weighted
number of subscribers in each account.
Signed-off-by: Derek Collison <derek@nats.io>
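Weighting by subscriber count can be illustrated with a simple weighted pick. This is not the server's routing code, just a sketch of the principle: a connection with more queue subscribers behind it should receive proportionally more of the requests.

```go
package main

import "fmt"

// pickWeighted selects an index in proportion to its weight, given a
// random value r. A connection carrying 3 of 4 total subscribers wins
// 3/4 of the picks.
func pickWeighted(weights []int, r int) int {
	total := 0
	for _, w := range weights {
		total += w
	}
	r %= total
	for i, w := range weights {
		if r < w {
			return i
		}
		r -= w
	}
	return len(weights) - 1
}

func main() {
	// Two connections with 1 and 3 subscribers: values 1..3 of 0..3 land
	// on the second connection, giving it 3/4 of the traffic.
	fmt.Println(pickWeighted([]int{1, 3}, 0)) // 0
	fmt.Println(pickWeighted([]int{1, 3}, 2)) // 1
}
```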
For what these tests are trying to prove, an excessively large number of
messages does not appear to be required. Instead, let's drop the count a
little in the hope that they run a bit faster.
Signed-off-by: Neil Twigg <neil@nats.io>
Bail early if this is a new consumer, meaning the stream sequence floor is 0.
Decide which linear space to scan.
Do no work if there is nothing pending; we just need to adjust, which we
do at the end.
Also realized some tests were named wrong and were not being run, or
were in the wrong file.
Signed-off-by: Derek Collison <derek@nats.io>
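One possible reading of the early-exit steps above, as a sketch (names and the exact scan-selection rule are illustrative, not the server's code):

```go
package main

import "fmt"

// pendingScanRange returns the sequence range to scan when computing
// pending counts, or skip=true when no scan work is needed and only a
// final adjustment applies.
func pendingScanRange(floor, first, last uint64) (start, end uint64, skip bool) {
	if floor == 0 {
		// New consumer: nothing delivered yet, scan the whole stream.
		return first, last, false
	}
	if floor >= last {
		// Nothing pending: do no scan work, just adjust at the end.
		return 0, 0, true
	}
	start = floor + 1
	if start < first {
		start = first
	}
	return start, last, false
}

func main() {
	fmt.Println(pendingScanRange(0, 100, 200))   // 100 200 false
	fmt.Println(pendingScanRange(200, 100, 200)) // 0 0 true
	fmt.Println(pendingScanRange(150, 100, 200)) // 151 200 false
}
```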
Currently `UpdateKnownPeers` doesn't send a peer state when a single
peer add operation takes place, but this can potentially race when there
are many changes to the replica count happening in rapid succession.
Sending the peer state in all cases appears to fix this issue and, so
far in my testing, fixes the failground stream update replicas test.
Signed-off-by: Neil Twigg <neil@nats.io>
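The shape of the fix can be sketched as follows. Types and names here are illustrative, not the server's raft internals: the peer state is sent unconditionally, even for a single-peer add, so followers stay consistent under rapid membership changes.

```go
package main

import "fmt"

// peerState is an illustrative stand-in for the raft peer-state message.
type peerState struct{ peers []string }

// updateKnownPeers replaces the known peer set and always sends the
// resulting peer state, even when only a single peer was added.
func updateKnownPeers(incoming []string, send func(peerState)) []string {
	known := append([]string(nil), incoming...)
	// Send the full peer state unconditionally, even for a single add.
	send(peerState{peers: known})
	return known
}

func main() {
	var sent int
	send := func(peerState) { sent++ }
	peers := updateKnownPeers([]string{"a"}, send)      // single add still sends
	peers = updateKnownPeers([]string{"a", "b"}, send)
	fmt.Println(sent, len(peers)) // 2 2
}
```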