Only discard MQTT QoS 0 messages from internal JetStream clients when the
message is actually a QoS 1 JetStream publish, not merely because it came
from a JetStream client.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4291
When a queue subscriber was updated multiple times over a leafnode
connection, we added more shadow subscriptions, which could become
zombies when the connection went away.
In a case where a leafnode server had multiple queue subscribers on the
same queue group, the hub server would add in multiple shadow subs.
These subs would not be properly cleaned up and could lead to stale
connections being associated with them.
Signed-off-by: Derek Collison <derek@nats.io>
These tests are to help verify that routes aren't leaking when they go
down for write deadlines or auth failures.
Signed-off-by: Neil Twigg <neil@nats.io>
If we're on a branch other than main or dev, then when building
"nightly", build it with the name of the branch instead. Overridable
via an env var.
It's a bit ugly because of limitations of goreleaser templating: we
can't hoist the expression out to be defined once, but it works.
Can build a custom Docker image with:
goreleaser release --snapshot -p 2 -f .goreleaser-nightly.yml --clean
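As a rough sketch, the kind of inline expression goreleaser templating forces here looks something like the following (the placement under `snapshot.name_template` is illustrative only, not the actual contents of `.goreleaser-nightly.yml`, and the expression has to be repeated verbatim wherever the tag is needed):

```yaml
snapshot:
  name_template: '{{ if and (ne .Branch "main") (ne .Branch "dev") }}{{ .Branch }}{{ else }}nightly{{ end }}'
```

`.Branch` is a built-in goreleaser template field; the env var override mentioned above would be consulted in the same expression.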
When creating a consumer on a stream with a very large number of msg
blks, calculating numPending could be slow.
This aims to optimize a bit; more work remains for streams with a very
large (> 200k) number of msg blks.
Signed-off-by: Derek Collison <derek@nats.io>
For NATS Server on Windows, provide option for TLS certificate and
handshake signature to be provided by the Windows Certificate Store
instead of PEM files.
- [ ] Link to issue, e.g. `Resolves #NNN`
- [ ] Documentation added (if applicable)
- [x] Tests added
- [x] Branch rebased on top of current main (`git pull --rebase origin
main`)
- [x] Changes squashed to a single commit (described
[here](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
- [x] Build is green in Travis CI
- [x] You have certified that the contribution is your original work and
that you license the work to the project under the [Apache 2
license](https://github.com/nats-io/nats-server/blob/main/LICENSE)
### Changes proposed in this pull request:
- New benchmark for NATS JetStream Object Store
This fixes #4252 by ensuring that `tls_available`, `tls_required`,
`host` and `port` are populated based on the WebSocket listener rather
than the standard listeners.
Signed-off-by: Neil Twigg <neil@nats.io>
This unit test is modelled around issue #4247 and proves that the
`MaxMsgs` and `MaxMsgsPer` limits are correctly enforced together with
`DiscardNew` and `DiscardNewPer`.
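For reference, the limits under test combine in a stream configuration along these lines (field names follow the server's JetStream JSON API; the stream name and values are illustrative):

```json
{
  "name": "LIMITS",
  "subjects": ["limits.>"],
  "max_msgs": 100,
  "max_msgs_per_subject": 10,
  "discard": "new",
  "discard_new_per_subject": true
}
```

With `discard: new` alone, the per-subject limit still removes the oldest messages on that subject; setting `discard_new_per_subject` makes a publish to a full subject be rejected instead.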
Signed-off-by: Neil Twigg <neil@nats.io>
This test has multiple leafnode connections to different accounts and to
a shared account to make sure behavior is correct.
Signed-off-by: Derek Collison <derek@nats.io>
When creating replicated mirrors where the source stream had a very
large starting sequence number, the server would use excessive CPU and
memory.
This is due to the mirroring functionality trying to skip messages when
it detects a gap. In a replicated stream this puts excessive stress on
the raft system.
This step is not needed at all if the mirror stream has no messages, we
can simply jump ahead.
Signed-off-by: Derek Collison <derek@nats.io>