If we have a wider subject (meaning more tokens) but our subscriptions end with a
wildcard token, make sure that matches properly.
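A minimal sketch of the token-matching rule this fixes, assuming illustrative names (the server's real matching goes through its sublist, not a helper like this): a subscription ending in `>` should match any subject with at least that many tokens.

```go
package main

import (
	"fmt"
	"strings"
)

// subjectMatches reports whether a subject matches a subscription
// pattern, where "*" matches exactly one token and a trailing ">"
// matches one or more remaining tokens. Illustrative only.
func subjectMatches(subject, pattern string) bool {
	st := strings.Split(subject, ".")
	pt := strings.Split(pattern, ".")
	for i, p := range pt {
		if p == ">" {
			// ">" must match at least one remaining token.
			return i < len(st)
		}
		if i >= len(st) {
			return false
		}
		if p != "*" && p != st[i] {
			return false
		}
	}
	return len(st) == len(pt)
}

func main() {
	// A wider subject (more tokens) still matches a sub ending in ">".
	fmt.Println(subjectMatches("foo.bar.baz.qux", "foo.bar.>")) // true
	fmt.Println(subjectMatches("foo.bar", "foo.bar.>"))         // false
}
```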
Signed-off-by: Derek Collison <derek@nats.io>
- [x] Link to issue, e.g. `Resolves #NNN`
- [ ] Documentation added (if applicable)
- [ ] Tests added
- [x] Branch rebased on top of current main (`git pull --rebase origin
main`)
- [x] Changes squashed to a single commit (described
[here](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
- [x] Build is green in Travis CI
- [x] You have certified that the contribution is your original work and
that you license the work to the project under the [Apache 2
license](https://github.com/nats-io/nats-server/blob/main/LICENSE)
Resolves #3782
### Changes proposed in this pull request:
- Add a `hard_delete` option in the resolver config and use it to set the
`deleteType` in `NewDirAccResolver`
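A sketch of how the option might look in the server configuration. The surrounding fields follow the existing full directory resolver config; treat the exact placement and interaction with `allow_delete` as an assumption of this sketch, not a confirmed spec:

```
resolver {
  type: full
  dir: './jwt'
  # Deletion must be enabled for hard_delete to have any effect.
  allow_delete: true
  # New in this PR: permanently remove deleted JWT files
  # instead of renaming them.
  hard_delete: true
}
```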
/cc @nats-io/core
Alternative fix for
https://github.com/nats-io/nats-server/issues/4014: always send
pings for ROUTER, GATEWAY, and LEAF spoke connections.
This is my own original work that I license to the project
- [x] Link to issue, e.g. `Resolves #NNN`
- [ ] Documentation added (if applicable)
- [ ] Tests added
- [x] Branch rebased on top of current main (`git pull --rebase origin
main`)
- [x] Changes squashed to a single commit (described
[here](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
- [x] Build is green in Travis CI
- [x] You have certified that the contribution is your original work and
that you license the work to the project under the [Apache 2
license](https://github.com/nats-io/nats-server/blob/main/LICENSE)
Resolves #
### Changes proposed in this pull request:
- In `processPingTimer`, add an explicit check for ROUTER connections, and only
hold off sending a ping based on a received ping
This PR adds internal support for compression of message blocks in the
file store. Message blocks will be compressed once they reach their
maximum configured block size and we have moved on to the next block, so
as to avoid slowing down ongoing writes as far as possible.
Compressed blocks are prefixed with a small preamble of metadata
describing the compression algorithm (although at this point only S2
compression is supported), and compression takes place before encryption
(so as to maximise the efficiency of the compression where a stream
contains redundant or repeated information in the messages).
This PR **does not** provide any way to configure compression from the
stream level yet (or indeed any other level exposed to clients or
operators). It also does not go out of its way to compress old
uncompressed blocks after turning on compression unless they are
compacted or truncated. These omissions will be addressed in follow-up
PRs.
New permutations have been added to the file store tests to also ensure
test coverage when compression is enabled.
We should note that, when file store encryption is not in use,
storage-level or filesystem-level deduplication or compression (such as
those provided by some disk arrays or filesystems like ZFS) may end up
being considerably more space-efficient than file store compression
would. Conversely, when file store encryption is enabled, enabling
compression at the file store level may achieve better ratios.
Signed-off-by: Neil Twigg <neil@nats.io>
- [X] Link to issue, e.g. `Resolves #NNN`
- [ ] Documentation added (if applicable)
- [ ] Tests added
- [X] Branch rebased on top of current main (`git pull --rebase origin
main`)
- [X] Changes squashed to a single commit (described
[here](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
- [ ] Build is green in Travis CI
- [X] You have certified that the contribution is your original work and
that you license the work to the project under the [Apache 2
license](https://github.com/nats-io/nats-server/blob/main/LICENSE)
Resolves #4000
### Changes proposed in this pull request:
Removes the check that the republish filter must have an overlap
with the listened subjects: with the new stream subject
transformation changes, we no longer assume that the subjects in a
stream must match the subjects being listened to.
This allows the use of republish for mirrored and/or sourced streams.
To increase system stability in large NATS systems, we do not want
inbound system messages that are destined to be processed by the
server itself to be processed inline on the inbound connection.
Signed-off-by: Derek Collison <derek@nats.io>
This may fix a race condition in `sysRequest` where multiple inbox
responses could try to mutate the same input object, so instead we'll
let it instantiate its own, although it isn't clear to me yet why that
would happen in the first place.
Signed-off-by: Neil Twigg <neil@nats.io>
These entries should not be re-processed on restart. This was causing
some failures in consumer tests that involved restarts.
Signed-off-by: Derek Collison <derek@nats.io>
New configuration fields:
```
cluster {
  ...
  pool_size: 5
  accounts: ["A", "B"]
}
```
The `pool_size` setting in the example above means that this
server will create 5 routes to a remote server, assuming that
server has the same `pool_size` setting.
Accounts that are not listed in the `accounts` configuration
are assigned a specific route in this pool, and this will be the
same route on all servers in the cluster.
Accounts that are listed in the `accounts` field will each have
a dedicated route connection. This allows the account name to be
suppressed in some of the route protocols, reducing the bytes
transmitted, which may improve performance.
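For pooled accounts, one way every server can independently agree on the same route is to derive the index deterministically from the account name. A minimal sketch, assuming an FNV hash; the server's actual assignment function may differ:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// routeIndex pins an account to a route in the pool by hashing the
// account name and reducing modulo the pool size. Because the hash is
// deterministic, every server computes the same index for the same
// account, so that account's traffic uses the same pooled route
// cluster-wide.
func routeIndex(account string, poolSize int) int {
	h := fnv.New32a()
	h.Write([]byte(account))
	return int(h.Sum32() % uint32(poolSize))
}

func main() {
	// Every server computing this gets the same index for "FOO".
	fmt.Println(routeIndex("FOO", 5) == routeIndex("FOO", 5)) // true
}
```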
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Two changes here.
1. Revert a change to the handling of leadership transfer entries
that caused instability in some tests and a bad upgrade experience.
2. Make sure that, if we detect a stream or a consumer is already
running its monitoring routine, we do not stop the underlying Raft
group, since it is still in use by the monitor routine and stopping
it would stall that asset.
Signed-off-by: Derek Collison <derek@nats.io>