Mimic same behavior for normal subs.
Note that when a queue subscription is behind both a spoke leafnode
connection and a service import, the message will be delivered over the
leafnode, since service imports are binary signals that are simply on. A
more thorough investigation is needed for a proper fix. For now it is
best not to have the service import on the spoke leafnode, so that the
raw queue sub's information is relayed across the leafnode.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4367
If a stream has consumer limits set, creating a consumer with defaults
will fail in most cases.
The test didn't catch this because, by default, the old JS client sets
the ack policy to `none`. If the policy is different, it will fail to
create a consumer with defaults. We agreed that the default Ack Policy
should be `explicit`.
Signed-off-by: Tomasz Pietrek <tomasz@nats.io>
The test didn't catch this error because, by default, the old JS client
sets the ack policy to none. If the policy is different, it will fail
to create a consumer with defaults.
Signed-off-by: Tomasz Pietrek <tomasz@nats.io>
Removes the single subject transform destination field; any subject
transformation in StreamSources must now be done using the
SubjectTransforms array instead.
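As an illustrative sketch only (the stream and subject names below are hypothetical, and the field names are assumed to follow the v2.10 stream source API), a source's transforms would now be expressed as an array rather than a single destination field:

```json
{
  "name": "EVENTS",
  "sources": [
    {
      "name": "ORIGIN",
      "subject_transforms": [
        { "src": "events.>", "dest": "imported.events.>" }
      ]
    }
  ]
}
```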
This fixes an issue where specifying a single subject filter, i.e. in
`SubjectFilters` or `SubjectTransforms`, instead of using
`SubjectFilter` would result in the old consumer create subject being
incorrectly used.
Signed-off-by: Neil Twigg <neil@nats.io>
The server consumer creation code is picky: it does not accept a request sent to the ExT subject if that request specifies the subject filter in the array (even if the array contains only one entry).
Signed-off-by: Jean-Noël Moyne <jnmoyne@gmail.com>
Signed-off-by: Neil Twigg <neil@nats.io>
Co-authored-by: Jean-Noël Moyne <jnmoyne@gmail.com>
Co-authored-by: Neil Twigg <neil@nats.io>
If a leafnode remote configuration does not have a tls{} block but
connects to a hub that requires TLS, the handshake between the two
servers will fail. A simple workaround is to add an empty tls{} block
to the remote configuration.
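A minimal sketch of the workaround, assuming a hypothetical hub URL:

```
leafnodes {
  remotes [
    {
      url: "tls://hub.example.com:7422"
      # Empty tls{} block so the TLS handshake with the hub succeeds.
      tls {}
    }
  ]
}
```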
This issue was introduced in v2.10.0 due to some refactoring in order to
support compression.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is a safer (fewer lines of code touched) alternative to #4557 for
now, which simply ignores the `subject_transform_dest` field in the API
and the stream assignments. We'll still look to merge the other PR to
clean up but will do so post-release when we have more time to test it.
Signed-off-by: Neil Twigg <neil@nats.io>
`addStreamWithAssignment` did not hold the JS lock at the point of
calling `setStreamAssignment`, which could result in a data race
accessing the Raft group peers from the stream assignment on line 774.
Signed-off-by: Neil Twigg <neil@nats.io>
Sometimes when scaling down a stream, a Raft node could continue
forwarding proposals after already being closed; in the debug logs this
can be confirmed by many entries logging 'Direct proposal ignored, not
leader (state: CLOSED)'.
### Changes proposed in this pull request:
NATS Server 2.9 has a `logfile_size_limit` option which allows the
operator to set an optional byte limit on the NATS Server log file
which, when met, causes a "rotation": the current log file is renamed
(the original file name appended with a timestamp to nanosecond
accuracy) and a new log file is instantiated.
This PR adds a new `logfile_max_num` companion option (alias
`log_max_num`) which allows the operator to designate that the server
should prune the **total number of log files** -- the currently active
log file plus backups -- to the maximum setting.
A max value of `0` (the implicit default) or a negative number means
unlimited log files (no maximum), as this is an opt-in feature.
A max value of `1` is effectively a truncate-only logging pattern as any
backup made at rotation will subsequently be purged.
A max value of `2` will maintain the active log file plus the latest
backup. And so on...
> The currently active log file is never purged. Only backups are
purged.
When enabled, backup log deletion is evaluated inline after each
successful rotation event. To be considered for log deletion, backup log
files MUST adhere to the file naming format used in log rotation as well
as agree with the current `logfile` name and location. Any other files
or sub-directories in the log directory will be ignored. E.g. if an
operator makes a manual copy of the log file to `logfile.bak` that file
will not be evaluated as a backup.
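Putting the options together, a sketch of a logging configuration (the path and values here are illustrative, not recommendations):

```
logfile: "/var/log/nats/nats-server.log"
logfile_size_limit: 10485760  # rotate at ~10 MiB
logfile_max_num: 3            # active log file plus at most 2 backups
```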
### Typical use case:
This feature is useful in a constrained hosting environment for NATS
Server, for example an embedded, edge-compute, or IoT device scenario,
in which _more featureful_ platform or operating system log management
features do not exist or the complexity is not required.
We fixed a few bugs in tombstone handling and formalized support for
holes in the underlying buffers. Based on customer data from the field,
we now also use holes during compaction.
Signed-off-by: Derek Collison <derek@nats.io>