By default the S2 library uses a concurrency level of
`GOMAXPROCS`, which forces the library to run extra goroutines to manage
asynchronous flushes.
As we only ever have one goroutine (the client writer) using a given S2
writer, reducing the concurrency to 1 cuts that overhead,
slightly reduces allocations and slightly improves throughput.
Signed-off-by: Neil Twigg <neil@nats.io>
Mimic same behavior for normal subs.
Note that when a queue subscription is behind both a spoke leafnode
connection and a service import the message will be delivered over the
leafnode, since service imports are binary signals that are simply on or
off. A more thorough investigation is needed for a proper fix. For now
it's best not to have the service import on the spoke leafnode, so that
the raw queue sub's information is relayed across the leafnode.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4367
If a stream has consumer limits set, creating a consumer with defaults
will fail in most cases.
The test didn't catch this, as by default the old JS client sets the ack
policy to `none`. If the policy is different, it will fail to create a
consumer with defaults. We agreed that the default ack policy should be
`explicit`.
Signed-off-by: Tomasz Pietrek <tomasz@nats.io>
Removes the single subject transform destination field; any subject
transformation in StreamSources must now be done using the
SubjectTransforms array instead.
This fixes an issue where specifying a single subject filter in
`SubjectFilters` or `SubjectTransforms`, instead of using
`SubjectFilter`, would result in the old consumer create subject being
incorrectly used.
Signed-off-by: Neil Twigg <neil@nats.io>
The server consumer creation code is picky and does not accept a request sent to the ExT subject if that request specifies the subject filter in the array (even if there is only one entry in the array).
Signed-off-by: Jean-Noël Moyne <jnmoyne@gmail.com>
Signed-off-by: Neil Twigg <neil@nats.io>
Co-authored-by: Jean-Noël Moyne <jnmoyne@gmail.com>
Co-authored-by: Neil Twigg <neil@nats.io>
If a leafnode remote configuration does not have a tls{} block but
connects to a hub that requires TLS, the handshake between the two
servers will fail. A simple workaround is to add an empty tls{} block
to the remote configuration.
This issue was introduced in v2.10.0 due to some refactoring in order to
support compression.
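The workaround looks roughly like this in the remote's configuration file (the URL is a placeholder):

```
leafnodes {
  remotes = [
    {
      url: "tls://hub.example.com:7422"
      # Empty tls block: no certificates or CA are given here,
      # but its presence makes the remote initiate the TLS handshake.
      tls {}
    }
  ]
}
```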
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is a safer (fewer lines of code touched) alternative to #4557 for
now, which simply ignores the `subject_transform_dest` field in the API
and the stream assignments. We'll still look to merge the other PR to
clean up, but will do so post-release when we have more time to test it.
Signed-off-by: Neil Twigg <neil@nats.io>
`addStreamWithAssignment` did not hold the JS lock at the point of
calling `setStreamAssignment`, which could result in a data race
accessing the Raft group peers from the stream assignment on line 774.
Signed-off-by: Neil Twigg <neil@nats.io>