This is a continuation of PR #3060, but extends to clustering.
Verified with a manual test that a mirror created with v2.7.4 has
the duplicates window set and that, on restart with main, the server would still
complain about use of dedupe in cluster mode. The mirror stream
was recovered but showed as R1.
With this fix, a restart of the cluster - with existing data -
will properly recover the stream as an R3, and messages that
were published while in the bad state are synchronized.
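For context, a minimal nats.go sketch of the kind of setup exercised here; the stream names, subject, replica count, and duplicates window are illustrative, not taken from the actual test.

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Origin stream with a duplicates window.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:       "ORIGIN",
		Subjects:   []string{"orders.*"},
		Replicas:   3,
		Duplicates: 2 * time.Minute,
	}); err != nil {
		log.Fatal(err)
	}

	// R3 mirror: no duplicates window is set here; servers that assigned one
	// by default produced the invalid config this fix recovers from.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "MIRROR",
		Replicas: 3,
		Mirror:   &nats.StreamSource{Name: "ORIGIN"},
	}); err != nil {
		log.Fatal(err)
	}
}
```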
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Signed-off-by: Matthias Hanel <mh@synadia.com>
The main issue was that in mixed-mode, the interest through the gateway
may still be in optimistic mode, which means that when creating the source
consumer, delivery could start before we had a chance to set up
the subscription to receive those messages.
The approach is to create the subscription prior to sending
the consumer create request. Also refactored the code a bit in
the hope of making the retries a bit more bulletproof.
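The same ordering concern can be illustrated from the client side with nats.go: create the subscription on the deliver subject first, then send the consumer create request. This is only a sketch of the ordering, not the server's internal code; the stream name, durable, and deliver subject are illustrative.

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe to the deliver subject first so nothing the consumer
	// delivers can be missed...
	deliver := "deliver.downstream"
	sub, err := nc.SubscribeSync(deliver)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// ...then ask the server to create the push consumer that delivers to it.
	if _, err := js.AddConsumer("UPSTREAM", &nats.ConsumerConfig{
		Durable:        "source-ingest",
		DeliverSubject: deliver,
		AckPolicy:      nats.AckExplicitPolicy,
	}); err != nil {
		log.Fatal(err)
	}
}
```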
We may also look at making sure that gateways are switched to
interest-mode when detecting a mixed-mode setup.
Also fixed a defect that could cause a source to be canceled
when updating a stream.
Resolves #2801
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
* [fix] On a queue sub, a consumer's delivery subject was not changed
to the original publish subject the stream received;
the code added is a copy of what regular subs do (see the sketch below).
* [fixed] Subject renaming for leaf node connections as well;
also updated the multi-server test to cover queue and non-queue scenarios.
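A hedged nats.go sketch of the now-expected behavior (the stream, subject, deliver subject, and durable names are illustrative): a queue member on the push consumer's deliver subject should see the original publish subject on the received message.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "ORDERS",
		Subjects: []string{"orders.*"},
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := js.Publish("orders.new", []byte("hi")); err != nil {
		log.Fatal(err)
	}

	// Queue subscription on the deliver subject, created before the consumer.
	sub, err := nc.QueueSubscribeSync("deliver.orders", "grp")
	if err != nil {
		log.Fatal(err)
	}

	// Push consumer delivering to that queue group.
	if _, err := js.AddConsumer("ORDERS", &nats.ConsumerConfig{
		Durable:        "workers",
		DeliverSubject: "deliver.orders",
		DeliverGroup:   "grp",
		AckPolicy:      nats.AckNonePolicy,
	}); err != nil {
		log.Fatal(err)
	}

	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		log.Fatal(err)
	}
	// With the fix, the queue member sees the original publish subject.
	fmt.Println(msg.Subject) // orders.new
}
```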
Signed-off-by: Matthias Hanel <mh@synadia.com>
This issue was introduced by a bug that assigned a de-dupe default value
to mirrors.
This is an invalid config that used to be checked on stream create.
Subsequently this check was moved and then got executed on startup,
ending up flagging mirrors that had the bad default assigned.
The fix is to clear the de-dupe window for mirrors.
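A minimal, self-contained sketch of that idea, using hypothetical types rather than the server's actual structures:

```go
package main

import (
	"fmt"
	"time"
)

// streamConfig is a stand-in for illustration only; it is not the server's type.
type streamConfig struct {
	Name       string
	Mirror     *string       // set when the stream mirrors another stream
	Duplicates time.Duration // de-dupe window, invalid on mirrors
}

// normalize clears the de-dupe window on mirrors instead of rejecting streams
// that were persisted with the bad default.
func normalize(cfg *streamConfig) {
	if cfg.Mirror != nil {
		cfg.Duplicates = 0
	}
}

func main() {
	origin := "ORIGIN"
	cfg := &streamConfig{Name: "MIRROR", Mirror: &origin, Duplicates: 2 * time.Minute}
	normalize(cfg)
	fmt.Println(cfg.Duplicates) // 0s
}
```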
Signed-off-by: Matthias Hanel <mh@synadia.com>
When a configuration reload is done, the account's leaf node connections
were not transferred to the new instance of the account, causing the
interest to not be propagated until a leafnode reconnect or a server
restart.
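A toy sketch of the pitfall, using hypothetical types (not the server's account structures): the list of live leaf node connections has to be carried over to the account instance built from the reloaded configuration.

```go
package main

import "fmt"

// conn and account are stand-ins for illustration only.
type conn struct{ name string }

type account struct {
	leafs []*conn // leaf node connections registered on this account
}

// applyReload swaps in the account built from the new configuration while
// preserving the leaf node connections held by the old instance.
func applyReload(old, fresh *account) *account {
	fresh.leafs = append(fresh.leafs, old.leafs...)
	return fresh
}

func main() {
	old := &account{leafs: []*conn{{name: "leaf-1"}}}
	fresh := &account{} // rebuilt from the reloaded config, starts empty
	cur := applyReload(old, fresh)
	fmt.Println(len(cur.leafs)) // 1: the leaf's interest keeps being propagated
}
```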
Resolves #3009
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
- Possibly missing some early messages from the sourced stream
- In some cancel situations, the processing of sourced messages
would no longer work
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
* [Fixed] limits enforcement issues
Stream create had checks that stream restore did not have.
Moved the code into the commonly used function checkStreamCfg.
Also introduced (clustered/non-clustered) StreamLimitsCheck functions to
perform checks specific to the clustered/non-clustered data structures.
Valid stream config and limits/reservations are now checked before
receiving all the data, so the request fails right away.
Added a jetstream limit "max_request_batch" to limit the fetch batch size
(see the sketch below).
Shortened the max name length from 256 to 255, the more common file name limit.
Added a check for loops in cyclic source stream configurations.
Features related to limits.
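As a hedged illustration of the new "max_request_batch" limit: with a server configured with that limit (say 100), a pull fetch asking for a larger batch should now be rejected up front. The stream, subject, and durable below are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Assumes a stream "ORDERS" on "orders.*" already exists and the server
	// was started with a jetstream limit of max_request_batch: 100.
	sub, err := js.PullSubscribe("orders.*", "worker")
	if err != nil {
		log.Fatal(err)
	}

	// Asking for more than the configured limit should be refused by the
	// server right away instead of being accepted.
	if _, err := sub.Fetch(1000); err != nil {
		fmt.Println("fetch rejected:", err)
	}
}
```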
Signed-off-by: Matthias Hanel <mh@synadia.com>
- Updated tests that were checking for the error to include the limit
- Moved some tests above the benchmark ones
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
- Wait for some sort of routing to be in place before starting
the raft run loop
- Remove use of a lock in apiDispatch that was not necessary but
could have caused a route to block, causing memory growth, etc. (see the sketch below)
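The apiDispatch change follows a common pattern, sketched below with a toy dispatcher (not the server's actual code): snapshot what is needed while holding the lock, then do the potentially slow delivery without it, so a slow or blocked receiver cannot stall everyone waiting on the mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// dispatcher is a stand-in for illustration only.
type dispatcher struct {
	mu       sync.Mutex
	handlers []func(string)
}

func (d *dispatcher) dispatch(msg string) {
	d.mu.Lock()
	hs := append([]func(string){}, d.handlers...) // snapshot under the lock
	d.mu.Unlock()

	// Slow work happens outside the lock, so other goroutines are not blocked.
	for _, h := range hs {
		h(msg)
	}
}

func main() {
	d := &dispatcher{handlers: []func(string){
		func(m string) { fmt.Println("got", m) },
	}}
	d.dispatch("hello")
}
```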
Unrelated: renamed some tests so that they start with TestJetStream,
and TestJetStreamCluster for cluster tests, fixed some flappers,
and ensured that tests that change RAFT timeouts put them back
to default values on exit.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
During an elected stepdown and transfer, allow the new leader to take over before we step down.
We could receive a leader change, so make sure to also check the migration state.
Signed-off-by: Derek Collison <derek@nats.io>