- Wait for some sort of routing to be in place before starting
the raft run loop
- Remove the use of a lock in apiDispatch that was not necessary but
could have caused a route to block, causing memory growth, etc.
Unrelated: renamed some tests so that they start with TestJetStream
(and TestJetStreamCluster for cluster tests), fixed some flappers,
and ensured that tests that change RAFT timeouts put them back
to default values on exit.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
During an elected stepdown and transfer, allow the new leader to take over before we step down.
We could receive a leader change, so make sure to also check migration state.
Signed-off-by: Derek Collison <derek@nats.io>
* Add max_ha_assets to limit placement on servers with too many HA assets.
A server running more than max_ha_assets raft nodes will not be used to
place new streams, and placement will fail if not enough free servers can be found.
Durable consumer creation on such servers will fail as well, since their peer set is
bound to the same servers as their stream.
This also avoids updating placement when no new placement is needed,
which is the case when placement tags get removed on update.
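For illustration (hypothetical stream name; max_ha_assets itself is set in the server configuration), with the limit in effect an R3 placement simply fails at creation time when not enough servers are below it:
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// With max_ha_assets in effect on the servers, placing an R3 stream
	// fails when fewer than 3 servers are still below the limit.
	_, err = js.AddStream(&nats.StreamConfig{
		Name:     "ORDERS",
		Subjects: []string{"orders.>"},
		Replicas: 3,
	})
	if err != nil {
		fmt.Println("placement failed:", err) // expected placement error
	}
}
```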
Signed-off-by: Matthias Hanel <mh@synadia.com>
- A stream could become leader when it should not, causing
messages to be lost.
- A catchup could stall because the server sending data
could bail out of the runCatchup routine but still send
the EOF signal.
- A deadlock with Jsz monitoring
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Signed-off-by: Derek Collison <derek@nats.io>
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Basically, a gateway (gw) subject propagation issue could be hidden
behind a leaf node.
Also changed the error text for this case.
Signed-off-by: Matthias Hanel <mh@synadia.com>
This needs testing because a stream move adjusts the replication factor.
Since adjusting the replication factor while moving is illegal, that
combined case does not need to be tested.
In order to support one-off configurations, added the same modification
callback to the super-cluster helpers as is used with cluster.
Signed-off-by: Matthias Hanel <mh@synadia.com>
This broke cross-account functionality. Ported the test from the
Go client that showed the failure after PR #2997 was merged.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The system will allow an update to a stream, and subsequently all attached consumers, to be placed in another cluster either directly or via tag placement.
The meta layer will scale the underlying peerset appropriately to straddle the two clusters for both the stream and consumers, taking into account the consumer type.
Control will then pass to the current leaders of the assets who will monitor the catchup status of the new peers.
(Note we can optimize this later to only traverse once across a GW for any given asset, but for now this is simpler)
Once the original leaders have determined the assets are in sync, they will pass leadership to a member of the new peerset.
Once the new leader has been elected, it will forward a request for the meta layer to shrink the peerset by removing the old peers.
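As a sketch from the operator's side (hypothetical stream name and tags; only the public Go client API is used), the move is triggered by updating the stream's placement, and the server performs the peer-set expansion, catchup, leadership transfer, and shrink described above:
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Updating only the placement triggers the move: the meta layer
	// expands the peerset across both clusters, the current leader
	// monitors catchup, leadership transfers, then old peers are removed.
	_, err = js.UpdateStream(&nats.StreamConfig{
		Name:     "ORDERS", // hypothetical stream
		Subjects: []string{"orders.>"},
		Replicas: 3,
		Placement: &nats.Placement{
			Tags: []string{"cloud:aws"}, // hypothetical target tags
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```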
Signed-off-by: Derek Collison <derek@nats.io>
* Add a config modification callback to createJetStreamCluster via a
variant named createJetStreamClusterAndModHook, allowing the generated
config to be altered prior to server start.
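A hypothetical usage sketch (the precise helper signature lives in the server's test files; the essential shape is a callback that can rewrite the generated config text before each server starts):
```go
// Assumed callback shape: receives identifying info plus the generated
// config text, and returns the (possibly modified) config to use.
modHook := func(serverName, clusterName, storeDir, conf string) string {
	if serverName == "S-2" { // hypothetical one-off tweak for one server
		conf += "\nmax_payload: 4MB\n"
	}
	return conf
}
c := createJetStreamClusterAndModHook(t, jsClusterTempl, "C1", 3, modHook)
defer c.shutdown()
```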
Signed-off-by: Matthias Hanel <mh@synadia.com>
An example was a "consumer info" request with a consumer name that
had tokens, which is illegal. This resulted in the request being
dropped in apiDispatch() because there was no interest.
The server will now return a "bad request" error in such cases.
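To illustrate with the Go client (the API subject layout is $JS.API.CONSUMER.INFO.&lt;stream&gt;.&lt;consumer&gt;; the stream and consumer names here are hypothetical):
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// "dur.name" contains a token separator, which is illegal in a
	// consumer name and adds an extra token to the API subject.
	// Previously this matched no interest and the request timed out;
	// the server now replies with a "bad request" API error.
	resp, err := nc.Request("$JS.API.CONSUMER.INFO.ORDERS.dur.name", nil, time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Data))
}
```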
Resolves #2995
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Some warnings, especially those for JS limits that were printed
on a per-message basis, are now limited to ~1 per second
if the content of the warning is already found in a map.
This also applies to "client" warnings; the client portion of the
warning is not taken into account, which helps reduce logging of
similar content coming from different clients.
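A minimal sketch of the idea (hypothetical names, not the server's internals): key warnings by their content in a map and drop repeats seen within the last second.
```go
package main

import (
	"log"
	"sync"
	"time"
)

// warnLimiter logs a given warning at most about once per second,
// keyed by the message content.
type warnLimiter struct {
	mu   sync.Mutex
	last map[string]time.Time
}

func newWarnLimiter() *warnLimiter {
	return &warnLimiter{last: make(map[string]time.Time)}
}

func (w *warnLimiter) Warnf(msg string) {
	w.mu.Lock()
	now := time.Now()
	if t, ok := w.last[msg]; ok && now.Sub(t) < time.Second {
		w.mu.Unlock()
		return // same content warned less than a second ago: drop it
	}
	w.last[msg] = now
	w.mu.Unlock()
	log.Println("WRN", msg)
}

func main() {
	w := newWarnLimiter()
	for i := 0; i < 10; i++ {
		w.Warnf("JetStream resource limits exceeded for account") // logged once
	}
}
```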
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Data race that has been seen:
```
Read at 0x00c00134bec0 by goroutine 159:
github.com/nats-io/nats-server/v2/server.(*client).msgHeaderForRouteOrLeaf()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:2935 +0x254
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4364 +0x2147
(...)
Previous write at 0x00c00134bec0 by goroutine 201:
github.com/nats-io/nats-server/v2/server.(*Server).addRoute()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/route.go:1475 +0xdb4
github.com/nats-io/nats-server/v2/server.(*client).processRouteInfo()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/route.go:641 +0x1704
```
Also fixed some flappers and removed the use of `s.js` since we had
already captured `js` in Jsz monitoring.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Also fixed a bug that could cause memory-based replicated consumers to no longer work after snapshots and server restarts.
The snapshot logic would allow non-state-changing updates to continuously grow the raft logs. We were also too conservative on when we snapshotted and why.
Also added the ability for FileStore.Compact() to reclaim space in the block file from the head of the last changed block.
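A toy sketch of head compaction under assumed types (not the real FileStore): whole blocks below the sequence are dropped, and the head of the last affected block is trimmed so that space can be reclaimed.
```go
package main

import "fmt"

// block is a hypothetical in-memory stand-in for a message block.
type block struct {
	firstSeq uint64
	msgs     [][]byte // one entry per sequence, starting at firstSeq
}

// compact removes everything below seq: whole blocks are dropped, and
// the head of the last affected block is trimmed, reclaiming that space.
func compact(blocks []*block, seq uint64) []*block {
	var out []*block
	for _, b := range blocks {
		lastSeq := b.firstSeq + uint64(len(b.msgs)) - 1
		switch {
		case lastSeq < seq: // entirely below the compaction point
			continue
		case b.firstSeq < seq: // partially below: trim the head
			b.msgs = b.msgs[seq-b.firstSeq:]
			b.firstSeq = seq
		}
		out = append(out, b)
	}
	return out
}

func main() {
	blocks := []*block{
		{firstSeq: 1, msgs: [][]byte{[]byte("a"), []byte("b")}},
		{firstSeq: 3, msgs: [][]byte{[]byte("c"), []byte("d")}},
	}
	blocks = compact(blocks, 4)
	fmt.Println(len(blocks), blocks[0].firstSeq) // 1 4
}
```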
Signed-off-by: Derek Collison <derek@nats.io>
Also fixed a bug where we were incorrectly not spinning up the monitoring loop for a stream when going from 3->1->3 replicas.
Signed-off-by: Derek Collison <derek@nats.io>
This would possibly show up when the leader changed for a consumer
that had redelivered messages while, for instance, messages were
inbound on the stream.
Resolves #2912
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Previously we relied more heavily on Go's garbage collector: when we loaded a block for an underlying stream, we passed references upward to avoid copies.
Now we always copy when passing data back to the upper layers, which allows us to not only expire our cache blocks but also pool and reuse them.
The upper layers were also changed so that pooling at that level can optionally interoperate with the storage layer.
Also fixed some flappers and a bug where the de-dupe state might not be rebuilt correctly.
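A hedged sketch of the pattern with hypothetical names: copy message bytes out of the cached block before handing them up, so the block's buffer can go back into a sync.Pool instead of being pinned by callers.
```go
package main

import (
	"fmt"
	"sync"
)

// A pooled scratch buffer standing in for a cached message block.
var blockPool = sync.Pool{
	New: func() interface{} { b := make([]byte, 0, 64*1024); return &b },
}

// loadMsg returns a private copy of the message bytes so the caller
// never holds a reference into the pooled block buffer.
func loadMsg(src []byte) []byte {
	msg := make([]byte, len(src))
	copy(msg, src)
	return msg
}

func main() {
	bp := blockPool.Get().(*[]byte)
	buf := append((*bp)[:0], "stored message"...)

	msg := loadMsg(buf) // upper layers get a copy

	*bp = buf
	blockPool.Put(bp) // block buffer is free to be reused
	fmt.Println(string(msg))
}
```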
Signed-off-by: Derek Collison <derek@nats.io>
Since the "next" timer value is set to the AckWait value, which
is the first element in the BackOff list if present, the check
would possibly happen at this interval, even when we were past
the first redelivery and the backoff interval had increased.
The end-user would still see the redelivery be done at the durations
indicated by the BackOff list, but internally, we would be checking
at the initial BackOff's ack wait.
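For reference, a consumer using BackOff looks like this with the Go client (hypothetical stream and durable names); redeliveries step through the listed intervals, and with the fix the internal check follows the current step instead of firing at the first interval:
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Redeliveries happen at 250ms, then 2s, then 10s. Internally the
	// ack-wait timer now tracks the current backoff step rather than
	// always checking at the first interval.
	_, err = js.AddConsumer("ORDERS", &nats.ConsumerConfig{
		Durable:    "processor",
		AckPolicy:  nats.AckExplicitPolicy,
		MaxDeliver: 4,
		BackOff:    []time.Duration{250 * time.Millisecond, 2 * time.Second, 10 * time.Second},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```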
I added a test that uses the store's interface to detect how many
times the checkPending() function is invoked. For this test it
should have been invoked twice, but without the fix it was invoked
15 times.
Also fixed an unrelated test that could possibly deadlock, causing
tests to be aborted due to inactivity on Travis.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
For the reason explained in the previous commit, tests that expected
the number of acks pending to have a certain value after an Ack()
would flap. Replaced all such references with AckSync(); we can go
back to selectively calling Ack() where AckSync() is not needed.
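The difference, for reference (a sketch using the Go client): Ack() fires and forgets, so the server may not have applied the ack when a test reads counters; AckSync() waits for the server's confirmation.
```go
package example

import (
	"log"

	"github.com/nats-io/nats.go"
)

// handle acks a delivered JetStream message. With plain Ack() the server
// may not have processed the ack yet, so counters read immediately after
// (e.g. via ConsumerInfo) can still include the message; AckSync() waits
// for the server's confirmation before returning.
func handle(msg *nats.Msg) {
	if err := msg.AckSync(); err != nil {
		log.Printf("ack not confirmed: %v", err)
	}
}
```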
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
We would also hang if no stream info requests were sent during a stream list because the asset was offline.
Signed-off-by: Derek Collison <derek@nats.io>
Also had to change all references from `path.` to `filepath.` when
dealing with files so that things work properly on Windows.
Also fixed lots of tests to defer the shutdown of the server
after the removal of the storage, and fixed some config files'
directories to use a single quote (`'`) to surround the file path,
again to work on Windows.
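The path vs filepath distinction in a nutshell (package path always joins with '/', package filepath uses the OS separator):
```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// package path is meant for slash-separated names (URLs, subjects)
	// and always joins with '/', which is wrong for Windows file paths.
	fmt.Println(path.Join("jetstream", "store", "streams"))

	// package filepath uses the OS separator ('\' on Windows), which is
	// why file-handling code was switched over to it.
	fmt.Println(filepath.Join("jetstream", "store", "streams"))
}
```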
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The "deleted" advisory was missing because the stream's send loop
was closed before the advisory was pushed to the queue to be sent.
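For reference, a hedged way to observe these advisories from the Go client, assuming the $JS.EVENT.ADVISORY.STREAM.&lt;action&gt;.&lt;stream&gt; subject layout:
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Stream lifecycle advisories, including the "deleted" one that was
	// previously dropped when the send loop closed too early.
	_, err = nc.Subscribe("$JS.EVENT.ADVISORY.STREAM.>", func(m *nats.Msg) {
		fmt.Println(m.Subject, string(m.Data))
	})
	if err != nil {
		log.Fatal(err)
	}
	select {} // keep the subscriber running
}
```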
Added tests, for both single and clustered modes, covering all
stream advisories.
Resolves #2886
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>