The original values were hardcoded to 128MB per server and 32MB per stream. The
per-server limit is now lowered to 32MB and is configurable with
a new configuration parameter:
```
jetstream {
max_catchup: 8MB
}
```
The per-stream limit was also lowered from 32MB/128,000 messages to
8MB/32,000 messages.
Tests have shown no difference in performance for fast links.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Since PR #3381, the 2 tests modified here were taking twice as
long (around 245 seconds) to complete.
After discussing with Matthias, he suggested using a variable instead of
a const and setting it to 0 for those 2 tests, since they don't really
need that value to be set.
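As a minimal sketch of that pattern (the names below are hypothetical, not the actual identifiers in the server code): a package-level variable keeps the production default, and the slow tests override it.
```go
package server

import (
	"testing"
	"time"
)

// Hypothetical: formerly a const, now a var so individual tests can override it.
var catchupCheckInterval = 30 * time.Second

// The two slow tests set it to 0 and restore it when done.
func TestSomethingSlow(t *testing.T) {
	old := catchupCheckInterval
	defer func() { catchupCheckInterval = old }()
	catchupCheckInterval = 0
	// ... test body runs without waiting on the interval ...
}
```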
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Saw this panic in a code coverage run:
```
=== RUN TestJetStreamClusterPeerExclusionTag
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x88 pc=0x8acd55]
goroutine 97850 [running]:
github.com/nats-io/nats-server/v2/server.(*jetStream).monitorStream(0xc002b94780, 0xc001ecb500, 0xc003229b00, 0x0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1653 +0x495
github.com/nats-io/nats-server/v2/server.(*jetStream).processClusterCreateStream.func1()
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:2953 +0x3b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/server.go:3063 +0xa7
```
Was able to reproduce it; the reason was that `meta` was nil.
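A sketch of the kind of guard that addresses this (illustrative only, not the actual fix in monitorStream):
```go
// Illustrative sketch: bail out early if the metadata raft group is not
// available, instead of dereferencing a nil pointer further down.
func (js *jetStream) monitorStream(mset *stream, sa *streamAssignment, sendSnapshot bool) {
	js.mu.RLock()
	var meta RaftNode
	if cc := js.cluster; cc != nil {
		meta = cc.meta
	}
	js.mu.RUnlock()
	if meta == nil {
		// The meta group can be gone (e.g. JetStream shutting down) by the
		// time this goroutine runs; return instead of panicking.
		return
	}
	// ... rest of the monitoring loop ...
}
```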
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is to avoid a narrow race between adding servers and them catching
up, where they would also register as current.
Also wait for all peers to be caught up.
This also avoids clearing the catchup marker once a catchup has stalled;
a stalled catchup would remove the marker, causing the peer to
register as current.
Signed-off-by: Matthias Hanel <mh@synadia.com>
- didRemove in applyMetaEntries() could be reset when processing
multiple entries
- change "no race" test names to include JetStream
- separate raft nodes' leader stepdown and stop in the server
shutdown process
- in InstallSnapshot, call wal.Compact() with lastIndex+1
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
* Fix race between stream stop and monitorStream
When monitorCluster stops the stream, monitorStream needs to be
stopped as well to avoid miscounting the store size.
In a test, the stop and reset of the store size happened first and
was then followed by storing more messages via monitorStream.
Signed-off-by: Matthias Hanel <mh@synadia.com>
If the leader sends messages but the follower, for any reason, aborts
or retries the snapshot process, the follower will now send the error that
caused this, and the leader can then abort the catchup instead of
waiting for its inactivity threshold of 5 seconds.
Also, delay sending the next batch until the number of "acks" reaches
1/2 of the batch size or 100ms has elapsed (see the sketch below). This
helps avoid trickling of messages. Tested with the new test
TestJetStreamSuperClusterStreamCathupLongRTT(): batch sizes improve
and the overall time is smaller or similar, but not longer.
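A minimal sketch of that pacing logic, assuming a hypothetical ack channel and batch size (this is not the actual catchup code):
```go
package main

import (
	"fmt"
	"time"
)

// Wait until roughly half of the outstanding batch has been acked, or 100ms
// have passed, whichever comes first, before releasing the next batch.
func waitBeforeNextBatch(acks <-chan struct{}, batchSize int) {
	need := batchSize / 2
	timeout := time.After(100 * time.Millisecond)
	for received := 0; received < need; {
		select {
		case <-acks:
			received++
		case <-timeout:
			return // don't hold the batch longer than 100ms
		}
	}
}

func main() {
	acks := make(chan struct{}, 64)
	// Simulate the follower acking messages from the previous batch.
	go func() {
		for i := 0; i < 64; i++ {
			acks <- struct{}{}
		}
	}()
	start := time.Now()
	waitBeforeNextBatch(acks, 64)
	fmt.Println("next batch released after", time.Since(start))
}
```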
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
We would send skip messages for a sync request that was completely below our current state, but this could be more traffic than we might want.
Now we only send EOF and the other side can detect the skip forward and adjust on a successful catchup.
We still send skips if we can partially fill the sync request.
Signed-off-by: Derek Collison <derek@nats.io>
```
Replica: Server name unknown at this time (peerID: jZ6RvVRH), outdated, OFFLINE, not seen
```
After discussing with @ripienaar, this text better conveys
that this is a transient situation.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
If a cluster is brought down and then partially restarted, the
replica information about the non-restarted node would be completely
missing. The CLI could report 3 replicas but then show only the leader
and the running replicas, with nothing about the other node.
Since this node's server name is not known, this PR adds an entry
similar to this:
```
<unknown (peerID: jZ6RvVRH)>, outdated, OFFLINE, not seen
```
Also, the replicas array is now ordered, which helps when using
a watcher or repeating stream info commands: the replicas output
is now stable with regard to the list of replicas.
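A minimal sketch of producing such a stable ordering (the field and type names are illustrative, not the server's actual replica handling):
```go
package main

import (
	"fmt"
	"sort"
)

// Illustrative replica entry; the real API type carries more fields.
type peerInfo struct {
	Name    string
	Offline bool
}

func main() {
	replicas := []*peerInfo{
		{Name: "S3"}, {Name: "S1"}, {Name: "S2", Offline: true},
	}
	// Sort by name so repeated stream info calls list replicas in a stable order.
	sort.Slice(replicas, func(i, j int) bool { return replicas[i].Name < replicas[j].Name })
	for _, r := range replicas {
		fmt.Println(r.Name, r.Offline)
	}
}
```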
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
* better error when peer selection fails
It is pretty hard to diagnose what went wrong when not enough peers were
found for an operation. This change now returns counts of the reasons why
peers were discarded.
Changed the error to JSClusterNoPeers, as it seems the more appropriate
error for that operation: not having enough resources is only one of
the conditions for a peer not being considered, a non-matching tag is
another. In addition, JSClusterNoPeers was already used as the error
after one call to selectPeerGroup.
Example:
no suitable peers for placement: peer selection cluster 'C' with 3 peers
offline: 0
excludeTag: 1
noTagMatch: 2
noSpace: 0
uniqueTag: 0
misc: 0
Example for MQTT:
mid:12 - "mqtt" - unable to connect: create sessions stream for account "$G":
no suitable peers for placement: peer selection cluster 'MQTT' with 3 peers
offline: 0
excludeTag: 0
noTagMatch: 0
noSpace: 0
uniqueTag: 0
misc: 0
(10005)
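A minimal sketch of how such discard counters can be accumulated and folded into the error text (the struct and reason names are illustrative, not the server's actual selectPeerGroup bookkeeping):
```go
package main

import "fmt"

// Illustrative tally of why candidate peers were rejected during placement.
type selectionCounts struct {
	offline, excludeTag, noTagMatch, noSpace, uniqueTag, misc int
}

func (c selectionCounts) String() string {
	return fmt.Sprintf("offline: %d\nexcludeTag: %d\nnoTagMatch: %d\nnoSpace: %d\nuniqueTag: %d\nmisc: %d",
		c.offline, c.excludeTag, c.noTagMatch, c.noSpace, c.uniqueTag, c.misc)
}

func main() {
	counts := selectionCounts{excludeTag: 1, noTagMatch: 2}
	err := fmt.Errorf("no suitable peers for placement: peer selection cluster 'C' with 3 peers\n%s", counts)
	fmt.Println(err)
}
```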
Signed-off-by: Matthias Hanel <mh@synadia.com>
* review comment
Signed-off-by: Matthias Hanel <mh@synadia.com>
* Adding account purge operation
The new request is available to the system account.
The subject to send the request to is $JS.API.ACCOUNT.PURGE.*,
with the name of the account to purge in place of the wildcard.
Also added directory cleanup code so that servers do not
end up with empty stream directories or account dirs that
only contain streams.
Also adding ACCOUNT to the leaf node domain rewrite table.
Addresses #3186 and #3306 by providing a way to
get rid of the streams for existing and non-existing accounts.
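As a usage sketch with the Go client (the connection details, credentials, and account name here are examples; the request must be made as a system-account user):
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect as a system-account user; the purge API is only available there.
	nc, err := nats.Connect("nats://127.0.0.1:4222", nats.UserInfo("sys", "sys"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Replace the wildcard token with the account to purge.
	resp, err := nc.Request("$JS.API.ACCOUNT.PURGE.MY_ACCOUNT", nil, 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Data))
}
```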
Signed-off-by: Matthias Hanel <mh@synadia.com>
- Send snapshot only if leader.
- When processing a snapshot, start with a smaller inactivity interval
that doubles up to 10sec, or use 10sec directly once we get a
message (see the sketch after this list). The reason is that the
snapshot request may be sent while the leader has not yet set up the
subscription that receives the requests (or the subscription has not
fully propagated through the cluster).
- Don't remember the snapshot file on error.
- Do not consider current if we have not had any activity.
- Stabilize stream scale up under active heavy publishing.
- Due to the publish pressure, move the check for the followers' direct subs spinning up until after we stop publishing.
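A minimal sketch of that backoff, with a hypothetical starting interval (not the actual catchup timer code):
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxInactivity = 10 * time.Second
	interval := 1 * time.Second // hypothetical small starting interval
	gotMessage := make(chan struct{}, 1)

	for {
		select {
		case <-gotMessage:
			// Once a message arrives the snapshot transfer is underway,
			// so switch to the full 10s inactivity threshold.
			interval = maxInactivity
		case <-time.After(interval):
			if interval >= maxInactivity {
				fmt.Println("no activity, abandoning snapshot processing")
				return
			}
			// No activity yet: the leader may still be setting up its
			// subscription, so retry with a doubled interval.
			interval *= 2
			if interval > maxInactivity {
				interval = maxInactivity
			}
			fmt.Println("retrying, next inactivity interval:", interval)
		}
	}
}
```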
Signed-off-by: Derek Collison <derek@nats.io>
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
If there are more stream names than the current limit of 1024, getting
the list of names would return them all instead of using pagination.
For "stream infos", the Total returned would be the API limit
instead of the actual number of streams.
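For reference, a sketch of paging through the names API with the offset field (assuming a local server with JetStream enabled; the JSON field names follow the JetStream paged list API):
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	var names []string
	for offset := 0; ; {
		req, _ := json.Marshal(map[string]int{"offset": offset})
		msg, err := nc.Request("$JS.API.STREAM.NAMES", req, 2*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		var resp struct {
			Total   int      `json:"total"`
			Offset  int      `json:"offset"`
			Limit   int      `json:"limit"`
			Streams []string `json:"streams"`
		}
		if err := json.Unmarshal(msg.Data, &resp); err != nil {
			log.Fatal(err)
		}
		names = append(names, resp.Streams...)
		offset += len(resp.Streams)
		// Keep requesting pages until we have collected Total names.
		if offset >= resp.Total || len(resp.Streams) == 0 {
			break
		}
	}
	fmt.Println("streams:", len(names))
}
```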
Resolves https://github.com/nats-io/natscli/issues/541
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Now monitorStream waits to scale down the stream until all
monitorConsumer routines have scaled down their respective consumers.
Also update the consumer assignment for later use in monitorConsumer,
and likewise the stream assignment in monitorStream.
Signed-off-by: Matthias Hanel <mh@synadia.com>
The `JSStreamNameExistErr` description will now include that
the stream exists with a different configuration, because that is
the error clients get when trying to add a stream with a
different configuration (otherwise this is a no-op and clients
don't get an error).
Since that error was also used in the restore case, a new error is
added that keeps the same description prefix "stream name already in use"
but adds ", cannot restore" to indicate that the restore failed
because the stream already exists.
Resolves #3273
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The Move/Cancel/Downscale mechanism did not take into account that
the consumer's replica count can be set independently.
This also alters peer selection to allow skipping the
unique tag prefix check for the server that will be replaced.
Say you have 3 AZs and want to add another server to az:1
in order to replace a server in the same zone.
Without this change, the uniqueTagPrefix check would filter out
the replacement server and cause a failure.
The cancel move response could not be received due to
a wrong account name.
Signed-off-by: Matthias Hanel <mh@synadia.com>
* add the ability to cancel a move in progress
Move to individual subjects for move and cancel_move.
New subjects are:
$JS.API.ACCOUNT.STREAM.MOVE.*.*
$JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*
The last two tokens are the account and stream name.
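For example (assuming, as an illustration, that the account token comes before the stream token; ACME and ORDERS are hypothetical names), moving and cancelling a move would use subjects like:
```
$JS.API.ACCOUNT.STREAM.MOVE.ACME.ORDERS
$JS.API.ACCOUNT.STREAM.CANCEL_MOVE.ACME.ORDERS
```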
Signed-off-by: Matthias Hanel <mh@synadia.com>
A customer experienced an endless failure of a stream catchup. The current leader was being asked for a message from a snapshot that was larger than what we had, resulting in an EOF which silently failed.
We now detect this, signal the end of the catchup, and redo the bad snapshot if possible.
Signed-off-by: Derek Collison <derek@nats.io>
When increasing the replica count, unique tags for already existing peers
were ignored, which could lead to bad placement.
Signed-off-by: Matthias Hanel <mh@synadia.com>
Make sure when processing a peer removal that the stream assignment agrees.
When a new leader takes over it can resend a peer removal, and if the stream/consumer really was rescheduled we could remove it by accident.
Also make sure that when we remove a stream we remove the node as part of the stream assignment; otherwise, if the same asset returned to this server we would not start up the monitoring loop.
Simplify migration logic in monitorStream to be driven by the leader only.
Improved unit tests.
Added a failure when the server is not in the peer list.
The move command does not require a server anymore.
Signed-off-by: Matthias Hanel <mh@synadia.com>