When a stream on a server had a large number of sparse consumers, the signaling mechanism would do a linear scan over all of them to find the matching ones. Usage patterns increasingly involve many consumers that are filtered and sparse, meaning a given message is destined for only one or a small number of them.
This change moves selection to a sublist that tracks only active consumer leaders, which optimizes the choice of consumers to signal when the total number of consumers is large.
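A minimal sketch of the idea, with hypothetical names (`stream`, `consumer`, `leaders`) that stand in for the server's actual types; a real implementation uses a wildcard-aware subject trie (sublist) rather than a plain map so filters like `orders.*` match correctly:

```go
package main

import "fmt"

type consumer struct {
	name          string
	filterSubject string
}

func (c *consumer) signal() { fmt.Println("signaled", c.name) }

type stream struct {
	consumers []*consumer            // all consumers, possibly thousands
	leaders   map[string][]*consumer // active leaders indexed by filter subject
}

// Old path: O(n) over every consumer for each inbound message.
func (s *stream) signalLinear(subject string) {
	for _, c := range s.consumers {
		if c.filterSubject == subject {
			c.signal()
		}
	}
}

// New path: look up only the (usually tiny) set of matching active
// leaders instead of scanning every consumer.
func (s *stream) signalViaIndex(subject string) {
	for _, c := range s.leaders[subject] {
		c.signal()
	}
}

func main() {
	c := &consumer{name: "C1", filterSubject: "orders.new"}
	s := &stream{
		consumers: []*consumer{c},
		leaders:   map[string][]*consumer{"orders.new": {c}},
	}
	s.signalViaIndex("orders.new")
}
```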
Signed-off-by: Derek Collison <derek@nats.io>
When js-enabled is set to true, the condition was only checked if
the `getJetStream()` call returned `nil`. However, if it was non-nil,
all remaining checks were executed, including assessing the health
of the assets (streams and consumers).
This change addresses two issues:
- Switch to `js.isEnabled()`, which checks whether the value is nil
OR `js.disabled = true`, the latter occurring when the subsystem is
temporarily disabled (insufficient resources).
- Correctly exit the check after that assertion, before the meta and
asset checks are performed.
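A hedged sketch of the corrected control flow; the stand-in types below (`jetStream`, `HealthzOptions`, `HealthStatus`, `Server`) are illustrative and only approximate the server code:

```go
package main

import "fmt"

// Minimal stand-ins for the server types; illustrative only.
type jetStream struct{ disabled bool }

// isEnabled reports whether JetStream is usable. It handles a nil
// receiver and the temporarily-disabled case (js.disabled = true,
// e.g. when resources are insufficient).
func (js *jetStream) isEnabled() bool {
	return js != nil && !js.disabled
}

type HealthzOptions struct{ JSEnabledOnly bool }

type HealthStatus struct{ Status, Error string }

type Server struct{ js *jetStream }

func (s *Server) getJetStream() *jetStream { return s.js }

func (s *Server) healthz(opts *HealthzOptions) *HealthStatus {
	health := &HealthStatus{Status: "ok"}

	js := s.getJetStream()
	if !js.isEnabled() {
		health.Status = "unavailable"
		health.Error = "JetStream is not enabled"
		return health
	}

	// Exit here when only the enabled state was asked for, before
	// the meta, stream, and consumer asset checks.
	if opts.JSEnabledOnly {
		return health
	}

	// ... meta cluster and asset health checks would follow ...
	return health
}

func main() {
	s := &Server{js: &jetStream{}}
	fmt.Println(s.healthz(&HealthzOptions{JSEnabledOnly: true}).Status)
}
```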
In addition, the option has been renamed to `js-enabled-only` to align
with the `js-server-only` naming. The previous `js-enabled` name still
works, but is mapped to this new option. A warning is emitted noting
the previous option is deprecated.
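A sketch of how the deprecated name can be mapped onto the new option; the parsing details here are assumptions, not the server's actual handler:

```go
package main

import (
	"fmt"
	"net/url"
)

type HealthzOptions struct{ JSEnabledOnly bool }

// parseHealthzOptions maps both the new and the deprecated query
// parameter onto the same option, collecting a deprecation warning
// for the old name. Illustrative sketch only.
func parseHealthzOptions(q url.Values) (*HealthzOptions, []string) {
	var warnings []string
	opts := &HealthzOptions{}
	if q.Get("js-enabled-only") == "true" {
		opts.JSEnabledOnly = true
	}
	if q.Get("js-enabled") == "true" {
		warnings = append(warnings,
			"Healthcheck: js-enabled is deprecated, use js-enabled-only instead")
		opts.JSEnabledOnly = true
	}
	return opts, warnings
}

func main() {
	q, _ := url.ParseQuery("js-enabled=true")
	opts, warns := parseHealthzOptions(q)
	fmt.Println(opts.JSEnabledOnly, warns)
}
```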
Fix #3703
Signed-off-by: Byron Ruth <b@devel.io>
The bug occurred when a timestamp for the pending state was exactly -1. This could happen due to the timing of redelivered pending items, which could set pending.Timestamp into the future, combined with the timing of the encodeConsumerState call.
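The following is a hypothetical reconstruction of the failure mode, not the server's exact encoding: if pending timestamps are delta-encoded against the current time, a redelivered item whose timestamp lands slightly in the future yields a negative delta (such as exactly -1), so the encoder must guard that case:

```go
package main

import (
	"fmt"
	"time"
)

// Pending mirrors the idea of a per-message pending entry with a
// delivery timestamp; names are illustrative.
type Pending struct {
	Sequence  uint64
	Timestamp int64 // nanoseconds
}

// encodePending delta-encodes pending timestamps against now. A
// redelivered item can carry a Timestamp slightly in the future, so
// the raw delta (now - ts) can come out negative, e.g. exactly -1,
// which would corrupt the encoded state. Clamp it instead.
func encodePending(pending []Pending) []int64 {
	now := time.Now().UnixNano()
	deltas := make([]int64, 0, len(pending))
	for _, p := range pending {
		delta := now - p.Timestamp
		if delta < 0 {
			// Guard against timestamps pushed into the future by
			// redelivery timing.
			delta = 0
		}
		deltas = append(deltas, delta)
	}
	return deltas
}

func main() {
	future := time.Now().Add(time.Millisecond).UnixNano()
	fmt.Println(encodePending([]Pending{{Sequence: 1, Timestamp: future}}))
}
```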
Minor fixes to raft.
Signed-off-by: Derek Collison <derek@nats.io>
The first issue was applications not getting any response.
However, there was also a more serious issue: each concurrent request would create a new raft group.
The servers would only run one stream monitor loop, but they would update the stored state to the new raft group's name, so on server restart the stream would be using a different raft group than the existing servers.
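A minimal sketch of the fix's shape, under the assumption that group assignment simply needs to be idempotent; the types and the group-name format below are illustrative, not the server's:

```go
package main

import (
	"fmt"
	"sync"
)

// metaLayer stands in for whatever owns stream-to-raft-group
// assignments. Concurrent create requests must share one group
// instead of each minting a new name and overwriting stored state.
type metaLayer struct {
	mu     sync.Mutex
	groups map[string]string // stream name -> raft group name
	nextID int
}

func (m *metaLayer) raftGroupFor(stream string) string {
	m.mu.Lock()
	defer m.mu.Unlock()
	if rg, ok := m.groups[stream]; ok {
		// Reuse the existing group; never update state to a new name.
		return rg
	}
	m.nextID++
	rg := fmt.Sprintf("S-R3F-%d", m.nextID)
	m.groups[stream] = rg
	return rg
}

func main() {
	m := &metaLayer{groups: make(map[string]string)}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // concurrent duplicate requests
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(m.raftGroupFor("ORDERS"))
		}()
	}
	wg.Wait() // all four print the same group name
}
```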
Signed-off-by: Derek Collison <derek@nats.io>
Lower the minimum amount of data / number of operations so that benchmarks
can run in a reasonable time.
The minimum amount of work should be controlled via the `-benchtime` flag,
but due to these hardcoded limits some tests were taking too long,
e.g. running for 2 minutes even with `-benchtime` set to 1 second.
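A sketch of the intended pattern (placed in a `_test.go` file): let the testing framework size the workload via `b.N` rather than enforcing a hardcoded floor. The `publish` helper is hypothetical, standing in for the real benchmarked work:

```go
package bench

import "testing"

// BenchmarkPublish honors b.N, which -benchtime controls, instead
// of forcing a hardcoded minimum number of operations or amount of
// data (e.g. max(b.N, 1_000_000)) that overrides the flag.
func BenchmarkPublish(b *testing.B) {
	msg := make([]byte, 128)
	b.SetBytes(int64(len(msg)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		publish(msg) // hypothetical stand-in for the real work
	}
}

func publish(msg []byte) {}
```

Run with, for example, `go test -bench=Publish -benchtime=1s`.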
A stream with a WorkQueue retention policy is supposed to allow
more than one consumer if they use filtered subjects, but those
subjects must not overlap.
There was an issue where, if a new consumer had a filter subject
"wider" than an existing one, the error was not detected and
the new consumer was incorrectly accepted.
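A simplified, token-based overlap check for illustration; the server's real subject matching is more complete. The key point is that the check works in both directions, so a wider new filter (e.g. `foo.*` against an existing `foo.bar`) is also caught:

```go
package main

import (
	"fmt"
	"strings"
)

// subjectsOverlap reports whether two subject filters can match a
// common subject, treating "*" as a one-token wildcard and ">" as a
// match-rest wildcard. Simplified sketch only.
func subjectsOverlap(a, b string) bool {
	at, bt := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(at) && i < len(bt); i++ {
		x, y := at[i], bt[i]
		if x == ">" || y == ">" {
			return true // full wildcard swallows the rest
		}
		if x != y && x != "*" && y != "*" {
			return false
		}
	}
	return len(at) == len(bt)
}

func main() {
	// The wider new filter "foo.*" overlaps the existing "foo.bar",
	// so the new consumer must be rejected on a WorkQueue stream.
	fmt.Println(subjectsOverlap("foo.bar", "foo.*"))   // true
	fmt.Println(subjectsOverlap("foo.bar", "foo.baz")) // false
}
```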
Resolves #3639
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>