On streams that are constantly removing items, like KVs, this could
become overactive when not needed. Now we simply mark the store as dirty
for the next check.
Signed-off-by: Derek Collison <derek@nats.io>
Fixed a bug where we were not correctly selecting the next first because
we were not properly skipping new dbit entries.
This could result in lookups failing, e.g. after a change in max msgs
per subject to a lower value.
Also fixed a bug that would not properly update our psim during compact
when throwing away the whole block and a subject had more than one
message.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves: #4643
This is a possible fix for #4653.
Changes made:
1. Added tests for creating and updating consumers on a work queue
stream with overlapping subjects.
2. Check for overlapping subjects before
[updating](a25af02c73/server/consumer.go (L770))
the consumer config.
3. Changed [`func (*stream).partitionUnique(partitions []string)
bool`](a25af02c73/server/stream.go (L5269))
to accept the name of the consumer being checked so we can skip it while
checking for overlapping subjects (required for
[`FilterSubjects`](a25af02c73/server/consumer.go (L75))
updates); this wasn't needed before because the checks were only made on
creation.
There's only one thing that I'm not sure about: in the [current work
queue stream conflict
checks](a25af02c73/server/consumer.go (L796)),
the consumer config's `Direct` field is checked to be `false`. Should we
also make this check before the update?
Signed-off-by: Pierre Mdawar <pierre@mdawar.dev>
This really was a cut/paste/typo error, the `else` should not have been
there. Came up in my testing.
The effect was that when there was a pending `PUBREL` in JetStream, and
a matching client connects - we would sometimes attempt to deliver the
PUBREL immediately once connected. `cpending` was already initialized,
but the pubrel map was not (yet).
This simplifies the PR template, which is a bit cumbersome, and instead
replaces it with a simpler notice that includes a template sign-off and
a new `CONTRIBUTING.md` document.
Signed-off-by: Neil Twigg <neil@nats.io>
Co-authored-by: Byron Ruth <byron@nats.io>
Under heavy load with max msgs per subject of 1, the dmap, when
considered empty while resetting the initial min, could cause lookup
misses that would lead to excess messages in a stream and longer
restores.
Signed-off-by: Derek Collison <derek@nats.io>
Holding onto the compressor without recycling its internal byte slice
could wreak havoc with GC.
This needs to be improved but this at least should allow the GC to
cleanup more effectively.
Signed-off-by: Derek Collison <derek@nats.io>
Several strategies are used, listed below.
1. Checking a RaftNode to see if it is the leader now uses atomics.
2. Checking if we are the JetStream meta leader from the server now uses
an atomic.
3. Accessing the JetStream context no longer requires a server lock,
uses atomic.Pointer.
4. Filestore syncBlocks would hold msgBlock locks during sync, now does
not.
Signed-off-by: Derek Collison <derek@nats.io>