Commit Graph

285 Commits

Ivan Kozlovic
5e89374ee9 Fixed another possible lock inversion consumer->stream
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-25 12:21:51 -06:00
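The fix above concerns lock ordering between consumer and stream. As a minimal sketch (all names hypothetical, not the server's actual types), the usual cure for a lock inversion is to pick one global order, here stream before consumer, and never hold them in the reverse order:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical sketch: a lock inversion arises when one path locks
// consumer then stream while another locks stream then consumer.
// The fix is a single agreed ordering: stream lock before consumer lock.

type stream struct {
	mu   sync.Mutex
	lseq uint64 // last stream sequence
}

type consumer struct {
	mu   sync.Mutex
	s    *stream
	dseq uint64 // delivered sequence
}

// deliverNext follows the agreed order: take (and release) the stream
// lock first, then the consumer lock, never the reverse while holding both.
func (o *consumer) deliverNext() uint64 {
	o.s.mu.Lock()
	seq := o.s.lseq
	o.s.mu.Unlock()

	o.mu.Lock()
	o.dseq = seq
	o.mu.Unlock()
	return seq
}

func main() {
	s := &stream{lseq: 42}
	o := &consumer{s: s}
	fmt.Println(o.deliverNext())
}
```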
Ivan Kozlovic
4739eebfc4 [FIXED] JetStream: possible deadlock during consumer leadership change
This would possibly show up when the consumer leader changed for a consumer that had redelivered messages while, for instance, messages were inbound on the stream.

Resolves #2912

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-25 12:21:51 -06:00
Derek Collison
ef8f543ea5 Improve memory usage through JetStream storage layer.
Previously we relied more heavily on Go's garbage collector: when we loaded a block for an underlying stream, we passed references upward to avoid copies.
Now we always copy when passing back to the upper layers, which allows us to not only expire our cache blocks but pool and reuse them.

The upper layers also had changes made to allow the pooling layer at that level to interoperate with the storage layer optionally.

Also fixed some flapping tests and a bug where the de-dupe state might not be re-formed correctly.

Signed-off-by: Derek Collison <derek@nats.io>
2022-03-24 17:45:15 -06:00
Ivan Kozlovic
2253bb6f1a JS: BackOff list caused too frequent checkPending() calls
Since the "next" timer value is set to the AckWait value, which
is the first element in the BackOff list if present, the check
would possibly happen at this interval, even when we were past
the first redelivery and the backoff interval had increased.

The end-user would still see the redelivery be done at the durations
indicated by the BackOff list, but internally, we would be checking
at the initial BackOff's ack wait.

I added a test that uses the store's interface to detect how many
times the checkPending() function is invoked. For this test it
should have been invoked twice, but without the fix it was invoked
15 times.

Also fixed an unrelated test that could possibly deadlock causing
tests to be aborted due to inactivity on Travis.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-23 12:46:17 -06:00
Ivan Kozlovic
c3da392832 Changes to IPQueues
Removed the warnings; instead, queues are registered/unregistered in a sync.Map and can be inspected with an undocumented
monitor page.
Added the notion of "in progress", which is the number of messages
that have been pop()'ed. When recycle() is invoked this count
goes down.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-17 17:53:06 -06:00
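The "in progress" accounting described above can be sketched as follows. This is a simplified, hypothetical version of such a queue, not the server's actual IPQueue implementation: pop() hands the caller the whole pending batch and counts it as in progress until recycle() returns it.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical sketch of an in-process queue with "in progress" accounting:
// pop() takes the pending batch and marks it in progress; recycle()
// returns the batch and drops the count back down.

type ipQueue struct {
	mu         sync.Mutex
	elts       []int
	inProgress int
}

func (q *ipQueue) push(v int) {
	q.mu.Lock()
	q.elts = append(q.elts, v)
	q.mu.Unlock()
}

// pop takes the whole pending batch and marks it as in progress.
func (q *ipQueue) pop() []int {
	q.mu.Lock()
	defer q.mu.Unlock()
	batch := q.elts
	q.elts = nil
	q.inProgress += len(batch)
	return batch
}

// recycle returns a popped batch, decrementing the in-progress count.
func (q *ipQueue) recycle(batch []int) {
	q.mu.Lock()
	q.inProgress -= len(batch)
	q.mu.Unlock()
}

func main() {
	q := &ipQueue{}
	q.push(1)
	q.push(2)
	b := q.pop()
	fmt.Println(len(b), q.inProgress) // the batch of 2 is now in progress
	q.recycle(b)
	fmt.Println(q.inProgress)
}
```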
Ivan Kozlovic
fe6d7b305f Merge pull request #2898 from nats-io/js_cons_ack_processing
[CHANGED] JetStream: Redeliveries may be delayed if necessary
2022-03-17 10:57:22 -06:00
Derek Collison
dbfa47f9b1 Improve state preservation for consumers, specifically DeliverNew variants when no activity has been present.
Signed-off-by: Derek Collison <derek@nats.io>
2022-03-16 20:55:14 -07:00
Derek Collison
3216eb5ee5 When a consumer has no state we are now compacting the log, but were not snapshotting.
This caused issues on leader change and losing quorum.

Signed-off-by: Derek Collison <derek@nats.io>
2022-03-09 07:21:25 -05:00
Derek Collison
58da4b917a Made improvements to scale up and down for streams and consumers.
Signed-off-by: Derek Collison <derek@nats.io>
2022-03-06 16:59:02 -08:00
Derek Collison
31a19729b0 When removing a stream peer with an attached durable consumer, the consumer could become inconsistent.
Signed-off-by: Derek Collison <derek@nats.io>
2022-03-06 05:42:22 -08:00
Ivan Kozlovic
804ce102ac [CHANGED] JetStream: Redeliveries may be delayed if necessary
We have seen situations where when a lot of pending messages accumulate,
there is a contention between the processing of the ACKs and the
checking of the pending map.

The decision was made to abort checking of the pending list if processing of
ack(s) would otherwise be delayed. The result is that a
redelivery may be postponed.

Internally, the ACKs are also now using a queue to prevent processing
of them from the network handler, which could cause head-of-line
blocking, especially bad for routes.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-03 10:32:35 -07:00
Derek Collison
1c8f7de848 On filtered subjects, when consumers were staggered, we need to disqualify a filtered consumer if it is not applicable.
Signed-off-by: Derek Collison <derek@nats.io>
2022-02-16 18:24:27 -08:00
Derek Collison
5a93b0e9d8 Allow pull requests to specify a heartbeat when idle to detect when a request is invalidated.
Signed-off-by: Derek Collison <derek@nats.io>
2022-02-11 09:51:51 -08:00
Ivan Kozlovic
55ffde7251 Fixed consumer dlv count and num pending wrong due to redeliveries
Introduced by #2848, so should not have impacted existing releases.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-02-10 17:33:53 -07:00
Derek Collison
ecfe42630a Merge pull request #2858 from nats-io/add_consumer_with_info
Make sure we snapshot initial consumer info during consumer creation.
2022-02-09 17:05:01 -08:00
Derek Collison
da9046b2e6 Snapshot initial consumer info when needed.
Signed-off-by: Derek Collison <derek@nats.io>
2022-02-09 15:23:53 -08:00
Derek Collison
c13a84cf44 Fixed a bug that would calculate the first sequence of a filteredPending incorrectly.
Also added in more optimized version to select the first matching message in a message block for LoadNextMsg.

Signed-off-by: Derek Collison <derek@nats.io>
2022-02-08 13:29:38 -08:00
Derek Collison
d50febeeff Improved sparse consumers replay time.
When a stream has multiple subjects and a consumer filters the stream to a small and spread out list of messages the logic would do a linear scan looking for the next message for the filtered consumer.
This CL allows the store layer to utilize the per subject info to improve the times.

Signed-off-by: Derek Collison <derek@nats.io>
2022-02-07 17:26:32 -08:00
Derek Collison
55b7f11c9a Fixed flow control stall under specific conditions of message size.
Signed-off-by: Derek Collison <derek@nats.io>
2022-02-05 20:15:48 -08:00
Ivan Kozlovic
30c431a9a3 [FIXED] JetStream: BackOff redeliveries would always use first in list
If the consumer's sequence was not the same as the stream's sequence,
then the redelivery would always use the first duration from the
BackOff list.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-01-31 17:44:08 -07:00
Derek Collison
b38ced51b2 A true no wait pull request was not considering redeliveries.
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-30 11:52:35 -08:00
Derek Collison
275d42628b Fix for #2828. The original design of the consumer and the subsequent store did not allow updates.
Now that we do, we need to store the new config into our storage layer.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-30 09:45:05 -08:00
Derek Collison
a57bd96def Updating a push consumer to be pull would succeed but cause a panic if used.
This disallows that upgrade. We had a check in place for pull to push, but not the reverse.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-28 13:11:58 -08:00
Derek Collison
6be9925127 Update config error
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 15:02:41 -08:00
Derek Collison
65b168aa8b Updates based on feedback. MaxDeliver needs to be set properly now to be > len(BackOff) but if larger we will reuse last value in BackOff array.
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 15:02:39 -08:00
Derek Collison
bd78b1a99b Formal json version for NAK delay
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 15:01:52 -08:00
Derek Collison
d486c24199 Allow a consumer to be configured with BackOffs.
This allows a consumer to have exponential backoffs vs static AckWait and MaxDeliver.
When BackOff is set it will override AckWait with BackOff[0], and MaxDeliver will be len(BackOff)+1.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 14:57:36 -08:00
Derek Collison
579bf336ad Allow NAK to take a delay parameter to delay redelivery for a certain amount of time.
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 14:57:28 -08:00
Derek Collison
d332684322 Fixed a data race and fixed a bug where we would not clear our waiting queue when a leader stepped down.
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 13:01:25 -08:00
Derek Collison
6fd41e5ea4 Updates based on review feedback
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-24 10:23:47 -08:00
Derek Collison
d962500827 Track reply subjects for pending pull requests across clustered consumers.
We will only send if all peers in our group are >= 2.7.1 and we will check for updates.
When a consumer follower takes over it will notify all pending requests that those requests are invalid now.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-21 16:31:59 -08:00
Derek Collison
7f572983ac The 2.7 update broke the one-shot pull consumer fetch behavior due to a change and to a fix for a bug that had allowed it to work before.
This change tries to lock down all expected behaviors, and now does out of order timeouts for requests.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-20 18:06:29 -08:00
Ivan Kozlovic
84f6cbb760 Pooling pubMsg and jsPubMsg objects
This should help with GC pressure, however, it may have an effect
on performance (based on some benchmark). Calling sync.Pool.Get/Put
too often has a performance impact...

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-01-13 13:14:25 -07:00
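The pooling described above is the standard sync.Pool pattern. A minimal sketch, with hypothetical field names rather than the server's actual pubMsg layout, looks like this; the trade-off mentioned in the commit is that each Get/Put has its own cost:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical sketch of pooling message objects with sync.Pool to reduce
// GC pressure, at the cost of Get/Put overhead on every message.

type pubMsg struct {
	subj string
	msg  []byte
}

var pubMsgPool = sync.Pool{
	New: func() interface{} { return &pubMsg{} },
}

func getPubMsg(subj string, msg []byte) *pubMsg {
	pm := pubMsgPool.Get().(*pubMsg)
	pm.subj, pm.msg = subj, msg
	return pm
}

func putPubMsg(pm *pubMsg) {
	pm.subj, pm.msg = "", nil // clear references so payloads can be collected
	pubMsgPool.Put(pm)
}

func main() {
	pm := getPubMsg("foo", []byte("hello"))
	fmt.Println(pm.subj)
	putPubMsg(pm)
}
```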
Ivan Kozlovic
d74dba2df9 Replaced RAFT's append entry response channel
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-01-13 13:06:48 -07:00
Ivan Kozlovic
23ebf9d2f8 Adapted jsOutQ
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-01-13 13:05:27 -07:00
Derek Collison
103f710479 Fixed consumer info num pending bug.
Under load we could have a message committed to the underlying store while a consumer was being created, and then num pending would be increased again when the stream signaled the consumers.
This fix just remembers the last seq of the state when we calculate sgap, and tests against it before adding in the stream code.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-12 20:03:26 -08:00
Derek Collison
32c3c9ecfb Track interest properly across accounts for pull consumers
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-12 12:16:53 -08:00
Derek Collison
279f31ecb5 Add in ability to have ephemeral pull based consumers
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-10 20:42:39 -08:00
Derek Collison
e12c8cda92 Add in ability to limit aspects of a pull request, specifically batch size and expiration.
Signed-off-by: Derek Collison <derek@nats.io>
2022-01-10 17:29:04 -08:00
Derek Collison
5592d923c4 Updated pull consumers.
Cleaned up code, made more consistent, utilize loopAndGather.
Allow pull consumers to have AckAll as well as AckExplicit.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-10 16:59:01 -08:00
Derek Collison
52da55c8c6 Implement overflow placement for JetStream streams.
This allows stream placement to overflow to adjacent clusters.
We also do more balanced placement based on resources (store or mem). We can continue to expand this as well.
We also introduce an account requirement that stream configs contain a MaxBytes value.

We now track account limits and server limits more distinctly, and do not reserve server resources based on account limits themselves.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-06 19:33:08 -08:00
Derek Collison
c5fbb63614 JetStream ephemeral consumers could create a situation where the server would exhaust the OS thread limit - default 10k.
Under certain situations, a large number of consumers racing to update state or delete their stores during a delete
would start taking up OS threads due to blocking disk IO. When this happened, and there were many Go routines becoming
runnable, the Go runtime would create extra OS threads to fill in the runnable pool and would exhaust the max thread setting.

This code places a channel as a simple semaphore to limit the number of disk IO blocking OS threads.

Signed-off-by: Derek Collison <derek@nats.io>
2021-12-29 07:05:34 -08:00
Derek Collison
b7c61cd0bf Stabilize the filestore to eliminate sporadic errPartialCache errors under certain situations. Related to #2732
The filestore would release a msgBlock lock while trying to load a cache block if it thought it needed to flush pending data.
With async false, this should be very rare but was possible after careful inspection.

I constructed an artificial test with sleeps throughout the filestore code to reproduce.
It involved two Go routines that were reading through and waiting on the last msg block, and another one that was writing.
After the write, but before the flush that followed releasing the lock, we would also artificially sleep.
This would lead to the second read seeing the cache load was already in progress and return no error.
If the load was for a sequence before the current write sequence, and async was false, the cache fseq would be higher than what was requested.
This would cause the errPartialCache to be returned.

Once returned to the consumer level in loopAndGather, it would exit that Go routine and the consumer would cease to function.

This change removed the unlock of a msgBlock to perform a flush, ensuring that two cacheLoads would not yield the errPartialCache.

I also updated the consumer in the case this does happen in the future to not exit the loopAndGather Go routine.

Signed-off-by: Derek Collison <derek@nats.io>
2021-12-27 09:54:02 -08:00
Ivan Kozlovic
3053039ff3 [FIXED] JetStream: interest across gateways
If the interest existed prior to the initial creation of the
consumer, the gateway "watcher" would not be started, which means
that interest moving across the super-cluster after that would
not be detected.

The watcher runs every second; I am not sure whether this is costly,
so we may want to take a different approach of having a separate
interest change channel that would be specific to gateways. But this
means adding a new sublist where the interest would be registered,
and that sublist would need to be updated when processing GW RSub
and RUnsub?

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2021-12-16 17:20:16 -07:00
Ivan Kozlovic
9f30bf00e0 [FIXED] Corrupted headers received from consumer with meta-only
When a consumer is configured with "meta-only" option, and the
stream was backed by a memory store, a memory corruption could
happen causing the application to receive corrupted headers.

Also replaced most usages of `append(a[:0:0], a...)` to make
copies. That idiom was based on this wiki:
https://github.com/go101/go101/wiki/How-to-efficiently-clone-a-slice%3F

But since Go 1.15, it is actually faster to call make+copy instead.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2021-12-01 10:50:15 -07:00
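The two clone idioms mentioned above can be compared side by side. Both produce an independent copy of the slice; per the commit, since Go 1.15 the make+copy form is typically faster:

```go
package main

import "fmt"

// The two slice-clone idioms from the commit above: both yield a copy
// that does not share backing storage with the original.

func cloneAppend(a []byte) []byte {
	// The full slice expression a[:0:0] has zero length and capacity,
	// forcing append to allocate a fresh backing array.
	return append(a[:0:0], a...)
}

func cloneMakeCopy(a []byte) []byte {
	// Since Go 1.15 this form is typically faster.
	b := make([]byte, len(a))
	copy(b, a)
	return b
}

func main() {
	a := []byte("header")
	b := cloneMakeCopy(a)
	b[0] = 'H' // mutating the copy leaves the original intact
	fmt.Println(string(a), string(b))
}
```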
R.I.Pienaar
cf097bfab4 Merge pull request #2717 from ripienaar/stream_valid_subjects
Stream valid subjects
2021-12-01 17:43:41 +01:00
R.I.Pienaar
4f1bfa969f ensure streams have only valid interest subjects
Signed-off-by: R.I.Pienaar <rip@devco.net>
2021-12-01 17:03:28 +01:00
Derek Collison
e65f3d4a30 [FIXED #2706] - Only utilize full state with deleted details when really needed. Otherwise fast state will suffice.
Signed-off-by: Derek Collison <derek@nats.io>
2021-11-29 10:50:28 -08:00
Derek Collison
5ead954fee [ADDED] Allow certain consumer attributes to be updated #2670, #2603
Signed-off-by: Derek Collison <derek@nats.io>
2021-11-04 13:43:11 -07:00
Derek Collison
003b6996f1 If AckWait is less than the restart check interval, use AckWait
Signed-off-by: Derek Collison <derek@nats.io>
2021-10-28 11:00:06 -07:00