This is a fix for a bad msg blk detected in the field that had sequence
holes.
The stream had a max msgs per subject limit of one and only a single subject, but had lots of messages. The stream did not recover correctly, and upon further inspection we determined that a msg blk had holes, which should not be possible.
We now detect the holes and deal with the situation appropriately.
Heavily tested on the data dump from the field.
Signed-off-by: Derek Collison <derek@nats.io>
Previously the Total in paged responses would always equal the size of
the first response, which stalled paged clients after the first page.
Now the total is set correctly so paging continues, and the test is
improved to verify these aspects of the report.
Signed-off-by: R.I.Pienaar <rip@devco.net>
If we created lots of hashes beyond server names, e.g. for consumer or
stream NRG group names, these maps would grow and never release memory.
The performance hit is ~300ns per call, and we can use the string
interning trick at a future date if need be, since it is GC friendly.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4289
In the benchmark on my machine, this added ~300ns per call, but I think that is ok for now vs the memory usage.
Signed-off-by: Derek Collison <derek@nats.io>
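The trade-off above can be sketched as follows: computing the hash on every call costs a few hundred nanoseconds but avoids an ever-growing memoization map. This is an illustrative sketch using FNV-1a, not the server's actual hash function or code:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashName computes a short hash of a name on every call. Recomputing
// each time avoids caching results in a map keyed by name, which would
// grow without bound as new consumer/stream NRG group names appear.
func hashName(name string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(name))
	return h.Sum64()
}

func main() {
	// Deterministic: the same name always hashes the same way,
	// so no cache is needed for correctness.
	fmt.Println(hashName("S1") == hashName("S1")) // true
}
```

The string-interning trick mentioned above would deduplicate the name strings themselves in a GC-friendly way, rather than caching hash results forever.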
When service imports were reloaded on active accounts with lots of
traffic, the server could panic or lose data.
Signed-off-by: Derek Collison <derek@nats.io>
Multiple tests were using the same port numbers, and it seems that the
NATS Servers were not always shutting down quickly enough, or the tests
were reusing ports that had already been assigned ephemerally, resulting
in `listen tcp 127.0.0.1:50554: bind: address already in use` failures.
Signed-off-by: Neil Twigg <neil@nats.io>
Only discard MQTT QoS0 messages from internal JetStream clients when
the message is really a QoS1 JetStream publish, not merely because it
came from a JetStream client.
Signed-off-by: Derek Collison <derek@nats.io>
Resolves #4291