Commit Graph

83 Commits

Author SHA1 Message Date
Lev
beee6fc72a [FIXED] MQTT PUBREL header incompatibility (#4616)
https://hivemq.github.io/mqtt-cli/docs/test/ pointed out the
incompatibility.
2023-10-05 08:07:50 -07:00
Lev Brouk
214711654e PR feedback: use checkFor 2023-09-28 12:42:18 -07:00
Lev Brouk
a05d4416ef PR feedback: nit 2023-09-28 12:02:35 -07:00
Lev Brouk
4b59efd6e7 [FIXED] Flapping TestMQTTLockedSession 2023-09-28 11:13:48 -07:00
Neil
59bc094e86 [FIXED] Increased AckWait in TestMQTTQoS2RetriesPublish, TestMQTTQoS2RetriesPubRel (#4518)
Increased AckWait in TestMQTTQoS2RetriesPublish to 100ms, and in
TestMQTTQoS2RetriesPubRel to 50ms.

A smaller value caused another PUBLISH to be fired while the test was
still processing the final QoS2 flow. Also reduced the number of retries
we wait for, to make the test a little quicker.
2023-09-13 11:49:17 +01:00
Lev Brouk
b60df6ec2d Increased AckWait in TestMQTTQoS2RetriesXXX
Increased AckWait in TestMQTTQoS2RetriesPublish to 100ms, and in TestMQTTQoS2RetriesPubRel to 50ms.

A smaller value caused another PUBLISH to be fired while the test was still processing the final QoS2 flow. Also reduced the number of retries we wait for, to make the test a little quicker.
2023-09-12 11:38:24 -07:00
Lev
1acc800778 [FIXED] Increased MQTT test R/W timeout from 4 to 5s (TestMQTTSubPropagation) (#4517)
Tracing the connect (ack?) read times in `TestMQTTSubPropagation` showed
that they come in the 2-3s range during normal execution, and it appears
that they occasionally exceed the 4s timeout.

I am not sure exactly why MQTT CONNECT takes such a long time, but as
the name of the test suggests, perhaps it has to do with session
propagation in a cluster.
2023-09-12 11:31:16 -07:00
Lev Brouk
64c34c4b5d [FIXED] Skip TestMQTTQoS2InflightMsgsDeliveredAfterUnsubscribe, see comments 2023-09-08 09:42:51 -07:00
Neil Twigg
8de83bc2ef Use TempDir in more tests
Signed-off-by: Neil Twigg <neil@nats.io>
2023-09-04 16:54:36 +01:00
Ivan Kozlovic
8bd68b550d [FIXED] MQTT: Retain flag did not always have the correct value.
As per specification MQTT-3.3.1-8, we now set the RETAIN
flag when delivering to new subscriptions and clear the flag in
all other cases.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2023-08-29 12:39:59 -06:00
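The rule above can be sketched as a bit operation on the PUBLISH fixed-header flags, where RETAIN is bit 0 per the MQTT 3.1.1 spec. This is a minimal illustration of the documented behavior, not the server's actual delivery code; the helper name is hypothetical.

```go
package main

import "fmt"

const mqttPubFlagRetain = 0x01 // RETAIN is bit 0 of the PUBLISH fixed-header flags

// setRetain sets the RETAIN bit when delivering to a new subscription and
// clears it in all other cases, matching the behavior described above.
func setRetain(flags byte, newSubscription bool) byte {
	if newSubscription {
		return flags | mqttPubFlagRetain
	}
	return flags &^ mqttPubFlagRetain
}

func main() {
	fmt.Printf("%#02x\n", setRetain(0x00, true))  // retain bit set
	fmt.Printf("%#02x\n", setRetain(0x31, false)) // retain bit cleared
}
```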
Lev Brouk
ad2e9d7b8d MQTT QoS2 support 2023-08-28 11:52:01 -07:00
Waldemar Quevedo
e68c411b74 test: fix TestMQTTTLSVerifyAndMap on Go 1.21
reported error changed slightly in Go 1.21

```
=== RUN   TestMQTTTLSVerifyAndMap
=== RUN   TestMQTTTLSVerifyAndMap/no_filtering,_client_does_not_provide_cert
    mqtt_test.go:1033: Unexpected error: Error reading: remote error: tls: certificate required
--- FAIL: TestMQTTTLSVerifyAndMap (0.04s)
```

Signed-off-by: Waldemar Quevedo <wally@nats.io>
2023-08-08 23:10:29 -07:00
Derek Collison
d5a91f43f3 Merge branch 'main' into dev 2023-07-13 07:29:40 -07:00
Derek Collison
1f39d744dd Only discard MQTT QoS0 messages from internal JetStream clients if really a QoS1 JetStream publish.
Signed-off-by: Derek Collison <derek@nats.io>
2023-07-12 16:00:59 -07:00
Ivan Kozlovic
f2d009b244 fix test
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2023-06-13 17:22:21 -06:00
Ivan Kozlovic
1ac99fd5db [CHANGED] MQTT: Support for topics with . character.
The `.` character will be transformed to `//` in the NATS subject. For
instance, an MQTT message published on `spBv1.0/plant1` would
be received by a NATS subscriber as `spBv1//0.plant1`.

Conversely, a NATS message published on `spBv1//0.plant1` would
be received by an MQTT subscriber as `spBv1.0/plant1`.

Resolves #1879
Resolves #3482

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2023-06-13 13:06:41 -06:00
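The mapping described above can be sketched as a small helper, assuming only the two rules stated in the commit (MQTT `.` becomes `//` and MQTT `/` becomes `.`). This is an illustration, not the server's implementation, and the function name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// mqttTopicToNATSSubject converts an MQTT topic to a NATS subject per the
// rules above: '.' becomes "//" and '/' becomes '.'.
func mqttTopicToNATSSubject(topic string) string {
	var b strings.Builder
	for _, r := range topic {
		switch r {
		case '.':
			b.WriteString("//")
		case '/':
			b.WriteRune('.')
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(mqttTopicToNATSSubject("spBv1.0/plant1")) // spBv1//0.plant1
}
```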
Neil Twigg
8db804ead9 Don't keep MQTT retained message content in memory
Signed-off-by: Neil Twigg <neil@nats.io>
2023-06-13 10:38:30 +01:00
Ivan Kozlovic
a744cb8cd2 Fixed delivery of retained messages after transfer.
I was running a manual test moving from dev to this branch and
noticed that the consumer would receive only 1 message of the 10
messages sent as retained. So I modified the test to verify that
we receive them all and we did not.

The reason was that after the transfer we need to refresh the state
of the stream (stream info) since we attempt to load all messages
based on the state's sequences.

I have also modified the code a bit to update MaxMsgsPer once
all messages have been transferred.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2023-06-01 10:00:18 +01:00
Neil Twigg
4f797a54e0 Add test for MQTT retained message migration
Signed-off-by: Neil Twigg <neil@nats.io>
2023-06-01 10:00:18 +01:00
Derek Collison
0321eb6484 Merge branch 'main' into dev 2023-04-29 19:52:57 -07:00
Derek Collison
a66ac8cb9b The server's Start() used to block but no longer does. This updates the tests and the function comment.
Fix for #4110

Signed-off-by: Derek Collison <derek@nats.io>
2023-04-27 06:55:03 -07:00
Ivan Kozlovic
3d495435c0 MQTT: Fixed issue that could cause time out storing messages
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2023-04-03 09:32:28 -06:00
Derek Collison
d9933b1f7a Fix for MQTT Spec 4.7.2-1
Signed-off-by: Derek Collison <derek@nats.io>
2023-02-28 20:43:46 -08:00
Derek Collison
4a3c27a251 Fix MQTT test for consumer replica override.
This was ill-advised by me, not understanding that the messages stream for MQTT uses interest-based retention.
Interest-based streams require consumers to match the replica count.

Signed-off-by: Derek Collison <derek@nats.io>
2023-01-25 17:58:57 -08:00
Neil Twigg
14d0ba1c65 Fix some lint errors after move to golangci-lint 2022-12-30 20:00:08 +00:00
Marco Primi
f8a030bc4a Use testing.TempDir() where possible
Refactor tests to use Go's built-in temporary-directory utility.

Also avoid binding to the default port (which may be in use)
2022-12-12 13:18:44 -08:00
Ivan Kozlovic
dde94987ce [FIXED] MQTT: Subject mappings were not handled
A simple configuration like this:
```
...
mappings = {
  foo: bar
}

mqtt {
   port: 1883
}
```
would cause an MQTT subscription on "bar" to not receive messages
published on "foo".

In other words, the subject transformation was not applied when parsing
a PUBLISH packet.

This PR also handles the case of service imports where transformation
occurs after the initial publish parsing.

Resolves #3547

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-10-13 16:00:05 -06:00
Ivan Kozlovic
170ff49837 [ADDED] JetStream: peer (the hash of server name) in statsz/jsz
A request to `$SYS.REQ.SERVER.PING.JSZ` would now return something
like this:
```
...
    "meta_cluster": {
      "name": "local",
      "leader": "A",
      "peer": "NUmM6cRx",
      "replicas": [
        {
          "name": "B",
          "current": true,
          "active": 690369000,
          "peer": "b2oh2L6w"
        },
        {
          "name": "Server name unknown at this time (peerID: jZ6RvVRH)",
          "current": false,
          "offline": true,
          "active": 0,
          "peer": "jZ6RvVRH"
        }
      ],
      "cluster_size": 3
    }
```
Note the "peer" field following the "leader" field that contains
the server name. The new field is the node ID, which is a hash of
the server name.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-09-16 15:31:37 -06:00
Ivan Kozlovic
29224c8ea9 Split more tests to speed up Travis run
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-09-09 12:45:48 -06:00
Matthias Hanel
52c4872666 better error when peer selection fails (#3342)
* better error when peer selection fails

It is pretty hard to diagnose what went wrong when not enough peers for
an operation were found. This change now returns counts of reasons why
peers were discarded.

Changed the error to JSClusterNoPeers as it seems a more appropriate
error for that operation. Not having enough resources is one of the
conditions for a peer not being considered, but so is having a
non-matching tag, which is why JSClusterNoPeers seems more appropriate.
In addition, JSClusterNoPeers was already used as the error after one
call to selectPeerGroup.

example:
no suitable peers for placement: peer selection cluster 'C' with 3 peers
offline: 0
excludeTag: 1
noTagMatch: 2
noSpace: 0
uniqueTag: 0
misc: 0

Example for MQTT:
mid:12 - "mqtt" - unable to connect: create sessions stream for account "$G":
no suitable peers for placement: peer selection cluster 'MQTT' with 3 peers
        offline: 0
        excludeTag: 0
        noTagMatch: 0
        noSpace: 0
        uniqueTag: 0
        misc: 0
         (10005)

Signed-off-by: Matthias Hanel <mh@synadia.com>

* review comment

Signed-off-by: Matthias Hanel <mh@synadia.com>
2022-08-06 00:17:01 +02:00
Ivan Kozlovic
6460519cf5 [FIXED] MQTT: Possible panic when clients misbehave
If a client with a given client ID is connected and, while connected,
another client tries to reuse the same client ID, the spec says that
the old client must be closed and the new one accepted.
The server protects against this flapping happening all the time
by rejecting new clients that try to connect at a very fast pace.

However, the server was closing a misbehaving client after a one-second
delay (to prevent an immediate reconnect if the client library does that)
but was not blocking the read loop. The compounding issue was that
if that misbehaving client was REALLY misbehaving and did not wait for
the CONNACK before sending more protocols (for instance SUB), the server
would panic because the client was not fully configured.

To prevent that, the server will now "block" this misbehaving client
in its readLoop before closing the connection, preventing processing
of possible protocols that follow the CONNECT.

Resolves #3313

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-07-31 12:20:38 -06:00
Derek Collison
05e8d82c50 Allow retries on connect
Signed-off-by: Derek Collison <derek@nats.io>
2022-07-06 20:54:13 -07:00
Derek Collison
90caf12d96 Attempt to fix flapping test
Signed-off-by: Derek Collison <derek@nats.io>
2022-07-06 09:11:31 -07:00
Derek Collison
9b7c81c37e Some tests improvements on non-standard JS cluster setup
Signed-off-by: Derek Collison <derek@nats.io>
2022-07-03 12:45:27 -07:00
Ivan Kozlovic
f8b16f90be Tweak MQTT test
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-06-28 18:00:25 -06:00
Derek Collison
d24ae4723f Support reload
Signed-off-by: Derek Collison <derek@nats.io>
2022-06-15 07:58:09 -07:00
Derek Collison
9400733606 Allow MQTT QoS 1 consumers to be automatically cleaned up after an inactivity threshold.
Signed-off-by: Derek Collison <derek@nats.io>
2022-06-14 17:37:45 -07:00
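If this cleanup threshold follows the pattern of the other MQTT options in this log, it would be configured in the `mqtt` block. The option name below is an assumption based on the commit title, not confirmed by this log:

```
mqtt {
  port: 1883
  # Hypothetical option name: QoS 1 consumers idle longer than this are removed.
  consumer_inactive_threshold: "5m"
}
```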
Ivan Kozlovic
b344519176 [FIXED] MQTT: Sessions with the same ID in different domains were considered duplicates
There is a mechanism to detect if a connection somewhere in the
cluster is using the session ID of an existing one, and if so,
close one as a duplicate.
However, when different domains are used, they should not be considered
duplicates.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-25 11:16:51 -06:00
Ivan Kozlovic
66b1b51182 [FIXED] MQTT: Errors deleting consumers will now prevent deletion of session
When there was a failure to delete a QoS1 consumer, the session
would still be deleted, which would cause orphaned consumers.

In case of error, the session record will not be deleted, which means
that it is still possible to restart the session and then close
it (with the clean flag).

Relates to #3116

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-23 11:28:18 -06:00
Ivan Kozlovic
68792b678b Fix new MQTT test flapper
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-18 16:36:07 -06:00
Ivan Kozlovic
da256ea15a Added consumer_memory_storage to make consumer memory based
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-18 15:53:23 -06:00
Ivan Kozlovic
1ddc5bd9f6 Added consumer_replicas (similar to stream_replicas)
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-18 15:53:23 -06:00
Ivan Kozlovic
5d3b1743e3 [ADDED] MQTT: Stream/Consumer replica count override
Ability to override the stream and consumer replica count, which is by default
determined based on the cluster size.

```
mqtt {
  port: 1883
  stream_replicas: 5
  consumer_replicas: 1
}
```

The above would allow *new* MQTT streams to be created with a replication
factor of 5 (it is an error if the cluster does not have that
many nodes; the error occurs at runtime when the first client
on a given account connects), and new consumers would be R=1.

Existing MQTT streams/consumers for an account are not modified.

The stream_replicas can also be reduced to 1 for a cluster
of 3 nodes if one desires to have those streams as R=1.

A value of 0 or negative means letting the server pick
the value (from 1 to 3 depending on standalone/cluster size).

There is another property that allows the consumers to be created
with memory storage instead of file:
```
mqtt {
  ..
  consumer_memory_storage: true
}
```

Those new settings are global and apply to new streams/consumers
only.

Related to #3116

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>

Update warning

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-05-18 15:50:23 -06:00
Matthias Hanel
0c5f3688a7 [ADDED] Tiered limits and fix limit issues on updates (#2945)
* Adding tiered limits and fix limit issues on updates

Signed-off-by: Matthias Hanel <mh@synadia.com>
2022-03-28 20:47:54 -04:00
Ivan Kozlovic
6ad93d9b34 Fix some flappers
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-25 18:24:17 -06:00
Ivan Kozlovic
b4128693ed Ensure file path is correct during stream restore
Also had to change all references from `path.` to `filepath.` when
dealing with files, so that it works properly on Windows.

Fixed also lots of tests to defer the shutdown of the server
after the removal of the storage, and fixed some config files
directories to use the single quote `'` to surround the file path,
again to work on Windows.

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-03-09 13:31:51 -07:00
Ivan Kozlovic
3ce22adb76 Fixed some tests
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2022-01-13 13:14:05 -07:00
Derek Collison
52da55c8c6 Implement overflow placement for JetStream streams.
This allows stream placement to overflow to adjacent clusters.
We also do more balanced placement based on resources (store or mem). We can continue to expand this as well.
We also introduce an account requirement that stream configs contain a MaxBytes value.

We now track account limits and server limits more distinctly, and do not reserve server resources based on account limits themselves.

Signed-off-by: Derek Collison <derek@nats.io>
2022-01-06 19:33:08 -08:00
Matthias Hanel
3e8b66286d Js leaf deny (#2693)
Along a leaf node connection, unless the system account is shared AND the JetStream domain name is identical, the default JetStream traffic (without a domain set) will be denied.

As a consequence, all clients that want to access a domain other than the one of the server they are connected to must specify a domain name.
Affected by this change are setups where a leaf node had no local JetStream, OR the server the leaf node connected to had no local JetStream.
One of the two accounts connected via a leaf node remote must have no JetStream enabled.
The side that does not have JetStream enabled will lose JetStream access, and its clients must set `nats.Domain` manually.

For workarounds on how to restore the old behavior, look at:
https://github.com/nats-io/nats-server/pull/2693#issuecomment-996212582

New config values added:
`default_js_domain` is a mapping from account to domain, settable when JetStream is not enabled in an account.
`extension_hint` provides hints for a non-clustered server to start in clustered mode (so it can be used to extend a cluster).
`js_domain` is a way to set the JetStream domain to use for MQTT.

Signed-off-by: Matthias Hanel <mh@synadia.com>
2021-12-16 16:53:20 -05:00
Ivan Kozlovic
2e07c3f614 [ADDED] MQTT: Support for Websocket
Clients will need to connect to the Websocket port and have `/mqtt`
as the URL path.

Resolves #2433

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
2021-12-06 16:13:13 -07:00