This endpoint lists the gateway/cluster name, address, and port,
then the list of outbound/inbound connections.
For each remote gateway there will be at most one outbound connection.
There can be 0 or more inbound connections from the same remote
gateway.
For each of these outbound/inbound connections, connection info
similar to Connz is reported. Optionally, one can include the
interest mode/stats for each account.
Here are the possible options:
* No specific options:
http://host:port/gatewayz
* Limit to a specific remote gateway, say name "B":
http://host:port/gatewayz?gw_name=B
* Include accounts (default limit of 1024 accounts):
http://host:port/gatewayz?accs=1
* Specific limit, say 200 (note that accs=1 is optional in this case):
http://host:port/gatewayz?accs=1&accs_limit=200
* Specific account, say "acc_1" (note that accs=1 is not required then):
http://host:port/gatewayz?acc_name=acc_1
* The above options can be mixed: specific remote gateway (B), with 200
accounts reported:
http://host:port/gatewayz?gw_name=B&accs_limit=200
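For illustration, here is a minimal sketch of querying this endpoint from Go
(the monitor port 8222 is an assumption, and the decoding struct only names
fields implied by the description above):
```go
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// Decode just the identity fields; the real reply also contains the
// outbound/inbound connections and, optionally, the per-account data.
type gatewayz struct {
    Name string `json:"name"`
    Host string `json:"host"`
    Port int    `json:"port"`
}

func main() {
    resp, err := http.Get("http://127.0.0.1:8222/gatewayz?gw_name=B&accs=1")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    var gz gatewayz
    if err := json.NewDecoder(resp.Body).Decode(&gz); err != nil {
        panic(err)
    }
    fmt.Printf("gateway %q listening on %s:%d\n", gz.Name, gz.Host, gz.Port)
}
```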
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
- TestSystemAccountConnectionUpdatesStopAfterNoLocal: I believe that
the check on the number of notifications was wrong. Since we did not
consume the ones for the connect, the expected count after the
disconnect is 8 instead of 4.
- Possible fix for GW tests complaining about the number of
outbound/inbound connections. I think the connection may not succeed
right away (remote not fully started, etc.) and, due to the dial
timeout and reconnect attempt delay, I suspect that a max time
of 1sec to complete may not be enough.
The quick change for now is to override the timeout to 2secs in the
wait helpers. If that proves conclusive, we could remove the
timeout given to these helpers.
- TestGatewaySendAllSubsBadProtocol: used a t.Fatalf() in checkFor
instead of returning fmt.Errorf() (see the sketch after this list).
- TestLeafNodeResetsMSGProto: this test is not only about the switch to
interest mode, so to avoid a possible mix of protocols, delay the
creation of the gateway a bit after the creation of the leaf node.
- Some defer s.Shutdown() calls were missing.
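For the checkFor point above, a sketch of the intended pattern (checkFor
is the repo's polling test helper; the counter and expected value here
are hypothetical):
```go
// Return an error from the callback so checkFor can keep retrying
// until its timeout, instead of failing the test on the first attempt.
checkFor(t, 2*time.Second, 15*time.Millisecond, func() error {
    if n := atomic.LoadInt64(&protosReceived); n != expected {
        return fmt.Errorf("expected %d protocols, got %d", expected, n)
    }
    return nil
})
```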
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
In varz's cluster{} section, there was no URLs field. This PR adds
it and displays the routes defined in the cluster{} config section.
The value gets updated should there be a config reload following the
addition/removal of a URL from "routes".
If the config had one route to "nats://127.0.0.1:1234", here is what
it would look like now:
```
"cluster": {
  "addr": "0.0.0.0",
  "cluster_port": 6222,
  "auth_timeout": 1,
  "urls": [
    "127.0.0.1:1234"
  ]
},
```
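For reference, a cluster{} config along these lines would produce the
output above (a sketch; only the route matters here, the rest are the
sample's values):
```
cluster {
  host: "0.0.0.0"
  port: 6222
  routes = [
    "nats://127.0.0.1:1234"
  ]
}
```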
Adding a route to "127.0.0.1:4567" and doing a config reload:
```
"cluster": {
  "addr": "0.0.0.0",
  "cluster_port": 6222,
  "auth_timeout": 1,
  "urls": [
    "127.0.0.1:1234",
    "127.0.0.1:4567"
  ]
},
```
Note that due to how we handle discovered servers in the cluster,
new URLs that are dynamically discovered will not show in the above
output. This could be done, but would need some changes in how we
store things (actually, in this case, new URLs are not stored, a
connection to them is simply attempted; once they connect, they are
visible in /routez).
For gateways, however, this PR displays the combination of the
URLs defined in the config and the ones that are discovered after
a connection is made to a given cluster. Say cluster A has a single
URL to one server in cluster B: when connecting to that server,
the server on A will get the list of the gateway URLs that one
can connect to, and these will be reflected in /varz. So this is
a different behavior than for routes. As explained above, we could
harmonize the behavior in a future PR.
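For illustration, a minimal, self-contained sketch of reading the new
field over HTTP (the monitor port 8222 and the decoding struct are
assumptions; the JSON field names come from the samples above):
```go
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// Decode only the piece of /varz this PR is about.
type varz struct {
    Cluster struct {
        URLs []string `json:"urls"`
    } `json:"cluster"`
}

func main() {
    resp, err := http.Get("http://127.0.0.1:8222/varz")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    var v varz
    if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
        panic(err)
    }
    fmt.Println("cluster urls:", v.Cluster.URLs)
}
```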
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
----------------------------------------------------------------
Backward-incompatibility note:
Varz used to embed *Info and *Options, which are other server objects.
However, Info is a struct that the server uses to send protocols to other
servers or clients, and its fields must have json tags since we
need to marshal them to be sent over the wire. The problem is that this
made those fields accessible to users calling Varz() and also visible
in the http /varz output. Some fields introduced in Info on the 2.0
branch clashed with json tags in Options, which made cluster{},
for instance, disappear from the /varz output - because a Cluster string
in Info has the same json tag, and Cluster in Info is empty in some
cases.
Users that embed NATS and were using Server.Varz() directly,
without going through the monitoring endpoint, were then given
access (which was not the intent) to server internals (Info and Options).
Fields that were in Info or Options or directly in Varz that did not
clash with each other could be referenced directly. For instance, this
is how you could access the server ID:
```go
v, _ := s.Varz(nil)
fmt.Println(v.ID)
```
Another way would be:
```go
fmt.Println(v.Info.ID)
```
The same goes for fields that were brought in by embedding the Options:
```go
fmt.Println(v.MaxConn)
```
or
```go
fmt.Println(v.Options.MaxConn)
```
We have decided to explicitly define fields in Varz, which means
that if you previously accessed fields through v.Info or v.Options,
you will have to update your code to use the corresponding field
directly: v.ID or v.MaxConn, for instance.
Fields were also duplicated between Info/Options and Varz itself,
so depending on which one your application was accessing, you may
have to update your code.
---------------------------------------------------------------
Another issue that has been fixed is a set of races introduced
by the fact that the creation of a Varz object (pointing to
some server data) was done under the server lock, while the marshaling,
not being done under that lock, raced with server updates.
The fact that the object returned to the user through Server.Varz()
also had references to server internal objects had to be fixed by
returning deep copies of those internal objects.
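A hedged sketch of the resulting pattern (field and variable names here
are hypothetical, not the server's actual identifiers): snapshot and
deep-copy under the lock, marshal outside it.
```go
// Build the snapshot and deep-copy shared slices while holding the
// server lock, so later JSON marshaling never reads live server state.
func (s *Server) varzSnapshot() *Varz {
    s.mu.Lock()
    defer s.mu.Unlock()
    v := &Varz{ID: s.info.ID}
    // Copy instead of aliasing, so a later config reload cannot mutate
    // what the caller or the JSON marshaler is reading.
    v.Cluster.URLs = append([]string(nil), s.routeURLs...)
    return v
}
```
Marshaling can then run outside the lock, since the snapshot no longer
shares memory with live server state.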
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The test TestConnzRTT() failed once with "invalid duration". Adding
the original string to the error message to help understand the failure.
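A sketch of the change, assuming the test reads the RTT string from a
Connz ConnInfo (variable names illustrative):
```go
// Parse the RTT reported by Connz; on failure, include the raw string
// in the message so we can see what the endpoint actually returned.
if _, err := time.ParseDuration(ci.RTT); err != nil {
    t.Fatalf("error parsing RTT %q: %v", ci.RTT, err)
}
```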
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Use pending bytes as the slow consumer trigger, so reintroduce max_pending.
Improve latency with in-place flush calls when appropriate. Use a simple
time budget for the readLoop routine.
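For reference, a hedged example of the corresponding config knob (the
value is illustrative, not necessarily the default):
```
# Maximum pending bytes buffered for a connection before it is
# considered a slow consumer and disconnected.
max_pending: 10MB
```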
Signed-off-by: Derek Collison <derek@nats.io>
This test originally used only one connection. It was then modified
to use another connection to check the effect of publishing on
the last activity. However, when polling for connz we were using
index [0], but that may not necessarily match the connection we were
checking.
I don't think there was a need for a new connection in this
test, so use a single connection.
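As an aside, a sketch of the safer pattern when several connections
exist (myCID is a hypothetical variable holding our client's ID):
```go
// Find our connection in the Connz reply by client ID instead of
// assuming it sits at index 0.
var ci *server.ConnInfo
for _, c := range connz.Conns {
    if c.Cid == myCID {
        ci = c
        break
    }
}
if ci == nil {
    t.Fatalf("connection %d not found in connz", myCID)
}
```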
Even for endpoints that currently do not need options (such as
Subsz and Varz), I added an options argument and an error return so
that we don't break the API if we ever need to add options to those.
In tests, replaced all code doing http.Get, etc. with a readBody
helper function.
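A sketch of what such a helper might look like (the repo's actual
version may differ):
```go
// readBody fetches a monitoring URL and returns the response body,
// failing the test on any error.
func readBody(t *testing.T, url string) []byte {
    t.Helper()
    resp, err := http.Get(url)
    if err != nil {
        t.Fatalf("error on GET %s: %v", url, err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("error reading response body: %v", err)
    }
    return body
}
```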
Combine server functions and monitor handler tests in one test
with a for loop to try both modes on a given test.
Still need to add Connz.
Fixes #612 by adding a new Varz method to the Server. This can be used
by processes embedding gnatsd to access the Varz struct without relying
on the HTTP API. This starts with just Varz, but the same pattern could
be expanded to other monitoring APIs in the future.
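A short usage sketch, assuming an embedded server s (*server.Server) is
already running:
```go
// Access monitoring data directly, without going through the HTTP API.
v, err := s.Varz(nil)
if err != nil {
    log.Fatalf("error getting Varz: %v", err)
}
fmt.Printf("server %s has %d connections\n", v.ID, v.Connections)
```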
If the server requires TLS and clients are connecting, and a Connz
request is made while clients are still in the TLS handshake, the
call to tls.Conn.ConnectionState() would block for the duration
of the handshake. This would cause the overall http request to
take too long.
We now do not try to gather TLSVersion and TLSCipher from a
client that is still in the TLS handshake.
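A hedged sketch of that guard (the handshake-complete flag and the
version/cipher helpers are assumptions, not the server's actual
identifiers):
```go
// Only read TLS details once the handshake has completed;
// ConnectionState() can block while the handshake is in progress.
if client.flags.isSet(handshakeComplete) {
    st := tlsConn.ConnectionState()
    ci.TLSVersion = tlsVersionName(st.Version)
    ci.TLSCipher = tlsCipherName(st.CipherSuite)
}
```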
Resolves #600
Fixing various tests that were failing locally when running in
parallel mode (without -p=1).
In reload_test.go, lots of nats.Conn.Close() calls were missing, which
made the tests require too much memory when running in `-race` mode.