----------------------------------------------------------------
Backward-incompatibility note:
Varz used to embed *Info and *Options, which are other server objects.
However, Info is a struct that servers send as part of the protocol to
other servers or clients, so its fields must carry json tags since we
need to marshal them to be sent over the wire. The problem is that
embedding made those fields accessible to users calling Varz() and
visible in the http /varz output. Some fields introduced in Info in
the 2.0 branch clashed with json tags in Options, which made, for
instance, cluster{} disappear from the /varz output: the Cluster
string in Info has the same json tag as Cluster in Options, and
Cluster in Info is empty in some cases.
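To illustrate, here is a minimal runnable sketch; the structs are
hypothetical simplifications, not the actual server definitions. When
two embedded structs expose fields with the same json tag at the same
depth, encoding/json drops the field from the output entirely:

package main

import (
    "encoding/json"
    "fmt"
)

type Info struct {
    ID      string `json:"server_id"`
    Cluster string `json:"cluster,omitempty"` // empty in some cases
}

type ClusterOpts struct {
    Host string `json:"addr,omitempty"`
}

type Options struct {
    Cluster ClusterOpts `json:"cluster,omitempty"`
}

type Varz struct {
    *Info
    *Options
}

func main() {
    v := Varz{
        Info:    &Info{ID: "abc"},
        Options: &Options{Cluster: ClusterOpts{Host: "127.0.0.1"}},
    }
    b, _ := json.Marshal(v)
    // Prints {"server_id":"abc"}: "cluster" vanished because the two
    // embedded fields share the same json tag and cancel each other out.
    fmt.Println(string(b))
}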
For users that embed NATS and were using Server.Varz() directly,
without going through the monitoring endpoint, this also gave them
access (which was not the intent) to server internals (Info and Options).
Fields in Info, Options, or Varz itself that did not clash with each
other could be referenced directly; for instance, this is how you
could access the server ID:
v, _ := s.Varz(nil)
fmt.Println(v.ID)
Another way would be:
fmt.Println(v.Info.ID)
Same goes for fields that were brought from embedding the Options:
fmt.Println(v.MaxConn)
or
fmt.Println(v.Options.MaxConn)
We have decided to explicitly define the fields in Varz, which means
that if you previously accessed fields through v.Info or v.Options,
you will have to update your code to use the corresponding field
directly: v.ID or v.MaxConn for instance.
Fields were also duplicated between Info/Options and Varz itself, so
depending on which copy your application was accessing, you may have
to update your code.
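For example, code that reached through the embedded structs:

v, _ := s.Varz(nil)
fmt.Println(v.Info.ID)
fmt.Println(v.Options.MaxConn)

now becomes:

v, _ := s.Varz(nil)
fmt.Println(v.ID)
fmt.Println(v.MaxConn)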
----------------------------------------------------------------
Other issues that have been fixed are races: the Varz object
(pointing to some server data) was created under the server lock, but
it was marshaled without that lock, which caused races. Also, the
object returned to the user through Server.Varz() held references to
server internal objects; this is fixed by returning deep copies of
those internal objects.
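The shape of the fix, as a rough sketch with simplified, hypothetical
types and field names: take the snapshot (deep-copying mutable
internals) under the lock, and marshal outside of it.

package server

import (
    "encoding/json"
    "net/http"
    "sync"
)

type Varz struct {
    ID      string   `json:"server_id"`
    MaxConn int      `json:"max_connections"`
    Routes  []string `json:"routes"`
}

type Server struct {
    mu        sync.Mutex
    id        string
    maxConn   int
    routeURLs []string
}

func (s *Server) createVarz() *Varz {
    s.mu.Lock()
    defer s.mu.Unlock()
    return &Varz{
        ID:      s.id,
        MaxConn: s.maxConn,
        // Deep copy so the returned object does not alias live
        // server state that may be mutated later.
        Routes: append([]string(nil), s.routeURLs...),
    }
}

func (s *Server) handleVarz(w http.ResponseWriter, r *http.Request) {
    v := s.createVarz()     // the server lock is held only in there
    b, _ := json.Marshal(v) // marshaling no longer races with updates
    w.Write(b)
}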
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
The test TestConnzRTT() failed once with "invalid duration". Add the
original string to the error message to better understand such
failures.
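The change amounts to something like this (a sketch; ci.RTT is the
string form of the RTT in the connz response):

rtt, err := time.ParseDuration(ci.RTT)
if err != nil {
    t.Fatalf("Error parsing RTT %q: %v", ci.RTT, err)
}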
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Use pending bytes as the slow consumer trigger, so reintroduce
max_pending. Improve latency with in-place flush calls when
appropriate. Use a simple time budget for the readLoop routine.
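A standalone, hypothetical sketch of those ideas; the names, numbers,
and signatures are invented for illustration:

package server

import (
    "net"
    "runtime"
    "sync/atomic"
    "time"
)

func readLoop(nc net.Conn, pending *int64, maxPending int64,
    parse func([]byte), flush func()) {

    buf := make([]byte, 32*1024)
    for {
        // Simple time budget: once it is spent, yield so one busy
        // connection cannot monopolize the loop.
        budget := time.Now().Add(500 * time.Microsecond)
        for {
            n, err := nc.Read(buf)
            if err != nil {
                return
            }
            parse(buf[:n])
            flush() // flush in place to improve latency
            // Pending bytes are the slow consumer trigger: a client
            // over max_pending gets closed.
            if atomic.LoadInt64(pending) > maxPending {
                nc.Close()
                return
            }
            if time.Now().After(budget) {
                break
            }
        }
        runtime.Gosched()
    }
}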
Signed-off-by: Derek Collison <derek@nats.io>
This test originally used only 1 connection. It was then modified to
use another connection to check the effect of publishing on the last
activity. However, when polling for connz we were using [0], which may
not necessarily match the connection we were checking.
I don't think that a second connection was needed in this test, so use
a single connection.
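With a single connection, indexing the connz result is unambiguous; a
sketch of the check, with hypothetical helper names:

nc := createClientConn(t, s) // the only connection in this test
defer nc.Close()

connz := pollConnz(t, url)
if n := len(connz.Conns); n != 1 {
    t.Fatalf("Expected 1 connection, got %d", n)
}
ci := connz.Conns[0] // can only be our connection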
Even for endpoints that currently do not need options (such as Subsz
and Varz), I added an options parameter and an error return value so
that we don't break the API if we ever need to add options for those.
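The resulting shape, sketched (createVarz stands for the internal
builder of the Varz object):

type VarzOptions struct{} // no options defined yet

func (s *Server) Varz(opts *VarzOptions) (*Varz, error) {
    // opts may be nil for now; keeping it in the signature means
    // that defining options later will not break callers.
    return s.createVarz(), nil
}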
In tests, replaced all code doing http.Get, etc. with a readBody
helper function.
Combine server function and monitor handler tests into one test, with
a for loop that tries both modes on a given test (sketched below).
Still need to add Connz.
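A sketch of the readBody helper and of the two-mode loop; the
assertions and URLs are placeholders:

func readBody(t *testing.T, url string) []byte {
    resp, err := http.Get(url)
    if err != nil {
        t.Fatalf("Expected no error: got %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        t.Fatalf("Expected a %d response, got %d", http.StatusOK, resp.StatusCode)
    }
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("Got an error reading the body: %v", err)
    }
    return body
}

for mode := 0; mode < 2; mode++ {
    var v *Varz
    if mode == 0 {
        v, _ = s.Varz(nil) // server function
    } else {
        v = &Varz{}
        json.Unmarshal(readBody(t, url+"/varz"), v) // monitor handler
    }
    // identical assertions run against both modes...
}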
Fixes #612 by adding a new Varz method to the Server. This can be used
by processes embedding gnatsd to access the Varz struct without relying
on the HTTP API. This starts with just Varz but the same pattern could
be expanded to other monitoring APIs in the future.
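For example, an embedding process can do something like this (a
sketch):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/nats-io/gnatsd/server"
)

func main() {
    s := server.New(&server.Options{Port: -1}) // -1 picks a random port
    go s.Start()
    if !s.ReadyForConnections(5 * time.Second) {
        log.Fatal("server did not start")
    }
    v, err := s.Varz(nil)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("server %s, %d connection(s)\n", v.ID, v.Connections)
}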
If the server requires TLS and clients are connecting, and a Connz
request is made while clients are still in the TLS handshake, the call
to tls.Conn.ConnectionState() would block for the duration of the
handshake. This would cause the overall http request to take too long.
We now do not try to gather TLSVersion and TLSCipher from a client
that is still in the TLS handshake.
Resolves #600
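The fix, sketched: handshakeDone stands for however the server knows
that the client completed its handshake (a hypothetical flag here),
and tlsVersionName/tlsCipherName for helpers mapping the numeric IDs
to strings:

if tlsConn, ok := nc.(*tls.Conn); ok && handshakeDone {
    st := tlsConn.ConnectionState() // safe: no longer blocks
    ci.TLSVersion = tlsVersionName(st.Version)
    ci.TLSCipher = tlsCipherName(st.CipherSuite)
}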
Fix various tests that were failing locally when running in parallel
mode (without -p=1).
In reload_test.go, many nats.Conn.Close() calls were missing, which
caused excessive memory usage when running in `-race` mode.
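That is, each test connection should be closed when the test ends, for
instance:

nc, err := nats.Connect(fmt.Sprintf("nats://127.0.0.1:%d", opts.Port))
if err != nil {
    t.Fatalf("Error on connect: %v", err)
}
defer nc.Close() // this defer was missing in many of those tests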