----------------------------------------------------------------
Backward-incompatibility note:
Varz used to embed *Info and *Options, which are other server objects.
However, Info is the struct that the server sends in its protocol to
other servers and clients, so its fields carry json tags since they
must be marshaled to be sent over. Embedding it made those fields
accessible to users calling Varz() and also visible in the http /varz
output. Some fields added to Info in the 2.0 branch also clashed with
json tags in Options, which made cluster{} for instance disappear from
the /varz output: a Cluster string in Info has the same json tag as
Cluster in Options, and Cluster in Info is empty in some cases.
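For illustration, here is a simplified sketch of the kind of clash
involved (not the actual structs): when two embedded structs expose
fields with the same json tag, encoding/json drops the field from the
marshaled output entirely.

type Info struct {
    Cluster string `json:"cluster"`
}
type Options struct {
    Cluster struct {
        Host string `json:"addr"`
    } `json:"cluster"`
}
type Varz struct {
    *Info
    *Options
}
// json.Marshal(&Varz{...}) produces no "cluster" key at all.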
Users that embed NATS and call Server.Varz() directly, without going
through the monitoring endpoint, were also given access (which was not
the intent) to server internals (Info and Options).
Fields that lived in Info, in Options, or directly in Varz and did not
clash with each other could be referenced directly. For instance, this
is how you could access the server ID:
v, _ := s.Varz(nil)
fmt.Println(v.ID)
Another way would be:
fmt.Println(v.Info.ID)
The same goes for fields brought in by embedding Options:
fmt.Println(v.MaxConn)
or
fmt.Println(v.Options.MaxConn)
We have decided to define the fields explicitly in Varz. This means
that if you previously accessed fields through v.Info or v.Options,
you will have to update your code to use the corresponding field
directly, for instance v.ID or v.MaxConn.
Some fields were also duplicated between Info/Options and Varz itself,
so depending on which one your application was accessing, you may
have to update your code.
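A minimal migration sketch, using only the fields already shown above:

v, _ := s.Varz(nil)
// Before (Info/Options are no longer embedded):
//   fmt.Println(v.Info.ID)
//   fmt.Println(v.Options.MaxConn)
// After:
fmt.Println(v.ID)
fmt.Println(v.MaxConn)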
---------------------------------------------------------------
This also fixes races: the Varz object (which pointed to some server
data) was created under the server lock, but marshaling was not done
under that lock, so the two could race.
In addition, the object returned to the user through Server.Varz()
held references to server internal objects; this is fixed by returning
deep copies of those internal objects.
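A minimal sketch of the pattern, with illustrative type and field
names (assumes encoding/json and sync from the standard library):

// Illustrative stand-ins for the real Varz and server state.
type varzSnapshot struct {
    ID          string
    ClusterURLs []string
}
type serverState struct {
    sync.Mutex
    ID          string
    ClusterURLs []string
}

// Copy data into the snapshot while holding the lock, then marshal
// after releasing it so marshaling cannot race with server updates.
func snapshotAndMarshal(s *serverState) ([]byte, error) {
    s.Lock()
    v := &varzSnapshot{ID: s.ID}
    // Deep copy the slice so callers never hold a reference to
    // the server's internal objects.
    v.ClusterURLs = append([]string(nil), s.ClusterURLs...)
    s.Unlock()
    return json.Marshal(v)
}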
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Implemented single server account claim limits for subscriptions, active connections, and message payload.
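A rough sketch of the kind of checks this introduces; the type and
helper names below are hypothetical, for illustration only (a negative
limit means unlimited):

type accountLimits struct {
    MaxSubs    int   // maximum subscriptions
    MaxConns   int   // maximum active client connections
    MaxPayload int64 // maximum message payload in bytes
}

func allowSub(cur int, l accountLimits) bool      { return l.MaxSubs < 0 || cur < l.MaxSubs }
func allowConn(cur int, l accountLimits) bool     { return l.MaxConns < 0 || cur < l.MaxConns }
func allowPayload(sz int64, l accountLimits) bool { return l.MaxPayload < 0 || sz <= l.MaxPayload }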
Signed-off-by: Derek Collison <derek@nats.io>
Add in trusted keys options and binary stamp
User JWT and Account fetch with AccountResolver
Account and User expiration
Account Imports/Exports w/ updates
Import activation expiration
Signed-off-by: Derek Collison <derek@nats.io>
Start lame duck mode in a goroutine in the signal handler because
I think we want to be able to shut down the server while in that
mode.
Kept the closing as a loop in the lameDuckMode() function (did
not use a timer).
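A sketch of the change (handleSignal is an illustrative name; assumes
os and syscall imports):

func handleSignal(s *Server, sig os.Signal) {
    switch sig {
    case syscall.SIGUSR2:
        // Run lame duck mode asynchronously so the server can still
        // be shut down (e.g. by a later signal) while draining.
        go s.lameDuckMode()
    }
}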
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
When receiving the SIGUSR2 signal (or -sl ldm), the server stops
accepting new clients, closes route connections, and spreads the
closing of client connections over a configurable lame duck duration
(default is 30sec). This helps prevent a storm of client reconnects
when a server needs to be shut down.
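A rough sketch of how the closes are spread over that window; the
helper and close call below are illustrative, not the actual loop in
lameDuckMode():

// Close client connections evenly across the lame duck duration
// instead of all at once.
func closeClientsSpread(clients []*client, total time.Duration) {
    if len(clients) == 0 {
        return
    }
    interval := total / time.Duration(len(clients))
    for _, c := range clients {
        c.closeConnection() // illustrative close helper
        time.Sleep(interval)
    }
}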
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Even for endpoints that currently do not need options (such as
Subsz and Varz), I added an options parameter and an error return so
that we don't break the API if we ever need to add options for those.
In tests, replaced all code doing http.Get, etc. with a readBody
helper function.
Combined the server function and monitor handler tests into one test
with a for loop that exercises both modes.
Still need to add Connz.
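For embedded callers this looks roughly like the following (the
wrapper function name is illustrative; options can be nil for now):

func dumpMonitoring(s *server.Server) error {
    // The error return exists so that options can be added later
    // without breaking callers.
    v, err := s.Varz(nil)
    if err != nil {
        return err
    }
    subsz, err := s.Subsz(nil)
    if err != nil {
        return err
    }
    fmt.Printf("id=%s subsz=%+v\n", v.ID, subsz)
    return nil
}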
Fixes #612 by adding a new Varz method to the Server. This can be used
by processes embedding gnatsd to access the Varz struct without relying
on the HTTP API. This starts with just Varz but the same pattern could
be expanded to other monitoring APIs in the future.
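A minimal embedding sketch (assumes the usual gnatsd embedding
pattern with import "github.com/nats-io/gnatsd/server"):

// Start an embedded server and read Varz in-process.
func startAndMonitor() {
    opts := &server.Options{Port: 4222}
    s := server.New(opts)
    go s.Start()
    // Once the server is running, monitoring data is available
    // without the HTTP monitoring port:
    if v, err := s.Varz(nil); err == nil {
        fmt.Println(v.ID)
    }
}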
If the server requires TLS and clients are connecting, and a Connz
request is made while clients are still in the TLS handshake, the
call to tls.Conn.ConnectionState() would block for the duration
of the handshake. This would cause the overall http request to
take too long.
We now do not try to gather TLSVersion and TLSCipher from a
client that is still in the TLS handshake.
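A sketch of the guard; the flag and helper names below are
illustrative, not the server's actual ones:

// Only report TLS details once the handshake has completed, since
// ConnectionState() blocks while the handshake is in progress.
func fillTLSInfo(ci *ConnInfo, tlsConn *tls.Conn, handshakeDone bool) {
    if !handshakeDone {
        return
    }
    st := tlsConn.ConnectionState()
    ci.TLSVersion = tlsVersionName(st.Version)   // illustrative helper
    ci.TLSCipher = tlsCipherName(st.CipherSuite) // illustrative helper
}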
Resolves #600
gnatsd currently uses a global logger. This can cause some problems
(especially around the config-reload work), but global variables are
also just an anti-pattern in general. The current behavior is
particularly surprising because the global logger is configured through
calls to the Server.
This addresses issue #500 by removing the global logger and making it a
field on Server.
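Roughly, the shape of the change (sketch only, not the exact code):

// The logger moves from a package-level variable to a field on the
// Server, so it is configured per instance.
type Server struct {
    // ...existing fields elided...
    logger Logger
}

func (s *Server) Noticef(format string, v ...interface{}) {
    if s.logger != nil {
        s.logger.Noticef(format, v...)
    }
}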