Shuffle the array created when iterating through the gateway URLs
map, since map iteration may not be well randomized with small maps.
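A minimal sketch of the idea (gateway names and URLs below are made up
for illustration):

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Hypothetical map of gateway URLs keyed by gateway name.
	gatewayURLs := map[string]string{
		"gw-a": "nats://gw-a.example.com:7222",
		"gw-b": "nats://gw-b.example.com:7222",
		"gw-c": "nats://gw-c.example.com:7222",
	}
	// Collect the URLs into a slice while iterating the map...
	urls := make([]string, 0, len(gatewayURLs))
	for _, u := range gatewayURLs {
		urls = append(urls, u)
	}
	// ...then shuffle explicitly, since map iteration order is not
	// guaranteed to be well randomized, especially for small maps.
	rand.Shuffle(len(urls), func(i, j int) {
		urls[i], urls[j] = urls[j], urls[i]
	})
	fmt.Println(urls)
}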
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This is a continuation of #1000. Added a configuration option to
specify after how many attempts the repeated error is reported.
The algorithm is now to report only the 1st attempt and then whenever
current attempt % <this config param> == 0.
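A minimal sketch of that rule, with reportAttempts standing in for the
new config parameter (the name is illustrative):

// shouldReport returns true on the very first attempt and then every
// reportAttempts attempts after that.
func shouldReport(attempt, reportAttempts int) bool {
	if reportAttempts <= 0 {
		// Guard for an unset/invalid value; report only the first attempt.
		return attempt == 1
	}
	return attempt == 1 || attempt%reportAttempts == 0
}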
Resolves #969
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
This applies to routes, gateways and leaf node connections.
Failed attempts are reported on the first attempt, after the first
minute, and then every hour.
The connect/error statements now include the attempt number.
Note that in debug mode all attempts are traced, so you may get
duplicate output (one trace for debug, one for info/error).
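A rough sketch of that cadence (the function and its signature are
illustrative, not the actual server code; uses the time package):

// shouldLog reports the very first failed attempt, then the first
// failure occurring at least one minute after the initial one, and
// from then on at most one failure per hour.
func shouldLog(attempt int, firstFailure, lastLogged, now time.Time) bool {
	switch {
	case attempt == 1:
		return true
	case lastLogged.Equal(firstFailure):
		// Only the initial failure has been reported so far: wait one minute.
		return now.Sub(firstFailure) >= time.Minute
	default:
		// Subsequent reports are spaced at least one hour apart.
		return now.Sub(lastLogged) >= time.Hour
	}
}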
Resolves #969
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Previously we would walk all clients bound to an account in order to
collect the leaf nodes when updating the subscription maps.
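A hypothetical sketch of the kind of change this implies (types and
field names are made up, not the actual server code):

type client struct{ /* connection state elided */ }

// account keeps leaf node connections in their own slice so that
// updating the subscription maps no longer requires walking every
// client bound to the account.
type account struct {
	clients []*client // all clients bound to the account
	leafs   []*client // leaf node connections only
}

func (a *account) updateLeafNodes(update func(*client)) {
	// Previously: range over a.clients and filter for leaf nodes.
	// Now: iterate only the tracked leaf node connections.
	for _, l := range a.leafs {
		update(l)
	}
}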
Signed-off-by: Derek Collison <derek@nats.io>
----------------------------------------------------------------
Backward-incompatibility note:
Varz used to embed *Info and *Options, which are other server objects.
However, Info is a struct that the server uses to send protocols to
other servers or clients, and its fields must carry json tags since we
need to marshal them to be sent over the wire. The problem is that this
made those fields accessible to users calling Varz() and also visible
in the http /varz output. Some fields introduced in Info on the 2.0
branch clashed with json tags in Options, which for instance made
cluster{} disappear from the /varz output - because a Cluster string
in Info has the same json tag, and Cluster in Info is empty in some
cases.
Users that embed NATS and were calling Server.Varz() directly, without
going through the monitoring endpoint, were therefore given access
(which was not the intent) to server internals (Info and Options).
Fields that lived in Info, in Options, or directly in Varz and that
did not clash with each other could be referenced directly; for
instance, this is how you could access the server ID:
v, _ := s.Varz(nil)
fmt.Println(v.ID)
Another way would be:
fmt.Println(v.Info.ID)
The same goes for fields that came from the embedded Options:
fmt.Println(v.MaxConn)
or
fmt.Println(v.Options.MaxConn)
We have decided to explicitly define the fields in Varz, which means
that if you previously accessed fields through v.Info or v.Options,
you will have to update your code to use the corresponding field
directly: v.ID or v.MaxConn for instance. Some fields were also
duplicated between Info/Options and Varz itself, so depending on which
one your application was accessing, you may have to update your code.
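In code, the migration looks like this (v and s as in the examples
above):

// Before: fields reached through the embedded objects (no longer available)
//   fmt.Println(v.Info.ID)
//   fmt.Println(v.Options.MaxConn)
// After: fields defined directly on Varz
v, _ := s.Varz(nil)
fmt.Println(v.ID)
fmt.Println(v.MaxConn)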
---------------------------------------------------------------
Another issue that has been fixed is a set of races introduced by the
fact that the Varz object (which points to some server data) was
created under the server lock, but marshaling it was not done under
that lock, which caused races. The fact that the object returned to
the user through Server.Varz() also held references to server internal
objects has been fixed by returning a deep copy of those internal
objects.
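A rough sketch of the resulting pattern, assuming a hypothetical
snapshot helper that deep-copies the relevant server state:

func (s *Server) Varz(opts *VarzOptions) (*Varz, error) {
	// Build the Varz snapshot (including deep copies of nested objects
	// such as cluster and gateway information) while holding the lock...
	s.mu.Lock()
	v := s.snapshotVarz(opts) // hypothetical helper, not the actual name
	s.mu.Unlock()
	// ...so that the caller can marshal or inspect v without holding the
	// lock and without racing with concurrent server updates.
	return v, nil
}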
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
We don't support reload of the leafnode config yet, but we need to
make sure it does not fail the reload process if nothing has changed
(it would otherwise fail because the TLSConfig does change internally
in some cases).
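An illustrative way to make the diff ignore that (field names are
assumptions, not necessarily the actual ones; uses reflect.DeepEqual):

// Compare copies of the old and new leafnode options with the TLS
// config blanked out, so an internally-mutated TLSConfig alone does
// not make the reload think the leafnode section changed.
oldLN, newLN := oldOpts.LeafNode, newOpts.LeafNode
oldLN.TLSConfig, newLN.TLSConfig = nil, nil
leafNodeChanged := !reflect.DeepEqual(oldLN, newLN)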
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
When a server solicits a leaf node connection over TLS and needs to
verify the remote server's certificate, it did not have the root CAs
set in its TLS config.
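What the fix amounts to, roughly (the file path is illustrative):

// Load the configured CA bundle and set it as RootCAs on the
// tls.Config used to dial the remote leaf node, so its certificate
// can be verified.
pool := x509.NewCertPool()
caPEM, err := os.ReadFile("/path/to/leafnode_ca.pem")
if err != nil {
	return err
}
pool.AppendCertsFromPEM(caPEM)
tlsConfig.RootCAs = pool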
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Say there are 2 clusters, A and B. A client connects to A and
publishes messages on an account that B has no interest in. Then a
leaf node server connects to B (using the same account for which the
no-interest was recorded). Cluster B will ask cluster A to switch to
interest-only mode for the leaf node's account. This would cause a
panic.
Signed-off-by: Ivan Kozlovic <ivan@synadia.com>