When trying to debug sentinel connections or max connections errors it
would be very useful to have the ability to see the list of connected
clients to a running sentinel. At the same time it would be very helpful
to be able to name each sentinel connection or kill offending clients.
This commit adds the already defined CLIENT commands back to Redis
Sentinel.
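For illustration, this is the kind of session the change enables,
assuming the default Sentinel port of 26379 (the CLIENT subcommands
themselves are the standard Redis ones):

    redis-cli -p 26379 CLIENT LIST
    redis-cli -p 26379 CLIENT KILL 10.0.0.5:49712

Client applications can likewise send CLIENT SETNAME on their Sentinel
connections so that CLIENT LIST output is easier to read.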
Improvements:
- Return empty string if asking for non-existing section (INFO foo)
- Fix a potential memory leak (an sds created with sdsempty() was
  returned but never freed when more than 2 args were given)
- Clean up argument parsing
- Allow "all" as valid section (same as "default" or zero args currently)
- Move strcasecmp to the end of the evaluation chain in conditionals
  (see the sketch below)
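A minimal standalone sketch of the parsing pattern referred to above
(function name and sections are illustrative, not the actual code):

    #include <stdio.h>
    #include <strings.h>

    /* Cheap flag checks come first so strcasecmp() only runs when
     * needed; "all" behaves like "default" or no argument, and an
     * unknown section matches nothing, leaving the reply empty. */
    void append_info_sections(const char *section) {
        int allsections = section && !strcasecmp(section, "all");
        int defaults = (section == NULL) || !strcasecmp(section, "default");

        if (allsections || defaults || !strcasecmp(section, "server"))
            printf("# Server\r\n");
        if (allsections || defaults || !strcasecmp(section, "clients"))
            printf("# Clients\r\n");
    }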
Also, since we're C99, I moved some variable declarations closer to
where they are actually used (this saves us from needing to free an
empty info string if we detect argument errors up front).
Closes #1915. Closes #1966.
I guess the initial goal of the initialization was to suppress a GCC
warning, but if we have to initialize, we can do it with the base-case
value instead of NULL, which is never retained.
Sentinel queries the INFO from every master and from every replica of
every master.
We can cache the INFO results in Sentinel so Sentinel can be a single
place to quickly get all INFO output for an entire Sentinel monitoring
group.
This commit gives us SENTINEL INFO-CACHE in two forms:
- SENTINEL INFO-CACHE — returns all masters and all replicas
- SENTINEL INFO-CACHE master0 master1 ... masterN — vararg specify masters
Results are returned as a multibulk reply with two top-level entries
for each master. The first entry for each master is the name of the master.
The second entry is a nested multibulk reply with the contents of INFO,
first for the master, then an additional entry for each of the
replicas.
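To make the layout concrete, here is an illustrative reply as
redis-cli would render it, for a single monitored master with two
replicas (names and placeholders are made up):

    1) "mymaster"
    2) 1) "<INFO output of the master>"
       2) "<INFO output of replica 1>"
       3) "<INFO output of replica 2>"

The first bulk of each pair is the master name; the nested array holds
the cached INFO output of the master followed by one entry per replica.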
- Remove trailing newlines from redis.conf
- Fix comment misspelling
- Clarify zipEncodeLength usage and a C API mention (#1243, #1242)
- Fix cluster typos (inspired by @papanikge #1507)
- Fix rewite -> rewrite in a few places (inspired by #682)
Closes #1243, #1242, #1507
The code to check the number of voters was never updated to follow the new
Sentinel specification, so the number of voters was computed using only
the set of Sentinels that provided a vote.
This means that the majority could change across partitions, even
though in practice the issue was usually not triggered because of the
configured quorum check (what was broken was the other, implicit check
that in any case requires half of the known Sentinels to agree in
order to start a failover).
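A standalone sketch of the corrected computation (names are
illustrative, not the literal sentinel.c code):

    /* The denominator is every Sentinel known to monitor the master
     * plus ourselves, regardless of how many of them actually voted. */
    static int failover_majority(int known_sentinels) {
        int voters = known_sentinels + 1;   /* all known Sentinels + myself */
        return voters / 2 + 1;              /* votes required to win */
    }

For example, with 4 other known Sentinels the majority is 3, even if
only 2 of them replied with a vote.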
The original implementation was modified in order to allow selectively
announcing a different IP or port, and to emit the two options in the
config file when it is rewritten.
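For reference, in sentinel.conf the announce options take this form
(the values are illustrative):

    sentinel announce-ip 10.0.0.4
    sentinel announce-port 26379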
Some deployments need traffic sent from a specific address. This
change uses the same policy as Cluster where the first listed bindaddr
becomes the source address for outgoing Sentinel communication.
Fixes #1667
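A minimal sketch of how the source address can be applied when
creating the outgoing link, using the hiredis binding API (an
illustration, not the literal sentinel.c code):

    #include <hiredis/async.h>

    /* If at least one "bind" address is configured, hand the first one
     * to hiredis as the source address so the outgoing connection
     * originates from that interface; otherwise let the kernel pick. */
    redisAsyncContext *connect_from_first_bindaddr(const char *ip, int port,
                                                   char **bindaddr, int count) {
        if (count > 0)
            return redisAsyncConnectBind(ip, port, bindaddr[0]);
        return redisAsyncConnect(ip, port);
    }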
Eventual configuration convergence is guaranteed by our periodic hello
messages to all the instances, however when there are important notices
to share, it is better to make a phone call. With this commit we force
a hello message to the other Sentinel and Redis instances within 100
milliseconds of a config update, which is practically better than
waiting a few seconds.
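One possible way to implement the forced hello, sketched with made-up
names (the real field and constant names may differ): pretend the last
hello was published more than a full period ago, so the periodic
publisher, which ticks roughly every 100 milliseconds, fires on its
next pass instead of waiting out the full period.

    #include <stdint.h>

    /* Move the recorded publish time back past one full period. */
    void force_hello_soon(int64_t *last_pub_time_ms, int64_t now_ms,
                          int64_t publish_period_ms) {
        *last_pub_time_ms = now_ms - (publish_period_ms + 1);
    }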
A missing check of the SRI_PROMOTED flag caused Sentinel to treat the
promoted slave, once turned into a master during failover, as if it
were a normal instance.
Normally this problem was not apparent because during real failovers
the old master is down, so the buggy code path was not entered; however
with manual failovers via the SENTINEL FAILOVER command the problem was
easily triggered.
This commit prevents promoted slaves from getting reconfigured;
moreover, we now explicitly check that during a failover the slave
turning into a master is the one we selected for promotion and not a
different one.
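A standalone sketch of the guard described above (the flag value and
names are illustrative, not the literal sentinel.c code):

    #define SRI_PROMOTED (1<<4)   /* illustrative bit, not the real constant */

    /* During a failover, a slave reporting itself as a master is
     * accepted only if it is the one we flagged for promotion;
     * promoted slaves are also skipped by the usual
     * slave-reconfiguration path. */
    static int accept_role_switch(int slave_flags, int failover_in_progress) {
        if (!failover_in_progress) return 0;
        return (slave_flags & SRI_PROMOTED) != 0;
    }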
This implements the new Sentinel-Client protocol for the Sentinel
side: instances are now reconfigured using a transaction that ensures
the config is rewritten in the target instance and that clients lose
their connection to the instance, so that they are forced to ask
Sentinel, reconnect to the instance, and verify the instance role with
the new ROLE command.
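Roughly, the reconfiguration is delivered as a transaction along these
lines (a paraphrase, not the exact command stream; the arguments depend
on whether the instance must become a master or a replica):

    MULTI
    SLAVEOF <new-master-ip> <new-master-port>
    CONFIG REWRITE
    CLIENT KILL TYPE normal
    EXEC

Killing the normal clients inside the same transaction guarantees that
no client can keep using an instance whose role just changed without
first re-asking Sentinel and checking ROLE.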
When a Sentinel performs a failover (successful or not), or when a
Sentinel votes for a different Sentinel trying to start a failover, it
sets a min delay before it will try to get elected for a failover.
While not strictly needed, because if multiple Sentinels try to fail
over the same master at the same time only one configuration will
eventually win, this serialization is very useful in practice.
Normal failovers are cleaner: one Sentinel starts to failover, the
others update their config when the Sentinel performing the failover
is able to get the selected slave to move from the role of slave to the
one of master.
However, this delay used to be implicit, so after a failed failover
users could see Sentinels not reacting for some time without any
feedback in the logs for the poor sysadmin waiting for clues.
This commit makes Sentinels more verbose about the delay: when a master
is down and a failover attempt is not performed because the delay has
not yet elapsed, something like the following is logged:
Next failover delay: I will not start a failover
before Thu May 8 16:48:59 2014
In sentinelFlushConfig() fd could be undefined when the following if
statement was true:
if (rewrite_status == -1) goto werr;
This could cause random file descriptors to get closed.
GCC-4.9 warned about this, but clang didn't.
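A standalone sketch of the shape of the fix (not the actual
sentinelFlushConfig() code): initialize the descriptor and guard the
error path, so that jumping to werr before open() cannot close an
arbitrary descriptor.

    #include <fcntl.h>
    #include <unistd.h>

    int flush_config_sketch(const char *path, int rewrite_status) {
        int fd = -1;
        if (rewrite_status == -1) goto werr;            /* fd not opened yet */
        if ((fd = open(path, O_WRONLY)) == -1) goto werr;
        if (fsync(fd) == -1) goto werr;
        if (close(fd) == -1) { fd = -1; goto werr; }
        return 0;
    werr:
        if (fd != -1) close(fd);                        /* only if actually opened */
        return -1;
    }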
This commit fixes the following warning:
sentinel.c: In function 'sentinelReceiveHelloMessages':
sentinel.c:2156:43: warning: variable 'master' set but not used [-Wunused-but-set-variable]
sentinelRedisInstance *ri = c->data, *master;
Test the sentinel.tilt condition at the top and return if it is true.
This allows removing the check for the tilt condition in the remaining
code paths of the function.
Failure detection in Sentinel is ping-pong based. It used to work by
remembering the last time a valid PONG reply was received, and checking
if the reception time was too old compared to the current time.
PINGs were sent at a fixed interval of 1 second.
This works in a decent way, but does not scale well when we want to set
very small values of "down-after-milliseconds" (this is basically the
node timeout).
This commit reimplements the failure detection making a number of
changes. Some changes are inspired by the Redis Cluster failure
detection code:
* A new last_ping_time field is added to the instance representation.
If non-zero, we have an active ping that was sent at the specified
time. When a valid reply to the ping is received, the field is zeroed
again.
* last_ping_time is not reset when we reconnect the link or send a new
ping, so from our point of view it represents the time we started
waiting for the instance to reply to our pings without receiving a
reply.
* last_ping_time is now used in order to check if the instance is
timed out (see the sketch after this list). This means that we can
have a node timeout of 100 milliseconds and the system will still work
well, since the new check is not bound to the period used to send
pings.
* Pings are now sent every second, or more often if the value of
down-after-milliseconds is less than one second, with the ping period
never dropping below 100 milliseconds (10 Hz).
* Link reconnection code was improved. This is used in order to try to
reconnect the link when we are at 50% of the node timeout without a
valid reply received yet. However the old code triggered unnecessary
reconnections when the node timeout was very small. Now that should be
ok.
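A standalone sketch of the new timeout check (field names follow the
description above; this is not the literal sentinel.c code):

    #include <stdint.h>

    /* With last_ping_time recording when the oldest unanswered ping
     * was sent, the subjective-down test no longer depends on how
     * often pings are sent. */
    static int ping_timed_out(int64_t now_ms, int64_t last_ping_time_ms,
                              int64_t down_after_ms) {
        if (last_ping_time_ms == 0) return 0;   /* no ping currently pending */
        return (now_ms - last_ping_time_ms) > down_after_ms;
    }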
The new code passes the tests, but more testing is needed, along with
more unit tests stressing the failure detector, so currently this is
merged only into the unstable branch.
Sentinel's main safety argument is that there are no two configurations
for the same master with the same version (configuration epoch).
For this to be true, Sentinels must be authorized by a majority.
Additionally, Sentinels must do two important things:
* Never vote again for the same epoch.
* Never exchange an old vote for a fresh one.
The first prerequisite, in a crash-recovery system model, requires
persisting master->leader_epoch to durable storage before replying to
messages. This was not the case.
We also make sure to persist the current epoch in order to never reply
to stale vote requests from other Sentinels after a recovery.
The configuration is persisted by making use of fsync(); in the context
of this code this is considered a good enough guarantee that after a
restart our durable state is restored, however this may not always be
the case depending on the kind of hardware and operating system used.
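A minimal sketch of the durability step (illustrative, not the actual
config-rewrite code): after the configuration file has been rewritten
with the new vote and epoch, it is fsync()ed before the reply is sent.

    #include <fcntl.h>
    #include <unistd.h>

    /* Returns 0 if the already-rewritten config file was flushed to disk. */
    int persist_config(const char *path) {
        int fd = open(path, O_WRONLY);
        if (fd == -1) return -1;
        if (fsync(fd) == -1) { close(fd); return -1; }
        return close(fd);
    }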
Now the way HELLO messages are received is unified.
It is no longer necessary for Sentinels to converge to the highest
configuration for a master by chatting via some Redis instance: they
are now able to exchange configurations directly.
Note that this commit does not include the (trivial) change needed to
send HELLO messages to Sentinel instances as well, since by mistake I
committed that change in the previous commit, which refactored hello
message processing into a separate function.