The PING trigger was improved again by using two fields instead of a
single one to remember when the last ping was sent:
1. The "active" ping is the time at which we sent the last ping that
still received no reply. However we continue to ping non replying
instances even if they have an old active ping: the link may be
disconnected and reconencted in the meantime so the older pings may get
lost even if it's a TCP socket.
2. The "last" ping is the time at which we really sent the last ping
on the wire, and this is used in order to throttle the amount of pings
we send during failures (when no pong is received).
All in all the failure detector effectiveness should be identical, but we
avoid flooding instances with pings during failures or when they are
slow.
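A minimal sketch of the two timestamps, with illustrative field names (the
actual fields live in the sentinelRedisInstance structure in sentinel.c):

    /* Both timestamps are in milliseconds. act_ping_time is cleared when a
     * PONG is received and set only when no ping is already pending;
     * last_ping_time is updated every time a ping hits the wire. */
    typedef long long mstime_t;

    struct pingState {
        mstime_t act_ping_time;  /* Oldest ping still waiting for a reply,
                                    0 if no ping is pending. */
        mstime_t last_ping_time; /* Last ping actually sent on the wire,
                                    used only to throttle re-pings. */
    };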
It's OK to ping as soon as the ping period has elapsed since we received
the last PONG, but it's not good to ping again while a previous ping is
still pending. With this change we send a new ping while one is pending
only if twice the ping period has elapsed since the still-pending ping
was sent.
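A hedged sketch of the resulting condition, assuming the illustrative field
names above and a configured ping period in milliseconds:

    /* Return non-zero if a new PING should be sent now. */
    int pingIsDue(struct pingState *ps, mstime_t last_pong_time,
                  mstime_t now, mstime_t ping_period)
    {
        if (ps->act_ping_time == 0) {
            /* No ping pending: ping once the period has elapsed since the
             * last PONG was received. */
            return (now - last_pong_time) > ping_period;
        } else {
            /* A ping is already pending: re-ping only if twice the ping
             * period has elapsed since the still-pending ping was sent. */
            return (now - ps->act_ping_time) > ping_period * 2;
        }
    }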
This is useful for debugging and logging activities: given a
sentinelRedisInstance object, the new helper returns a C string representing
the instance type: master, slave, or sentinel.
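A sketch of such a helper, assuming the usual SRI_* instance flags used in
sentinel.c:

    const char *sentinelRedisInstanceTypeStr(sentinelRedisInstance *ri) {
        if (ri->flags & SRI_MASTER) return "master";
        else if (ri->flags & SRI_SLAVE) return "slave";
        else if (ri->flags & SRI_SENTINEL) return "sentinel";
        else return "unknown";
    }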
This new command triggers a config flush to save the in-memory config to
disk. This is useful when a configuration management system or a
package manager wipes out your Sentinel config while the process is
still running and has not yet been restarted. It can also be useful
for scripting a backup and migration, or a clone, of a running Sentinel.
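A hedged sketch of how the subcommand could be dispatched inside
sentinelCommand() in sentinel.c, reusing the existing sentinelFlushConfig()
helper that rewrites and fsyncs the config file (surrounding code is
approximate):

    /* Inside sentinelCommand(), among the other subcommand branches: */
    if (!strcasecmp(c->argv[1]->ptr, "flushconfig")) {
        /* SENTINEL FLUSHCONFIG: force a rewrite of the on-disk config. */
        sentinelFlushConfig();
        addReply(c, shared.ok);
        return;
    }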
Since with a previous commit Sentinels now persist their unique ID, we
no longer need to detect duplicated Sentinels and re-add them. We remove
and re-add an instance, using different events, only when the same
Sentinel switches address, without generating a new +sentinel event.
Previously Sentinels always changed their unique ID across restarts,
relying on the server.runid field. This was not a good idea, and forced
Sentinel to rely on detecting duplicated Sentinels and on a potentially
dangerous clean-up and re-add operation of the Sentinel instance that was
rebooted.
Now the ID is generated at the first start and persisted in the
configuration file, so that a given Sentinel will keep its unique
ID forever (unless the configuration is manually deleted or the
filesystem gets corrupted).
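A hedged sketch of the first-start logic, assuming a sentinel.myid buffer
loaded from the config file and helpers similar to the ones in the Redis
code base (names approximate):

    void sentinelPickMyId(void) {
        if (sentinel.myid[0] == '\0') {
            /* No ID found in the config file: this is the first start.
             * Generate a random hex ID and persist it right away. */
            getRandomHexChars(sentinel.myid, CONFIG_RUN_ID_SIZE);
            sentinelFlushConfig();
        }
        /* Otherwise keep the ID loaded from the configuration file. */
    }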
Originally, only the +slave event that occurs when a slave is
reconfigured during sentinelResetMasterAndChangeAddress triggered a flush
of the config to disk. However, newly discovered slaves apparently do
not trigger this flush, even though they do trigger the +slave event.
So if you start a Sentinel, add a master, then add a slave to the
master (as a way to reproduce it), you'll see the +slave event issued,
but the Sentinel config won't be updated with the known-slave entry.
This change makes Sentinel flush the config when a new slave is
detected in sentinelRefreshInstanceInfo.
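A hedged sketch of the fix in the slave-discovery branch of
sentinelRefreshInstanceInfo(), with approximate surrounding code and macro
names:

    if (sentinelRedisInstanceLookupSlave(ri, ip, atoi(port)) == NULL) {
        sentinelRedisInstance *slave =
            createSentinelRedisInstance(NULL, SRI_SLAVE, ip, atoi(port),
                                        ri->quorum, ri);
        if (slave != NULL) {
            sentinelEvent(LL_NOTICE, "+slave", slave, "%@");
            /* New: persist the freshly discovered slave immediately, so
             * the known-slave entry survives a restart. */
            sentinelFlushConfig();
        }
    }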
Rewriting the config in the loop that adds slaves back after a master
reset, in order to handle switching to another master, is useless: it
just adds latency, since there is an fsync call in the inner loop,
without providing any additional guarantee. Quite the contrary: if
the server crashes after the first loop iteration, we end up with just a
single slave entry, losing all the other information.
It is wiser to rewrite the config at the end, once the full new
state is configured.
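A hedged sketch of the loop in sentinelResetMasterAndChangeAddress() after
the change (names approximate):

    for (j = 0; j < numslaves; j++) {
        sentinelRedisInstance *slave =
            createSentinelRedisInstance(NULL, SRI_SLAVE, slaves[j]->ip,
                                        slaves[j]->port, master->quorum,
                                        master);
        if (slave) sentinelEvent(LL_NOTICE, "+slave", slave, "%@");
        /* No sentinelFlushConfig() here: an fsync per slave only adds
         * latency, and a crash mid-loop would persist a partial list. */
    }

    /* Persist the fully reconstructed state once, at the end. */
    sentinelFlushConfig();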
This fixes issue #2535, which was actually a hiredis library bug (I
submitted an issue and fix to the redis/hiredis repo as well).
When an asynchronous hiredis connection subscribes to a Pub/Sub channel
and gets an error, and in other related conditions, the function
redisProcessCallbacks() enters a code path where the link is
disconnected, but the function returns before freeing the allocated
reply object. This causes a memory leak. The leak was trivial to
trigger in Redis Sentinel, which uses hiredis, every time we tried to
subscribe to an instance that required a password, in case the Sentinel
was configured either with the wrong password or with no password at
all. In this case, the -AUTH error caused the leaking code path to be
executed.
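A hedged sketch of the patched fragment in hiredis async.c,
redisProcessCallbacks(), with the surrounding code paraphrased:

    if (((redisReply*)reply)->type == REDIS_REPLY_ERROR) {
        c->err = REDIS_ERR_OTHER;
        snprintf(c->errstr, sizeof(c->errstr), "%s",
                 ((redisReply*)reply)->str);

        /* Fix: release the reply before bailing out, otherwise it leaks
         * every time an error (e.g. -AUTH) is received on this path. */
        c->reader->fn->freeObject(reply);

        __redisAsyncDisconnect(ac);
        return;
    }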
It was verified with Valgrind that after this change the leak no longer
happens in Sentinel with a misconfigured authentication password.