The result of this one-char bug was pretty serious: if the new master
had the same port as the previous master, but a different IP address,
non-leader Sentinels would not be able to recognize the configuration
change.
This commit fixes issue #1394.
Many thanks to @shanemadden, who reported the bug and helped
investigate it.
At the end of the file, CONFIG REWRITE adds a comment line:
# Generated by CONFIG REWRITE
followed by the additional config options required. However, under
particular conditions (when a given set of options changed over time),
this line was added again and again at every rewrite.
Now, if it was already encountered, it is not added a second time.
This is especially important for Sentinel, which rewrites the config at
every state change.
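A minimal standalone sketch of the "already present" check described
above (the helper name and buffer-based approach are illustrative, not
the actual config.c rewrite state):

    #include <string.h>

    #define REWRITE_MARKER "# Generated by CONFIG REWRITE"

    /* Return 1 if the marker comment already appears as a whole line of
     * the config buffer, so the rewrite can avoid appending it twice. */
    static int configHasMarker(const char *buf) {
        size_t mlen = strlen(REWRITE_MARKER);
        const char *p = buf;
        while (p != NULL) {
            if (strncmp(p, REWRITE_MARKER, mlen) == 0 &&
                (p[mlen] == '\n' || p[mlen] == '\0')) return 1;
            p = strchr(p, '\n');
            if (p != NULL) p++;
        }
        return 0;
    }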
Some requests are just to know if the master is down, and in this case
the runid in the request is set to "*"; others are actually meant to
seek a vote and get elected. In the latter case the runid is set to the
runid of the instance seeking the vote.
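For illustration, the two request shapes look like this (argument
layout inferred from the description above, so treat it as a sketch
rather than the exact wire format):

    SENTINEL is-master-down-by-addr <ip> <port> <current-epoch> *        (only asking if the master is down)
    SENTINEL is-master-down-by-addr <ip> <port> <current-epoch> <runid>  (also asking for a vote in this epoch)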
Also the sentinel configuration rewriting was modified in order to
account for failover in progress, where we need to provide the promoted
slave address as master address, and the old master address as one of
the slave addresses.
We'll use CONFIG REWRITE (internally) in order to store the new
configuration of a Sentinel after the internal state changes. In order
to do so, we need configuration options (that usually the user will not
touch at all) about the config epoch of the master, the Sentinels and
slaves known for this master, and so forth.
The time Sentinel waits, after a slave is detected as configured to the
wrong master, before reconfiguring it, is now the failover_timeout
time, as this makes more sense in order to give the Sentinel performing
the failover enough time to reconfigure the slaves slowly (if required
by the configuration).
Also, we now PUBLISH the new configuration more frequently, as this
allows switching the reappearing master back to slave faster.
Also, the default failover timeout was changed to 3 minutes, as the
failover is a fairly fast procedure most of the time, unless there is a
very large number of slaves and the user chose to configure them
sequentially (in that case the user should change the failover timeout
accordingly).
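For reference, the corresponding sentinel.conf directive takes the
timeout in milliseconds, so the 3 minute default corresponds to
(master name illustrative):

    sentinel failover-timeout mymaster 180000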
Once we switched configuration during a failover, we should advertise
the new address.
This was a serious race condition, as the Sentinel performing the
failover for a moment advertised the old address with the new
configuration epoch: once transmitted to the other Sentinels, the
broken configuration would remain there forever, until the next
failover (because a greater configuration epoch is required to
overwrite an older one).
Now Sentinels believe that the current configuration is always the
winner and should be applied by all Sentinels, instead of trying to
adapt their view of the cluster based on what they observe.
So the only way to modify what a Sentinel believes to be the truth is
to win an election and advertise the new configuration via Pub/Sub with
a greater configuration epoch.
Changes to leadership handling.
Now the leader gets selected by every Sentinel, for a specified epoch,
when the SENTINEL is-master-down-by-addr request is sent.
This command now includes the runid and the currentEpoch of the
instance seeking the vote. A Sentinel only votes a single time in a
given epoch.
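A minimal standalone sketch of the "one vote per epoch" rule, with
hypothetical field names standing in for the real Sentinel state:

    #include <stdio.h>

    typedef struct {
        unsigned long long current_epoch; /* Highest epoch seen so far. */
        unsigned long long voted_epoch;   /* Epoch of our last vote.    */
        char voted_runid[41];             /* Runid we voted for.        */
    } sentinelSketch;

    /* Grant at most one vote per epoch; return the runid we voted for
     * in req_epoch (possibly an earlier vote, not the requester). */
    const char *voteForLeader(sentinelSketch *s,
                              unsigned long long req_epoch,
                              const char *req_runid) {
        if (req_epoch > s->current_epoch) s->current_epoch = req_epoch;
        if (req_epoch > s->voted_epoch) {
            s->voted_epoch = req_epoch;
            snprintf(s->voted_runid, sizeof(s->voted_runid), "%s",
                     req_runid);
        }
        return s->voted_runid;
    }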
Still a work in progress, does not even compile at this stage.
Sentinel state now includes the idea of current epoch and config epoch.
In the Hello message, which is now published both on masters and
slaves, a Sentinel no longer just advertises itself but also broadcasts
its current view of the configuration: the master name / ip / port and
its current epoch.
Sentinels receiving such information switch to the new master if the
configuration epoch received is newer and the ip / port of the master
are indeed different compared to the previous ones.
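Sketched in simplified C (the condition is taken from the description
above; the helper name is an assumption, not a verified sentinel.c
call):

    /* On a Hello message: adopt the advertised master address only
     * when the epoch is newer and the address actually differs. */
    if (hello_epoch > master->config_epoch &&
        (strcmp(hello_ip, master->addr->ip) != 0 ||
         hello_port != master->addr->port))
    {
        sentinelResetMasterAndChangeAddress(master, hello_ip, hello_port);
        master->config_epoch = hello_epoch;
    }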
Now there is a function that handles the update of the local slot
configuration every time we have some new info about a node and its set
of served slots and configEpoch.
Moreover, the UPDATE packets are now processed when received (it was a
work in progress in the previous commit).
The commit also introduces detection of nodes publishing an outdated
configuration. More work in progress to send an UPDATE packet to inform
them of the config change.
AUTH and SCRIPT KILL were sent without incrementing the pending commands
counter. Clearly this needs some kind of wrapper doing it for the
caller, in order to be less bug-prone.
This change makes Sentinel less fragile about a number of failure modes.
This commit also fixes a different bug as a side effect: the SLAVEOF
command was sent multiple times without incrementing the pending
commands count.
The previous implementation of SCAN parsed the cursor in the generic
function implementing SCAN, SSCAN, HSCAN and ZSCAN.
The actual higher-level command implementation only checked for empty
keys and returned ASAP in that case. The result was that inverting the
arguments of, for instance, SSCAN, writing:
SSCAN 0 key
instead of
SSCAN key 0
resulted in no error, since "0" is very likely a non-existing key name,
and the iterator simply returned no elements at all.
In order to fix this issue the code was refactored to extract the
function that parses the cursor and returns the error. Every
higher-level command implementation now parses the cursor first, and
only later checks whether the key exists or not.
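A standalone sketch of what the extracted cursor parser has to enforce
(illustrative, not the actual helper): the whole argument must be a
valid unsigned number, so a swapped key name fails immediately.

    #include <stdlib.h>
    #include <errno.h>

    /* Strictly parse a SCAN cursor; return 0 on success, -1 on error. */
    int parseScanCursor(const char *s, unsigned long *cursor) {
        char *end;
        errno = 0;
        *cursor = strtoul(s, &end, 10);
        if (errno == ERANGE || end == s || *end != '\0') return -1;
        return 0;
    }

With this called first, SSCAN 0 key now fails on the non-numeric cursor
"key" instead of silently iterating a missing key.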
The previous implementation assumed that the first call always happens
with the cursor set to 0; this may not be the case, and we want to
return 0 anyway, otherwise the (broken) client code will loop forever.
The new implementation is capable of iterating not only the keyspace
but also sets, hashes, and sorted sets, and can be used to implement
SSCAN, ZSCAN and HSCAN.
All the internal state of the cluster involving time is now using
mstime_t and mstime() in order to use millisecond resolution.
Also, the clusterCron() function is now called at 10 Hz instead of
1 Hz.
The cluster node_timeout must also be configured in milliseconds by the
user in redis.conf.
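In redis.conf this means the value is now expressed in milliseconds,
e.g. (value illustrative):

    cluster-node-timeout 15000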
When a slave requests our vote, the configEpoch it claims for its
master and for the set of served slots must be greater than or equal to
the configEpoch of the nodes serving these slots in the current
configuration of the master granting its vote.
In other terms, masters don't vote for slaves having a stale
configuration for the slots they want to serve.
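A standalone sketch of the check, with a deliberately simplified view
of the cluster state (the arrays and names here are placeholders, not
the actual cluster.c layout):

    #include <stdint.h>

    #define CLUSTER_SLOTS 16384

    /* slot_owner_epoch[i]: configEpoch of whoever serves slot i in our
     * view (0 if unassigned). claimed[i]: nonzero for slots the
     * requesting slave claims. claimed_epoch: the configEpoch it
     * advertises. Return 1 to grant the vote, 0 to deny it as stale. */
    int voteAllowedForClaimedSlots(const uint64_t *slot_owner_epoch,
                                   const unsigned char *claimed,
                                   uint64_t claimed_epoch) {
        for (int i = 0; i < CLUSTER_SLOTS; i++) {
            if (claimed[i] && slot_owner_epoch[i] > claimed_epoch)
                return 0;
        }
        return 1;
    }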
Sometimes when we resurrect a cached master after a successful partial
resynchronization attempt, there is pending data in the output buffers
of the client structure representing the master (likely REPLCONF ACK
commands).
If we don't reinstall the write handler, it will never be installed
again by addReply*() family functions as they'll assume that if there is
already data pending, the write handler is already installed.
This bug caused some slaves, after a successful partial sync, to never
send REPLCONF ACK, and to be continuously detected as timing out by the
master, with a disconnection / reconnection loop.
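A sketch of the fix in simplified form (the event loop calls are real
ae.c / networking.c names, but the surrounding logic is condensed from
the description above):

    /* While resurrecting the cached master as a normal client: if
     * output is already pending, reinstall the write handler ourselves,
     * because addReply*() will assume it is installed and never do it. */
    if (c->bufpos > 0 || listLength(c->reply) > 0) {
        aeCreateFileEvent(server.el, c->fd, AE_WRITABLE,
                          sendReplyToClient, c);
    }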
Since we started sending REPLCONF ACK from slaves to masters, the
lastinteraction field of the client structure is always refreshed as
soon as there is room in the socket output buffer, so masters in timeout
are detected with too much delay (the socket buffer takes a lot of time
to be filled by small REPLCONF ACK <number> entries).
This commit only counts data received as interactions with a master,
solving the issue.
There was a bug that overestimated the amount of backlog available;
however, this could only happen when a slave was asking for an offset
that was in the "future" compared to the master replication backlog.
Now this case is handled well and logged as an incident in the master
log file.
The new API is able to remember operations to perform before returning
to the event loop, such as checking if there is the failover quorum for
a slave, saving and fsyncing the configuration file, and so forth.
Because these operations are performed before returning to the event
loop, we are sure that messages sent in the same event loop run will be
delivered *after* the configuration is already saved, which is
sometimes a requirement. For instance, we want to publish a new epoch
only when it is already stored in nodes.conf, in order to avoid going
back in the logical clock when a node is restarted.
This new API provides a big performance advantage compared to saving and
possibly fsyncing the configuration file multiple times in the same
event loop run, especially in the case of big clusters with tens or
hundreds of nodes.
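A minimal standalone sketch of the flag-accumulation pattern (the flag
names and the puts() bodies are illustrative; the real API works on the
actual cluster state):

    #include <stdio.h>

    #define TODO_HANDLE_FAILOVER (1 << 0)
    #define TODO_SAVE_CONFIG     (1 << 1)
    #define TODO_FSYNC_CONFIG    (1 << 2)

    static int todo_before_sleep = 0;

    /* May be called many times while processing events: sets flags. */
    void doBeforeSleep(int flags) { todo_before_sleep |= flags; }

    /* Called once before re-entering the event loop: each pending
     * operation runs a single time, however often it was requested. */
    void beforeSleep(void) {
        if (todo_before_sleep & TODO_HANDLE_FAILOVER)
            puts("check failover quorum");
        if (todo_before_sleep & TODO_SAVE_CONFIG) puts("save nodes.conf");
        if (todo_before_sleep & TODO_FSYNC_CONFIG) puts("fsync nodes.conf");
        todo_before_sleep = 0;
    }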
The new algorithm does not check reply times, as checking the
currentEpoch in the reply ensures that the reply is about the current
election process.
The old algorithm used a PROMOTED flag and explicitly checked for
slave->master conversions. With the new cluster meta-data propagation
algorithm we just look at the configEpoch to check if we need to
reconfigure slots, then:
1) If a node is a master but it reaches zero served slots because of
the reconfiguration, or
2) If a node is a slave but its master reaches zero served slots
because of a reconfiguration,
we switch it to a replica of the new slots owner.
We need to:
1) Increment the configEpoch.
2) Save it to disk and fsync the file.
3) Broadcast the PONG with the new configuration.
Since other nodes will receive the updated configuration, we need to be
sure to restart with this new config in the event of a crash.
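In sketch form, with helper names assumed for illustration (only the
ordering shown matters):

    server.cluster->currentEpoch++;                     /* 1) new epoch    */
    myself->configEpoch = server.cluster->currentEpoch;
    clusterSaveConfigOrDie(1);                          /* 2) save + fsync */
    clusterBroadcastPong();                             /* 3) announce     */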
First change: now there is no need to be a master in order to detect a
failure; however, a majority of masters signaling PFAIL or FAIL is
still needed.
This change is important because it allows slaves rejoining the cluster
after a partition to sense the FAIL condition so that eventually all the
nodes agree on failures.
The time is sent in requests, and copied back in reply packets.
This way the receiver can compare the time field in a reply with its
local clock and check the age of the request associated with this reply.
This is an easy way to discard delayed replies. Note that only one
clock is used here, that of the node sending the packet. The receiver
only copies the field back into the reply, so no synchronization is
needed between the clocks of different hosts.
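A standalone sketch of the idea (struct layout illustrative; the point
is that the sender's clock is only ever compared with itself):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/time.h>

    typedef struct { uint64_t time; /* ... other fields ... */ } msgSketch;

    static uint64_t mstime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (uint64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }

    /* Sender: stamp the request with the local clock. */
    void stampRequest(msgSketch *req) { req->time = mstime(); }

    /* Receiver: copy the field back verbatim, no clock sync required. */
    void buildReply(const msgSketch *req, msgSketch *rep) {
        rep->time = req->time;
    }

    /* Sender again: age of the request this reply refers to. */
    uint64_t replyAge(const msgSketch *rep) { return mstime() - rep->time; }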
Handshake nodes should turn into normal nodes or be freed in a
reasonable amount of time, otherwise they'll keep accumulating if the
address they are associated with is not reachable for some reason.
The code freed a reply object that was never created, resulting in a
segfault every time randomkey returned a key that was deleted before we
queried it for size.
Multiple missing calls to lua_pop prevented the error handler function
pushed on the stack for lua_pcall() from being popped before returning,
causing a memory leak in almost all the code paths of EVAL (both
successful calls and calls returning errors).
This caused two issues: Lua leaking memory (this was very visible in
the INFO memory output, as the 'used_memory_lua' field reported an
ever-increasing amount of memory used), and, as a result, slower and
slower GC cycles resulting in all the CPU being used.
Thanks to Tanguy Le Barzic for noticing something was wrong with his 2.8
slave, and for creating a testing EC2 environment where I was able to
investigate the issue.
We are sure the string is large, since when the sds optimization branch
is entered it means that it was not possible to encode it as EMBSTR due
to size concerns.
When no encoding is possible, at least try to reallocate the sds string
into one that does not waste memory with free space at the end of the
buffer, when the string is large enough.
We are sure that a string that is longer than 21 chars cannot be
represented by a 64 bit signed integer, as -(2^64) is 21 chars (a
conservative bound, since the most negative representable value,
-(2^63), is only 20 chars):
strlen("-18446744073709551616") => 21
This feature was implemented in the initial days of the Redis Cluster
implementation but is not a good idea at all.
1) It depends on clocks being synchronized, which is already very bad.
2) Moreover, it adds a bug where the pong time is updated via gossip,
so no new PING is ever sent by the current node, with the effect of no
PONG received, no update of tables, no clearing of the PFAIL flag.
In general, trusting other nodes about the reachability of other nodes
is a broken distributed programming model.
The previous hashing used the trivial algorithm of xoring the integers
together. This is not optimal, as it is very likely that different hash
table setups will hash the same: for instance, a hash table at the
start of the rehashing process and at the end will have the same
fingerprint.
Now we hash the N integers in a smarter way, by summing every integer
with the previous hash and hashing the resulting integer again (see the
code for further details). This way it is a lot less likely that we get
a collision. Moreover, this way of hashing explicitly protects against
the same set of integers in a different order hashing to the same
number.
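A standalone sketch of the pattern, using Thomas Wang's 64 bit mix as
the inner integer hash (the actual dict.c fingerprint may differ in
details; this just shows the order-sensitive folding):

    #include <stdint.h>

    /* Thomas Wang's 64 bit integer hash. */
    static uint64_t intHash(uint64_t key) {
        key = (~key) + (key << 21);
        key = key ^ (key >> 24);
        key = (key + (key << 3)) + (key << 8);
        key = key ^ (key >> 14);
        key = (key + (key << 2)) + (key << 4);
        key = key ^ (key >> 28);
        key = key + (key << 31);
        return key;
    }

    /* Fold N integers: sum each one into the running hash, then rehash.
     * Reordering the inputs changes the result, unlike a plain xor. */
    uint64_t fingerprint(const uint64_t *integers, int n) {
        uint64_t hash = 0;
        for (int j = 0; j < n; j++) hash = intHash(hash + integers[j]);
        return hash;
    }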
This commit is related to issue #1240.
This commit does mainly two things:
1) It fixes zunionInterGenericCommand() by removing mass-initialization
of all the iterators used, so that we don't violate the unsafe iterator
API of dictionaries. This fixes issue #1240.
2) Since the zui* APIs required the iterator to be initialized in the
zsetopsrc structure in order to use even the non-iterator related APIs,
this commit removes this strict requirement by accessing objects
directly via the op->subject->ptr pointer we have to the object.
dict.c allows the user to create unsafe iterators, that is, iterators
that will not touch the dictionary data structure in any way,
preventing copy on write, but that are at the same time limited in
their usage.
The limitation is that, when iterating with an unsafe iterator, no call
to other dictionary functions must be done inside the iteration loop,
otherwise the dictionary may be incrementally rehashed, resulting in
missing elements in the set of elements returned by the iterator.
However, after introducing this kind of iterator, a number of bugs were
found due to misuses of the API, and we are still finding bugs about
this issue. The bugs are not trivial to track because the only effect
is missing elements during the iteration.
This commit introduces auto-detection of the API misuse. The idea is
that an unsafe iterator has a contract: from initialization to the
release of the iterator the dictionary should not change.
So we take a fingerprint of the dictionary state, xoring a few
important dict properties when the unsafe iterator is initialized. When
the iterator is released, we check whether the fingerprint is still the
same. If it is not, we found a misuse of the iterator: some disallowed
API call changed the internal state of the dictionary.
This code was checked against a real bug, issue #1240.
This is what Redis prints (aborting) when a misuse is detected:
Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)),
function dictReleaseIterator, file dict.c, line 587.
The previous code using a static buffer as an optimization was lame:
1) Premature optimization: it was actually *slower* than the naive
code, because it resulted in the creation / destruction of the object
encapsulating the output buffer.
2) The code was very hard to test, since specific tests were needed for
command lines exceeding the size of the static buffer.
3) As a result of "2", the code was buggy, as the existing tests were
not able to stress specific corner cases.
It was replaced with easy to understand code that is safer and faster.
During the replication full resynchronization process, the RDB file is
transferred from the master to the slave. However there is a short
preamble to send first, currently just the bulk payload length of the
file in the usual Redis form, $..length..<CR><LF>.
This preamble used to be sent with a direct write call, assuming that
there was always room in the socket output buffer to hold the few bytes
needed; however this does not scale in case we'll need to send more
stuff, and is not very robust code in general.
This commit introduces a more general mechanism to send a preamble up to
2GB in size (the max length of an sds string) in a non blocking way.
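A hedged standalone sketch of the mechanism: keep the preamble in a
buffer with a written-so-far offset and push more bytes whenever the
socket is writable (buffer handling simplified; the real code uses an
sds string):

    #include <unistd.h>
    #include <errno.h>

    typedef struct {
        const char *buf;   /* Preamble bytes, e.g. "$1234\r\n". */
        size_t len;        /* Total preamble length.            */
        size_t sent;       /* Bytes written so far.             */
    } preambleSketch;

    /* Called from the writable event handler. Returns 1 when the whole
     * preamble has been sent, 0 if more writable events are needed,
     * -1 on error. */
    int sendPreamble(int fd, preambleSketch *p) {
        while (p->sent < p->len) {
            ssize_t n = write(fd, p->buf + p->sent, p->len - p->sent);
            if (n == -1) {
                if (errno == EAGAIN) return 0; /* Retry when writable. */
                return -1;
            }
            p->sent += (size_t)n;
        }
        return 1;
    }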
Before this commit, redis-benchmark supported random arguments in the
form :rand:000000000000. In every string of that form, the zeros were
replaced with a random 12 digit number at every command invocation.
However this was far from perfect, as it did not allow generating plain
random numbers as arguments: there was always the :rand: prefix.
Now instead every argument in the form __rand_int__ is replaced with a
12 digit number. Note that "__rand_int__" is itself 12 characters.
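For example (key name and counts illustrative), every occurrence of
__rand_int__ is replaced at each request with a random 12 digit number,
in the range selected with -r:

    redis-benchmark -n 100000 -r 100000 SET key:__rand_int__ somevalue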
In order to implement the new semantics, a few things had to change in
the internals of redis-benchmark: new clients are created by cloning
old clients, so without a stable prefix such as ":rand:", the old way
of cloning a client was no longer able to understand, from the old
command line, the positions of the random strings to substitute.
Now instead a client structure is passed as a reference for cloning, so
that we can directly clone the offsets inside the command line.