After the introduction of the list of clients with pending writes, to
process clients incrementally outside of the event loop we also need to
process the pending writes list.
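As a sketch of the idea, here is a minimal self-contained example (not
the actual Redis code; the client struct and function names are
simplified stand-ins): clients are queued the first time they get
pending output and the whole list is flushed before the event loop
sleeps again.
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for a Redis client with buffered output. */
    typedef struct client {
        char outbuf[64];
        size_t outlen;
        struct client *next;  /* next client in the pending-writes list */
    } client;

    static client *clients_pending_write = NULL;

    /* Queue the client the first time it gets pending output, instead
     * of installing a writable event handler right away. */
    static void addReply(client *c, const char *data) {
        if (c->outlen == 0) {
            c->next = clients_pending_write;
            clients_pending_write = c;
        }
        size_t n = strlen(data);
        memcpy(c->outbuf + c->outlen, data, n);
        c->outlen += n;
    }

    /* Called just before the event loop sleeps again: flush every
     * client queued in the pending-writes list. */
    static void handlePendingWrites(void) {
        for (client *c = clients_pending_write; c; c = c->next) {
            fwrite(c->outbuf, 1, c->outlen, stdout); /* stand-in for write(fd) */
            c->outlen = 0;
        }
        clients_pending_write = NULL;
    }

    int main(void) {
        client a = {"", 0, NULL}, b = {"", 0, NULL};
        addReply(&a, "+OK\r\n");
        addReply(&b, "+PONG\r\n");
        handlePendingWrites();   /* both replies are written here */
        return 0;
    }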
The code was broken and caused redis-cli --pipe, most of the time, to
write everything received on the standard input to the Redis connection
socket without ever reading back the replies, until all the content to
write was written.
This means that Redis had to accumulate all the output in the output
buffers of the client, consuming a lot of memory.
Fixed thanks to the original report of anomalies in the behavior
provided by Twitter user @fsaintjacques.
HINCRBY* tests later used the value "tmp" that was sometimes generated
by the random key generation function. The result was overwriting what
Tcl expected to be inside Redis with another value, causing the next
HSTRLEN test to fail.
GEORADIUS works by computing the center plus the neighbor squares
covering all the area of the specified position and radius. Then a
distance filter is used to remove the elements which are actually
outside the range.
When a huge radius is used, like 5000 km or more, adjacent neighbors may
collide and be the same, leading to the reporting of the same element
multiple times. This only happens in the edge case of a huge radius, but
it is not ideal.
A robust but slow solution would involve qsorting the range to remove
all the duplicates. However, since the collisions only happen between
adjacent boxes, because of the way they are ordered in the code, it is
much faster to just check whether the current box is the same as the
previous one processed.
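To illustrate, a minimal self-contained sketch of the deduplication (not
the actual geo code; the box values are made up): because colliding
boxes are adjacent in processing order, comparing each box with the last
one processed is enough, and no sorting pass is needed.
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Center box plus its 8 neighbors; with a huge radius some of
         * the 9 computed boxes may collide into the same area. */
        uint64_t boxes[9] = {11, 12, 12, 13, 14, 14, 14, 15, 16};
        uint64_t last_processed = 0;
        int emitted = 0;

        for (int i = 0; i < 9; i++) {
            /* Skip the box if it is the same as the previous one:
             * colliding boxes are adjacent in this order. */
            if (boxes[i] == 0 || boxes[i] == last_processed) continue;
            printf("searching box %llu\n", (unsigned long long) boxes[i]);
            last_processed = boxes[i];
            emitted++;
        }
        printf("processed %d distinct boxes out of 9\n", emitted);
        return 0;
    }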
This commit adds a regression test for the bug.
Fixes #2767.
MOVE was not able to move the TTL: when a key was moved into a different
database number, it became persistent, as if PERSIST was used.
In some incredible way (I guess almost nobody uses Redis MOVE) this bug
remained unnoticed inside Redis internals for many years.
Finally Andy Grunwald discovered it and opened an issue.
This commit fixes the bug and adds a regression test.
Close #2765.
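A minimal self-contained sketch of the fixed behavior (simplified
stand-ins, not the actual db.c code): moving the key carries its TTL
along instead of dropping it.
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for a database: one key slot plus its expire. */
    typedef struct miniDb {
        char key[32];
        char val[32];
        long long expire_ms;   /* -1 means no TTL */
        int used;
    } miniDb;

    /* Move a key between databases, preserving the TTL (the bug was
     * that the destination copy always ended up persistent). */
    static int moveKey(miniDb *src, miniDb *dst, const char *key) {
        if (!src->used || strcmp(src->key, key) != 0 || dst->used) return 0;
        *dst = *src;               /* copies the value AND expire_ms */
        src->used = 0;
        return 1;
    }

    int main(void) {
        miniDb db0 = {"mykey", "hello", 123456789, 1};
        miniDb db1 = {"", "", -1, 0};
        if (moveKey(&db0, &db1, "mykey"))
            printf("moved, TTL preserved: %lld\n", db1.expire_ms);
        return 0;
    }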
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.
However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when the fork
fails, the state of the slaves remains BGSAVE_END and no cleanup is
performed.
This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
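A minimal self-contained sketch of the fix (simplified stand-ins, not
the actual rdb.c code): the slaves state is switched before forking, so
a failed fork undoes the change and lets the caller's cleanup logic find
the slaves still waiting.
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

    typedef struct slave { int replstate; } slave;

    /* Simplified diskless save path: the slaves state is switched
     * before forking, so a failed fork must undo the change. */
    static int saveToSlavesSockets(slave *slaves, int numslaves) {
        for (int i = 0; i < numslaves; i++)
            slaves[i].replstate = WAIT_BGSAVE_END;   /* done before fork() */

        pid_t pid = fork();
        if (pid == -1) {
            /* Undo the state change so the error path in the caller can
             * detect and clean up these slaves. */
            for (int i = 0; i < numslaves; i++)
                slaves[i].replstate = WAIT_BGSAVE_START;
            return -1;
        }
        if (pid == 0) _exit(0);   /* child would stream the RDB here */
        return 0;
    }

    int main(void) {
        slave slaves[2] = {{WAIT_BGSAVE_START}, {WAIT_BGSAVE_START}};
        if (saveToSlavesSockets(slaves, 2) == -1)
            printf("fork failed, slave states restored\n");
        else
            printf("BGSAVE to sockets started\n");
        return 0;
    }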
This additional info may provide more clues about the test randomly
failing from time to time. Probably the failure is due to some previous
test that overwrites the logical content in the Tcl variable, but this
will make the problem more obvious.
Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is, we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).
This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate such a change, syncCommand()
also needs to put the client in the slave list ASAP (just after the
initial checks) and not at the end, so that startBgsaveForReplication()
can find the new slave already in the list.
Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.
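A minimal self-contained sketch of the new chain of responsibility
(simplified stand-ins, not the actual replication.c code): the function
that attempts the BGSAVE also either sets up the waiting slaves for full
resync or, on error, drops them.
    #include <stdio.h>

    enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END, DROPPED };

    typedef struct slave { int replstate; } slave;

    /* Stand-in for the BGSAVE attempt: 0 on success, -1 on error
     * (e.g. fork() failure). */
    static int tryBgsave(int should_fail) { return should_fail ? -1 : 0; }

    /* The caller no longer has to walk the slave list afterwards: both
     * the success path (FULLRESYNC setup) and the error path (drop the
     * slave) are handled here, for disk and diskless targets alike. */
    static int startBgsaveSketch(slave *slaves, int n, int fail) {
        int ok = tryBgsave(fail) == 0;
        for (int i = 0; i < n; i++) {
            if (slaves[i].replstate != WAIT_BGSAVE_START) continue;
            if (ok) {
                /* send +FULLRESYNC and switch state */
                slaves[i].replstate = WAIT_BGSAVE_END;
            } else {
                /* send an error and schedule the connection close */
                slaves[i].replstate = DROPPED;
            }
        }
        return ok ? 0 : -1;
    }

    int main(void) {
        slave slaves[2] = {{WAIT_BGSAVE_START}, {WAIT_BGSAVE_START}};
        startBgsaveSketch(slaves, 2, 1 /* simulate failure */);
        printf("slave 0 state: %d (2 = dropped)\n", slaves[0].replstate);
        return 0;
    }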
As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):
1. rdbSaveToSlavesSockets() on a failed fork will get the slaves cleaned
up, otherwise they remain in a wrong state forever, since we set them up
for full resync before actually trying to fork.
2. updateSlavesWaitingBgsave() with the replication target set to
"socket" was broken, since the function changed the slaves state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization(); after all, it is only called in the
context of syncWithMaster().
This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.
Talking with @oranagra we had to reason a little bit to understand
whether this function could ever flush the output buffers of the wrong
slaves: slaves that have the online state but are actually not yet ready
to receive writes before the first ACK is received from them (this
happens with diskless replication).
Next time we'll just read this comment.
Add the concept of slave capabilities to Redis: the slave now presents
itself to the Redis master with a set of capabilities in the form:
REPLCONF capa SOMECAPA capa OTHERCAPA ...
This has the effect of setting slave->slave_capa with the corresponding
SLAVE_CAPA macros that the master can test later to understand if the
slave supports certain formats and protocols of the replication
process. This makes it much simpler to introduce new replication
capabilities in the future in a way that doesn't break old slaves or
masters.
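A minimal self-contained sketch of the capability parsing (simplified
stand-ins; "eof" is used here only as an example capability name): each
"capa <name>" pair sets a bit in the capabilities mask, and unknown
names are ignored, which is what keeps old slaves and masters working.
    #include <stdio.h>
    #include <strings.h>

    #define SLAVE_CAPA_NONE 0
    #define SLAVE_CAPA_EOF  (1 << 0)   /* example capability bit */

    /* Parse the arguments of "REPLCONF capa X capa Y ..." given as an
     * argv-style array, returning the resulting capability bitmask. */
    static int parseReplconfCapa(int argc, const char **argv) {
        int capa = SLAVE_CAPA_NONE;
        for (int i = 0; i + 1 < argc; i += 2) {
            if (strcasecmp(argv[i], "capa") != 0) continue;
            if (strcasecmp(argv[i + 1], "eof") == 0)
                capa |= SLAVE_CAPA_EOF;
            /* Unknown capabilities are simply ignored. */
        }
        return capa;
    }

    int main(void) {
        const char *args[] = {"capa", "eof", "capa", "somefuturecapa"};
        int capa = parseReplconfCapa(4, args);
        printf("slave supports EOF streaming: %s\n",
               (capa & SLAVE_CAPA_EOF) ? "yes" : "no");
        return 0;
    }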
This patch was designed and implemented together with Oran Agra
(@oranagra).
Our function to read a line with a timeout handles newlines as requests
to refresh the timeout; however, because of a bug in the loop logic, the
code kept subtracting from the buffer size left every time a newline was
received. Fixed by this commit.
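A minimal self-contained sketch of the corrected loop (reading from an
in-memory string instead of a socket): a bare newline only refreshes the
deadline and must not consume the space left in the destination buffer.
    #include <stdio.h>
    #include <time.h>

    /* Read one line, treating a bare '\n' that arrives before any
     * payload as a keepalive that refreshes the deadline. */
    static int readLineWithTimeout(const char *stream, char *dst,
                                   size_t size, time_t timeout) {
        time_t deadline = time(NULL) + timeout;
        size_t nread = 0;

        if (size == 0) return -1;
        for (const char *p = stream; *p; p++) {
            if (time(NULL) > deadline) return -1;       /* timed out */
            if (*p == '\n' && nread == 0) {
                /* Keepalive newline: refresh the deadline only. The bug
                 * was also shrinking the space left, exhausting it. */
                deadline = time(NULL) + timeout;
                continue;
            }
            if (*p == '\n') break;                      /* end of line */
            if (nread + 1 < size) dst[nread++] = *p;    /* room for '\0' */
        }
        dst[nread] = '\0';
        return (int) nread;
    }

    int main(void) {
        char line[64];
        /* Two keepalive newlines while waiting, then the real reply. */
        readLineWithTimeout("\n\n+FULLRESYNC 0123abcd 42\n", line,
                            sizeof(line), 60);
        printf("got line: %s\n", line);
        return 0;
    }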
For PINGs we use the period configured by the user, but for the newlines
of slaves waiting for an RDB to be created (including slaves waiting for
the FULLRESYNC reply) we need to ping at a frequency of 1 second, since
the timeout is fixed and needs to be refreshed.
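A minimal self-contained sketch of the two paces (simplified stand-ins
for the cron logic; the short ping period is only for the demo): online
slaves get PINGs at the configured period, while slaves still waiting
for the RDB get a newline on every 1-second run.
    #include <stdio.h>

    enum { SLAVE_WAIT_BGSAVE, SLAVE_ONLINE };

    typedef struct slave { int state; } slave;

    /* Called roughly once per second, like a replication cron job.
     * `cronruns` counts invocations, `ping_period` is the configured
     * PING period in seconds. */
    static void cronSketch(slave *slaves, int n, long cronruns,
                           long ping_period) {
        for (int i = 0; i < n; i++) {
            if (slaves[i].state == SLAVE_ONLINE) {
                /* Regular PING, paced by the user-configured period. */
                if (cronruns % ping_period == 0)
                    printf("slave %d <- PING\n", i);
            } else {
                /* Waiting for the RDB (or the FULLRESYNC reply): send a
                 * newline every second, since the read timeout is fixed
                 * and only a newline refreshes it. */
                printf("slave %d <- \\n (keepalive)\n", i);
            }
        }
    }

    int main(void) {
        slave slaves[2] = {{SLAVE_ONLINE}, {SLAVE_WAIT_BGSAVE}};
        for (long run = 1; run <= 4; run++)
            cronSketch(slaves, 2, run, 2 /* short period, demo only */);
        return 0;
    }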
In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.
To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also
doing this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.
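A minimal self-contained sketch of what the renamed function is now in
charge of (simplified stand-ins modeled after the real field names):
record the offset the FULLRESYNC refers to, switch the slave state,
force a SELECT re-emission and send the reply.
    #include <stdio.h>

    enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

    typedef struct slave {
        int replstate;
        long long psync_initial_offset; /* offset the FULLRESYNC refers to */
    } slave;

    typedef struct master {
        long long repl_offset;
        int slaveseldb;   /* last DB selected in the replication stream */
    } master;

    /* Called at the moment the BGSAVE (disk or socket target) actually
     * starts, so the offset sent to the slave matches the RDB content. */
    static void setupSlaveForFullResyncSketch(master *m, slave *s) {
        s->psync_initial_offset = m->repl_offset;
        s->replstate = WAIT_BGSAVE_END;
        /* Force a SELECT before the next propagated command, since the
         * new slave starts accumulating changes from scratch. */
        m->slaveseldb = -1;
        printf("+FULLRESYNC <replid> %lld\r\n", s->psync_initial_offset);
    }

    int main(void) {
        master m = {1000, 0};
        slave s = {WAIT_BGSAVE_START, 0};
        setupSlaveForFullResyncSketch(&m, &s);
        return 0;
    }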
This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental) found by Yuval Inbar from Redis Labs
and that was later found to have even more far-reaching effects (the bug
also exists when diskless replication is off).
The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with FULLRESYNC was always the
current master replication offset. This is not ok, because there are
many reasons that may delay the RDB file creation. And... guess what,
the master offset we communicate must be the one at the time the RDB
was created. So for example:
1) When the BGSAVE for replication is delayed since there is already one
in progress, but it is not good for replication.
2) When the BGSAVE is not needed as we attach to one currently ongoing.
3) When, because of diskless replication, the BGSAVE is delayed.
In all the above cases the PSYNC reply is wrong and the slave may
reconnect later claiming to need a wrong offset: this may cause
data corruption later.
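A minimal self-contained sketch of the timeline with made-up offsets:
replying with the offset taken at PSYNC time instead of at RDB creation
time leaves a gap that a later partial resynchronization would replay.
    #include <stdio.h>

    /* Made-up offsets showing why the +FULLRESYNC offset must be the
     * one at RDB creation time, not at the time PSYNC is received. */
    int main(void) {
        long long master_offset = 1000;

        /* Old behavior: reply immediately with the current offset. */
        long long offset_sent_to_slave = master_offset;

        /* The BGSAVE is delayed (an unusable save already in progress,
         * diskless delay, ...) while the master keeps taking writes. */
        master_offset += 500;

        /* Fixed behavior: the offset is taken when the RDB creation
         * for this slave actually starts, matching the RDB content. */
        long long offset_at_rdb_start = master_offset;

        printf("offset sent too early: %lld, offset at RDB start: %lld\n",
               offset_sent_to_slave, offset_at_rdb_start);
        printf("a later PSYNC would replay %lld bytes already contained "
               "in the RDB the slave loaded\n",
               offset_at_rdb_start - offset_sent_to_slave);
        return 0;
    }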
Using chained replication where C is slave of B which is in turn slave of
A, if B reconnects the replication link with A but discovers it is no
longer possible to PSYNC, slaves of B must be disconnected and PSYNC
not allowed, since the new B dataset may be completely different after
the synchronization with the master.
Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics were:
1. When a slave lost the connection with its master, the chained slaves
were disconnected ASAP. This is not needed, since after a successful
PSYNC with the master the chained slaves can continue and don't need to
resync in turn.
2. However after a failed PSYNC the replication backlog was not reset,
so a chained slave was able to PSYNC successfully even if the instance
had done a full sync with its master and now contained an entirely
different data set.
Now instead chained slaves are not disconnected when the slave loses the
connection with its master, but only when it is forced to do a full SYNC
with its master. This means that if the slave with chained slaves does a
successful PSYNC, all its slaves can continue without trouble.
See issue #2694 for more details.
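A minimal self-contained sketch of the new rule (simplified stand-ins):
only a full SYNC with the master disconnects the chained slaves and
discards the backlog; a successful PSYNC leaves them alone.
    #include <stdio.h>

    typedef struct instance {
        int num_chained_slaves;
        long long backlog_histlen;   /* bytes of history in the backlog */
    } instance;

    /* Called on B after it reconnects to A and the partial resync
     * attempt is resolved one way or the other. */
    static void afterReconnectionSketch(instance *b, int psync_ok) {
        if (psync_ok) {
            /* Dataset unchanged and the stream continues: chained
             * slaves of B keep following without resyncing in turn. */
            return;
        }
        /* Full SYNC: B's dataset may now be completely different, so
         * the chained slaves are disconnected and the old backlog is
         * discarded to prevent PSYNCs against stale history. */
        b->num_chained_slaves = 0;
        b->backlog_histlen = 0;
    }

    int main(void) {
        instance b = {3, 1 << 20};
        afterReconnectionSketch(&b, 0 /* PSYNC with A failed */);
        printf("chained slaves left: %d, backlog bytes: %lld\n",
               b.num_chained_slaves, b.backlog_histlen);
        return 0;
    }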