It is a key invariant that, when AOF is enabled, a crash-recovery event
after the cluster reshards leaves all the keys intact with the expected
logical content. This is now part of unit 04.
In issue #2948 a crash was reported in processCommand(). Later Oran
Agra (@oranagra) traced the bug (in a private chat) to the following
sequence of events:
1. Some maxmemory is set.
2. The slave is the currently active client and is executing PING or
REPLCONF or whatever a slave can send to its master.
3. freeMemoryIfNeeded() is called since maxmemory is set.
4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
5. While flushing the slaves' output buffers, a write error may be
   encountered in writeToClient() or sendReplyToClient(), depending on
   the Redis version. This triggers freeClient() against the currently
   active client, so a segmentation fault is likely to happen in
   processCommand() immediately after the call to freeMemoryIfNeeded().
There are different possible fixes:
1. Add a flag to writeToClient() (recent versions of the code base) so
   that we can ignore write errors, and use this flag in
   flushSlavesOutputBuffers(). However this is not simple to do in
   older versions of Redis.
2. Use freeClientAsync() during write errors. This works but changes
   the current behavior of releasing clients ASAP when possible.
   Normally we write to clients during normal event loop processing,
   in the writable event handler, where there is no active client, so
   no special care must be taken.
3. The fix of this commit: detect that the current client is no longer
   valid. This fix is a bit ad-hoc, but it works across all the
   versions and has the advantage of not changing the rest of the
   behavior: it hopefully only alters what happens during this race
   condition, as sketched below.
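A minimal sketch of the check, assuming the server tracks the active
client in server.current_client as in the Redis code base of this era
(exact names and error codes, e.g. REDIS_ERR vs C_ERR, vary across
versions):

    if (server.maxmemory) {
        int out_of_memory = (freeMemoryIfNeeded() == REDIS_ERR);
        /* freeMemoryIfNeeded() may flush slave output buffers; a write
         * error there can free the very client we are serving. */
        if (server.current_client == NULL) return REDIS_ERR;
        if (out_of_memory && (c->cmd->flags & REDIS_CMD_DENYOOM)) {
            addReply(c, shared.oomerr);
            return REDIS_OK;
        }
    }

Returning early when current_client was reset is what prevents
processCommand() from dereferencing a freed client.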
The old test, which performed an invertible transformation on the bits
in order to avoid touching the original memory content, was not as
effective as redis-server --test-memory: the former often reported OK
while the latter was able to spot the error.
So the test was replaced with one that should perform better; however
the new one must back up the memory being tested, so it tests memory in
small pieces. This limits its effectiveness because of the CPU caches.
Some attempt is made to trash the CPU cache between the fill and the
check stages, but unfortunately not for the addressing test.
We'll see if this test is able to find errors where the old one failed.
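A simplified, standalone sketch of the fill-and-check-in-small-pieces
approach (not the actual memtest code: the cache-trashing step between
fill and check is omitted, and real code must also defeat compiler
optimizations):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define CHUNK 4096 /* test in small pieces so each can be backed up */

    /* Fill-and-check one chunk with a pattern, preserving its content. */
    static int test_chunk(uint64_t *p, size_t words, uint64_t pattern,
                          uint64_t *backup)
    {
        size_t i;
        int errors = 0;

        memcpy(backup, p, words * sizeof(uint64_t)); /* save original bytes */
        for (i = 0; i < words; i++) p[i] = pattern;  /* fill stage */
        for (i = 0; i < words; i++)                  /* check stage */
            if (p[i] != pattern) errors++;
        memcpy(p, backup, words * sizeof(uint64_t)); /* restore original bytes */
        return errors;
    }

    int main(void) {
        size_t words = CHUNK / sizeof(uint64_t);
        uint64_t *region = malloc(CHUNK), *backup = malloc(CHUNK);
        if (!region || !backup) return 1;

        int errors = test_chunk(region, words, 0xAAAAAAAAAAAAAAAAULL, backup) +
                     test_chunk(region, words, 0x5555555555555555ULL, backup);
        printf("%d errors\n", errors);
        free(region);
        free(backup);
        return errors != 0;
    }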
We use the new variadic/pipelined MIGRATE for faster migration.
Testing is not easy, because measuring the time it takes to migrate a
slot requires a very large data set, but even with all the overhead of
migrating multiple slots and setting them up properly, what used to
take 4 seconds (1 million keys, 200 slots migrated) now takes 1.6
seconds, which is a good improvement. However the improvement can be a
lot larger if (an example of the variadic form follows the list):
1. We use large datasets where a single slot has many keys.
2. We move more than 10 keys per iteration, making this configurable
   (which is planned).
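For reference, the variadic form batches many keys into a single
command, along these lines (host, port and key names are placeholders;
the empty key argument is required when KEYS is used):

    MIGRATE 10.0.0.2 6379 "" 0 5000 KEYS user:1 user:2 user:3 user:4

A single round trip then moves a whole batch of keys instead of issuing
one MIGRATE call per key.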
Close #2710, close #2711.
We need to process replies even after errors in order to delete the
keys that were successfully transferred. Argument rewriting was also
fixed, since it was broken in several ways. Now a fresh argument vector
is created and installed if we are acknowledged for at least one key.
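A hedged sketch of the rewriting logic: keys[], num_keys and
migrateReplyIsOk() are hypothetical stand-ins for the per-key state and
reply check, and replaceClientCommandVector() stands for whatever
helper installs a fresh argument vector in the client.

    int del_idx = 1, j;
    robj **newargv = zmalloc(sizeof(robj*) * (num_keys + 1));
    newargv[0] = createStringObject("DEL", 3);
    for (j = 0; j < num_keys; j++) {
        /* Read every reply, even after the first error, otherwise the
         * acknowledgements of keys that WERE transferred are lost. */
        if (migrateReplyIsOk(j)) {
            dbDelete(c->db, keys[j]);
            incrRefCount(keys[j]);
            newargv[del_idx++] = keys[j];
        }
    }
    if (del_idx > 1) {
        /* At least one key was acknowledged: propagate a DEL of only
         * the keys that actually moved. */
        replaceClientCommandVector(c, del_idx, newargv);
    } else {
        decrRefCount(newargv[0]);
        zfree(newargv);
    }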
We wait a fixed amount of time (currently 5 seconds), much greater than
the usual Cluster node-to-node communication latency, before migrating.
This way, when a failover occurs, before detecting the new master as a
target for migration we give its natural slaves (the slaves of the
failed-over master) time to announce that they switched to the new
master, preventing a useless migration operation.
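A hedged sketch of the kind of check involved; the field and constant
names (orphaned_time, CLUSTER_SLAVE_MIGRATION_DELAY) follow later code
bases and may differ here:

    #define CLUSTER_SLAVE_MIGRATION_DELAY 5000 /* ms, well above gossip latency */

    /* Do not pick `target` for replica migration until it has been
     * orphaned for a while: its natural slaves may simply not have
     * announced yet that they follow the new master. */
    if (target == NULL ||
        mstime() - target->orphaned_time < CLUSTER_SLAVE_MIGRATION_DELAY)
        return;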
The old version of the test was modeled with two failovers; however,
after the first one it is possible that another slave migrates to the
new master, since for some time the new master is not backed by any
slave. There should probably be some pause after a failover, before the
migration. In any case the test is simpler this way, and depends less
on timing.
Some time ago I broke replicas migration (reported in #2924).
The idea was to prevent masters without replicas from getting replicas
because of replica migration. I remember it creating issues with tests,
but there is no clue in the commit message about why it was so
undesirable.
However, as a side effect my patch totally ruined the concept of
replicas migration, since we want it to work also for instances that,
technically, never had slaves in the past: promoted slaves.
So now the ability to be targeted by replicas migration is instead
expressed by a new flag, "migrate-to". It only applies to masters, and
is set in the following two cases:
1. When a master gets a slave.
2. When a slave turns into a master because of a failover.
This way replicas migration targets are only masters that used to have
slaves, and slaves of masters (that used to have slaves... obviously)
that get promoted.
The new flag is internal only, and is never exposed in the output nor
persisted in the nodes configuration, since all the information needed
to handle it is implicit in the cluster configuration we already have.
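A sketch of how the flag is meant to be used; the flag and helper names
(CLUSTER_NODE_MIGRATE_TO, nodeIsMaster()) follow later code bases and
the surrounding logic is illustrative:

    /* Case 1: a master gains a slave. */
    if (nodeIsMaster(master)) master->flags |= CLUSTER_NODE_MIGRATE_TO;

    /* Case 2: a slave wins a failover and becomes a master. */
    myself->flags &= ~CLUSTER_NODE_SLAVE;
    myself->flags |= CLUSTER_NODE_MASTER | CLUSTER_NODE_MIGRATE_TO;

    /* When selecting a migration target, only orphaned masters carrying
     * the flag qualify. */
    if (nodeIsMaster(node) && node->numslaves == 0 &&
        (node->flags & CLUSTER_NODE_MIGRATE_TO)) target = node;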
Now we have a single function to call in any state of the slave
handshake, instead of using different functions for different states,
which is error prone. The change was performed in the context of issue
#2479 but does not fix it, since it should be functionally identical to
the past behavior. It is just an attempt to make replication.c simpler
to follow.
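Presumably the unified function has roughly this shape (a sketch only;
state and helper names such as REPL_STATE_*,
replicationAbortSyncTransfer() and undoConnectWithMaster() follow later
code bases):

    /* Abort an in-progress handshake, whatever state it is in.
     * Returns 1 if something was actually aborted, 0 otherwise. */
    int cancelReplicationHandshake(void) {
        if (server.repl_state == REPL_STATE_TRANSFER) {
            replicationAbortSyncTransfer();  /* bulk RDB transfer ongoing */
            server.repl_state = REPL_STATE_CONNECT;
        } else if (server.repl_state == REPL_STATE_CONNECTING ||
                   slaveIsInHandshakeState()) {
            undoConnectWithMaster();         /* connecting or mid-handshake */
            server.repl_state = REPL_STATE_CONNECT;
        } else {
            return 0;
        }
        return 1;
    }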
64-bit double math is not enough to make the test pass, and rounding to
1.2999999 instead of 1.23 is not an error in the implementation.
Valgrind, and sometimes other archs, are not able to work with 80-bit
doubles.
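A tiny standalone illustration of why extra trailing digits are not a
bug: the nearest 64-bit double to a decimal literal already carries a
representation error, and whether intermediate math runs at 80-bit or
64-bit precision (Valgrind forces the latter) changes those digits:

    #include <stdio.h>

    int main(void) {
        double d = 1.23;        /* nearest 64-bit double to 1.23 */
        long double ld = 1.23L; /* 80-bit extended on most x86 builds */

        /* Printed with enough digits, the stored values reveal their
         * rounding error; under Valgrind (or on archs without 80-bit
         * doubles) the long double result matches the 64-bit one. */
        printf("double:      %.20f\n", d);
        printf("long double: %.20Lf\n", ld);
        return 0;
    }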
There are some cases of printing an unsigned integer with the %d
conversion specifier and vice versa (a signed integer with the %u
specifier). Patch by Sergey Polovko. Backported to Redis from Disque.
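The class of bug being fixed, in a small standalone example (values are
made up):

    #include <stdio.h>

    int main(void) {
        unsigned int used = 4294967295u; /* e.g. an unsigned counter */
        int delta = -1;                  /* e.g. a signed difference */

        /* Mismatched specifiers (the kind of call the patch removes)
         * misreport the values and are flagged by -Wformat: */
        printf("wrong: used=%d delta=%u\n", used, delta);

        /* Matching specifiers print what is actually stored: */
        printf("right: used=%u delta=%d\n", used, delta);
        return 0;
    }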
My guess was that wait3() with WNOHANG could never return -1 and an
error. However, issue #2897 possibly indicates that this can happen
under unclear conditions. While we try to understand this better, it is
better to handle a return value of -1 explicitly: otherwise, in the
case a BGREWRITEAOF is in progress but wait3() returns -1, the effect
is to match the first branch of the if/else block (since
server.rdb_child_pid is -1) and to call backgroundSaveDoneHandler()
without a good reason, which will in turn crash the Redis server with
an assertion.
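A hedged sketch of the explicit handling, with names as in the Redis
code base of this era (redisLog(), server.rdb_child_pid,
server.aof_child_pid); this is the serverCron()-side check, not a
standalone program:

    int statloc;
    pid_t pid = wait3(&statloc, WNOHANG, NULL);

    if (pid != 0) {
        int exitcode = WEXITSTATUS(statloc);
        int bysignal = WIFSIGNALED(statloc) ? WTERMSIG(statloc) : 0;

        if (pid == -1) {
            /* Handle the error explicitly: without this branch, -1 would
             * match server.rdb_child_pid (also -1 when no BGSAVE is in
             * progress) and backgroundSaveDoneHandler() would run for no
             * reason, hitting an assertion. */
            redisLog(REDIS_WARNING, "wait3() returned an error: %s",
                     strerror(errno));
        } else if (pid == server.rdb_child_pid) {
            backgroundSaveDoneHandler(exitcode, bysignal);
        } else if (pid == server.aof_child_pid) {
            backgroundRewriteDoneHandler(exitcode, bysignal);
        }
    }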