Give MIGRATE the extra freedom to be called in the context of the local
slot, anytime there is a slot open in one or the other direction
(importing or migrating). This is useful for redis-trib to fix the
cluster when it is in an odd state.
This fix allows "redis-trib fix" to do its work in certain cases where
previously an error was reported.
For non-existing keys, we don't want to send -ASK redirections to
MIGRATE, since when moving slots from the migrating node to the
importing node, we just want to ignore keys that are no longer there.
They may have expired or been deleted between the GETKEYSINSLOT call and
the MIGRATE call. Otherwise this causes an error during migrations with
redis-trib (or equivalent cluster management tools).
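A minimal sketch of the routing rule, with made-up names (the actual
check lives in the command dispatch path of cluster.c):

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical routing decision: when a slot is migrating away and the
 * key is gone, only non-MIGRATE commands get an -ASK redirection. */
static const char *route(bool slot_migrating, bool key_exists, bool is_migrate) {
    if (slot_migrating && !key_exists && !is_migrate)
        return "-ASK redirection to importing node";
    /* MIGRATE is served locally and simply skips missing keys: they may
     * have expired or been deleted after CLUSTER GETKEYSINSLOT. */
    return "serve locally";
}

int main(void) {
    printf("GET on missing key:     %s\n", route(true, false, false));
    printf("MIGRATE on missing key: %s\n", route(true, false, true));
    return 0;
}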
We need to process replies even after errors, in order to delete the
keys that were successfully transferred. Argument rewriting was also
fixed, since it was broken in several ways: now a fresh argument vector
is created, and it is set only if the transfer of at least one key was
acknowledged.
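A sketch of the idea with made-up data (not the real MIGRATE code):

#include <stdio.h>

/* After a pipelined multi-key MIGRATE we keep reading replies even
 * after an error, and collect only the keys whose transfer was
 * acknowledged, so the freshly built argument vector (used to
 * propagate deletions) matches exactly what really moved. */
int main(void) {
    const char *keys[] = { "k1", "k2", "k3" };
    int acked[] = { 1, 0, 1 };        /* one reply per key: OK or error */
    const char *newargv[3];
    int n = 0;
    for (int j = 0; j < 3; j++) {
        /* even when acked[j] is an error, its reply was consumed
         * instead of bailing out, so later replies stay in sync */
        if (acked[j]) newargv[n++] = keys[j];
    }
    printf("keys to delete and propagate:");
    for (int j = 0; j < n; j++) printf(" %s", newargv[j]);
    printf("\n");  /* if nothing was acknowledged, nothing is rewritten */
    return 0;
}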
We wait a fixed amount of time (currently 5 seconds), much greater than
the usual Cluster node-to-node communication latency, before migrating.
This way, when a failover occurs, before detecting the new master as a
target for migration, we give its natural slaves (the slaves of the
failed over master) the time to announce they switched to the new
master, preventing a useless migration operation.
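The check could take a shape like the following sketch (names and the
orphaned-time bookkeeping are hypothetical):

#include <stdio.h>
#include <time.h>

#define MIGRATION_DELAY_MS 5000

typedef struct { long long orphaned_since_ms; int slaves; } master_t;

static long long now_ms(void) { return (long long)time(NULL) * 1000; }

static int is_migration_target(const master_t *m) {
    if (m->slaves > 0) return 0;
    /* Give the failed-over master's natural slaves time to announce
     * the new master before we migrate a replica to it. */
    return now_ms() - m->orphaned_since_ms > MIGRATION_DELAY_MS;
}

int main(void) {
    master_t m = { now_ms() - 1000, 0 };
    printf("target after 1s: %d\n", is_migration_target(&m));
    m.orphaned_since_ms = now_ms() - 6000;
    printf("target after 6s: %d\n", is_migration_target(&m));
    return 0;
}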
Some time ago I broke replica migration (reported in #2924).
The idea was to prevent masters without replicas from getting replicas
because of replica migration. I remember it created issues with tests,
but there is no clue in the commit message about why it was so
undesirable.
However, as a side effect, my patch totally ruined the concept of
replica migration, since we want it to work also for instances that,
technically, never had slaves in the past: promoted slaves.
So now the ability to be targeted by replica migration is instead
controlled by a new flag, "migrate-to". It only applies to masters, and
is set in the following two cases:
1. When a master gets a slave, it is set.
2. When a slave turns into a master because of fail over, it is set.
This way replica migration targets are only masters that used to have
slaves, and slaves of masters (that used to have slaves... obviously)
that got promoted.
The new flag is only internal, and is never exposed in the output nor
persisted in the nodes configuration, since all the information needed
to handle it is implicit in the cluster configuration we already have.
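In code, the two cases reduce to setting a single bit; a sketch with
illustrative flag and function names:

#include <stdio.h>

#define NODE_MASTER     (1<<0)
#define NODE_MIGRATE_TO (1<<1)

typedef struct { int flags; } node_t;

/* Case 1: a master gets a slave. */
static void on_slave_added(node_t *master) {
    master->flags |= NODE_MIGRATE_TO;
}

/* Case 2: a slave turns into a master because of a failover. */
static void on_failover_promotion(node_t *node) {
    node->flags |= NODE_MASTER | NODE_MIGRATE_TO;
}

int main(void) {
    node_t promoted = { 0 }, master = { NODE_MASTER };
    on_failover_promotion(&promoted);
    on_slave_added(&master);
    printf("promoted migrate-to: %d\n", !!(promoted.flags & NODE_MIGRATE_TO));
    printf("master migrate-to:   %d\n", !!(master.flags & NODE_MIGRATE_TO));
    return 0;   /* the flag is never serialized or exposed */
}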
There was a bug in Redis Cluster caused by clients blocked in a blocking
list pop operation on keys no longer handled by the instance, or
in a condition where the cluster became down after the client blocked.
A typical situation is:
1) BLPOP <somekey> 0
2) <somekey> hash slot is resharded to another master.
The client will block forever in this case.
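A sketch of the fix's shape, with all names hypothetical: when the slot
map or the cluster state changes, blocked clients are re-checked and
unblocked with an error so they can retry against the right node.

/* All names below are illustrative, not the actual Redis internals. */
enum { STATE_OK, STATE_DOWN };

typedef struct { const char *key; int slot; } blocked_client_t;

extern int cluster_state;                      /* STATE_OK / STATE_DOWN */
extern int myself_owns_slot(int slot);         /* slot map lookup       */
extern void unblock_with_error(blocked_client_t *c, const char *err);

/* Called after the slot map or cluster state changes: blocked clients
 * must not wait forever on keys this node no longer serves. */
void redirect_blocked_clients_if_needed(blocked_client_t *clients, int n) {
    for (int j = 0; j < n; j++) {
        if (cluster_state == STATE_DOWN)
            unblock_with_error(&clients[j], "-CLUSTERDOWN");
        else if (!myself_owns_slot(clients[j].slot))
            unblock_with_error(&clients[j], "-MOVED"); /* client retries */
    }
}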
A symmetrical, non-cluster-specific bug happens when an instance is
turned from master into slave. In that case it is more serious, since it
will desynchronize data between slaves and masters. This other bug was
discovered as a side effect of thinking about the bug explained and
fixed in this commit, but will be fixed in a separate commit.
This commit moves the process of generating a new config epoch without
consensus out of the clusterCommand() implementation, in order to make
it reusable for other purposes (the current target is to have a CLUSTER
FAILOVER option forcing the failover when no master majority is
reachable).
Moreover the commit moves other functions which are similarly related to
config epochs into a new logical section of the cluster.c file, just for
clarity.
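The helper's shape, sketched with approximate names (the real function
lives in cluster.c and handles more details, such as fsyncing the
configuration):

extern unsigned long long cluster_current_epoch;  /* cluster-wide max  */
extern unsigned long long my_config_epoch;        /* this node's epoch */

int bump_config_epoch_without_consensus(void) {
    /* Without agreement we can only self-assign a new, greater epoch;
     * possible epoch collisions are resolved separately by the
     * conflict resolution algorithm. Callers: CLUSTER commands today,
     * a forced CLUSTER FAILOVER tomorrow. */
    cluster_current_epoch++;
    my_config_epoch = cluster_current_epoch;
    return 0;
}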
Before, we relied on the global cluster state to make sure all the hash
slots were linked to some node when getNodeByQuery() is called, so
finding an unbound hash slot was checked with an assertion. However
this is fragile. The cluster state is often updated in the
clusterBeforeSleep() function, and not ASAP on state change, so it may
happen that we process clients with a cluster state that is 'ok' but
with certain hash slots still set to NULL.
With this commit the condition is also checked in getNodeByQuery() and
reported with the identical error code of -CLUSTERDOWN but a slightly
different error message, so that we have more debugging clues in the
future.
Root cause of issue #2288.
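A sketch of the defensive check (illustrative names; the exact error
text may differ):

/* Instead of asserting, an unbound slot is reported like a down
 * cluster but with a distinct message: the extra debugging clue. */
typedef struct cluster_node cluster_node;
extern cluster_node *slot_owner[16384];

cluster_node *node_for_slot(int slot, const char **err) {
    cluster_node *n = slot_owner[slot];
    if (n == NULL) {
        *err = "-CLUSTERDOWN Hash slot not served";
        return NULL;
    }
    return n;
}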
1. Remove useless "cs" initialization.
2. Add a "select" var to capture a condition checked multiple times.
3. Avoid duplication of the same if (!copy) conditional.
4. Don't increment dirty if copy is given (no deletion is performed),
otherwise we propagate MIGRATE when not needed (see the sketch after
this list).
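A sketch of points 3 and 4 combined, with hypothetical names:

typedef struct { long long dirty; } server_t;
extern server_t server;
extern void db_delete(const char *key);

void migrate_post_transfer(const char *key, int copy) {
    if (!copy) {            /* single conditional, no duplication      */
        db_delete(key);     /* the key moved: drop the local copy      */
        server.dirty++;     /* only a real deletion dirties the DB, so
                             * MIGRATE is propagated only when a
                             * deletion actually happened              */
    }
    /* with COPY nothing changed locally: nothing to propagate */
}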
This improves the PFAIL -> FAIL switch. It is too late at this point in
the RC releases to add a proper separate PFAIL/FAIL dictionary to do
this in a less randomized way, but practical experiments showed that
this helps: the average PFAIL -> FAIL time with 20 nodes and
node-timeout set to 5 seconds is 2.5 seconds without this commit, and
1 second with it.
Otherwise it is impossible to receive the majority of failure reports in
the node_timeout*2 window in larger clusters.
Still, with a 200-node cluster, 20 gossip sections are a very reasonable
amount of bytes to send.
A side effect of this change is also faster cluster node joins for large
clusters, because the cluster layout takes less time to propagate.
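A sketch of the scaling rule implied above (gossip about roughly 10% of
the nodes, with a small floor; the variable names are illustrative):

#include <stdio.h>

static int wanted_gossip_sections(int cluster_nodes) {
    int wanted = cluster_nodes / 10;   /* ~10% of the known nodes */
    if (wanted < 3) wanted = 3;        /* keep a floor for small clusters */
    return wanted;
}

int main(void) {
    printf("20 nodes  -> %d sections\n", wanted_gossip_sections(20));
    printf("200 nodes -> %d sections\n", wanted_gossip_sections(200));
    return 0;
}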
Otherwise we risk sending uninitialized data to other nodes, which may
contain anything. This was actually not happening only because the
initialization of the buffer where the cluster packet header is created
was larger than the 3 gossip sections we use, so the memory was already
all filled with zeroes by the memset().
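A minimal sketch of the safe pattern (sizes and names are illustrative):

#include <string.h>

void build_cluster_packet_header(void) {
    unsigned char buf[4096];        /* stand-in for the packet buffer */
    memset(buf, 0, sizeof(buf));    /* zero explicitly, instead of
                                     * relying on the buffer happening
                                     * to be larger than the payload  */
    /* ... fill header fields and gossip sections into buf ... */
    (void)buf;
}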
Fixes valgrind error:
48 bytes in 1 blocks are definitely lost in loss record 196 of 373
at 0x4910D3: je_malloc (jemalloc.c:944)
by 0x42807D: zmalloc (zmalloc.c:125)
by 0x41FA0D: dictGetIterator (dict.c:543)
by 0x41FA48: dictGetSafeIterator (dict.c:555)
by 0x459B73: clusterHandleSlaveMigration (cluster.c:2776)
by 0x45BF27: clusterCron (cluster.c:3123)
by 0x423344: serverCron (redis.c:1239)
by 0x41D6CD: aeProcessEvents (ae.c:311)
by 0x41D8EA: aeMain (ae.c:455)
by 0x41A84B: main (redis.c:3832)
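The trace suggests the safe iterator taken in
clusterHandleSlaveMigration() was not released on some early return
path; a sketch of the corrected pattern (the early-exit condition is
hypothetical):

typedef struct dict dict;
typedef struct dictIterator dictIterator;
typedef struct dictEntry dictEntry;
extern dictIterator *dictGetSafeIterator(dict *d);
extern dictEntry *dictNext(dictIterator *it);
extern void dictReleaseIterator(dictIterator *it);
extern int must_return_early(dictEntry *de);   /* hypothetical */

void slave_migration_sketch(dict *nodes) {
    dictIterator *di = dictGetSafeIterator(nodes);
    dictEntry *de;
    while ((de = dictNext(di)) != NULL) {
        if (must_return_early(de)) {
            dictReleaseIterator(di);  /* without this, the iterator
                                       * leaks exactly as reported  */
            return;
        }
    }
    dictReleaseIterator(di);
}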
If the array has N elements, we can't read element N+1 if we are already
at N. Also, we need to move elements by their storage size in the array,
not just by individual bytes.
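A sketch of the corrected removal logic in isolation:

#include <stddef.h>
#include <string.h>

/* Never read past the last element, and shift whole elements
 * (elem_size bytes each), not single bytes. */
void array_remove_at(void *base, size_t elem_size, size_t count, size_t idx) {
    if (idx + 1 < count) {
        memmove((char *)base + idx * elem_size,
                (char *)base + (idx + 1) * elem_size,
                (count - idx - 1) * elem_size);
    }
    /* when idx == count-1 there is no element past it to read or move */
}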
[maybe] Fixes valgrind errors:
32 bytes in 4 blocks are definitely lost in loss record 107 of 228
at 0x80EA447: je_malloc (jemalloc.c:944)
by 0x806E59C: zrealloc (zmalloc.c:125)
by 0x80A9AFC: clusterSetMaster (cluster.c:801)
by 0x80AEDC9: clusterCommand (cluster.c:3994)
by 0x80682A5: call (redis.c:2049)
by 0x8068A20: processCommand (redis.c:2309)
by 0x8076497: processInputBuffer (networking.c:1143)
by 0x8073BAF: readQueryFromClient (networking.c:1208)
by 0x8060E98: aeProcessEvents (ae.c:412)
by 0x806123B: aeMain (ae.c:455)
by 0x806C3DB: main (redis.c:3832)
64 bytes in 8 blocks are definitely lost in loss record 143 of 228
at 0x80EA447: je_malloc (jemalloc.c:944)
by 0x806E59C: zrealloc (zmalloc.c:125)
by 0x80AAB40: clusterProcessPacket (cluster.c:801)
by 0x80A847F: clusterReadHandler (cluster.c:1975)
by 0x30000FF: ???
80 bytes in 10 blocks are definitely lost in loss record 148 of 228
at 0x80EA447: je_malloc (jemalloc.c:944)
by 0x806E59C: zrealloc (zmalloc.c:125)
by 0x80AAB40: clusterProcessPacket (cluster.c:801)
by 0x80A847F: clusterReadHandler (cluster.c:1975)
by 0x2FFFFFF: ???
Fixes valgrind error:
Syscall param write(buf) points to uninitialised byte(s)
at 0x514C35D: ??? (syscall-template.S:81)
by 0x456B81: clusterWriteHandler (cluster.c:1907)
by 0x41D596: aeProcessEvents (ae.c:416)
by 0x41D8EA: aeMain (ae.c:455)
by 0x41A84B: main (redis.c:3832)
Address 0x5f268e2 is 2,274 bytes inside a block of size 8,192 alloc'd
at 0x4932D1: je_realloc (jemalloc.c:1297)
by 0x428185: zrealloc (zmalloc.c:162)
by 0x4269E0: sdsMakeRoomFor.part.0 (sds.c:142)
by 0x426CD7: sdscatlen (sds.c:251)
by 0x4579E7: clusterSendMessage (cluster.c:1995)
by 0x45805A: clusterSendPing (cluster.c:2140)
by 0x45BB03: clusterCron (cluster.c:2944)
by 0x423344: serverCron (redis.c:1239)
by 0x41D6CD: aeProcessEvents (ae.c:311)
by 0x41D8EA: aeMain (ae.c:455)
by 0x41A84B: main (redis.c:3832)
Uninitialised value was created by a stack allocation
at 0x457810: nodeUpdateAddressIfNeeded (cluster.c:1236)
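A sketch of the kind of fix such a trace calls for, assuming a
fixed-size stack buffer copied verbatim into an outgoing packet (names
and sizes are illustrative):

#include <string.h>

#define IP_STR_LEN 46   /* enough for an IPv6 textual address */

void copy_announced_ip(char *dst, const char *announced_ip) {
    char ip[IP_STR_LEN];
    memset(ip, 0, sizeof(ip));                 /* no uninitialised tail */
    strncpy(ip, announced_ip, sizeof(ip) - 1); /* bounded, NUL-padded   */
    memcpy(dst, ip, sizeof(ip));               /* full fixed-size copy  */
}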