Originally, only the +slave event that occurs when a slave is
reconfigured during sentinelResetMasterAndChangeAddress() triggered a
flush of the config to disk. However, newly discovered slaves
apparently don't trigger this flush, even though they do trigger the
+slave event.
So if you start up a sentinel, add a master, then add a slave to the
master (as a way to reproduce it) you'll see the +slave event issued,
but the sentinel config won't be updated with the known-slave entry.
This change makes Sentinel flush the config to disk if a new slave is
detected in sentinelRefreshInstanceInfo().
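A minimal sketch of how the fix could look inside
sentinelRefreshInstanceInfo(), assuming the slave is created with
createSentinelRedisInstance() as in the surrounding code (variable
names follow that context and are illustrative):

    /* Sketch: when a previously unknown slave is discovered, persist
     * the new state to disk instead of only emitting the event. */
    if ((slave = createSentinelRedisInstance(NULL,SRI_SLAVE,ip,
                atoi(port),ri->quorum,ri)) != NULL)
    {
        sentinelEvent(REDIS_NOTICE,"+slave",slave,"%@");
        sentinelFlushConfig();
    }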
Rewriting the config in the loop that adds slaves back after a master
reset, in order to handle switching to another master, is useless: it
just adds latency, since there is an fsync call in the inner loop,
without providing any additional guarantee. Quite the contrary: if the
server crashes after the first loop iteration, we end up with just a
single slave entry, losing all the other information.
It is wiser to rewrite the config at the end, when the full new
state is configured.
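A sketch of the intended shape, with illustrative names for the saved
slave addresses:

    /* Add the slaves back without flushing per iteration... */
    for (j = 0; j < numslaves; j++) {
        createSentinelRedisInstance(NULL,SRI_SLAVE,slaves[j]->ip,
                                    slaves[j]->port,master->quorum,
                                    master);
    }
    /* ...and fsync the config once, when the full state is ready. */
    sentinelFlushConfig();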
When we fail to set up the write handler, it does not make sense to
keep the client around: it is missing writes, and whether it is a
normal client or a slave, the connection should be terminated ASAP.
Moreover, what exactly the function does with its return value, and in
which cases the write handler is installed on the socket, was not
clear, so the function's comments are improved to make its goals more
obvious.
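A sketch of the resulting logic, assuming the handler is installed
with aeCreateFileEvent() as elsewhere in networking.c:

    /* If we can't install the write handler the client is going to
     * miss writes: schedule it to be closed ASAP and report the
     * error to the caller. */
    if (aeCreateFileEvent(server.el, c->fd, AE_WRITABLE,
                          sendReplyToClient, c) == AE_ERR)
    {
        freeClientAsync(c);
        return REDIS_ERR;
    }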
Also related to #2485.
The master was closing the connection if the RDB transfer took a long
time, and it also sent PINGs to the slave before it got the initial
ACK, in which case the slave wouldn't be able to find the EOF marker.
Segfault introduced during a refactoring / warning suppression a few
commits ago. This particular call assumed that it is safe to pass NULL
as the object pointer argument when we are sure the set has a given
encoding. This can't be assumed anymore, and is now guaranteed to
segfault because of the new setTypeNext() API.
This change fixes several warnings when compiling at the -O3 level
with GCC 4.8.2, and at the same time, in case of misuse of the API, we
now have the pointer initialized to NULL and the integer initialized
to the value -123456789, which is easy to spot with the naked eye.
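A sketch of the defensive initialization in setTypeNext(), with the
body simplified to the two set encodings:

    int setTypeNext(setTypeIterator *si, robj **objele, int64_t *llele) {
        *objele = NULL;      /* Misuse now segfaults immediately... */
        *llele = -123456789; /* ...or produces an easy to spot value. */
        if (si->encoding == REDIS_ENCODING_HT) {
            dictEntry *de = dictNext(si->di);
            if (de == NULL) return -1;
            *objele = dictGetKey(de);
        } else if (si->encoding == REDIS_ENCODING_INTSET) {
            if (!intsetGet(si->subject->ptr,si->ii++,llele)) return -1;
        }
        return si->encoding;
    }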
No semantic changes, since to make dict.c truly able to scale over the
32-bit table size limit, the hash function and the other internals
related to the hash function output should be 64-bit ready.
rehashidx is always non-negative in the two code paths, since the only
negative value it could have is -1, when there is no rehashing in
progress, and that condition is explicitly checked.
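For reference, the invariant follows from the usual dict.c convention
(a sketch, using the dictIsRehashing macro):

    /* rehashidx == -1 only when no rehashing is in progress, and that
     * case is excluded before the index is used, so inside the branch
     * a non-negative index can be assumed. */
    if (dictIsRehashing(d)) {
        assert(d->rehashidx >= 0);
    }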
Bug as old as Redis and blocking operations. It's hard to trigger
since it only happens on an instance role switch, but the results are
quite bad since an inconsistency between master and slave is created.
How to trigger the bug is a good description of the bug itself.
1. Client does "BLPOP mylist 0" in master.
2. Master is turned into slave, that replicates from New-Master.
3. Client does "LPUSH mylist foo" in New-Master.
4. New-Master propagates write to slave.
5. Slave receives the LPUSH, the blocked client gets served.
Now the Master "mylist" key has "foo", while the Slave "mylist" key is empty.
Highlights:
* At step "2" above, the client remains attached, basically escaping any
check performed during command dispatch: read only slave, in that case.
* At step "5" the slave (that was the master), serves the blocked client
consuming a list element, which is not consumed on the master side.
This scenario is technically likely to happen during failovers; however,
since Redis Sentinel already disconnects clients using the CLIENT
command when changing the role of the instance, the bug is avoided in
Sentinel deployments.
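A sketch of one way the fix could look, with 3.0-era names (this is
illustrative, not necessarily the exact patch): forcibly unblock and
disconnect blocked clients when the role changes.

    void disconnectAllBlockedClients(void) {
        listNode *ln;
        listIter li;

        listRewind(server.clients,&li);
        while((ln = listNext(&li)) != NULL) {
            redisClient *c = listNodeValue(ln);
            if (c->flags & REDIS_BLOCKED) {
                /* Tell the client why, then close the connection so
                 * it reconnects and sees the new role. */
                addReplySds(c,sdsnew(
                    "-UNBLOCKED force unblock from blocking operation\r\n"));
                unblockClient(c);
                c->flags |= REDIS_CLOSE_AFTER_REPLY;
            }
        }
    }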
Closes #2473.
There was a bug in Redis Cluster caused by clients blocked in a blocking
list pop operation, for keys no longer handled by the instance, or
in a condition where the cluster became down after the client blocked.
A typical situation is:
1) BLPOP <somekey> 0
2) <somekey> hash slot is resharded to another master.
The client will block forever in this case.
A symmetrical, non-cluster-specific bug happens when an instance is
turned from master to slave. In that case it is more serious, since it
will desynchronize data between slaves and the master. This other bug
was discovered as a side effect of thinking about the bug explained
and fixed in this commit, but will be fixed in a separate commit.
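A sketch of the kind of check that avoids blocking forever (names
follow cluster.c conventions, but the error handling shown here is
illustrative):

    /* Before letting a client block on a key, verify this node still
     * serves the key's hash slot; otherwise reply instead of blocking. */
    int slot = keyHashSlot(key->ptr, sdslen(key->ptr));
    clusterNode *n = server.cluster->slots[slot];
    if (n == NULL) {
        addReplyError(c,"CLUSTERDOWN Hash slot not served");
        return REDIS_ERR; /* Don't block. */
    }
    if (n != server.cluster->myself) {
        addReplyErrorFormat(c,"MOVED %d %s:%d",slot,n->ip,n->port);
        return REDIS_ERR; /* Don't block. */
    }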
1. No need to set btype in processUnblockedClients(), since clients
flagged REDIS_UNBLOCKED should have it already cleared.
2. When putting clients in the unblocked clients list, clientsArePaused()
should flag them with REDIS_UNBLOCKED. Not strictly needed with the
current code, but it is more coherent.
When the list of unblocked clients was processed, btype was set to
blocking type none, but the client remained flagged with REDIS_BLOCKED.
When the timeout was reached (or when the client disconnected),
unblocking it would trigger an assertion.
There is no need to process pending requests from blocked clients, so
now clientsArePaused() just avoids touching blocked clients.
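A sketch of the relevant part of clientsArePaused(), following the
flag names used above:

    /* Queue paused clients for later processing, but leave slaves and
     * blocked clients alone: their pending data is handled when they
     * are unblocked. */
    listRewind(server.clients,&li);
    while ((ln = listNext(&li)) != NULL) {
        redisClient *c = listNodeValue(ln);
        if (c->flags & (REDIS_SLAVE|REDIS_BLOCKED)) continue;
        c->flags |= REDIS_UNBLOCKED;
        listAddNodeTail(server.unblocked_clients,c);
    }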
Closes #2467.
This commit moves the process of generating a new config epoch without
consensus out of the clusterCommand() implementation, in order to make
it reusable for other reasons (current target is to have a CLUSTER
FAILOVER option forcing the failover when no master majority is
reachable).
Moreover, the commit moves other functions that are similarly related
to config epochs into a new logical section of the cluster.c file,
just for clarity.
Before, we relied on the global cluster state to make sure all the
hash slots are linked to some node when getNodeByQuery() is called, so
finding an unbound hash slot was handled with an assertion. However
this is fragile: the cluster state is often updated in the
clusterBeforeSleep() function, and not ASAP on state change, so it may
happen that we process clients with a cluster state that is 'ok' but
with certain hash slots set to NULL.
With this commit the condition is also checked in getNodeByQuery() and
reported with an identical error code of -CLUSTERDOWN but a slightly
different error message, so that we have more debugging clues in the
future.
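A sketch of the added check, assuming the usual error_code output
parameter of getNodeByQuery() (the constant name is illustrative):

    /* If the slot is not bound to any node, report -CLUSTERDOWN
     * instead of asserting, even if the cluster state is 'ok'. */
    n = server.cluster->slots[slot];
    if (n == NULL) {
        if (error_code) *error_code = REDIS_CLUSTER_REDIR_DOWN_STATE;
        return NULL;
    }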
Root cause of issue #2288.
Not perfect, since the real solution IMHO is to have a DSL with a
table of configuration functions with types, limits, and aux functions
to handle the odd ones. However this hacky macro solution is already
better, and forces us to put range limits on the numerical fields.
More field types will hopefully be refactored in the next commits.
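A sketch of the macro idea (names illustrative), matching the
if/else chain style of config.c:

    #define config_set_numerical_field(_name,_var,_min,_max) \
        } else if (!strcasecmp(c->argv[2]->ptr,_name)) { \
            if (getLongLongFromObject(o,&ll) == REDIS_ERR) goto badfmt; \
            if (ll < _min || ll > _max) goto badfmt; \
            _var = ll;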
This should be much faster, and regardless, the code is more obvious
now compared to generating a string just to get the return value of
the ll2string() function.
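For reference, counting the digits directly looks like this (a sketch
of the well-known digits10() approach):

    #include <stdint.h>

    uint32_t digits10(uint64_t v) {
        uint32_t result = 1;
        for (;;) {
            if (v < 10) return result;
            if (v < 100) return result + 1;
            if (v < 1000) return result + 2;
            if (v < 10000) return result + 3;
            v /= 10000U;    /* Skip four orders of magnitude at once. */
            result += 4;
        }
    }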
1. HVSTRLEN -> HSTRLEN. It's unlikely one needs the length of the key;
it's not clear how that API would work (by value does not make sense),
and there will be better names anyway.
2. Default is to return 0 when field is missing.
3. Default is to return 0 when key is missing.
4. The implementation was slower than needed, and produced unnecessary COW.
Related issue #2415.
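A sketch of how HSTRLEN can be implemented with these semantics
(helper names are illustrative):

    void hstrlenCommand(redisClient *c) {
        robj *o;

        /* Missing key: reply 0 (point "3" above). */
        if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||
            checkType(c,o,REDIS_HASH)) return;
        /* Missing field: the helper returns 0 (point "2" above). */
        addReplyLongLong(c,hashTypeGetValueLength(o,c->argv[2]));
    }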
1. Remove useless "cs" initialization.
2. Add a "select" var to capture a condition checked multiple times.
3. Avoid duplication of the same if (!copy) conditional.
4. Don't increment dirty if copy is given (no deletion is performed),
otherwise we propagate MIGRATE when not needed.
Fewer grays: a more readable palette, since usually we have a
non-linear distribution of percentages, and very similar gray tones
are hard to tell apart. The final part of the palette is a gradient
from yellow to red. The red part is hardly reached because of the
usual distribution of latencies, but shows up mainly when latencies
are very high because of the logarithmic scale; this is coherent with
what people expect: red = bad.
The old version of SPOP with "count" argument used an API call of dict.c
which was actually designed for a different goal, and was not capable of
good distribution. We now follow a different three-case approach,
optimized for different ratios between the set size and the requested
number of elements.
The implementation is simpler and allowed the removal of a large amount
of code.
Several problems are addressed, but a few are still missing.
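A sketch of the three-case skeleton (the constant and helper names
follow t_set.c conventions but are illustrative here):

    if (count >= setTypeSize(set)) {
        /* Case 1: asking for >= the whole set. Reply with the full
         * set and delete the key: no randomness needed. */
    } else if (count*SPOP_MOVE_STRATEGY_MUL > setTypeSize(set)) {
        /* Case 2: a large fraction of the set. Remove the few
         * elements that must remain into a fresh set, and reply
         * with the old one. */
    } else {
        /* Case 3: a small fraction. Pick random distinct elements
         * one by one with setTypeRandomElement() + setTypeRemove(). */
    }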
Since the replication of this command is more complex than that of
others, as it needs to replicate multiple SREM commands, an old API
able to do this was reused (it was kept inside the implementation
since it was pretty obvious that sooner or later it would be useful).
The API was improved a bit so that now a command may opt out of the
standard command replication performed when the server.dirty counter
is incremented, in order to "manually" replicate what it wants.
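A sketch of the opt-out pattern, assuming alsoPropagate() and a
preventCommandPropagation() helper along the lines described above
(variable names are illustrative):

    /* Replicate the exact removed element as SREM... */
    propargv[0] = createStringObject("SREM",4);
    propargv[1] = c->argv[1];
    propargv[2] = removed;
    alsoPropagate(server.sremCommand,c->db->id,propargv,3,
                  REDIS_PROPAGATE_AOF|REDIS_PROPAGATE_REPL);
    /* ...and opt out of the standard SPOP propagation. */
    preventCommandPropagation(c);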