We introduce the distinction between slow and fast commands since those
are two different sources of latency. An O(1) or O(log N) command
without side effects (i.e. one that can't trigger deletion of large
objects as a side effect of its execution), if delayed, is a symptom of
inherent latency in the system. A non-fast command (one that may run
large O(N) computations), if delayed, may just mean that the user is
executing slow operations.
The advice LATENCY should provide in these two cases is different, so
we log the two classes of commands separately.
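A minimal sketch of where this classification could happen in the
command-call path, assuming a REDIS_CMD_FAST command flag and the
latencyAddSampleIfNeeded() helper of the latency monitoring framework
(names based on the description above, not a definitive implementation):

    /* After a command executes, log its latency under an event class
     * that depends on whether the command is flagged as fast (O(1) or
     * O(log N), no dangerous side effects) or potentially slow.
     * duration is assumed to be the execution time in microseconds. */
    char *latency_event = (c->cmd->flags & REDIS_CMD_FAST) ?
                          "fast-command" : "command";
    latencyAddSampleIfNeeded(latency_event, duration/1000);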
This fixes detection of a wrong subcommand (which previously resulted
in the default all-commands output) and allows COMMAND INFO to be
called without arguments (resulting in an empty array), which is
useful in programmatically generated calls like the following (in
Ruby):
redis.commands("command","info",*mycommands)
Note: mycommands may be empty.
The static keyword was removed since symbols are needed in order to
get function names in stack traces. Minor changes were made to the
source code to make it more similar to the existing Redis code base.
COMMANDS returns a nested multibulk reply for each
command in the command table. The reply for each
command contains:
- command name
- arity
- array of command flags
- start key position
- end key position
- key offset step
- optional: if the keys are not deterministic and
  Redis uses an internal key evaluation function,
  a seventh field appears, defined as a status
  reply of: REQUIRES ARGUMENT PARSING
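For instance, the reply for a single command could look as follows
(illustrative output for GET; the exact flags depend on the server
version):

    127.0.0.1:6379> command info get
    1) 1) "get"
       2) (integer) 2
       3) 1) readonly
          2) fast
       4) (integer) 1
       5) (integer) 1
       6) (integer) 1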
Cluster clients need to know where the keys are in each
command to implement proper routing to cluster nodes.
Redis commands can have multiple keys, keys at offset steps, or other
layouts where you can't always assume that the first element after
the command name is the cluster routing key.
Using the information exposed by COMMANDS, client implementations
can have live, accurate key extraction details for all commands.
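As a minimal sketch, assuming the client has already parsed the
first-key position, last-key position, and key step out of the reply
(the function and variable names here are illustrative, not part of
any client API), key extraction for commands with static key
positions reduces to a simple loop:

    #include <stdio.h>

    /* Print the keys of a command, given the key metadata reported by
     * COMMANDS. A first-key position of 0 means the command takes no
     * keys; a negative last-key position counts from the last argument. */
    static void extract_keys(char **argv, int argc,
                             int firstkey, int lastkey, int keystep) {
        if (firstkey == 0) return;
        int last = (lastkey < 0) ? argc + lastkey : lastkey;
        for (int i = firstkey; i <= last && i < argc; i += keystep)
            printf("key: %s\n", argv[i]);
    }

    int main(void) {
        /* MSET k1 v1 k2 v2: first key 1, last key -1, key step 2. */
        char *argv[] = {"mset", "k1", "v1", "k2", "v2"};
        extract_keys(argv, 5, 1, -1, 2); /* prints k1 and k2 */
        return 0;
    }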
Also implements COMMANDS INFO [commands...] to return only a
specific set of commands instead of all 160+ commands defined in Redis.
Technically the problem is due to the client type API, which does not
return a special value for the master; however fixing it locally in
the CLIENT KILL command is currently better, because otherwise we
would introduce a new output buffer limit class as a side effect.
From mailing list post https://groups.google.com/forum/#!topic/redis-db/D3k7KmJmYgM
In the file config.h, the definition HAVE_ATOMIC is used to indicate
whether the architecture Redis is built on supports atomic
synchronization primitives. PowerPC supports atomic synchronization
primitives, however it is not listed as one of the architectures
supported in config.h. This patch adds __powerpc__ to the list of
architectures designated to support atomic builtins. The resulting
improvement of Redis on PowerPC is significant, around 30% to 40%,
over the default implementation using pthreads.
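A sketch of the resulting guard in config.h (the exact surrounding
conditions, such as GCC version checks, may differ):

    #if (defined(__i386) || defined(__amd64) || defined(__powerpc__)) \
        && defined(__GNUC__)
    #define HAVE_ATOMIC
    #endif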
From mailing list post https://groups.google.com/forum/#!topic/redis-db/QLjiQe4D7LA
In zmalloc.c the following primitives are currently used
to synchronize access to a single global variable:
__sync_add_and_fetch
__sync_sub_and_fetch
On some architectures such as PowerPC these primitives are overhead
intensive. More efficient C11-style __atomic builtins are available
with newer GCC versions, see
http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/_005f_005fatomic-Builtins.html#_005f_005fatomic-Builtins
By substituting the following __atomic builtins:
__atomic_add_fetch
__atomic_sub_fetch
the performance improvement on certain architectures such as PowerPC
can be significant, around 10% to 15%, over the implementation using
the __sync builtins, while there is only a slight uptick on Intel
architectures, since the __sync implementation was already enforcing
Intel's strongly ordered memory semantics.
The selection of the __atomic builtins can be predicated on the
definition of __ATOMIC_RELAXED, which is available in GCC 4.8.2 and
later versions.
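A sketch of how the selection might look in zmalloc.c, assuming the
used_memory counter and the update_zmalloc_stat_add/sub macro names
already used there:

    /* Prefer the C11-style __atomic builtins with relaxed ordering
     * when the compiler provides them (GCC >= 4.8.2 defines
     * __ATOMIC_RELAXED), falling back to the older __sync builtins. */
    #if defined(__ATOMIC_RELAXED)
    #define update_zmalloc_stat_add(__n) \
        __atomic_add_fetch(&used_memory, (__n), __ATOMIC_RELAXED)
    #define update_zmalloc_stat_sub(__n) \
        __atomic_sub_fetch(&used_memory, (__n), __ATOMIC_RELAXED)
    #elif defined(HAVE_ATOMIC)
    #define update_zmalloc_stat_add(__n) \
        __sync_add_and_fetch(&used_memory, (__n))
    #define update_zmalloc_stat_sub(__n) \
        __sync_sub_and_fetch(&used_memory, (__n))
    #endif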
While we have to output failing masters in order to provide an
accurate map (which may be the map of a Redis Cluster in a down state
because not all slots are served by a working master), providing
slaves in FAIL state is not a good idea, since those are not
necessarily needed, and the client will likely incur a latency
penalty trying to connect to a slave which is down.
Note that this means that CLUSTER SLOTS does not provide a *complete*
map of slaves; however this would not be of any help, since slaves
may be added later, and a client that needs to scale reads and
requires to stay updated with the list of slaves needs to refresh the
map from time to time anyway.
CLUSTER SLOTS returns a Redis-formatted mapping from
slot ranges to IP/port pairs serving that slot range.
The outer return elements group return values by slot ranges.
The first two entries in each result are the min and max slots for
the range.
The third entry in each result is guaranteed to be either
the IP/port of the master for that slot range, OR null
if that slot range, for some reason, has no master.
The 4th and higher entries in each result are replica instances
for the slot range.
Output comparison:
127.0.0.1:7001> cluster nodes
f853501ec8ae1618df0e0f0e86fd7abcfca36207 127.0.0.1:7001 myself,master - 0 0 2 connected 4096-8191
5a2caa782042187277647661ffc5da739b3e0805 127.0.0.1:7005 slave f853501ec8ae1618df0e0f0e86fd7abcfca36207 0 1402622415859 6 connected
6c70b49813e2ffc9dd4b8ec1e108276566fcf59f 127.0.0.1:7007 slave 26f4729ca0a5a992822667fc16b5220b13368f32 0 1402622415357 8 connected
2bd5a0e3bb7afb2b56a2120d3fef2f2e4333de1d 127.0.0.1:7006 slave 32adf4b8474fdc938189dba00dc8ed60ce635b0f 0 1402622419373 7 connected
5a9450e8279df36ff8e6bb1c139ce4d5268d1390 127.0.0.1:7000 master - 0 1402622418872 1 connected 0-4095
32adf4b8474fdc938189dba00dc8ed60ce635b0f 127.0.0.1:7002 master - 0 1402622419874 3 connected 8192-12287
5db7d05c245267afdfe48c83e7de899348d2bdb6 127.0.0.1:7004 slave 5a9450e8279df36ff8e6bb1c139ce4d5268d1390 0 1402622417867 5 connected
26f4729ca0a5a992822667fc16b5220b13368f32 127.0.0.1:7003 master - 0 1402622420877 4 connected 12288-16383
127.0.0.1:7001> cluster slots
1) 1) (integer) 0
2) (integer) 4095
3) 1) "127.0.0.1"
2) (integer) 7000
4) 1) "127.0.0.1"
2) (integer) 7004
2) 1) (integer) 12288
2) (integer) 16383
3) 1) "127.0.0.1"
2) (integer) 7003
4) 1) "127.0.0.1"
2) (integer) 7007
3) 1) (integer) 4096
2) (integer) 8191
3) 1) "127.0.0.1"
2) (integer) 7001
4) 1) "127.0.0.1"
2) (integer) 7005
4) 1) (integer) 8192
2) (integer) 12287
3) 1) "127.0.0.1"
2) (integer) 7002
4) 1) "127.0.0.1"
2) (integer) 7006
Instead of having a hardcoded IP address in the node configuration,
we autodiscover it via MEET messages, so it is automatically updated
when the node is restarted with a different IP address.
This mechanism was discussed in the context of PR #1782.
In the initialization test for each instance we used to unregister the
old master and register it again to clear the config.
However there is a race condition in doing this: as soon as we
unregister and re-register "mymaster", another Sentinel can update
the new configuration with the old state because of gossip "hello"
messages.
So the correct procedure is instead to unregister "mymaster" from all
the Sentinel instances, and then re-register it everywhere again.
Some deployments need traffic sent from a specific address. This
change uses the same policy as Cluster where the first listed bindaddr
becomes the source address for outgoing Sentinel communication.
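For example, with a sentinel.conf fragment like the following
(addresses here are illustrative), 192.168.1.100 becomes the source
address for outgoing Sentinel connections:

    bind 192.168.1.100 127.0.0.1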
Fixes #1667