setTypeRandomElements() now returns unsigned long, and also uses unsigned long for anything related to count of members.
spopWithCountCommand() now uses unsigned long elements_returned instead of int, for values returned from setTypeRandomElements()
If we woke up to accept a connection, but we can't
accept it, inform the user of the error going on
with their networking.
(The previous message was the same for success or error!)
spopCommand() now runs spopWithCountCommand() when the <count> param is present.
Added intsetRandomMembers() to Intset: copies N random members from the set into the provided 'values' array. Uses either the Knuth or Floyd sampling algorithm depending on the count/size ratio (see the sketch below).
Added setTypeRandomElements() to SET type: returns a number of random elements from a non-empty set. This is a version of setTypeRandomElement() modified to return multiple entries, using dictGetRandomKeys() and intsetRandomMembers().
Added tests for SPOP with <count>: unit/type/set, unit/scripting, integration/aof
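As a rough illustration of the Floyd sampling approach mentioned above, here is a hedged sketch (not the actual intsetRandomMembers() code; the array layout and helper names are made up):

    #include <stdlib.h>

    static int already_picked(const long *picked, long n, long candidate) {
        for (long i = 0; i < n; i++)
            if (picked[i] == candidate) return 1;
        return 0;
    }

    /* Pick 'count' distinct indexes out of 'size' (count <= size) without
     * building a full shuffle, following Robert Floyd's algorithm. */
    void floyd_sample(long *picked, long count, long size) {
        long n = 0;
        for (long j = size - count; j < size; j++) {
            long r = random() % (j + 1);
            /* If 'r' was already picked, pick 'j' instead: 'j' cannot have
             * been picked before since it was out of range until now. */
            long chosen = already_picked(picked, n, r) ? j : r;
            picked[n] = chosen;
            n++;
        }
    }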
--
Cleaned up code a bit to match the required Redis coding style
Otherwise there are security risks, especially when providing Redis as a service: the user may "sniff" for admin commands renamed to an unguessable string via rename-command in redis.conf.
Previously the string was created empty then re-sized
to fit the offset, but sds resize causes the sds to
over-allocate by at least 1 MB (which is a lot when
you are operating at bit-level access).
This also improves the speed of initial sets by 2% to 6%
based on quick testing.
Patch logic provided by @oranagra
Fixes #1918
Improvements:
- Return empty string if asking for non-existing section (INFO foo)
- Fix potential memory leak (caused by sdsempty() then returned if >2 args)
- Clean up argument parsing
- Allow "all" as valid section (same as "default" or zero args currently)
- Move strcasecmp to end of evaluation chain in conditionals
Also, since we're C99, I moved some variable declarations closer to where they are actually used (this saves us from needing to free an empty info string if we detect argument errors up front).
Closes #1915, closes #1966
There is no standard cross-platform way of obtaining system memory info, but I found a useful function covering all common platforms. I removed support for uncommon Redis platforms (Windows, AIX) and left the others intact.
For more info, see:
http://nadeausoftware.com/articles/2012/09/c_c_tip_how_get_physical_memory_size_system
The system memory info is cached on startup, but some systems
may be able to change the amount of memory visible to Redis
at runtime if Redis is deployed in a VM or container.
Also see #1820
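On Linux and most POSIX systems the cached value can be obtained roughly like this (a minimal sketch along the lines of the approach in the linked article, not the exact function that was imported):

    #include <unistd.h>

    /* Return the physical memory size in bytes, or 0 if unknown.
     * Works on Linux and other POSIX systems exposing _SC_PHYS_PAGES;
     * other platforms need the sysctl() variants described in the
     * linked article. */
    size_t physical_memory_bytes(void) {
        long pages = sysconf(_SC_PHYS_PAGES);
        long page_size = sysconf(_SC_PAGE_SIZE);
        if (pages <= 0 || page_size <= 0) return 0;
        return (size_t)pages * (size_t)page_size;
    }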
Key expiry on slaves is orchestrated by the master. Sometimes the master will send the synthesized DEL to expire keys on the slave with a non-trivial delay (when the key is not accessed, only the incremental expiry algorithm will expire it in the background).
During that time, a key is logically expired, but slaves still return
the key if you GET (or whatever) it. This is a bad behavior.
However we can't simply trust the slave view of the key, since we need
the master to be able to send write commands to update the slave data
set, and DELs should only happen when the key is expired in the master
in order to ensure consistency.
However 99.99% of the issues with this behavior happen when a client which is not a master sends a read-only command. In this case we are safe and
can consider the key as non existing.
This commit does a few changes in order to make this sane:
1. lookupKeyRead() is modified in order to return NULL if the above
conditions are met.
2. Calls to lookupKeyRead() in commands actually writing to the data set
are replaced with calls to lookupKeyWrite().
There are redundant checks, so for example, if in "2" something was overlooked, we should still be safe, since anyway, when the master writes, the behavior is to not care about what expireIfNeeded() returns.
This commit is related to #1768, #1770, #2131.
I guess the initial goal of the initialization was to suppress a GCC warning, but if we have to initialize, we can do it with the base-case value instead of NULL, which is never retained.
In the case where all chars of the string s are found in 'cset', the second scan (line 573) no longer repeats the work the first scan (line 572) already did. This is faster, especially when the string s is very long and all of its chars are found in 'cset'.
Track bandwidth used by clients and replication (but diskless
replication is not tracked since the actual transfer happens in the
child process).
This includes a refactoring that makes tracking new instantaneous
metrics simpler.
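A hedged sketch of this kind of instantaneous-metric tracking (the window size, names, and scaling are illustrative, not the actual Redis functions):

    #define METRIC_SAMPLES 16

    /* Ring buffer of per-interval rates; the instantaneous value is the
     * average of the last METRIC_SAMPLES intervals, scaled per second. */
    struct inst_metric {
        long long last_sample_time;  /* ms */
        long long last_sample_count; /* absolute counter at last sample */
        long long samples[METRIC_SAMPLES];
        int idx;
    };

    void metric_track(struct inst_metric *m, long long now_ms, long long count) {
        long long t = now_ms - m->last_sample_time;
        long long c = count - m->last_sample_count;
        /* Scale the delta to a per-second rate before storing it. */
        m->samples[m->idx] = t > 0 ? (c * 1000 / t) : 0;
        m->idx = (m->idx + 1) % METRIC_SAMPLES;
        m->last_sample_time = now_ms;
        m->last_sample_count = count;
    }

    long long metric_value(const struct inst_metric *m) {
        long long sum = 0;
        for (int j = 0; j < METRIC_SAMPLES; j++) sum += m->samples[j];
        return sum / METRIC_SAMPLES;
    }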
PFCOUNT is technically speaking a write command, since the cached value
of the HLL is exposed in the data structure (design error, mea culpa), and
can be modified by PFCOUNT.
However if we flag PFCOUNT as "w", read only slaves can't execute the
command, which is a problem since there are environments where slaves
are used to scale PFCOUNT reads.
Nor is it possible to just prevent PFCOUNT from modifying the data structure in slaves, since without the cache we lose too much efficiency.
So while this commit allows slaves to create a temporary inconsistency
(the strings representing the HLLs in the master and slave can be
different in certain moments) it is actually harmless.
In the long run this should be probably fixed by turning the HLL into a
more opaque representation, for example by storing the cached value in
the part of the string which is not exposed (this should be possible
with SDS strings).
The bulk_data field size was not removed from the count. It is not possible to declare it simply as 'char bulk_data[]' since the structure is nested inside another structure.
Because of (not so) recent Redis changes, now the LRU internally
reported unit is milliseconds, not seconds, but the DEBUG OBJECT output
was still claiming seconds while providing milliseconds.
However OBJECT IDLETIME was working as expected, which is the correct
API to use.
zmalloc(0) caused a non-zero allocation to actually be triggered, since with the standard libc malloc we have our own zmalloc header for memory tracking, but at the same time the returned pointer is at the end of the block and not in the middle. This triggers a false positive when testing with Valgrind.
When the inline protocol args count is 0, we now avoid reallocating c->argv, preventing the issue from happening.
Issue: #2157
As the SET command is parsed, it remembers which options are already set
and if a duplicate option is found, raises an error because it is
essentially an invalid syntax.
It still allows mutually exclusive options like EX and PX because taking
an option over another (precedence) is not essentially a syntactic
error.
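A hedged sketch of this kind of duplicate-option detection (the flag names and parse loop are illustrative, not the actual setCommand() code):

    #include <strings.h>   /* strcasecmp */

    #define OPT_NX (1<<0)
    #define OPT_XX (1<<1)
    #define OPT_EX (1<<2)
    #define OPT_PX (1<<3)

    /* Scan SET options, returning -1 on invalid syntax. Handling of the
     * EX/PX expire argument is omitted for brevity. */
    int parse_set_options(int argc, char **argv, int *flags) {
        *flags = 0;
        for (int j = 0; j < argc; j++) {
            int opt;
            if (!strcasecmp(argv[j], "nx")) opt = OPT_NX;
            else if (!strcasecmp(argv[j], "xx")) opt = OPT_XX;
            else if (!strcasecmp(argv[j], "ex")) opt = OPT_EX;
            else if (!strcasecmp(argv[j], "px")) opt = OPT_PX;
            else return -1;
            /* A repeated option is essentially invalid syntax: error out. */
            if (*flags & opt) return -1;
            *flags |= opt;
        }
        return 0;
    }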
Sentinel queries the INFO from every master and from every replica of
every master.
We can cache the INFO results in Sentinel so Sentinel can be a single
place to quickly get all INFO output for an entire Sentinel monitoring
group.
This commit gives us SENTINEL INFO-CACHE in two forms:
- SENTINEL INFO-CACHE — returns all masters and all replicas
- SENTINEL INFO-CACHE master0 master1 ... masterN — vararg specify masters
Results are returned as a multibulk reply with two top-level entries
for each master. The first entry for each master is the name of the master.
The second entry is a nested multibulk reply with the contents of INFO,
first for the master, then an additional entry for each of the
replicas.
RDB EOF detection was relying on the final part of the RDB transfer to be a magic 40-byte EOF marker. However as the slave is put online immediately, and because of socket timeouts, the replication stream is actually contiguous with the RDB file.
This means that to detect the EOF correctly we should either:
1) Scan all the stream searching for the mark. Sucks CPU-wise.
2) Start to send the replication stream only after an acknowledge.
3) Implement a proper chunked encoding.
For now solution "2" was picked, so the master does not immediately start sending the stream of commands in the case of diskless replication. We wait for the first REPLCONF ACK command from the slave, which certifies that the slave correctly loaded the RDB file and is ready to get more data.
Both upstart and systemd provide a way for daemons to be supervised, as well as a mechanism for them to signal their readiness status.
This patch provides compatibility with this functionality while
not interfering with other methods.
With this, it will be possible to use `expect stop` with upstart
and `Type=notify` with systemd.
A more detailed explanation of the mechanism can be found here:
http://spootnik.org/entries/2014/11/09_pid-tracking-in-modern-init-systems.html
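As a hedged illustration of the systemd side of this mechanism, the sketch below sends READY=1 over the NOTIFY_SOCKET datagram socket (not the exact code added by the patch); for upstart's `expect stop` the daemon simply raises SIGSTOP once it is ready.

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Tell a Type=notify systemd supervisor that we are ready by sending a
     * datagram to the socket named in NOTIFY_SOCKET. Returns 0 on success,
     * -1 when unsupervised or on error. */
    int notify_ready(void) {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || (path[0] != '/' && path[0] != '@')) return -1;

        struct sockaddr_un su;
        memset(&su, 0, sizeof(su));
        su.sun_family = AF_UNIX;
        strncpy(su.sun_path, path, sizeof(su.sun_path) - 1);
        if (path[0] == '@') su.sun_path[0] = '\0'; /* abstract namespace */
        socklen_t len = offsetof(struct sockaddr_un, sun_path) + strlen(path);

        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd == -1) return -1;
        const char *msg = "READY=1";
        int ok = sendto(fd, msg, strlen(msg), 0,
                        (struct sockaddr *)&su, len) == (ssize_t)strlen(msg);
        close(fd);
        return ok ? 0 : -1;
    }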
If Redis works in cluster mode and redis-cli was run with command arguments (argv), reconnect if needed.
Example:
./redis-cli set foo bar
If the reply is MOVED, redis-cli previously just did nothing.
Same as the original bind fixes (we just missed these the
first time around).
This helps Redis not automatically send
connections from the first IP on an interface if we are bound
to a specific IP address (e.g. with multiple IP aliases on one
interface, you want to send from _your_ IP, not from the first IP
on the interface).
People mostly use SORT against lists, but our prior
behavior was pretending lists were an unordered bag
requiring a forced-sort when no sort was requested.
We can just use the native list ordering to ensure
consistency across replication and scripting calls.
Closes #2079, closes #545 (again)
This caused BGSAVE to be triggered a second time without any need when
we switch from socket to disk target via the command
CONFIG SET repl-diskless-sync no
and there is already a slave waiting for the BGSAVE to start.
Also comments clarified about what is happening.
EWOULDBLOCK with the fdset rio target is returned when we try to write
but the send timeout socket option triggered an error. Better to
translate the error in something the user can actually recognize as a
timeout.
We need to avoid that a child -> slaves transfer can continue forever.
We use the same timeout used as global replication timeout, which is
documented to also affect I/O operations during bulk transfers.
To perform a socket write() for each RDB rio API write call was extremely inefficient, so now rio has minimal buffering capabilities. Writes are accumulated into a buffer, and only when a given limit is reached are they actually written to the N slave FDs.
Trivia: rio lacked support for buffering since our targets were:
1) Memory buffers.
2) C standard I/O.
Both were buffered already.
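A hedged sketch of the buffering idea (the buffer size and names are illustrative; the real rio code also tracks per-FD errors as described below):

    #include <string.h>
    #include <unistd.h>

    #define WBUF_SIZE (1024*1024)

    struct buffered_writer {
        int *fds;      /* destination file descriptors */
        int numfds;
        char buf[WBUF_SIZE];
        size_t pos;
    };

    /* Flush the accumulated buffer to every destination FD. */
    static int bw_flush(struct buffered_writer *w) {
        for (int j = 0; j < w->numfds; j++) {
            size_t off = 0;
            while (off < w->pos) {
                ssize_t n = write(w->fds[j], w->buf + off, w->pos - off);
                if (n <= 0) return -1; /* real code records per-FD errno */
                off += n;
            }
        }
        w->pos = 0;
        return 0;
    }

    /* Accumulate writes, flushing only when the buffer limit is reached. */
    int bw_write(struct buffered_writer *w, const void *p, size_t len) {
        while (len) {
            size_t avail = WBUF_SIZE - w->pos;
            size_t chunk = len < avail ? len : avail;
            memcpy(w->buf + w->pos, p, chunk);
            w->pos += chunk;
            p = (const char *)p + chunk;
            len -= chunk;
            if (w->pos == WBUF_SIZE && bw_flush(w) == -1) return -1;
        }
        return 0;
    }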
This is useful for normal replication in order to refresh the slave
when we are persisting on disk, but for diskless replication the
child is already receiving data while in WAIT_BGSAVE_END state.
If we turn from diskless to disk-based replication via CONFIG SET, we
need a way to start a BGSAVE if there are slaves already waiting for a
BGSAVE to start. Normally with disk-based replication we do it as soon
as the previous child exits, but when there is a configuration change
via CONFIG SET, we may have slaves in WAIT_BGSAVE_START state without
an RDB background process currently active.
The fdset target is used when we want to write an RDB file directly to the slaves' sockets. In this setup, as long as there is a single slave that is still receiving our payload, we want to continue sending instead of aborting. However rio calls should abort if no FD is ok.
Also we want the errors reported so that we can signal the parent who is ok and who is broken, so there is a new set of integers with the state of each FD. Zero is ok, non-zero is the errno of the failure, if available, or a generic EIO.
A few people have written custom C commands because bit
manipulation isn't exposed through Lua. Let's give
them Mike Pall's bitop.
This adds bitop 1.0.2 (2012-05-08) from http://bitop.luajit.org/
bitop is imported as "bit" into the global namespace.
New Lua commands: bit.tobit, bit.tohex, bit.bnot, bit.band, bit.bor, bit.bxor,
bit.lshift, bit.rshift, bit.arshift, bit.rol, bit.ror, bit.bswap
Verification that it works (the asserts would abort on error, so (nil) is correct):
127.0.0.1:6379> eval "assert(bit.tobit(1) == 1); assert(bit.band(1) == 1); assert(bit.bxor(1,2) == 3); assert(bit.bor(1,2,4,8,16,32,64,128) == 255)" 0
(nil)
127.0.0.1:6379> eval 'assert(0x7fffffff == 2147483647, "broken hex literals"); assert(0xffffffff == -1 or 0xffffffff == 2^32-1, "broken hex literals"); assert(tostring(-1) == "-1", "broken tostring()"); assert(tostring(0xffffffff) == "-1" or tostring(0xffffffff) == "4294967295", "broken tostring()")' 0
(nil)
Tests also integrated into the scripting tests and can be run with:
./runtest --single unit/scripting
Tests are excerpted from `bittest.lua` included in the bitop distribution.
With the exception of nodes sending MEET packets: we have to trust them since they can send us MEET packets only when the cluster is initially created or because of manual sysadmin action.
In the cluster evaluation function we are supposed to set the cluster state as "fail" if we are among a minority, however the code was not detecting that we are in a minority partition when exactly half the masters are reachable, which is not a majority.
We need to remember what the saving strategy of the current RDB child process is, since the configuration may be modified at runtime via CONFIG SET and we'll still need to understand, when the child exits, what to do and for what goal the process was initiated: to create an RDB file on disk or to write stuff directly to the slaves' sockets.
However we don't try to do this if the integer is already inside a range
representable with a shared integer.
The performance gain appears to be around ~15% in micro benchmarks,
however in the long run this also helps to improve locality, so it should have additional, harder to measure, benefits.
Some language in the comment was difficult
to understand, so this commit: clarifies wording, removes
unnecessary words, and relocates some dependent clauses
closer to what they actually describe.
I also tried to break up longer chains of thought
(if X, then Y, and Q, and also F, so obviously M)
into more manageable chunks for ease of understanding.
- Remove trailing newlines from redis.conf
- Fix comment misspelling
- Clarifies zipEncodeLength usage and a C API mention (#1243, #1242)
- Fix cluster typos (inspired by @papanikge #1507)
- Fix rewite -> rewrite in a few places (inspired by #682)
Closes #1243, #1242, #1507
The old DEBUG POPULATE form for automatic creation of test keys is:
DEBUG POPULATE <count>
Now an additional form is available:
DEBUG POPULATE <count> <prefix>
When prefix is not specified, it defaults to "key", so the keys are
named incrementally from key:0 to key:<count-1>. Otherwise the specified
prefix is used instead of "key".
The command is useful in order to populate different Redis instances with key names guaranteed not to collide. There are other debugging uses, for example it is possible to add N additional keys using a count of N and a random prefix at every call.
Following the CLIENT LIST output format, we prefix the unix socket
address with a "/" so that it is different than an IPv4/6 address.
This makes parsing simpler.
Related to #2010.
This fixes a potential bug that was never observed in practice since
what happens is that the asynchronous connect returns ok (to fail later,
calling the handler) every time, so a ping is queued, and sent_ping
happens to always be populated.
However technically connect(2) with a non-blocking socket may return an error synchronously, so before this fix the code was not correct.
It is not clear if files open in append only mode will automatically fix
their offset after a truncate(2) operation. This commit makes sure that
we reposition the AOF file descriptor offset at the end of the file
after a truncated AOF is loaded and trimmed to the last valid command.
Recently we introduced the ability to load truncated AOFs, but unfortunately the support was broken since the server, after loading the truncated AOF, continues appending to the file that is corrupted at the end. The problem is fixed only by the next AOF rewrite.
This commit fixes the issue by truncating the AOF to the last valid
opcode, and aborting if it is not possible to truncate the file
correctly.
The code to check the number of voters was never updated to follow the new
Sentinel specification, so the number of voters was computed using only
the set of Sentinels that provided a vote.
This means that there is a changing majority on partitions, even if
usually the issue is not triggered because of the configured quorum check (what was broken was the other implicit check that anyway requires half of the known Sentinels to agree in order to start a failover).
Because of the new ability to start with a truncated AOF, we need
to correctly release all the memory on EOF error. Otherwise there is a
small leak, that is not really a problem, but causes a false positive in
the tests that detect memory leaks.
The original implementation was modified in order to allow to
selectively announce a different IP or port, and to rewrite the two
options in the config file after a rewrite.
Previously, GETRANGE of a key containing nothing ("")
would allocate a large (size_t)-1 return value causing
crashes on 32bit builds when it tried to allocate the
4 GB return string.
32 bit builds don't have a big enough long to capture
the same range as a 64 bit build. If we use "long long"
we get proper size limits everywhere.
Also updates size of unsigned comparison to fit new size of `end`.
Fixes #1981
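A hedged sketch of the kind of range clamping involved (variable names are illustrative, not the exact getrangeCommand() code):

    #include <string.h>

    /* Clamp a GETRANGE-style [start,end] request against a string of
     * length 'slen', using long long so 32-bit builds behave like 64-bit
     * ones. Returns the number of bytes to copy (0 for an empty result). */
    long long clamp_range(long long start, long long end, size_t slen) {
        long long len = (long long)slen;
        if (start < 0) start = len + start;
        if (end < 0) end = len + end;
        if (start < 0) start = 0;
        if (end < 0) end = 0;
        if (end >= len) end = len - 1;
        /* An empty string or an inverted range yields an empty reply,
         * instead of underflowing to a huge unsigned value. */
        if (len == 0 || start > end || start >= len) return 0;
        return end - start + 1;
    }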
This allows supporting datasets with more than 2 billion keys (possible in very large memory instances; this bug was actually reported).
Closes issue #1814.
This just deletes old code that didn't get removed when
logic changed. We were setting offsets that never
got read anywhere.
Since clients are now just cloned, we don't need to track
per-client buffer offsets anywhere because they are all
the same from the original client.
This commit adds a size check after initial config
line parsing to make sure we have *at least* 8 arguments
per line.
Also, instead of asserting for cluster->myself, we just test
and error out normally (since the error does a hard exit anyway).
Closes #1597
For some Solaris flavours, inet_aton() is in the resolv library. Not linking this library will introduce a link error. This improves compatibility with older Solaris and still works on new Solaris.
Closes #1092
The function was also modified in order to be more standalone and produce an output without trailing spaces, making reuse simpler.
The global variable was renamed in camel case like most other Redis globals, except the main ones we refer to many times, like 'server'.
Found by The Mayhem Team (Alexandre Rebert, Thanassis Avgerinos,
Sang Kil Cha, David Brumley, Manuel Egele) Cylab, Carnegie Mellon
University. See http://bugs.debian.org/716259 for more.
Signed-off-by: Chris Lamb <lamby@debian.org>
Fixes #1191
Previously, "MOVE key somestring" would move the key to
DB 0 which is just unexpected and wrong.
String as DB == error.
Test added too.
Modified by @antirez in order to use the getLongLongFromObject() API
instead of strtol().
Fixes #1428
Also adds test for numsub — due to tcl being tcl,
it doesn't capture the "numberness" of the fix,
but now we at least have one test case for numsub.
Closes #1561
Modified by @antirez since the original fix to genInfoString() looked
weak. Probably the clang analyzer complained about `section` being
possibly NULL, and strcasecmp() called with a NULL pointer. In practice this can never happen; still, for the sake of correctness
the right fix is not to modify only the first call, but to set `section`
to the value of "default" if it happens to be NULL.
Closes #1660
We only want to use the last STORE key, but we have to record
we actually found a STORE key so we can increment the final return
key count.
Test added to prevent further regression.
Closes #1883, #1645, #1647
Previously the end was cast to a smaller type, which resulted in a wrong check that failed with values larger than the unsigned type can handle.
Closes #1847, #1844
Cluster leaks memory while connecting due to a missing freeaddrinfo().
(Commit modified by @antirez. The freeaddrinfo() call was misplaced, so in case no address was bound, the memory leak was still there.)
Closes #1801
Previously redis-cli would happily show "-1" or "99999"
as valid DB choices.
Now, if the SELECT call returned an error, we don't update
the DB number in the CLI.
Inspired by @anupshendkar in #1313. Fixes #566, #1313
Previously, if you did SELECT then AUTH, redis-cli
would show your SELECT'd db even though it didn't
happen.
Note: running into this situation is a (hopefully) very limited use case of people using multiple DBs and AUTH all at the same time.
Fixes antirez#1639
Replica migration algorithm modified so that slaves never try to migrate
to masters that were never configured to have slaves in the past.
We want the algorithm to take care of masters that remained without
*working* slaves, but that used to have slaves according to the cluster
configuration.
Based on ideas documented in this blog post:
https://www.facebook.com/notes/facebook-engineering/three-optimization-tips-for-c/10151361643253920
The original code was modified to handle signed integers, reformatted to fit inside the Redis code base, and stress-tested with a program in order to validate the implementation against snprintf().
Redis was measured to be noticeably faster from the point of view of clients in real-world operations because of this change, since number-to-string conversion is sometimes used extensively (for example every time a GET results in an integer-encoded object being returned to the user).
This is just a quick fix. Given the nature of the test, the right way to fix it is to average the error over N runs, since otherwise it is always possible to get a false positive with a bad run; or, trying to minimize that possibility too much, we may end up testing with overly large error ranges.
The user @kjmph provided excellent ideas to improve the speed of ZUNIONSTORE (in certain cases by many orders of magnitude), together with an implementation of the ideas.
While the ideas were sound, the implementation could be improved both in terms of speed and clarity, so this is my attempt at reimplementing the proposed speedup, trying to improve by directly using just a dictionary with an embedded score inside, and reusing the single-pass aggregate + order-later approach.
Note that you can't apply this commit without applying the previous
commit in this branch that adds a double in the dictEntry value union.
Issue #1786.
For non-empty masters, CLUSTER RESET is denied, and the user is required to start resetting a node by explicitly clearing it with FLUSHALL.
However CLUSTER RESET, when executed on slaves, doesn't have this restriction since the data is just a replica of the master, and with read-only slaves it is also not possible to remove the data set. However the node was turned from slave to master after a reset, without touching the old slave data. This is 99.99% of the time not appropriate and forces full resets to follow this path to work with both slave and master nodes:
FLUSHALL
CLUSTER RESET HARD
FLUSHALL
Since we need the first flushall for masters, and the second for slaves.
This commit changes the behavior so that CLUSTER RESET removes the data set
of a slave node during a reset, in the moment it gets turned into a master,
so the new pattern is simply:
FLUSHALL (that may fail for slaves)
CLUSTER RESET
The introduction of the --from --to --slots --yes options allows resharding from the CLI in an automated way from scripts. The code is ugly and needs refactoring as soon as we get it in an RC / stable release.
In order to make sure every object has its own private LRU counter, when
maxmemory is enabled tryObjectEncoding() does not use the pool of shared
integers. However when the policy is not LRU-based, it does not make
sense to do so, and it is much better to save memory using shared
integers.
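A hedged sketch of the resulting condition (names are illustrative, not the exact tryObjectEncoding() logic):

    /* Shared integers can be reused only when objects do not need a
     * private LRU field, i.e. when maxmemory is disabled or the eviction
     * policy is not LRU-based. Illustrative only. */
    int can_use_shared_integers(long long maxmemory, int lru_policy) {
        if (maxmemory == 0) return 1;   /* no eviction: always share */
        return lru_policy ? 0 : 1;      /* LRU needs a private counter */
    }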
PING can now be called with an additional argument, behaving exactly like the ECHO command. PING can now also be called in Pub/Sub mode (with one or more subscriptions to channels / patterns) in order to trigger the delivery of an asynchronous pong message with the optional payload.
This fixes issue #420.
The code tested many times if a client had active Pub/Sub subscriptions
by checking the length of a list and dictionary where the patterns and
channels are stored. This was substituted with a client flag called
REDIS_PUBSUB that is simpler to test for. Moreover in order to manage
this flag some code was refactored.
This commit is believed to have no effects in the behavior of the
server.
Previously, the command definition for the OBJECT command specified
a minimum of two args (and that it was variadic), which meant that
if you sent this:
OBJECT foo
When cluster was enabled, it would result in an assertion/SEGFAULT
when Redis was attempting to extract keys.
It appears that OBJECT is not variadic, and only ever takes 3 args.
https://gist.github.com/michael-grunder/25960ce1508396d0d36a
We now wait up to 1 second for diff data to come from the parent,
however we use poll(2) to wait for more data, and use a counter of
contiguous failures to get data for N times (set to 20 experimentally
after different tests) as an early stop condition to avoid wasting 1
second when the write traffic is too low.
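A hedged sketch of the wait loop described above (the timeout, the retry limit, and the helper names are illustrative, not the actual AOF rewrite code):

    #include <poll.h>
    #include <unistd.h>

    #define WAIT_MS_PER_TRY 50   /* poll() timeout for each attempt */
    #define MAX_EMPTY_TRIES 20   /* early-stop after N tries with no data */

    /* Read diff data from the parent, stopping after MAX_EMPTY_TRIES
     * consecutive polls with no data (about one second of silence).
     * Returns the total number of bytes read. */
    ssize_t drain_parent_diff(int fd, char *buf, size_t buflen) {
        ssize_t total = 0;
        int empty_tries = 0;

        while (empty_tries < MAX_EMPTY_TRIES) {
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            if (poll(&pfd, 1, WAIT_MS_PER_TRY) <= 0 || !(pfd.revents & POLLIN)) {
                empty_tries++;
                continue;
            }
            ssize_t n = read(fd, buf, buflen);
            if (n <= 0) break;   /* EOF or error: stop waiting */
            total += n;
            empty_tries = 0;     /* reset the early-stop counter */
        }
        return total;
    }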
We introduce the distinction between slow and fast commands since those
are two different sources of latency. An O(1) or O(log N) command without
side effects (can't trigger deletion of large objects as a side effect of
its execution) if delayed is a symptom of inherent latency of the system.
A non-fast command (commands that may run large O(N) computations) if
delayed may just mean that the user is executing slow operations.
The advice LATENCY should provide in these two different cases is different, so we log the two classes of commands in a separate way.
This fixes detection of a wrong subcommand (which resulted in the default all-commands output instead) and allows COMMAND INFO to be called without arguments (resulting in an empty array), which is useful in programmatically generated calls like the following (in Ruby):
redis.commands("command","info",*mycommands)
Note: mycommands may be empty.
The static qualifier was removed since symbols are needed in order to get meaningful stack traces. Minor changes were made to the source code to make it more similar to the existing Redis code base.
COMMANDS returns a nested multibulk reply for each
command in the command table. The reply for each
command contains:
- command name
- arity
- array of command flags
- start key position
- end key position
- key offset step
- optional: if the keys are not deterministic and
Redis uses an internal key evaluation function,
the 6th field appears and is defined as a status
reply of: REQUIRES ARGUMENT PARSING
Cluster clients need to know where the keys are in each
command to implement proper routing to cluster nodes.
Redis commands can have multiple keys, keys at offset steps, or other
issues where you can't always assume the first element after
the command name is the cluster routing key.
Using the information exposed by COMMANDS, client implementations
can have live, accurate key extraction details for all commands.
Also implements COMMANDS INFO [commands...] to return only a specific set of commands instead of all the 160+ commands in Redis.
Technically the problem is due to the client type API that does not
return a special value for the master, however fixing it locally in the
CLIENT KILL command is better currently because otherwise we would
introduce a new output buffer limit class as a side effect.
From mailing list post https://groups.google.com/forum/#!topic/redis-db/D3k7KmJmYgM
In the file “config.h”, the definition HAVE_ATOMIC is used to indicate
if an architecture on which redis is implemented supports atomic
synchronization primitives. Powerpc supports atomic synchronization
primitives, however, it is not listed as one of the architectures
supported in config.h. This patch adds the __powerpc__ to the list of
architectures supporting these primitives. The improvement of redis
due to the atomic synchronization on powerpc is significant,
around 30% to 40%, over the default implementation using pthreads.
This proposal adds __powerpc__ to the list of architectures designated
to support atomic builtins.
From mailing list post https://groups.google.com/forum/#!topic/redis-db/QLjiQe4D7LA
In zmalloc.c the following primitives are currently used
to synchronize access to single global variable:
__sync_add_and_fetch
__sync_sub_and_fetch
In some architectures such as powerpc these primitives are overhead
intensive. More efficient C11 __atomic builtins are available with
newer GCC versions, see
http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/_005f_005fatomic-Builtins.html#_005f_005fatomic-Builtins
By substituting the following __atomic… builtins:
__atomic_add_fetch
__atomic_sub_fetch
the performance improvement on certain architectures such as powerpc can be significant, around 10% to 15%, over the implementation using the __sync builtins, while there is only a slight uptick on Intel architectures because they already enforce strongly ordered memory semantics.
The selection of the __atomic built-ins can be predicated on the definition of __ATOMIC_RELAXED, which is available in GCC 4.8.2 and later versions.
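A hedged sketch of the kind of conditional selection described (the counter name is illustrative; the real zmalloc.c also has a pthread mutex fallback):

    #include <stddef.h>

    static size_t used_memory = 0;

    /* Prefer the C11-style __atomic builtins when the compiler provides
     * them (GCC >= 4.8.2 defines __ATOMIC_RELAXED), otherwise fall back
     * to the older __sync builtins. */
    #if defined(__ATOMIC_RELAXED)
    #define used_memory_add(n) __atomic_add_fetch(&used_memory, (n), __ATOMIC_RELAXED)
    #define used_memory_sub(n) __atomic_sub_fetch(&used_memory, (n), __ATOMIC_RELAXED)
    #else
    #define used_memory_add(n) __sync_add_and_fetch(&used_memory, (n))
    #define used_memory_sub(n) __sync_sub_and_fetch(&used_memory, (n))
    #endif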
While we have to output failing masters in order to provide an accurate map (that may be the one of a Redis Cluster in a down state because not all slots are served by a working master), providing slaves in FAIL state is not a good idea since those are not necessarily needed, and the client will likely incur a latency penalty trying to connect to a slave which is down.
Note that this means that CLUSTER SLOTS does not provide a *complete* map of slaves, however this would not be of any help since slaves may be added later, and a client that needs to scale reads and wants to stay updated with the list of slaves needs to refresh the map from time to time anyway.
CLUSTER SLOTS returns a Redis-formatted mapping from
slot ranges to IP/Port pairs serving that slot range.
The outer return elements group return values by slot ranges.
The first two entries in each result are the min and max slots for the range.
The third entry in each result is guaranteed to be either an IP/Port of the master for that slot range, OR null if that slot range, for some reason, has no master.
The 4th and higher entries in each result are replica instances
for the slot range.
Output comparison:
127.0.0.1:7001> cluster nodes
f853501ec8ae1618df0e0f0e86fd7abcfca36207 127.0.0.1:7001 myself,master - 0 0 2 connected 4096-8191
5a2caa782042187277647661ffc5da739b3e0805 127.0.0.1:7005 slave f853501ec8ae1618df0e0f0e86fd7abcfca36207 0 1402622415859 6 connected
6c70b49813e2ffc9dd4b8ec1e108276566fcf59f 127.0.0.1:7007 slave 26f4729ca0a5a992822667fc16b5220b13368f32 0 1402622415357 8 connected
2bd5a0e3bb7afb2b56a2120d3fef2f2e4333de1d 127.0.0.1:7006 slave 32adf4b8474fdc938189dba00dc8ed60ce635b0f 0 1402622419373 7 connected
5a9450e8279df36ff8e6bb1c139ce4d5268d1390 127.0.0.1:7000 master - 0 1402622418872 1 connected 0-4095
32adf4b8474fdc938189dba00dc8ed60ce635b0f 127.0.0.1:7002 master - 0 1402622419874 3 connected 8192-12287
5db7d05c245267afdfe48c83e7de899348d2bdb6 127.0.0.1:7004 slave 5a9450e8279df36ff8e6bb1c139ce4d5268d1390 0 1402622417867 5 connected
26f4729ca0a5a992822667fc16b5220b13368f32 127.0.0.1:7003 master - 0 1402622420877 4 connected 12288-16383
127.0.0.1:7001> cluster slots
1) 1) (integer) 0
2) (integer) 4095
3) 1) "127.0.0.1"
2) (integer) 7000
4) 1) "127.0.0.1"
2) (integer) 7004
2) 1) (integer) 12288
2) (integer) 16383
3) 1) "127.0.0.1"
2) (integer) 7003
4) 1) "127.0.0.1"
2) (integer) 7007
3) 1) (integer) 4096
2) (integer) 8191
3) 1) "127.0.0.1"
2) (integer) 7001
4) 1) "127.0.0.1"
2) (integer) 7005
4) 1) (integer) 8192
2) (integer) 12287
3) 1) "127.0.0.1"
2) (integer) 7002
4) 1) "127.0.0.1"
2) (integer) 7006
Instead of having a hardcoded IP address in the node configuration, we autodiscover it via MEET messages, for automatic update when the node is restarted with a different IP address.
This mechanism was discussed in the context of PR #1782.
Some deployments need traffic sent from a specific address. This
change uses the same policy as Cluster where the first listed bindaddr
becomes the source address for outgoing Sentinel communication.
Fixes#1667
Eventual configuration convergence is guaranteed by our periodic hello
messages to all the instances, however when there are important notices
to share, better make a phone call. With this commit we force a hello message to the other Sentinel and Redis instances within the next 100
milliseconds of a config update, which is practically better than
waiting a few seconds.
Lack of a check on the SRI_PROMOTED flag caused Sentinel to treat the promoted slave, turned into a master during the failover, as if it were a normal instance.
Normally this problem was not apparent because during real failovers the
old master is down so the bugged code path was not entered, however with
manual failovers via the SENTINEL FAILOVER command, the problem was
easily triggered.
This commit prevents promoted slaves from getting reconfigured, moreover
we now explicitly check that during a failover the slave turning into a
master is the one we selected for promotion and not a different one.
This implements the new Sentinel-Client protocol for the Sentinel part:
now instances are reconfigured using a transaction that ensures that the
config is rewritten in the target instance, and that clients lose the
connection with the instance, in order to be forced to: ask Sentinel,
reconnect to the instance, and verify the instance role with the new
ROLE command.
This will be used by CLIENT KILL and is also a good way to ensure a
given client is still the same across CLIENT LIST calls.
The output of CLIENT LIST was modified to include the new ID, but this change is considered to be backward compatible as the API does not imply you can do positional parsing, since each field has a different name.
Because of output buffer limits Redis internals had this idea of type of
clients: normal, pubsub, slave. It is possible to set different output
buffer limits for the three kinds of clients.
However all the macros and API were named after output buffer limit
classes, while the idea of a client type is a generic one that can be
reused.
This commit does two things:
1) Rename the API and defines with more general names.
2) Change the class of clients executing the MONITOR command from "slave"
to "normal".
"2" is a good idea because you want to have very special settings for
slaves, that are not a good idea for MONITOR clients that are instead
normal clients even if they are conceptually slave-alike (since it is a
push protocol).
The backward-compatibility breakage resulting from "2" is considered too minimal to care about, since MONITOR is a debugging command, and because anyway this change is not going to break the format or the behavior, but just affects when a connection is closed on big output buffer issues.
Lua scripts are executed in the context of the currently selected
database (as selected by the caller of the script).
However Lua scripts are also free to use the SELECT command in order to
affect other DBs. When SELECT is called from Lua, the old behavior, before
this commit, was to automatically set the Lua caller selected DB to the
last DB selected by Lua. See for example the following sequence of
commands:
SELECT 0
SET x 10
EVAL "redis.call('select','1')" 0
SET x 20
Before this commit after the execution of this sequence of commands,
we'll have x=10 in DB 0, and x=20 in DB 1.
Because of the problem above, there was a bug affecting replication of
Lua scripts, because of the actual implementation of replication. It was
possible to fix the implementation of Lua scripts in order to fix the
issue, but looking closely, the bug is the consequence of the behavior
of Lua ability to set the caller's DB.
Under the old semantics, a script selecting a different DB has no simple way to restore the state and select back the previously selected DB. Moreover the script author must remember that the restore is needed, otherwise the new commands executed by the caller will be executed in the context of a different DB.
So this commit fixes both the replication issue, and this hard-to-use
semantics, by removing the ability of Lua, after the script execution,
to force the caller to switch to the DB selected by the Lua script.
The new behavior of the previous sequence of commands is to just set x=20 in DB 0. However Lua scripts are still capable of writing / reading from different DBs if needed.
WARNING: This is a semantical change that will break programs that are designed to select the caller's DB via Lua scripts.
This fixes issue #1811.
It is more straightforward to just test for a numerical type avoiding
Lua's automatic conversion. The code is technically more correct now,
however Lua should automatically convert to number only if the original
type is a string that "looks like a number", and not from other types,
so practically speaking the fix is identical AFAIK.
The new check-for-number behavior of Lua arguments broke
users who use large strings of just integers.
The Lua number check would convert the string to a number, but
that breaks user data because
Lua numbers have limited precision compared to an arbitrarily
precise number wrapped in a string.
Regression fixed and new test added.
Fixes #1118 again.
The new ROLE command is designed to provide a client with information about replication in a fast and easy-to-use way, compared to the INFO command where the same information is also available.
Since there are ways to alter the configEpoch outside of the failover procedure (for example CLUSTER SET-CONFIG-EPOCH and the configEpoch collision resolution algorithm), always make sure, before replacing our configEpoch with a new one, that it is greater than the current one.
SET-CONFIG-EPOCH, used by redis-trib at cluster creation time, failed to
update the currentEpoch, making it possible after a failover for a
server to set its configEpoch to a value smaller than the current one
(since configEpochs are obtained using currentEpoch).
The bug totally breaks the Redis Cluster algorithms and protocols, allowing for permanent split brain conditions about the slots configuration, as shown in issue #1799.
I'm not sure whether, while the visibility is the inner block, the fact we point to 'dbuf' is a problem or not; probably the stack var is guaranteed to live until the function returns. However obvious code is better anyway.
The lua_to*string() family of functions uses a non-optimal format specifier when converting integers to strings. This has both the problem of the number being converted in exponential notation, which we don't use as a Redis return value when floating point numbers are involved, and, moreover, there is a loss of precision since the default format specifier is not able to represent numbers that must be represented exactly in the IEEE 754 mantissa.
The new code handles it as a special case using a saner conversion.
This fixes issue #1118.
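A hedged sketch of the special-case conversion (the cutoff and formatting choices are illustrative, not the exact patch applied to the Lua bindings):

    #include <math.h>
    #include <stdio.h>

    /* Convert a Lua-style double to a string: integers that fit in a
     * long long are printed exactly with %lld, everything else falls
     * back to the default floating point formatting. */
    int number_to_string(double d, char *buf, size_t buflen) {
        if (d == floor(d) && d >= -9.2e18 && d <= 9.2e18) {
            return snprintf(buf, buflen, "%lld", (long long)d);
        }
        return snprintf(buf, buflen, "%.14g", d);
    }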
If we are in the signal handler, we don't want to handle
the signal again. In extreme cases, this can cause a stack overflow
and segfault Redis.
Fixes#1771
There is a time defined by REDIS_CLUSTER_WRITABLE_DELAY during which the fail -> ok switch is not possible after startup as a master, however the contrary (ok -> fail) should always be possible.
Every log contains, just after the pid, a single character that provides
information about the role of an instance:
S - Slave
M - Master
C - Writing child
X - Sentinel
Behrad Zari discovered [1] and Josiah reported [2]: if you block
and wait for a list to exist, but the list creates from
a non-push command, the blocked client never gets notified.
This commit adds notification of blocked clients into
the DB layer and away from individual commands.
Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE,
and RESTORE. Previously, blocked client notifications were
only triggered by [LR]PUSH. Your client would never get
notified if a list were created by SORT..STORE or RENAME or
a RESTORE, etc.
Blocked client notification now happens in one unified place:
- dbAdd() triggers notification when adding a list to the DB
Two new tests are added that fail prior to this commit.
All tests pass.
Fixes #1668
[1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU
[2]: #1668
When scanning the argument list inside of a redis.call() invocation
for pre-cached values, there was no check being done that the
argument we were on was in fact within the bounds of the cache size.
So if a redis.call() command was ever executed with more than 32
arguments (current cache size #define setting) redis-server could
segfault.
Thanks to this change, when there is some code like:
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|...);
... and later before returning to the event loop ...
clusterUpdateState();
The clusterUpdateState() function will clear the flag and will not be repeated in the clusterBeforeSleep() function. This is especially important for config save/fsync flags which are slow to execute and not a good idea to repeat without a good reason.
This is implemented for all the CLUSTER_TODO flags.
The new command is able to reset a cluster node so that it starts again
as a fresh node. By default the command performs a soft reset (the same
as calling it as CLUSTER RESET SOFT), and the following steps are
performed:
1) All slots are set as unassigned.
2) The list of known nodes is flushed.
3) Node is set as master if it is a slave.
When a hard reset is performed with CLUSTER RESET HARD the following
additional operations are performed:
4) A new Node ID is created at random.
5) Epochs are set to 0.
CLUSTER RESET is useful both when the sysadmin wants to reconfigure a
node with a different role (for example turning a slave into a master)
and for testing purposes.
It also may play a role in automatically provisioned Redis Clusters, since it allows resetting a node back to the initial state in order to be reconfigured.
The previous code handling a lost slot (taken by another master with a higher configuration for the slot) was defensive, considering it an error and putting the cluster in an odd state requiring a redis-cli fix.
This was changed, because actually this only happens either in a legitimate way, with failovers, or when the admin messed with the config in order to reconfigure the cluster. So the new code will instead try to make sure that the keys stored match the new slots map, by removing all the keys in the slots we lost ownership of.
The function that deletes the keys from the lost slots is called only
if the node does not lose all its slots (resulting in a reconfiguration
as a slave of the node that got ownership). This is an optimization
since the replication code will anyway flush all the instance data in
a faster way.
Using CLUSTER FAILOVER FORCE it is now possible to failover a master in
a forced way, which means:
1) No check to understand if the master is up is performed.
2) No data age of the slave is checked. Even a slave with very old data
can manually failover a master in this way.
3) No chat with the master is attempted to reach its replication offset:
the master can just be down.
Automatic failovers only happen in Redis Cluster if the slave trying to
be elected was disconnected from its master for no more than 10 times
the node-timeout value. However there should be no such a check for
manual failovers, since these are initiated by the sysadmin that, in
theory, knows what she is doing when a slave is selected to be promoted.
Will be configurable / adaptive at some point but let's start with a
saner value compared to 1 sec which is not a good idea for big data
structures stored into a single key.
The error when the target key is busy was a generic one, while it makes
sense to be able to distinguish between the target key busy error and
the others easily.
The same change was operated for normal client connections. This is
important for Cluster as well, since when a node rejoins the cluster,
when a partition heals or after a restart, it gets flooded with new
connection attempts by all the other nodes trying to form a full
mesh again.
When a Sentinel performs a failover (successful or not), or when a
Sentinel votes for a different Sentinel trying to start a failover, it
sets a min delay before it will try to get elected for a failover.
While not strictly needed, because if multiple Sentinels will try
to failover the same master at the same time, only one configuration
will eventually win, this serialization is practically very useful.
Normal failovers are cleaner: one Sentinel starts to failover, the
others update their config when the Sentinel performing the failover
is able to get the selected slave to move from the role of slave to the
one of master.
However currently this timeout was implicit, so users could see
Sentinels not reacting, after a failed failover, for some time, without
giving any feedback in the logs to the poor sysadmin waiting for clues.
This commit makes Sentinels more verbose about the delay: when a master is down and a failover attempt is not performed because the delay has still not elapsed, something like this will be logged:
Next failover delay: I will not start a failover
before Thu May 8 16:48:59 2014
Reusing small objects when possible is a major speedup under certain
conditions, since it is able to avoid the malloc/free pattern that
otherwise is performed for every argument in the client command vector.
Replace the three calls to the Lua API lua_tostring, lua_strlen, and lua_isstring with a single call to lua_tolstring.
~ 5% consistent speed gain measured.
Calling lua_gc() after every script execution is too expensive, and
apparently does not make the execution smoother: the same peak latency
was measured before and after the commit.
This change accounts for scripts execution speedup in the order of 10%.
The function showed up consuming a non trivial amount of time in the
profiler output. After this change benchmarking gives a 6% speed
improvement that can be consistently measured.
When the reply is only contained in the client static output buffer, use
a fast path avoiding the dynamic allocation of an SDS string to
concatenate the client reply objects.
Initially Redis Cluster accepted that after cluster creation all the
nodes were at configEpoch 0, evolving from zero as failovers happen.
However later the semantic was made more strict in order to make sure a
cluster has always all the master nodes with a different configEpoch,
which is more robust in some corner cases (especially resulting from
errors by the system administrator).
To assign different configEpochs to different nodes at startup was a
task performed naturally by the config conflicts resolution algorithm
(see the Cluster specification). However this works well only for small
clusters or when there are actually just a few collisions, since it is
designed for exceptional cases.
When a large cluster is created, hundreds of nodes can be at epoch 0, so the conflict resolution code is slow to provide a unique config to each node. For this reason this new command was introduced. It can be called only when a node is totally fresh: no other nodes known, and configEpoch set to zero, so it is safe even against misuse.
redis-trib will use the new command in order to start the cluster
already setting an incremental unique config to every node.
This commit adds peer ID caching in the client structure plus an API
change and the use of sdsMakeRoomFor() in order to improve the
reallocation pattern to generate the CLIENT LIST output.
Both the changes account for a very significant speedup.
This commit also fixes a bug in the implementation of sdscatfmt()
resulting from stale references to the SDS string header after
sdsMakeRoomFor() calls.
sdscatprintf() relies on printf() family libc functions and is sometimes
too slow in critical code paths. sdscatfmt() is an alternative which is:
1) Far less capable.
2) Format specifier incompatible.
3) Faster.
It is suitable to be used in those speed critical code paths such as
CLIENT LIST output generation.
When we are blocked and a few events are processed from time to time, it
is smarter to call the event handler a few times in order to handle the
accept, read, write, close cycle of a client in a single pass, otherwise
there is too much latency added for clients to receive a reply while the
server is busy in some way (for example during the DB loading).
When the listening sockets readable event is fired, we have the chance
to accept multiple clients instead of accepting a single one. This makes
Redis more responsive when there is a mass-connect event (for example
after the server startup), and in workloads where a connect-disconnect
pattern is used often, so that multiple clients are waiting to be
accepted continuously.
As a side effect, this commit makes the LOADING, BUSY, and similar
errors much faster to deliver to the client, making Redis more responsive when it has to return errors to inform the clients that the server is blocked in a non-interruptible operation.
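A hedged sketch of accepting more than one connection per readable event (the per-call cap and helper name are illustrative, not the actual networking.c handler):

    #include <errno.h>
    #include <sys/socket.h>

    #define MAX_ACCEPTS_PER_CALL 1000

    /* Called when the listening socket is readable: accept as many
     * pending connections as possible, up to a per-call cap, instead of
     * just one. */
    void accept_handler(int listen_fd, void (*on_client)(int fd)) {
        for (int accepted = 0; accepted < MAX_ACCEPTS_PER_CALL; accepted++) {
            int cfd = accept(listen_fd, NULL, NULL);
            if (cfd == -1) {
                if (errno == EINTR) continue;  /* interrupted: retry */
                return; /* EAGAIN/EWOULDBLOCK or a real error: stop for now */
            }
            on_client(cfd);
        }
    }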
We should return REDIS_ERR to signal we can't read the configuration because there is no config file only after checking errno, otherwise we risk rewriting an existing file that was not accessible for some other reason.
This was a common source of problems among users.
The solution adopted is not bullet-proof as if the user deletes the
nodes.conf file manually, and starts a new instance with the same
nodes.conf file path, two instances will use the same file. However
following this reasoning the user may drop a nuclear bomb into the
datacenter as well.
I happen to be working on a system that lacks urandom. While the code does try
to handle this case and artificially create some bytes if the file pointer is
empty, it does try to close it unconditionally, leading to a segfault.
When we set a protocol error we should return with REDIS_ERR to let the
caller know it should stop processing the client.
Bug found in a code auditing related to issue #1699.
The internal HLL raw encoding used by PFCOUNT when merging multiple keys is aligned to 8 bits (1 byte per register) so we can exploit this to improve performance by processing multiple bytes per iteration.
In benchmarks the new code was several times faster with HLLs with many
registers set to zero, while no slowdown was observed with populated
HLLs.
When the register is set to zero, we need to add 2^-0 to E, which is 1,
but it is faster to just add 'ez' at the end, which is the number of
registers set to zero, a value we need to compute anyway.
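A hedged sketch of the estimator loop with that optimization (assuming the 1-byte-per-register raw encoding mentioned above; this is not the exact hllCount() code):

    #include <stdint.h>

    /* Sum 2^-reg over all registers. Registers equal to zero contribute
     * 2^-0 == 1 each, so we count them in 'ez' and add it once at the
     * end, which also gives us the zero-register count needed elsewhere. */
    double harmonic_sum(const uint8_t *regs, int nregs, int *zero_regs) {
        double E = 0;
        int ez = 0;
        for (int j = 0; j < nregs; j++) {
            if (regs[j] == 0) ez++;
            else E += 1.0 / (double)((uint64_t)1 << regs[j]);
        }
        E += ez;
        *zero_regs = ez;
        return E;
    }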
Given that the code was written with a 2-year pause... something strange happened in the middle. So there was no function to free the lex range min/max objects, and in some places the range was passed by value.
After running a few benchmarks, 3000 looks like a reasonable value to
keep HLLs with a few thousand elements small while the CPU cost is
still not huge.
This covers all the cases where the dense representation would use N orders of magnitude more space, like in the case of many HLLs with a cardinality of a few tens or hundreds.
It is not impossible that in the future this gets user configurable, however it is easy to pick an unreasonable value just looking at savings in the space dimension without checking what happens in the time dimension.
The bulk length for registers was emitted too early, so if there was a bug the reply looked like a long array with just one element, blocking the client as a result.
The function checks if all the HLL_REGISTERS were processed during the conversion from sparse to dense encoding, returning REDIS_OK or REDIS_ERR to signal a corruption problem.
A bug in PFDEBUG GETREG was fixed: when the object is converted to the
dense representation we need to reassign the new pointer to the header
structure pointer.
Provides a human readable description of the opcodes composing a
run-length encoded HLL (sparse encoding).
The command is only useful for debugging / development tasks.
The new API takes directly the object doing everything needed to
turn it into a dense representation, including setting the new
representation as object->ptr.
Code never tested, but the basic layout is shaped in this commit.
Also missing:
1) Sparse -> Dense conversion function.
2) New HLL object creation using the sparse representation.
3) Implementation of PFMERGE for the sparse representation.
Metadata are now placed at the start of the representation as a header.
There is a proper structure to access the representation.
Still work to do in order to truly abstract the implementation from the
representation, commands still work assuming dense representation.
After running a few simulations with different alternative encodings,
it was found that the VAL opcode performs better using 5 bits for the
value and 2 bits for the run length, at least for cardinalities in the
range of interest.
adjustOpenFilesLimit() and clusterUpdateSlotsWithConfig() were assuming uint64_t is the same as unsigned long long, which is probably true for all the systems we target, but GCC still emitted a warning since technically they are two different types.
This will be a no-op most of the time since the object will be unshared / decoded, however it is more technically correct to start this way since the object may be decoded even in the read-only code path.
Using a seed of zero has the side effect of having the empty string
hashing to what is a very special case in the context of HyperLogLog: a
very long run of zeroes.
This did not influence the correctness of the result with 16k registers because of the harmonic mean, but still it is inconvenient that such an obvious value maps to such a special hash.
The seed 0xadc83b19 is used instead, which is the first 64 bits of the
SHA1 of the empty string.
Reference: issue #1657.
We need to guarantee that the last bit is 1, otherwise an element may
hash to just zeroes with probability 1/(2^64) and trigger an infinite
loop.
See issue #1657.
This will allow future changes like compressed representations.
Currently the magic is not checked for performance reasons but this may
change in the future, for example if we add new types encoded in strings
that may have the same size of HyperLogLogs.
Better results can be achieved by compensating for the bias of the raw
approximation just after 2.5m (when LINEARCOUNTING is no longer used) by
using a polynomial that approximates the bias at a given cardinality.
The curve used was found using this web page:
http://www.xuru.org/rt/PR.asp
That performs polynomial regression given a set of values.
The following form is given:
HLLADD myhll
No element is provided in the above case, so if the 'myhll' key does not exist the result is just to create an empty HLL structure, and no update will be performed on the registers.
In this case, the DB should still be set dirty and the command
propagated.
The HyperLogLog original paper suggests using LINEARCOUNTING for
cardinalities < 2.5m, however for P=14 the median / max error
curves show that a value of '3' is the best pick for m = 16384.
The more elements we add to a HyperLogLog counter, the smaller the probability that we actually update some register.
From this observation it is easy to see how it is possible to use
caching of a previously computed cardinality and reuse it to serve
HLLCOUNT queries as long as no register was updated in the data
structure.
This commit does exactly this by using just 8 additional bytes in the data structure to store the cached cardinality as a 64-bit unsigned integer. When the most significant bit of the 64-bit integer is set, it means that the computed value is no longer usable since at least a single register was modified, and we need to recompute it at the next call of HLLCOUNT.
The value is always stored in little endian format regardless of the actual CPU endianness.
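A hedged sketch of the caching scheme (the field layout and helper names are illustrative, not the real hllhdr code):

    #include <stdint.h>

    /* The cached cardinality lives in 8 bytes, stored little endian so
     * the serialized representation is portable. The MSB of the 64-bit
     * value is used as the "cache invalid" flag. */
    void cache_set(uint8_t card[8], uint64_t value) {
        for (int i = 0; i < 8; i++) card[i] = (value >> (8 * i)) & 0xff;
    }

    uint64_t cache_get(const uint8_t card[8]) {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= (uint64_t)card[i] << (8 * i);
        return v;
    }

    void cache_invalidate(uint8_t card[8]) { card[7] |= 0x80; } /* set MSB */
    int  cache_is_valid(const uint8_t card[8]) { return (card[7] & 0x80) == 0; }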
All the Redis functions that need to modify the string value of a key in a destructive way (APPEND, SETBIT, SETRANGE, ...) require making the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW).
This was cut & pasted many times in multiple places of the code. This
commit puts the small logic needed into a function called
dbUnshareStringValue().
We need to be sure that you can save a dataset in a Redis instance,
reload it in a different architecture, and continue to count in the same
HyperLogLog structure.
So 32 and 64 bit, little or big endian, must all guarantee to output the
same hash for the same element.
The new representation is more obvious, starting from the LSB of the
first byte and using bits going to MSB, and passing to next byte as
needed.
There was also a subtle error: the first two bits were unused and
everything was shifted right by two bits, which only worked because the
code always requires an extra byte at the end.
During the rewrite the code was made safer trying to avoid undefined
behavior due to shifting an uint8_t for more than 8 bits.
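For illustration, reading register 'regnum' out of such an array looks
roughly like this (a sketch; the real code uses macros, and the read of
the following byte is safe only thanks to the extra trailing byte
mentioned above):

int hllGetRegister(const uint8_t *regs, int regnum) {
    unsigned long b = (unsigned long)regnum * 6;  /* absolute first bit */
    unsigned long byte = b / 8;                   /* byte holding that bit */
    unsigned long fb = b % 8;                     /* bit offset inside it */
    /* The 6 bits may span two bytes: take the high part of regs[byte]
     * and the low part of regs[byte+1], then mask to 6 bits. Promoting
     * to unsigned long avoids undefined shifts on uint8_t. */
    unsigned long b0 = regs[byte];
    unsigned long b1 = regs[byte+1];
    return (int)(((b0 >> fb) | (b1 << (8 - fb))) & 63);
}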
To test the bitfield array of counters set/get macros from the Redis Tcl
suite is hard, so a specialized command that is able to test the
internals was developed.
This is not really an error but something that always happens, for
example when creating a new cluster, or if the sysadmin manually rejoins
a node that is already known.
Since useless logs don't help, the message was moved to the VERBOSE level.
New config epochs must always be obtained incrementing the currentEpoch,
that is itself guaranteed to be >= the max configEpoch currently known
to the node.
redis-trib used to allocate slots not considering fractions of nodes
when computing the slots_per_node amount. So the fractional part was
carried over till the end of the allocation, where the last node
received a few more slots than any other (or a lot more if the cluster
was composed of many nodes).
The computation was changed to allocate slots more evenly when they are
not exactly divisible by the number of masters we have.
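The arithmetic, shown here as a standalone C sketch rather than the
actual Ruby code in redis-trib:

#include <stdio.h>

int main(void) {
    const int total = 16384, masters = 5;
    for (int i = 0; i < masters; i++) {
        /* Scale before dividing so the remainder is spread across all
         * nodes instead of piling up on the last one. */
        int first = i * total / masters;
        int last = (i + 1) * total / masters - 1;
        printf("master %d: slots %d-%d (%d slots)\n",
               i, first, last, last - first + 1);
    }
    return 0;
}

With 5 masters this yields ranges of 3276 or 3277 slots each instead of
four nodes with 3276 and a last node with 3280.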
The slave election in Redis Cluster guarantees that slaves promoted to
masters always end with unique config epochs, however failures during
manual reshardings, software bugs and operational errors may in theory
cause two nodes to have the same configEpoch.
This commit introduces a mechanism to eventually always end with different
configEpochs if a collision ever happens.
As a (wanted) side effect, this also ensures that after a new cluster
is created, all nodes will end with a different configEpoch automatically.
Bug found by the continuous integration test running Redis
with valgrind:
==6245== Invalid read of size 8
==6245== at 0x4C2DEEF: memcpy@GLIBC_2.2.5 (mc_replace_strmem.c:876)
==6245== by 0x41F9E6: freeMemoryIfNeeded (redis.c:3010)
==6245== by 0x41D2CC: processCommand (redis.c:2069)
memmove() size argument was accounting for an extra element, going
outside the bounds of the array.
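In generic terms (names illustrative, not the literal redis.c lines):
when removing entry k from an array of n elements, the number of elements
to shift down is n-k-1; using n-k reads one element past the end, which
is exactly the invalid read valgrind reported.

/* Shift the entries after slot k one position to the left. */
memmove(pool+k, pool+k+1, sizeof(pool[0]) * (n-k-1));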
In this commit:
* Decrement steps are semantically differentiated from the reserved FDs.
Previously both values were 32 but the meaning was different.
* Make it clear that we save setrlimit errno.
* Don't explicitly handle wrapping of 'f', but prevent it from
happening.
* Add comments to make the function flow more readable.
This integrates PR #1630
In sentinelFlushConfig() fd could be undefined when the following if
statement was true:
if (rewrite_status == -1) goto werr;
This could cause random file descriptors to get closed.
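A simplified sketch of the fix; the actual function does more work
between these steps and the log message is illustrative:

void sentinelFlushConfig(void) {
    int fd = -1;   /* So the error path knows whether open() succeeded. */

    if (rewriteConfig(server.configfile) == -1) goto werr;
    if ((fd = open(server.configfile, O_RDONLY)) == -1) goto werr;
    if (fsync(fd) == -1) goto werr;
    close(fd);
    return;

werr:
    if (fd != -1) close(fd);   /* Never close a descriptor we never opened. */
    redisLog(REDIS_WARNING,
        "WARNING: Sentinel was not able to save the new configuration on disk!");
}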
Previously, the (!fp) would only catch lack of free space
under OS X. Linux waits to discover it can't write until
it actually writes contents to disk.
(fwrite() returns success even if the underlying file
has no free space to write into. All the errors
only show up at flush/sync/close time.)
Fixes antirez/redis#1604
Also update the original REDIS_EVENTLOOP_FDSET_INCR to
include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR
exists to make sure more than (maxclients+RESERVED) entries
are allocated, but we can only guarantee that if we include
the current value of REDIS_MIN_RESERVED_FDS as a minimum
for the INCR size.
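Expressed as code, the relation described above is something like the
following (a sketch consistent with the text, not necessarily the literal
redis.h lines):

#define REDIS_MIN_RESERVED_FDS 32
/* Reserve the minimum on top of whatever extra the event loop wants, so
 * maxclients + REDIS_EVENTLOOP_FDSET_INCR always covers the descriptors
 * Redis needs for its own use. */
#define REDIS_EVENTLOOP_FDSET_INCR (REDIS_MIN_RESERVED_FDS+96)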
This got removed in 2e5c394 during a new feature addition.
The prior commit had "break if masters.length == masters_count"
but we are guaranteed to already have that condition met, since
otherwise we wouldn't have gotten this far.
Without this break statement, it's possible some masters may
be forgotten and have zero replicas while other masters have
more than their requested number of replicas.
Thanks to carlos for pointing out this regression at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
This bug was introduced in 2e5c394f during a refactor.
It took me a while to understand what was going on with
the code, so I've refactored it further by:
- Replacing boolean values with meaningful symbols
- Replacing 'i' with a meaningful variable name
- Adding the proper abort check
- Factoring out now duplicated conditionals
- Adding optional verbose logging (we're inside *four*
different looping constructs, so it takes a while to
figure out where all the moving parts are)
- Updating comment for the section
This fixes a problem when the number of master instances
equaled the number of replica instances. Before, when
there were equal numbers of both, nodes_count would go to
zero, but the while loop would spin in i < @replicas because
i would never be updated (because the nodes_list of each ip
was length == 0, which triggered an endless loop of
next -> i = 0 -> 0 < 1? -> true -> next -> i = 0 ...)
Thanks to carlo who found this problem at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
Fun fact: rlim_t is an unsigned long long on all platforms.
Continually subtracting from a rlim_t makes it get smaller
and smaller until it wraps, then you're up to 2^64-1.
This was causing an infinite loop on Redis startup if
your ulimit was extremely (almost comically) low.
The loop condition (f > oldlimit) would never become false in a case like:
f = 150
while (f > 20) f -= 128
Since f is unsigned, it can't go negative and would
take on values of:
Iteration 1: 150 - 128 => 22
Iteration 2: 22 - 128 => 18446744073709551510
Iterations 3-∞: ...
To catch the wraparound, we use the previous value of f
stored in limit.rlim_cur. If we subtract from f and
get a larger number than the value it had previously,
we print an error and exit, since we don't have enough
file descriptors to help the user at this point.
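A sketch of the guarded loop, simplified from adjustOpenFilesLimit();
'maxfiles' is the limit we would like and 'oldlimit' the current one, and
the decrement step and message are illustrative:

struct rlimit limit;
rlim_t f = maxfiles;
while (f > oldlimit) {
    limit.rlim_cur = f;
    limit.rlim_max = f;
    if (setrlimit(RLIMIT_NOFILE, &limit) != -1) break;
    f -= REDIS_EVENTLOOP_FDSET_INCR;
    /* rlim_t is unsigned: if f "grew" after the subtraction it wrapped
     * around, so there is no usable limit we can obtain. */
    if (f > limit.rlim_cur) {
        redisLog(REDIS_WARNING,
            "Your open file limit is too low and could not be adjusted. Exiting.");
        exit(1);
    }
}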
Thanks to @bs3g for the inspiration to fix this problem.
Patches existed from @bs3g at antirez#1227, but I needed to repair a few other
parts of Redis simultaneously, so I didn't get a chance to use them.
The log messages about open file limits have always
been slightly opaque and confusing. Here's an attempt to
fix their wording, detail, and meaning. Users will have a
better understanding of how to fix very common problems
with these reworded messages.
Also, we handle a new error case when maxclients becomes less
than one, essentially rendering the server unusable. We
now exit on startup instead of leaving the user with a server
unable to handle any connections.
This fixes antirez#356 as well.
32 was the additional number of file descriptors Redis
would reserve when managing a too-low ulimit. The
number 32 was hardcoded in too many places, so now
we use a more appropriately named macro instead.
When Redis sets up the server event loop, it uses:
server.maxclients+REDIS_EVENTLOOP_FDSET_INCR
So, when reserving file descriptors, it makes sense to
reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead
of only 32. Currently, REDIS_EVENTLOOP_FDSET_INCR is
set to 128 in redis.h.
Also, I replaced the static 128 in the while f < old loop
with REDIS_EVENTLOOP_FDSET_INCR as well, which results
in no change since it was already 128.
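For reference, the event loop sizing this refers to, as it appears
conceptually in server setup (treat the exact line as a sketch):

server.el = aeCreateEventLoop(server.maxclients+REDIS_EVENTLOOP_FDSET_INCR);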
Impact: Users now need at least maxclients+128 as
their open file limit instead of maxclients+32 to obtain
actual "maxclients" number of clients. Redis will carve
the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it
needs out of the "maxclients" range instead of failing
to start (unless the local ulimit -n is too low to accommodate
the request).
Everywhere in the Redis code base, maxclients is treated
as an int with (int)maxclients or `maxclients = atoi(source)`,
so let's make maxclients an int.
This fixes a bug where someone could specify a negative maxclients
on startup and it would work (as well as set maxclients very high)
because:
unsigned int maxclients;
char *update = "-300";
maxclients = atoi(update);
if (maxclients < 1) goto fail;
But, (maxclients < 1) can only catch the case when maxclients
is exactly 0. maxclients happily sets itself to -300, which isn't
-300, but rather 4294966996, which isn't < 1, so... everything
"worked."
maxclients config parsing checks for the case of < 1, but maxclients
CONFIG SET parsing was checking for case of < 0 (allowing
maxclients to be set to 0). CONFIG SET parsing is now updated to
match config parsing of < 1.
It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.
These changes were inspired by antirez#356, but this doesn't
fix that issue.
Obtaining the RSS (Resident Set Size) info is slow in Linux and OSX.
This slowed down the generation of the INFO 'memory' section.
Since the RSS does not need to be a real-time measurement, we
now sample it with server.hz frequency (10 times per second by default)
and use this value both to show the INFO rss field and to compute the
fragmentation ratio.
Practically this does not make any difference for memory profiling of
Redis but speeds up the INFO call significantly.
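The sampling itself reduces to something like this inside serverCron(),
which already runs server.hz times per second (a sketch):

/* Sample the RSS here instead of on every INFO call. */
server.resident_set_size = zmalloc_get_rss();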
For small content the function now tries to use a static buffer to avoid
a malloc/free cycle that is too costly when the function is used in
performance critical code paths such as INFO output generation.
This change was verified to have positive effects in the execution speed
of the INFO command.
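The pattern is the classic stack-buffer fast path (a sketch, not the
literal function):

char staticbuf[1024], *buf = staticbuf;
size_t buflen = sizeof(staticbuf);

if (needed > buflen) {
    buf = zmalloc(needed);   /* fall back to the heap for large content */
    buflen = needed;
}
/* ... format the output into buf ... */
if (buf != staticbuf) zfree(buf);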
Uname was profiled to be a slow syscall. It always produces the same
output in the context of a single execution of Redis, so calling it at
every INFO output generation does not make much sense.
The uname utsname structure is now a static variable. At the same time a
static integer was added to track whether uname still needs to be called
the first time.
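The caching is as simple as it sounds (a sketch close to the code
described, living inside the INFO generation function):

static int call_uname = 1;
static struct utsname name;

if (call_uname) {
    /* Uname is slow and its output never changes for the lifetime of
     * the process, so call it only once. */
    uname(&name);
    call_uname = 0;
}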
sdscatvprintf() uses a loop where it tries to output the formatted
string in a buffer of the initial length, if there was not enough room,
a buffer of doubled size is tried and so forth.
The initial guess for the buffer length was very poor: a hardcoded
"16". This caused the printf to be processed multiple times without a
good reason. Given that printf functions are already not fast, the
overhead was significant.
The new heuristic is to use a buffer 4 times the length of the format
buffer, and 32 as minimal size. This appears to be a good balance for
typical uses of the function inside the Redis code base.
This change improved INFO command performance by a factor of 3.
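The guess described above amounts to the following (a sketch; the literal
sds.c code may differ slightly). If the formatted output still does not
fit, the existing doubling loop takes over as before:

/* Start from a buffer proportional to the format string instead of a
 * fixed tiny size, so most calls succeed on the first attempt. */
size_t buflen = strlen(fmt)*4;
if (buflen < 32) buflen = 32;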
This is safer, as by default maxmemory should just set a memory limit
without any key being deleted, unless the policy is set to something
more relaxed.
There were 2 spare bits inside the Redis object structure that are now
used in order to enlarge 4x the range of the LRU field.
At the same time the resolution was improved from 10 seconds to 1 second:
this still provides 194 days before the LRU counter overflows (restarting
from zero).
This is not a problem since it only causes lack of eviction precision for
objects not touched for a very long time, and the lack of precision is
only temporary.
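In terms of the object header the change looks roughly like this (a
sketch of the layout; comments are mine):

#define REDIS_LRU_BITS 24   /* previously 22 bits plus 2 unused bits */

typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:REDIS_LRU_BITS;  /* LRU time at 1 second resolution:
                                     2^24 seconds =~ 194 days before wrap */
    int refcount;
    void *ptr;
} robj;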
This new function is useful to get a number of random entries from a
hash table when we just need to do some sampling without requiring a
particularly good distribution.
It just jumps to a random place in the hash table and returns the first
N items encountered by scanning linearly.
The main usefulness of this function is to speedup Redis internal
sampling of the key space, for example for key eviction or expiry.
This is an improvement over the previous eviction algorithm: we now use
an eviction pool that is persistent across key evictions and gets
populated with the best eviction candidates found so far.
It allows approximating LRU eviction, for a given number of samples,
better than the previous algorithm did.
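The pool is a small fixed-size array of the best candidates seen so far,
roughly as follows (a sketch of the structure described):

#define REDIS_EVICTION_POOL_SIZE 16

struct evictionPoolEntry {
    unsigned long long idle;   /* Estimated idle time of the key. */
    sds key;                   /* Key name, or NULL for an empty slot. */
};
/* Kept ordered by idle time; sampled keys are inserted in place and the
 * entry with the highest idle time is evicted first. */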
For testing purposes it is handy to have a very high resolution of the
LRU clock, so that scripts running for just a few seconds can be used to
experiment with how the eviction algorithm works.
This commit allows Redis to use the cached LRU clock, or a value
computed on demand, depending on the resolution. So normally we have the
good performance of a precomputed value, and a clock that wraps in many
days using the normal resolution, but if needed, changing a define will
switch behavior to a high resolution LRU clock.
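A sketch of the mechanism; the constant values and helper names below
approximate the code described above rather than quoting it:

#define REDIS_LRU_CLOCK_MAX ((1<<REDIS_LRU_BITS)-1)   /* 24 bit clock */
#define REDIS_LRU_CLOCK_RESOLUTION 1000               /* resolution in ms */

unsigned int getLRUClock(void) {
    return (mstime()/REDIS_LRU_CLOCK_RESOLUTION) & REDIS_LRU_CLOCK_MAX;
}

/* Use the clock cached by serverCron() when it is refreshed often enough
 * for the configured resolution, otherwise compute it on demand. */
#define LRU_CLOCK() ((1000/server.hz <= REDIS_LRU_CLOCK_RESOLUTION) ? \
                     server.lruclock : getLRUClock())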
GCC-4.9 warned about this, but clang didn't.
This commit fixes the following warning:
sentinel.c: In function 'sentinelReceiveHelloMessages':
sentinel.c:2156:43: warning: variable 'master' set but not used [-Wunused-but-set-variable]
sentinelRedisInstance *ri = c->data, *master;
Test sentinel.tilt condition on top and return if it is true.
This allows removing the check for the tilt condition from the remaining
code paths of the function.
Failure detection in Sentinel is ping-pong based. It used to work by
remembering the last time a valid PONG reply was received, and checking
if the reception time was too old compared to the current time.
PINGs were sent at a fixed interval of 1 second.
This works in a decent way, but does not scale well when we want to set
very small values of "down-after-milliseconds" (this is the node
timeout basically).
This commit reimplements the failure detection, making a number of
changes. Some changes are inspired by the Redis Cluster failure detection
code:
* A new last_ping_time field is added to the representation of instances.
If non zero, we have an active ping that was sent at the specified
time. When a valid reply to ping is received, the field is zeroed
again.
* last_ping_time is not reset when we reconnect the link or send a new
ping, so from our point of view it represents the time we started
waiting for the instance to reply to our pings without receiving a
reply.
* last_ping_time is now used in order to check if the instance is
timed out. This means that we can have a node timeout of 100
milliseconds and yet the system will work well since the new check is
not bound to the period used to send pings.
* Pings are now sent every second, or more often if the value of
down-after-milliseconds is less than one second, with the ping frequency
capped at 10 Hz (at most one ping every 100 milliseconds).
* Link reconnection code was improved. This is used in order to try to
reconnect the link when we are at 50% of the node timeout without a
valid reply received yet. However the old code triggered unnecessary
reconnections when the node timeout was very small. Now that should be
ok.
The new code passes the tests, but more testing is needed, as well as
more unit tests stressing the failure detector, so currently this is
merged only into the unstable branch.
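A sketch of the timeout test that last_ping_time enables; names and
details are approximate and not the literal sentinel.c code:

mstime_t elapsed = 0;

if (ri->last_ping_time)
    elapsed = mstime() - ri->last_ping_time;  /* waiting for a reply since then */

/* The instance is considered subjectively down if we have been waiting
 * for a PONG for longer than down-after-milliseconds, regardless of how
 * often pings are actually sent. */
if (elapsed > ri->down_after_period)
    ri->flags |= SRI_S_DOWN;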
Sentinel's main safety argument is that there are no two configurations
for the same master with the same version (configuration epoch).
For this to be true, Sentinels must be authorized by a majority.
Additionally Sentinels are required to do two important things:
* Never vote again for the same epoch.
* Never exchange an old vote for a fresh one.
The first prerequisite, in a crash-recovery system model, requires
persisting the master->leader_epoch on durable storage before replying to
messages. This was not the case.
We also make sure to persist the current epoch in order to never reply
to stale votes requests from other Sentinels, after a recovery.
The configuration is persisted by making use of fsync(). In the context
of this code this is considered a good enough guarantee that after a
restart our durable state is restored; however this may not always be the
case depending on the kind of hardware and operating system used.
Now the way HELLO messages are received is unified.
It is no longer needed for Sentinels to converge to the highest
configuration for a master in order to be able to chat via some Redis
instance: they are able to directly exchange configurations.
Note that this commit does not include the (trivial) change needed to
send HELLO messages to Sentinel instances as well, since by mistake I
committed that change in the previous commit, the one that refactored
hello message processing into a separate function.