Testing the bitfield array-of-counters set/get macros from the Redis
Tcl suite is hard, so a specialized command able to exercise the
internals was developed.
This is not really an error but something that routinely happens, for
example when creating a new cluster, or when the sysadmin manually
rejoins a node that is already known.
Since useless logs don't help, the message was moved to the VERBOSE level.
New config epochs must always be obtained by incrementing the
currentEpoch, which is itself guaranteed to be >= the max configEpoch
currently known to the node.
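A minimal C sketch of this rule (helper and variable names are
illustrative, not the actual cluster code):

    #include <stdint.h>

    static uint64_t currentEpoch = 0;

    /* Raise currentEpoch to the max configEpoch we know about, then
     * increment it, so the new epoch is strictly greater than anything
     * previously seen by this node. */
    uint64_t new_config_epoch(uint64_t max_known_config_epoch) {
        if (currentEpoch < max_known_config_epoch)
            currentEpoch = max_known_config_epoch;
        return ++currentEpoch;
    }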
redis-trib used to allocate slots without considering the fractional
remainder when computing the slots_per_node amount. The fractional part
was therefore carried over to the end of the allocation, where the last
node received a few more slots than any other (or a lot more if the
cluster was composed of many nodes).
The computation was changed to allocate slots more evenly when the
total is not exactly divisible by the number of masters we have.
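A minimal C sketch of the even-allocation idea (redis-trib itself is
Ruby; the node count here is just an example):

    #include <stdio.h>

    #define CLUSTER_SLOTS 16384

    int main(void) {
        int masters = 5;                     /* example cluster size */
        int base = CLUSTER_SLOTS / masters;  /* slots every node gets */
        int extra = CLUSTER_SLOTS % masters; /* remainder to spread */
        int first = 0;
        for (int i = 0; i < masters; i++) {
            /* The first `extra` nodes each absorb one remainder slot,
             * so no single node ends up with the whole fractional part. */
            int count = base + (i < extra ? 1 : 0);
            printf("node %d: slots %d-%d (%d slots)\n",
                   i, first, first + count - 1, count);
            first += count;
        }
        return 0;
    }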
The slave election in Redis Cluster guarantees that slaves promoted to
masters always end up with unique config epochs; however, failures
during manual reshardings, software bugs, and operational errors may in
theory cause two nodes to have the same configEpoch.
This commit introduces a mechanism to eventually always end up with
different configEpochs if a collision ever happens.
As a (wanted) side effect, this also ensures that after a new cluster
is created, all nodes automatically end up with a different configEpoch.
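A rough C reconstruction of such a collision handler, assuming the tie
is broken by comparing Node IDs (names and shape are illustrative, not
the verbatim cluster code):

    #include <stdint.h>
    #include <string.h>

    #define NODE_NAMELEN 40 /* hex Node ID length in Redis Cluster */

    static uint64_t currentEpoch;

    void handle_epoch_collision(const char *sender_id, uint64_t sender_epoch,
                                const char *my_id, uint64_t *my_epoch) {
        if (sender_epoch != *my_epoch) return;                   /* no collision */
        if (memcmp(sender_id, my_id, NODE_NAMELEN) <= 0) return; /* peer resolves it */
        *my_epoch = ++currentEpoch; /* adopt a fresh, unique epoch */
    }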
Bug found by the continuous integration test running Redis
with valgrind:
==6245== Invalid read of size 8
==6245== at 0x4C2DEEF: memcpy@GLIBC_2.2.5 (mc_replace_strmem.c:876)
==6245== by 0x41F9E6: freeMemoryIfNeeded (redis.c:3010)
==6245== by 0x41D2CC: processCommand (redis.c:2069)
The memmove() size argument accounted for one extra element, reading
outside the bounds of the array.
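A generic sketch of the correct bound when removing element i from an
array of len elements:

    #include <string.h>

    /* Removing element i must move the len-i-1 trailing elements;
     * counting one more makes memmove() read past the end of the
     * array, which is exactly the invalid read valgrind reported. */
    void array_remove(long *array, size_t len, size_t i) {
        memmove(array + i, array + i + 1, sizeof(long) * (len - i - 1));
    }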
In this commit:
* Decrement steps are semantically differentiated from the reserved FDs.
Previously both values were 32 but the meaning was different.
* Make it clear that we save setrlimit errno.
* Don't explicitly handle wrapping of 'f', but prevent it from
happening.
* Add comments to make the function flow more readable.
This integrates PR #1630
In sentinelFlushConfig(), fd could be uninitialized when the following
if statement was true:
if (rewrite_status == -1) goto werr;
This could cause random file descriptors to get closed.
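A sketch of the fix shape (assumed, not the verbatim Sentinel code):
initialize fd to -1 and guard the cleanup path:

    #include <fcntl.h>
    #include <unistd.h>

    int flush_config(const char *path, int rewrite_status) {
        int fd = -1; /* so the error path below is always safe */
        if (rewrite_status == -1) goto werr;
        fd = open(path, O_WRONLY);
        if (fd == -1) goto werr;
        if (fsync(fd) == -1) goto werr;
        close(fd);
        return 0;
    werr:
        if (fd != -1) close(fd); /* never close a garbage descriptor */
        return -1;
    }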
Previously, the (!fp) check would only catch lack of free space
under OS X. Linux waits to discover it can't write until
it actually writes contents to disk.
(fwrite() returns success even if the underlying file
has no free space to write into. All the errors
only show up at flush/sync/close time.)
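A generic sketch of the portable pattern, checking every stage where
the error can surface:

    #include <stdio.h>

    int write_all_and_close(FILE *fp, const void *buf, size_t len) {
        if (fwrite(buf, 1, len, fp) != len) goto werr; /* short write */
        if (fflush(fp) == EOF) goto werr;              /* e.g. ENOSPC on Linux */
        if (fclose(fp) == EOF) return -1;              /* close may flush too */
        return 0;
    werr:
        fclose(fp);
        return -1;
    }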
Fixes antirez/redis#1604
Also update the original REDIS_EVENTLOOP_FDSET_INCR to
include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR
exists to make sure more than (maxclients+RESERVED) entries
are allocated, but we can only guarantee that if we include
the current value of REDIS_MIN_RESERVED_FDS as a minimum
for the INCR size.
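The resulting definitions read roughly like this (32 reserved FDs and
128 total, per the commits above):

    #define REDIS_MIN_RESERVED_FDS 32

    /* Guarantee more than (maxclients+RESERVED) event loop entries by
     * folding the reserved FDs into the increment itself. */
    #define REDIS_EVENTLOOP_FDSET_INCR (REDIS_MIN_RESERVED_FDS+96)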
This got removed in 2e5c394 during a new feature addition.
The prior commit had "break if masters.length == masters_count",
but we are guaranteed to already have that condition met, since
otherwise we wouldn't have gotten this far.
Without this break statement, it's possible some masters may
be forgotten and have zero replicas while other masters have
more than their requested number of replicas.
Thanks to carlos for pointing out this regression at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
This bug was introduced in 2e5c394f during a refactor.
It took me a while to understand what was going on with
the code, so I've refactored it further by:
- Replacing boolean values with meaningful symbols
- Replacing 'i' with a meaningful variable name
- Adding the proper abort check
- Factoring out now duplicated conditionals
- Adding optional verbose logging (we're inside *four*
different looping constructs, so it takes a while to
figure out where all the moving parts are)
- Updating comment for the section
This fixes a problem when the number of master instances
equaled the number of replica instances. Before, when
there were equal numbers of both, nodes_count would go to
zero, but the while loop would keep spinning on i < @replicas
because i was never updated (the nodes_list of each ip
had length == 0, which triggered an endless loop of:
next -> i = 0 -> 0 < 1? -> true -> next -> i = 0 ...)
Thanks to carlo who found this problem at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
Fun fact: rlim_t is an unsigned long long on all platforms.
Continually subtracting from a rlim_t makes it get smaller
and smaller until it wraps, then you're up to 2^64-1.
This was causing an infinite loop on Redis startup if
your ulimit was extremely (almost comically) low.
The loop condition (f > oldlimit) would never become false in a case like:
f = 150
while (f > 20) f -= 128
Since f is unsigned, it can't go negative and would
take on values of:
Iteration 1: 150 - 128 => 22
Iteration 2: 22 - 128 => 18446744073709551510
Iterations 3-∞: ...
To catch the wraparound, we use the previous value of f
stored in limit.rlimit_cur. If we subtract from f and
get a larger number than the value it had previously,
we print an error and exit since we don't have enough
file descriptors to help the user at this point.
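A sketch of the wrap-safe loop described above (the step size, the
messages, and the use of a separate prev variable are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    #define DECR_STEP 16 /* hypothetical step size for the sketch */

    void adjust_open_files_limit(rlim_t maxfiles, rlim_t oldlimit) {
        struct rlimit limit;
        rlim_t f = maxfiles;
        while (f > oldlimit) {
            limit.rlim_cur = f;
            limit.rlim_max = f;
            if (setrlimit(RLIMIT_NOFILE, &limit) != -1) return; /* raised OK */
            rlim_t prev = f;
            f -= DECR_STEP;
            if (f > prev) { /* unsigned subtraction wrapped around */
                fprintf(stderr, "not enough file descriptors available\n");
                exit(1);
            }
        }
    }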
Thanks to @bs3g for the inspiration to fix this problem.
Patches existed from @bs3g at antirez#1227, but I needed to repair a few other
parts of Redis simultaneously, so I didn't get a chance to use them.
The log messages about open file limits have always
been slightly opaque and confusing. Here's an attempt to
fix their wording, detail, and meaning. Users will have a
better understanding of how to fix very common problems
with these reworded messages.
Also, we handle a new error case when maxclients becomes less
than one, essentially rendering the server unusable. We
now exit on startup instead of leaving the user with a server
unable to handle any connections.
This fixes antirez#356 as well.
32 was the additional number of file descriptors Redis
would reserve when managing a too-low ulimit. The
number 32 was in too many places statically, so now
we use a macro instead that looks more appropriate.
When Redis sets up the server event loop, it uses:
server.maxclients+REDIS_EVENTLOOP_FDSET_INCR
So, when reserving file descriptors, it makes sense to
reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead
of only 32. Currently, REDIS_EVENTLOOP_FDSET_INCR is
set to 128 in redis.h.
Also, I replaced the static 128 in the "while f < old" loop
with REDIS_EVENTLOOP_FDSET_INCR as well, which results
in no change since it was already 128.
Impact: Users now need at least maxclients+128 as
their open file limit instead of maxclients+32 to obtain
actual "maxclients" number of clients. Redis will carve
the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it
needs out of the "maxclients" range instead of failing
to start (unless the local ulimit -n is too low to accommodate
the request).
Everywhere in the Redis code base, maxclients is treated
as an int with (int)maxclients or `maxclients = atoi(source)`,
so let's make maxclients an int.
This fixes a bug where someone could specify a negative maxclients
on startup and it would work (as well as set maxclients very high)
because:
unsigned int maxclients;
char *update = "-300";
maxclients = atoi(update);
if (maxclients < 1) goto fail;
But, (maxclients < 1) can only catch the case when maxclients
is exactly 0. maxclients happily sets itself to -300, which isn't
-300, but rather 4294966996, which isn't < 1, so... everything
"worked."
maxclients config parsing checks for the case of < 1, but maxclients
CONFIG SET parsing was checking for case of < 0 (allowing
maxclients to be set to 0). CONFIG SET parsing is now updated to
match config parsing of < 1.
It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.
These changes were inspired by antirez#356, but this doesn't
fix that issue.
Obtaining the RSS (Resident Set Size) info is slow under Linux and OS X.
This slowed down the generation of the INFO 'memory' section.
Since the RSS does not need to be a real-time measurement, we now
sample it at server.hz frequency (10 times per second by default)
and use this value both to show the INFO rss field and to compute the
fragmentation ratio.
In practice this makes no difference for memory profiling of
Redis but speeds up the INFO call considerably.
For small content the function now tries to use a static buffer to avoid
a malloc/free cycle that is too costly when the function is used in the
context of performance-critical code paths such as INFO output generation.
This change was verified to have positive effects on the execution speed
of the INFO command.
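A sketch of the small-buffer idea, assuming "static buffer" means a
fixed-size local buffer that avoids allocation in the common case:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int emit(const char *s) {
        char staticbuf[1024];          /* fixed-size buffer, no allocation */
        char *buf = staticbuf;
        size_t len = strlen(s) + 1;
        if (len > sizeof(staticbuf)) { /* rare: content too large */
            buf = malloc(len);
            if (buf == NULL) return -1;
        }
        memcpy(buf, s, len);
        fputs(buf, stdout);            /* stand-in for the real consumer */
        if (buf != staticbuf) free(buf);
        return 0;
    }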
uname() was profiled to be a slow syscall. It always produces the same
output in the context of a single execution of Redis, so calling it at
every INFO output generation does not make much sense.
The uname utsname structure is now a static variable, together with a
static integer used to check whether we still need to call uname() the
first time.
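A sketch matching that description:

    #include <sys/utsname.h>

    struct utsname *cached_uname(void) {
        static struct utsname ut;
        static int call_uname = 1;
        if (call_uname) {
            uname(&ut); /* slow syscall, issued only once */
            call_uname = 0;
        }
        return &ut;
    }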
sdscatvprintf() uses a loop where it tries to output the formatted
string into a buffer of an initial length; if there is not enough room,
a buffer of doubled size is tried, and so forth.
The initial guess for the buffer length was very poor: a hardcoded
"16". This caused the printf to be processed multiple times for no
good reason. Given that printf functions are already not fast, the
overhead was significant.
The new heuristic is to use a buffer 4 times the length of the format
string, with a minimum size of 32. This appears to be a good balance
for typical uses of the function inside the Redis code base.
This change improved INFO command performance by 3x.
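A sketch of the sizing heuristic, using vsnprintf's return value rather
than the sds-internal convention:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *format_heuristic(const char *fmt, ...) {
        size_t buflen = strlen(fmt) * 4; /* 4x the format length... */
        if (buflen < 32) buflen = 32;    /* ...but at least 32 bytes */
        for (;;) {
            char *buf = malloc(buflen);
            if (buf == NULL) return NULL;
            va_list ap;
            va_start(ap, fmt);
            int n = vsnprintf(buf, buflen, fmt, ap);
            va_end(ap);
            if (n >= 0 && (size_t)n < buflen) return buf; /* it fit */
            free(buf);
            buflen *= 2; /* rare: guess was still too small, retry */
        }
    }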
This is safer, as by default maxmemory should just set a memory limit
without any keys being deleted, unless the policy is set to something
more relaxed.