Commit Graph

2614 Commits

Author SHA1 Message Date
antirez
402110f9fd User-defined switch point between sparse-dense HLL encodings. 2014-04-15 17:46:51 +02:00
antirez
d541f65d66 PFSELFTEST improved with sparse encoding checks. 2014-04-15 10:10:38 +02:00
antirez
dde8dff73f PFDEBUG ENCODING added. 2014-04-14 19:35:00 +02:00
antirez
54f0156e8c Set HLL_SPARSE_MAX to 3000.
After running a few benchmarks, 3000 looks like a reasonable value to
keep HLLs with a few thousand elements small while the CPU cost is
still not huge.

This covers all the cases where the dense representation would use N
orders of magnitude more space, like in the case of many HLLs with
cardinality of a few tens or hundreds.

It is not impossible that in the future this gets user configurable,
however it is easy to pick an unreasonable value by looking only at
savings in the space dimension without checking what happens in the
time dimension.
2014-04-14 16:15:55 +02:00
antirez
848d0461f9 Error message for invalid HLL objects unified. 2014-04-14 16:11:54 +02:00
antirez
81ceef7d22 PFMERGE fixed to work with sparse encoding. 2014-04-14 16:09:32 +02:00
antirez
9df77fc0c4 Mark PFDEBUG as write command in the commands table.
It is safer since the command can have side effects.
2014-04-14 15:57:50 +02:00
antirez
3bc35f9ce9 Correctly replicate PFDEBUG GETREG.
Even though it is a debugging command, make sure that when it forces a
change in encoding, the command is propagated.
2014-04-14 15:57:19 +02:00
antirez
ba0afb4566 Added assertion in hllSparseAdd() when promotion to dense occurs.
If we converted to dense, a register must be updated in the dense
representation.
2014-04-14 15:55:21 +02:00
antirez
e9cd51c7eb hllSparseAdd(): speed optimization.
Mostly by reordering the opcode check conditionals by the frequency of
opcodes in larger sparse-encoded HLLs.
2014-04-14 15:42:05 +02:00
antirez
681bf7468b Detect corrupted sparse HLLs in hllSparseSum(). 2014-04-14 15:20:26 +02:00
antirez
db40da0a47 hllSparseAdd(): faster code removing conditional.
Bottleneck found while profiling. A big run time improvement was
measured when testing after the change.
2014-04-14 12:58:46 +02:00
antirez
4e0a99ba51 Comment typo in hllSparseAdd(). first -> fits. 2014-04-14 12:12:53 +02:00
antirez
5532b5308a Merge adjacent VAL opcodes in hllSparseAdd().
As more values are added splitting ZERO or XZERO opcodes, try to merge
adjacent VAL opcodes if they have the same value.
2014-04-14 12:11:39 +02:00
antirez
837ca39081 More robust HLL_SPARSE macros protecting 'p' with parens.
Now the macros will work with arguments such as "ptr+1".
2014-04-14 11:49:53 +02:00
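
For illustration, the reason the parentheses matter can be sketched like
this (hypothetical macros, not the real HLL_SPARSE ones):

    /* Without parens, an argument like "ptr+1" binds to the dereference first. */
    #define IS_ZERO_UNSAFE(p) ((*p & 0xc0) == 0)   /* IS_ZERO_UNSAFE(ptr+1) -> ((*ptr)+1) & 0xc0 */
    #define IS_ZERO_SAFE(p)   ((*(p) & 0xc0) == 0) /* IS_ZERO_SAFE(ptr+1)   -> *(ptr+1) & 0xc0   */
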
antirez
142d133c8a hllSparseAdd() opcode seek stop condition fixed. 2014-04-14 11:04:11 +02:00
antirez
1ee18db922 Fixed error message generation in PFDEBUG GETREG.
Bulk length for registers was emitted too early, so if there was a bug
the reply looked like a long array with just one element, blocking the
client as a result.
2014-04-14 10:25:19 +02:00
antirez
82c31f750d Fixed memmove() count in hllSparseAdd(). 2014-04-14 09:40:07 +02:00
antirez
3b20003503 hllSparseAdd(): more correct dense conversion conditional.
We want to promote if the total string size exceeds the resulting size
after the upgrade.
2014-04-14 09:36:32 +02:00
antirez
b7571b7453 hllSparseToDense(): sanity check added.
The function checks if all the HLL_REGISTERS were processed during the
conversion from sparse to dense encoding, returning REDIS_OK or
REDIS_ERR to signal a corruption problem.

A bug in PFDEBUG GETREG was fixed: when the object is converted to the
dense representation we need to reassign the new pointer to the header
structure pointer.
2014-04-14 09:27:01 +02:00
antirez
f9dc3cb04d PFDEBUG DECODE added.
Provides a human readable description of the opcodes composing a
run-length encoded HLL (sparse encoding).
The command is only useful for debugging / development tasks.
2014-04-14 09:00:53 +02:00
antirez
261da523e8 PFDEBUG added, PFGETREG removed.
PFDEBUG will be the interface to do debugging tasks with a key
containing an HLL object.
2014-04-13 23:01:21 +02:00
antirez
e8e717e145 hllSparseToDense API changed to take ref to object.
The new API takes the object directly, doing everything needed to
turn it into a dense representation, including setting the new
representation as object->ptr.
2014-04-13 22:59:27 +02:00
antirez
2067644a8c hllSparseAdd() sanity check for span != 0 added. 2014-04-13 10:19:12 +02:00
antirez
80140fa006 Fix hllSparseAdd() new sequence replacement when next is NULL.
sdsIncrLen() must be called anyway, even if we are replacing the last
opcode of the sparse representation.
2014-04-12 23:55:44 +02:00
antirez
3c3c16561a Fix seqlen computation in hllSparseAdd(). 2014-04-12 23:52:36 +02:00
antirez
a9e057e095 Abstract hllSparseAdd() / hllDenseAdd() via hllAdd(). 2014-04-12 23:42:56 +02:00
antirez
0b7d08efb9 hllSparseSum(): multiply 1 * runlen for zero entries. 2014-04-12 16:47:50 +02:00
antirez
d9314079ca Macro HLL_SPARSE_XZERO_LEN fixed. 2014-04-12 16:46:08 +02:00
antirez
f5c03044a6 Fix HLL sparse object creation #2.
Two vars initialized to wrong values in createHLLObject().
2014-04-12 16:37:50 +02:00
antirez
b5659cb0a6 Increment pointer while iterating sparse HLL object. 2014-04-12 11:02:14 +02:00
antirez
1ccb661569 Fix HLL sparse object creation.
The function didn't consider the fact that each XZERO opcode is
two bytes.
2014-04-12 10:59:12 +02:00
antirez
a79386b1af Create HyperLogLog objects with sparse encoding. 2014-04-12 10:56:18 +02:00
antirez
1fc04a6221 HyperLogLog sparse to dense conversion function. 2014-04-12 10:55:42 +02:00
antirez
c756936b1d HyperLogLog sparse representation initial implementation.
Code never tested, but the basic layout is shaped in this commit.
Also missing:

1) Sparse -> Dense conversion function.
2) New HLL object creation using the sparse representation.
3) Implementation of PFMERGE for the sparse representation.
2014-04-11 17:34:32 +02:00
antirez
8ea5b46d30 hllCount() refactored to support multiple representations. 2014-04-11 10:25:07 +02:00
antirez
1efc1e052d hllAdd() refactored into two functions.
Also dense representation access macro renamed accordingly.
2014-04-11 09:47:52 +02:00
antirez
d55474e558 HyperLogLog refactoring to support different encodings.
Metadata is now placed at the start of the representation as a header.
There is a proper structure to access the representation.
There is still work to do in order to truly abstract the implementation
from the representation: commands still work assuming the dense
representation.
2014-04-11 09:26:45 +02:00
Matt Stancliff
83d2830372 Check key expiration before deleting
Deleting an expired key should return 0, not success.

Fixes #1648
2014-04-10 17:08:02 -04:00
antirez
9c037ba85f HyperLogLog sparse representation slightly modified.
After running a few simulations with different alternative encodings,
it was found that the VAL opcode performs better using 5 bits for the
value and 2 bits for the run length, at least for cardinalities in the
range of interest.
2014-04-10 16:36:31 +02:00
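
Assuming the layout described above (a 5-bit register value and a 2-bit
run length packed into a single VAL byte), packing and unpacking could
look like the following sketch; the exact bit positions used by Redis
may differ:

    #include <stdint.h>

    /* Illustrative "1vvvvvxx" VAL byte: 5-bit value, 2-bit run length.
     * Both are stored minus one, so the ranges are 1..32 and 1..4. */
    static uint8_t val_encode(int value, int runlen) {
        return (uint8_t)(0x80 | ((value - 1) << 2) | (runlen - 1));
    }
    static int val_value(uint8_t op)  { return ((op >> 2) & 0x1f) + 1; }
    static int val_runlen(uint8_t op) { return (op & 0x03) + 1; }
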
antirez
da2fbcf93d HyperLogLog sparse representation description and macros. 2014-04-09 18:56:00 +02:00
antirez
67bb2c46b2 Add casting to match printf format.
adjustOpenFilesLimit() and clusterUpdateSlotsWithConfig() were assuming
uint64_t is the same as unsigned long long, which is probably true for
all the systems we target, but GCC still emitted a warning since
technically they are two different types.
2014-04-07 08:58:06 +02:00
antirez
3a6a1e42f1 ZRANGEBYLEX and ZREVRANGEBYLEX implementation. 2014-04-05 11:41:43 +02:00
antirez
d5be696db8 PFCOUNT: always unshare/decode the object.
This will be a no-op most of the time since the object will already be
unshared / decoded, however it is more technically correct to start this
way since the object may be decoded even in the read-only code path.
2014-04-04 17:25:55 +02:00
antirez
1c12bcbcfb tryObjectEncoding() refactoring.
We also avoid re-creating an object that is already in EMBSTR encoding.
2014-04-04 17:25:35 +02:00
antirez
433ce7f85c Changed HyperLogLog hash seed to a non-zero value.
Using a seed of zero has the side effect of having the empty string
hashing to what is a very special case in the context of HyperLogLog: a
very long run of zeroes.

This did not influence the correctness of the result with 16k registers
because of the harmonic mean, but it is still inconvenient that such an
obvious value maps to such a special hash.

The seed 0xadc83b19 is used instead, which is the first 64 bits of the
SHA1 of the empty string.

Reference: issue #1657.
2014-04-04 09:36:32 +02:00
antirez
d2ca4bb62d Return "WRONGTYPE" error on PF* type mismatch. 2014-04-03 22:10:20 +02:00
antirez
349c978189 Fix PFADD infinite loop.
We need to guarantee that the last bit is 1, otherwise an element may
hash to just zeroes with probability 1/(2^64) and trigger an infinite
loop.

See issue #1657.
2014-04-03 19:31:26 +02:00
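
A minimal sketch of why forcing one bit to 1 bounds the loop
(illustrative, not the actual hllAdd() code): the run-of-zeroes count
over the hash bits can no longer walk past the end of the word.

    #include <stdint.h>

    /* Count the run of zeroes (plus one) starting from the least
     * significant bit. Forcing a 1 into the word guarantees termination
     * even if every other bit is zero. */
    static int pattern_len(uint64_t bits) {
        int count = 1;
        bits |= (uint64_t)1 << 63;   /* the fix: make sure a 1 exists */
        while ((bits & 1) == 0) {
            count++;
            bits >>= 1;
        }
        return count;
    }
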
antirez
ce637b2fef Remove HyperLogLog type checking duplicated code. 2014-04-03 13:20:34 +02:00
antirez
aaaed66c56 PFGETREG added for testing purposes.
The new command allows getting a dump of the registers stored in a
HyperLogLog data structure for testing / debugging purposes.
2014-04-03 10:45:30 +02:00
antirez
9682295f68 PFCOUNT: unshare the object when cached cardinality is modified. 2014-04-03 10:37:32 +02:00
antirez
be9860d0e9 PFSELFTEST improved to test the approximation error. 2014-04-03 10:18:31 +02:00
antirez
096b5e921e HyperLogLog: added magic / version.
This will allow future changes like compressed representations.
Currently the magic is not checked for performance reasons but this may
change in the future, for example if we add new types encoded in strings
that may have the same size as HyperLogLogs.
2014-04-02 09:58:47 +02:00
Raymond Myers
bf066c875f Fixed pfadd/pfcount commands emitting hll* events instead of pf* events 2014-04-01 14:59:13 -07:00
Raymond Myers
f0868e080d Change HLL* to PF* in error messages 2014-04-01 14:54:31 -07:00
antirez
4ab162a559 Include redis.h before other stuff in hyperloglog.c.
Otherwise fmacros.h is included later and this may break compilation on
different systems.
2014-04-01 15:52:15 +02:00
antirez
5afcca34ce HyperLogLog API prefix modified from "P" to "PF".
Using both the initials of Philippe Flajolet instead of just "P".
2014-03-31 22:48:01 +02:00
antirez
ba4e20835a Makefile.dep updated with hyperloglog.o deps. 2014-03-31 19:51:34 +02:00
antirez
e887c62e45 HyperLogLog: make API use the P prefix in honor of Philippe Flajolet. 2014-03-31 19:29:40 +02:00
antirez
f1b7608128 HLLMERGE fixed by adding a... missing loop! 2014-03-31 16:03:05 +02:00
antirez
ec1ee66256 HyperLogLog apply bias correction using a polynomial.
Better results can be achieved by compensating for the bias of the raw
approximation just after 2.5m (when LINEARCOUNTING is no longer used) by
using a polynomial that approximates the bias at a given cardinality.

The curve used was found using this web page:

    http://www.xuru.org/rt/PR.asp

The page performs polynomial regression given a set of values.
2014-03-31 15:41:38 +02:00
antirez
f2277475b2 HLLMERGE implemented.
Merge N HLL data structures by selecting the max value for every
M[i] register among the set of HLLs.
2014-03-31 14:39:44 +02:00
antirez
4ab45183fc HLLCOUNT is technically a write command
When we update the cached value, we need to propagate the command and
signal the key as modified for WATCH.
2014-03-31 12:29:24 +02:00
antirez
8aeb0c196a HLLADD: propagate write when only variable name is given.
The following form is given:

    HLLADD myhll

No element is provided in the above case, so if the 'myhll' key does not
exist the result is just the creation of an empty HLL structure, and no
update will be performed on the registers.

In this case, the DB should still be set dirty and the command
propagated.
2014-03-31 12:21:08 +02:00
antirez
60e60f4ee0 HyperLogLog: use LINEARCOUNTING up to 3m.
The original HyperLogLog paper suggests using LINEARCOUNTING for
cardinalities < 2.5m, however for P=14 the median / max error
curves show that a threshold of 3m is the best pick for m = 16384.
2014-03-31 10:09:55 +02:00
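
For context, LINEARCOUNTING estimates the cardinality from the number of
registers that are still zero; a sketch of the switch at the 3m
threshold (E is the raw HyperLogLog estimate, ez the number of zero
registers), not the actual hllCount() code:

    #include <math.h>

    /* Illustrative estimator selection. */
    static double hll_estimate(double E, int m, int ez) {
        if (E < 3 * (double)m && ez != 0)
            return m * log((double)m / ez);   /* linear counting */
        return E;                             /* raw HyperLogLog estimate */
    }
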
antirez
307a189900 HyperLogLog approximated cardinality caching.
The more elements we add to a HyperLogLog counter, the smaller the
probability that we actually update some register.

From this observation it is easy to see how it is possible to cache a
previously computed cardinality and reuse it to serve HLLCOUNT queries
as long as no register was updated in the data structure.

This commit does exactly this, using just 8 additional bytes in the data
structure to store the cached cardinality as a 64 bit unsigned integer.
When the most significant bit of the 64 bit integer is set, it means
that the computed value is no longer usable since at least a single
register was modified, and we need to recompute it at the next call of
HLLCOUNT.

The value is always stored in little endian format regardless of the
actual CPU endianness.
2014-03-30 19:26:16 +02:00
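
The caching scheme can be sketched as follows (simplified: a bare 64-bit
field instead of Redis' actual on-string header, with the most
significant bit doubling as the "stale" flag):

    #include <stdint.h>

    #define CARD_STALE ((uint64_t)1 << 63)

    /* Called whenever a register is modified: mark the cache unusable. */
    static void card_invalidate(uint64_t *card) { *card |= CARD_STALE; }

    /* Return the cached cardinality, recomputing only when stale. */
    static uint64_t card_get(uint64_t *card, uint64_t (*recompute)(void)) {
        if (*card & CARD_STALE)
            *card = recompute() & ~CARD_STALE;   /* store fresh value, clear flag */
        return *card;
    }
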
antirez
543ede03f2 String value unsharing refactored into proper function.
All the Redis functions that need to modify the string value of a key in
a destructive way (APPEND, SETBIT, SETRANGE, ...) require making the
object unshared (if refcount > 1) and encoded in raw format (if the
encoding is not already REDIS_ENCODING_RAW).

This was cut & pasted many times in multiple places of the code. This
commit puts the small logic needed into a function called
dbUnshareStringValue().
2014-03-30 18:32:17 +02:00
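
The logic being centralized amounts to the following sketch, assuming
the usual Redis object helpers; it is written here from the commit
description, not copied from the actual function:

    #include "redis.h"

    /* Make sure the string value of 'key' is private and raw-encoded
     * before an in-place, destructive modification. */
    robj *unshareStringValueSketch(redisDb *db, robj *key, robj *o) {
        if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
            robj *decoded = getDecodedObject(o);   /* e.g. int -> string */
            o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
            decrRefCount(decoded);
            dbOverwrite(db, key, o);               /* replace the shared value */
        }
        return o;
    }
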
antirez
aaf6db459b Use endian neutral hash function for HyperLogLog.
We need to be sure that you can save a dataset in a Redis instance,
reload it in a different architecture, and continue to count in the same
HyperLogLog structure.

So 32 and 64 bit, little or big endian, must all be guaranteed to output
the same hash for the same element.
2014-03-30 00:55:49 +01:00
antirez
4628ac0065 HyperLogLog internal representation modified.
The new representation is more obvious, starting from the LSB of the
first byte and using bits going towards the MSB, passing to the next
byte as needed.

There was also a subtle error: the first two bits were unused and
everything was shifted right by two bits, even if it worked because the
code required always having one more byte at the end.

During the rewrite the code was made safer, trying to avoid undefined
behavior due to shifting an uint8_t by more than 8 bits.
2014-03-29 16:04:27 +01:00
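
A sketch of reading one 6-bit register under the layout described above
(bits taken from the LSB of the first byte onward, spilling into the
next byte); operands are widened to unsigned long so the shifts stay
well defined. This is illustrative, not the actual HLL_GET_REGISTER
macro:

    #include <stdint.h>

    #define REG_BITS 6
    #define REG_MAX  ((1 << REG_BITS) - 1)

    /* Read register 'i' from a packed array of 6-bit counters.
     * Requires one spare byte at the end of 'regs', as the commit notes. */
    static unsigned reg_get(const uint8_t *regs, unsigned long i) {
        unsigned long byte = (i * REG_BITS) / 8;
        unsigned long fb   = (i * REG_BITS) % 8;   /* first bit inside the byte */
        unsigned long b0 = regs[byte], b1 = regs[byte + 1];
        return ((b0 >> fb) | (b1 << (8 - fb))) & REG_MAX;
    }
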
antirez
5317a582cf Remove a few useless operations from hllCount() fast path. 2014-03-29 12:17:56 +01:00
antirez
3ed947fb30 HLLCOUNT 3x faster taking fast path for default params. 2014-03-29 12:12:44 +01:00
antirez
28dce36f76 Use processor base types in HLL_(GET|SET)_REGISTER.
This speeds up the macros by a noticeable factor.
2014-03-29 08:37:01 +01:00
antirez
ac8fbe8829 HyperLogLog: use precomputed table for 2^(-M[i]). 2014-03-28 22:49:24 +01:00
antirez
f90a4af3d7 HyperLogLog algorithm fixed in two ways.
There was an error in the computation of 2^register, and the sequence of
zeroes computed after the hashing did not include the "1".
2014-03-28 18:24:05 +01:00
antirez
ded86076b3 HLLCOUNT implemented. 2014-03-28 17:37:18 +01:00
antirez
156929ee97 HLLADD implemented. 2014-03-28 16:24:35 +01:00
antirez
5660ff1cc1 hllAdd() low level HyperLogLog "add" implemented. 2014-03-28 14:42:30 +01:00
antirez
e3234116ad HyperLogLog: redefine constants using "P". 2014-03-28 14:09:28 +01:00
antirez
e73839e7d5 HLL_SET_REGISTER fixed.
There was an error in the first version of the macro.
Now the HLLSELFTEST test reports success.
2014-03-28 13:56:07 +01:00
antirez
f22397dd7f Use REDIS_HLL_REGISTER_MAX when possible. 2014-03-28 12:16:39 +01:00
antirez
1c88c5941b HLL_(SET|GET)_REGISTER types fixed. 2014-03-28 12:15:46 +01:00
antirez
552eb5407a HLLSELFTEST command implemented.
Testing the bitfield array-of-counters set/get macros from the Redis Tcl
suite is hard, so a specialized command able to test the internals was
developed.
2014-03-28 12:11:55 +01:00
antirez
0609380603 HyperLogLog: initial sketch of registers access. 2014-03-28 11:18:48 +01:00
antirez
8f52173b2c Cluster: last_vote_epoch -> lastVoteEpoch.
Use camel case for epochs that are persisted on disk.
2014-03-27 15:01:24 +01:00
antirez
7fb14b73ba Cluster: save/restore vars that must persist after recovery.
This fixes issue #1479.
2014-03-27 14:56:29 +01:00
antirez
6dd2dbbd36 Cluster: handshake "already known" error logged to VERBOSE.
This is not really an error but something that always happens, for
example when creating a new cluster, or if the sysadmin manually rejoins
a node that is already known.

Since useless logs don't help, moved to VERBOSE level.
2014-03-26 16:35:38 +01:00
antirez
3cf6f1f54f Cluster: clusterHandleConfigEpochCollision() fixed.
New config epochs must always be obtained by incrementing the
currentEpoch, which is itself guaranteed to be >= the max configEpoch
currently known to the node.
2014-03-26 12:31:28 +01:00
antirez
80d4c52cdf Cluster: better logging for clusterUpdateSlotsConfigWith(). 2014-03-26 12:09:38 +01:00
antirez
eb746ec408 Cluster: CLUSTER SETSLOT implementation comment updated.
Update the comment since the implementation details changed.
2014-03-25 17:50:46 +01:00
antirez
0064b1a583 Cluster: redis-trib cluster allocation more even across nodes.
redis-trib used to allocate slots not considering fractions of nodes
when computing the slots_per_node amount. So the fractional part was
carried over till the end of the allocation, where the last node
received a few more slots than any other (or a lot more if the cluster
was composed of many nodes).

The computation was changed to allocate slots more evenly when they are
not exactly divisible by the number of masters we have.
2014-03-25 17:44:39 +01:00
antirez
6c527a89a0 Cluster: configEpoch collisions resolution.
The slave election in Redis Cluster guarantees that slaves promoted to
masters always end up with unique config epochs, however failures during
manual reshardings, software bugs and operational errors may in theory
cause two nodes to have the same configEpoch.

This commit introduces a mechanism to eventually always end with different
configEpochs if a collision ever happens.

As a (wanted) side effect, this also ensures that after a new cluster
is created, all nodes will end with a different configEpoch automatically.
2014-03-25 17:19:58 +01:00
antirez
c1041c570f Cluster: stay within 80 cols. 2014-03-25 16:07:14 +01:00
antirez
6540e9eeaa Fix off by one bug in freeMemoryIfNeeded() eviction pool.
Bug found by the continuous integration test running Redis
with valgrind:

==6245== Invalid read of size 8
==6245==    at 0x4C2DEEF: memcpy@GLIBC_2.2.5 (mc_replace_strmem.c:876)
==6245==    by 0x41F9E6: freeMemoryIfNeeded (redis.c:3010)
==6245==    by 0x41D2CC: processCommand (redis.c:2069)

memmove() size argument was accounting for an extra element, going
outside the bounds of the array.
2014-03-25 10:32:15 +01:00
antirez
6e33c908dd adjustOpenFilesLimit() refactoring.
In this commit:
* Decrement steps are semantically differentiated from the reserved FDs.
  Previously both values were 32 but the meaning was different.
* Make it clear that we save setrlimit errno.
* Don't explicitly handle wrapping of 'f', but prevent it from
  happening.
* Add comments to make the function flow more readable.

This integrates PR #1630
2014-03-25 09:05:28 +01:00
Salvatore Sanfilippo
72c5ebcba4 Merge pull request #1630 from mattsta/fix-infinite-loop-ulimit
Fix infinite loop ulimit
2014-03-25 08:42:39 +01:00
antirez
35667d75c3 Fixed undefined variable value with certain code paths.
In sentinelFlushConfig() fd could be undefined when the following if
statement was true:

        if (rewrite_status == -1) goto werr;

This could cause random file descriptors to get closed.
2014-03-24 21:07:44 +01:00
Matt Stancliff
78782ed59f Use LRU_CLOCK() instead of function getLRUClock()
lookupKey() uses LRU_CLOCK(), so it seems object
creation should use LRU_CLOCK() too.
2014-03-24 14:39:26 -04:00
Matt Stancliff
4290455145 Sentinel: Notify user when config can't be saved 2014-03-24 13:54:14 -04:00
Matt Stancliff
b47b343fab Fix data loss when save AOF/RDB with no free space
Previously, the (!fp) check would only catch lack of free space
under OS X.  Linux waits to discover it can't write until
it actually writes contents to disk.

(fwrite() returns success even if the underlying file
has no free space to write into.  All the errors
only show up at flush/sync/close time.)

Fixes antirez/redis#1604
2014-03-24 13:54:14 -04:00
Salvatore Sanfilippo
906c4d77c0 Merge pull request #1617 from mattsta/remove-unused-warning
Cluster: remove variable causing warning
2014-03-24 18:33:22 +01:00
Salvatore Sanfilippo
8e6625e6ae Merge pull request #1629 from mattsta/fix-trib-master-assignment
Cluster: Restore proper trib master iteration
2014-03-24 18:31:55 +01:00
Salvatore Sanfilippo
a006fcb8a7 Merge pull request #1628 from mattsta/fix-trib-create
Cluster: Fix trib create when masters==replicas
2014-03-24 18:26:17 +01:00
Matt Stancliff
386a46946b Fix potentially incorrect errno usage
errno may be reset by the previous call to redisLog, so capture
the original value for proper error reporting.
2014-03-24 13:21:15 -04:00
Matt Stancliff
3b54ee6ea4 Add REDIS_MIN_RESERVED_FDS define for open fds
Also update the original REDIS_EVENTLOOP_FDSET_INCR to
include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR
exists to make sure more than (maxclients+RESERVED) entries
are allocated, but we can only guarantee that if we include
the current value of REDIS_MIN_RESERVED_FDS as a minimum
for the INCR size.
2014-03-24 13:15:35 -04:00
Salvatore Sanfilippo
896e15f3e3 Merge pull request #1627 from badboy/lru-fix
Fixed a few typos.
2014-03-24 18:13:39 +01:00
Matt Stancliff
e942f3ce0f Cluster: Restore proper trib master iteration
This got removed in 2e5c394 during a new feature addition.

The prior commit had "break if masters.length == masters_count"
but we are guaranteed to already have that condition met, since
otherwise we wouldn't have gotten this far.

Without this break statement, it's possible some masters may
be forgotten and have zero replicas while other masters have
more than their requested number of replicas.

Thanks to carlos for pointing out this regression at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
2014-03-24 10:17:44 -04:00
Matt Stancliff
df4bdbf688 Cluster: Fix trib create when masters==replicas
This bug was introduced in 2e5c394f during a refactor.

It took me a while to understand what was going on with
the code, so I've refactored it further by:
  - Replacing boolean values with meaningful symbols
  - Replacing 'i' with a meaningful variable name
  - Adding the proper abort check
  - Factoring out now duplicated conditionals
  - Adding optional verbose logging (we're inside *four*
    different looping constructs, so it takes a while to
    figure out where all the moving parts are)
  - Updating comment for the section

This fixes a problem when the number of master instances
equaled the number of replica instances.  Before, when
there were equal numbers of both, nodes_count would go to
zero, but the while loop would spin in i < @replicas because
i would never be updated (because the nodes_list of each ip
was length == 0, which triggered an endless loop of
next -> i = 0 -> 0 < 1? -> true -> next -> i = 0 ...)

Thanks to carlo who found this problem at:
https://groups.google.com/forum/#!topic/redis-db/_WVVqDw5B7c
2014-03-24 10:17:38 -04:00
Matt Stancliff
90b844212d Fix infinite loop on startup if ulimit too low
Fun fact: rlim_t is an unsigned long long on all platforms.

Continually subtracting from a rlim_t makes it get smaller
and smaller until it wraps, then you're up to 2^64-1.

This was causing an infinite loop on Redis startup if
your ulimit was extremely (almost comically) low.

The case of (f > oldlimit) would never be met in a case like:

    f = 150
    while (f > 20) f -= 128

Since f is unsigned, it can't go negative and would
take on values of:

    Iteration 1: 150 - 128 => 22
    Iteration 2:  22 - 128 => 18446744073709551510
    Iterations 3-∞: ...

To catch the wraparound, we use the previous value of f
stored in limit.rlim_cur.  If we subtract from f and
get a larger number than the value it had previously,
we print an error and exit since we don't have enough
file descriptors to help the user at this point.

Thanks to @bs3g for the inspiration to fix this problem.
Patches existed from @bs3g at antirez#1227, but I needed to repair a few other
parts of Redis simultaneously, so I didn't get a chance to use them.
2014-03-24 10:17:33 -04:00
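
The guard can be condensed into a sketch like the one below
(hypothetical helper; the real adjustOpenFilesLimit() also attempts
setrlimit() at each step): since rlim_t is unsigned, check for underflow
before subtracting.

    #include <sys/resource.h>

    /* Walk the requested limit down towards 'oldlimit' without wrapping. */
    static rlim_t lower_request(rlim_t f, rlim_t oldlimit, rlim_t decr) {
        while (f > oldlimit) {
            if (f < decr) break;   /* next subtraction would wrap: give up */
            f -= decr;
        }
        return f;
    }
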
Matt Stancliff
4a25983f8f Improve error handling around setting ulimits
The log messages about open file limits have always
been slightly opaque and confusing.  Here's an attempt to
fix their wording, detail, and meaning.  Users will have a
better understanding of how to fix very common problems
with these reworded messages.

Also, we handle a new error case when maxclients becomes less
than one, essentially rendering the server unusable.  We
now exit on startup instead of leaving the user with a server
unable to handle any connections.

This fixes antirez#356 as well.
2014-03-24 10:17:33 -04:00
Matt Stancliff
491532a713 Replace magic 32 with REDIS_EVENTLOOP_FDSET_INCR
32 was the additional number of file descriptors Redis
would reserve when managing a too-low ulimit.  The
number 32 was in too many places statically, so now
we use a macro instead that looks more appropriate.

When Redis sets up the server event loop, it uses:
    server.maxclients+REDIS_EVENTLOOP_FDSET_INCR

So, when reserving file descriptors, it makes sense to
reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead
of only 32.  Currently, REDIS_EVENTLOOP_FDSET_INCR is
set to 128 in redis.h.

Also, I replaced the static 128 in the while f < old loop
with REDIS_EVENTLOOP_FDSET_INCR as well, which results
in no change since it was already 128.

Impact: Users now need at least maxclients+128 as
their open file limit instead of maxclients+32 to obtain
actual "maxclients" number of clients.  Redis will carve
the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it
needs out of the "maxclients" range instead of failing
to start (unless the local ulimit -n is too low to accommodate
the request).
2014-03-24 10:17:33 -04:00
Matt Stancliff
c138631cd1 Fix maxclients error handling
Everywhere in the Redis code base, maxclients is treated
as an int with (int)maxclients or `maxclients = atoi(source)`,
so let's make maxclients an int.

This fixes a bug where someone could specify a negative maxclients
on startup and it would work (as well as set maxclients very high)
because:

    unsigned int maxclients;
    char *update = "-300";
    maxclients = atoi(update);
    if (maxclients < 1) goto fail;

But, (maxclients < 1) can only catch the case when maxclients
is exactly 0.  maxclients happily sets itself to -300, which isn't
-300, but rather 4294966996, which isn't < 1, so... everything
"worked."

maxclients config parsing checks for the case of < 1, but maxclients
CONFIG SET parsing was checking for case of < 0 (allowing
maxclients to be set to 0).  CONFIG SET parsing is now updated to
match config parsing of < 1.

It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.

These changes were inspired by antirez#356, but this doesn't
fix that issue.
2014-03-24 10:17:33 -04:00
antirez
93253c2762 Sample and cache RSS in serverCron().
Obtaining the RSS (Resident Set Size) info is slow in Linux and OSX.
This slowed down the generation of the INFO 'memory' section.

Since the RSS does not require to be a real-time measurement, we
now sample it with server.hz frequency (10 times per second by default)
and use this value both to show the INFO rss field and to compute the
fragmentation ratio.

Practically this does not make any difference for memory profiling of
Redis but speeds up the INFO call significantly.
2014-03-24 12:00:20 +01:00
antirez
30639c8ca9 sdscatvprintf(): Try to use a static buffer.
For small content the function now tries to use a static buffer to avoid
a malloc/free cycle that is too costly when the function is used in
performance critical code paths such as INFO output generation.

This change was verified to have positive effects in the execution speed
of the INFO command.
2014-03-24 10:20:33 +01:00
antirez
d3efe04c47 Cache uname() output across INFO calls.
uname() was profiled to be a slow syscall. It always produces the same
output in the context of a single execution of Redis, so calling it at
every INFO output generation does not make much sense.

The uname utsname structure is now a static variable. At the same time a
static integer was added to track whether we still need to call uname
the first time.
2014-03-24 10:00:08 +01:00
antirez
a9caca0424 sdscatvprintf(): guess buflen using format length.
sdscatvprintf() uses a loop where it tries to output the formatted
string in a buffer of the initial length; if there is not enough room,
a buffer of doubled size is tried, and so forth.

The initial guess for the buffer length was very poor: a hardcoded
"16". This caused the printf to be processed multiple times without a
good reason. Given that printf functions are already not fast, the
overhead was significant.

The new heuristic is to use a buffer 4 times the length of the format
string, and 32 as the minimal size. This appears to be a good balance
for typical uses of the function inside the Redis code base.

This change improved INFO command performance by 3 times.
2014-03-24 09:44:11 +01:00
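
A sketch of the heuristic (illustrative, not the actual sds code): start
from four times the format length with a 32 byte floor, and double on
overflow.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Format into a heap buffer, doubling it until the output fits.
     * Caller frees the result; returns NULL on allocation failure. */
    static char *catvprintf_sketch(const char *fmt, va_list ap) {
        size_t buflen = strlen(fmt) * 4;
        if (buflen < 32) buflen = 32;
        char *buf = malloc(buflen);
        while (buf != NULL) {
            va_list cpy;
            va_copy(cpy, ap);
            int n = vsnprintf(buf, buflen, fmt, cpy);
            va_end(cpy);
            if (n >= 0 && (size_t)n < buflen) return buf;  /* it fit */
            free(buf);
            buflen *= 2;                                   /* too small: retry bigger */
            buf = malloc(buflen);
        }
        return NULL;
    }
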
antirez
4d2e8fa189 Use getLRUClock() instead of server.lruclock to create objects.
Thanks to Matt Stancliff for noticing this error. It was in the original
code but somehow I managed to remove the change from the commit...
2014-03-21 09:08:20 +01:00
antirez
5fa3248bad The default maxmemory policy is now noeviction.
This is safer as by default maxmemory should just set a memory limit
without any key being deleted, unless the policy is set to something
more relaxed.
2014-03-21 08:03:34 +01:00
Jan-Erik Rediger
4fdd7a0546 Fixed a few typos. 2014-03-20 23:16:38 +01:00
antirez
a98369929e Use 24 bits for the lru object field and improve resolution.
There were 2 spare bits inside the Redis object structure that are now
used in order to enlarge the range of the LRU field 4x.

At the same time the resolution was improved from 10 seconds to 1
second: this still provides 194 days before the LRU counter overflows
(restarting from zero).

This is not a problem since it only causes lack of eviction precision for
objects not touched for a very long time, and the lack of precision is
only temporary.
2014-03-20 17:56:27 +01:00
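
For reference, the 194 days figure follows directly from the new field
width and resolution:

    2^24 seconds = 16,777,216 s, and 16,777,216 / 86,400 ≈ 194 days before the counter wraps.
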
antirez
f4da796c53 Default LRU samples is now 5. 2014-03-20 17:05:42 +01:00
antirez
c641b670c3 Use new dictGetRandomKeys() API to get samples for eviction.
The eviction quality degrades a bit in my tests, but since the API is
faster, it allows raising the number of samples, and overall it is a win.
2014-03-20 16:52:12 +01:00
antirez
82b53c650c struct dictEntry -> dictEntry. 2014-03-20 16:20:37 +01:00
antirez
5317f5e99a Added dictGetRandomKeys() to dict.c: mass get random entries.
This new function is useful to get a number of random entries from a
hash table when we just need to do some sampling without a particularly
good distribution.

It just jumps to a random place in the hash table and returns the first
N items encountered by scanning linearly.

The main usefulness of this function is to speed up Redis' internal
sampling of the key space, for example for key eviction or expiry.
2014-03-20 15:50:46 +01:00
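
A simplified sketch of the sampling strategy (jump to a random bucket,
then scan linearly); this uses a toy chained hash table rather than
Redis' dict API:

    #include <stdlib.h>

    typedef struct entry { void *key; struct entry *next; } entry;

    /* Collect up to 'count' entries starting from a random bucket.
     * Distribution is not uniform, which is acceptable for sampling. */
    static unsigned long sample_entries(entry **table, unsigned long size,
                                        entry **dst, unsigned long count) {
        unsigned long stored = 0, visited = 0;
        unsigned long i = (unsigned long)rand() % size;
        while (stored < count && visited++ < size) {
            for (entry *he = table[i]; he && stored < count; he = he->next)
                dst[stored++] = he;
            i = (i + 1) % size;    /* move linearly, wrapping around */
        }
        return stored;
    }
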
antirez
22c9cfaf57 LRU eviction pool implementation.
This is an improvement over the previous eviction algorithm: we now use
an eviction pool that is persistent across key evictions and gets
populated with the best eviction candidates found so far.

It approximates LRU eviction better than the previous algorithm for a
given number of samples.
2014-03-20 11:57:29 +01:00
antirez
6d5790d682 Fix OBJECT IDLETIME return value converting to seconds.
estimateObjectIdleTime() returns a value in milliseconds now, so we need
to scale the output of OBJECT IDLETIME to seconds.
2014-03-20 11:55:18 +01:00
antirez
ad6b0f70b2 Obtain LRU clock in a resolution dependent way.
For testing purposes it is handy to have a very high resolution LRU
clock, so that it is possible to experiment, with scripts running in
just a few seconds, with how the eviction algorithm works.

This commit allows Redis to use the cached LRU clock, or a value
computed on demand, depending on the resolution. So normally we have
the good performance of a precomputed value, and a clock that wraps in
many days using the normal resolution, but if needed, changing a define
will switch behavior to a high resolution LRU clock.
2014-03-20 11:47:12 +01:00
antirez
1faf82663f Specify lruclock in redisServer structure via REDIS_LRU_BITS.
The padding field was totally useless: removed.
2014-03-20 11:37:27 +01:00
antirez
d77e231682 Specify LRU resolution in milliseconds. 2014-03-20 11:33:25 +01:00
antirez
fe30847016 Set LRU parameters via REDIS_LRU_BITS define. 2014-03-20 11:22:47 +01:00
antirez
e150ec7d0c Unify stats reset for CONFIG RESETSTAT / initServer().
Now CONFIG RESETSTAT makes sure to reset all the fields, and in the
future it will be simpler to avoid missing new fields.
2014-03-19 12:55:49 +01:00
Matt Stancliff
67ed5f00aa Cluster: remove variable causing warning
GCC-4.9 warned about this, but clang didn't.

This commit fixes warning:
sentinel.c: In function 'sentinelReceiveHelloMessages':
sentinel.c:2156:43: warning: variable 'master' set but not used [-Wunused-but-set-variable]
     sentinelRedisInstance *ri = c->data, *master;
2014-03-18 15:35:09 -04:00
antirez
b9e90a70fa Sentinel: sentinelRefreshInstanceInfo() minor refactoring.
Test the sentinel.tilt condition at the top and return if it is true.
This allows removing the tilt condition check from the remaining code
paths of the function.
2014-03-18 15:35:47 +01:00
antirez
218cc5fc39 Sentinel: propagate down-after-ms changes to slaves and sentinels. 2014-03-18 14:37:44 +01:00
antirez
bb6d850160 Sentinel: down-after-milliseconds is not master-specific.
addReplySentinelRedisInstance() modified so that this field is displayed
for all kinds of instances: Sentinels, Masters, Slaves.
2014-03-18 11:21:17 +01:00
antirez
ae0b7680b3 Sentinel failure detection implementation improved.
Failure detection in Sentinel is ping-pong based. It used to work by
remembering the last time a valid PONG reply was received, and checking
if the reception time was too old compared to the current time.

PINGs were sent at a fixed interval of 1 second.

This works in a decent way, but does not scale well when we want to set
very small values of "down-after-milliseconds" (this is basically the
node timeout).

This commit reimplements the failure detection, making a number of
changes. Some changes are inspired by the Redis Cluster failure
detection code:

* A new last_ping_time field is added to the representation of
  instances. If non zero, we have an active ping that was sent at the
  specified time. When a valid reply to ping is received, the field is
  zeroed again.
* last_ping_time is not reset when we reconnect the link or send a new
  ping, so from our point of view it represents the time we started
  waiting for the instance to reply to our pings without receiving a
  reply.
* last_ping_time is now used in order to check if the instance is
  timed out. This means that we can have a node timeout of 100
  milliseconds and yet the system will work well, since the new check
  is not bound to the period used to send pings.
* Pings are now sent every second, or more often if the value of
  down-after-milliseconds is less than one second, with a lower limit
  of 10 Hz ping frequency.
* Link reconnection code was improved. This is used in order to try to
  reconnect the link when we are at 50% of the node timeout without a
  valid reply received yet. However the old code triggered unnecessary
  reconnections when the node timeout was very small. Now that should
  be ok.

The new code passes the tests, but more testing is needed, along with
more unit tests stressing the failure detector, so currently this is
merged only into the unstable branch.
2014-03-17 18:33:45 +01:00
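
A sketch of the new timeout check built around last_ping_time (field
names follow the commit; the surrounding structure is simplified and
hypothetical):

    #include <stdint.h>

    typedef struct instance {
        int64_t last_ping_time;       /* ms; 0 = no ping currently pending */
        int64_t down_after_period;    /* ms; the configured node timeout */
        int     flags;                /* subjective-down flag lives here */
    } instance;

    #define S_DOWN (1 << 0)

    /* Subjectively down if we have been waiting for a PONG longer than
     * down-after-milliseconds, regardless of how often pings are sent. */
    static void check_subjectively_down(instance *ri, int64_t now_ms) {
        if (ri->last_ping_time != 0 &&
            now_ms - ri->last_ping_time > ri->down_after_period)
            ri->flags |= S_DOWN;
    }
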
antirez
3a2ff55617 Sentinel: use CLIENT SETNAME when connecting to Redis.
This makes debugging / monitoring of Sentinels simpler since you can
identify sentinels in CLIENT LIST output of Redis instances.
2014-03-15 14:59:23 +01:00
Matt Stancliff
584052ee6b Fix segfault from accessing array out of bounds
argc == 2; argv[2] == crash
2014-03-14 17:38:05 -04:00
antirez
ed813863f0 Sentinel: be safe under crash-recovery assumptions.
Sentinel's main safety argument is that there are no two configurations
for the same master with the same version (configuration epoch).

For this to be true, Sentinels need to be authorized by a majority.
Additionally Sentinels need to do two important things:

* Never vote again for the same epoch.
* Never exchange an old vote for a fresh one.

The first prerequisite, in a crash-recovery system model, requires
persisting master->leader_epoch on durable storage before replying to
messages. This was not the case.

We also make sure to persist the current epoch in order to never reply
to stale vote requests from other Sentinels after a recovery.

The configuration is persisted by making use of fsync(); in the context
of this code that is considered a good enough guarantee that after a
restart our durable state is restored, however this may not always be
the case depending on the kind of hardware and operating system used.
2014-03-14 14:58:44 +01:00
antirez
365094028b Sentinel: fake PUBLISH command to receive HELLO messages.
Now the way HELLO messages are received is unified.
It is no longer needed for Sentinels to converge to the highest
configuration for a master to be able to chat via some Redis instance:
they are able to directly exchange configurations.

Note that this commit does not include the (trivial) change needed to
send HELLO messages to Sentinel instances as well, since by mistake I
committed that change in the previous commit, which refactored hello
message processing into a separate function.
2014-03-14 11:07:42 +01:00
antirez
9dfe426fc8 Sentinel: HELLO processing refactored into sentinelProcessHelloMessage(). 2014-03-14 11:07:42 +01:00
antirez
133fccb03f Cluster: flag the transaction as dirty for the new redirections. 2014-03-13 15:11:53 +01:00
antirez
429aff4ef4 Linenoise updated, multiline mode enabled in redis-cli. 2014-03-13 15:11:08 +01:00
antirez
cc11d103c0 redis-trib: call MIGRATE via r.client.call as fix for redis-rb API changes.
See issue #1593.

Thanks to @badboy for suggesting the direct client.call fix.
2014-03-11 16:10:13 +01:00
antirez
df32eb6827 redis-trib: new subcommand 'call'. Exec command in all nodes.
Example:

./redis-trib.rb call 192.168.1.11:7000 config get cluster-node-timeout
2014-03-11 14:58:55 +01:00
antirez
2e5c394fa8 redis-trib: create subcommand is now able to assign spare slaves.
Example: if the user tries to configure a cluster with 9 nodes, asking
for 1 slave per master, redis-trib will configure a 4-master cluster
with 1 slave each as usual, but this time it will assign the spare node
as a slave of one of the masters.
2014-03-11 14:17:28 +01:00
antirez
e26f4486b0 Cluster: update node configEpoch on UPDATE messages.
The UPDATE message contains the configEpoch of the node configuration
advertised in the packet. Update it if needed.
2014-03-11 11:53:09 +01:00
antirez
a2ff90919f Cluster: set slot error if we receive an update for a busy slot.
By manually modifying node configurations in random ways, it is possible
to create the following scenario:

A is serving keys for slot 10
B is manually configured to serve keys for slot 10

A receives an update from B (or another node) where it is informed that
the slot 10 is now claimed by B with a greater configuration epoch,
however A still has keys from slot 10.

With this commit A will put the slot in an error condition by setting it
to IMPORTING state, so that redis-trib can detect the issue.
2014-03-11 11:49:47 +01:00
antirez
1ed0ad77f0 Cluster: clarified a comment in clusterUpdateSlotsConfigWith(). 2014-03-11 11:32:40 +01:00
antirez
8287945ff8 Cluster: flush importing/migrating state when master is turned into slave. 2014-03-11 11:22:06 +01:00
antirez
2e8e0ad44e Cluster: clusterCloseAllSlots() added. 2014-03-11 11:16:18 +01:00