The function to generate graphs is also more flexible as it now
includes a step and a max value. The step of the samples generation
function is no longer limited to a minimum of 1000.
This will allow future changes like compressed representations.
Currently the magic is not checked for performance reasons, but this
may change in the future, for example if we add new types encoded as
strings that may have the same size as HyperLogLogs.
Better results can be achieved by compensating for the bias of the raw
approximation just after 2.5m (when LINEARCOUNTING is no longer used) by
using a polynomial that approximates the bias at a given cardinality.
The curve used was found using the following web page, which performs
polynomial regression given a set of values:
http://www.xuru.org/rt/PR.asp
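As a rough illustration (not the actual Redis code, and with placeholder
coefficients), the correction boils down to evaluating the fitted
polynomial on the raw estimate and subtracting the result:

    /* Minimal sketch of the bias compensation idea. The coefficients
     * are placeholders standing in for the ones obtained from the
     * polynomial regression, not the values used by Redis. */
    double hllBiasCorrect(double raw_estimate) {
        /* P(x) = c4*x^4 + c3*x^3 + c2*x^2 + c1*x + c0, fitted offline. */
        const double c4 = 0.0, c3 = 0.0, c2 = 0.0, c1 = 0.0, c0 = 0.0;
        double x = raw_estimate;
        double bias = (((c4*x + c3)*x + c2)*x + c1)*x + c0;
        return raw_estimate - bias;
    }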
The following form is supported:
HLLADD myhll
No elements are provided in the above case, so if the 'myhll' key does
not exist the command just creates an empty HLL structure and performs
no update on the registers.
In this case, the DB should still be set dirty and the command
propagated.
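A minimal sketch of this behavior, using a toy single-key "database"
instead of the real Redis structures (all names here are made up for
the example):

    struct toy_db {
        int hll_exists;   /* does 'myhll' exist? */
        int dirty;        /* pending changes to propagate */
    };

    /* HLLADD with no elements: create the key if missing and mark the
     * DB dirty so the command gets propagated, but touch no register. */
    static void hlladd_no_elements(struct toy_db *db) {
        if (!db->hll_exists) {
            db->hll_exists = 1;   /* empty HLL structure */
            db->dirty++;          /* creation must still be propagated */
        }
    }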
The original HyperLogLog paper suggests using LINEARCOUNTING for
cardinalities < 2.5m, however for P=14 the median / max error curves
show that a multiplier of 3 (i.e. switching at 3m) is the best pick for
m = 16384.
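A sketch of the switch point, assuming the usual linear counting
formula m*log(m/V) where V is the number of zero registers (names are
illustrative):

    #include <math.h>

    #define HLL_P 14
    #define HLL_REGISTERS (1<<HLL_P)   /* m = 16384 */

    /* Use LINEARCOUNTING only while the estimate is below 3*m
     * (instead of the paper's 2.5*m) and there are zero registers. */
    double hllEstimate(double raw_estimate, int zero_registers) {
        if (zero_registers && raw_estimate < 3*(double)HLL_REGISTERS)
            return HLL_REGISTERS *
                   log((double)HLL_REGISTERS / zero_registers);
        return raw_estimate;   /* (bias corrected) raw estimate */
    }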
The more elements we add to a HyperLogLog counter, the smaller the
probability that we actually update some register becomes.
From this observation it follows that we can cache a previously
computed cardinality and reuse it to serve HLLCOUNT queries, as long as
no register was updated in the data structure.
This commit does exactly that, using just 8 additional bytes in the
data structure to store the cached cardinality as a 64 bit unsigned
integer. When the most significant bit of the 64 bit integer is set,
the cached value is no longer usable since at least one register was
modified, and the cardinality must be recomputed at the next call of
HLLCOUNT.
The value is always stored in little endian format regardless of the
actual CPU endianness.
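A sketch of the scheme, with illustrative names (the real header layout
may differ):

    #include <stdint.h>

    struct hllhdr_sketch {
        unsigned char card[8];   /* cached cardinality, little endian */
        /* ... registers follow ... */
    };

    /* MSB of the last byte (bit 63 of the value) is the "stale" flag. */
    #define HLL_INVALIDATE_CACHE(h) ((h)->card[7] |= (1<<7))
    #define HLL_VALID_CACHE(h)      (((h)->card[7] & (1<<7)) == 0)

    /* Store / load byte by byte so the layout is little endian
     * regardless of the host CPU. */
    static void hllSetCache(struct hllhdr_sketch *h, uint64_t card) {
        for (int i = 0; i < 8; i++) h->card[i] = (card >> (8*i)) & 0xff;
    }

    static uint64_t hllGetCache(struct hllhdr_sketch *h) {
        uint64_t card = 0;
        for (int i = 0; i < 8; i++) card |= ((uint64_t)h->card[i]) << (8*i);
        return card;
    }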
All the Redis functions that need to modify the string value of a key
in a destructive way (APPEND, SETBIT, SETRANGE, ...) need to make the
object unshared (if refcount > 1) and encoded in raw format (if the
encoding is not already REDIS_ENCODING_RAW).
This logic was cut & pasted in multiple places across the code base.
This commit puts the small amount of logic needed into a function
called dbUnshareStringValue().
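A sketch of what such a helper might look like, built on existing
object helpers (getDecodedObject, createRawStringObject, dbOverwrite);
it is meant to show the intent, not to be the exact implementation:

    robj *dbUnshareStringValueSketch(redisDb *db, robj *key, robj *o) {
        if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
            robj *decoded = getDecodedObject(o);   /* raw string copy */
            o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
            decrRefCount(decoded);
            dbOverwrite(db, key, o);   /* replace the shared/encoded obj */
        }
        return o;
    }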
We need to be sure that a dataset saved by one Redis instance can be
reloaded on a different architecture and counting can continue in the
same HyperLogLog structure.
So 32 and 64 bit systems, little or big endian, must all produce the
same hash for the same element.
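One common way to keep a hash endian neutral is to always read the
input as little endian bytes instead of casting the buffer to uint64_t,
so every host feeds the mixing function the same values for the same
element. A minimal sketch:

    #include <stdint.h>

    static uint64_t load64le(const unsigned char *p) {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= ((uint64_t)p[i]) << (8*i);
        return v;
    }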
The new representation is more obvious: it starts from the LSB of the
first byte, uses bits going towards the MSB, and moves to the next byte
as needed.
There was also a subtle error: the first two bits were unused and
everything was offset by two bits; it still worked only because the
code always required one extra byte at the end of the array.
During the rewrite the code was made safer by avoiding undefined
behavior due to shifting a uint8_t by more than 8 bits.
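A sketch of the layout and of the safer access pattern, with
illustrative names (6 bit registers, values widened to unsigned long
before shifting so no uint8_t is shifted by more than its width):

    #include <stdint.h>

    #define HLL_BITS 6
    #define HLL_REGISTER_MAX ((1 << HLL_BITS) - 1)

    /* Register j starts at bit j*6, counting from the LSB of the first
     * byte; it may spill into the next byte, which is why the array
     * always keeps one spare byte at the end. */
    static unsigned long denseGetRegister(const uint8_t *p, unsigned long j) {
        unsigned long byte = j * HLL_BITS / 8;
        unsigned long fb = j * HLL_BITS & 7;  /* first bit inside the byte */
        unsigned long b0 = p[byte], b1 = p[byte+1];
        return ((b0 >> fb) | (b1 << (8 - fb))) & HLL_REGISTER_MAX;
    }

    static void denseSetRegister(uint8_t *p, unsigned long j,
                                 unsigned long val) {
        unsigned long byte = j * HLL_BITS / 8;
        unsigned long fb = j * HLL_BITS & 7;
        p[byte] &= ~(HLL_REGISTER_MAX << fb);
        p[byte] |= val << fb;
        p[byte+1] &= ~(HLL_REGISTER_MAX >> (8 - fb));
        p[byte+1] |= val >> (8 - fb);
    }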
Testing the bitfield counter array set/get macros from the Redis Tcl
suite is hard, so a specialized command able to exercise the internals
was developed.
This is not really an error but something that always happens, for
example when creating a new cluster or when the sysadmin manually
rejoins a node that is already known.
Since useless logs don't help, the message was moved to the VERBOSE
level.
New config epochs must always be obtained by incrementing the
currentEpoch, which is itself guaranteed to be >= the max configEpoch
currently known to the node.
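A sketch of the rule (names are illustrative):

    #include <stdint.h>

    /* Lift currentEpoch to the greatest configEpoch known, then bump
     * it: the new epoch is guaranteed to be greater than any
     * configEpoch seen so far by this node. */
    static uint64_t getNewConfigEpoch(uint64_t *currentEpoch,
                                      uint64_t maxKnownConfigEpoch) {
        if (*currentEpoch < maxKnownConfigEpoch)
            *currentEpoch = maxKnownConfigEpoch;
        return ++(*currentEpoch);
    }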
redis-trib used to ignore the fractional part when computing the
slots_per_node amount, so the remainder was carried over to the end of
the allocation, where the last node received a few more slots than any
other (or a lot more if the cluster was composed of many nodes).
The computation was changed to allocate slots more evenly when they are
not exactly divisible by the number of masters we have.
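redis-trib itself is written in Ruby; the C sketch below just
illustrates the arithmetic of the fairer allocation, where the
remainder is spread over the first (16384 % masters) nodes instead of
being dumped on the last one:

    #include <stdio.h>

    #define CLUSTER_SLOTS 16384

    static void allocateSlots(int masters) {
        int base  = CLUSTER_SLOTS / masters;
        int extra = CLUSTER_SLOTS % masters;
        int first = 0;
        for (int i = 0; i < masters; i++) {
            int count = base + (i < extra ? 1 : 0);
            printf("node %d: slots %d-%d\n", i, first, first + count - 1);
            first += count;
        }
    }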