The previous hashing used the trivial algorithm of xoring the integers
together. This is not optimal as different hash table setups are very
likely to hash the same: for instance a hash table at the start of the
rehashing process and at the end will have the same fingerprint.
Now we hash the N integers in a smarter way: each integer is summed to
the previous hash, and the result is passed through an integer hash
function again (see the code for further details). This way a collision
is a lot less likely. Moreover this way of hashing explicitly prevents
the same set of integers in a different order from hashing to the same
number.
This commit is related to issue #1240.
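A minimal sketch of the idea (the mixing steps shown are a generic
64 bit integer mixer, see dict.c for the actual function used):

    long long dictFingerprint(dict *d) {
        long long integers[6], hash = 0;
        int j;

        /* The properties that change when the dict is touched. */
        integers[0] = (long) d->ht[0].table;
        integers[1] = d->ht[0].size;
        integers[2] = d->ht[0].used;
        integers[3] = (long) d->ht[1].table;
        integers[4] = d->ht[1].size;
        integers[5] = d->ht[1].used;

        for (j = 0; j < 6; j++) {
            hash += integers[j];          /* Sum the next integer, then */
            hash ^= hash >> 33;           /* hash the result again with */
            hash *= 0xff51afd7ed558ccdLL; /* a 64 bit mixing step.      */
            hash ^= hash >> 33;
        }
        return hash;
    }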
This commit does mainly two things:
1) It fixes zunionInterGenericCommand() by removing mass-initialization
of all the iterators used, so that we don't violate the unsafe iterator
API of dictionaries. This fixes issue #1240.
2) Since the zui* APIs required the iterator to be initialized in the
zsetopsrc structure even in order to use non-iterator related APIs, this
commit removes that strict requirement by accessing objects directly via
the op->subject->ptr pointer we have to the object.
dict.c allows the user to create unsafe iterators, that is, iterators
that will not touch the dictionary data structure in any way (preventing
copy on write), but at the same time are limited in their usage.
The limitation is that while iterating with an unsafe iterator, no call
to other dictionary functions must be done inside the iteration loop,
otherwise the dictionary may be incrementally rehashed, resulting in
missing elements in the set of elements returned by the iterator.
However after introducing this kind of iterator a number of bugs were
found due to misuse of the API, and we are still finding bugs about this
issue. The bugs are not trivial to track down because the only effect is
missing elements during the iteration.
This commit introduces auto-detection of the API misuse. The idea is
that an unsafe iterator has a contract: from initialization to the
release of the iterator the dictionary should not change.
So we take a fingerprint of the dictionary state, xoring a few
important dict properties when the unsafe iterator is initialized. When
the iterator is released we check if the fingerprint is still the same.
If it is not, we found a misuse of the iterator, as disallowed API calls
changed the internal state of the dictionary.
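A minimal sketch of the contract check (simplified: the real code may
take the fingerprint lazily at the first iteration step):

    dictIterator *dictGetIterator(dict *d) {
        dictIterator *iter = zmalloc(sizeof(*iter));

        iter->d = d;
        iter->safe = 0; /* Unsafe: the contract is "don't touch the dict". */
        iter->table = 0;
        iter->index = -1;
        iter->entry = NULL;
        iter->nextEntry = NULL;
        /* Record the state we expect to find unchanged at release time. */
        iter->fingerprint = dictFingerprint(d);
        return iter;
    }

    void dictReleaseIterator(dictIterator *iter) {
        if (!iter->safe)
            assert(iter->fingerprint == dictFingerprint(iter->d));
        zfree(iter);
    }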
This code was checked against a real bug, issue #1240.
This is what Redis prints (aborting) when a misuse is detected:
Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)),
function dictReleaseIterator, file dict.c, line 587.
The previous code using a static buffer as an optimization was lame:
1) Premature optimization: it was actually *slower* than the naive code
because it resulted in the creation / destruction of the object
encapsulating the output buffer.
2) The code was very hard to test, since specific tests were needed for
command lines exceeding the size of the static buffer.
3) As a result of "2" the code was buggy, as the current tests were not
able to stress specific corner cases.
It was replaced with easy to understand code that is safer and faster.
During the replication full resynchronization process, the RDB file is
transferred from the master to the slave. However there is a short
preamble to send, that is currently just the bulk payload length of the
file in the usual Redis form $..length..<CR><LF>.
This preamble used to be sent with a direct write call, assuming that
there was always room in the socket output buffer to hold the few bytes
needed, however this does not scale in case we'll need to send more
stuff, and is not very robust code in general.
This commit introduces a more general mechanism to send a preamble up to
2GB in size (the max length of an sds string) in a non blocking way.
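A minimal sketch of the mechanism (the replpreamble field name is an
assumption about how the pending preamble is stored):

    /* Inside the non blocking write handler that streams the RDB to the
     * slave: first flush any pending preamble bytes, then the payload. */
    if (slave->replpreamble) {
        ssize_t nwritten = write(fd, slave->replpreamble,
                                 sdslen(slave->replpreamble));
        if (nwritten == -1) return; /* Retry on the next write event. */
        sdsrange(slave->replpreamble, nwritten, -1);
        if (sdslen(slave->replpreamble) == 0) {
            sdsfree(slave->replpreamble);
            slave->replpreamble = NULL;
            /* Fall through and start sending the RDB payload. */
        } else {
            return; /* More preamble left: wait for the next event. */
        }
    }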
Before this commit redis-benchmark supported random arguments in the
form of :rand:000000000000. In every string of that form, the zeros were
replaced with a random number of 12 digits at every command invocation.
However this was far from perfect as it did not allow generating plain
random numbers as arguments: there was always the :rand: prefix.
Now instead every argument in the form __rand_int__ is replaced with a
12 digits number. Note that "__rand_int__" is 12 characters itself.
In order to implement the new semantics, it was necessary to change a
few things in the internals of redis-benchmark: new clients are created
by cloning old clients, so without a stable prefix such as ":rand:" the
old way of cloning could no longer deduce, from the old command line,
the positions of the random strings to substitute.
Now instead a client structure is passed as a reference for cloning, so
that we can directly clone the offsets inside the command line.
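A minimal sketch of the substitution step (the randptr/randlen field
names are assumptions about how the cloned offsets are stored):

    /* Overwrite each recorded __rand_int__ placeholder in the output
     * buffer with a zero padded 12 digit random number. */
    for (j = 0; j < c->randlen; j++) {
        char *p = c->obuf + c->randptr[j] + 11; /* Last digit position. */
        long long r = random(); /* Sketch: range limited by RAND_MAX. */
        int k;

        for (k = 0; k < 12; k++) {
            *p-- = '0' + (r % 10);
            r /= 10;
        }
    }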
Example of the new avg_ttl field in the INFO keyspace section:
db0:keys=221913,expires=221913,avg_ttl=655
The algorithm uses a running average with only two samples (current and
previous). Keys found to be expired are considered at TTL zero even if
the actual TTL can be negative.
The TTL is reported in milliseconds.
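A minimal sketch of the estimation as it could run inside the expire
cycle (the expire_time and avg_ttl names are assumptions):

    /* For each sampled key: two-sample running average, counting
     * already expired keys as TTL zero. */
    long long ttl = expire_time - mstime(); /* TTL in milliseconds. */
    if (ttl < 0) ttl = 0;
    if (db->avg_ttl == 0)
        db->avg_ttl = ttl;
    else
        db->avg_ttl = (db->avg_ttl + ttl) / 2;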
We don't want to repeat a fast cycle too soon: the previous code was
broken, we need to wait two times the period *since* the start of the
previous cycle in order to keep an even spacing between cycles:
.-> start                   .-> second start
|                           |
+-------------+-------------+--------------+
| first cycle |    pause    | second cycle |
+-------------+-------------+--------------+
The first and second start must be PERIOD*2 microseconds apart, hence
the *2 in the new code.
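A minimal sketch of the check (variable and constant names are
assumptions):

    /* Refuse to start a new fast cycle unless at least twice the max
     * fast cycle duration elapsed since the *start* of the previous. */
    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        if (start < last_fast_cycle + ACTIVE_EXPIRE_CYCLE_FAST_DURATION*2)
            return;
        last_fast_cycle = start;
    }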
This commit makes the fast collection cycle time configurable; at the
same time it does not allow a new fast collection cycle to start before
an amount of time equal to the max duration of the fast collection cycle
has elapsed.
The main idea here is that when we are no longer able to expire keys at
the rate they are created, we can't block longer in the normal expire
cycle as this would result in too big latency spikes.
For this reason the commit introduces a "fast" expire cycle that does
not run for more than 1 millisecond but is called in the beforeSleep()
hook of the event loop, so much more often, and with a frequency bound
to the frequency of executed commands.
The fast expire cycle is only called when the standard expiration
algorithm runs out of time, that is, it consumed more than
REDIS_EXPIRELOOKUPS_TIME_PERC of CPU in a given cycle without being able
to bring the number of already expired keys that are not yet collected
below 25% of the number of keys.
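A minimal sketch of the hook (the masterhost guard reflects the fact
that slaves don't expire keys on their own; names are assumptions):

    void beforeSleep(struct aeEventLoop *eventLoop) {
        REDIS_NOTUSED(eventLoop);

        /* Run a fast expire cycle: capped at 1 millisecond, but invoked
         * before every event loop iteration, so very often under load. */
        if (server.masterhost == NULL)
            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_FAST);
    }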
You can test this commit with different loads, but a simple way is to
use the following:
Extreme load with pipelining:
redis-benchmark -r 100000000 -n 100000000 \
-P 32 set ele:rand:000000000000 foo ex 2
Remove the -P 32 in order to avoid the pipelining for a more real-world
load.
In another terminal tab you can monitor the Redis behavior with:
redis-cli -i 0.1 -r -1 info keyspace
and
redis-cli --latency-history
Note: this commit makes Redis print a lot of debug messages, it is not
a good idea to use it in production.
Thanks to @run and @badboy for spotting this.
Trivia: clang was not able to give me a warning about that when
compiling.
This closes #1024 and #1207, committing the change myself as the pull
requests no longer apply cleanly after other changes to the same
function.
Actually the string is modified in-place and a reallocation is never
needed, so there is no need to return the new sds string pointer as the
return value of the function, which is now just "void".
Now that EMBSTR encoding exists we calculate the amount of memory used
by the SDS part of a Redis String object in two different ways:
1) For raw string objects, the size of the allocation is considered.
2) For embstr objects, the length of the string itself is used.
The new function takes care of this logic.
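A minimal sketch of the logic (the function name and the sdsAllocSize()
helper returning the total allocation size are assumptions):

    /* Memory used by the sds part of a string object. */
    size_t getStringObjectSdsUsedMemory(robj *o) {
        switch (o->encoding) {
        case REDIS_ENCODING_RAW:    return sdsAllocSize(o->ptr);
        case REDIS_ENCODING_EMBSTR: return sdslen(o->ptr);
        default:                    return 0; /* int encoding: no sds. */
        }
    }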
This function missed proper handling of reply_bytes when gluing the
reply to the previous object was performed. The issue was introduced
with the new EMBSTR string object encoding.
This fixes issue #1208.
Previously two string encodings were used for string objects:
1) REDIS_ENCODING_RAW: a string object with obj->ptr pointing to an sds
string.
2) REDIS_ENCODING_INT: a string object where the obj->ptr void pointer
is cast to a long.
This commit introduces an experimental new encoding called
REDIS_ENCODING_EMBSTR that implements an object represented by an sds
string that is not modifiable but allocated in the same memory chunk as
the robj structure itself.
The chunk looks like the following:
+--------------+-----------+------------+--------+----+
| robj data... | robj->ptr | sds header | string | \0 |
+--------------+-----+-----+------------+--------+----+
                     |                   ^
                     +-------------------+
The robj->ptr points to the contiguous sds string data, so the object
can be manipulated with the same functions used to manipulate plain
string objects, however we need just one malloc and one free in order to
allocate or release this kind of object. Moreover it has better cache
locality.
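A minimal sketch of the allocation (field initialization reduced to the
essentials):

    robj *createEmbeddedStringObject(char *ptr, size_t len) {
        /* One allocation holds the robj, the sds header and the string. */
        robj *o = zmalloc(sizeof(robj)+sizeof(struct sdshdr)+len+1);
        struct sdshdr *sh = (void*)(o+1);

        o->type = REDIS_STRING;
        o->encoding = REDIS_ENCODING_EMBSTR;
        o->ptr = sh+1; /* Points to the string data after the header. */
        o->refcount = 1;

        sh->len = len;
        sh->free = 0;
        if (ptr) memcpy(sh->buf,ptr,len);
        sh->buf[len] = '\0';
        return o;
    }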
This new allocation strategy should benefit both memory usage and
performance. A performance gain between 60 and 70% was observed in
micro-benchmarks, however there is more work to do to evaluate the
performance impact and the memory usage behavior.
There are systems where printing +/- infinity with printf-family
functions will not produce the usual "inf" / "-inf" strings, but
different ones. Handle that explicitly.
Fixes issue #930.
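A minimal standalone sketch of the idea (function name illustrative):

    #include <math.h>
    #include <string.h>

    /* Emit a fixed representation of infinities instead of trusting
     * whatever string the libc printf family would produce. */
    int infToString(char *buf, size_t buflen, double value) {
        if (!isinf(value)) return 0;
        if (value > 0) {
            if (buflen < 4) return 0;
            memcpy(buf,"inf",4);
            return 3;
        } else {
            if (buflen < 5) return 0;
            memcpy(buf,"-inf",5);
            return 4;
        }
    }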
This fixes issue #1194, which contains many details.
However in short, it was possible for ZADD to not accept score values
that were however possible to obtain with multiple calls to ZINCRBY,
like in the following example:
redis 127.0.0.1:6379> zadd k 2.5e-308 m
(integer) 1
redis 127.0.0.1:6379> zincrby k -2.4e-308 m
"9.9999999999999694e-310"
redis 127.0.0.1:6379> zscore k m
"9.9999999999999694e-310"
redis 127.0.0.1:6379> zadd k 9.9999999999999694e-310 m1
(error) ERR value is not a valid float
The problem was due to strtod() returning ERANGE in the following case
specified by POSIX:
"If the correct value would cause an underflow, a value whose magnitude
is no greater than the smallest normalized positive number in the return
type shall be returned and errno set to [ERANGE].".
Now instead the returned value is accepted even when ERANGE is returned,
as long as the return value of the function is not positive or negative
HUGE_VAL, nor zero.
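A minimal standalone sketch of the new check (parseScore is a
hypothetical name, validation reduced to the relevant part):

    #include <errno.h>
    #include <math.h>
    #include <stdlib.h>

    /* Parse a score, accepting denormals that strtod() flags with
     * ERANGE on underflow. Returns 0 on success, -1 on error. */
    int parseScore(const char *s, double *out) {
        char *eptr;
        double value;

        errno = 0;
        value = strtod(s, &eptr);
        if (eptr == s || eptr[0] != '\0' ||
            (errno == ERANGE &&
             (value == HUGE_VAL || value == -HUGE_VAL || value == 0)) ||
            isnan(value))
            return -1; /* Real overflow, or underflow to zero. */
        *out = value;  /* Denormals like 9.99e-310 are accepted. */
        return 0;
    }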
Note that we only do it when STORE is not used, otherwise we want an
absolutely locale independent and binary safe sorting in order to ensure
AOF / replication consistency.
This is probably an unexpected behavior violating the least surprise
rule, but there is currently no other simple / good alternative.
compareStringObject() was not always giving the same result when
comparing two identical strings encoded differently (as integers or as
sds strings), since it switched to strcmp() when at least one of the
strings was not sds encoded.
For instance the two strings "123" and "123\x00456", where the first
string was integer encoded, would cause the old implementation of
compareStringObject() to return 0, as if the strings were equal, while
instead the second string is "greater" than the first in a binary
comparison.
The same comparison, but with "123" encoded as an sds string, would
instead return a value < 0, as is correct. It is not impossible that the
above caused some obscure bug, since the comparison was not always
deterministic, and compareStringObject() is used in the implementation
of skiplists, hash tables, and so forth.
At the same time, collateStringObject() was introduced by this commit,
so that it can be used by the SORT command to return sorted strings
using collation instead of binary comparison. See next commit.
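A minimal sketch of the binary safe comparison (simplified; ll2string()
renders the integer encoding as a string, sdsEncodedObject() tests for
an sds-backed encoding):

    int compareStringObject(robj *a, robj *b) {
        char bufa[32], bufb[32], *astr, *bstr;
        size_t alen, blen, minlen;
        int cmp;

        if (a == b) return 0;
        if (sdsEncodedObject(a)) {
            astr = a->ptr;
            alen = sdslen(astr);
        } else {
            alen = ll2string(bufa,sizeof(bufa),(long) a->ptr);
            astr = bufa;
        }
        if (sdsEncodedObject(b)) {
            bstr = b->ptr;
            blen = sdslen(bstr);
        } else {
            blen = ll2string(bufb,sizeof(bufb),(long) b->ptr);
            bstr = bufb;
        }
        /* memcmp() instead of strcmp(): binary safe and deterministic
         * even with embedded NUL bytes. */
        minlen = (alen < blen) ? alen : blen;
        cmp = memcmp(astr,bstr,minlen);
        if (cmp == 0) return (int)alen - (int)blen;
        return cmp;
    }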
The function returns a unique identifier for the client, as ip:port for
IPv4 and IPv6 clients, or as path:0 for Unix socket clients.
See the top comment in the function for more info.
This has been done by exposing the anetSockName() function in anet.c
to be used when the sentinel is publishing its existence to the masters.
This implementation is very unintelligent, as it will likely break if
used with IPv6: the nested colons will break any parsing of the PUBLISH
string by the master.
Any places which I feel might want to be updated to work differently
with IPv6 have been marked with a comment starting "IPV6:".
Currently the only comments address places where an IP address is
combined with a port using the standard : separated form. These may want
to be changed when printing IPv6 addresses to wrap the address in []
such as
[2001:db8::c0:ffee]:6379
instead of
2001:db8::c0:ffee:6379
as the latter format is a technically valid IPv6 address and it is hard
to distinguish the IPv6 address component from the port unless you know
the port is supposed to be there.
In two places buffers have been created with a size of 128 bytes which
could be reduced to INET6_ADDRSTRLEN to still hold a full IP address.
These places have been marked as they are presently big enough to handle
the needs of storing a printable IPv6 address.
Changes the sockaddr_in to a sockaddr_storage. Attempts to convert the
IP address into an AF_INET or AF_INET6 before returning an "Invalid IP
address" error. Handles converting the sockaddr from either AF_INET or
AF_INET6 back into a string for storage in the clusterNode ip field.
Change the sockaddr_in to sockaddr_storage which is capable of storing
both AF_INET and AF_INET6 sockets. Uses the sockaddr_storage ss_family
to correctly return the printable IP address and port.
Function makes the assumption that the buffer is of at least
REDIS_CLUSTER_IPLEN bytes in size.
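A minimal standalone sketch of the ss_family dispatch (function name
illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Fill ip/port from a peer address of either family. */
    static void sockaddrToString(struct sockaddr_storage *sa,
                                 char *ip, size_t iplen, int *port) {
        if (sa->ss_family == AF_INET) {
            struct sockaddr_in *s = (struct sockaddr_in *)sa;
            inet_ntop(AF_INET,&s->sin_addr,ip,iplen);
            *port = ntohs(s->sin_port);
        } else {
            struct sockaddr_in6 *s = (struct sockaddr_in6 *)sa;
            inet_ntop(AF_INET6,&s->sin6_addr,ip,iplen);
            *port = ntohs(s->sin6_port);
        }
    }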
Change the sockaddr_in to sockaddr_storage which is capable of storing
both AF_INET and AF_INET6 sockets. Uses the sockaddr_storage ss_family
to correctly return the printable IP address and port.
Change the sockaddr_in to sockaddr_storage which is capable of storing
both AF_INET and AF_INET6 sockets. Uses the sockaddr_storage ss_family
to correctly return the printable IP address and port.
Change the getaddrinfo(3) hints family from AF_INET to AF_UNSPEC to
allow resolution of IPv6 addresses as well as IPv4 addresses. The
function will return the IP address of whichever address family is
preferred by the operating system. Most current operating systems
will prefer AF_INET6 over AF_INET.
Unfortunately without attempting to establish a connection to the
remote address we can't know if the host is capable of using the
returned IP address. It might be desirable to have anetResolve
accept an additional argument specifying the AF_INET/AF_INET6 address
the caller would like to receive. Currently though it does not appear
as though the anetResolve function is ever used within Redis.
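A minimal standalone sketch of the AF_UNSPEC resolution (function name
illustrative, error handling reduced):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Resolve host into a printable IP of whatever family the resolver
     * returns first: AF_UNSPEC accepts both v4 and v6 results. */
    static int resolveToIp(const char *host, char *ipbuf, size_t ipbuf_len) {
        struct addrinfo hints, *info;

        memset(&hints,0,sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host,NULL,&hints,&info) != 0) return -1;
        if (info->ai_family == AF_INET) {
            struct sockaddr_in *sa = (struct sockaddr_in *)info->ai_addr;
            inet_ntop(AF_INET,&sa->sin_addr,ipbuf,ipbuf_len);
        } else {
            struct sockaddr_in6 *sa = (struct sockaddr_in6 *)info->ai_addr;
            inet_ntop(AF_INET6,&sa->sin6_addr,ipbuf,ipbuf_len);
        }
        freeaddrinfo(info);
        return 0;
    }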
Add REDIS_CLUSTER_IPLEN macro to define the size of the clusterNode ip
character array. Additionally use this macro in inet_ntop(3) calls where
the size of the array was being defined manually.
The REDIS_CLUSTER_IPLEN is defined as INET_ADDRSTRLEN which defines the
correct size of a buffer to store an IPv4 address in. The
INET_ADDRSTRLEN macro itself is defined in the <netinet/in.h> header
file and should be portable across the majority of systems.
Using sizeof with an array will only return expected results if the
array is created in the scope of the function where sizeof is used. This
commit changes the inet_ntop calls so that they use the fixed buffer
value as defined in redis.h which is 16.
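A minimal example of the pitfall:

    #include <stddef.h>

    void caller(void) {
        char ip[16];
        size_t a = sizeof(ip); /* 16: array defined in this scope. */
        (void)a;
    }

    void callee(char ip[16]) {
        size_t b = sizeof(ip); /* sizeof(char *), e.g. 8: the array
                                  parameter decayed to a pointer. */
        (void)b;
    }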
Replace inet_ntoa(3) calls with the more future proof inet_ntop(3)
function which is capable of handling additional address families.
API Change: anetTcpAccept() & anetPeerToString() take an additional
argument. The additional argument is required to specify the length of
the character buffer the IP address is written to, in order to comply
with inet_ntop(3) function semantics.
Change anetTcpServer() function to use getaddrinfo(3) to perform
address resolution, socket creation and binding. Resolved addresses
are limited to those reachable by the AF_INET address family.
Change anetTcpGenericConnect() function to use getaddrinfo(3) to
perform address resolution, socket creation and connection. Resolved
addresses are limited to those reachable by the AF_INET family.
Change anetResolve() function to use getaddrinfo(3) to resolve hostnames.
Resolved hostnames are limited to those reachable by the AF_INET address
family.
API Change: anetResolve() requires an additional argument.
The additional argument is required to specify the length of the
character buffer the IP address is written to, in order to comply with
inet_ntop(3) function semantics. inet_ntop(3) replaces inet_ntoa(3)
as it has been designed to be compatible with more address families.
bind() can't be called multiple times against the same socket; multiple
sockets are required to bind multiple interfaces. Silly me.
This reverts commit bd234d62bb.
When in --pipe mode, after all the data transfer to the server is
complete, redis-cli now waits at most the specified number of seconds
(30 by default, use 0 to wait forever) without receiving any reply at
all from the server. After this time limit the operation is aborted
with an error.
That's related to issue #681.
If the protocol read from stdin happened to contain garbage (invalid
random chars), in the previous implementation it was possible to end up
with something like:
dksfjdksjflskfjl*2\r\n$4\r\nECHO....
That is invalid, as the *2 should start on a new line. Now we prefix
the ECHO with a CRLF that has no effect on the server but prevents this
issue most of the time.
Of course if the offending wrong sequence is something like:
$3248772349\r\n
No one is going to save us, as Redis will wait for data in the context
of a big argument, so this fix does not cover all the cases.
This partially fixes issue #681.
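A minimal sketch of the guard (the 20 byte magic payload shown is
illustrative; a random string is used so the final reply can be
recognized):

    /* The leading CRLF terminates any trailing partial inline command
     * introduced by garbage, so the ECHO is parsed on a fresh line. */
    const char *echo =
        "\r\n*2\r\n$4\r\nECHO\r\n$20\r\n01234567890123456789\r\n";

    write(fd, echo, strlen(echo));
    /* Then scan the replies for the 20 byte magic string: once it is
     * seen, every command sent before it was consumed by the server. */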