If a thread unblocks a client blocked in a module command via the
RedisModule_UnblockClient() API, the event loop may not be awakened
until the next timeout of the multiplexing API or the next unrelated
I/O operation on other clients. We actually want the client to be
served ASAP, so a mechanism is needed for the unblocking API to
inform Redis that there is a client to serve ASAP.
This commit fixes the issue using the old pipe trick: when a client
needs to be unblocked, a byte is written to a pipe. When we run the
list of clients blocked in modules, we consume all the bytes written
to the pipe. Writes and reads are performed while holding the mutex,
so no race is possible in which we consume bytes that are actually
related to a wakeup request for a client that should still be put
into the list of clients to unblock.
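A minimal sketch of the mechanism, with hypothetical names (the
actual code lives in module.c and uses the Redis event loop and
client list):

    #include <pthread.h>
    #include <unistd.h>

    int unblock_pipe[2]; /* created once with pipe(2); the event loop
                          * monitors the read end, set non-blocking. */
    pthread_mutex_t unblock_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Thread side: queue the client, then wake up the event loop. */
    void signalClientUnblocked(void) {
        pthread_mutex_lock(&unblock_mutex);
        /* ... append the client to the unblocked clients list ... */
        char byte = 0;
        if (write(unblock_pipe[1], &byte, 1) == -1) {
            /* Pipe full: the event loop will wake up anyway. */
        }
        pthread_mutex_unlock(&unblock_mutex);
    }

    /* Event loop side: drain the pipe while serving the list. */
    void processUnblockedClients(void) {
        pthread_mutex_lock(&unblock_mutex);
        char buf[128];
        while (read(unblock_pipe[0], buf, sizeof(buf)) > 0); /* consume */
        /* ... serve every client in the unblocked clients list ... */
        pthread_mutex_unlock(&unblock_mutex);
    }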
It was verified that after the fix the server handles the blocked
clients with the expected short delay.
Thanks to @dvirsky for realizing there was such a problem and
reporting it.
Testing with the Solaris C compiler (SunOS 5.11 11.2 sun4v sparc
sun4v) there were issues compiling due to atomicvar.h, and running
the tests also failed because of "tail" usage not conforming to the
Solaris tail implementation. This commit fixes both issues.
For performance reasons we use a reduced-rounds variant of
SipHash. This should still provide enough protection, and the
effects on the hash table distribution are nonexistent.
If some real world attack on SipHash 1-2 is ever found we can
trivially switch to something more secure. Anyway it is a
big step forward from MurmurHash, for which it is trivial to
generate *seed independent* colliding keys... The speed
penalty introduced by SipHash 2-4, around 4%, was too big a
price to pay compared to the practical effectiveness of a
HashDoS attack against SipHash 1-2, considering that so far
in Redis history no such incident ever happened, even while
using hash functions that are trivial to collide.
1. Refactor memory overhead computation into a function.
2. Every 10 keys evicted, check directly whether memory usage has
already reached the target value, since otherwise we don't count all
the memory reclaimed by the background thread right now.
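A rough sketch of the second change, using illustrative names rather
than the actual eviction code:

    #include <stddef.h>

    extern size_t zmalloc_used_memory(void); /* Redis memory accounting */
    extern int evictOneKey(void);            /* hypothetical: evicts one
                                              * key, 0 if nothing left. */

    void evictUntilTarget(size_t mem_target) {
        size_t keys_freed = 0;
        while (evictOneKey()) {
            keys_freed++;
            /* Memory reclaimed by the background lazy-free thread is
             * not counted by the loop's own per-key accounting, so
             * every 10 evicted keys check actual usage directly. */
            if (keys_freed % 10 == 0 &&
                zmalloc_used_memory() <= mem_target) break;
        }
    }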
This change attempts to switch to a hash function which mitigates
the effects of the HashDoS attack (a denial of service attack trying
to force data structures into worst case behavior) while at the same
time providing Redis with a hash function that does not expect the
input data to be word aligned, a condition no longer true now that
sds.c strings have a variable length header.
Note that sometimes, even using a hash function for which collisions
cannot be generated without knowing the seed, implementation details
or the exposure of the seed in an indirect way (for example the
ability to add elements to a Set and check the order in which Redis
returns them with SMEMBERS) may make the attacker's life simpler in
the process of trying to guess the correct seed. However the next
step would be to switch to a log(N) data structure when too many
items in a single bucket are detected: this seems like overkill in
the case of Redis.
SPEED REGRESSION TESTS:
In order to verify that switching from MurmurHash to SipHash had
no impact on speed, a set of benchmarks involving fast insertion
of 5 million keys were performed.
The results show Redis with SipHash in high pipelining conditions
to be about 4% slower compared to using the previous hash function.
However this could partially be related to the fact that the current
implementation does not attempt to hash whole words at a time but
reads single bytes, in order to have an output which is endian-neutral
and at the same time works on systems where unaligned memory accesses
are a problem.
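For illustration, this is the kind of endian-neutral, byte-at-a-time
load being described (not the actual siphash.c code):

    #include <stdint.h>

    static uint64_t load64_le(const unsigned char *p) {
        /* Assemble the word byte by byte: the result is the same on
         * big and little endian machines, and no alignment is
         * required. */
        return  (uint64_t)p[0]        | ((uint64_t)p[1] <<  8) |
               ((uint64_t)p[2] << 16) | ((uint64_t)p[3] << 24) |
               ((uint64_t)p[4] << 32) | ((uint64_t)p[5] << 40) |
               ((uint64_t)p[6] << 48) | ((uint64_t)p[7] << 56);
    }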
Further x86 specific optimizations should be tested; the function
could easily reach the same level as MurmurHash2 if a few
optimizations are performed.
GCC will produce certain multi-word load/store instructions that are
trapped by the Linux kernel, since ARMv6 cannot handle them at
unaligned addresses. Better to use the slower but safer
implementation instead of generating the exception, which would be
very slow anyway.
I'm not sure how much testing jemalloc gets on ARM; moreover,
compiling Redis with jemalloc support on not very powerful
devices, like most of the ARMs people will build Redis on, is
extremely slow. It is still possible to enable the jemalloc build
if needed by using "make MALLOC=jemalloc".
However note that on architectures supporting unaligned 64-bit
accesses, memcpy(...,...,8) is likely translated to a simple
word memory move anyway.
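For illustration, a portable unaligned load via memcpy() looks like
this; on such architectures compilers typically reduce it to a
single move instruction:

    #include <stdint.h>
    #include <string.h>

    static uint64_t load64(const void *p) {
        uint64_t v;
        memcpy(&v, p, 8); /* defined behavior at any alignment */
        return v;
    }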
After investigating issue #3796, it was discovered that MIGRATE
could call migrateCloseSocket() after the original MIGRATE c->argv
was already rewritten as a DEL operation. As a result the host/port
passed to migrateCloseSocket() could be anything, often a NULL
pointer that gets dereferenced, crashing the server.
Now the socket is closed earlier, when a socket error occurs at a
stage where no retry will be performed, and before we rewrite the
argument vector. Moreover a check was added so that later, in the
socket_err label, there is no further attempt at closing the
socket if the argument vector was rewritten.
This fix should resolve the bug reported in #3796.
Ziplists had a bug that was discovered while investigating a
different issue, resulting in a corrupted ziplist representation and
a likely segmentation fault and/or data corruption of the last
element of the ziplist, once the ziplist is accessed again.
The bug happens when a specific set of insertions / deletions is
performed so that an entry is encoded to have a "prevlen" field (the
length of the previous entry) of 5 bytes, but with a count that could
be encoded in a "prevlen" field of a single byte. This can happen
when the "cascading update" process called by
ziplistInsert()/ziplistDelete() under certain conditions forces the
prevlen to be bigger than necessary in order to avoid moving too
much data around.
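For reference, a sketch of the prevlen encoding involved (the actual
ziplist.c code also stores the 4-byte length in a fixed endianness):

    #include <stdint.h>
    #include <string.h>

    /* Encode the length of the previous entry at 'p'. Lengths below
     * 254 fit in a single byte; otherwise a 0xFE marker byte is
     * followed by a 4-byte length, for 5 bytes total. */
    static unsigned int encodePrevlen(unsigned char *p, uint32_t len) {
        if (len < 254) {
            p[0] = (unsigned char)len;
            return 1;
        } else {
            p[0] = 254;           /* "big prevlen" marker */
            memcpy(p+1, &len, 4); /* host endianness in this sketch */
            return 5;
        }
    }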
Once such an entry is generated, inserting a very small entry
immediately before it will result in resizing the ziplist to a count
smaller than the current ziplist length (which is a violation: the
insertion code actually expects the ziplist to get bigger). So an FF
byte is written in a misplaced position. Moreover a realloc() is
performed with a count smaller than the current ziplist length, so
the final bytes could be trashed as well.
SECURITY IMPLICATIONS:
Currently it looks like an attacker can only crash a Redis server by
providing specifically chosen commands. However an FF byte is written
and there are other memory operations that depend on a wrong count,
so even if it is not immediately apparent how to mount an attack in
order to execute code remotely, it is not at all impossible that
this could be done. Attacks always get better... and we did not
spend enough time thinking about how to exploit this issue, but
security researchers or malicious attackers could.
This header file is for libs, like ziplist.c, that we want to keep
almost separate from the core. The panic() calls will be easy to
delete in order to use such files outside Redis, but the debugging
info we gain is very valuable compared to simple assertions where it
is not possible to print debugging info.
This is of great interest because it allows us to print debugging
information that can be useful when debugging, as in the following
example:
serverPanic("Unexpected encoding for object %d, %d",
obj->type, obj->encoding);
Don't go over 80 cols. Start with a capital letter, use a capital
letter after a period, end comments with a period, and so forth. No
actual code behavior was touched at all.
There were two cases, outlined in issue #3512 and PR #3551, where
the Geo API returned unexpected results: empty strings where NULL
replies were expected, or a single null reply where an array was
expected. This violates the Redis principle that replies for
non-existing keys or elements should be indistinguishable.
This is technically an API breakage so it will be merged only into
4.0 and noted in the changelog in the list of breaking changes, even
if it is not very likely that actual code will be affected,
hopefully, since with the past behavior one basically had to account
for *both* possibilities, while the new behavior is always one of
the two, but in a consistent way.
You can still force the logo in the normal logs.
For motivations, check issue #3112. For me the reason is that the
logo is nice to have in interactive sessions, but inside the logs it
kinda loses its usefulness, except for the ability of users to
recognize restarts easily: for this reason the new startup sequence
shows a one-liner ASCII "wave" so that there is still a bit of a
visual clue.
Startup logging was modified in order to log events in more obvious
ways, and to log more events. Also, certain important information is
now easier to parse/grep since it is printed in field=value style.
The option --always-show-logo in redis.conf was added, defaulting
to no.
This commit also contains other changes in order to conform the code
to the Redis core style, specifically 80 chars max per line and
short conditionals on the same line:
if (that) do_this();
The new algorithm provides the same speed with a smaller error for
cardinalities in the range 0-100k. Before switching, the new and old
algorithm behavior was studied in detail in the context of
issue #3677. You can find a few graphs and motivations there.
Otherwise, for small cardinalities, the algorithm will output
something like, for example, 4.99 for a cardinality of 5, which will
be converted to 4, producing a huge error.
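A tiny example of the truncation problem being described, assuming
the fix rounds the raw estimate before converting it to an integer:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        double e = 4.99; /* raw estimate for a true cardinality of 5 */
        /* Truncation loses almost a full unit; rounding does not. */
        printf("truncated: %llu\n",
               (unsigned long long)(uint64_t)e);        /* prints 4 */
        printf("rounded:   %llu\n",
               (unsigned long long)(uint64_t)(e+0.5));  /* prints 5 */
        return 0;
    }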
The commit improves ziplistRepr() and adds a new debugging
subcommand so that we can trigger the dump directly from the Redis
API. This capability was used while investigating issue #3684.
After the fix for #3673 the ttl var is always initialized inside the
loop itself, so the early initialization is not needed.
Variable declarations were also moved to a more local scope.
BACKGROUND AND USE CASE
Redis slaves are normally write only, however they support a
"writable" mode which is very handy when scaling reads on slaves
that actually need write operations in order to access data. For
instance, imagine having slaves replicating certain Set keys from
the master. When accessing the data on the slave, we want to perform
intersections between such Set values. However we don't want to
intersect each time: caching the intersection for some time is often
a good idea.
To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have consistent replication, expiring keys in
Redis replication is up to the master, which synthesizes DEL
operations to send in the replication stream. However slaves
logically expire keys by hiding them from read attempts by clients,
so that if the master did not promptly send a DEL, clients still see
logically expired keys as non-existing.
Because slaves don't actively expire keys by actually evicting them,
but just mask them from the POV of read operations, if a key is
created in a writable slave and an expire is set, the key will be
leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since eviction must
be disabled there: it's up to masters, otherwise consistency of
data is lost.
THE FIX
In order to fix the problem, the slave should be able to somehow tag
keys that were created on the slave side and have an expire set.
My solution involved using a unique additional dictionary, created
by the writable slave only if needed. The dictionary is obviously
keyed by the key name that we need to track: all the keys that are
set with an expire directly by a client writing to the slave are
tracked.
The value in the dictionary is a bitmap of all the DBs where such a
key name needs to be tracked, so that we can use a single dictionary
to track keys in all the DBs used by the slave (actually this limits
the solution to the first 64 DBs, but the default with Redis is to
use 16 DBs).
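A minimal sketch of the tracking idea, with hypothetical helper
names (the actual implementation uses the Redis dict API):

    #include <stdint.h>

    /* Hypothetical: map from key name to a 64-bit bitmap of DB IDs. */
    extern void bitmapDictSet(const char *key, uint64_t bitmap);
    extern uint64_t bitmapDictGet(const char *key); /* 0 if missing */

    /* Remember that 'key' in DB 'dbid' got an expire set directly by
     * a client writing to this writable slave. */
    void trackSlaveKeyExpire(const char *key, int dbid) {
        if (dbid >= 64) return; /* only the first 64 DBs are tracked */
        uint64_t bitmap = bitmapDictGet(key);
        bitmap |= (uint64_t)1 << dbid; /* set the bit for this DB */
        bitmapDictSet(key, bitmap);
    }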
This solution adds a small complexity and CPU penalty, which is
actually zero when the feature is not used. The slave-side eviction
is encapsulated in code which is not coupled with the rest of the
Redis core, except for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
Before, if a previous key had a TTL set but the current one didn't,
the TTL was reused and thus resulted in wrong expirations being set.
This behaviour was experienced when `MigrateDefaultPipeline` in
redis-trib was set to >1.
Fixes #3655
A bug was reported in the context of issue #3631. The root cause of
the bug was that certain neighbor boxes were zeroed after the
"inside the bounding box or not" check, simply because the bounding
box computation function was wrong.
A few debugging printouts were enhanced and moved to other parts of
the code. A check to avoid steps=0 was added, but it is unrelated to
this issue and I did not verify whether it was an actual bug in
practice.
This actually includes two changes:
1) Don't send newlines to keep the master-slave link up when the
upstream master is down. Doing this is dangerous because the
sub-slave has often already received half a command of replication
protocol, so it can't receive newlines without desyncing the
replication link, even with the code in place to cancel out the
bytes that PSYNC2 was using. Moreover this is probably also not
needed/sane, because anyway the slave can keep serving requests, and
if it's configured not to serve stale data, it's actually a good
idea to break the link.
2) When a +CONTINUE with a different ID is received, we now break
the connection with the sub-slaves: they need to be notified as
well. This was part of the original specification but for some
reason it was not implemented in the code, and it was later found as
a PSYNC2 bug in the integration testing.
1. The master replication offset was cleared after switching
configuration to some other slave, since it was assumed you can't
PSYNC after a switch. This is not the case anymore, and when we
successfully PSYNC we need to have our offset untouched.
2. Secondary replication ID was not reset to "000..." pattern at
startup.
3. A master in an error state, replying -LOADING or other transient
errors, forced the slave to discard the cached master and do a full
resync. This is now fixed.
4. Better logging of what's happening on failed PSYNCs.
This means that stopping a slave and restarting it will still allow
it to PSYNC with the master. Moreover the master itself will retain
its ID/offset, in case it gets turned into a slave, or in case a
slave will try to PSYNC with it with an exactly updated offset
(otherwise there is no backlog).
This change was possible thanks to PSYNC v2 that makes saving the current
replication state much simpler.
The gist of the changes is that now partial resynchronizations
between slaves and masters (without the need of a full resync with
RDB transfer and so forth) work in a number of cases where it was
impossible in the past. For instance:
1. When a slave is promoted to master, the slaves of the old master
can partially resynchronize with the new master.
2. Chained slaves (slaves of slaves) can be moved to replicate from
other slaves or the master itself, without requiring a full resync.
3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master when it joins
replication again.
In order to obtain this, the following main changes were made:
* Slaves also have a replication backlog, not just masters.
* Same stream replication for all the slaves and sub-slaves. The
replication stream is identical from the top level master to its
slaves, and is also the same from the slaves to their sub-slaves and
so forth. This means that if a slave is later promoted to master, it
has the same replication backlog, and can partially resynchronize
with its slaves (that were previously slaves of the old master).
* A given replication history is no longer identified by the `runid`
of a Redis node. There is instead a `replication ID` which changes
every time the instance has a new history no longer coherent with
the past one. So, for example, slaves publish the same replication
history as their master, however when they are turned into masters
they publish a new replication ID, but still remember the old ID, so
that they are able to partially resynchronize with slaves of the old
master (up to a given offset). See the sketch after this list.
* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.
* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.
* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.
* Slave pings to sub-slaves are now sent in a special form, when the
top-level master is disconnected, so as not to interfere with the
replication stream. We just use out of band "\n" bytes as in other
parts of the Redis protocol.
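The replication ID switch mentioned above could be sketched as
follows, with hypothetical names (the real logic lives in
replication.c):

    #include <string.h>

    #define ID_LEN 41 /* 40 hex chars + null terminator */

    char replid[ID_LEN];            /* current replication ID */
    char replid2[ID_LEN];           /* old ID, still valid for PSYNC */
    long long second_replid_offset; /* ...up to this offset */
    extern long long master_repl_offset;
    extern void changeReplicationId(void); /* new random hex ID */

    /* Called when a slave is promoted to master: keep the old
     * history around so former siblings can still partially
     * resynchronize with us. */
    void shiftReplicationId(void) {
        memcpy(replid2, replid, sizeof(replid));
        second_replid_offset = master_repl_offset + 1;
        changeReplicationId();
    }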
An old design document is available here:
https://gist.github.com/antirez/ae068f95c0d084891305
However the implementation is not identical to the description
because during the work to implement it, different changes were
needed in order to make things work well.
Redis fails to compile on MacOS 10.8.5 with Clang 4, version 421.0.57
(based on LLVM 3.1svn).
When compiling zmalloc.c, we get these warnings:
CC zmalloc.o
zmalloc.c:109:5: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
update_zmalloc_stat_alloc(zmalloc_size(ptr));
^
zmalloc.c:75:9: note: expanded from macro 'update_zmalloc_stat_alloc'
atomicIncr(used_memory,__n,used_memory_mutex); \
^
./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
#define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
^
zmalloc.c:145:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
update_zmalloc_stat_free(oldsize);
^
zmalloc.c:85:9: note: expanded from macro 'update_zmalloc_stat_free'
atomicDecr(used_memory,__n,used_memory_mutex); \
^
./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
#define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
^
zmalloc.c:205:9: warning: implicit declaration of function '__atomic_load_n' is invalid in C99 [-Wimplicit-function-declaration]
atomicGet(used_memory,um,used_memory_mutex);
^
./atomicvar.h:60:14: note: expanded from macro 'atomicGet'
dstvar = __atomic_load_n(&var,__ATOMIC_RELAXED); \
^
3 warnings generated.
Also on lazyfree.c:
CC lazyfree.o
lazyfree.c:68:13: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
atomicIncr(lazyfree_objects,1,lazyfree_objects_mutex);
^
./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
#define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
^
lazyfree.c:111:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
atomicDecr(lazyfree_objects,1,lazyfree_objects_mutex);
^
./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
#define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
^
2 warnings generated.
Then in the linking stage:
LINK redis-server
Undefined symbols for architecture x86_64:
"___atomic_add_fetch", referenced from:
_zmalloc in zmalloc.o
_zcalloc in zmalloc.o
_zrealloc in zmalloc.o
_dbAsyncDelete in lazyfree.o
_emptyDbAsync in lazyfree.o
_slotToKeyFlushAsync in lazyfree.o
"___atomic_load_n", referenced from:
_zmalloc_used_memory in zmalloc.o
_zmalloc_get_fragmentation_ratio in zmalloc.o
"___atomic_sub_fetch", referenced from:
_zrealloc in zmalloc.o
_zfree in zmalloc.o
_lazyfreeFreeObjectFromBioThread in lazyfree.o
_lazyfreeFreeDatabaseFromBioThread in lazyfree.o
_lazyfreeFreeSlotsMapFromBioThread in lazyfree.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [redis-server] Error 1
make: *** [all] Error 2
With this patch the compilation is successful, with no warnings.
Running `make test` we get an almost clean bill of health. Tests
pass with one exception:
[err]: Check for memory leaks (pid 52793) in tests/unit/dump.tcl
[err]: Check for memory leaks (pid 53103) in tests/unit/auth.tcl
[err]: Check for memory leaks (pid 53117) in tests/unit/auth.tcl
[err]: Check for memory leaks (pid 53131) in tests/unit/protocol.tcl
[err]: Check for memory leaks (pid 53145) in tests/unit/protocol.tcl
[ok]: Check for memory leaks (pid 53160)
[err]: Check for memory leaks (pid 53175) in tests/unit/scan.tcl
[ok]: Check for memory leaks (pid 53189)
[err]: Check for memory leaks (pid 53221) in tests/unit/type/incr.tcl
.
.
.
Full debug log (289MB, uncompressed) available at
https://dl.dropboxusercontent.com/u/75548/logs/redis-debug-log-macos-10.8.5.log.xz
Most if not all of the memory leak tests fail. Not sure if this is
related. They are the only ones that fail. I believe they are not
related, but rather that the memory leak detector is not working
properly on 10.8.5.
Signed-off-by: Pedro Melo <melo@simplicidade.org>
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:
SWAPDB 0 1
This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.
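Conceptually the implementation just swaps the contents of the two
DB structures; a simplified sketch follows (the real redisDb struct
has more fields that must be swapped as well):

    /* Hypothetical, simplified structures for illustration. */
    typedef struct { void *dict, *expires; int id; } redisDb;
    extern redisDb *server_db; /* array of databases */

    void dbSwap(int id1, int id2) {
        redisDb aux = server_db[id1];
        /* Swap the hash tables, keeping each db's 'id' unchanged so
         * clients keep their selected DB number. */
        server_db[id1].dict    = server_db[id2].dict;
        server_db[id1].expires = server_db[id2].expires;
        server_db[id2].dict    = aux.dict;
        server_db[id2].expires = aux.expires;
    }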
MOTIVATION AND HISTORY
---
The command was recently requested by Pedro Melo, but it was
suggested in the past multiple times, and always refused by me.
The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.
This is an interesting pattern, but the reason why I always opposed
implementing it was that FLUSHDB was a blocking command in Redis
before the Redis 4.0 improvements. Now we have FLUSHDB ASYNC that
releases the old data in O(1) from the point of view of the client,
reclaiming memory incrementally in a different thread.
At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for the users to
comment on. In case a very compelling argument is made against this
new command, it may be removed.
BEHAVIOR WITH BLOCKING OPERATIONS
---
If a client is blocked on a list in a given DB, after the swap it
will still be blocked on the same DB ID, since this is the most
logical thing to do: if I was blocked waiting for a push to list
"foo", even after the swap I still want an LPUSH to reach the key
"foo" in the same DB in order to unblock.
However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0, then DBs 0
and 1 are swapped with SWAPDB, and DB 1 happened to have a list
called "foo" containing elements. When this happens, this
implementation correctly unblocks the client.
It is possible that there are subtle corner cases that are not covered
in the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.
Tests and documentation are yet to be provided.
It was noted by @dvirsky that it is not possible to use string
functions when writing the AOF file. This is sometimes critical
since the command rewriting may need to be built in the context of
the AOF callback, and without access to the context, given the
limited types that the AOF production functions accept, this can be
an issue.
Moreover there are other needs that we can't anticipate regarding the
ability to use Redis Modules APIs using the context in order to build
representations to emit AOF / RDB.
Because of this, a new API was added that allows the user to get a
temporary context from the IO context. The context, if obtained, is
automatically released when the RDB / AOF callback returns.
Calling the function multiple times to get the context always
returns the same one, since it is invalid to have more than a single
context.
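A usage sketch of the new API from a module's AOF rewrite callback,
assuming it is exposed as RedisModule_GetContextFromIO(); MyType and
the stored integer value are made up for the example:

    #include "redismodule.h"

    /* The data type's aof_rewrite callback. */
    void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key,
                          void *value)
    {
        /* Borrow a context from the IO object: it is released
         * automatically when this callback returns. */
        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);
        long long n = *(long long*)value; /* made-up stored value */
        RedisModuleString *s =
            RedisModule_CreateStringPrintf(ctx, "%lld", n);
        RedisModule_EmitAOF(aof, "SET", "ss", key, s);
    }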