Unlike the BZPOP variants, these functions take a single key. This fixes
an erroneous CROSSSLOT error when passing a count to a cluster-enabled
server.
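A minimal sketch of the difference, assuming these are the new ZPOPMIN/ZPOPMAX commands and their blocking BZPOP counterparts (key names are illustrative):
```bash
redis-cli ZADD myzset 1 a 2 b 3 c
# Non-blocking variant: a single key plus an optional count.
redis-cli ZPOPMIN myzset 2              # pops "a" and "b"
# Blocking variant: one or more keys followed by a timeout, no count.
redis-cli BZPOPMIN myzset otherzset 0
```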
RESTORE now supports (usage sketched below):
1. Setting LRU/LFU
2. Absolute-time TTL
Other related changes:
1. RDB loading will not override LRU bits when the RDB file
does not contain the LRU opcode.
2. RDB loading will not set LRU/LFU bits if the server's
maxmemory-policy does not match.
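A sketch of the new options, assuming the Redis 5 syntax RESTORE key ttl payload [REPLACE] [ABSTTL] [IDLETIME seconds] [FREQ frequency]; the binary DUMP payload is not shell-friendly, so the RESTORE calls are shown as comments:
```bash
redis-cli SET src hello
# payload=$(redis-cli DUMP src)                # binary, shown for illustration only
# Absolute unix-time TTL in milliseconds instead of a relative one:
#   RESTORE dst 2000000000000 <payload> ABSTTL
# Pre-set the LRU idle time (seconds) or the LFU counter of the restored key:
#   RESTORE dst 0 <payload> REPLACE IDLETIME 3600
#   RESTORE dst 0 <payload> REPLACE FREQ 10
```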
This reduces the extra 8 bytes we save before each pointer but, maybe more
importantly, it makes the valgrind runs more similar to our normal runs.
Note: the change to the malloc_stats struct in server.h is to eliminate a name conflict;
structs that are not typedef'd are resolved from a separate name space.
Due to an incorrect forward declaration, not all arguments were provided.
This led to a random value being read from the stack and an incorrect time being returned,
which in this case doesn't matter since nobody uses it.
Basically we cannot be sure that, if a key is expired while writing the
AOF, the main thread will also find the key expired. There are
possible race conditions, like the moment at which "now" is sampled,
and the fact that time may jump backward.
Think about the following:
SET a 5
EXPIRE a 1
AOF rewrite starts after about 1 second. The child process finds the key
expired, while in the main thread an INCR command is called
against the key "a" immediately after the fork, and the scheduler was
faster to give execution time to the main thread, so "a" is not yet
expired there.
The main thread will generate an INCR a command in the AOF log that will
be appended to the rewritten AOF file, but that INCR command will target
a non-existing "a" key, so a new non-volatile key "a" will be created.
Two observations:
A) In theory, by computing "now" before the fork, we should be sure that
if a key is expired at that time, it will still be expired later when the
main thread tries to access it. However this does not take
into account the fact that the computer time may jump backward.
B) Technically we may still make the process safe by using a monotonic
time source.
However there were other similar related bugs, and in general the new
"vision" is that Redis persistence files should represent the memory
state without trying to be too smart: this makes the design more
consistent and bugs less likely to arise from complex interactions; in
the end what needs fixing is the Redis expire process itself, so that
fewer expired keys stay in RAM.
Thanks to Oran Agra and Guy Benoish for writing me an email outlining
this problem, after they conducted a Redis 5 code review.
The old version could not handle the fact that "STREAMS" is a valid key
name for streams. Now we really try to parse the command like the
command implementation would do.
Related to #5028 and #4857.
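To illustrate the ambiguity (a minimal sketch; the key name and IDs are made up), "STREAMS" can itself be a stream key, so key-position parsing must mimic the real command implementation:
```bash
redis-cli XADD STREAMS '*' field value
# The first STREAMS below is the keyword, the second is the key name:
redis-cli XREAD COUNT 2 STREAMS STREAMS 0
```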
The loop allocated a buffer for the right number of key positions, then
overflowed it going past the limit.
Related to #4857 and the cause of the memory violation seen in #5028.
Now a MAXLEN of 0 really does what it means: it will create a zero-entries
stream. This is useful in order to make sure that the behavior
is identical to XTRIM, which must be able to reduce the stream to zero
elements when MAXLEN is given.
Also, MAXLEN with a count < 0 will now return an error.
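A quick sketch of the new behavior (key name is illustrative):
```bash
redis-cli XADD mystream '*' a 1
redis-cli XADD mystream MAXLEN 0 '*' b 2   # append, then trim the stream down to 0 entries
redis-cli XLEN mystream                    # -> 0, same effect as XTRIM mystream MAXLEN 0
redis-cli XADD mystream MAXLEN -1 '*' c 3  # now rejected with an error
```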
The ability of "SENTINEL SET" to change the reconfiguration script at
runtime is a problem even within the Redis security model: any client
inside the network may set any executable to be run once a failover is
triggered.
This option adds protection against this problem: by default the two
SENTINEL SET subcommands modifying script paths are denied. However the
user is still able to revert that using the Sentinel configuration file
in order to allow such a feature.
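A sketch of how the protection is expected to be used; the directive name below is an assumption based on the description, and the master name and script path are made up:
```bash
# sentinel.conf (assumed directive, enabled by default):
#   sentinel deny-scripts-reconfig yes
# With it enabled, runtime script reconfiguration is refused:
redis-cli -p 26379 SENTINEL SET mymaster notification-script /tmp/evil.sh
# Expected: an error, unless deny-scripts-reconfig is explicitly turned off in the config file.
```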
This way we let big-endian systems still load old RDB versions.
However newer versions will be saved and loaded in a way that makes RDB
expires cross-endian again. Thanks to @oranagra for reporting and for
the discussion about this problem, leading to this fix.
Currently it does not look sensible to generate events for stream
consumer group modifications, since they are metadata; however, at least for
key-level events, like the creation or removal of a consumer group, I
added a few events here and there. Later we can evaluate if it makes
sense to add more. From the point of view of WATCH (in Redis transactions)
and signaling the key as modified, it looks like the transaction
should not fail when a stream is modified, so no calls are made to
signalModifiedKey() in consumer-group related functions.
Again thanks to @oranagra. The object idle time sometimes does not fit
into an int: use the native type that the serialization function gets
as an argument, which is uint64_t.
A user with many connections (10 thousand) on a single Redis server
reports in issue #4983 that sometimes Redis is idle because, at the same
time, many clients need to resize their query buffer according to the old
policy.
It looks like this was caused by the fact that we normally allow the query
buffer to grow without problems up to PROTO_MBULK_BIG_ARG,
but when the client is idle we are immediately more strict,
and a query buffer greater than 1024 bytes is already enough to trigger
the resize. So for instance if most of the clients stop at the same time,
this issue should be easily triggered.
This behavior actually looks odd: there should be only one clear limit
after which we say, let's look at this query buffer and check if it's time to
resize it. This commit puts the limit at PROTO_MBULK_BIG_ARG, and the
check is performed both when the current buffer size is too big compared
to the peak usage, and when the client is idle.
Then, when the check is performed, wasting just a few kbytes is
considered enough to proceed with the resize. This should fix the issue.
We unblocked the client too early, when the group name object was no
longer valid in client->bpop, so propagating XCLAIM later in
streamPropagateXCLAIM() dereferenced a field already set to NULL.
Now that we have SETID, the internals of consumer groups should be able
to handle the case of the same message being delivered multiple times just
as a side effect of calling XREADGROUP. Normally this should never
happen, but if the admin manually runs "XGROUP SETID mykey mygroup 0",
messages will get re-delivered to clients waiting for the ">" special
ID. The consumer group internals were not able to handle the case of a
message re-delivered in these circumstances that was already assigned to
another owner.
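A sketch of the re-delivery scenario the internals must now handle (stream, group and consumer names are examples):
```bash
redis-cli XADD mystream '*' msg hello
redis-cli XGROUP CREATE mystream mygroup 0
redis-cli XREADGROUP GROUP mygroup alice COUNT 10 STREAMS mystream '>'  # entry delivered to alice
redis-cli XGROUP SETID mystream mygroup 0                               # admin rewinds the group
redis-cli XREADGROUP GROUP mygroup bob COUNT 10 STREAMS mystream '>'    # same entry re-delivered to bob
```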
and will not be inconsistent after we call DEBUG LOADAOF.
Before this commit, there were two problems (reproduced in the sketch below):
1. When appendonly is set to no and there is no appendonly file,
redis-server will crash if we call DEBUG LOADAOF.
2. When appendonly is set to no and there is an appendonly file,
redis-server will hold different data after loading the appendonly
file.
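A minimal reproduction sketch for the two cases above (data is illustrative):
```bash
redis-cli CONFIG SET appendonly no
redis-cli SET foo bar
# Case 1: no appendonly.aof on disk  -> DEBUG LOADAOF used to crash the server.
# Case 2: a stale appendonly.aof on disk -> DEBUG LOADAOF loaded data inconsistent
#         with the current dataset.
redis-cli DEBUG LOADAOF
```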
The AOF tail of a combined RDB+AOF is based on the premise of applying
the AOF commands to the exact state that there was in the server while
the RDB was persisted. By expiring keys while loading the RDB file, we
change the state, so applying the AOF tail later may change the state.
Test case:
* Time1: SET a 10
* Time2: EXPIREAT a $time5
* Time3: INCR a
* Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11 without an expire time.
* Time5: Restart redis from the RDB+AOF: consistency violation.
Thanks to @soloestoy for providing the patch.
Thanks to @trevor211 for the original issue report and the initial fix.
Check issue #4950 for more info.
See issue #2819 for details. The gist is that when we want to send INFO
because we are over the time, we used to send only INFO commands, no
longer sending PING commands. However if a master fails exactly when we
are about to send an INFO command, the PING time will appear to be zero
because the PONG reply was already received, and we'll fail to send more
PINGs, since we only try to send INFO commands: the failure detection
will be delayed until the connection is closed and re-opened because of
the "long timeout".
This commit changes the logic so that we can send all three kinds of
messages regardless of the fact that we already sent another one in the same
code path. It could happen that we go over the message limit for the
link by a few messages, but this is not significant. However now we'll
not introduce delays in sending commands just because there was
something else to send at the same time.
problems fixed:
* failing to read fragmentation information from jemalloc
* overflow in jemalloc fragmentation hint to the defragger
* test suite not triggering eviction after population
Usually blocking operations make a lot of sense with multiple keys, so
that we can listen to multiple queues (or whatever the app models) with
a single connection. However in the synchronous case it is more useful
to be able to ask for N elements. This is a change that I also wanted to
perform sooner or later in the blocking list variant, but here it is more
natural since there is no reply type difference.
Sometimes it was not released on error, sometimes it was released twice
because the error path expected the "di" var to be NULL if the
iterator was already released. Thanks to @oranagra for pinging me about
potential problems of this kind inside rdb.c.
Implementation notes: as INFO is "already broken", I didn't want to break it further. Instead of computing the server.lua_script dict size on every call, I'm keeping a running sum of the body's length and dict overheads.
This implementation is naive as it **does not** take into consideration dict rehashing, but that inaccuracy pays off in speed ;)
Demo time:
```bash
$ redis-cli info memory | grep "script"
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "" 0 ; redis-cli info memory | grep "script"
(nil)
used_memory_scripts:120
used_memory_scripts_human:120B
number_of_cached_scripts:1
$ redis-cli script flush ; redis-cli info memory | grep "script"
OK
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "return('Hello, Script Cache :)')" 0 ; redis-cli info memory | grep "script"
"Hello, Script Cache :)"
used_memory_scripts:152
used_memory_scripts_human:152B
number_of_cached_scripts:1
$ redis-cli eval "return redis.sha1hex(\"return('Hello, Script Cache :)')\")" 0 ; redis-cli info memory | grep "script"
"1be72729d43da5114929c1260a749073732dc822"
used_memory_scripts:232
used_memory_scripts_human:232B
number_of_cached_scripts:2
$ redis-cli evalsha 1be72729d43da5114929c1260a749073732dc822 0
"Hello, Script Cache :)"
```
- Almost all Cluster Manager related code moved to the same section.
- Many macros converted to functions
- Added various comments
- Little code restyling
This is useful in the reply and timeout callback, if the module wants to
do some cleanup of the blocked client handle that may be stored around
in the module-private data structures.
In some modules it may be useful to have an idea about being near to
OOM. Additionally, an explicit call to get the fill ratio will be
added in the future.
This way it is possible to use conditional compilation to be compatible
with a larger number of Redis versions; however, note that this breaks
binary compatibility, so the module must be compiled with the
redismodule.h file corresponding to the targeted Redis version.
Note that this was an experimental API that can only be enabled with
REDISMODULE_EXPERIMENTAL_API, so it is subject to change until it's
promoted to a stable API. Sorry for the breakage, it is trivial to
resolve btw. This change will not be backported to Redis 4.0.
While this feature is not used by Redis, ae.c implements the ability for
a timer to call a finalizer callback when a timer event is deleted.
This feature was bugged since the start, and because it was never used
we never noticed the problem. However Anthony LaTorre was using the same
library in order to implement a different system: he found a bug that he
describes as follows, and which he fixed with the patch in this commit,
sent to me by private email:
--- Anthony email ---
I've found one bug in the current implementation of the timed events.
It's possible to lose track of a timed event if an event is added in
the finalizerProc of another event.
For example, suppose you start off with three timed events 1, 2, and
3. Then the linked list looks like:
3 -> 2 -> 1
Then, you run processTimeEvents and events 2 and 3 finish, so now the
list looks like:
-1 -> -1 -> 2
Now, on the next iteration of processTimeEvents it starts by deleting
the first event, and suppose this finalizerProc creates a new event,
so that the list looks like this:
4 -> -1 -> 2
On the next iteration of the while loop, when it gets to the second
event, the variable prev is still set to NULL, so that the head of the
event loop after the next event will be set to 2, i.e. after deleting
the next event the event loop will look like:
2
and the event with id 4 will be lost.
I've attached an example program to illustrate the issue. If you run
it you will see that it prints:
```
foo id = 0
spam!
```
But if you uncomment line 29 and run it again it won't print "spam!".
--- End of email ---
Test.c source code is as follows:
#include "ae.h"
#include <stdio.h>
aeEventLoop *el;
int foo(struct aeEventLoop *el, long long id, void *data)
{
printf("foo id = %lld\n", id);
return AE_NOMORE;
}
int spam(struct aeEventLoop *el, long long id, void *data)
{
printf("spam!\n");
return AE_NOMORE;
}
void bar(struct aeEventLoop *el, void *data)
{
aeCreateTimeEvent(el, 0, spam, NULL, NULL);
}
int main(int argc, char **argv)
{
el = aeCreateEventLoop(100);
//aeCreateTimeEvent(el, 0, foo, NULL, NULL);
aeCreateTimeEvent(el, 0, foo, NULL, bar);
aeMain(el);
return 0;
}
Anthony fixed the problem by using a linked list for the list of timers, and
sent me back this patch after he tested the code in production for some time.
The code looks sane to me, so committing it to Redis.
There are too many advantages in doing this: the RDB preamble is faster to
persist, more compact, and much faster to load back. The main issues are
that the code is less tested because this was not the old default (so we
are enabling it for the new 5.0 release), and that the AOF is no longer a
trivially parsable format from now on. However the non-preamble mode
will continue to be supported in the future as well, even if new data
types are added.
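The switch is controlled by a redis.conf directive; the name below (aof-use-rdb-preamble) matches the option introduced in Redis 4.0 and is stated here as an assumption:
```bash
redis-cli CONFIG GET aof-use-rdb-preamble     # now defaults to "yes"
redis-cli CONFIG SET aof-use-rdb-preamble no  # opt back into the plain AOF-only rewrite format
redis-cli BGREWRITEAOF
```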
This should be more than enough: even if, in the case of partial IDs that
are not found, we send all the IDs to the slave/AOF, this is definitely
a corner case with no bad effects other than some wasted space.
This is a big win for caching use cases, since on reloading Redis will
still have some idea about what is worth evicting and what is not.
However this only solves part of the problem, because the information is
only partially propagated to slaves (on write operations). Reads will
not affect the slaves' LFU and LRU counters, so after a failover the eviction
decisions are kinda random until keys start to collect some aging/frequency info.
However, since new slaves are initially populated via RDB file transfer,
this means that if we spin up a new slave from a master, and perform an
immediate manual failover (for instance in order to upgrade the master),
the slave will have eviction information to use for some time.
The LFU/LRU info is persisted only if the maxmemory policy is set to one
of the relevant types, even if no actual "maxmemory" memory limit is
set.
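A small sketch of how to turn persistence of this information on and inspect it (key name illustrative; OBJECT FREQ requires an LFU policy):
```bash
redis-cli CONFIG SET maxmemory-policy allkeys-lfu
redis-cli SET somekey value
redis-cli OBJECT FREQ somekey   # LFU counter that will now also be stored in the RDB
redis-cli BGSAVE                # the snapshot carries the per-key LFU (or LRU idle) info
```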
XINFO is mainly an observability command that will be used more by
humans than computers, and even when used by computers it will be a very
low traffic command. For this reason the format was changed in order to
have field names. They'll consume some bandwidth and CPU cycles, but in
this context this is much better than having to understand what the
numbers in the output array are.
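For example (sub-commands as in Redis 5; the exact reply fields sketched here may differ in detail):
```bash
redis-cli XINFO STREAM mystream   # length, last-generated-id, first-entry, last-entry, ...
redis-cli XINFO GROUPS mystream   # one field-name/value map per consumer group
redis-cli XINFO HELP
```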
Now you can use:
addReplyError("-MYERRORCODE some message");
If the error code is omitted, the behavior is like in the past:
the generic -ERR will be used.
This commit, in some parts derived from PR #3041 which is no longer
possible to merge (because the user deleted the original branch),
implements the ability for slaves to have a special configuration
preventing them from trying to start a failover when the master is failing.
There are multiple reasons for wanting this, and the feature was
requested in issue #3021 some time ago.
The differences between this patch and the original PR are the
following:
1. The flag is saved/loaded on the nodes configuration.
2. The 'myself' node is now flag-aware, the flag is updated as needed
when the configuration is changed via CONFIG SET.
3. The flag name uses NOFAILOVER instead of NO_FAILOVER to be consistent
with existing NOADDR.
4. The redis.conf documentation was rewritten.
Thanks to @deep011 for the original patch.
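A sketch of how the flag is expected to be used; the directive/parameter name (cluster-slave-no-failover) is assumed from the description:
```bash
# redis.conf on the slave:
#   cluster-slave-no-failover yes
redis-cli -p 7001 CONFIG SET cluster-slave-no-failover yes   # runtime toggle, updates the myself flag
redis-cli -p 7001 CLUSTER NODES | grep myself                # the nofailover flag should now be listed
```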
other fixes / improvements:
- Lua script memory isn't taken from zmalloc (it's taken from libc malloc),
so it can cause a high fragmentation ratio to be displayed (which is false)
- there was a problem with "fragmentation" info being calculated from
RSS and used_memory sampled at different times (now sampling them together)
other details:
- adding a few more allocator info fields to the INFO and MEMORY commands
- improve the defrag test to measure defrag latency of big keys
- increasing the accuracy of the defrag test (by looking at real frag info),
this way we can use an even lower threshold and still avoid false positives
- keep the old (total) "fragmentation" field unchanged, but add new ones for specific things
- add these to the MEMORY DOCTOR command
- deduct Lua memory from the RSS in case of a non-jemalloc allocator (one for which we don't have "allocator active/used")
- reduce the sampling rate of the RSS and allocator info
- big keys are not defragged in one go from within the dict scan;
instead they are scanned in parts after the main dict hash bucket is done.
- add a latency monitor sample for defrag
- change the default active-defrag-cycle-min to induce lower latency
- make active defrag start a new scan right away if needed, so it's easier
(for the test suite) to detect when it's done
- make active defrag quit the current cycle after each db / big key
- defrag some non-key long-term global allocations
- some refactoring for smaller functions and more reusable code
- during dict rehashing, one scan iteration of the dict can end up scanning
one bucket in the smaller dict and many, many buckets in the larger dict,
so waiting for 16 scan iterations before checking the time may be much too long.
AE_BARRIER was implemented like this:
- Fire the readable event.
- Do not fire the writable event if the readable one fired.
However this may cause the writable event to never be called if the
readable event is always fired. There is an alternative: we can just
invert the sequence of the calls in case AE_BARRIER is set. This commit
does that.
In case the write handler is already installed, it could happen that we
serve the reply of a query in the same event loop cycle we received it,
preventing beforeSleep() from guaranteeing that we do the AOF fsync
before sending the reply to the client.
The AE_BARRIER mechanism, introduced in a previous commit, prevents this
problem. This commit makes actual use of this new feature to fix the
bug.
Add AE_BARRIER to the writable event so that slaves requesting
votes can't be served before we re-enter the event loop in the next
iteration, so clusterBeforeSleep() will fsync to disk in time.
Also add a call to explicitly fsync, given that we modified the last
vote epoch variable.
AOF fsync=always, and certain Redis Cluster bus operations, require
fsyncing data to disk before replying with an acknowledgment.
In such cases, in order to implement Group Commits, we want to be sure
that queries read in a given cycle of the event loop are never
served to clients in the same event loop iteration. This way, by using
the event loop "before sleep" callback, we can fsync the information
just one time before returning into the event loop for the next cycle.
This is much more efficient than calling fsync() multiple times.
Unfortunately, because of a bug, this was not always guaranteed: the
order was controlled solely by the way the events happened to be
installed. Normally this problem is hard to trigger when AOF is enabled
with fsync=always, because we try to flush the output buffers to the
socket directly in the beforeSleep() function of Redis. However if the
output buffers are full, we actually install a write event, and in such
a case this bug could happen.
This change to ae.c modifies the event loop implementation to make this
concept explicit. Write events that are registered with:
AE_WRITABLE|AE_BARRIER
are guaranteed to never fire after the readable event was fired for the
same file descriptor. In this way we are sure that data is persisted to
disk before the client performing the operation receives the
acknowledgment.
However, note that this semantic does not provide all the guarantees
that one may believe are automatically provided. Take the example of the
blocking list operations in Redis.
With AOF and fsync=always we could have:
Client A doing: BLPOP myqueue 0
Client B doing: RPUSH myqueue a b c
In this scenario, Client A will get the "a" element immediately after
Client B's RPUSH is executed, even before the operation is persisted.
However when Client B gets the acknowledgment, it can be sure that
"b,c" are already safe on disk inside the list.
What to note here is that Client A receiving the element cannot be taken
as a guarantee that the operation succeeded from the point of view of
Client B.
This is due to the fact that the barrier exists within the same socket,
and not between different sockets. However in the case above, the
element "a" was not going to be persisted regardless, so it is a pretty
synthetic argument.
This commit adds two new fields in the INFO output, stats section:
expired_stale_perc:0.34
expired_time_cap_reached_count:58
The first field is an estimate of the percentage of keys that are still in
memory but are already logically expired. The reason why those keys are
not yet reclaimed is that the active expire cycle can't spend more
time on the process of reclaiming them, and at the same time nobody
is accessing such keys. However, as the active expire cycle runs, while
it will eventually have to return to the caller because of the time limit
or because there are fewer than 25% of keys logically expired in each
given database, it collects the stats in order to populate this INFO
field.
Note that expired_stale_perc is a running average, where the current
sample accounts for 5% and the history for 95%, so you'll see it
changing smoothly over time.
The other field, expired_time_cap_reached_count, counts the number
of times the expire cycle had to stop because of the time limit, even if
it was still finding a sizeable number of keys yet to expire.
This allows people handling operations to understand whether the Redis
server, during mass-expiration events, is usually able to collect keys
fast enough. It is normal for this field to increment during mass
expires, but normally it should very rarely increment. When it
constantly increments instead, it means that the current workload is using
a very significant percentage of CPU time to expire keys.
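To keep an eye on these in production, something as simple as the following works (a usage sketch):
```bash
redis-cli INFO stats | grep -E 'expired_(stale_perc|time_cap_reached_count)'
```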
This feature was created thanks to the hints of Rashmi Ramesh and
Bart Robinson from Twitter. In private email exchanges, they noted how
it was important to improve the observability of this parameter in the
Redis server. Actually in big deployments, the amount of keys that are
yet to expire in each server, even if they are logically expired, may
account for a very big amount of wasted memory.