The slave sends \n keepalive messages to the master while parsing the RDB,
and later sends REPLCONF ACK once a second. Rarely, the master receives both
a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n...,
and it tries to trim two chars (\r\n) from the query buffer,
trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n...
The master then tries to process a command starting with '3' and replies to
the slave with a bunch of -ERR and one +OK.
Although the slave silently ignores these (it only prints a log message), this
corrupts the replication offset at the slave, since the slave increases its
replication offset while the master does not.
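A minimal sketch of the kind of fix applied in processInlineBuffer (a fragment
assuming the inline parser in networking.c; the real code may differ in
details): only strip the line-terminator bytes that are actually present,
instead of always assuming a trailing \r\n pair.

```c
/* Fragment: count how many line-terminator bytes really precede the
 * command before trimming them from the query buffer, so a lone '\n'
 * keepalive never eats the first byte of the next command. */
char *newline = strchr(c->querybuf,'\n');
int linefeed_chars = 1;

if (newline == NULL) return C_ERR;           /* Wait for more data. */
if (newline != c->querybuf && *(newline-1) == '\r') {
    newline--;                               /* Handle the \r\n case. */
    linefeed_chars++;
}
size_t querylen = newline - c->querybuf;
/* ... split the inline command into arguments here ... */
sdsrange(c->querybuf,querylen+linefeed_chars,-1);
```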
Other than the fix in processInlineBuffer (sketched above), I made several
other improvements while hunting this very rare bug:
- When Redis replies with "unknown command" it now includes a portion of the
arguments, not just the command name, so it is easier to understand what was
received. In my case, on the slave side, it was -ERR, but the "arguments"
were the interesting part (containing info on the error).
- About a year ago I added code in addReplyErrorLength to print the error to
the log in case of a reply to the master (since this string isn't actually
transmitted to the master); I now changed that block to print a similar log
message to indicate an error being sent from the master to the slave.
Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received,
so this will not cause any harm for REPLCONF, and will only indicate problems
that are going to corrupt the replication stream anyway.
- In two places c->reply was emptied, and I wanted to also reset sentlen.
This is a precaution (I did not actually see such a problem), since a
non-zero sentlen would cause corruption to be transmitted on the socket
(sketched below).
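A minimal sketch of that precaution, assuming the usual client fields from
server.h:

```c
/* Fragment: wherever the pending reply list is dropped, also clear the
 * counters that track how much of it was already written, so a stale
 * offset is never applied to data queued later. */
listEmpty(c->reply);      /* Drop all pending reply nodes. */
c->reply_bytes = 0;       /* Nothing is queued anymore. */
c->sentlen = 0;           /* Nothing is partially written anymore. */
```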
Reading the PR gave me the opportunity to better specify what the code
was doing in places where I was not immediately sure about what was
going on. Moreover I documented the structure in server.h so that people
reading the header file will immediately understand what the structure
is useful for.
A) Slave buffers didn't count internal fragmentation and sds unused space;
this caused them to induce eviction although we didn't mean for it.
B) Slave buffers were consuming about twice the memory they actually needed.
- This was mainly due to sdsMakeRoomFor growing the allocation to twice as
much as needed each time, while networking.c would not store more than 16k
per node (partially fixed recently in 237a38737).
- Besides, it wasn't able to store half of the new string in one buffer and
the other half in the next (so the above mentioned fix helped mainly for
small items).
- Lastly, the sds buffers had up to 30% internal fragmentation that was
wasted: consumed but not used.
C) Inefficient performance due to starting from a small string and
reallocating many times.
What I changed:
- Creating dedicated buffers for the reply list, counting their size with
zmalloc_size (a sketch of such a node follows below).
- When creating a new reply node, preallocate it to at least 16k.
- When appending a new reply to the buffer, first fill all the unused space
of the previous node before starting a new one.
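A rough sketch of such a dedicated reply node (the actual struct in server.h
may differ in details):

```c
/* Sketch: a reply node that owns its buffer, so its true allocation size
 * can be measured with zmalloc_size() and any unused tail space can be
 * filled by the next reply before a new node is created. */
typedef struct clientReplyBlock {
    size_t size;   /* Usable bytes in buf, derived from zmalloc_size(). */
    size_t used;   /* Bytes already filled with reply data. */
    char buf[];    /* Preallocated to at least 16k on creation. */
} clientReplyBlock;
```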
Other changes:
- Expose the mem_not_counted_for_evict INFO field for the benefit of the
test suite.
- Add a test to make sure slave buffers are counted correctly and that they
don't cause eviction.
We don't want to increment the deliveries counter here, because the sysadmin
reset the consumer group, so the desire is likely to restart processing,
and having the PEL polluted with old information is not useful and
probably confusing.
Related to #5111.
To simplify the semantics of blocking for a group, this commit changes
the implementation to better match the description we provide of
consumer groups: blocking for > will make the consumer wait for new
elements arriving in the group, while blocking for any other ID will always
serve the local history of the consumer.
However it must be noted that the > ID is actually an alias for the
special ID ms/seq of UINT64_MAX,UINT64_MAX.
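For reference, a sketch of how that special ID can be expressed with the
streamID type from stream.h:

```c
/* Sketch: the ">" special ID maps to the greatest possible stream ID,
 * so only entries appended after the consumer blocked can be greater
 * than it and therefore get served as "new" elements. */
streamID id;
id.ms = UINT64_MAX;   /* Milliseconds part of the ID. */
id.seq = UINT64_MAX;  /* Sequence part of the ID. */
```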
Detecting when the group (or the whole key) is destroyed, in order to send an
error to the consumers blocked in that group, is a problem, so we leave
the consumers listening: the sysadmin is free to create or destroy
groups assuming she/he knows what to do. However a client may be blocked
in a given consumer group that is later destroyed, and then the stream
receives new elements. In that case there is no sane way to serve
the consumer other than to report an error about the group no longer
existing.
More about detecting this synchronously and why it is not done:
1. Normally we don't do that; we leave clients blocked for other data
types such as lists.
2. When we free a stream object there is no longer any information about
the key it was associated with, so while destroying the consumer groups
we miss the info needed to unblock the clients at that moment.
3. Objects can be reclaimed in other threads where it is no longer safe
to perform client operations.
When a client blocks for a consumer group, we don't know the actual ID
we want to be served: other clients blocked in the same consumer group
may be served first, so the consumer group's last delivered ID changes.
This was not handled correctly: all the clients in the consumer group
but the first were unblocked without data.
With such information we will be able to use a private localtime()
implementation in serverLog(), which does not use any locking and is both
thread and fork() safe.
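A minimal, self-contained sketch of the idea (the names here are illustrative,
not the actual Redis ones): capture the libc timezone state once, from the
main thread, so later conversions do not need to call localtime() at all.

```c
#include <time.h>

static long cached_timezone;    /* Seconds west of UTC, from tzset(). */
static int cached_dst_active;   /* Whether DST was active when cached. */

/* Call once at startup, from the main thread, before any fork(). */
static void cacheTimezoneInfo(void) {
    time_t t = time(NULL);
    struct tm tm;

    tzset();                        /* Populates the global 'timezone'. */
    cached_timezone = timezone;
    localtime_r(&t, &tm);           /* Safe here: single thread, no fork. */
    cached_dst_active = tm.tm_isdst > 0;
}
```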
PR #5081 fixes an "interesting" bug about Redis Cluster failover, but in
general about the updating of repl_down_since, which is used in order to
count the time a slave was left disconnected from its master.
While the fix provided resolves the specific issue, in general the
validity of repl_down_since is limited to states other than CONNECTED,
and the disconnected time is set when the state is DISCONNECTED. However,
from CONNECTED to other states, the state machine must always go through
DISCONNECTED first. So it makes sense to set the field to zero (since it is
meaningless in that context) when the state is set to CONNECTED.
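A sketch of the resulting rule (a fragment assuming the usual fields of the
server struct):

```c
/* Fragment: the moment the replication state reaches CONNECTED, the
 * "down since" timestamp is meaningless, so clear it instead of leaving
 * a stale value from the previous disconnection around. */
server.repl_state = REPL_STATE_CONNECTED;
server.repl_down_since = 0;   /* Only meaningful while disconnected. */
```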
Instead of telling the user to set the renamed command to "" to remove
the renaming, do the obvious thing when a command is renamed to itself.
So if I want to remove the renaming of PING, I just rename it to PING
again.
Unlike the BZPOP variants, these functions take a single key. This fixes
an erroneous CROSSSLOT error when passing a count to a cluster enabled
server.
RESTORE now supports:
1. Setting LRU/LFU
2. Absolute-time TTL
Other related changes:
1. RDB loading will not override LRU bits when the RDB file
does not contain the LRU opcode.
2. RDB loading will not set LRU/LFU bits if the server's
maxmemory-policy does not match.
This removes the extra 8 bytes we used to store before each pointer.
But maybe more importantly, it makes the valgrind runs more similar
to our normal runs.
Note: the change to the malloc_stats struct in server.h is to eliminate a
name conflict; structs that are not typedefed are resolved from a separate
namespace.
Due to an incorrect forward declaration, the call didn't provide all of its
arguments. This led to a random value being read from the stack and an
incorrect time being returned, which in this case doesn't matter since no one
uses it.
Basically we cannot be sure that if the key is expired while writing the
AOF, the main thread will surely find the key expired. There are
possible race conditions like the moment at which the "now" is sampled,
and the fact that time may jump backward.
Think about the following:
SET a 5
EXPIRE a 1
AOF rewrite starts after about 1 second. The child process finds the key
expired, while in the main thread an INCR command is instead called
against the key "a" immediately after the fork, and the scheduler was
faster to give execution time to the main thread, so "a" is not yet
expired.
The main thread will generate an INCR a command in the AOF log that will
be appended to the rewritten AOF file, but that INCR command will target
a non-existing "a" key, so a new non-volatile key "a" will be created.
Two observations:
A) In theory, by computing "now" before the fork, we should be sure that
if a key is expired at that time, it will be expired later when the
main thread tries to access such a key. However this does not take
into account the fact that the computer time may jump backward.
B) Technically we may still make the process safe by using a monotonic
time source.
However there were other similar related bugs, and in general the new
"vision" is that Redis persistence files should represent the memory
state without trying to be too smart: this makes the design more
consistent, bugs less likely to arise from complex interactions, and in
the end what has to be fixed is the Redis expire process, so that fewer
expired keys remain in RAM.
Thanks to Oran Agra and Guy Benoish for writing me an email outlining
this problem, after they conducted a Redis 5 code review.
The old version could not handle the fact that "STREAMS" is a valid key
name for streams. Now we really try to parse the command like the
command implementation would do.
Related to #5028 and #4857.
The loop allocated a buffer for the right number of key positions, then
overflowed it by going past the limit.
Related to #4857 and cause of the memory violation seen in #5028.
Now a MAXLEN of 0 really does what it means: it will create a stream with
zero entries. This is useful in order to make sure that the behavior
is identical to XTRIM, which must be able to reduce the stream to zero
elements when MAXLEN is given.
Also now MAXLEN with a count < 0 will return an error.
The ability of "SENTINEL SET" to change the reconfiguration script at
runtime is a problem even in the security model of Redis: any client
inside the network may set any executable to be run once a failover is
triggered.
This option adds protection against this problem: by default the two
SENTINEL SET subcommands modifying script paths are denied. However the
user is still able to revert that using the Sentinel configuration file
in order to allow such a feature.
This way we let big endian systems still load old RDB versions.
However newer versions will be saved and loaded in a way that makes RDB
expires cross-endian again. Thanks to @oranagra for the report and
the discussion about this problem, leading to this fix.
Currently it does not look sensible to generate events for stream
consumer group modifications, since they are metadata; however, at least for
key-level events, like the creation or removal of a consumer group, I
added a few events here and there. Later we can evaluate if it makes
sense to add more. From the POV of WATCH (in Redis transactions)
and signaling the key as modified, instead, it looks like the transaction
should not fail when a stream is modified, so no calls to
signalModifiedKey() are made in consumer-group-related functions.
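A sketch of the kind of event fired (a fragment; the exact event names in the
final code may differ):

```c
/* Fragment: fire a key-level keyspace notification when a consumer group
 * is created for a stream key; similar calls can cover group removal. */
notifyKeyspaceEvent(NOTIFY_STREAM, "xgroup-create",
                    c->argv[2], c->db->id);
```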
Again thanks to @oranagra. The object idle time sometimes does not fit into
an int: use the native type that the serialization function takes
as an argument, which is uint64_t.
A user with many connections (10 thousand) on a single Redis server
reports in issue #4983 that sometimes Redis is idle because at the same
time many clients need to resize their query buffer according to the old
policy.
It looks like this was caused by the fact that we normally allow the query
buffer to grow without problems up to PROTO_MBULK_BIG_ARG,
but when the client is idle we are immediately more strict,
and a query buffer greater than 1024 bytes is already enough to trigger
the resize. So for instance if most of the clients stop at the same time
this issue is easily triggered.
This behavior actually looks odd, and there should be only one clear limit
after which we say: let's look at this query buffer to check if it's time to
resize it. This commit puts the limit at PROTO_MBULK_BIG_ARG, and the
check is performed either if, compared to the peak usage, the current usage
is too big, or if the client is idle.
Then, when the check is performed, wasting just a few kbytes is
considered enough to proceed with the resize. This should fix the issue.
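A sketch of the resulting check in the clients cron (a fragment; the exact
thresholds may differ slightly from the final code):

```c
/* Fragment: only buffers larger than PROTO_MBULK_BIG_ARG are candidates
 * for resizing, and only when they are much bigger than the recent peak
 * or the client has been idle for a while; even then, shrink only if at
 * least a few kbytes would actually be reclaimed. */
size_t querybuf_size = sdsAllocSize(c->querybuf);
time_t idletime = server.unixtime - c->lastinteraction;

if (querybuf_size > PROTO_MBULK_BIG_ARG &&
    ((querybuf_size / (c->querybuf_peak + 1)) > 2 || idletime > 2))
{
    if (sdsavail(c->querybuf) > 1024*4)
        c->querybuf = sdsRemoveFreeSpace(c->querybuf);
}
```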
We unblocked the client too early, when the group name object was no
longer valid in client->bpop, so propagating XCLAIM later in
streamPropagateXCLAIM() dereferenced a field already set to NULL.
Now that we have SETID, the internals of consumer groups should be able
to handle the case of the same message being delivered multiple times just
as a side effect of calling XREADGROUP. Normally this should never
happen, but if the admin manually runs "XGROUP SETID mykey mygroup 0",
messages will get re-delivered to clients waiting for the ">" special
ID. The consumer group internals were not able to handle the case of a
message re-delivered in these circumstances that was already assigned to
another owner.
and will not be inconsistent after we call DEBUG LOADAOF.
Before this commit, there were two problems:
1. When appendonly is set to no and there is no appendonly file,
redis-server will crash if we call DEBUG LOADAOF.
2. When appendonly is set to no and there is an appendonly file,
redis-server will hold different data after loading the appendonly
file.
The AOF tail of a combined RDB+AOF is based on the premise of applying
the AOF commands to the exact state the server had while
the RDB was persisted. By expiring keys while loading the RDB file, we
change that state, so applying the AOF tail later may produce a different
final state.
Test case:
* Time1: SET a 10
* Time2: EXPIREAT a $time5
* Time3: INCR a
* Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11, without an expire time.
* Time5: Restart redis from the RDB+AOF: consistency violation.
Thanks to @soloestoy for providing the patch.
Thanks to @trevor211 for the original issue report and the initial fix.
Check issue #4950 for more info.
See issue #2819 for details. The gist is that when we want to send INFO
because the INFO period elapsed, we used to send only INFO commands, no
longer sending PING commands. However if a master fails exactly when we
are about to send an INFO command, the PING time will result in zero
because the PONG reply was already received, and we'll fail to send more
PINGs, since we only try to send INFO commands: the failure detector
will be delayed until the connection is closed and re-opened because of the
"long timeout".
This commit changes the logic so that we can send the three kinds of
messages regardless of the fact that we already sent another one in the same
code path. It may happen that we go over the message limit for the
link by a few messages, but this is not significant. However now we'll
not introduce delays in sending commands just because there was
something else to send at the same time.
problems fixed:
* failing to read fragmentation information from jemalloc
* overflow in jemalloc fragmentation hint to the defragger
* test suite not triggering eviction after population
Usually blocking operations make a lot of sense with multiple keys so
that we can listen to multiple queues (or whatever the app models) with
a single connection. However in the synchronous case it is more useful
to be able to ask for N elements. This is a change that I also wanted to
perform sooner or later in the blocking list variant, but here it is more
natural since there is no reply type difference.
Sometimes it was not released on error, sometimes it was released twice
because the error path expected the "di" var to be NULL if the
iterator was already released. Thanks to @oranagra for pinging me about
potential problems of this kind inside rdb.c.
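A sketch of the convention now followed in those code paths (a fragment, with
illustrative structure):

```c
/* Fragment: release the iterator on every exit path and set the variable
 * to NULL right away, so the shared error path only releases it when it
 * is still live and can never free it twice. */
dictIterator *di = dictGetIterator(d);
/* ... iterate, jumping to the error path on write errors ... */
dictReleaseIterator(di);
di = NULL;                     /* Already released: error path skips it. */

/* Shared error path: */
if (di) dictReleaseIterator(di);
```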
Implementation notes: as INFO is "already broken", I didn't want to break it further. Instead of computing the server.lua_script dict size on every call, I'm keeping a running sum of the body's length and dict overheads.
This implementation is naive as it **does not** take into consideration dict rehashing, but that inaccuracy pays off in speed ;)
Demo time:
```bash
$ redis-cli info memory | grep "script"
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "" 0 ; redis-cli info memory | grep "script"
(nil)
used_memory_scripts:120
used_memory_scripts_human:120B
number_of_cached_scripts:1
$ redis-cli script flush ; redis-cli info memory | grep "script"
OK
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "return('Hello, Script Cache :)')" 0 ; redis-cli info memory | grep "script"
"Hello, Script Cache :)"
used_memory_scripts:152
used_memory_scripts_human:152B
number_of_cached_scripts:1
$ redis-cli eval "return redis.sha1hex(\"return('Hello, Script Cache :)')\")" 0 ; redis-cli info memory | grep "script"
"1be72729d43da5114929c1260a749073732dc822"
used_memory_scripts:232
used_memory_scripts_human:232B
number_of_cached_scripts:2
$ redis-cli evalsha 1be72729d43da5114929c1260a749073732dc822 0
"Hello, Script Cache :)"
```
- Almost all Cluster Manager related code moved to
  the same section.
- Many macros converted to functions.
- Added various comments.
- A little code restyling.
This is useful in the reply and timeout callback, if the module wants to
do some cleanup of the blocked client handle that may be stored around
in the module-private data structures.
In some modules it may be useful to have an idea of whether we are near
OOM. Additionally, an explicit call to get the fill ratio will be
added in the future.
This way it is possible to use conditional compilation in order to be
compatible with a wider range of Redis versions; however, note that this
breaks binary compatibility, so the module must be compiled with the
corresponding redismodule.h file depending on the version of Redis
targeted.
Note that this was an experimental API that can only be enabled with
REDISMODULE_EXPERIMENTAL_API, so it is subject to change until it is
promoted to a stable API. Sorry for the breakage; it is trivial to
resolve, by the way. This change will not be back ported to Redis 4.0.