Rather than blindly evicting until the maxmemory limit is reached, this
update adds a time limit to eviction. While over the maxmemory limit,
eviction runs before each command AND as a timeProc when no
commands are running.
This reduces the latency impact in many cases, especially pathological
cases like a massive used-memory increase during dict rehashing.
There is a risk that some other edge cases (like massive pipelined use
of MGET) could cause Redis memory usage to keep growing despite the
eviction attempts, so a new maxmemory-eviction-tenacity config is
introduced to let users mitigate that.
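As a rough illustration (not the actual Redis implementation), a time-budgeted eviction step could look like the sketch below; over_memory_limit(), evict_one_key() and the budget value are hypothetical stand-ins for the real machinery.

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical hooks standing in for the real eviction machinery. */
extern int  over_memory_limit(void);   /* 1 while used memory > maxmemory */
extern void evict_one_key(void);       /* evict a single key              */

static uint64_t now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000ULL + (uint64_t)ts.tv_nsec / 1000;
}

/* Evict until we are under the limit OR the time budget runs out; a higher
 * eviction "tenacity" would translate into a larger budget_us. */
void eviction_time_limited_step(uint64_t budget_us) {
    uint64_t deadline = now_us() + budget_us;
    while (over_memory_limit()) {
        evict_one_key();
        if (now_us() > deadline) break;   /* resume before the next command
                                           * or from the timer proc */
    }
}
```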
This commit makes stream objects return "stream" as the encoding type in the OBJECT ENCODING subcommand and the DEBUG OBJECT command.
Until now, they would return "unknown".
This is a catch-all test to confirm that the rewrite produces a valid
output for all parameters and that this process does not introduce
undesired configuration changes.
Save parameters should either be the defaults or whatever is specified in the
config file. This fixes an issue introduced in #7092 which causes
configuration file settings to be applied on top of the defaults.
Improve RM_Call inline documentation about the fmt argument
so that we don't completely depend on the web docs.
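For context, a hedged example of how the fmt argument is typically used from inside a module command; the command handler and its error message are hypothetical, only the RedisModule_* calls are real API.

```c
/* Illustrative only: "s" passes a RedisModuleString argument and "!" asks
 * Redis to replicate the call. */
#include "redismodule.h"

int SetCached_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);
    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, "SET", "!ss", argv[1], argv[2]);
    if (reply == NULL) return RedisModule_ReplyWithError(ctx, "ERR SET failed");
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}
```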
Co-authored-by: Oran Agra <oran@redislabs.com>
There was a bug: although cluster replicas would allow read commands,
they would not allow a MULTI-EXEC composed solely of read commands.
Adds tests for coverage.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Eran Liberty <eranl@amazon.com>
Starting with Redis 6.0 and the changes we made to the diskless master to
make it suitable for TLS, I made the master avoid reaping (wait3) the pid of the
child until we know all replicas are done reading their rdb.
I did that in order to avoid a state where the rdb_child_pid is -1 but
we don't yet want to start another fork (still busy serving that data to
replicas).
It turns out that the solution used so far was problematic in case the
fork child was being killed (e.g. by the kernel OOM killer): in that
case there's a chance that we have currently disabled the read event on the
rdb pipe, since we're waiting for a replica to become writable again.
In that scenario the master would never realize the child
exited, and the replica would remain hung too.
Note that there's no mechanism to detect a hung replica while it's in
rdb transfer state.
The solution here is to add another pipe which is used by the parent to
tell the child it is safe to exit. This means that when the child exits,
for whatever reason, it is safe to reap it.
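A minimal, generic sketch of the pattern described above (the child blocks on a pipe whose write end the parent keeps open for as long as it still needs the child); the helper name is illustrative and this is not the actual Redis code.

```c
#include <errno.h>
#include <unistd.h>

/* The parent creates the pipe before fork() and keeps the write end open
 * while it still needs the child around. The child, once its work is done,
 * blocks here; read() returns 0 (EOF) when the parent closes the write end
 * (or exits), and only then does the child exit, so the parent can always
 * safely reap it. */
static void wait_for_exit_permission(int read_fd) {
    char buf;
    ssize_t n;
    do {
        n = read(read_fd, &buf, 1);
    } while (n < 0 && errno == EINTR);   /* retry if interrupted by a signal */
}
```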
Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was
part of #6271 (Accelerate diskless master connections) but was dropped
when that PR was rebased after the TLS fork/pipe changes (5a47794).
Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK
has a chance to detect that the child exited, it should be the one to call
it so that we don't have to wait for cron (server.hz) to do that.
Fix issues with writeConn() which resulted in corruption of the stream by leaving an extra byte in the buffer. The trigger for this is partial writes or write errors, which were not experienced on Linux but were reported on macOS.
During long running scripts or loading RDB/AOF, we may need to do some
defragging. Since processEventsWhileBlocked is called periodically at
unknown intervals, and many cron jobs either depend on run_with_period
(including active defrag), or rely on being called at server.hz rate
(i.e. active defrag knows how much time to run by looking at server.hz),
the whileBlockedCron may have to run a loop triggering the cron jobs in it
(currently only active defrag) several times.
Other changes:
- Adding a test for defrag during aof loading.
- Changing key-load-delay config to take negative values for fractions
of a microsecond sleep
DEBUG ZIPLIST <key> currently returns the following error string if the
key is not a ziplist: "ERR Not an sds encoded string.". This looks like
an accidental copy/paste error from the error returned in the else if
branch above where this string is returned if the key is not an sds
string. The command was added in
ac61f90625 and looking at the commit,
nothing indicates that it is not an accidental typo.
The error string now returns a correct error: "Not a ziplist encoded
object", which accurately describes the error.
The previous algorithm is of O(n^2) time complexity.
It would have run through the ziplist entries one by one, each time doing a `realloc` and a
`memmove` (moving the entire tail of the ziplist).
The new algorithm is O(n): it runs over all the records once, computing the size of the `realloc`
needed, then does one `realloc`, and runs through the records again doing many smaller `memmove`s,
each time moving just one record.
So this change avoids many reallocs, and moves each record just once.
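A generic sketch of the single-realloc pattern described above, written over a plain byte buffer of records rather than the real ziplist layout; the function and parameter names are made up for the example.

```c
#include <stdlib.h>
#include <string.h>

/* Records stored back-to-back; rec_len[i] is the current size of record i,
 * grow[i] is how many extra bytes record i needs (illustrative layout). */
unsigned char *grow_records(unsigned char *buf, size_t nrecs,
                            const size_t *rec_len, const size_t *grow) {
    size_t old_total = 0, extra = 0;
    for (size_t i = 0; i < nrecs; i++) {
        old_total += rec_len[i];
        extra += grow[i];
    }
    /* One realloc for the whole buffer instead of one per record. */
    unsigned char *nb = realloc(buf, old_total + extra);
    if (nb == NULL) return NULL;   /* original buffer is left untouched */

    /* Walk backwards, moving each record exactly once to its final offset. */
    size_t src_end = old_total, dst_end = old_total + extra;
    for (size_t i = nrecs; i-- > 0; ) {
        src_end -= rec_len[i];
        dst_end -= rec_len[i] + grow[i];
        memmove(nb + dst_end + grow[i], nb + src_end, rec_len[i]);
        /* the grow[i] bytes at nb + dst_end are left for the caller to fill */
    }
    return nb;
}
```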
Co-authored-by: zhumaohua <zhumaohua@megvii.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Update adds a general source for retrieving a monotonic time.
In addition, AE has been updated to utilize the new monotonic
clock for timer processing.
This performance improvement is **not** enabled in a default build due to various H/W compatibility
concerns, see README.md for details. It does however replace the default use of gettimeofday with
clock_gettime and somewhat improves performance.
This update provides the following:
1. An interface for retrieving a monotonic clock. getMonotonicUs returns a uint64_t (aka monotime)
with the number of micro-seconds from an arbitrary point. No more messing with tv_sec/tv_usec.
Simple routines are provided for measuring elapsed milli-seconds or elapsed micro-seconds (the
most common use case for a monotonic timer). No worries about time moving backwards.
2. High-speed assembler implementation for x86 and ARM. The standard method for retrieving the
monotonic clock is POSIX.1b (1993): clock_gettime(CLOCK_MONOTONIC, timespec*). However, most
modern processors provide a constant speed instruction clock which can be retrieved in a fraction
of the time that it takes to call clock_gettime. For x86, this is provided by the RDTSC
instruction. For ARM, this is provided by the CNTVCT_EL0 instruction. As a compile-time option,
these high-speed timers can be chosen. (Default is POSIX clock_gettime.)
3. Refactor of event loop timers. The timer processing in ae.c has been refactored to use the new
monotonic clock interface. This results in simpler/cleaner logic and improved performance.
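As a reference for what such an interface looks like, here is a minimal sketch of a getMonotonicUs-style helper built on the portable POSIX path (clock_gettime); the names mirror the description above but the code is illustrative, not the actual implementation.

```c
#include <stdint.h>
#include <time.h>

typedef uint64_t monotime;   /* microseconds since an arbitrary point */

/* Portable (POSIX.1b) monotonic clock; never affected by wall-clock jumps. */
static monotime getMonotonicUs(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (monotime)ts.tv_sec * 1000000ULL + (monotime)ts.tv_nsec / 1000;
}

/* Typical elapsed-time usage:
 *   monotime start = getMonotonicUs();
 *   ... work ...
 *   uint64_t elapsed_us = getMonotonicUs() - start;
 */
```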
65a3307bc (released in 6.0.6) has a side effect: when processCommand
rejects a command with a pre-made shared object error string, it trims the
newlines from the end of the string. If that string is later used with
addReply, the newline will be missing, breaking the protocol, and
leaving the client hung.
It seems that the only scenario in which this happens is when replying with
-LOADING to some command, and later using that reply from the CONFIG
SET command (still during loading). This will result in a hung client.
Refactoring the code in order to avoid trimming these newlines from
shared string objects, and do the newline trimming only in other cases
where it's needed.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
During a long AOF or RDB loading, the memory stats were not updated, and
INFO would return stale data, specifically about fragmentation and RSS.
In the past some of these were sampled directly inside the INFO command,
but were moved to cron as an optimization.
This commit introduces a concept of loadingCron which should take
some of the responsibilities of serverCron.
It attempts to limit its rate to approximately the server Hz, but may
not be very accurate.
In order to avoid too many system calls, we use the cached ustime, and
also make sure to update it in both AOF loading and RDB loading inside
processEventsWhileBlocked (it seems AOF loading was missing it).
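A rough sketch of the rate-limiting idea (approximating the server Hz with a cached microsecond timestamp); the names are illustrative, not the actual loadingCron code.

```c
#include <stdint.h>

extern uint64_t cached_ustime;   /* refreshed by processEventsWhileBlocked */
extern int server_hz;            /* target calls per second                */

/* Returns 1 when enough time has passed since the last run; accuracy is
 * only approximate because we rely on the cached time. */
static int loading_cron_due(void) {
    static uint64_t last_run = 0;
    uint64_t interval_us = 1000000ULL / (server_hz ? server_hz : 10);
    if (cached_ustime - last_run < interval_us) return 0;
    last_run = cached_ustime;
    return 1;
}
```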
If the server gets a MULTI command followed by only read
commands, and right before it gets the EXEC it reaches OOM,
the client will get an OOM response.
So, from now on, it will get an OOM response only if there was
at least one command that was tagged with the `use-memory` flag.
A first step to enable a consistent full percentile analysis on query latency so that we can fully understand the performance and stability characteristics of the redis-server system we are measuring. It also improves the instantaneous reported metrics, and the csv output format.
Otherwise, it is treated as a single allocation and freed synchronously. The following logic is used for estimating the effort in constant-ish time complexity:
1. Check the number of nodes.
2. Add an allocation for each consumer group registered inside the stream.
3. Check the number of PELs in the first CG, and then add this count times the number of CGs.
4. Check the number of consumers in the first CG, and then add this count times the number of CGs.
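Put as a formula, the estimate described above amounts to something like the following sketch; the struct and field names are hypothetical, not the actual Redis code, which walks the stream's internal trees directly.

```c
/* Hypothetical summary of a stream, gathered in constant-ish time. */
typedef struct {
    long long nodes;              /* listpack nodes in the stream      */
    long long cgroups;            /* consumer groups                   */
    long long first_cg_pel;       /* PEL entries in the first group    */
    long long first_cg_consumers; /* consumers in the first group      */
} stream_summary;

/* Estimate how many allocations freeing the stream would release; if it
 * crosses some threshold, free the object asynchronously. */
static long long estimate_stream_free_effort(const stream_summary *s) {
    long long effort = s->nodes;
    effort += s->cgroups;                          /* one per consumer group  */
    effort += s->first_cg_pel * s->cgroups;        /* PELs, extrapolated      */
    effort += s->first_cg_consumers * s->cgroups;  /* consumers, extrapolated */
    return effort;
}
```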
The previous fix using _Atomic was insufficient, since we check and set it in
different places.
The implications of this bug are just that a portion of the bug report will be shown
twice, in the race case of two concurrent crashes.
When the LPOS command is called with a RANK higher than the number of matches,
the return value is an invalid response. For example:
```
LPUSH l a
:1
LPOS l b RANK 5 COUNT 10
*-4
```
This may break client-side parsers.
Now we count how many replies are actually added to the array.
```
LPUSH l a
:1
LPOS l b RANK 5 COUNT 10
*0
```
After fork, the child process (redis-aof-rewrite) inherits the fd opened
by the parent process (redis). When redis is killed by kill -9, it does not
exit gracefully (i.e. call prepareForShutdown()), so the redis-aof-rewrite process may still be
alive and the fd (lock) will still be held by it, so a
redis restart will fail to get the lock, which means it fails to start.
This issue was causing failures in the cluster tests in github actions.
Co-authored-by: Oran Agra <oran@redislabs.com>
Since users often post just the crash log in github issues, the log
print that's above it is missing.
No reason not to include the size in the panic message itself.
65a3307bc9 added rejectCommand which takes an robj reply and passes it
through addReplyErrorSafe to addReplyErrorLength.
The robj contains a newline at its end, but addReplyErrorSafe converts it
to spaces, and passes it to addReplyErrorLength which adds the protocol
newlines.
The result was that most error replies (like OOM) had extra two trailing
spaces in them.
Avoid re-configuring (and validating) SSL/TLS configuration on `CONFIG
SET` when TLS is not actively enabled for incoming connections, cluster
bus or replication.
This fixes failures when tests run without `--tls` on binaries that were
built with TLS support.
An additional benefit is that it's now possible to perform a multi-step
configuration process while TLS is disabled. The new configuration will
be verified and applied only when TLS is effectively enabled.
In order to keep the redismodule.h self-contained but still usable with
gcc v10 and later, annotate each API function tentative definition with
the __common__ attribute. This avoids the 'multiple definition' errors
modules will otherwise see for all API functions at link time.
Further details at gcc.gnu.org/gcc-10/porting_to.html
Turn the existing __attribute__ ((unused)), ((__common__)) and ((print))
annotations into conditional macros for any compilers not accepting this
syntax. These macros only expand to API annotations under gcc.
Provide a pre- and post- macro for every API function, so that they can
be defined differently by the file that includes redismodule.h.
Removing REDISMODULE_API_FUNC in the interest of keeping the function
declarations readable.
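A minimal sketch of the conditional-annotation pattern described above; the macro name and the example function pointer are illustrative, not necessarily the exact ones used in redismodule.h.

```c
/* Only expand to the annotation under gcc; other compilers get nothing. */
#if defined(__GNUC__)
#define MODULE_API_ATTR __attribute__((__common__))
#else
#define MODULE_API_ATTR
#endif

/* A tentative definition of an API function pointer. With gcc 10's new
 * -fno-common default, including the header from several translation units
 * would otherwise produce "multiple definition" link errors; the common
 * attribute restores the old behavior for these symbols. */
MODULE_API_ATTR int (*RedisModule_Example)(void);
```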
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
The client that issued WAIT last will most likely have the highest replication offset, so imagine a probably common case where all clients are waiting for the same number of replicas. We prefer the loop to start from the last client (the one waiting for the highest offset), so that the optimization in the function calls replicationCountAcksByOffset for each client only until it finds a good one, and stops calling it for the rest of the clients.
The way the loop was implemented meant that in such a case it was likely to call replicationCountAcksByOffset for all clients.
Note: the change from > to >= is not directly related to the above.
Co-authored-by: 曹正斌 <caozb@jiedaibao.com>
Add Linux kernel OOM killer control option.
This adds the ability to control the Linux OOM killer oom_score_adj
parameter for all Redis processes, depending on the process role (i.e.
master, replica, background child).
An oom-score-adj global boolean flag controls this feature. In addition,
specific values can be configured using oom-score-adj-values if
additional tuning is required.
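For reference, on Linux the adjustment boils down to writing the value to /proc/<pid>/oom_score_adj; a minimal standalone sketch (not the Redis implementation, which also handles the per-role values and config reloads):

```c
#include <stdio.h>

/* value ranges from -1000 (never kill) to 1000 (kill first). */
static int set_oom_score_adj(int value) {
    FILE *fp = fopen("/proc/self/oom_score_adj", "w");
    if (fp == NULL) return -1;
    int ok = fprintf(fp, "%d\n", value) > 0;
    return (fclose(fp) == 0 && ok) ? 0 : -1;
}
```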
This is a rebased version of #3078 originally by shaharmor
with the following patches by TysonAndre made after rebasing
to work with the updated C API:
1. Add 2 more unit tests
(wrong argument count error message, integer over 64 bits)
2. Use addReplyArrayLen instead of addReplyMultiBulkLen.
3. Undo changes to src/help.h - for the ZMSCORE PR,
I heard those should instead be automatically
generated from the redis-doc repo if it gets updated
Motivations:
- Example use case: Client code to efficiently check if each element of a set
of 1000 items is a member of a set of 10 million items.
(Similar to reasons for working on #7593)
- HMGET and ZMSCORE already exist. This may lead developers to implement
  functionality that's best suited to a regular set with a sorted set or
  hash map instead, just for the multi-get support.
Currently, multi commands or lua scripting to call sismember multiple times
would almost definitely be less efficient than a native smismember
for the following reasons:
- Need to fetch the set from the string every time
instead of reusing the C pointer.
- Using pipelining or multi-commands would result in more bytes sent
and received by the client for the repeated SISMEMBER KEY sections.
- Need to specially encode the data and decode it from the client
for lua-based solutions.
- Proposed solutions using Lua or SADD/SDIFF could trigger writes to
memory, which is undesirable on a redis replica server
or when commands get replicated to replicas.
Co-Authored-By: Shahar Mor <shahar@peer5.com>
Co-Authored-By: Tyson Andre <tysonandre775@hotmail.com>
In redismodule.h, RedisModule_DeauthenticateAndCloseClient returns void
`void REDISMODULE_API_FUNC(RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id);`
But in module.c, RM_DeauthenticateAndCloseClient returns int
`int RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id)`
It is safe to change the return value from `void` to `int` from the user's perspective.
Added RedisModule_HoldString that either returns a
shallow copy of the given String (by increasing
the String ref count) or a new deep copy of the String
in case it's not possible to get a shallow copy.
Co-authored-by: Itamar Haber <itamar@redislabs.com>
Add logic to redis-cli to display RESP3 PUSH messages when we detect
STDOUT is a tty, with an optional command-line argument to override
the default behavior.
The new argument: --show-pushes <yn>
Examples:
$ redis-cli -3 --show-pushes no
$ echo "client tracking on\nget k1\nset k1 v1"| redis-cli -3 --show-pushes y
Before this fix we were attempting to select a db before creating the DB, see: #7323
This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.
As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We will allocate
memory for server.db in 'initServer', but we call 'createClient()' that will call 'selectDb()'
in 'moduleInitModulesSystem()', before the databases were created. Instead, we should call
'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
Diskless master has some inherent latencies.
1) fork starts with a delay from cron rather than immediately
2) the replica is put online only after an ACK, but the ACK
was sent only once a second.
3) even if the ACK arrived immediately, it would not
register in case cron didn't yet detect that the fork is done.
Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect, it waits for the replication cron (once per second).
In case it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.
In case it disconnected during rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for replication cron re-connection)
changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster, which is called
from the REPLICAOF command but also in 3 places in cluster.c; in all of
these the connection attempt will now be immediate instead of delayed by 1
second.
side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that the
pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).
Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone is
calling replicationStartPendingFork that handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug, which is that max_idle should have used >= and
not >. This one-second delay has a big impact on my new test.
This race would only happen when two threads panicked at the same time,
and even then the only consequence is some extra log lines.
The race was reported in #7391.
This makes it possible to add tests that generate assertions, and run
them with valgrind, making sure that there are no memory violations
prior to the assertion.
New config options:
- crash-log-enabled - can be disabled for cleaner core dumps
- crash-memcheck-enabled - useful for faster termination after a crash
- use-exit-on-panic - to be used by the test suite so that valgrind can
detect leaks and memory corruptions
Other changes:
- Crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. in
both SIGSEGV and assert / panic
- Assertion and panic won't print registers and code around EIP (which
was useless), but will do a fast memory test (which may still indicate
that the assertion was due to memory corruption)
I had to reshuffle code in order to re-use it, so I extracted some code
into functions without actually doing any changes to the code:
- logServerInfo
- logModulesInfo
- doFastMemoryTest (with the exception of it being conditional)
- dumpCodeAroundEIP
Changes to the crash report on segfault:
- logRegisters is called right after the stack trace (before info); this was
done just in order to have more re-usable code
- stack trace skips the first two items on the stack (the crash log and
signal handler functions)
Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`
This is an extension of #2359
amended by Tyson Andre to work with the changed unstable API,
add more tests, and consistently return an array.
- It seemed as if it would be more likely to get reviewed
after updating the implementation.
Currently, multi commands or lua scripting to call zscore multiple times
would almost definitely be less efficient than a native ZMSCORE
for the following reasons:
- Need to fetch the set from the string every time instead of reusing the C
pointer.
- Using pipelining or multi-commands would result in more bytes sent by
the client for the repeated `ZMSCORE KEY` sections.
- Need to specially encode the data and decode it from the client
for lua-based solutions.
- The fastest solution I've seen for large sets (thousands or millions)
involves lua and a variadic ZADD, then a ZINTERSECT, then a ZRANGE 0 -1,
then UNLINK of a temporary set (or lua). This is still inefficient.
Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
The Redis sentinel would crash with a segfault after a few minutes
because it tried to read from a page without read permissions. Check up
front whether the sds is long enough to contain redis:slave or
redis:master before memcmp() as is done everywhere else in
sentinelRefreshInstanceInfo().
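The shape of the fix, in generic C (illustrative, not the exact sentinel code): check the length before comparing so memcmp() never reads past the end of the buffer.

```c
#include <string.h>

/* Returns 1 only if buf (of length buflen, not NUL-terminated) starts with
 * prefix; the length check guarantees memcmp() stays inside the buffer. */
static int has_prefix(const char *buf, size_t buflen, const char *prefix) {
    size_t plen = strlen(prefix);
    return buflen >= plen && memcmp(buf, prefix, plen) == 0;
}
```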
Bug report and commit message from Theo Buehler. Fix from Nam Nguyen.
Co-authored-by: Nam Nguyen <namn@berkeley.edu>
valsize is not modified during the for loop below (it is taken from c->argv[4] once), therefore there is no need to put the check inside the for loop. Moreover, putting the check outside the loop also avoids a memory leak: decrRefCount(key) should have been called in the original code if we kept the check inside the for loop.
The connection API may create an accepted connection object in an error
state, and callers are expected to check it before attempting to use it.
Co-authored-by: mrpre <mrpre@163.com>
Initialize and configure OpenSSL even when tls-port is not used, because
we may still have tls-cluster or tls-replication.
Also, make sure to reconfigure OpenSSL when these parameters are changed
as TLS could have been enabled for the first time.
This code is in use only if the master is disk-based and the replica is
diskless. In this case we use a buffered reader, but we must avoid reading
past the end of the rdb file, into the command stream, which luckily rdb.c doesn't
really attempt to do (it knows how much it should read).
When rioConnRead detects that the extra buffering attempt reaches beyond
the read limit it should read less, but if the caller actually requested
more, then it should return with an error rather than a short read. The
bug would have resulted in a short read.
In order to fix it, the code must consider the real requested size, and
not the extra buffering size.
Before that PR, processCommand() did not notice that cmd could be a module
command, in which case the getkeys_proc member has a different meaning.
The outcome was that a module command which doesn't take any key names in its
arguments (similar to SLOWLOG) would be handled as if it might have key name arguments
(similar to MEMORY), would consider cluster redirect but will end up with 0 keys
after an excessive call to getKeysFromCommand, and eventually do the right thing.
Since the dynamic allocations in raxIterator are only used for deep walks, memory
leak due to missing call to raxStop can only happen for rax with key names longer
than 32 bytes.
Out of all the missing calls, the only ones that may lead to a leak are the rax
for consumer groups and consumers, and these were only in AOFRW and rdbSave, which
normally only happen in fork or at shutdown.
Before this commit, processCommand() did not notice that cmd could be a module command
which declared `getkeys-api`, and for the purpose of cluster redirect handled it
as if it doesn't use any keys.
This commit fixes it by reusing the code in addReplyCommand().
It will never happen that "lp != NULL && lp_bytes >= server.stream_node_max_bytes".
Assume that "lp != NULL && lp_bytes >= server.stream_node_max_bytes",
we got the following conditions:
a. lp != NULL
b. lp_bytes >= server.stream_node_max_bytes
If server.stream_node_max_bytes is 0, given condition a, condition b is always satisfied.
If server.stream_node_max_bytes is not 0, given condition a and condition b, the code just a
few lines above sets lp to NULL, a contradiction with condition a.
So condition b is redundant. We can delete it safely.
Specifically, the key passed to the module aof_rewrite callback is a stack allocated robj. When passing it to RedisModule_EmitAOF (with the appropriate "s" fmt string) redis used to panic when trying to inc the ref count of the stack allocated robj. Now we support such robjs by copying them to a new heap robj. This doesn't affect performance because using the alternative "c" or "b" format strings also copies the input to a new heap robj.
In case the rdb child failed, crashed or terminated unexpectedly, redis
would have marked the replica clients with repl_put_online_on_ack and
then killed them only after a minute when no ack was received.
It would not stream anything to these connections, so the only effect of
this bug is a delay of 1 minute in the replicas' attempt to re-connect.
* fix description about ZIP_BIG_PREVLEN (the code is ok), it's similar to
antirez#4705
* fix description about ziplist entry encoding field (the code is ok),
the max length should be 2^32 - 1 when encoding is 5 bytes
* Tests: fix and reintroduce redis-cli tests.
These tests have been broken and disabled for 10 years now!
* TLS: add remaining redis-cli support.
This adds support for the redis-cli --pipe, --rdb and --replica options
previously unsupported in --tls mode.
* Fix writeConn().
Similarly to EXPIREAT with a TTL in the past, which implicitly deletes the
key and returns success, RESTORE should not store keys that are already
expired into the db.
When used together with REPLACE it should emit a DEL to keyspace
notification and replication stream.
* tests/valgrind: don't use debug restart
DEBUG RESTART causes two issues:
1. it uses execve which replaces the original process and valgrind doesn't
have a chance to check for errors, so leaks go unreported.
2. valgrind reports invalid calls to close() which we're unable to resolve.
So now the tests use the restart_server mechanism, which terminates
the old server and starts a new one, with a new PID but the same stdout and stderr.
since the stderr can contain two or more valgrind reports, it is not enough
to just check for the absence of leaks, we also need to check for some known
errors. We do both, and fail if we either find an error, or can't find a
report saying there are no leaks.
other changes:
- when killing a server that was already terminated we check for leaks too.
- adding DEBUG LEAK which was used to test it.
- adding --trace-children to valgrind, although no longer needed.
- since the stdout contains two or more runs, we need a slightly different way
of checking if the new process is up (explicitly looking for the new PID)
- move the code that handles --wait-server to happen earlier (before
watching the startup message in the log), and serve the restarted server too.
* squashme - CR fixes
For example:
BITOP not targetkey sourcekey
If targetkey and sourcekey don't exist, BITOP has no effect,
so we do not propagate it, and thus save AOF and replica flow.
In order to support the use of multi-exec in pipeline, it is important that
MULTI and EXEC are never rejected and it is easy for the client to know if the
connection is still in multi state.
It was easy to make sure MULTI and DISCARD never fail (done by previous
commits) since these only change the client state and don't do any actual
change in the server, but EXEC is a different story.
Since in the past it was possible for clients to handle some EXEC errors and
retry the EXEC, we now can't afford to return any error on EXEC other than
EXECABORT, which now carries with it the real reason for the abort too.
Other fixes in this commit:
- Some checks that were performed at the time of queuing need to be re-
validated when EXEC runs; for instance, if the transaction contains write
commands, it needs to be aborted. There was one check that was already done
in execCommand (-READONLY), but other checks were missing: -OOM, -MISCONF,
-NOREPLICAS, -MASTERDOWN
- When a command is rejected by processCommand it was rejected with addReply,
which was not recognized as an error in case the bad command came from the
master. This will make it possible to count or MONITOR these errors in the future.
- make it easier for tests to create additional (non deferred) clients.
- add tests for the fixes of this commit.
The scan key module API provides the scan callback with the current
field name and value (if it exists). Those arguments are RedisModuleString*,
which means they are supposed to point to a robj which is encoded as a string.
Using the createStringObjectFromLongLong function might return a robj that
points to an integer and so break a module that tries, for example, to
use RedisModule_StringPtrLen on the given field/value.
The PR introduces a fix that uses the createObject function and the sdsfromlonglong function.
Using those functions ensures that the field and value passed to the
scan callback will be Strings.
The PR also changes the Scan test module to use RedisModule_StringPtrLen
to catch the issue. Without this, the issue is hidden because
RedisModule_ReplyWithString knows to handle integer encoding of the
given robj (RedisModuleString).
The PR also introduces a new test to verify the issue is solved.
The `LRANK` command returns the index (position) of a given element
within a list. Using the `direction` argument it is possible to specify
going from head to tail (ascending, 1) or from tail to head (descending,
-1). Only the first found index is returned. The complexity is O(N).
When using lists as a queue it can be of interest at what position a
given element is, for instance to monitor a job processing through a
work queue. This came up within the Python `rq` project which is based
on Redis[0].
[0]: https://github.com/rq/rq/issues/1197
Signed-off-by: Paul Spooren <mail@aparcar.org>
The recent change in that loop (iteration rather than waiting for it to
be empty) was intended to avoid an endless loop in case some slave would
refuse to be freed.
But the lookup of the first client remained, which would have caused it
to try the first one again and again instead of moving on.
Much like MULTI/EXEC/DISCARD, the WATCH and UNWATCH commands are not actually
operating on the database or server state, but instead operate on the
client state. The client may send them all in one long pipeline and check
all the responses only at the end, so failing them may lead to a
mismatch between the client state on the server and the one on the
client end, and to executing the wrong commands (ones that were meant to be
discarded).
The watched keys are not actually stored in the client struct, but they
are in fact part of the client state. For instance, they're not cleared
or moved in SWAPDB or FLUSHDB.
Now it is also possible for ACL SETUSER to accept empty strings
as valid operations (doing nothing), so for instance
ACL SETUSER myuser ""
Will have just the effect of creating a user in the default state.
This should fix #7329.
This impacts client verification for chained certificates (such as Let's
Encrypt certificates). Client Verify requires the full chain in order to
properly verify the certificate.
After a closer look, the Redis core developers all believe that this was
too fragile, caused many bugs that we didn't expect and that were very
hard to track. Better to find an alternative solution that is simpler.
We want to react a bit more aggressively if we sense that the master is
sending us some corrupted stream. By setting the protocol error we both
ensure that the replica will disconnect, and avoid caching the master so
that a full SYNC will be required. This is protective against
replication bugs.
`clusterStartHandshake` will start a handshake
and eventually send a CLUSTER MEET message, which is strictly prohibited
in the REDIS CLUSTER SPEC.
Only a system administrator can initiate a CLUSTER MEET message.
Further, according to the SPEC, rather than IP/PORT pairs, only the nodeid
can be trusted.
After adjustMeaningfulReplOffset(), all the other related variables
should be updated, including server.second_replid_offset.
Otherwise an old redis version like 5.0 may receive wrong data from the
replication stream, because redis 5.0 can sync with redis 6.0
but doesn't know about the meaningful offset.
Otherwise we run into that:
Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]
This happens since we disconnect all the replicas and free the replication backlog in
certain replication paths, and the code that frees the replication
backlog expects that no replica is connected.
However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().
Citing from the issue:
btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.
Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:
CLIENT KILL TYPE master
May crash: clientCommand() starts scanning the linked list of clients,
looking for clients to kill. However it finds the master, kills it
calling freeClient(), but this in turn calls replicationCacheMaster()
that may also call disconnectSlaves() now. So the linked list iterator
of the clientCommand() will no longer be valid.
There's a rare case which leads to stagnation in the defragger, causing
it to keep scanning the keyspace and do nothing (not moving any
allocation). This happens when all the allocator slabs of a certain bin
have the same % utilization, but the slab from which new allocations are
made has a lower utilization.
This commit fixes it by removing the current slab from the overall
average utilization of the bin, eliminating any precision loss in
the utilization calculation, and moving the decision about the defrag to
reside inside jemalloc.
It also adds a test that consistently reproduces this issue.
In ACLSetUserCommandBit, when the command bit overflows, no operation
is performed, so there is no need to clear the USER_FLAG_ALLCOMMANDS flag.
In ACLSetUser, when adding a subcommand, we don't need to call
ACLGetCommandID ahead of time since the subcommand may be empty.
This was broken in 1a7cd2c: we identified a crash in the CI; what
was happening before the fix was likely the following:
1. The client gets in the async free list.
2. However freeClient() gets called again against the same client
which is a master.
3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
list.
5. The master client that was cached was immediately removed since it
was still in the list.
6. Redis accessed a freed cached master.
This is how the crash looked like:
=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion: (:0)
------ STACK TRACE ------
EIP:
src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]
And the 0xffff address access likely comes from accessing an SDS that is
set to NULL (we go -1 offset to read the header).
The context is issue #7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, to later
PSYNC incrementally, if such flags are set. However we also don't want
the master client to be cached with such flags (would be closed
immediately after being restored). And yet we want a way to understand
if a master was closed because of a protocol error, and in that case
prevent the caching.
This bug was introduced by a recent change in which readQueryFromClient
is using freeClientAsync, and despite the fact that now
freeClientsInAsyncFreeQueue is in beforeSleep, that's not enough since
it's not called during loading in processEventsWhileBlocked.
Furthermore, afterSleep was called in that case but beforeSleep wasn't.
This bug also caused slowness since the level-triggered mode of epoll
kept signaling these connections as readable, causing us to keep doing
connRead again and again for all of these, which kept accumulating.
Now both before and after sleep are called, but not all of their actions
are performed during loading; some are reserved only for the main loop.
fixes issue #7215
This is really required only for older OpenSSL versions.
Also, at the moment Redis does not use OpenSSL from multiple threads so
this will only be useful if modules end up doing that.
We want to send pings and pongs at specific intervals, since our packets
also contain information about the configuration of the cluster and are
used for gossip. However since our cluster bus is used in a mixed way
for data (such as Pub/Sub or modules cluster messages) and metadata,
sometimes a very busy channel may delay the reception of pong packets.
So after discussing it in #7216, this commit introduces a new field that
is not exposed in the cluster, and is only internal information about
the last time we received any data from a given node: we use this field
in order to avoid detecting failures, claiming that reception of new
data from the node is proof of liveness.
This works because this struct is never referenced by its name,
but always by its type.
This prevents a conflict with struct user from <sys/user.h>
when compiling against uclibc.
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
* fix memory leaks with diskless replica short read.
* fix a few timing issues with valgrind runs
* fix issue with valgrind and watchdog schedule signal
about the valgrind WD issue:
the stack trace test in logging.tcl, has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808== too small or bad protection modes
it seems to be some valgrind bug with SA_ONSTACK.
SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed);
also, it's not clear if it's even valid without a call to sigaltstack()
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.
There were some discussion about cpu affinity cases in the issue:
https://github.com/antirez/redis/issues/2863
So implement cpu affinity setting by redis.conf in this patch, then
we can config server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist by cpu list.
Examples of cpulist in redis.conf:
server_cpulist 0-7:2 means cpu affinity 0,2,4,6
bio_cpulist 1,3 means cpu affinity 1,3
aof_rewrite_cpulist 8-11 means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11 means cpu affinity 1,10,11
Test on linux/freebsd, both work fine.
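On Linux the setting boils down to sched_setaffinity(); a minimal sketch of pinning the current process to an already-parsed cpu list (parsing of the "0-7:2"-style syntax is omitted, and the helper name is made up for the example):

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process/thread to the given CPU ids. */
static int set_cpu_affinity(const int *cpus, int ncpus) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < ncpus; i++) CPU_SET(cpus[i], &set);
    /* pid 0 means "the calling thread". */
    return sched_setaffinity(0, sizeof(set), &set);
}

/* e.g. bio_cpulist 1,3  ->  int cpus[] = {1, 3}; set_cpu_affinity(cpus, 2); */
```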
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
When a deferred reply is added, the previous reply node cannot be used, so
all the extra space we allocated in it is wasted. In case someone uses
deferred replies in a loop, each time adding a small reply, each of
these reply nodes (the small string reply) would have consumed a 16k
block.
Now when we add another deferred reply node, we trim the unused portion
of the previous reply block.
see #7123
cherry picked from commit fb732f7a94
with a fix to handle a crash with the LIBC allocator, which apparently can
return the same pointer despite changing its size,
i.e. shrinking an allocation of 16k into 56 bytes without changing the
pointer.
We could use uint64_t specific macros, but after all it's simpler to
just use an obvious equivalent type plus casting: this will be a no op
and is simpler than fixed size types printf macros.
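Concretely, the pattern is the one below: a trivial standalone example of the cast-instead-of-PRIu64 approach.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t v = 1234567890123ULL;
    /* Cast to an "obvious equivalent type": a no-op on the platforms we
     * care about, and simpler than the <inttypes.h> PRIu64 macros. */
    printf("%llu\n", (unsigned long long) v);
    return 0;
}
```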
1. add eviction-lazyfree monitor
2. put eviction-del & eviction-lazyfree into eviction-cycle
that means eviction-cycle contains all the latency in
the eviction cycle including del and lazyfree
3. use getMaxmemoryState to check if we can break in lazyfree-evict
If a client gets blocked again in `processUnblockedClients`, redis will not send
`REPLCONF GETACK *` to slaves until the next eventloop, so the client will be
blocked for 100ms by default (10hz) if no other file event fires.
Move the server.get_ack_from_slaves snippet after `processUnblockedClients`, so
that both the first WAIT command that puts the client in a blocked context and the
following WAIT command processed in processUnblockedClients would trigger
redis-server to send `REPLCONF GETACK *`, so that the eventloop would get
`REPLCONF ACK <reploffset>` from slaves and unblock ASAP.
Come to think of it, in theory (not in practice), getDecodedObject can
return the same original object with the refcount incremented, so the
pointer comparison in the previous commit was invalid.
So now instead of checking the encoding, we explicitly check the
refcount.
Since the recent addition of OBJ_STATIC_REFCOUNT and the assertion in
incrRefCount it is now impossible to use dictFind using a static robj,
because dictEncObjKeyCompare will call getDecodedObject which tries to
increment the refcount just in order to decrement it later.