When we use monitor in redis-cli but encounter an error reply,
we get stuck until we press Ctrl-C to quit.
This is a harmless bug. Fixing it might become useful if we add parameters
to monitor in the future, such as monitoring only a selected db.
before:
```
127.0.0.1:6379> monitor wrong
(error) ERR wrong number of arguments for 'monitor' command or subcommand
^C(9.98s)
127.0.0.1:6379>
```
after:
```
127.0.0.1:6379> monitor wrong
(error) ERR wrong number of arguments for 'monitor' command or subcommand
127.0.0.1:6379>
```
We can now do: `config set maxmemory 10m repl-backlog-size 5m`
## Basic algorithm to support "transaction like" config sets:
1. Backup all relevant current values (via get).
2. Run "verify" and "set" on everything, if we fail run "restore".
3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
4. Return success.
### restore
1. Run set on everything in backup. If we fail, log it and continue (this puts us in an undefined
state, but we decided it's better than the alternative of panicking). This indicates either a bug
or some unsupported external state.
2. Run apply on everything in backup (optimization: skip functions already run). If we fail, log
it (see comment above).
3. Return error.
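A rough sketch of the flow described above (all type and helper names are illustrative, not the actual Redis config framework):
```
#include <stdio.h>

/* Illustrative only: a hypothetical per-config entry with verify/set/apply
 * callbacks and a backup of the old value taken via get(). */
typedef struct {
    const char *name;
    int (*verify)(const char *val);   /* validate, no side effects */
    int (*set)(const char *val);      /* store the value (e.g. into the server struct) */
    int (*apply)(void);               /* make the stored value take effect (optional) */
    const char *old_val;              /* backup */
    const char *new_val;
} cfg;

/* Returns 1 on success; on any failure restores the backup (best effort) and returns 0. */
static int config_set_all(cfg *c, int n) {
    int i;
    for (i = 0; i < n; i++)                       /* step 2: verify + set everything */
        if (!c[i].verify(c[i].new_val) || !c[i].set(c[i].new_val)) goto restore;
    for (i = 0; i < n; i++)                       /* step 3: apply everything */
        if (c[i].apply && !c[i].apply()) goto restore;
    return 1;                                     /* step 4: success */
restore:
    for (i = 0; i < n; i++) {                     /* on failure: log and continue */
        if (!c[i].set(c[i].old_val))
            fprintf(stderr, "restore of %s failed\n", c[i].name);
        if (c[i].apply && !c[i].apply())
            fprintf(stderr, "re-apply of %s failed\n", c[i].name);
    }
    return 0;
}
```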
## Implementation/design changes:
* Apply functions are idempotent (have no effect if they are run more than once for the same config).
* There is no indication in set functions of whether we're reading the config file or running from the `CONFIG SET` command
(the `update` argument was removed).
* A set function should set some config variable and assume an (optional) apply function will use it
later to apply the change. If we know the setting can be safely applied immediately, can always be reverted,
and doesn't depend on any other configuration, we can apply it immediately from within the set function
(and not store the setting anywhere). This is the case for the `dir` config, for example, which has no
apply function. No apply function is needed either when setting the variable in the `server` struct
is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`,
which received the old and new values, was removed and replaced by the optional apply function.
* Apply functions use settings written to the `server` struct and don't receive any inputs.
* I take care that for the generic (non-special) configs if there's no change I avoid calling the setter (possible
optimization: avoid calling the apply function as well).
* Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
value1 my-setting value2`.
Note that getting `save` to work as before in the context of conf file parsing was a pain.
The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
save params. This is unlike any other line in the config file, where each line overwrites any previous
configuration. Since we now support passing multiple save params in a single line (see the top comments
about `save` in https://github.com/redis/redis/pull/9644), we should deprecate the aggregate nature of
this config line and perhaps reduce this ugly code in the future.
Adds the ability to autogenerate the sequence part of the millisecond-only explicit ID specified for `XADD`. This is useful in case added entries have an externally-provided timestamp without sub-millisecond resolution.
Add master-reboot-down-after-period as a configurable parameter, to make it possible to trigger a failover from a master that is responding with `-LOADING` for a long time after being restarted.
The issue can only happen with a bad Lua script that claims to return
a big number while actually returning data which is not a big number (contains
chars that are not digits). Such a thing will not cause an issue unless the big
number value contains `\r\n`, which then messes up the RESP3 structure. The fix
replaces all appearances of `\r\n` with spaces.
Such an issue could also happen on simple string or error replies, but those
already handle it the same way this PR does (replacing `\r\n` with spaces).
Other reply types are not vulnerable to this issue because they do not rely
on free text terminated with `\r\n` (either they contain the bulk length, like
the string reply, or they are typed replies that cannot inject free text, like
boolean or number).
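A minimal sketch of the sanitization idea (hypothetical helper name, not the actual function in the reply-conversion path):
```
#include <stddef.h>

/* Replace protocol-breaking CR/LF characters with spaces so free text
 * (e.g. a bogus "big number" returned by a script) cannot break the
 * RESP framing. Illustrative helper only. */
static void sanitize_crlf(char *s, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (s[i] == '\r' || s[i] == '\n') s[i] = ' ';
    }
}
```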
The issue only exists on the unstable branch; the big number reply for Lua scripts
was not yet added to any official release.
* Fix CLIENT KILL with id 0 or skipme killing all clients
CLIENT KILL with the ID argument should only kill the client with the provided ID. In the old code,
CLIENT KILL with id 0 would kill all the connected clients.
Co-authored-by: Ofir Luzon <ofirluzon@gmail.com>
Update CI so that warnings cause build failures.
Also fix a warning in `test-sanitizer-address`:
```
In function ‘strncpy’,
inlined from ‘clusterUpdateMyselfIp’ at cluster.c:545:13:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10:
error: ‘__builtin_strncpy’ specified bound 46 equals destination size [-Werror=stringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
```
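For reference, a common way to address this class of warning is to bound the copy to one byte less than the destination and NUL-terminate explicitly; a sketch (not necessarily the exact change made in cluster.c):
```
#include <string.h>

#define NET_IP_STR_LEN 46   /* INET6_ADDRSTRLEN-sized buffer, as used for IPs */

/* Sketch: copy at most size-1 bytes and terminate explicitly so the
 * destination is always NUL-terminated and the warning does not fire. */
static void set_ip(char *dst /* NET_IP_STR_LEN bytes */, const char *ip) {
    strncpy(dst, ip, NET_IP_STR_LEN - 1);
    dst[NET_IP_STR_LEN - 1] = '\0';
}
```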
This test relies on `XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ $ $`
being executed by redis before `XADD s2{t} * new abcd1234`. A `wait_for_blocked_client`
is needed between the two to ensure the order; otherwise `XADD s2{t} * new abcd1234`
might be executed first due to network delay, causing a test failure.
Co-authored-by: xiaolei <xiaolei@91jkys.com>
This PR follows #9779.
## Description of the feature
Now, when we turn on the `list-compress-depth` configuration, the list will compress
the ziplists in the range `[list-compress-depth, -list-compress-depth]`.
When we need to use the compressed data, we first decompress it, then use it,
and finally compress it again.
This is controlled by `quicklistNode->recompress`, which is designed to avoid the need to
re-traverse the entire quicklist for compression after each decompression; we only need
to recompress the quicklistNode being used.
In order to ensure the correctness of recompressing, quicklistDecompressNodeForUse and
quicklistCompress should normally appear in pairs; otherwise, it may lead to the head and
tail being compressed or the middle ziplists not being compressed correctly, which is
exactly the problem this PR needs to solve.
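Conceptually, every use of a possibly-compressed node should follow this pairing; a stand-in sketch (simplified types and names, not the real quicklist API):
```
/* Stand-in model of the invariant: every "decompress for use" must be matched
 * by a "compress again" decision, otherwise the recompress flag leaks and the
 * wrong nodes end up (un)compressed. */
typedef struct node { int compressed; int recompress; } node;

static void decompress_for_use(node *n) {
    if (n->compressed) { n->compressed = 0; n->recompress = 1; }
}

static void compress_again(node *n, int is_head_or_tail_within_depth) {
    if (n->recompress) { n->compressed = 1; n->recompress = 0; }   /* restore prior state */
    else if (!is_head_or_tail_within_depth) { n->compressed = 1; } /* depth rules */
}

static void use_node(node *n, int is_head_or_tail_within_depth) {
    decompress_for_use(n);                               /* pairs with... */
    /* ... read or modify the node's listpack here ... */
    compress_again(n, is_head_or_tail_within_depth);     /* ...this call */
}
```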
## Solution
1. Reset `quicklistIter` after insert and replace.
The quicklist node will be compressed in `quicklistInsertAfter`, `quicklistInsertBefore`, and
`quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again.
2. `quicklistIndex` will return an iterator that can be used to recompress the current node after use.
## Test
1. In the `Stress Tester for #3343-Similar Errors` test, when the server crashes or a
`valgrind` or `asan` error is detected, print the violating commands.
2. Add a crash test due to wrongly recompressing after `lrem`.
3. Remove `insert before with 0 elements` and `insert after with 0 elements`,
since we now forbid any operation on a NULL quicklistIter.
In order to test the situation where multiple clients are
blocked, we set up multiple clients to execute some blocking
commands. These tests depend on the order of command processing,
and were based on the wrong assumption that the command
sent first will be executed by the server first, which is obviously
wrong under some network delays.
This commit ensures orderly execution of commands by waiting
for and checking the number of blocked clients each time.
Fixes #9850
Commit 0f8b634cd (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
fixed an invalid memory write issue by using the `lua_checkstack` API to make
sure the Lua stack does not overflow. That fix was added in 3 places:
1. `luaReplyToRedisReply`
2. `ldbRedis`
3. `redisProtocolToLuaType`
In the first 2 functions, `lua_checkstack` failure is handled gracefully, while the
last is handled with an assert and a statement that this situation cannot
happen (only with a misbehaving module):
> the Redis reply might be deep enough to explode the LUA stack (notice
that currently there is no such command in Redis that returns such a nested
reply, but modules might do it)
The issue that was discovered is that user arguments are also considered part
of the stack, and so the following script (for example) makes the assertion reachable:
```
local a = {}
for i=1,7999 do
a[i] = 1
end
return redis.call("lpush", "l", unpack(a))
```
This is a regression because such a script would have worked before, and now
it crashes Redis. The solution is to clear the function arguments from the Lua
stack, which makes the original assumption true and the assertion unreachable.
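A minimal sketch of the idea (hypothetical helper name; the real change is in the code that executes redis.call() from scripts):
```
#include <lua.h>

/* Once redis.call()'s arguments have been copied off the Lua stack into
 * Redis objects, drop them so they no longer count against the stack space
 * needed to build the command's reply. Illustrative helper only. */
static void drop_call_arguments(lua_State *lua, int argc) {
    lua_pop(lua, argc);
}
```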
The msgpack lib missed using lua_checkstack, and so in rare
cases could overflow the stack by at most 2 elements. This is a
violation of the Lua C API. Notice that Lua allocates
5 additional elements on top of lua->stack_last,
so Redis does not access invalid memory. But it is an
API violation and we should avoid it.
This PR also adds a new Lua compilation option. The new
option can be enabled using an environment variable called
LUA_DEBUG. If set to `yes` (the default is `no`), Lua will be
compiled without optimizations and with debug symbols (`-O0 -g`).
In addition, in this new mode, Lua will be compiled with the
`-DLUA_USE_APICHECK` flag that enables extended Lua C API
validations.
In addition, LUA_DEBUG=yes is set on the daily valgrind flow so we
will be able to catch Lua C API violations in the future.
Writable replicas no longer use the values of expired keys. Expired keys are
deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
writable replicas could use the value of an expired key in write commands such
as INCR, SUNIONSTORE, etc.
This commit also sorts out the mess around the functions lookupKeyRead() and
lookupKeyWrite() so they now indicate what we intend to do with the key and
are not affected by the command calling them.
Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
store option now use lookupKeyRead() for the keys they're reading from (which will
not allow reading from logically expired keys).
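A hedged illustration of the convention for a hypothetical STORE-style command (not actual Redis source; assumes the declarations in `server.h`):
```
#include "server.h"  /* lookupKeyRead(), lookupKeyWrite(), client, robj */

/* Illustration only: source keys are read with lookupKeyRead(), which treats
 * logically expired keys as missing even on a writable replica; only the
 * destination key that will be modified uses lookupKeyWrite(), which deletes
 * the key if it is expired. */
void storeStyleCommandSketch(client *c) {
    robj *src = lookupKeyRead(c->db, c->argv[2]);
    robj *dst = lookupKeyWrite(c->db, c->argv[1]);
    (void)src; (void)dst;
    /* ... compute the result from src and store it at c->argv[1] ... */
}
```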
This commit also fixes a bug where PFCOUNT could return a value of an
expired key.
Test modules commands have their readonly and write flags updated to correctly
reflect their lookups for reading or writing. Modules are not required to
correctly reflect this in their command flags, but this change is made for
consistency since the tests serve as usage examples.
Fixes #6842. Fixes #7475.
Remove lcsGetKeys to clean up the remaining STRALGO leftovers after #9733,
i.e. it still used a getkeys_proc which was looking for the KEYS or STRINGS arguments.
Currently, the watching clients are marked as dirty when a watched
key is touched, but we continue watching the keys for no reason.
Then, when the same key is touched again, we iterate again on the
watching clients list and mark all clients as dirty again.
Only when the exec/unwatch command is issued will the client be
removed from the key->watching_clients list. The same applies when
a dirty client calls the WATCH command. The key will be added to be
watched by the client even if it has no effect.
In the field, no performance degradation was observed as a result of the
current implementation; it is merely a cleanup with possible memory and
performance gains in some situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
The log message printed the wrong file name when it failed to open a temporary file,
and the logs claimed the error occurred in getcwd (it used the same errno to report the error).
Co-authored-by: Pavel Melkozerov <pavel.melkozerov@nokia.com>
Part three of implementing #8702, following #8887 and #9366.
## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
## Interface changes
1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace `debug ziplist` command with `debug listpack`.
## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate the listpack entry size using an overhead overestimation in `quicklistAllowInsert`.
We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid memcpy'ing data.
## Bugfix
1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
## Test
1. Add unittest for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.
Co-authored-by: Oran Agra <oran@redislabs.com>
In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
if the master replication ID hasn't changed, we can load the data set
asynchronously and keep serving read commands during the full resync.
In the `diskless loading short read` test, after a successful loading,
we wait for the loading to stop and continue the for loop.
After the introduction of `async_loading`, we also need to check it.
Otherwise the next loop may start too soon and trigger a timing issue.
In #8287, some overflow checks were added. But when
`when *= 1000` overflows, it can become a positive number,
and the check is not able to catch it. The key would be added with
a short expiration time and deleted a few seconds later.
#9601 checks for overflow of the `*=` and returns an
error first, avoiding this situation.
This commit adds some tests to cover those code paths.
Found in #9825; this closes it.
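A minimal sketch of the kind of guard involved (hypothetical helper, not the exact code in the expire path):
```
#include <limits.h>

/* Reject the value before the multiplication can wrap around, instead of
 * trying to detect the (possibly wrapped) result afterwards. */
static int seconds_to_ms_checked(long long when, long long *out_ms) {
    if (when > LLONG_MAX / 1000 || when < LLONG_MIN / 1000)
        return 0;               /* would overflow: report "invalid expire time" */
    *out_ms = when * 1000;
    return 1;
}
```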
Some people complain that QUIT is missing from the help/command table, not appearing in the
COMMAND command, command stats, ACL, etc., and that instead there's a hack in processCommand
with a comment that looks outdated.
Note that it is [documented](https://redis.io/commands/quit).
At the same time, HOST: and POST are in the command table although these are not real commands,
and they would appear in the COMMAND command and even in commandstats.
Other changes:
1. Initialize the static logged_time var in securityWarningCommand.
2. Add the `no-auth` flag to RESET so it can always be executed.
Issue found by the corrupt-dump-fuzzer test with ASAN.
The problem was that lpSkip and lpGetWithSize could read the next listpack entry without validating that it's in range.
Similarly, even the memcmp in lpFind could do that and possibly crash with a segfault; now they'll crash on an assert first.
The naive fix of using lpAssertValidEntry every time resulted in a 30% degradation in the lpFind benchmark of the unit test.
The final fix, with the condition at the bottom, has no performance implications.
TCL 8.5 can't handle cases where part of the string is escaped and part of it isn't;
if there's a single char that needs escaping, we need to escape the whole string.
The `PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires` test is
very time sensitive and used to occasionally fail on MacOS.
It performs three internal tests in a loop; as long as one
fails, it tries to execute again in the next loop.
oranagra suggested that we split it into three individual tests,
so that if one fails, we do not need to retry the others. And maybe
it will increase the chances of success dramatically.
Each is executed 500 times, and the number of retries is collected:
```
PSETEX, total: 500, sum: 745, min: 0, max: 13, avg: 1.49
PEXPIRE, total: 500, sum: 575, min: 0, max: 16, avg: 1.15
PEXPIREAT, total: 500, sum: 0, min: 0, max: 0, avg: 0.0
ALL(old_way), total: 500, sum: 8090, min: 0, max: 138, avg: 16.18
```
And we can see the retry counts are very low.
Splitting the test also makes the code easier to maintain.
Co-authored-by: Oran Agra <oran@redislabs.com>
Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
to falsely report leaks on pointers kept only on the stack when calling exit.
Instead, we now use _exit on panic / assert to skip these leak checks.
Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
so that when something is found we know which test to relate it to (and it prints the reproduction command list).
With dynamically growing argc (#9528), it is necessary to initialize
argv_len. Normally createClient() handles that, but in the case of a
module shared_client, this needs to be done explicitly.
This also addresses an issue with rewriteClientCommandArgument() which
doesn't properly handle the case where the new element extends beyond
argc but not beyond argv_len.
LCS can allocate an immense amount of memory (the sizes of the two inputs multiplied by each other).
In the past this caused some possible security issues due to overflows, which we solved,
and we also added use of `trymalloc` to return "Insufficient memory" instead of an OOM panic in zmalloc.
But in case overcommit is enabled, it could be that we won't get the OOM panic, zmalloc
will succeed, and then we can get OOM killed by the kernel.
The solution here is to prevent LCS from allocating transient memory that's bigger than
the `proto-max-bulk-len` config.
This config is not directly related to transient memory, but both using a hard-coded value and
introducing a specific config seem wrong.
This comes to solve an error in the corrupt-dump-fuzzer test that started in the daily CI, see #9799.
Background:
Following the upgrade to jemalloc 5.2, there was a test that used to be flaky and
started failing consistently (on 32bit), so we disabled it (see #9645).
This is a test that I introduced in #7289 when I attempted to solve a rare stagnation
problem, and it later turned out I failed to solve it. What's more, I added a test that
caused it to be not so rare, and as mentioned, now with jemalloc 5.2 it became consistent on 32bit.
Stagnation can happen when all the slabs of the bin are equally utilized, so the decision
to move an allocation from a relatively empty slab to a relatively full one will never
happen. In that test all the slabs are at 50% utilization, so the defragger could just
keep scanning the keyspace and not move anything.
What this PR changes:
* First, finally in jemalloc 5.2 we have the count of non-full slabs, so when we compare
the utilization of the current slab, we can compare it to the average utilization of the non-full
slabs in our bin, instead of the total average of our bin. This takes the full slabs out of the game,
since they're not candidates for migration (neither source nor target).
* Secondly, we add some 12% (100/8) to the decision to defrag an allocation; this is the part
that aims to avoid stagnation, and it's especially important since the above mentioned change
can get us closer to stagnation (see the sketch after this list).
* Thirdly, since jemalloc 5.2 adds sharded bins, we take into account all shards (something
that's missing from the original PR that merged it); this isn't expected to make any difference
since anyway there should be just one shard.
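A simplified sketch of the resulting decision (hypothetical variable names; the real code reads these numbers from the jemalloc bin stats):
```
#include <stddef.h>

/* Defrag an allocation when the utilization of its slab is below the average
 * utilization of the bin's non-full slabs, with a ~12% (1/8) bias added so
 * that equally-utilized slabs still get defragged and we don't stagnate.
 * Assumes non-zero totals. */
static int should_defrag(size_t slab_used, size_t slab_regs,
                         size_t nonfull_used, size_t nonfull_regs) {
    size_t slab_util = slab_used * 1000 / slab_regs;        /* scaled, avoids floats */
    size_t avg_util  = nonfull_used * 1000 / nonfull_regs;  /* non-full slabs only   */
    return slab_util < avg_util + avg_util / 8;
}
```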
How this was benchmarked:
I ran the memefficiency test unit with `--verbose` and compared the defragger hits
and misses the tests reported.
At first, when I took into consideration only the non-full slabs, it got a lot worse (I got into
stagnation, or just got a lot of misses and a lot of hits), but when I added the extra margin I got back
to results that were slightly better than the ones of the jemalloc 5.1 branch, i.e. full defragmentation
was achieved with fewer hits (relocations) and fewer misses (keyspace scans).
Recently we started using list-compress-depth in tests (it was completely untested till now).
Turns out this triggered test failures in external mode, since the tests left the setting enabled
and then it was used in other tests (specifically the fuzzer named "Stress tester for #3343-alike bugs").
This PR fixes the issue of the `recompress` flag being left set by mistake, which caused the code to
later compress the head or tail nodes (which should never be compressed).
The solution is to reset the recompress flag where it should have been reset (when it was decided not to compress).
Additionally, we're adding some assertions and improving the tests in order to catch other similar bugs.
Currently PING returns a different status when the server is not serving data,
for example when `LOADING` or `BUSY`,
but the same was not true for `MASTERDOWN`.
This commit makes PING reply with `MASTERDOWN` when
replica-serve-stale-data=no and the link to the MASTER is down.
Drop the STRALGO command; now LCS is a command of its own and it only works on keys (not input strings).
The motivation is that STRALGO's syntax was really messed up:
- it assumes all (future) string algorithms will take similar arguments
- it mixes a command that takes keys and one that doesn't in the same command
- it makes it nearly impossible to expose the right key spec in COMMAND INFO (an issue for cluster clients)
- it's hard for cluster clients to determine the key names (firstkey, lastkey, etc.)
- it's hard for ACL / flags (is it a read command?)
This is a breaking change.
Moves the `ZPOP ... 0` fast exit path after the type check so it replies with
WRONGTYPE. In the past it would return an empty array.
Also, count is now not allowed to be negative.
See #9680
before:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(empty array)
127.0.0.1:6379> zpopmin zset -1
(empty array)
```
after:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> zpopmin zset -1
(error) ERR value is out of range, must be positive
```
Add --accurate to unit tests (new feature recently added)
Add --no-latency to valgrind run (was present only for modules)
Add --no-latency to macos and freebsd runs (was not present for modules)
Add --timeout to freebsd (same one we have for valgrind)
In the `scan 0 match ""` case, pat is an empty sds (patlen is 0). I don't think we should access the
first character directly in this case (even though the first character is '\0'), so for
code readability, I switched the two positions of the judgment logic.
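A sketch of the reordered check (illustrative, mirroring the idea rather than the exact code):
```
#include <stddef.h>

/* Test patlen first so an empty pattern never dereferences pat[0]. */
static int pattern_matches_all(const char *pat, size_t patlen) {
    return patlen == 1 && pat[0] == '*';
}
```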
Redis supports inserting data over 4GB into a string (and recently for lists too, see #9357),
but the LZF compression used in RDB files (see the `rdbcompression` config) and in quicklist
(see the `list-compress-depth` config) does not support compressing/decompressing data over
UINT32_MAX, which will result in corrupting the rdb after compression.
Internal changes:
1. Modify the `unsigned int` parameter of `lzf_compress/lzf_decompress` to `size_t`.
2. Modify the variable types in `lzf_compress` involving offsets and lengths to `size_t`.
3. Set LZF_USE_OFFSETS to 0.
When LZF_USE_OFFSETS is 1, lzf stores offsets into `LZF_HSLOT` (32 bit).
Even on 64-bit, `LZF_USE_OFFSETS` defaults to 1, because lzf assumes that it only
compresses and decompresses data smaller than UINT32_MAX.
But now that we need to make lzf support 64-bit, turning on `LZF_USE_OFFSETS` would make
it impossible to store 64-bit offsets or pointers.
BTW, disabling LZF_USE_OFFSETS also brings a few performance improvements.
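Roughly, the widened prototypes from internal change 1 look like this (a sketch; the upstream liblzf prototypes use `unsigned int`):
```
#include <stddef.h>

/* Lengths and return values become size_t so inputs larger than
 * UINT32_MAX can be compressed and decompressed. */
size_t lzf_compress(const void *const in_data, size_t in_len,
                    void *out_data, size_t out_len);
size_t lzf_decompress(const void *const in_data, size_t in_len,
                      void *out_data, size_t out_len);
```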
Tests:
1. Add a test for compressing/decompressing a string larger than UINT32_MAX.
2. Add unittest for compress/decompress quicklistNode.
The client flags are a 64 bit integer, but the temporary cached value on the stack of call() is 32 bit.
Luckily this doesn't lead to any bugs, since the only flags used against this variable are below 32 bits.
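A minimal sketch of the fix (illustrative, not the actual call() code):
```
#include <stdint.h>

/* Cache the 64-bit flags in a matching 64-bit local instead of an int,
 * so no current or future high flag bits are silently truncated. */
void call_sketch(uint64_t client_flags) {
    uint64_t client_old_flags = client_flags;   /* was: int client_old_flags */
    (void)client_old_flags;
    /* ... run the command and compare flags before/after ... */
}
```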