In monitor/pubsub mode, if the server closes the connection
(for example, via `CLIENT KILL`), redis-cli exits directly
without printing any error message.
This commit ensures that redis-cli tries to print the
error message before exiting. There is also a minor cleanup
for restart; see the example below.
before:
```
127.0.0.1:6379> monitor
OK
[root@ redis]#
127.0.0.1:6379> subscribe channel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "channel"
3) (integer) 1
[root@ redis]#
127.0.0.1:6379> restart
127.0.0.1:6379> get keyUse 'restart' only in Lua debugging mode.
(nil)
```
after:
```
127.0.0.1:6379> monitor
OK
Error: Server closed the connection
[root@ redis]#
127.0.0.1:6379> subscribe channel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "channel"
3) (integer) 1
Error: Server closed the connection
[root@ redis]#
127.0.0.1:6379> restart
Use 'restart' only in Lua debugging mode.
```
Older versions of GNU Make (<4.3) required quoting of number signs (#) to
avoid them being treated as a comment. Newer versions treat the quoting
backslash as a literal character.
This issue and a proposed solution are discussed here:
https://lists.gnu.org/archive/html/info-gnu/2020-01/msg00004.html
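A minimal sketch of a portable workaround (variable names are illustrative): escaping in plain assignments still works in all versions, so the `#` can be stashed in a variable and expanded inside function calls.
```make
# Portable in all GNU Make versions: in a plain assignment '\#' still
# escapes the comment character, both before and after 4.3.
HASH := \#

# Pre-4.3:    $(shell echo 'foo\#bar') -> "foo#bar"
# 4.3 and up: $(shell echo 'foo\#bar') -> "foo\#bar" (backslash kept)
# Version-agnostic alternative:
result := $(shell echo 'foo$(HASH)bar')
```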
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Creating a fork (or even a foreground SAVE) during a transaction breaks the atomicity of the transaction.
In addition to that, it could mess up the propagated transaction to the AOF file.
This change blocks SAVE, PSYNC, SYNC and SHUTDOWN from being executed inside MULTI-EXEC.
It does that by adding a command flag, so that modules can flag their commands with that flag too.
Besides that, it changes BGSAVE, BGREWRITEAOF, and CONFIG SET appendonly, to turn on the
scheduled flag instead of forking right away.
Other changes:
* expose `protected`, `no-async-loading`, and `no_multi` flags in COMMAND command
* add a test to validate propagation of FLUSHALL inside a transaction.
* add a test to validate how CONFIG SET that errors reacts in a transaction
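For illustration, a sketch of the new behavior (the exact error text is an assumption):
```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379(TX)> SAVE
(error) ERR SAVE is not allowed in transactions
```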
Co-authored-by: Oran Agra <oran@redislabs.com>
This would mean that the effects of `CONFIG SET maxmemory` may not be visible once the command returns.
That could happen anyway since incremental eviction was added in Redis 6.2 (see #7653).
We do this to fix one of the propagation bugs about eviction, see #9890 and #10014.
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
* double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files, classified into two types. One is the `BASE` type;
it represents the full dataset (which may be in AOF or RDB format) as of the last AOFRW, and
there is at most one `BASE` file. The second is the `INCR` type, of which there may be more
than one; they represent the incremental commands since the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
5. Add manifest-related TCL tests, and modify some existing tests that depend on the `appendfilename`
6. Remove the `aof_rewrite_buffer_length` field in info.
7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
It also gives users the opportunity to preserve the history AOFs; for testing use only at this point.
8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times for now),
we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
9. Support upgrading (loading) data from older Redis versions.
10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
`aof-load-truncated` is enabled.
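For illustration, a sketch of the resulting layout with default settings (the manifest line format shown is an approximation):
```
appendonlydir/
    appendonly.aof.1.base.rdb    # BASE: full dataset as of the last AOFRW
    appendonly.aof.1.incr.aof    # INCR: commands arriving since that AOFRW
    appendonly.aof.manifest      # e.g.:
                                 #   file appendonly.aof.1.base.rdb seq 1 type b
                                 #   file appendonly.aof.1.incr.aof seq 1 type i
```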
Co-authored-by: Oran Agra <oran@redislabs.com>
This commit implements sharded pubsub, based on shard channels.
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Since #9166 we have an assertion here to make sure replica clients don't write anything to their buffer.
But in reality a replica may attempt to write data to its buffer simply by sending a command on the replication link.
Such a command will in most cases be rejected since #8868, but it'll still generate an error reply.
Actually the only valid command to send on a replication link is `REPLCONF ACK`, which generates no response.
We want to keep the design so that replicas can send commands, but we need to avoid any situation where we start
putting data in their response buffers, especially since they aren't used anymore. This PR makes sure to disconnect
a rogue client which generated a write on the replication link that caused something to be written to the response buffer.
To recreate the bug this fixes, simply connect via telnet to a redis server, send `sync\r\n`, wait for the payload to
be written, and then send any command (valid or invalid), such as `ping\r\n`, on the telnet connection. It'll crash the server.
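An illustrative transcript of that repro (payload elided):
```
$ telnet 127.0.0.1 6379
sync
$<RDB length and payload>
ping
Connection closed by foreign host.
```
Before the fix, the final `ping` triggered the assertion and crashed the server; after it, the rogue client is simply disconnected.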
To avoid data loss, this commit adds a grace period for lagging replicas to
catch up the replication offset.
Done:
* Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
* Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
to call SHUTDOWN in parallel.
Note that they don't expect a response unless an error happens and shutdown is aborted.
* Log warning for each replica lagging behind when finishing shutdown.
* CLIENT_PAUSE_WRITE while waiting for replicas.
* Configurable grace period 'shutdown-timeout' in seconds (default 10).
* New flags for the SHUTDOWN command:
- NOW disables the grace period for lagging replicas.
- FORCE ignores errors writing the RDB or AOF files which would normally
prevent a shutdown.
- ABORT cancels ongoing shutdown. Can't be combined with other flags.
* New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
value is the remaining maximum time to wait for lagging replicas before
finishing the shutdown. This field is present in the Server section **only**
during shutdown.
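For illustration, a sketch of the new knobs (replies and the timing annotation are assumptions; `SHUTDOWN ABORT` is issued from a second connection):
```
127.0.0.1:6379> CONFIG SET shutdown-timeout 30
OK
127.0.0.1:6379> SHUTDOWN
(blocks for up to 30 seconds while lagging replicas catch up)
127.0.0.1:6379> SHUTDOWN ABORT
OK
```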
Not directly related:
* When shutting down, if there is an AOF saving child, it is killed **even** if AOF
is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
* Client pause now has end time and type (WRITE or ALL) per purpose. The
different pause purposes are *CLIENT PAUSE command*, *failover* and
*shutdown*. If clients are unpaused for one purpose, it doesn't affect client
pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
affect client pause initiated by the failover or shutdown procedures. A completed
failover or a failed shutdown doesn't unpause clients paused by the CLIENT
PAUSE command.
Notes:
* DEBUG RESTART doesn't wait for replicas.
* We already have a warning logged when a replica disconnects. This means that
if any replica connection is lost during the shutdown, it is either logged as
disconnected or as lagging at the time of exit.
Co-authored-by: Oran Agra <oran@redislabs.com>
This is needed in order to ease the deployment of functions for ephemeral cases, where a user
needs to spin up a server with functions pre-loaded.
#### Details:
* Added `--functions-rdb` option to _redis-cli_.
* Functions-only RDB via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
separated inclusion filter for the RDB. In the future it can be `REPLCONF rdb-filter-only
"functions db:3 key-pattern:user*"`, and a complementing `rdb-filter-exclude` `REPLCONF`
can also be added.
* Pass a "slave requirements" specification to the RDB saving code so we can use the same RDB
when different slaves express the same requirements (like functions-only) and not share the
RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
a more complex structure with various filter fields.
* Make sure to support filters only in diskless replication mode (so as not to override the
persistence file); we do that by forcing diskless replication (even if disabled by config).
other changes:
* some refactoring in rdb.c (extract portion of a big function to a sub-function)
* rdb_key_save_delay used in AOFRW too
* sendChildInfo takes the number of updated keys (incremental, rather than absolute)
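An illustrative invocation of the new option (the output path is an example):
```
src/redis-cli --functions-rdb /tmp/functions.rdb
```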
Co-authored-by: Oran Agra <oran@redislabs.com>
Now if redis is still loading when we receive SIGTERM, we will wait for the loading to reach the event
loop (which it does once every 2MB read) before actually shutting down. See #10003.
This change caused valgrind CI to fail.
See https://github.com/redis/redis/runs/4662901673?check_suite_focus=true
This PR mainly solves a problem where the redis process could not exit normally.
When the master is disconnected while replication is doing a diskless load and using `connRead` to read data from the master,
it could enter an infinite retry state, since `connRead` returning 0 (master connection disconnected) was not handled.
There are two changes in this commit:
1. Add -X option to redis-cli.
Currently `-x` can only be used to provide the last argument,
so you can do `redis-cli dump keyname > key.dump`,
and then do `redis-cli -x restore keyname 0 < key.dump`.
But what if you want to add the replace argument (which comes last)?
oran suggested adding such usage:
`redis-cli -X <tag> restore keyname <tag> replace < key.dump`
i.e. you're able to provide a string in the arguments that will be
substituted with the content from stdin.
Note that the tag name should not conflict with other, non-replaced args.
Also, the -x and -X options are mutually exclusive.
Some usages:
```
[root]# echo mypasswd | src/redis-cli -X passwd_tag mset username myname password passwd_tag
OK
[root]# echo username > username.txt
[root]# head -c -1 username.txt | src/redis-cli -X name_tag mget name_tag password
1) "myname"
2) "mypasswd\n"
```
2. Handle the combination of both `-x` and `--cluster` or `-X` and `--cluster`
Extend the broadcast option to receive the last arg or <tag> arg from the stdin.
Now we can use `redis-cli -x --cluster call <host>:<port> cmd`,
or `redis-cli -X <tag> --cluster call <host>:<port> cmd <tag>`.
(support part of #9899)
Add missing information about commands, mainly from reviewing redis-doc and removing
the metadata from it (https://github.com/redis/redis-doc/pull/1722)
* Reintroduces CLUSTER S****S (supported by Redis) but missing from the JSON / docs (related? #9675).
Note that without that json file, the command won't work (breaking change)
* Adds the `replicas` argument (exists in Redis) to `CLIENT KILL`.
* Adds `history` entries to several commands based on redis-doc's man pages.
* Adds `since` to applicable command arguments based on `history` (this basically makes
some of `history` redundant - perhaps at a later stage).
* Uses proper semantic versioning in all version references.
* Also removes `geoencodeCommand` and `geodecodeCommand` header
declarations per b96af595a5.
In the signal handler we now enable the server.shutdown_asap flag instead of just calling `exit()`,
and then catch it in whileBlockedCron(), where we prepare for shutdown correctly.
This is a more organized and safe termination; the old approach was missing these, for example:
1. removal of the pidfile
2. shutdown event to modules
Preventing CONFIG SET maxmemory from propagating is just the tip of the iceberg.
A module that performs a write operation in a notification can cause any
command to be propagated, based on server.dirty.
We need to come up with a better solution.
Following #9656, this script generates a "commands.json" file from the output
of the new COMMAND. The output of this script is used in redis/redis-doc#1714
and by redis/redis-io#259. This also converts a couple of rogue dashes (in
'key-specs' and 'multiple-token' flags) to underscores (continues #9959).
Follow the conclusions to support Functions in redis cluster (#9899)
Added 2 new FUNCTION sub-commands:
1. `FUNCTION DUMP` - dump a binary payload representation of all the functions.
2. `FUNCTION RESTORE <PAYLOAD> [FLUSH|APPEND|REPLACE]` - given the binary payload extracted
using `FUNCTION DUMP`, restore all the functions in the payload. A restore policy can be given to
control how to handle existing functions (default is APPEND):
* FLUSH: delete all existing functions.
* APPEND: appends the restored functions to the existing functions. On collision, abort.
* REPLACE: appends the restored functions to the existing functions. On collision,
replace the old function with the new function.
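An illustrative flow for copying functions between two nodes (payload elided):
```
127.0.0.1:6379> FUNCTION DUMP
"<binary payload>"
127.0.0.1:6380> FUNCTION RESTORE "<binary payload>" REPLACE
OK
```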
Modify `redis-cli --cluster add-node` to use `FUNCTION DUMP` to get existing functions from
one of the nodes in the cluster, and `FUNCTION RESTORE` to load the same set of functions
to the new node. `redis-cli` will execute this step before sending the `CLUSTER MEET` command
to the new node. If `FUNCTION DUMP` returns an error, assume the current Redis version does not
support functions and skip `FUNCTION RESTORE`. If `FUNCTION RESTORE` fails, abort and do not send
the `CLUSTER MEET` command. If the new node already contains functions (before the `FUNCTION RESTORE`
is sent), abort and do not add the node to the cluster. Test was added to verify
`redis-cli --cluster add-node` works as expected.
Use case-insensitive string comparison for function names (like we do for commands and configs).
In addition, add verification that function names only use the following characters: [a-zA-Z0-9_]
Introduce `redis-cli --json` option.
CSV doesn't support the Map type, so parsing SLOWLOG output or HMSET with multiple args isn't helpful.
By default `--json` implies RESP3, which makes it much more useful, and a `-2`
option was added to force RESP2.
When `HELLO 3` fails, it prints a warning message (which can be silenced with `-2`).
If a user passed `-3` explicitly, the non-interactive mode will also exit with an error without
running the command, while an interactive session will keep running after printing the warning.
JSON output would be helpful to parse Redis replies with other tools like jq.
```
redis-cli --json slowlog get | jq
[
  [
    1,
    1639677545,
    322362,
    [
      "HMSET",
      "dummy-key",
      "field1",
      "123,456,789... (152 more bytes)",
      "field2",
      "111,222,333... (140 more bytes)",
      "field3",
      "... (349 more arguments)"
    ],
    "127.0.0.1:49312",
    ""
  ]
]
```
The mess:
Some parts use alsoPropagate for late propagation while others use immediate propagation (propagate()),
causing edge cases, ugly/hacky code, and a tendency for bugs.
The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
top-most call() is responsible for going over that list and actually propagating them (and wrapping
them in MULTI/EXEC if there's more than one command). This is done in the new function,
propagatePendingCommands.
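To make the flow concrete, here is a simplified sketch of what such a function might look like, based on the description above (names like `redisOp`, `propagateNow`, and `shared.multi` follow the surrounding text and Redis conventions; this is not the verbatim implementation):
```c
/* Simplified sketch: flush everything accumulated via alsoPropagate(),
 * wrapping in MULTI/EXEC when more than one command is pending, so that
 * replicas and the AOF see the commands atomically and in order. */
void propagatePendingCommands(void) {
    if (server.also_propagate.numops == 0) return;

    int wrap = server.also_propagate.numops > 1;
    if (wrap)
        propagateNow(-1, &shared.multi, 1, PROPAGATE_AOF|PROPAGATE_REPL);

    for (int j = 0; j < server.also_propagate.numops; j++) {
        redisOp *rop = &server.also_propagate.ops[j];
        propagateNow(rop->dbid, rop->argv, rop->argc, rop->target);
    }

    if (wrap)
        propagateNow(-1, &shared.exec, 1, PROPAGATE_AOF|PROPAGATE_REPL);

    /* The pending list was consumed; reset it for the next top-most call(). */
    redisOpArrayFree(&server.also_propagate);
}
```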
Callers to propagatePendingCommands:
1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
one to propagate them) - via `afterCommand`
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
the deletion explicitly.
4. cron stuff: active-expire and eviction may also propagate stuff
5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
context may cause propagation.
6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
releasing the GIL.
A "known limitation", which were actually a bug, was fixed because of this commit (see propagate.tcl):
When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
first all the commands from RM_Call, and then the ones from RM_Replicate
Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one
write command the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
Not anymore.
This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and replicas
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove
Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules.
We need to prevent it from propagating to AOF/replicas.
2. We need to set current_client in RM_Call. buggy scenario:
- CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
- assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
3. minor: in eviction, call propagateDeletion after notification, like active-expire and all commands
(we always send a notification before propagating the command)
## background
Till now CONFIG SET was blocked during loading.
(In the not so distant past, GET was disallowed too)
We recently (not released yet) added an async-loading mode, see #9323,
and during that time it'll serve CONFIG SET and any other command.
And now we realized (#9770) that some configs, and commands are dangerous
during async-loading.
## changes
* Allow most CONFIG SET during loading (both on async-loading and normal loading)
* Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
* Block a few configs during loading (`appendonly`, `repl-diskless-load`, and `dir`)
* Block a few commands during loading (list below)
## the blocked commands:
* SAVE - obviously we don't wanna start a foreground save during loading 8-)
* BGSAVE - we don't mind to schedule one, but we don't wanna fork now
* BGREWRITEAOF - we don't mind to schedule one, but we don't wanna fork now
* MODULE - we obviously don't wanna unload a module during replication / rdb loading
(MODULE HELP and MODULE LIST are not blocked)
* SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
requests now.
* REPLICAOF / SLAVEOF - we're in the middle of replicating, maybe it makes sense to let
the user abort it, but they couldn't do that so far; I don't wanna take any risk of bugs due to odd state.
* CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
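For example, a now-blocked command issued during loading is rejected up front with the standard loading error (illustrative):
```
127.0.0.1:6379> SAVE
(error) LOADING Redis is loading the dataset in memory
```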
## other fixes
* processEventsWhileBlocked had an issue when being nested; this could happen with a busy script
during async loading (new), but also with a busy script during AOF loading (old). This led to a crash in
the scenario described in #6988.
The issue with MAY_REPLICATE is that all automatic mechanisms to handle
write commands will not work. This requires special treatment for:
* Not allow those commands to be executed on RO replica.
* Allow those commands to be executed on RO replica from primary connection.
* Allow those commands to be executed on the RO replica from AOF.
By setting those commands as WRITE commands we are getting all those properties from Redis.
Test was added to verify that those properties work as expected.
In addition, rearrange when and where functions are flushed. Before this PR functions were
flushed manually in `rdbLoadRio` and cleaned manually on failure. This contradicts the
assumption that functions are data and need to be created/deleted alongside the
data. As a side effect of this, for example, `debug reload noflush` did not flush the data but
did flush the functions, and `debug loadaof` flushed the data but not the functions.
This PR moves function deletion into `emptyDb`. `emptyDb` (renamed to `emptyData`) will
now accept an additional flag, `NOFUNCTIONS`, which specifically indicates that we do not
want to flush the functions (in all other cases, functions will be flushed). The new flag is used
on FLUSHALL and FLUSHDB only! Tests were added to `debug reload` and `debug loadaof`
to verify that functions behave the same as the data.
Notice that because functions are now deleted alongside the data we cannot allow
`CLUSTER RESET` to be called from within a function (it would cause the function to be released
while running); this PR adds the `NO_SCRIPT` flag to `CLUSTER RESET` so it cannot
be called from within a function. The other cluster commands are allowed from within a
function (there are use-cases that use `GETKEYSINSLOT` to iterate over all the keys in a
given slot). A test was added to verify `CLUSTER RESET` is denied from within a script.
Another small change on this PR is that `RDBFLAGS_ALLOW_DUP` is also applicable on functions.
When loading functions, if this flag is set, we will replace old functions with new ones on collisions.
# Background
The main goal of this PR is to remove relevant logics on Lua script verbatim replication,
only keeping effects replication logic, which has been set as default since Redis 5.0.
As a result, Lua in Redis 7.0 would be acting the same as Redis 6.0 with default
configuration from users' point of view.
There are lots of reasons to remove verbatim replication.
Antirez has listed some of the benefits in Issue #5292:
>1. No longer need to explain to users side effects into scripts.
They can do whatever they want.
>2. No need for a cache about scripts that we sent or not to the slaves.
>3. No need to sort the output of certain commands inside scripts
(SMEMBERS and others): this both simplifies and gains speed.
>4. No need to store scripts inside the RDB file in order to startup correctly.
>5. No problems about evicting keys during the script execution.
When looking back at Redis 5.0, antirez and the core team decided to set the config
`lua-replicate-commands yes` by default instead of removing verbatim replication
directly, in case bad situations happened. Three years later, ahead of Redis 7.0,
it's time to remove it formally.
# Changes
- configuration for lua-replicate-commands removed
- created config file stub for backward compatibility
- Replication script cache removed
- this is useless under script effects replication
- relevant statistics also removed
- script persistence in RDB files is also removed
- Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
- Deterministic execution logic in scripts removed (i.e. the logic that refused to run write commands
after random ones and that sorted the output of commands with random order)
- the flags indicating which commands have non-deterministic results are kept as hints to clients.
- `redis.replicate_commands()` & `redis.set_repl()` changed
- now `redis.replicate_commands()` does nothing and returns 1
- ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
- Relevant TCL cases adjusted
- DEBUG lua-always-replicate-commands removed
# Other changes
- Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
Co-authored-by: Oran Agra <oran@redislabs.com>
Recent PRs have introduced some failures; this commit
tries to fix these CI failures. Here are the changes:
1. Enable debug-command in sentinel test.
```
Master reboot in very short time: ERR DEBUG command not allowed. If the
enable-debug-command option is set to "local", you can run it from a
local connection, otherwise you need to set this option in the
configuration file, and then restart the server.
```
2. Enable protected-config in sentinel test.
```
SDOWN is triggered by misconfigured instance replying with errors: ERR
CONFIG SET failed (possibly related to argument 'dir') - can't set
protected config
```
3. Enable debug-command in cluster test.
```
Verify slaves consistency: ERR DEBUG command not allowed. If the
enable-debug-command option is set to "local", you can run it from a
local connection, otherwise you need to set this option in the
configuration file, and then restart the server.
```
4. quicklist fill should be a signed int.
The reason for the modification is to eliminate the warning.
Modify `int fill: QL_FILL_BITS` to `signed int fill: QL_FILL_BITS`
The first three were introduced at #9920 (same issue).
And the last one was introduced at #9962.
1. Local variable 's' hides a parameter of the same name.
```
int anetTcpAccept(char *err, int s, char *ip, size_t ip_len, int *port) {
    if ((fd = anetGenericAccept(err,s,(struct sockaddr*)&sa,&salen)) == ANET_ERR) {}
}
```
Change the parameter name from `s` to `serversock`,
also unifying it with the header file definition.
2. Comparison is always false because i <= 48.
```
for (i = 0; i < DICT_STATS_VECTLEN-1; i++) { // i < 49
    (i == DICT_STATS_VECTLEN-1)?">= ":"", // i == 49
}
```
`i == DICT_STATS_VECTLEN-1` always evaluates to false; it is dead code.
3. Empty block without comment.
`else if (!strcasecmp(opt,"ch")) {}`, add a comment to avoid warnings.
4. Bit field fill of type int should have explicitly unsigned integral, explicitly signed integral, or enumeration type.
Modify `int fill: QL_FILL_BITS;` to `unsigned int fill: QL_FILL_BITS;`
5. The result of this call to reconnectingRedisCommand is not checked for null, but 80% of calls to reconnectingRedisCommand check for null.
Just a cleanup job like others.
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
Currently we add this flag to the `dbfilename` and `dir` configs,
all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command
These have a default value set to `no`, so that these features are not
exposed by default to client connections, and can only be set by modifying the config file.
Users can change each of these to either `yes` (allow all access), or `local` (allow access from
local TCP connections and unix domain connections).
Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
I.e. we don't consider blocking the DEBUG command to be an issue (people shouldn't have been using it),
and the few configs we protected are unlikely to have been set at runtime anyway.
On the other hand, it's reasonable to assume that users who use modules load them from the config file anyway.
Note that's the whole point of this PR: for redis to be more secure by default and reduce the attack surface on
innocent users, so secure defaults will necessarily mean a breaking change.
Some languages can build a JSON-like object by parsing textual JSON,
but this works poorly when attributes contain hyphens.
Example in JS:
```
let j = JSON.parse(json)
j['key-name'] // works
j.key-name    // illegal syntax
```
Introduce memory management on cluster link buffers:
* Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
* Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
* Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
* Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.
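An illustrative session (the reply, abbreviated and with a placeholder node id, is an array of per-link maps; the exact field names are an assumption here):
```
127.0.0.1:7000> CONFIG SET cluster-link-sendbuf-limit 134217728
OK
127.0.0.1:7000> CLUSTER LINKS
1) 1) "direction"
   2) "to"
   3) "node"
   4) "<peer node id>"
   5) "send-buffer-allocated"
   6) (integer) 4096
   7) "send-buffer-used"
   8) (integer) 0
```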
Added the `FUNCTION FLUSH` command. The new sub-command deletes all the functions.
An optional `[SYNC|ASYNC]` argument can be given to control whether to flush the
functions synchronously or asynchronously. If not given, the default flush mode is chosen by
the `lazyfree-lazy-user-flush` configuration value.
Add the missing `functions.tcl` test to the list of tests that are executed in test_helper.tcl,
and call FUNCTION FLUSH in between servers in external mode.
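A minimal illustration of the new sub-command:
```
127.0.0.1:6379> FUNCTION FLUSH ASYNC
OK
```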
Sort the sub-commands so that every time we execute the script it generates the exact same results.
This will cause fewer merge conflicts if two PRs edit different json files.
also:
* make the script agnostic to where it is executed (more flexible).
* add documentation about commands.c and the json files in the readme.
Co-authored-by: Oran Agra <oran@redislabs.com>
Support doing `CONFIG GET <x> <y> <z>`, where each argument can also be
a pattern with wildcards.
This avoids duplicates in the result by looping over the configs and, for
each one, checking all the patterns; once a match is found for a pattern
we move on to the next config.
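An illustrative session (values shown are defaults and may differ):
```
127.0.0.1:6379> CONFIG GET maxmemory maxmemory-policy appendonly
1) "maxmemory"
2) "0"
3) "maxmemory-policy"
4) "noeviction"
5) "appendonly"
6) "no"
```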
Delete the hardcoded command table and replace it with an auto-generated table, based
on a JSON file that describes the commands (each command must have a JSON file).
These JSON files are the SSOT of everything there is to know about Redis commands,
and it is reflected fully in COMMAND INFO.
These JSON files are used to generate commands.c (using a python script), which is then
committed to the repo and compiled.
The purpose is:
* Clients and proxies will be able to get much more info from redis, instead of relying on hard-coded logic.
* drop the dependency between Redis users and the commands.json in redis-doc.
* delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be
done in a separate PR)
* redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release
artifacts should be a large JSON, containing all the information about all of the commands, which will be
generated from COMMAND's reply)
* the byproduct of this is:
* module commands will be able to provide that info and possibly be more of first-class citizens
* in theory, one may be able to generate a redis client library for a strictly typed language, by using this info.
### Interface changes
#### COMMAND INFO's reply change (and arg-less COMMAND)
Before this commit the reply at index 7 contained the key-specs list
and reply at index 8 contained the sub-commands list (Both unreleased).
Now, reply at index 7 is a map of:
- summary - short command description
- since - debut version
- group - command group
- complexity - complexity string
- doc-flags - flags used for documentation (e.g. "deprecated")
- deprecated-since - if deprecated, from which version?
- replaced-by - if deprecated, which command replaced it?
- history - a list of (version, what-changed) tuples
- hints - a list of strings, meant to provide hints for clients/proxies. see https://github.com/redis/redis/issues/9876
- arguments - an array of arguments. each element is a map, with the possibility of nesting (sub-arguments)
- key-specs - an array of keys specs (already in unstable, just changed location)
- subcommands - a list of sub-commands (already in unstable, just changed location)
- reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)
more details on these can be found in https://github.com/redis/redis-doc/pull/1697
only the first three fields are mandatory
#### API changes (unreleased API obviously)
now they take a RedisModuleCommand opaque pointer instead of looking up the command by name
- RM_CreateSubcommand
- RM_AddCommandKeySpec
- RM_SetCommandKeySpecBeginSearchIndex
- RM_SetCommandKeySpecBeginSearchKeyword
- RM_SetCommandKeySpecFindKeysRange
- RM_SetCommandKeySpecFindKeysKeynum
Currently, we have not added a module API to provide additional information about commands because
we couldn't agree on how the API should look, see https://github.com/redis/redis/issues/9944.
### Somewhat related changes
1. Literals should be in uppercase while placeholders are in lowercase. Now all the GEO* commands
will be documented with M|KM|FT|MI and can take both lowercase and uppercase.
### Unrelated changes
1. Bugfix: no_mandatory_keys was absent in COMMAND's reply
2. expose CMD_MODULE as "module" via COMMAND
3. have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)
Co-authored-by: Itamar Haber <itamar@garantiadata.com>
Functions are considered data, so changing a function should be counted
as a data change and should affect the persistence policy. For example
if we want to persist an RDB each minute if there was a single change,
functions should be counted as such a change. Using `forceCommandPropagation`
will not give us the desired effect, so we must switch to `server.dirty++`.
When CONFIG SET fails, print the name of the config that failed.
This is helpful since CONFIG SET is now variadic.
However, there are cases where several configs share the same apply
function, and we can't be sure which one of them caused the failure.
This caused a crash when adding elements larger than 2GB to a set (same goes for hash keys). See #8455.
Details:
* The fix makes the dict hash functions receive a `size_t` instead of an `int` (see the sketch after this list). In practice the dict hash
functions call siphash, which receives a `size_t`, and the callers to the hash function pass a `size_t` to it, so the fix is trivial.
* The issue was recreated by attempting to add a >2gb value to a set. Appropriate tests were added where I create
a set with large elements and check basic functionality on it (SADD, SCARD, SPOP, etc...).
* When I added the tests I also refactored a bit all the test code which is run under the `--large-memory` flag.
This removed code duplication around the test framework's `write_big_bulk` helper and also takes
care of not allocating the test framework's huge helper string used by these tests when not run under `--large-memory`.
* I also added the _violations.tcl_ unit tests to be part of the entire test suite and cleaned up non-relevant list-related
tests that were in there. This was done in this PR because most of the _violations_ tests are "large memory" tests.
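A minimal sketch of the interface change (function names as in Redis's dict.h; simplified):
```c
#include <stddef.h>
#include <stdint.h>

/* Before the fix the length parameter was an int, so elements larger than
 * 2GB truncated/overflowed on their way to siphash. */
uint64_t dictGenHashFunction(const void *key, size_t len);              /* was: int len */
uint64_t dictGenCaseHashFunction(const unsigned char *buf, size_t len); /* was: int len */
```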
When RDB loading creates a consumer without first checking whether it already exists,
it may return NULL and crash if it encounters corrupt data with duplicate consumers.
Added `HIDDEN_CONFIG` to hide debug / dev / testing configs from CONFIG GET
when it is used with a wildcard.
These are not documented in redis.conf so now CONFIG GET only works when they
are explicitly specified.
The current configs are:
```
key-load-delay
loading-process-events-interval-bytes
rdb-key-save-delay
use-exit-on-panic
watchdog-period
```
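Illustrative behavior (replies are an assumption):
```
127.0.0.1:6379> CONFIG GET watchdog-*
(empty array)
127.0.0.1:6379> CONFIG GET watchdog-period
1) "watchdog-period"
2) "0"
```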
For `SENTINEL SET`, we can use it in these ways:
1. SENTINEL SET mymaster quorum 3
2. SENTINEL SET mymaster quorum 5 parallel-syncs 1
For `SENTINEL SIMULATE-FAILURE`, although it is only used for testing:
1. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION
2. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION CRASH-AFTER-PROMOTION
When disabling redis oom-score-adj management we restore the
base value read before enabling oom-score-adj management.
This fixes an issue introduced in #9748 where updating
`oom-score-adj-values` while `oom-score-adj` was set to `no`
would write the base oom score adj value read on startup to `/proc`.
This is a bug since while `oom-score-adj` is disabled we should
never write to proc and let external processes manage it.
Added appropriate tests.
# Redis Function
This PR added the Redis Functions capabilities that were suggested in #8693.
The PR also introduces a big refactoring of the current Lua implementation
(i.e. `scripting.c`). The main purpose of the refactoring is to have better
code sharing between the Lua implementation that exists today on Redis
(`scripting.c`) and the new Lua engine that is introduced on this PR.
The refactoring includes code movements and file name changes as well as some
logic changes that need to be carefully reviewed. To make the review easier,
the PR was split into multiple commits. Each commit is deeply described later on
but the main concept is that some commits are just moving code around without
making any logical changes, those commits are less likely to cause any issues
or regressions and can be reviewed fast. Other commits, which perform code and
logic changes, need to be reviewed carefully, but those commits were created
after the code movements so it's pretty easy to see what was changed. To sum up,
it is highly recommended to review this PR commit by commit as it will be easier
to see the changes, it is also recommended to read each commit description
(written below) to understand what was changed in the commit and whether
it's just a huge code movement or a logic change.
## Terminology
Currently, the terminology in Redis is not clearly defined. Scripts refer to Lua
scripts and eval also refers only to Lua. Introducing Redis Function requires
redefining those terms to be able to clearly understand what is been discussed
on each context.
* eval - legacy Lua script implementation.
* Function - new scripting implementation (currently implemented in Lua but in
the future, it might be other languages like JavaScript).
* Engine - the component that is responsible for executing functions.
* Script - Function or legacy Lua (executed with `eval` or `evalsha`)
## Refactoring New Structure
Today, the entire scripting logic is located on `scripting.c`. This logic can
be split into 3 main groups:
1. Script management - responsible for storing the scripts that were sent to
Redis and retrieving them when they need to be run (based on the script sha
in the current implementation).
2. Script invocation - invoke the script given on `eval` or `evalsha` command
(this part includes finding the relevant script, preparing the arguments, ..)
3. Interact back with Redis (command invocation)
Those 3 groups are tightly coupled in `scripting.c`. Redis Functions also needs
to use those groups' logic, for example, to interact back with Redis or to
execute Lua code. The refactoring attempts to split those 3 groups and define
APIs so that we can reuse the code both on legacy Lua scripts and Redis Functions.
In order to do so we define the following units:
1. script.c: responsible for interaction with Redis from within a script.
2. script_lua.c: responsible to execute Lua code, uses `script.c` to interact
with Redis from within the Lua code.
3. function_lua.c: contains the Lua engine implementation, uses `script_lua.c`
to execute the Lua code.
4. functions.c: contains the Redis Functions implementation (the `FUNCTION` command),
uses `function_lua.c` if the function it wants to invoke needs the Lua
engine.
5. eval.c: the original `scripting.c`; contains the Lua legacy implementation and
was refactored to use `script_lua.c` to invoke the Lua code.
## Commits breakdown
Notice: Some small commits are omitted from this list as they are small and
insignificant (for example build fixes)
### First commit - code movements
This commit renames `scripting.c` -> `eval.c` and introduces the new `script_lua.c`
unit. The commit moves relevant code from `eval.c` (`scripting.c`) to
`script_lua.c`, the purpose of moving the code is so that later we will be able
to re-use the code on the Lua engine (`function_lua.c`). The commit only moves
the code without modifying even a single line, so there is a very low risk of
breaking anything and it also makes it much easier to see the changes on the
following commits.
Because the commit does not change the code (only moves it), it does not compile.
But we do not care about that, as the only purpose here is to make the review
process simpler.
### Second commit - move legacy Lua variables into `eval.c`
Today, all Lua-related variables are located on the server struct. The commit
attempts to identify those variables and take them out of the server struct,
leaving only script-related variables (variables that later need to be used
also by engines).
The following variables were renamed and left on the server struct:
* lua_caller -> script_caller
* lua_time_limit -> script_time_limit
* lua_timedout -> script_timedout
* lua_oom -> script_oom
* lua_disable_deny_script -> script_disable_deny_script
* in_eval -> in_script
The following variables were moved to lctx under eval.c:
* lua
* lua_client
* lua_cur_script
* lua_scripts
* lua_scripts_mem
* lua_replicate_commands
* lua_write_dirty
* lua_random_dirty
* lua_multi_emitted
* lua_repl
* lua_kill
* lua_time_start
* lua_time_snapshot
This commit carries a low risk of introducing any issues as it is just moving
variables around and not changing any logic.
### Third commit - introducing script unit
This commit introduces the `script.c` unit. Its purpose (as described above) is
to provide an API for scripts to interact with Redis. Interaction includes
mostly executing commands, but also other functionalities. The interaction is
done using a `ScriptRunCtx` object that needs to be created by the user and
initialized using `scriptPrepareForRun`. A detailed list of functionalities
exposed by the unit:
1. Calling commands (including all the validation checks such as
acl, cluster, read only run, ...)
2. Set Resp
3. Set Replication method (AOF/REPLICATION/NONE)
4. Call Redis back on long-running scripts to allow Redis to reply to clients
and perform script kill
The commit introduces the new unit and uses it on eval commands to interact with
Redis.
### Fourth commit - Moved functionality of invoke Lua code to `script_lua.c`
This commit moves the logic of invoking the Lua code into `script_lua.c` so
later it can be used also by Lua engine (`function_lua.c`). The code is located
in the `callFunction` function and assumes the Lua function is already located on the
top of the Lua stack. This commit also changes `eval.c` to use the new
functionality to invoke Lua code.
### Fifth commit - Added Redis Functions unit (`functions.c`) and Lua engine
(`function_lua.c`)
Added the Redis Functions unit under `functions.c`, including:
1. FUNCTION command:
* FUNCTION CREATE
* FUNCTION CALL
* FUNCTION DELETE
* FUNCTION KILL
* FUNCTION INFO
* FUNCTION STATS
2. Register engines
In addition, this commit introduces the first engine that uses the Redis
Functions capabilities, the Lua engine (`function_lua.c`)
## API Changes
### `lua-time-limit`
configuration was renamed to `script-time-limit` (keep `lua-time-limit` as alias
for backward compatibility).
### Error log changes
When integrating with Redis from within a Lua script, the `Lua` term was removed
from all the error messages and instead we write only `script`. For example:
`Wrong number of args calling Redis command From Lua script` -> `Wrong number
of args calling Redis command From script`
### `info memory` changes:
Before stating all the changes made to memory stats we will try to explain the
reason behind them and what we want to see on those metrics:
* memory metrics should show both totals (for all scripting frameworks), as well
as a breakdown per framework / vm.
* The totals metrics should have "human" variants while the breakdown shouldn't.
* We did try to maintain backward compatibility in some way, that said we did
make some repurpose to existing metrics where it looks reasonable.
* We separate between memory used by the script framework (part of redis's
used_memory), and memory used by the VM (not part of redis's used_memory)
A full breakdown of `info memory` changes:
* `used_memory_lua` and `used_memory_lua_human` were deprecated;
`used_memory_vm_eval` has the same meaning as `used_memory_lua`
* `used_memory_scripts` was renamed to `used_memory_scripts_eval`
* `used_memory_scripts` and `used_memory_scripts_human` were repurposed and now
return the total memory used by functions and eval (not including vm memory,
only code cache, and structs).
* `used_memory_vm_function` was added and represents the total memory used by
function vm's
* `used_memory_functions` was added and represents the total memory used by functions
(not including vm memory, only code cache, and structs)
* `used_memory_vm_total` and `used_memory_vm_total_human` were added and
represent the total memory used by vm's (functions and eval combined)
### `functions.caches`
`functions.caches` field was added to `memory stats`, representing the memory
used by engines that are not functions (this memory includes data structures
like dictionaries, arrays, ...)
## New API
### FUNCTION CREATE
Usage: FUNCTION CREATE `ENGINE` `NAME` `[REPLACE]` `[DESC <DESCRIPTION>]` `<CODE>`
* `ENGINE` - The name of the engine to use to create the script.
* `NAME` - the name of the function that can be used later to call the function
using `FUNCTION CALL` command.
* `REPLACE` - if given, replace the given function with the existing function
(if exists).
* `DESCRIPTION` - optional argument describing the function and what it does
* `CODE` - function code.
The command will return `OK` if created successfully or error in the following
cases:
* The given engine name does not exist
* The function name is already taken and `REPLACE` was not used.
* The given function failed on the compilation.
### FCALL and FCALL_RO
Usage: FCALL/FCALL_RO `NAME` `NUM_KEYS` `key1 key2 ...` `arg1 arg2 ...`
Call and execute the function specified by `NAME`. The function will receive
all arguments given after `NUM_KEYS`. The return value from the function will
be returned to the user as a result.
* `NAME` - Name of the function to run.
* The rest is as today with EVALSHA command.
The command will return an error in the following cases:
* `NAME` does not exist
* The function itself returned an error.
The `FCALL_RO` is equivalent to `EVAL_RO` and allows only read-only commands to
be invoked from the script.
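An illustrative session against this (pre-release) API, assuming the Lua engine accepts a bare chunk as the function code:
```
127.0.0.1:6379> FUNCTION CREATE LUA myfunc "return 1"
OK
127.0.0.1:6379> FCALL myfunc 0
(integer) 1
```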
### FUNCTION DELETE
Usage: FUNCTION DELETE `NAME`
Delete a function identified by `NAME`. Return `OK` on success or error on one
of the following:
* The given function does not exist
### FUNCTION INFO
Usage: FUNCTION INFO `NAME` [WITHCODE]
Return information about a function by function name:
* Function name
* Engine name
* Description
* Raw code (only if WITHCODE argument is given)
### FUNCTION LIST
Usage: FUNCTION LIST
Return general information about all the functions:
* Function name
* Engine name
* Description
### FUNCTION STATS
Usage: FUNCTION STATS
Return information about the current running function:
* Function name
* Command that was used to invoke the function
* Duration in MS that the function is already running
If no function is currently running, this section is just a RESP nil.
Additionally, return a list of all the available engines.
### FUNCTION KILL
Usage: `FUNCTION KILL`
Kill the currently executing function. The command will fail if the function
already initiated a write command.
## Notes
Note: Function creation/deletion is replicated to AOF, but AOFRW is not
implemented since it's going to be removed: #9794
The Redis Functions unit is located inside functions.c
and contains the Redis Functions implementation:
1. FUNCTION commands:
* FUNCTION CREATE
* FCALL
* FCALL_RO
* FUNCTION DELETE
* FUNCTION KILL
* FUNCTION INFO
2. Register engine
In addition, this commit introduces the first engine
that uses the Redis Function capabilities, the
Lua engine.
When we use monitor in redis-cli but encounter an error reply,
we get stuck until we press Ctrl-C to quit.
This is a harmless bug. Fixing it might be useful if we add parameters
to monitor in the future, such as monitoring only a selected db.
before:
```
127.0.0.1:6379> monitor wrong
(error) ERR wrong number of arguments for 'monitor' command or subcommand
^C(9.98s)
127.0.0.1:6379>
```
after:
```
127.0.0.1:6379> monitor wrong
(error) ERR wrong number of arguments for 'monitor' command or subcommand
127.0.0.1:6379>
```
The functionality was moved to script_lua.c under the
callFunction function. Its purpose is to call the Lua
function already located on the top of the Lua stack.
Used the new function in eval.c to invoke Lua code.
The function will also be used to invoke Lua
code on the Lua engine.
The script unit is a new unit located in script.c.
Its purpose is to provide an API for functions (and eval)
to interact with Redis. Interaction includes mostly
executing commands, but also functionalities like calling
Redis back on long scripts or checking if the script was killed.
The interaction is done using a scriptRunCtx object that
needs to be created by the user and initialized using scriptPrepareForRun.
A detailed list of functionalities exposed by the unit:
1. Calling commands (including all the validation checks such as
acl, cluster, read only run, ...)
2. Set Resp
3. Set Replication method (AOF/REPLICATION/NONE)
4. Call Redis back on long-running scripts to allow Redis to reply
to clients and perform script kill
The commit introduces the new unit and uses it on eval commands to
interact with Redis.
The following variables were renamed:
1. lua_caller -> script_caller
2. lua_time_limit -> script_time_limit
3. lua_timedout -> script_timedout
4. lua_oom -> script_oom
5. lua_disable_deny_script -> script_disable_deny_script
6. in_eval -> in_script
The following variables were moved to lctx under eval.c:
1. lua
2. lua_client
3. lua_cur_script
4. lua_scripts
5. lua_scripts_mem
6. lua_replicate_commands
7. lua_write_dirty
8. lua_random_dirty
9. lua_multi_emitted
10. lua_repl
11. lua_kill
12. lua_time_start
13. lua_time_snapshot
This commit carries a low risk of introducing any issues as it
is just moving variables around and not changing any logic.
This commit only moves code around without changing it.
The reason behind this is to make review process easier
by allowing the reviewer to simply ignore all code movements.
changes:
1. rename scripting.c to eval.c
2. introduce a new file, script_lua.c, and move parts of the Lua
functionality to this new file. script_lua.c will eventually
contain the shared code between legacy lua and the lua engine.
This commit does not compile, on purpose. Its only goal is to move
code and rename files.
We can now do: `config set maxmemory 10m repl-backlog-size 5m`
## Basic algorithm to support "transaction like" config sets:
1. Backup all relevant current values (via get).
2. Run "verify" and "set" on everything, if we fail run "restore".
3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
4. Return success.
### restore
1. Run set on everything in backup. If we fail log it and continue (this puts us in an undefined
state but we decided it's better than the alternative of panicking). This indicates either a bug
or some unsupported external state.
2. Run apply on everything in backup (optimization: skip functions already run). If we fail log
it (see comment above).
3. Return error.
## Implementation/design changes:
* Apply functions are idempotent (they have no effect if they are run more than once for the same config).
* No indication in set functions if we're reading the config or running from the `CONFIG SET` command
(removed `update` argument).
* Set function should set some config variable and assume an (optional) apply function will use that
later to apply. If we know this setting can be safely applied immediately and can always be reverted
and doesn't depend on any other configuration we can apply immediately from within the set function
(and not store the setting anywhere). This is the case of this `dir` config, for example, which has no
apply function. No apply function is need also in the case that setting the variable in the `server` struct
is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`,
which received the old and new values was removed and replaced by the optional apply function.
* Apply functions use settings written to the `server` struct and don't receive any inputs.
* I take care that for the generic (non-special) configs if there's no change I avoid calling the setter (possible
optimization: avoid calling the apply function as well).
* Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
value1 my-setting value2`.
Note that getting `save` in the context of the conf file parsing to work here as before was a pain.
The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
save params. This is unlike any other line in the config file where each line overwrites any previous
configuration. Since we now support passing multiple save params in a single line (see top comments
about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
this config line and perhaps reduce this ugly code in the future.
Adds the ability to autogenerate the sequence part of the millisecond-only explicit ID specified for `XADD`. This is useful in case added entries have an externally-provided timestamp without sub-millisecond resolution.
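For example (illustrative; the auto-generated sequence depends on the last entry in the stream):
```
127.0.0.1:6379> XADD mystream 1526919030474-* field1 value1
"1526919030474-0"
```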
Add master-reboot-down-after-period as a configurable parameter, to make it possible to trigger a failover from a master that is responding with `-LOADING` for a long time after being restarted.
The issue can only happen with a bad Lua script that claims to return
a big number while actually returning data which is not a big number (contains
chars that are not digits). Such a thing will not cause an issue unless the big
number value contains `\r\n`, and then it messes up the resp3 structure. The fix
replaces all the appearances of '\r\n' with spaces.
Such an issue can also happen on simple string or error replies, but those
already handle it the same way this PR does (replace `\r\n` with spaces).
Other reply types are not vulnerable to this issue because they do not rely
on free text that is terminated with `\r\n` (either they contain the
bulk length, like a string reply, or they are typed replies that cannot inject free
text, like boolean or number).
The issue only exists on the unstable branch; big number replies from Lua scripts
were not yet added to any official release.
* Fix CLIENT KILL killing all clients when given id 0 or with skipme
CLIENT KILL with the ID argument should only kill the client with the provided ID. In the old code,
CLIENT KILL with id 0 would kill all the connected clients.
Co-authored-by: Ofir Luzon <ofirluzon@gmail.com>
Update CI so that warnings cause build failures.
Also fix a warning in `test-sanitizer-address`:
```
In function ‘strncpy’,
inlined from ‘clusterUpdateMyselfIp’ at cluster.c:545:13:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10:
error: ‘__builtin_strncpy’ specified bound 46 equals destination size [-Werror=stringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
```
This PR follows #9779.
## Description of the feature
Now when we turn on the `list-compress-depth` configuration, the list will compress
the ziplist between `[list-compress-depth, -list-compress-depth]`.
When we need to use the compressed data, we will first decompress it, then use it,
and finally compress it again.
It's controlled by `quicklistNode->recompress`, which is designed to avoid the need to
re-traverse the entire quicklist for compression after each decompression; we only need
to recompress the quicklistNode being used.
In order to ensure the correctness of recompressing, we should normally let
quicklistDecompressNodeForUse and quicklistCompress appear in pairs. Otherwise,
it may lead to the head and tail being compressed or the middle ziplist not being
compressed correctly, which is exactly the problem this PR needs to solve.
## Solution
1. Reset `quicklistIter` after insert and replace.
The quicklist node will be compressed in `quicklistInsertAfter`, `quicklistInsertBefore`,
`quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again
2. `quicklistIndex` will return an iterator that can be used to recompress the current node after use.
## Test
1. In the `Stress Tester for #3343-Similar Errors` test, when the server crashes or when
a `valgrind` or `asan` error is detected, print the violating commands.
2. Add a crash test due to wrongly recompressing after `lrem`.
3. Remove `insert before with 0 elements` and `insert after with 0 elements`;
now we forbid any operation on a NULL quicklistIter.
Commit 0f8b634cd (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
fixes an invalid memory write issue by using the `lua_checkstack` API to make
sure the Lua stack does not overflow. This fix was added in 3 places:
1. `luaReplyToRedisReply`
2. `ldbRedis`
3. `redisProtocolToLuaType`
In the first 2 functions, `lua_checkstack` failure is handled gracefully, while in the
last it is handled with an assert and a statement that this situation cannot
happen (only with a misbehaving module):
> the Redis reply might be deep enough to explode the LUA stack (notice
that currently there is no such command in Redis that returns such a nested
reply, but modules might do it)
The issue that was discovered is that user arguments are also considered part
of the stack, and so the following script (for example) makes the assertion reachable:
```
local a = {}
for i=1,7999 do
a[i] = 1
end
return redis.call("lpush", "l", unpack(a))
```
This is a regression because such a script would have worked before, and now
it crashes Redis. The solution is to clear the function arguments from the Lua
stack, which makes the original assumption true and the assertion unreachable.
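For reference, a minimal sketch of the graceful pattern used in the first two places (standard Lua C API; the function name is hypothetical, not the actual Redis code):
```c
#include <lua.h>

/* Grow the Lua stack by 'needed' slots before pushing a deep reply;
 * lua_checkstack returns 0 when the extra slots cannot be allocated. */
static int pushReplySafely(lua_State *lua, int needed) {
    if (!lua_checkstack(lua, needed)) {
        /* Handle gracefully instead of writing past the stack. */
        return 0;
    }
    /* ... safe to push up to 'needed' values here ... */
    return 1;
}
```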
Writable replicas now no longer use the values of expired keys. Expired keys are
deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
writable replicas could use the value of an expired key in write commands such
as INCR, SUNIONSTORE, etc..
This commit also sorts out the mess around the functions lookupKeyRead() and
lookupKeyWrite() so they now indicate what we intend to do with the key and
are not affected by the command calling them.
Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
store option now use lookupKeyRead() for the keys they're reading from (which will
not allow reading from logically expired keys).
This commit also fixes a bug where PFCOUNT could return a value of an
expired key.
Test modules commands have their readonly and write flags updated to correctly
reflect their lookups for reading or writing. Modules are not required to
correctly reflect this in their command flags, but this change is made for
consistency since the tests serve as usage examples.
Fixes #6842. Fixes #7475.
Remove lcsGetKeys to clean up the remaining STRALGO code after #9733;
i.e. it still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
Currently, the watching clients are marked as dirty when a watched
key is touched, but we continue watching the keys for no reason.
Then, when the same key is touched again, we iterate again on the
watching clients list and mark all clients as dirty again.
Only when the exec/unwatch command is issued will the client be
removed from the key->watching_clients list. The same applies when
a dirty client calls the WATCH command. The key will be added to be
watched by the client even if it has no effect.
In the field, no performance degradation was observed as a result of the
current implementation; it is merely a cleanup with possible memory and
performance gains in some situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
Fix the log message that printed the wrong file name when opening a temporary file fails,
and log the error that occurred in getcwd (using the same errno to report the error).
Co-authored-by: Pavel Melkozerov <pavel.melkozerov@nokia.com>
Part three of implementing #8702, following #8887 and #9366.
## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
## Interface changes
1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace `debug ziplist` command with `debug listpack`.
## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack; it is used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid memcpy data.
## Bugfix
1. Fix crash in `quicklistRepr` when ziplist is compressed, introduced from #9366.
## Test
1. Add unittest for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.
Co-authored-by: Oran Agra <oran@redislabs.com>
Some people complain that QUIT is missing from the help/command table:
it doesn't appear in the COMMAND command, command stats, ACL, etc.,
and instead there's a hack in processCommand with a comment that looks outdated.
Note that it is [documented](https://redis.io/commands/quit).
At the same time, HOST: and POST are in the command table although these are not real commands,
and they would appear in the COMMAND command, and even in commandstats.
Other changes:
1. Initialize the static logged_time variable in securityWarningCommand
2. Add the `no-auth` flag to RESET so it can always be executed.
Issue found by corrupt-dump-fuzzer test with ASAN.
The problem was that lpSkip and lpGetWithSize could read the next listpack entry without validating that it's in range.
Similarly, even the memcmp in lpFind could do that and possibly crash with a segfault; now they'll fail on an assertion first.
The naive fix of using lpAssertValidEntry every time, resulted in 30% degradation in the lpFind benchmark of the unit test.
The final fix with the condition at the bottom has no performance implications.
Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
to falsely report leaks on pointers kept only on the stack when calling exit.
Instead we now use _exit on panic / assert to skip these leak checks.
Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
so that when something is found we know which test to relate it to (and it prints the reproduction command list).
With dynamically growing argc (#9528), it is necessary to initialize
argv_len. Normally createClient() handles that, but in the case of a
module shared_client, this needs to be done explicitly.
This also addresses an issue with rewriteClientCommandArgument() which
doesn't properly handle the case where the new element extends beyond
argc but not beyond argv_len.
LCS can allocate an immense amount of memory (the sizes of the two inputs multiplied by each other).
In the past this caused some possible security issues due to overflows, which we solved, and we also
added the use of `trymalloc` to return "Insufficient memory" instead of an OOM panic in zmalloc.
But in case overcommit is enabled, it could be that we won't get the OOM panic: zmalloc
will succeed, and then we can get OOM killed by the kernel.
The solution here is to prevent LCS from allocating transient memory that's bigger than the
`proto-max-bulk-len` config.
This config is not directly related to transient memory, but both using a hard-coded value and
introducing a dedicated config seemed wrong.
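For a sense of scale, assuming 4-byte matrix cells: two 100,000-character inputs would require roughly 100,001 * 100,001 * 4 bytes, about 40GB of transient memory, far above the default 512MB `proto-max-bulk-len`.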
This comes to solve an error in the corrupt-dump-fuzzer test that started in the daily CI, see #9799.
Recently we started using list-compress-depth in tests (it was completely untested till now).
Turns out this triggered test failures with the external mode, since the tests left the setting enabled
and then it was used in other tests (specifically the fuzzer named "Stress tester for #3343-alike bugs").
This PR fixes the issue of the `recompress` flag being left set by mistake, which caused the code to
later compress the head or tail nodes (which should never be compressed).
The solution is to reset the recompress flag when it should have been reset (when it was decided not to compress).
Additionally, we add some assertions and improve the tests in order to catch other similar bugs.
Currently PING returns a different status when the server is not serving data,
for example when `LOADING` or `BUSY`,
but the same was not true for `MASTERDOWN`.
This commit makes PING reply with `MASTERDOWN` when
replica-serve-stale-data=no and the link with MASTER is down.
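After the change, such a replica would reply along these lines:
```
127.0.0.1:6379> ping
(error) MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.
```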
Drop the STRALGO command; now LCS is a command of its own and it only works on keys (not input strings).
The motivation is that STRALGO's syntax was really messed up:
- it assumes all (future) string algorithms will take similar arguments
- it mixes a command that takes keys and one that doesn't in the same command
- it makes it nearly impossible to expose the right key spec in COMMAND INFO (an issue for cluster clients)
- it's hard for cluster clients to determine the key names (firstkey, lastkey, etc.)
- it's hard for ACL / flags (is it a read command?)
This is a breaking change.
Moves the ZPOP ... 0 fast exit path after the type check so it replies with
WRONGTYPE. In the past it would return an empty array.
Also, count is no longer allowed to be negative.
see #9680
before:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(empty array)
127.0.0.1:6379> zpopmin zset -1
(empty array)
```
after:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> zpopmin zset -1
(error) ERR value is out of range, must be positive
```
In the `scan 0 match ""` case, pat is an empty sds (patlen is 0). We shouldn't access the
first character directly in this case (even though it is '\0'), so for
code readability the two judgment conditions are swapped.
Redis supports inserting data over 4GB into a string (and recently into lists too, see #9357),
but LZF compression, used in RDB files (see the `rdbcompression` config) and in quicklist
(see the `list-compress-depth` config), does not support compressing/decompressing data over
UINT32_MAX, which will result in a corrupted rdb after compression.
Internal changes:
1. Modify the `unsigned int` parameter of `lzf_compress/lzf_decompress` to `size_t`.
2. Modify the variable types in `lzf_compress` involving offsets and lengths to `size_t`.
3. Set LZF_USE_OFFSETS to 0.
When LZF_USE_OFFSETS is 1, lzf stores offsets into `LZF_HSLOT` (32 bit).
Even on 64-bit, `LZF_USE_OFFSETS` defaults to 1, because lzf assumes that it only
compresses and decompresses data smaller than UINT32_MAX.
But now we need to make lzf support 64-bit; turning on `LZF_USE_OFFSETS` would make
it impossible to store 64-bit offsets or pointers.
BTW, disabling LZF_USE_OFFSETS also brings a few performance improvements.
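For illustration, the signature change looks roughly like this (a sketch; the exact qualifiers in lzf.h may differ):
```c
/* before: 32-bit lengths */
unsigned int lzf_compress(const void *in_data, unsigned int in_len,
                          void *out_data, unsigned int out_len);

/* after: size_t lengths, safe for data over UINT32_MAX on 64-bit */
size_t lzf_compress(const void *in_data, size_t in_len,
                    void *out_data, size_t out_len);
```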
Tests:
1. Add a test for compressing/decompressing a string larger than UINT32_MAX.
2. Add unittest for compress/decompress quicklistNode.
The client flags field is a 64-bit integer, but the temporary cached value on the stack of call() was 32 bits.
Luckily this doesn't lead to any bugs, since the only flags checked against this variable are below 32 bits.
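A self-contained illustration of the truncation hazard (the flag bit positions are hypothetical):
```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t flags = (1ULL << 40) | (1ULL << 3); /* one flag above bit 31, one below */
    uint32_t cached = (uint32_t)flags;           /* 32-bit temporary, as in the old call() code */
    printf("low bit kept: %d, high bit lost: %d\n",
           (cached & (1U << 3)) != 0,      /* 1: survives the truncation */
           (cached & (1ULL << 40)) != 0);  /* 0: silently dropped */
    return 0;
}
```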
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with desired sanitizer : `make SANITIZER=undefined`
- There were some sanitizer findings, cleaned up codebase
- Added tests with address and undefined behavior sanitizers to daily CI.
- Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
Basically, there are three types of issues:
**1- Unaligned load/store**: Most probably, this issue could cause a crash on a platform that
does not support unaligned access; Redis does unaligned access only on supported platforms.
**2- Signed integer overflow**: Although signed overflow can be problematic from time to time
and change how the compiler generates code, the current findings are mostly about signed shifts or simple
addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue
as far as I can tell (checked generated code on godbolt.org).
**3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()).
With UB, nothing is guaranteed and it's risky to reason about program behavior, but I don't think any
of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
will be the real benefit.
During diskless replication, the check for a broken EOF mark is misplaced
and should happen earlier. Now we do not swap the db; we do proper cleanup and
correctly raise module events on this kind of failure.
This issue existed prior to #9323, but back then the side effect was not restoring the
backup and not raising the correct module events on this failure.
* Clean up EINTR handling so EINTR will not change connection state to begin with.
* On TLS, catch EINTR and return it as-is before going through OpenSSL error handling (which seems to not distinguish it from EAGAIN).
This refactors all `CONFIG SET`s and conf file loading arguments to go through
the generic config handling interface.
Refactoring changes:
- All config params go through the `standardConfig` interface (some stuff which
is only related to the config file and not the `CONFIG` command still has special
handling for rewrite/config file parsing, `loadmodule`, for example.) .
- Added `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable
number of arguments instead of a single argument. This is used to break up space
separated arguments to `CONFIG SET` so the generic setter interface can pass
multiple arguments to the setter function. When parsing the config file we also break
up anything after the config name into multiple arguments to the setter function.
Interface changes:
- A side effect of the above interface is that the `bind` argument in the config file can
be empty (no argument at all); this is treated the same as passing a single empty
string argument (same as `save` already used to work).
- Support rewrite and setting `watchdog-period` from config file (was only supported
by the CONFIG command till now).
- Another side effect is that the `save T X` config argument now supports multiple
Time-Changes pairs in a single line like its `CONFIG SET` counterpart. So in the
config file you can either do:
```
save 3600 1
save 600 10
```
or do
```
save 3600 1 600 10
```
Co-authored-by: Bjorn Svensson <bjorn.a.svensson@est.tech>
For diskless replication in swapdb mode, considering we already spend replica memory
having a backup of the current db to restore in case of failure, we can get the following benefits
by instead swapping the database only in case we succeeded in transferring the db from the master:
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication with similar benefits if consumers are willing
to spend the extra memory usage.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and accept write commands
during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
here anyway to avoid complications.
Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (that would require
a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients; these were changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
contribute to triggering the next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support the diskless replication with async loading (when absent, we fall
back to disk-based loading).
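A sketch of how a module might declare that support during load (error handling trimmed; the module name is hypothetical):
```c
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "asyncaware", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Without this option the server falls back to disk-based loading
     * whenever this module is loaded. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
    return REDISMODULE_OK;
}
```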
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Introduced in #8179, this fixes the command's replies in the 0 count edge case.
[BREAKING] changes the reply type when count is 0 to an empty array (instead of nil)
Moves LPOP ... 0 fast exit path after type check to reply with WRONGTYPE
When repl-diskless-load is enabled, the connection is set to the blocking state.
The connection may be interrupted by a signal during a system call.
This would have resulted in a disconnection and possibly a reconnection loop.
Co-authored-by: Oran Agra <oran@redislabs.com>
Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
Ziplists are limited to storing elements no larger than 4GB, so when bigger
items are added they get truncated.
This PR changes quicklists so that they're capable of storing large items
in quicklist nodes that are plain string buffers rather than ziplists.
As part of the PR there were a few other changes in redis:
1. new DEBUG sub-commands:
   - QUICKLIST-PACKED-THRESHOLD - set the threshold for the node type to
     be plain or ziplist (default 1GB)
- QUICKLIST <key> - Shows low level info about the quicklist encoding of <key>
2. rdb format change:
- A new type was added - RDB_TYPE_LIST_QUICKLIST_2 .
- container type (packed / plain) was added to the beginning of the rdb object
(before the actual node list).
3. testing:
   - Tests that require over 100MB will be skipped by default; a new flag was
     added to 'runtest' to run the large memory tests (not used by default)
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Add new no-mandatory-keys flag to support COMMAND GETKEYS of commands
which have no mandatory keys.
In the past we would have got this error:
```
127.0.0.1:6379> command getkeys eval "return 1" 0
(error) ERR Invalid arguments specified for command
```
When using SETNX and SETXX we could end up doing the key lookup twice.
This carries a small inefficiency cost.
Also, once we have statistics of write hits and misses, they'd be wrong (recording the same key hit twice).
Since the loop in incrementalTrimReplicationBacklog checks the size of histlen,
we cannot afford to update it only when the loop exits; doing so could delete
many more replication blocks than needed, leaving the replication backlog smaller than the configured size.
Introduced in #9166.
Co-authored-by: sundb <sundbcn@gmail.com>
After PR #9166, the replication backlog is not a real block of memory; it just contains a
reference pointing to a replication buffer block plus a block index (to accelerate
searching an offset during partial sync). So we need to update both the replication buffer block's
offset and the replication backlog block index's offset when the master restarts from an RDB,
since `server.master_repl_offset` is changed.
The implication of this bug was just a slow search, not a replication failure.
The module test in reply.tcl was introduced by #8521 but didn't run until recently (see #9639)
and then it started failing with valgrind.
This is because valgrind uses 64-bit long doubles (unlike most other platforms that have at least 80 bits),
but besides valgrind, the tests were also incompatible with ARM32, which also uses 64-bit long doubles.
We now use appropriate values to avoid issues with either valgrind or ARM32.
In all the double tests, I use 3.141, which is safe since addReplyDouble uses
`%.17Lg`, which is able to represent this value without adding any digits due to precision loss.
For the long double, since we use `%.17Lf` in ld2string, it preserves 17 digits after
the decimal point, rather than 17 significant digits (like `%.17Lg` does).
So to make these similar, I use a value lower than 1 (no digits left of the period).
Lastly, we have the same issue with TCL (no long doubles), so we read the
raw protocol in that test.
Note that the only error before this fix (in both valgrind and ARM32) is this:
```
*** [err]: RM_ReplyWithLongDouble: a float reply in tests/unit/moduleapi/reply.tcl
Expected '3.141' to be equal to '3.14100000000000001' (context: type eval line 2 cmd {assert_equal 3.141 [r rw.longdouble 3.141]} proc ::test)
```
so the changes to debug.c and scripting.tcl aren't really needed, but I consider them a cleanup
(i.e. scripting.tcl validated a different constant than the one that's sent to it from debug.c).
Another unrelated change is to add the RESP version to the repeated tests in reply.tcl
The previous code did not check whether COUNT was already specified,
so one could use `lmpop 2 key1 key2 left count 1 count 2`.
This situation could occur in the LMPOP/BLMPOP/ZMPOP/BZMPOP commands.
LMPOP/BLMPOP were introduced in #9373, ZMPOP/BZMPOP in #9484.
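After the fix, the repeated COUNT is rejected; a session would presumably look like:
```
127.0.0.1:6379> lmpop 2 key1 key2 left count 1 count 2
(error) ERR syntax error
```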
Add timestamp annotations in the AOF, one part of #9325.
Enabled with the new `aof-timestamp-enabled` config option.
The timestamp annotation format is "#TS:${timestamp}\r\n".
"TS" is short for timestamp, and this abbreviation saves extra bytes in the AOF.
We can use timestamp annotations for some special functions:
- knowing the executing time of commands
- restoring data to a specific point in time (by using redis-check-aof to truncate the file)
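For illustration, a hypothetical AOF fragment with a timestamp annotation preceding a command could look like:
```
#TS:1640000000
*3
$3
set
$3
foo
$3
bar
```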
Improve the code doc for allowed_firstargs (used to be allowed_subcommands before #9504).
I don't think the text in the code needs to refer to the history (it's not there just for backwards compatibility);
instead it should just describe what it does.
Let modules use an additional type of RESP3 response (unused by redis so far).
Also fix tests that were introduced in #8521 but didn't actually run.
Co-authored-by: Oran Agra <oran@redislabs.com>
## Background
For a redis master, each replica uses its own copy of the replication buffer; that is a big waste of memory
(the more replicas, the more waste), and allocating/freeing memory for every reply list also costs much.
If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect from
replicas and can't finish synchronization with them; if we set client-output-buffer-limit big, the
master may OOM when there are many replicas that separately keep much memory.
Because the replication buffers of different replica clients are the same, one simple idea is that
all replicas use only one replication buffer, which effectively saves memory.
Since the replication backlog content is the same as the replicas' output buffers, we can now
discard the replication backlog memory and use a global shared replication buffer
to implement the replication backlog mechanism.
## Implementation
I created one global "replication buffer" which holds the content of the replication stream.
The structure of the "replication buffer" is similar to the reply list that exists in every client,
but the list node is a `replBufBlock`, which has `id`, `repl_offset`, and `refcount` fields.
```c
/* The replication buffer is a list of replBufBlock.
*
* +--------------+ +--------------+ +--------------+
* | refcount = 1 | ... | refcount = 0 | ... | refcount = 2 |
* +--------------+ +--------------+ +--------------+
* | / \
* | / \
* | / \
 *  Repl Backlog          Replica_A          Replica_B
*
* Each replica or replication backlog increments only the refcount of the
* 'ref_repl_buf_node' which it points to. So when replica walks to the next
 * node, it should first increase the next node's refcount, and when we trim
 * the replication buffer nodes, we always remove nodes from the head whose
 * refcount is 0. If the refcount of the head node is not 0, we must stop
 * trimming and never iterate to the next node. */
/* Similar to 'clientReplyBlock', it is used for shared buffers between
* all replica clients and replication backlog. */
typedef struct replBufBlock {
int refcount; /* Number of replicas or repl backlog using. */
long long id; /* The unique incremental number. */
long long repl_offset; /* Start replication offset of the block. */
size_t size, used;
char buf[];
} replBufBlock;
```
So now when we feed the replication stream into the replication backlog and all replicas, we only need
to feed the stream into the replication buffer via `feedReplicationBuffer`. In this function, we set some fields of
the replication backlog and replicas to references of the global replication buffer blocks, and we also
check the replicas' output buffer limit to free them if they exceed `client-output-buffer-limit`, and trim
the replication backlog if it exceeds `repl-backlog-size`.
When sending replies to replicas, we also need to iterate the replication buffer blocks and send their
content; when one block is fully sent to a replica, we decrease the current node's refcount and
increase the next node's refcount, and then free blocks whose refcount is 0 from the
head of the replication buffer blocks.
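A simplified sketch of that hand-over (field names follow the replBufBlock comment above; the surrounding glue code is hypothetical):
```c
/* The replica finished sending its current block: move its reference forward. */
listNode *next = listNextNode(replica->ref_repl_buf_node);
((replBufBlock *)listNodeValue(next))->refcount++;
((replBufBlock *)listNodeValue(replica->ref_repl_buf_node))->refcount--;
replica->ref_repl_buf_node = next;
/* Blocks at the head whose refcount dropped to 0 can now be freed. */
```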
Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
all linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
store some nodes as an index; to avoid the rax tree occupying too much memory, I record
one node per 64 as an index.
Currently, to make partial resynchronization possible as much as we can, we always let the replication
backlog be the last reference of the replication buffer blocks. The backlog size may exceed our setting
if slow replicas reference vast replication buffer blocks, but this method doesn't increase
memory usage since they share the replication buffer. To avoid freezing the server while freeing unreferenced
replication buffer blocks when the backlog exceeds its configured size,
we trim the backlog incrementally (currently freeing 64 blocks per call), and do it faster in
`beforeSleep` (640 blocks).
### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it shows the total
memory used by replication buffers.
- `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory
is not 0, this field may still be 0, since the replication backlog and replicas share one global replication
buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider
the excess as the replicas' memory; otherwise, we consider the replication buffer memory the consumption
of the repl backlog.
- Key eviction
Since all replicas and the replication backlog share the global replication buffer, we consider only the
part exceeding the backlog size as the extra separate consumption of the replicas.
Because we trim the backlog incrementally in the background, the backlog size may exceed our
setting when slow replicas that reference vast replication buffer blocks disconnect.
To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients
It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
config (partial sync would succeed and then the replica would get disconnected). Such a configuration is
ignored (the size of repl-backlog-size is used instead). This doesn't have memory consumption
implications since the replica client will share the backlog buffers memory.
- Drop replication backlog after loading data if needed
We always create a replication backlog if the server is a master; we need it because we put DELs in
it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no RDB
at all, partial resynchronization is impossible, so we drop the backlog to avoid its extra memory.
- Multi IO threads
Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
to guarantee thread-safe data access we must let the main thread handle sending the output buffer
to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.
## Other optimizations
This solution resolves some other problems:
- When replicas are disconnected from the master for exceeding the output buffer limit, releasing the output
buffer of replicas may freeze the server if we set a big `client-output-buffer-limit` for replicas; now
it doesn't cause freezing.
- This implementation may mitigate the reply list copy cost (which also freezes the server) when one replica
has a huge reply buffer and another replica copies that buffer for full synchronization; now we just copy
the reference info, which is very light.
- If we set the replication backlog size big, it may also cost much time to copy the replication backlog into
a replica's output buffer; this commit eliminates this problem.
- Resizing the replication backlog no longer empties the current replication backlog content.
I moved a bunch of stats in redisFork to be executed only on a successful
fork, since they seem wrong to record when it fails.
I guess when fork fails it does so immediately, so there's no latency spike to track.
Before this commit, module blocked clients did not carry through the original RESP version, resulting in RESP3 clients receiving unexpected RESP2 replies.
Following #9483 the daily CI exposed a few problems.
* The cluster creation code (which uses redis-cli) is complicated to test with TLS enabled;
for now I'm just skipping them since the tests we run there don't really need that kind of coverage.
* Cluster port binding failures:
note that `find_available_port` already looks for a free cluster port,
but the code in `wait_server_started` couldn't detect the failure to bind
(the text it greps for wasn't found in the log).
## Intro
The purpose is to allow having different flags/ACL categories for
subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
We create a small command table for every command that has subcommands
and each subcommand has its own flags, etc. (same as a "regular" command)
This commit also unites the Redis and the Sentinel command tables
## Affected commands
CONFIG
Used to have "admin ok-loading ok-stale no-script"
Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
there were checks in the code doing that)
XINFO
Used to have "read-only random"
Changes:
1. Dropped "random" in all except CONSUMERS
XGROUP
Used to have "write use-memory"
Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
COMMAND
No changes.
MEMORY
Used to have "random read-only"
Changes:
1. Dropped "random" in PURGE and USAGE
ACL
Used to have "admin no-script ok-loading ok-stale"
Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT
LATENCY
No changes.
MODULE
No changes.
SLOWLOG
Used to have "admin random ok-loading ok-stale"
Changes:
1. Dropped "random" in RESET
OBJECT
Used to have "read-only random"
Changes:
1. Dropped "random" in ENCODING and REFCOUNT
SCRIPT
Used to have "may-replicate no-script"
Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD
CLIENT
Used to have "admin no-script random ok-loading ok-stale"
Changes:
1. Dropped "random" in all except INFO and LIST
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
STRALGO
No changes.
PUBSUB
No changes.
CLUSTER
Changes:
1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
SENTINEL
No changes.
(note that DEBUG also fits, but we decided not to convert it since it's for
debugging and anyway undocumented)
## New sub-command
This commit adds another element to the per-command output of COMMAND,
describing the list of subcommands, if any (in the same structure as "regular" commands)
Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns a list of all commands (unless filtered), excluding subcommands.
## Module API
A new module API, RM_CreateSubcommand, was added, in order to allow
module writers to define subcommands.
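A sketch of what that might look like (the command name, subcommand name, flags, and key positions are illustrative):
```c
#include "redismodule.h"

static int GetSubcmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithNull(ctx);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "container", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* The parent is a container command with no handler of its own. */
    if (RedisModule_CreateCommand(ctx, "container.cmd", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "container.cmd");
    if (RedisModule_CreateSubcommand(parent, "get", GetSubcmd, "readonly", 1, 1, 1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```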
## ACL changes:
1. Now, that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant
(blocking/allowing a subcommand is the same as blocking/allowing a regular command),
but we had to keep it, to support the widespread usage of allowed_subcommands
to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
(e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism together with subcommands.
For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
for setting the log level.
6. All of the ACL changes above required some amount of refactoring.
## Misc
1. There are two approaches: either each subcommand has its own function, or all
subcommands use the same function, determining what to do according to argv[0].
For now, I took the former approach only with CONFIG and COMMAND,
while other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
3. Bugfix: GETNAME was missing from CLIENT's help message.
4. Sentinel and Redis now use the same table, with the same function pointer.
Some commands have a different implementation in Sentinel, so we redirect
them (these are ROLE, PUBLISH, and INFO).
5. Command stats now show the stats per subcommand (e.g. instead of stats just
for "config" you will have stats for "config|set", "config|get", etc.)
6. It is now possible to use COMMAND directly on subcommands:
COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and
can be used in the functions lookupCommandBySds and lookupCommandByCString)
7. STRALGO is now a container command (has "help")
## Breaking changes:
1. Command stats now show the stats per subcommand (see (5) above)
Prevent clients from being blocked forever in cluster when they block with their own module command
and the hash slot is migrated to another master at the same time.
These will get a redirection message when unblocked.
Also, release clients blocked on module commands when cluster is down (same as other blocked clients)
This commit adds basic tests for the main (non-cluster) redis test infra that test the cluster.
This was done because the cluster test infra can't handle some common test features,
but most importantly we only build the test modules with the non-cluster test suite.
note that rather than really supporting cluster operations by the test infra, it was added (as dup code)
in two files, one for module tests and one for non-modules tests, maybe in the future we'll refactor that.
Co-authored-by: Oran Agra <oran@redislabs.com>
Since the size of mode_t is platform dependent, we handle the
`unixsocketperm` configuration as a generic int type.
mode_t is either an unsigned int or an unsigned short (macOS), and
the range limits allow for a simple cast to mode_t.
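E.g. in redis.conf (the socket path here is illustrative):
```
unixsocket /run/redis.sock
unixsocketperm 700
```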
there is no need to compare the values of ep and sp
```
sp = start = s;
// sp can pass ep only by one (when the string is empty or fully trimmed),
// and in that case ep-sp == -1
ep = end = s+sdslen(s)-1;
while(sp <= end && strchr(cset, *sp)) sp++;
while(ep > sp && strchr(cset, *ep)) ep--;
// so (ep-sp)+1 is already 0 and the ternary is unnecessary
len = (sp > ep) ? 0 : ((ep-sp)+1);
```
Signed-off-by: Bo Cai <charpty@gmail.com>
[src/bitops.c:512] -> [src/bitops.c:507]: (warning) Either the condition 'if(o&&o->encoding==1)' is redundant or there is possible null pointer dereference: o.
This function has checks for `o` to be null or non-null, so it is odd that it accesses it first.
This is useful for approximating the size computation of complex module types.
Note that the mem_usage2 callback is new and has not been released yet, which is why we can modify it.
Add the -i option (sleep interval) of repeat and bigkeys to redis-cli --scan.
When the keyspace contains many already-expired keys, scanning the
dataset with redis-cli --scan can impact performance.
Co-authored-by: Oran Agra <oran@redislabs.com>
The bigkeys sleep is defined per 100 scanned keys, and it is checked only between scan cycles;
in cases where SCAN does not return exactly 10 keys it would never sleep.
In addition, the comment said sleep every 100 SCANs, but it was actually every 100 scanned keys.
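For example (the pattern and interval are illustrative), sleeping 10ms between SCAN cycles:
```
redis-cli --scan --pattern 'user:*' -i 0.01
```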
When calling `XADD` with a predefined id (instead of `*`), there's no need to run
the code which replaces the supplied id with itself; only when we pass a wildcard
id do we need to do this.
For apps which always supply their own id this is a slight optimization.