In #11666, we added a while loop that splits a big reply
node into multiple nodes. The update of tail->repl_offset may
be wrong: before #11666 we would have created at most
one new reply node, and now we create multiple nodes when
the reply node is big.
With more than one node, the tail->repl_offset of all the nodes
except the last one is incorrect, because we update
master_repl_offset at the beginning and then use it to
update the tail->repl_offset. This could have led to an assertion
during PSYNC; a test was added to validate that case.
Besides that, the size calculation was adjusted to fix
tests that failed due to a combination of a very low backlog size
and thresholds that get violated because of the relatively
high overhead of replBufBlock. So now, if backlog size / 16 is too
small, we take PROTO_REPLY_CHUNK_BYTES instead.
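A minimal sketch of the adjusted sizing rule (the exact placement in the replication buffer code is an assumption; `PROTO_REPLY_CHUNK_BYTES` and `server.repl_backlog_size` are existing names):
```c
/* Sketch only: derive the per-block size from the backlog, but never go below
 * the protocol chunk size, so a very small backlog doesn't produce tiny blocks
 * whose replBufBlock overhead violates the trimming thresholds. */
long long chunk = server.repl_backlog_size / 16;
if (chunk < PROTO_REPLY_CHUNK_BYTES) chunk = PROTO_REPLY_CHUNK_BYTES;
```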
Co-authored-by: Oran Agra <oran@redislabs.com>
This bug seems to have been there forever. CLIENT REPLY OFF|SKIP
marks the client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flags.
With these flags, prepareClientToWrite, called by addReply*, returns
C_ERR directly, so the client can't receive Pub/Sub messages or any
other push notifications, e.g. client side tracking.
In this PR, we add a CLIENT_PUSHING flag that disables the reply
silencing flags. When adding push replies we set the flag, and after the
reply we clear it. Then we add the flag check in prepareClientToWrite.
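A minimal sketch of the mechanism (`sendPushExample` is a hypothetical helper; the real call sites in the PR may differ):
```c
/* Hypothetical helper illustrating the CLIENT_PUSHING pattern described above. */
static void sendPushExample(client *c, robj *channel, robj *msg) {
    c->flags |= CLIENT_PUSHING;       /* bypass CLIENT_REPLY_OFF/SKIP silencing */
    addReplyPushLen(c, 3);
    addReplyBulkCString(c, "message");
    addReplyBulk(c, channel);
    addReplyBulk(c, msg);
    c->flags &= ~CLIENT_PUSHING;      /* restore normal silencing behavior */
}

/* and prepareClientToWrite() now lets pushes through, roughly: */
/* if ((c->flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) &&
 *     !(c->flags & CLIENT_PUSHING)) return C_ERR;                */
```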
Fixes #11874
Note, the SUBSCRIBE command response is a bit awkward,
see https://github.com/redis/redis-doc/pull/2327
Co-authored-by: Oran Agra <oran@redislabs.com>
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
Since ironing out the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
### Background
In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.
### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to generate client code in typed
languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
testsuite, see the "Testing" section)
### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier)
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
Example for `BZPOPMIN`:
```
"reply_schema": {
"oneOf": [
{
"description": "Timeout reached and no elements were popped.",
"type": "null"
},
{
"description": "The keyname, popped member, and its score.",
"type": "array",
"minItems": 3,
"maxItems": 3,
"items": [
{
"description": "Keyname",
"type": "string"
},
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
]
}
```
#### Notes
1. It is OK that some commands' reply structure depends on the arguments and it's the caller's responsibility
to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI,
where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
array, for example)
Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
(given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
`union` and the caller needs to know which schema is relevant). See the section below for RESP2 support.
3. We can create a fuzzer for RESP3.
### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.
Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
compares (member,score) tuples and not just the member name.
### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`)
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
2. For each request-response pair, it validates the response against the request's reply_schema
(obtained from the extended COMMAND DOCS)
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
the existing redis test suite, rather than attempt to write a fuzzer.
#### Notes about RESP2
1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
accept RESP3 as the future RESP)
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
so that we can validate it, we will need to know how to convert the actual reply to the one expected.
- number and boolean are always strings in RESP2 so the conversion is easy
- objects (maps) are always a flat array in RESP2
- others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
handling (so the client will not be totally auto-generated)
Example for ZRANGE:
```
"reply_schema": {
"anyOf": [
{
"description": "A list of member elements",
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
},
{
"description": "Members and their scores. Returned in case `WITHSCORES` was used.",
"notes": "In RESP2 this is returned as a flat array",
"type": "array",
"uniqueItems": true,
"items": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": [
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
}
]
}
```
### Other changes
1. Some tests that behave differently depending on the RESP version are now being tested for both RESP2 and RESP3,
regardless of the special log-req-res mode ("Pub/Sub PING" for example)
2. Update the history field of CLIENT LIST
3. Added basic tests for commands that were not covered at all by the testsuite
### TODO
- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
when `SET` returns NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
Since `sdsRemoveFreeSpace` has an impact on performance even if it is a no-op (see details at #11508),
only call the function when there is a possibility that the string contains free space.
* For strings coming from the network, it's only if they're bigger than PROTO_MBULK_BIG_ARG
* For strings coming from scripts, it's only if they're smaller than LUA_CMD_OBJCACHE_MAX_LEN
* For strings coming from modules, it could be anything.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: sundb <sundbcn@gmail.com>
When no-touch mode is enabled, the client will not touch the LRU/LFU of the
keys it accesses, except when executing the `TOUCH` command.
This allows inspecting or modifying the key-space without affecting eviction.
Changes:
- A command `CLIENT NO-TOUCH ON|OFF` to switch on and off this mode.
- A client flag `#define CLIENT_NOTOUCH (1ULL<<45)`, which can be shown
with `CLIENT INFO`, by the letter "T" in the "flags" field.
- Clear `NO-TOUCH` flag in `clearClientConnectionState`, which is used by `RESET`
command and resetting temp clients used by modules.
- Also clear `NO-EVICT` flag in `clearClientConnectionState`, this might have been an
oversight, spotted by @madolson.
- A test using `DEBUG OBJECT` command to verify that LRU stat is not touched when
no-touch mode is on.
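A minimal sketch of the LRU/LFU skip described in the changes above (the exact placement in the key lookup path is an assumption; `LOOKUP_NOTOUCH` and `touchCommand` are existing names):
```c
/* Sketch only: clients in no-touch mode don't refresh LRU/LFU on lookups,
 * except when the command being executed is TOUCH. */
if (server.current_client &&
    (server.current_client->flags & CLIENT_NOTOUCH) &&
    server.current_client->cmd->proc != touchCommand)
{
    flags |= LOOKUP_NOTOUCH;   /* don't update the key's LRU/LFU metadata */
}
```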
Co-authored-by: chentianjie <chentianjie@alibaba-inc.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: sundb <sundbcn@gmail.com>
* Make it clear that current_client is the root client that was called by
the external connection
* add executing_client which is the client that runs the current command
(can be a module or a script)
* Remove script_caller that was used for commands that have CLIENT_SCRIPT
to get the client that called the script. In most cases, that's the current_client,
and in others (when being called from a module), it could be an intermediate
client when we actually want the original one used by the external connection.
bugfixes:
* RM_Call with the C flag should log ACL errors with the requested user rather than
the one used by the original client; this also solves a crash when RM_Call is used
with the C flag from a detached thread safe context.
* addACLLogEntry would have logged info about the script_caller, but in case the
script was issued by a module command we actually want the current_client. The
exception is when RM_Call is called from a timer event, in which case we don't
have a current_client.
behavior changes:
* client side tracking for scripts now tracks the keys that are read by the script
instead of the keys that are declared by the caller for EVAL
other changes:
* Log both current_client and executing_client in the crash log.
* remove prepareLuaClient and resetLuaClient, being dead code that was forgotten.
* remove scriptTimeSnapshot and snapshot_time and instead add cmd_time_snapshot
that serves all commands and is reset only when execution nesting starts.
* remove code to propagate CLIENT_FORCE_REPL from the executed command
to the script caller, since scripts aren't propagated anyway these days, and
this flag wouldn't have had an effect anyway since CLIENT_PREVENT_PROP is added by scriptResetRun.
* fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased)
Starting from Redis 7.0 (#9890) we started wrapping everything a command
propagates with MULTI/EXEC. The problem is that both SCAN and RANDOMKEY can
lazy-expire arbitrary keys (similar behavior to active-expire), and put DELs in a transaction.
Fix: When these commands are called without a parent exec-unit (e.g. not in EVAL or
MULTI) we avoid wrapping their DELs in a transaction (for the same reasons that active-expire
and eviction avoid a transaction).
This PR adds a per-command flag that indicates that the command may touch arbitrary
keys (not the ones in the arguments), and uses that flag to avoid the MULTI-EXEC.
For now, this flag is internal, since we're considering other solutions for the future.
Note for cluster mode: if SCAN/RANDOMKEY is inside EVAL/MULTI it can still cause the
same situation (as it always did), but it won't cause a CROSSSLOT because replicas and AOF
do not perform slot checks.
The problem with the above is mainly for 3rd party ecosystem tools that propagate commands
from master to master, or feed an AOF file with redis-cli into a master.
This PR aims to fix the regression in redis 7.0, and we opened #11792 to try to handle the
bigger problem with lazy expire better for another release.
The PR adds support for the following flags on RedisModule_OpenKey:
* REDISMODULE_OPEN_KEY_NONOTIFY - Don't trigger keyspace event on key misses.
* REDISMODULE_OPEN_KEY_NOSTATS - Don't update keyspace hits/misses counters.
* REDISMODULE_OPEN_KEY_NOEXPIRE - Avoid deleting lazy expired keys.
* REDISMODULE_OPEN_KEY_NOEFFECTS - Avoid any effects from fetching the key
In addition, added `RM_GetOpenKeyModesAll`, which returns the mask of all
supported OpenKey modes. This allows the module to check, at runtime, which
OpenKey modes are supported by the current Redis instance.
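A minimal usage sketch (assuming `ctx` and `keyname` come from the surrounding module command):
```c
/* Request the new no-effects mode only when the running Redis supports it. */
int mode = REDISMODULE_READ;
if (RedisModule_GetOpenKeyModesAll() & REDISMODULE_OPEN_KEY_NOEFFECTS)
    mode |= REDISMODULE_OPEN_KEY_NOEFFECTS;
RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, mode);
```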
Introduce an .is_local method on the connection, implement it for TCP/TLS/
Unix socket, and drop 'int islocalClient(client *c)'. This way we can
hide the detail inside the specific connection types.
The upper layer tests whether a connection is local only via the abstract method.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
*TL;DR*
---------------------------------------
Following the discussion over the issue [#7551](https://github.com/redis/redis/issues/7551)
We decided to refactor the client blocking code to eliminate some of the code duplications
and to rebuild the infrastructure better for future key blocking cases.
*In this PR*
---------------------------------------
1. reprocess the command once a client becomes unblocked on key (instead of running
custom code for the unblocked path that's different than the one that would have run if
blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
details in the notes section below)
4. replace '$' in the client argv with the current stream id, since once we reprocess the
XREAD we need to read from the last message and not wait for a new one, in order to prevent an
endless block loop.
5. Added statistics to the info "Clients" section to report:
* `total_blocking_keys` - number of blocking keys
* `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client
that would like to be unblocked when the key is deleted.
6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked key,
which might have been expired during the lookup. Now we look up the key using NOTOUCH and
NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. Deprecated command flags. We decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
and make an explicit check in the call() function in order to decide whether a stats update should take place.
This should simplify the logic and also mitigate existing issues: for example, module calls which are
triggered as part of AOF loading might still report stats even though they are called during AOF loading.
*Behavior changes*
---------------------------------------------------
1. Since this implementation avoids dedicated code for handling unblocked streams/lists/zsets
and instead re-processes the command once the client is unblocked, some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
- The stream key has been deleted (i.e. by calling DEL)
- The stream and group existed but the key type was changed by overriding it (i.e. with the SET command)
- The key no longer exists after we SWAPDB with a db which does not contain this key
- After SWAPDB when the new db has this key but with a different type.
In the new implementation the reported errors will be the same as if the command had been processed normally:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** - in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might change since the command originally was executed and
will fail once the client is unblocked.
Another example is OOM condition checks which might enable the command to run and
block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are being updated once the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when we have many clients blocked, users will no longer be able to get that information
from the command stats. However the information can still be gathered from the client list.
**Client blocking**
---------------------------------------------------
The blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same steps as it does today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key)
in the client private blocking key dict.
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table
**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is in order to keep the signal operation as short as possible, since it is called during command processing.
The main change is that instead of going through a dedicated code path that operates the blocked command
we will just call processPendingCommandsAndResetClient.
**ClientUnblock (keys)**
---------------------------------------------------
1. Unblocking clients on keys will be triggered after the command is
processed and during beforeSleep.
2. The general schema is:
3. For each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
    in case either:
    1. *k* exists AND the *k* type matches the current client blocking type
    OR
    2. *k* exists and *c* is blocked on a module command
    OR
    3. *k* does not exist and *c* was blocked with the flag
       unblock_on_deleted_key
    do:
    1. remove the client from the list of clients blocked on this key
    2. remove the blocking list node from the client blocking key dict
    3. remove the client from the timeout list
    4. queue the client on the unblocked_clients list
    5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle,
which will queue the client for processing in the moduleUnblockedClients list.
**Process Unblocked clients**
---------------------------------------------------
The process of all unblocked clients is done in the beforeSleep and no change is planned
in that part.
The general schema will be:
For each client *c* in server.unblocked_clients:
* remove client from the server.unblocked_clients
* set back the client readHandler
* continue processing the pending command and input buffer.
*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the
read handler from the client while it is blocked.
The reason is that a blocked client should be unblocked when it is
disconnected, or we might consume data into the void.
2. While this PR mainly keeps the current blocking logic as-is, there
might be some future additions to the infrastructure that we would
like to have:
- allow non-preemptive blocking of a client - sometimes a new kind of
blocking should not be preemptible. For example, imagine we hold some
keys on disk, and when a command needs to process them it will block
until the keys are loaded. In this case we will want the client to not
disconnect or be unblocked until the process is completed (remove the
client read handler, prevent client timeout, disable unblock via debug
command, etc.).
- allow generic blocking based on command-declared keys - we might
want to add a hook before command processing to check if any of the
declared keys require the command to block. This way it would be
easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
1. Get rid of server.core_propagates - we can just rely on module/call nesting levels
2. Rename in_nested_call to execution_nesting and update the comment
3. Remove module_ctx_nesting (redundant, we can use execution_nesting)
4. Modify postExecutionUnitOperations according to the comment (The main purpose of this PR)
5. trackingHandlePendingKeyInvalidations: Check the nesting level inside this function
In #11290, we added listpack encoding for the SET object,
but forgot to support it in zuiFind, which causes ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, and ZDIFFSTORE to crash,
and forgot to support it in RM_ScanKey, which causes it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for the related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve the hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
## Issue
During the client input/output buffer processing, the memory usage is
incrementally updated to keep track of clients going beyond a certain
threshold `maxmemory-clients` to be evicted. However, this additional
tracking activity leads to unnecessary CPU cycles wasted when no
client-eviction is required. It is applicable in two cases.
* `maxmemory-clients` is set to `0` which equates to no client eviction
(applicable to all clients)
* `CLIENT NO-EVICT` flag is set to `ON` which equates to a particular
client not applicable for eviction.
## Solution
* Disable client memory usage tracking during the read/write flow when
`maxmemory-clients` is set to `0` or `client no-evict` is `on`.
The memory usage is tracked only during the `clientCron` i.e. it gets
periodically updated.
* Cleanup the clients from the memory usage bucket when client eviction
is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime,
we immediately update the memory usage buckets for all clients (tested:
scanning 80000 clients took some 20ms)
Benchmarks show that this can improve performance by about 5% in
certain situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
In most cases when a listpack or intset is converted to a dict, the conversion
is triggered when adding an element. The extra element is added after conversion
to dict (in all cases except when the conversion is triggered by
set-max-intset-entries being reached).
If set-max-listpack-entries is set to a power of two, let's say 128, when
adding the 129th element, the 128 element listpack is first converted to a dict
with a hashtable presized for 128 elements. After converting to dict, the 129th
element is added to the dict which immediately triggers incremental rehashing
to size 256.
This commit instead presizes the dict to one more element, with the assumption
that conversion to dict is followed by adding another element, so the dict
doesn't immediately need rehashing.
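A minimal sketch of the idea (the actual helper used in the conversion path may differ; `subject` stands for the set robj being converted):
```c
/* Presize for the existing listpack elements plus the one about to be added,
 * so the insert right after conversion doesn't trigger incremental rehashing. */
dict *d = dictCreate(&setDictType);
dictExpand(d, lpLength(subject->ptr) + 1);
```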
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Profiling EVALSHA we see that luaReplyToRedisReply takes 8.73% out of the
56.90% of luaCallFunction CPU cycles.
Use addReplyStatusLength instead of directly composing the protocol, to avoid
sdscatprintf and addReplySds (which imply multiple sdslen calls).
The new approach drops luaReplyToRedisReply CPU cycles to 3.77%.
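A minimal sketch of the optimization (assuming `addReplyStatusLength` is visible from the Lua reply code; `lua` and `c` are the Lua state and client in scope):
```c
/* Reply with the Lua status string directly instead of building "+...\r\n"
 * via sdscatprintf + addReplySds. */
size_t len;
const char *str = lua_tolstring(lua, -1, &len);
addReplyStatusLength(c, str, len);
```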
Add a new module event `RedisModule_Event_Key`, this event is fired
when a key is removed from the keyspace.
The event includes an open key that can be used for reading the key before
it is removed. Modules can also extract the key name, and use RM_OpenKey
or RM_Call to access the key from within that event, but shouldn't modify anything
from within this event.
The following sub events are available:
- `REDISMODULE_SUBEVENT_KEY_DELETED`
- `REDISMODULE_SUBEVENT_KEY_EXPIRED`
- `REDISMODULE_SUBEVENT_KEY_EVICTED`
- `REDISMODULE_SUBEVENT_KEY_OVERWRITE`
The data pointer can be cast to a RedisModuleKeyInfo structure
with the following fields:
```
RedisModuleKey *key; // Opened Key
```
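A minimal sketch of subscribing to the new event (error handling omitted; the log message is illustrative):
```c
static void keyEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                             uint64_t sub, void *data)
{
    REDISMODULE_NOT_USED(e);
    RedisModuleKeyInfo *info = data;
    if (sub == REDISMODULE_SUBEVENT_KEY_EXPIRED) {
        /* The key is still readable here, but must not be modified. */
        size_t len = RedisModule_ValueLength(info->key);
        RedisModule_Log(ctx, "notice", "expired key had %zu elements", len);
    }
}

/* inside RedisModule_OnLoad(): */
/* RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key, keyEventCallback); */
```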
### internals
* We also add two dict functions:
`dictTwoPhaseUnlinkFind` finds an element from the table and also gets the plink of the entry.
The entry is returned if the element is found. The user should later call `dictTwoPhaseUnlinkFree`
with it in order to unlink and release it. Otherwise, if the key is not found, NULL is returned.
These two functions should be used as a pair. `dictTwoPhaseUnlinkFind` pauses rehash and
`dictTwoPhaseUnlinkFree` resumes rehash.
* We change `dbOverwrite` to `dbReplaceValue` which just replaces the value of the key and
doesn't fire any events. The "overwrite" part (which emits events) is just when called from `setKey`,
the other places that called dbOverwrite were ones that just update the value in-place (INCR*, SPOP,
and dbUnshareStringValue). This should not have any real impact since `moduleNotifyKeyUnlink` and
`signalDeletedKeyAsReady` wouldn't have mattered in these cases anyway (i.e. module keys and
stream keys didn't have direct calls to dbOverwrite)
* Since we allow doing RM_OpenKey from within these callbacks, we temporarily disable lazy expiry.
* We also temporarily disable lazy expiry when we are in unlink/unlink2 callback and keyspace
notification callback.
* Move special definitions to the top of redismodule.h
This is needed to resolve compilation errors with RedisModuleKeyInfoV1
that carries a RedisModuleKey member.
Co-authored-by: Oran Agra <oran@redislabs.com>
As discussed in #10981, we see a degradation in performance
between v6.2 and v7.0 of Redis on the EVAL command.
After profiling the current unstable branch we can see that we call the
expensive function evalCalcFunctionName twice.
The current "fix" is to basically avoid calling evalCalcFunctionName and
even dictFind(lua_scripts) twice for the same command.
Instead we cache the current script's dictEntry (for both Eval and Functions)
in the current client so we don't have to repeat these calls.
The exception would be when doing an EVAL on a new script that's not yet
in the script cache. In that case we will call evalCalcFunctionName (and even
evalExtractShebangFlags) twice.
Co-authored-by: Oran Agra <oran@redislabs.com>
Add an option "withscore" to ZRANK and ZREVRANK.
Add the `[withscore]` option to both `zrank` and `zrevrank`, like this:
```
z[rev]rank key member [withscore]
```
It looks like the member2struct macro will generate a warning on FreeBSD:
```
./server.h:105:9: warning: 'member2struct' macro redefined [-Wmacro-redefined]
#define member2struct(struct_name, member_name, member_addr) \
^
/usr/include/sys/param.h:365:9: note: previous definition is here
#define member2struct(s, m, x) \
^
```
Add a `redis_` prefix to it to avoid the warning; the macro was introduced in #11511.
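A sketch of the renamed macro, assuming it keeps the usual container_of shape:
```c
#include <stddef.h>   /* offsetof */

/* Recover a pointer to the enclosing struct from a pointer to one of its members. */
#define redis_member2struct(struct_name, member_name, member_addr) \
    ((struct_name *)((char *)(member_addr) - offsetof(struct_name, member_name)))
```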
### Summary of API additions
* `RedisModule_AddPostNotificationJob` - new API to call inside a key space
notification (and in more locations in the future) that allows adding a post job as described below.
* New module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`,
which allows disabling the Redis protection against nested key-space notifications.
* `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options so a module
will be able to check if a given option is supported by the currently running Redis instance.
### Background
The following PR is a proposal for handling write operations inside module key space notifications.
After a lot of discussions we came to the conclusion that a module should not perform any write
operations on a key space notification.
Some examples of issues that such write operations can cause are described in the following links:
* Bad replication order - https://github.com/redis/redis/pull/10969
* Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
* Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054
There are probably more issues that are yet to be discovered. The underlying problem with writing
inside a key space notification is that the notification runs synchronously; this means that the notification
code will be executed in the middle of Redis logic (command logic, eviction, expire).
Redis **does not assume** that the data might change while running the logic, and such changes
can crash Redis or cause unexpected behaviour.
The solution is to state that modules **should not** perform any write command inside a key space
notification (we can choose whether or not we want to enforce it). To still cover the use-case where a
module wants to perform a write operation as a reaction to key space notifications, we introduce
a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be
called by Redis when the following conditions hold:
* It is safe to perform any write operation.
* The job will be called atomically alongside the operation that triggered it (in our case, a key
space notification).
A module can use this new API to safely perform any write operation and still achieve atomicity
between the notification and the write.
Although the API is currently supported only for key space notifications, it is written in a generic
way so that in the future we will be able to use it in other places (server events, for example).
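A minimal sketch of the intended usage (exact signatures assumed; the counter key and callbacks are illustrative):
```c
/* Deferred job: runs when it is safe to write, atomically with the
 * notification that scheduled it. */
static void delayedWrite(RedisModuleCtx *ctx, void *pd) {
    RedisModuleString *counter = pd;
    RedisModuleCallReply *rep = RedisModule_Call(ctx, "INCR", "s!", counter);
    if (rep) RedisModule_FreeCallReply(rep);
}

static void freeJobData(void *pd) {
    RedisModule_FreeString(NULL, pd);
}

/* Keyspace notification callback: no writes here, only schedule the job. */
static int onKeyspaceEvent(RedisModuleCtx *ctx, int type, const char *event,
                           RedisModuleString *key)
{
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    REDISMODULE_NOT_USED(key);
    RedisModuleString *counter = RedisModule_CreateString(NULL, "touched_counter", 15);
    RedisModule_AddPostNotificationJob(ctx, delayedWrite, counter, freeJobData);
    return REDISMODULE_OK;
}
```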
### Technical Details
Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list
of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution
unit ends (whether it's a command, eviction, or active expire). In order to trigger those callbacks
atomically with the notification effect, we call them in `postExecutionUnitOperations`
(which was `propagatePendingCommands` before this PR). The new function fires the post jobs
and then calls `propagatePendingCommands`.
If a callback performs more operations that trigger more key space notifications, those key
space notifications might register more callbacks. Those callbacks will be added to the end
of the `modulePostExecUnitJobs` list and will be invoked atomically after the current callback ends.
This raises a concern of entering an infinite loop. We consider infinite loops a logical bug
that needs to be fixed in the module; an attempt to protect against infinite loops by halting the
execution could result in violation of the feature's correctness, and so **Redis will make no attempt
to protect the module from infinite loops**.
In addition, currently key space notifications are not nested. Some modules might want to allow
nesting key-space notifications. To allow that and keep backward compatibility, we introduce a
new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`.
Setting this option will disable the Redis key-space notifications nesting protection and will
pass this responsibility to the module.
### Redis infrastructure
This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept,
which is called after each atomic unit of execution.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
In #11511 we introduced member_offset which has a sanitizer warning:
```
multi.c:390:26: runtime error: member access within null pointer of type 'watchedKey' (aka 'struct watchedKey')
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior multi.c:390:26
```
We can use offsetof() from stddef.h, which is part of the standard library,
to avoid this UB :) The sanitizer should not complain after we change this.
1. Use offsetof instead of member_offset, so we can delete member_offset now.
2. Changed the (uint8_t*) cast to (char*).
This does not matter much, but according to the standard, we are only allowed
to cast a pointer to its own type, char*, or void*. Let's try to follow
the rules.
This change was suggested by tezc and the comments are also from him.
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
In the unwatchAllKeys() function, we traverse all the keys watched by the client,
and for each key we need to remove the client from the list of clients watching that key.
This is implemented by listSearchKey which traverses the list of clients.
If we can reach the node of the list of clients from watchedKey in O(1) time,
then we do not need to call listSearchKey anymore.
Changes in this PR: put the node of the list of clients of each watched key in the
db inside the watchedKey structure. In this way, for every key watched by the client,
we can get the watchedKey structure and then reach the node in the list of clients in
db->watched_keys to remove it from that list.
From the perspective of the list of clients watching the key, the list node is inside a
watchedKey structure, so we can get to the watchedKey struct from the listnode by
struct member offset math. And because of this, node->value is not used, so we can point
node->value to the list itself, so that we don't need to fetch the list of clients from the dict.
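A sketch of the resulting layout (field names approximate):
```c
/* The client's entry in the db->watched_keys[key] list of clients lives
 * inside watchedKey itself, so unlinking it is O(1). */
typedef struct watchedKey {
    listNode node;   /* node in the list of clients watching this key */
    robj *key;       /* the watched key */
    redisDb *db;     /* the db the key lives in */
} watchedKey;

/* Given a listNode *ln from that list, the owning struct is recovered with
 * pointer math: (watchedKey *)((char *)ln - offsetof(watchedKey, node)). */
```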
Introduce Shard IDs to logically group nodes in cluster mode.
1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
2. Added a new PING extension to propagate "shard_id"
3. Handled upgrade from pre-7.2 releases automatically
4. Refactored PING extension assembling/parsing logic
Behavior of Shard IDs:
Replicas will always follow the shards of their reported primaries. If a primary updates its shard ID, the replica will follow. (This need not hold for cluster v2.) This is not an expected use case.
Improve memory efficiency of list keys
## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to perform the conversion; we can think of it as a node inside a
quicklist, but without the roughly 80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each, now takes 128 bytes
instead of 208 it used to take.
## Conversion rules
* Convert listpack to quicklist
When the listpack length or size reaches the `list-max-listpack-size` limit,
it will be converted to a quicklist.
* Convert quicklist to listpack
When a quicklist has only one node, and its length or size is reduced to half
of the `list-max-listpack-size` limit, it will be converted to a listpack.
This is done to avoid frequent conversions when we add or remove at the bounding size or length.
## Interface changes
1. add a list entry param to listTypeSetIteratorDirection.
When list encoding is listpack, `listTypeIterator->lpi` points to the next entry of current entry,
so when changing the direction, we need to use the current node (listTypeEntry->p) to
update `listTypeIterator->lpi` to the next node in the reverse direction.
## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement
As we can see from the benchmark results:
1. When list is quicklist encoding, LRANGE improves performance by <5%.
2. When list is listpack encoding, LRANGE improves performance by ~13%,
the main enhancement is brought by `addListListpackRangeReply()`.
## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
show memory usage down by 35.49%, from 214MB to 138MB.
## Note
1. Add a conversion callback to support doing some work before conversion.
Since the quicklist iterator decompresses the current node when it is released, we can
no longer decompress the quicklist after we convert the list.
Small sets with not only integer elements are listpack encoded, by default
up to 128 elements, max 64 bytes per element, new config `set-max-listpack-entries`
and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
Sets with only integers, even very small sets, are still intset encoded (up to 1G
limit, etc.). Larger sets are hashtable encoded.
This PR increments the RDB version, and has an effect on OBJECT ENCODING.
Possible conversions when elements are added:
intset -> listpack
listpack -> hashtable
intset -> hashtable
Note: No conversion happens when elements are deleted. If all elements are
deleted and then added again, the set is deleted and recreated, thus implicitly
converted to a smaller encoding.
Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause
several actions in redis, not just clients (e.g. eviction, expiration).
Previously each pause purpose (which has a timeout that's tracked separately from other purposes)
also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and
the actions that are paused (bit flags) are defined separately from the purpose.
- Previously, using the pause-client feature also implicitly meant making the server static:
- Pause replica traffic
- Pauses eviction processing
- Pauses expire processing
Making the server static is also used for failover and shutdown. This PR internally rebrands the
pause-client API as a pause-action API. It also simplifies the pauseClients structure
by replacing the pointers array with a static array.
The context of this PR is to add another trigger to pause-client, which will be activated in case
of OOM as a throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)).
In that case we want to pause only client and eviction actions.
RM_Call is designed to let modules call Redis commands disregarding the
OOM state (the module is responsible for declaring its command flags to Redis,
or for performing the necessary checks).
The other (new) alternative is to pass the "M" flag to RM_Call so that redis can
OOM reject commands implicitly.
However, currently RM_Call enforces OOM on scripts (excluding scripts that
declared `allow-oom`) in all cases, regardless of the RM_Call "M" flag being present.
This PR fixes scripts to be consistent with other commands being executed by RM_Call.
It modifies the flow to, in effect, treat scripts as if they have the ALLOW_OOM script
flag when the "M" flag is not passed (i.e. no OOM checking is being performed by RM_Call,
so no OOM checking should be done on the script either).
Co-authored-by: Oran Agra <oran@redislabs.com>
The use case is a module that wants to implement a blocking command on a key that
necessarily exists and wants to unblock the client in case the key is deleted (much like
what we implemented for XREADGROUP in #10306)
New module API:
* RedisModule_BlockClientOnKeysWithFlags
Flags:
* REDISMODULE_BLOCK_UNBLOCK_NONE
* REDISMODULE_BLOCK_UNBLOCK_DELETED
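A minimal sketch of how a module command might use the new API (the signature is assumed to match RedisModule_BlockClientOnKeys plus a trailing flags argument; reply_callback, timeout_callback, free_privdata, keyname, timeout_ms and privdata are defined by the module):
```c
RedisModuleBlockedClient *bc = RedisModule_BlockClientOnKeysWithFlags(
    ctx,
    reply_callback,       /* invoked when the key is signaled ready */
    timeout_callback,     /* invoked on timeout */
    free_privdata,        /* frees privdata */
    timeout_ms,
    &keyname, 1,          /* the single key we block on */
    privdata,
    REDISMODULE_BLOCK_UNBLOCK_DELETED);  /* also wake up if the key is deleted */
```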
### Detailed description of code changes
blocked.c:
1. Both module and stream functions are called whether the key exists or not, regardless of
its type. We do that in order to allow modules/stream to unblock the client in case the key
is no longer present or has changed type (the behavior for streams didn't change, just code
that moved into serveClientsBlockedOnStreamKey)
2. Make sure afterCommand is called in serveClientsBlockedOnKeyByModule, in order to propagate
actions from moduleTryServeClientBlockedOnKey.
3. handleClientsBlockedOnKeys: call propagatePendingCommands directly after lookupKeyReadWithFlags
to prevent a possible lazy-expire DEL from being mixed with any command propagated by the
preceding functions.
4. blockForKeys: The caller can specify that it wants to be awakened if the key is deleted.
Minor optimizations (use dictAddRaw).
5. signalKeyAsReady became signalKeyAsReadyLogic which can take a boolean in case the key is deleted.
It will only signal if there's at least one client that awaits key deletion (to save calls to
handleClientsBlockedOnKeys).
Minor optimizations (use dictAddRaw)
db.c:
1. scanDatabaseForDeletedStreams is now scanDatabaseForDeletedKeys and will signalKeyAsReady
for any key that was removed from the database or changed type. It is the responsibility of the code
in blocked.c to ignore or act on deleted/type-changed keys.
2. Use the new signalDeletedKeyAsReady where needed
blockedonkey.c + tcl:
1. Added test of new capabilities (FSL.BPOPGT now requires the key to exist in order to work)
### Background
The issue is that when saving an RDB with module AUX data, the module AUX metadata
(moduleid, when, ...) is saved to the RDB even though the module did not save any actual data.
This prevents loading the RDB in the absence of the module (although there is no actual data in
the RDB that requires the module to be loaded).
### Solution
The solution suggested in this PR is that module AUX will be saved to the RDB only if the module
actually saved something during the `aux_save` function.
To support backward compatibility, we introduce an `aux_save2` callback that acts the same as
`aux_save`, with the tiny change of avoiding saving the aux field if no data was actually saved by
the module. Modules can use the new API to make sure that if they have no data to save,
then it will be possible to load the created RDB even without the module.
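A minimal sketch of opting into the new callback (field names per RedisModuleTypeMethods; myAuxSave/myAuxLoad are the module's own functions):
```c
RedisModuleTypeMethods tm = {
    .version = REDISMODULE_TYPE_METHOD_VERSION,
    .aux_load = myAuxLoad,
    .aux_save2 = myAuxSave,   /* the AUX field is written only if this saved data */
    .aux_save_triggers = REDISMODULE_AUX_BEFORE_RDB,
};
RedisModuleType *t = RedisModule_CreateDataType(ctx, "mytype123", 1, &tm);
```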
### Concerns
A module may register for the aux load and save hooks just in order to be notified when
saving or loading starts or completes (there are better ways to do that, but it's still possible
that someone used it).
However, if a module didn't save a single field in the save callback, it means it's not allowed
to read in the load callback, since it has no way to distinguish between empty and non-empty
payloads. Furthermore, it means that if the module did that, it must never change it, since it'll
break compatibility with its old RDB files, so this is really not a valid use case.
Since some modules (ones that currently save one field indicating an empty payload) need
to know if saving an empty payload is valid, and if Redis is going to ignore an empty payload
or store it, we opted to add a new API (rather than change the behavior of an existing API and
expect modules to check the Redis version).
### Technical Details
To avoid saving AUX data to the RDB, we change the code to first save the AUX metadata
(moduleid, when, ...) into a temporary buffer. The buffer is then flushed to the rio the first
time the module makes a write operation inside the `aux_save` function. If the module saves
nothing (and `aux_save2` was used), the entire temporary buffer is simply dropped and no
data about this AUX field is saved to the RDB. This makes it possible to load the RDB even in
the absence of the module.
A test was added to verify the fix.
Motivation: for applications that use RM ACL verification functions, they would
want to return errors back to the user, in ways that are consistent with Redis.
While investigating how we should return ACL errors to the user, we realized that
Redis isn't consistent, and currently returns ACL error strings in 3 primary ways.
[For the actual implications of this change, see the "Impact" section at the bottom]
1. how it returns an error when calling a command normally
ACL_DENIED_CMD -> "this user has no permissions to run the '%s' command"
ACL_DENIED_KEY -> "this user has no permissions to access one of the keys used as arguments"
ACL_DENIED_CHANNEL -> "this user has no permissions to access one of the channels used as arguments"
2. how it returns an error when calling via 'acl dryrun' command
ACL_DENIED_CMD -> "This user has no permissions to run the '%s' command"
ACL_DENIED_KEY -> "This user has no permissions to access the '%s' key"
ACL_DENIED_CHANNEL -> "This user has no permissions to access the '%s' channel"
3. how it returns an error via RM_Call (and scripting is similar).
ACL_DENIED_CMD -> "can't run this command or subcommand";
ACL_DENIED_KEY -> "can't access at least one of the keys mentioned in the command arguments";
ACL_DENIED_CHANNEL -> "can't publish to the channel mentioned in the command";
In addition, if one wants to use RM_Call's "dry run" capability instead of the RM ACL
functions directly, one also sees a different problem: it returns ACL errors with a -ERR prefix,
not a -PERM, so they can't be returned directly to the caller.
This PR modifies the code to generate a base message in a common manner, with the ability
to set a verbose flag for ACL dry run errors, and keep it unset for the normal/RM_Call/script cases:
```c
sds getAclErrorMessage(int acl_res, user *user, struct redisCommand *cmd, sds errored_val, int verbose) {
    switch (acl_res) {
    case ACL_DENIED_CMD:
        return sdscatfmt(sdsempty(), "User %S has no permissions to run "
                                     "the '%S' command", user->name, cmd->fullname);
    case ACL_DENIED_KEY:
        if (verbose) {
            return sdscatfmt(sdsempty(), "User %S has no permissions to access "
                                         "the '%S' key", user->name, errored_val);
        } else {
            return sdsnew("No permissions to access a key");
        }
    case ACL_DENIED_CHANNEL:
        if (verbose) {
            return sdscatfmt(sdsempty(), "User %S has no permissions to access "
                                         "the '%S' channel", user->name, errored_val);
        } else {
            return sdsnew("No permissions to access a channel");
        }
    }
}
```
The caller can append/prepend the message (adding NOPERM for normal/RM_Call or indicating it's within a script).
Impact:
- Plain commands, as well as scripts and RM_Call now include the user name.
- ACL DRYRUN remains the only one that's verbose (mentions the offending channel or key name)
- Changes RM_Call ACL errors from being a `-ERR` to being `-NOPERM` (besides for textual changes)
**This is somewhat of a breaking change, but it only affects RM_Call with both `C` and `E`, or `D`.**
- Changes ACL errors in scripts textually from being
`The user executing the script <old non unified text>`
to
`ACL failure in script: <new unified text>`
Freeze time during the execution of scripts and all other commands.
This means that a key is either expired or not, and doesn't change
state during a script execution. Resolves #10182.
This PR adds a new `commandTimeSnapshot` function.
The function logic is extracted from `keyIsExpired`, but the related
calls to `fixed_time_expire` and `mstime()` are removed, see below.
In commands, we will avoid calling `mstime()` multiple times
and just use the one that was sampled in call. The background is that,
e.g., using `PEXPIRE 1` with valgrind sometimes results in the key
being deleted rather than expired. The reason is that both the `PEXPIRE`
command and `checkAlreadyExpired` call `mstime()` separately.
There are other more important changes in this PR:
1. Eliminate `fixed_time_expire`, it is no longer needed.
When we want to sample time we should always use a time snapshot.
We will use `in_nested_call` instead to update the cached time in `call`.
2. Move the call for `updateCachedTime` from `serverCron` to `afterSleep`.
Now that `commandTimeSnapshot` always returns the sampled time, the
`lookupKeyReadWithFlags` call in `getNodeByQuery` would get an outdated
cached time (because `processCommand` is out of the `call` context).
We put the call to `updateCachedTime` in `afterSleep`.
3. Cache the time each time the module locks Redis.
Call `updateCachedTime` in `moduleGILAfterLock`, affecting `RM_ThreadSafeContextLock`
and `RM_ThreadSafeContextTryLock`
Currently the commandTimeSnapshot change affects the following TTL commands:
- SET EX / SET PX
- EXPIRE / PEXPIRE
- SETEX / PSETEX
- GETEX EX / GETEX PX
- TTL / PTTL
- EXPIRETIME / PEXPIRETIME
- RESTORE key TTL
And other commands just use the cached mstime (including TIME).
This is considered to be a breaking change since it can break a script
that uses a loop to wait for a key to expire.
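A minimal sketch of the new function (assuming it simply returns the server's cached time, which is now refreshed in call()/afterSleep as described above):
```c
/* One frozen timestamp per execution unit, instead of repeated mstime() calls. */
mstime_t commandTimeSnapshot(void) {
    return server.mstime;
}
```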
This PR introduces a couple of changes to improve cluster test stability:
1. Increase the cluster node timeout to 3 seconds, which is similar to the
normal cluster tests, but introduce a new mechanism to increase the ping
period so that the tests are still fast. This new config is a debug config.
2. Set `cluster-replica-no-failover yes` on a wider array of tests which are
sensitive to failovers. This was occurring on the ARM CI.
- fix `the the` typo
- `LPOPRPUSH` does not exist, should be `RPOPLPUSH`
- `CLUSTER GETKEYINSLOT` 's time complexity should be O(N)
- `there bytes` should be `three bytes`, this closes #11266
- change the `slave` word to `replica` in a log message (the front part was modified earlier but the back was missed)
- remove useless aofReadDiffFromParent in server.h
- `trackingHandlePendingKeyInvalidations` method adds a void parameter
Mainly fix two minor bugs:
1. When handling BL*POP/BLMOVE commands with blocked clients, we should increment server.dirty.
2. `listPopRangeAndReplyWithKey()` in `serveClientBlockedOnList()` should not repeat calling
`signalModifiedKey()` which has been called when an element was pushed on the list.
(was skipped in all bpop commands, other than blmpop)
Other optimization
add a `signal` param to `listElementsRemoved` to skip `signalModifiedKey()`, to unify all pop operations.
Unifying all pop operations also prepares us for #11303, so that we can avoid having to deal with the
conversion from quicklist to listpack in various places when the list shrinks.
Adds a number of user management/ACL validation/command execution functions to improve a
Redis module's ability to enforce ACLs correctly and easily.
* RM_SetContextUser - sets a RedisModuleUser on the context, which RM_Call will use to both
validate ACLs (if requested and set) as well as assign to the client so that scripts executed via
RM_Call will have proper ACL validation.
* RM_SetModuleUserACLString - Enables one to pass an entire ACL string, not just a single OP
and have it applied to the user
* RM_GetModuleUserACLString - returns a stringified version of the user's ACL (same format as dump
and list). Contains an optimization to cache the stringified version until the underlying ACL is modified.
* Slightly re-purpose the "C" flag to RM_Call from just being about ACL check before calling the
command, to actually running the command with the right user, so that it also affects commands
inside EVAL scripts. See #11231
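A minimal usage sketch (the ACL string and key name are illustrative; the signatures of the new APIs are assumed):
```c
RedisModuleUser *user = RedisModule_CreateModuleUser("app_user");
RedisModuleString *err = NULL;
if (RedisModule_SetModuleUserACLString(ctx, user, "on ~app:* +get +set", &err)
        == REDISMODULE_ERR) {
    /* err describes why the ACL string was rejected */
}

RedisModule_SetContextUser(ctx, user);   /* RM_Call now runs as this user */
RedisModuleCallReply *rep =
    RedisModule_Call(ctx, "GET", "Cc", "app:counter");  /* "C" enforces the ACL */
```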
The bug is that the server keeps on sending newlines to the client.
As a result, the receiver might not find the EOF marker, since it searches
for it only at the end of each payload it reads from the socket.
The bug only affects `CLIENT_REPL_RDBONLY`.
This affects `redis-cli --rdb` (depending on timing).
The fix consists of two steps:
1. The `CLIENT_REPL_RDBONLY` client should be closed ASAP (we cannot
always call `freeClient`, so we use `freeClientAsync`)
2. Add a new replication state `SLAVE_STATE_RDB_TRANSMITTED`
When RM_Call was used with `M` (reject OOM), `W` (reject writes),
as well as `S` (rejecting stale or write commands in "Script mode"),
it would have only checked the command flags, but not the declared
script flag in case it's a command that runs a script.
Refactoring: extracts out similar code in server.c's processCommand
to be usable in RM_Call as well.
Redis 7.0 has #9890 which added an assertion when the propagation queue
was not flushed and we got to beforeSleep.
But it turns out that when processCommand calls getNodeByQuery and
decides to reject the command, it can lead to a key that was lazy
expired and is deleted without later flushing the propagation queue.
This change prevents lazy expiry from deleting the key at this stage
(not as part of a command being processed in `call`)
This PR reverts the changes made in #10969.
The revert was triggered by occasional test failures
that started after that PR was merged.
The issue is that if there is a lazy expire during the command invocation,
the `del` command is added to the replication stream after the command
placeholder. So the logical order on the primary is:
* Delete the key (lazy expiration)
* Command invocation
But the replication stream gets it the other way around:
* Command invocation (because the command is written into the placeholder)
* Delete the key (lazy expiration)
So if the command writes to the key that was just lazy expired, we will get an
inconsistency between primary and replica.
One solution we considered is to add another lazy expire replication stream
and write all the lazy expires there. Then when replicating, we would replicate the
lazy expire replication stream first. This would solve this specific test failure, but
we realized that the issues do not end here, and the more we dug the more
problems we found. One of the examples we thought about (that can actually
crash Redis) is as follows:
* The user performs SINTERSTORE
* When Redis tries to fetch the second input key it triggers a lazy expire
* The lazy expire triggers module logic that deletes the first input key
* Now Redis holds the robj of the first input key, which was actually freed
We believe we took the wrong approach, and we will come up with another
PR that solves the problem differently; for now we revert the changes so we
will not have the test failures.
Notice that not the entire code was reverted; some parts of the PR are changes
that we would like to keep. The changes that **were** reverted are:
* Saving a placeholder for replication at the beginning of the command (`call` function)
* Order of the replication stream on active expire and eviction (we will decide how
to handle it correctly in a follow-up PR)
* `Spop` changes are no longer needed (because we reverted the placeholder code)
Changes that **were not** reverted:
* On expire/eviction, wrap the `del` and the notification effect in a multi exec.
* `PropagateNow` function can still accept a special dbid, -1, indicating not to replicate select.
* Keep optimisation for reusing the `alsoPropagate` array instead of allocating it each time.
Tests:
* All tests were kept, and only a few tests were modified to work correctly with the changes
* A test was added to verify that the revert fixes the issues.
* Support BUILD_TLS=module to be loaded as a module via config file or
command line. e.g. redis-server --loadmodule redis-tls.so
* Updates to redismodule.h to allow it to be used side by side with
server.h by defining REDISMODULE_CORE_MODULE
* Changes to server.h, redismodule.h and module.c to avoid repeated
type declarations (gcc 4.8 doesn't like these)
* Add a mechanism for non-ABI neutral modules (ones who include
server.h) to refuse loading if they detect not being built together with
redis (release.c)
* Fix wrong signature of RedisModuleDefragFunc, this could break
compilation of a module, but not the ABI
* Move initialization of listeners in server.c to be after loading
the modules
* Config TLS after initialization of listeners
* Init cluster after initialization of listeners
* Add TLS module to CI
* Fix a test suite race conditions:
Now that the listeners are initialized later, it's not sufficient to
wait for the PID message in the log, we need to wait for the "Server
Initialized" message.
* Fix issues with moduleconfigs test as a result from start_server
waiting for "Server Initialized"
* Fix issues with modules/infra test as a result of an additional module
present
Notes about Sentinel:
Sentinel can't really rely on the tls module, since it uses hiredis to
initiate connections and depends on OpenSSL (won't be able to use any
other connection modules for that), so it was decided that when TLS is
built as a module, sentinel does not support TLS at all.
This means that it keeps using redis_tls_ctx and redis_tls_client_ctx directly.
Example code for configs in redis-tls.so (may be used in the future):
```
RedisModuleString *tls_cfg = NULL;

void tlsInfo(RedisModuleInfoCtx *ctx, int for_crash_report) {
    UNUSED(for_crash_report);
    RedisModule_InfoAddSection(ctx, "");
    RedisModule_InfoAddFieldLongLong(ctx, "var", 42);
}

int tlsCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
{
    if (argc != 2) return RedisModule_WrongArity(ctx);
    return RedisModule_ReplyWithString(ctx, argv[1]);
}

RedisModuleString *getStringConfigCommand(const char *name, void *privdata) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(privdata);
    return tls_cfg;
}

int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(err);
    REDISMODULE_NOT_USED(privdata);
    if (tls_cfg) RedisModule_FreeString(NULL, tls_cfg);
    RedisModule_RetainString(NULL, new);
    tls_cfg = new;
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(void *ctx, RedisModuleString **argv, int argc)
{
    ....
    if (RedisModule_CreateCommand(ctx,"tls",tlsCommand,"",0,0,0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    if (RedisModule_RegisterStringConfig(ctx, "cfg", "", REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {
        if (tls_cfg) {
            RedisModule_FreeString(ctx, tls_cfg);
            tls_cfg = NULL;
        }
        return REDISMODULE_ERR;
    }
    ...
}
```
Co-authored-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Introduce a listen method into the connection type, so the listen logic no longer
needs to be hard-coded. Originally, we initialized the server during startup like
this:
```
if (server.port)
    listenToPort(server.port,&server.ipfd);
if (server.tls_port)
    listenToPort(server.tls_port,&server.tlsfd);
if (server.unixsocket)
    anetUnixServer(...server.unixsocket...);
...
if (createSocketAcceptHandler(&server.ipfd, acceptTcpHandler) != C_OK)
if (createSocketAcceptHandler(&server.tlsfd, acceptTcpHandler) != C_OK)
if (createSocketAcceptHandler(&server.sofd, acceptTcpHandler) != C_OK)
...
```
If a new connection type gets supported, we have to add more hard-coded
logic to set up its listener.
Introduce `.listen` and refactor the listeners; Unix sockets support this too.
This allows setting up listener arguments and creating listeners in a loop.
What's more, '.listen' is defined in connection.h, so we would need to include
server.h to import 'struct socketFds', but server.h already includes
'connection.h'. To avoid an include loop (and also to keep the code reasonable),
define 'struct connListener' in connection.h instead of using 'struct socketFds'
from server.h. This is why this commit contains more changes.
There are more fields in 'struct connListener', hence it's possible to
simplify changeBindAddr & applyTLSPort() & updatePort() into a single
piece of logic: update the listener config from server.xxx and re-create
the listener.
Because of the new field 'priv' in struct connListener, we expect to pass
this to the accept handler (even though it's not used currently); this may be used
in the future.
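A rough sketch of the shape this takes (field names are approximations, not the exact Redis structs), just to illustrate why startup can become a loop over the registered connection types:
```
/* Approximate sketch of a per-type listener; the real definition lives in
 * connection.h and its exact fields may differ. */
struct ConnectionType;   /* each type provides its own listen/accept hooks */

typedef struct connListener {
    int fd[16];                 /* listening sockets for this type */
    int count;                  /* number of listening sockets */
    char **bindaddr;            /* addresses to bind */
    int bindaddr_count;
    int port;
    struct ConnectionType *ct;  /* connection type that owns this listener */
    void *priv;                 /* type-specific data, passed to the accept handler */
} connListener;

/* Startup can then loop over configured listeners instead of hard-coding
 * each type, roughly:
 *
 *     for (each configured listener l)
 *         if (l->ct->listen(l) != C_OK) exit(1);
 */
```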
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Unix sockets use a different accept handler and listener creation than TCP;
to hide these differences and avoid hard-coding, use a new unix socket
connection type. Also move 'acceptUnixHandler' into unix.c.
Currently, the connection framework looks like the following:
```
        uplayer
           |
    connection layer
     /     |     \
   TCP   Unix    TLS
```
It's possible to build Unix socket support as a shared library, and
load it dynamically. Because TCP and Unix socket don't require any
heavy dependencies or overheads, we build them into Redis statically.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract the accept handler for socket & TLS, and add a helper function
'connAcceptHandler' to get the accept handler for a specified type.
Also move acceptTcpHandler into socket.c, and move
acceptTLSHandler into tls.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
socketFds is also suitable for Unix sockets, so we can use
'createSocketAcceptHandler' to create the accept handler.
We can then abstract the accept handler in the future.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Use connTypeRegister() to register a connection type into redis, and
query a connection type with connectionByType().
With this change, we can hide TLS specified methods into connection
type:
- void tlsInit(void);
- void tlsCleanup(void);
- int tlsConfigure(redisTLSContextConfig *ctx_config);
- int isTlsConfigured(void);
Merge isTlsConfigured & tlsConfigure, using an argument *reconfigure*
to distinguish them:
```
tlsConfigure(&server.tls_ctx_config)
  -> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 1)
isTlsConfigured() && tlsConfigure(&server.tls_ctx_config)
  -> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 0)
```
Finally, we can remove USE_OPENSSL from config.c. If redis is built
without TLS but is still started with TLS enabled, it reports:
```
# Missing implement of connection type 1
# Failed to configure TLS. Check logs for more info.
```
The log can be optimised, let's leave it in the future. Maybe we can
use connection type as a string.
Although uninitialized fields of a static struct are zero, we still
set them to NULL explicitly in socket.c to keep them clear to read & maintain:
```
.init = NULL,
.cleanup = NULL,
.configure = NULL,
```
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Till now Redis officially supported tuning the locale via an environment variable, see #1074.
But we had other requests to allow changing it at runtime, see #799 and #11041.
Note that `strcoll()` is used as Lua comparison function and also for comparison of
certain string objects in Redis, which leads to a problem that, in different regions,
for some characters, the result may be different. Below is an example.
```
127.0.0.1:6333> SORT test alpha
1) "<"
2) ">"
3) ","
4) "*"
127.0.0.1:6333> CONFIG GET locale-collate
1) "locale-collate"
2) ""
127.0.0.1:6333> CONFIG SET locale-collate 1
(error) ERR CONFIG SET failed (possibly related to argument 'locale')
127.0.0.1:6333> CONFIG SET locale-collate C
OK
127.0.0.1:6333> SORT test alpha
1) "*"
2) ","
3) "<"
4) ">"
```
That can cause accidental compatibility issues for Lua scripts and some
Redis commands. This commit creates a new config parameter to control the
locale environment, which only affects the `Collate` category. The above shows how it
affects the `SORT` command, and below shows the influence on Lua scripts.
```
127.0.0.1:6333> CONFIG GET locale-collate
1) "locale-collate"
2) "C"
127.0.0.1:6333> EVAL "return ',' < '*'" 0
(nil)
127.0.0.1:6333> CONFIG SET locale-collate ""
OK
127.0.0.1:6333> EVAL "return ',' < '*'" 0
(integer) 1
```
Co-authored-by: calvincjli <calvincjli@tencent.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
This PR makes sure that "name" is unique for all arguments in the same
level (i.e. all args of a command and all args within a block/oneof).
This means several arguments with identical meaning can be referred to together,
but also if someone needs to refer to a specific one, they can use its full path.
In addition, the "display_text" field has been added, to be used by redis.io
in order to render the syntax of the command (for the vast majority it is
identical to "name" but sometimes we want to use a different string
that is not "name")
The "display" field is exposed via COMMAND DOCS and will be present
for every argument, except "oneof" and "block" (which are container
arguments)
Other changes:
1. Make sure we do not have any container arguments ("oneof" or "block")
that contain less than two sub-args (otherwise it doesn't make sense)
2. migrate.json: both AUTH and AUTH2 should not be "optional"
3. arg names cannot contain underscores; we force the usage of hyphens
(most of these were a result of the script that generated the initial json files
from redis.io commands.json).
Fix replication inconsistency in modules that use key space notifications.
### The Problem
In general, key space notifications are invoked after the command logic was
executed (this is not always the case; we will discuss later the specific
commands that do not follow this rule). For example, `set x 1` will trigger
a `set` notification that will be invoked after the `set` logic was performed, so
if the notification logic tries to fetch `x`, it will see the new data that was written.
Consider a scenario in which the notification logic performs some write
commands. For example, the notification logic increases some counter,
`incr x{counter}`, indicating how many times `x` was changed.
The logical order of execution is as follows:
```
set x 1
incr x{counter}
```
The issue is that the `set x 1` command is added to the replication buffer
at the end of the command invocation (specifically after the key space
notification logic was invoked and performed the `incr` command).
The replication/aof sees the commands in the wrong order:
```
incr x{counter}
set x 1
```
In this specific example the order is less important.
But if, for example, the notification had deleted `x`, then we would
end up with primary-replica inconsistency.
### The Solution
Put the command that caused the notification in its rightful place. In the
above example, the `set x 1` command logic was executed before the
notification logic, so it should be added to the replication buffer before
the commands that are invoked by the notification logic. To achieve this,
without a major code refactoring, we save a placeholder in the replication
buffer; when finishing invoking the command logic we check if the command
needs to be replicated, and if it does, we use the placeholder to add it to the
replication buffer instead of appending it to the end.
To be efficient and avoid allocating memory on each command to save the
placeholder, the replication buffer array was modified to reuse memory
(instead of allocating it each time we want to replicate commands).
Also, to avoid saving a placeholder when it is not needed, we do it only for
WRITE or MAY_REPLICATE commands.
#### Additional Fixes
* Expire and Eviction notifications:
  * The Expire/Eviction logical order was to first perform the Expire/Eviction
and then the notification logic. The replication buffer got this the
other way around (first the notification effect and then the `del` command).
The PR fixes this issue.
  * The notification effect and the `del` command were not wrapped with
`multi-exec` (if needed). The PR also fixes this issue.
* SPOP command:
  * On spop, the `spop` notification was fired before the command logic
was executed. The change in this PR would have caused the replication
order to change (first the `spop` command and then the notification logic),
although the logical order is first the notification logic and then the
`spop` logic. The right fix would have been to move the notification to
be fired after the command was executed (like all the other commands),
but this can be considered a breaking change. To overcome this, the PR
keeps the current behavior and changes the `spop` code to keep the right
logical order when pushing commands to the replication buffer. Another PR
will follow to fix SPOP properly and match it to the other commands (we
split it into 2 separate PRs so it will be easy to cherry-pick this PR to 7.0 if
we choose to).
#### Unhandled Known Limitations
* Key miss event:
  * On a key miss event, if a module performed some write command on the
event (using `RM_Call`), the `dirty` counter would increase and the read
command that caused the key miss event would be replicated to the replication
stream and AOF. This problem can also happen with a write command that opens
some keys but eventually decides not to perform any action. We decided
not to handle this problem in this PR because the solution is complex
and would add risks in case we want to cherry-pick this PR.
We should decide if we want to handle it in future PRs. For now, module
writers are advised not to perform any write commands on a key miss event.
#### Testing
* We already have tests to cover cases where a notification invokes write
commands that are also added to the replication buffer; the tests were modified
to verify that the replica gets the commands in the correct logical order.
* A test was added to verify that `spop` behavior was kept unchanged.
* A test was added to verify that the key miss event behaves as expected.
* A test was added to verify the changes do not break lazy expiration.
#### Additional Changes
* The `propagateNow` function can accept a special dbid, -1, indicating not
to replicate `select`. We use this to replicate `multi/exec` in the `propagatePendingCommands`
function. The side effect of this change is that now the `select` command
will appear inside the `multi/exec` block on the replication stream (instead of
outside of the `multi/exec` block). Tests were modified to match this new behavior.
Fixing a few macros that don't follow the most basic safety conventions,
which are: wrap any usage of a passed variable
with parentheses, and if the macro contains more than one statement, wrap
it with do-while(0) (or parentheses).
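As a minimal illustration of these conventions (not one of the macros touched by this change):
```
#include <stdio.h>

static int counter = 0, total = 0;

/* Unsafe: the two statements are not grouped, so
 * `if (cond) BUMP_UNSAFE(a + b);` only guards the first statement. */
#define BUMP_UNSAFE(x) counter += x; total += x

/* Safe: every use of the parameter is wrapped in parentheses and the
 * multi-statement body is wrapped in do { ... } while(0), so the macro
 * behaves like a single statement. */
#define BUMP_SAFE(x) do { counter += (x); total += (x); } while (0)

int main(void) {
    int a = 1, b = 2, cond = 0;
    if (cond) BUMP_SAFE(a + b);   /* both statements correctly guarded */
    printf("%d %d\n", counter, total);
    return 0;
}
```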
The reason we do this is because in #11036, we added an error
log message when failing to open the RDB file for reading.
In loadDataFromDisk we call rdbLoad and also check errno, but
now the logging corrupts errno (reported in the alpine daily).
It is not safe to rely on errno as we do today, so we change
the return value of the rdbLoad function to enums, like we have
when loading an AOF.
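The shape of the change, as a rough sketch (the exact names and values in the source may differ):
```
/* Sketch: instead of C_OK/C_ERR plus errno, rdbLoad() returns an explicit
 * status, so callers no longer depend on errno surviving intermediate calls
 * such as logging. */
#define RDB_OK         0
#define RDB_NOT_EXIST  1   /* RDB file doesn't exist */
#define RDB_FAILED     2   /* failed to load the RDB file for any other reason */
```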
Following #10996, which forgot to modify RM_StringCompare in module.c.
Modified RM_StringCompare, compareStringObjectsWithFlags,
compareStringObjects and collateStringObjects.
## Issue
During the MULTI/EXEC flow, each command gets queued until the `EXEC`
command is received, and during this phase every queued command triggers a
`realloc`. This can be expensive depending on the realloc
behavior (if it copies to a new memory location).
## Solution
In order to reduce the number of syscalls, I've used a couple of optimizations:
1. By default, reserve memory for at least two commands. `MULTI/EXEC` for a
single command doesn't have any significance, hence I believe customers wouldn't use it.
2. For further reservations, grow the memory allocation exponentially (powers of 2),
as shown in the sketch below. This reduces the number of `realloc` calls from `N` to `log(N)`.
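A minimal sketch of the growth policy (plain C stand-ins, not the actual multiState/multiCmd code):
```
#include <stdlib.h>
#include <string.h>

typedef struct { char *repr; } queuedCmd;   /* stand-in for multiCmd */

typedef struct {
    queuedCmd *commands;
    int count;        /* commands queued so far */
    int alloc_count;  /* slots currently allocated */
} multiStateSketch;

/* Queue one command: reserve room for two commands up front, then double
 * the allocation only when it is full, so queueing N commands costs
 * O(log N) realloc calls instead of N. */
static int queueCommandSketch(multiStateSketch *ms, const char *repr) {
    if (ms->alloc_count == 0) {
        ms->alloc_count = 2;
        ms->commands = malloc(sizeof(queuedCmd) * ms->alloc_count);
    } else if (ms->count == ms->alloc_count) {
        ms->alloc_count *= 2;
        ms->commands = realloc(ms->commands, sizeof(queuedCmd) * ms->alloc_count);
    }
    if (ms->commands == NULL) return -1;
    ms->commands[ms->count++].repr = strdup(repr);
    return 0;
}
```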
## Other changes:
* Include multi exec queued command array in client memory consumption calculation
(affects client eviction too)
Account sharded pubsub channels memory consumption in client memory usage
computation to accurately evict client based on the set threshold for `maxmemory-clients`.
This was harmless because we marked the parent command
with the SENTINEL flag, so populateCommandTable was ok.
And we also don't show the flags (SENTINEL and ONLY-SENTINEL)
in COMMAND INFO.
In this PR, we also add the same CMD_SENTINEL and CMD_ONLY_SENTINEL
flag checks when populating the sub-commands,
so that in the future it'll be possible to add some sub-commands to sentinel or sentinel-only but not others.
The SET and BITFIELD commands were given a `get_keys_function` in #10148, causing
them to be wrongly marked movablekeys in `populateCommandMovableKeys`.
This was an unintended side effect introduced in #10148 (7.0 RC1)
which could cause some clients an extra round trip for these commands in cluster mode.
Since we define movablekeys as a way to determine whether the legacy range [first, last, step]
doesn't find all keys, we need a completely different approach.
The right approach is to check whether the legacy range covers all key-specs,
and whether none of the key-specs have the INCOMPLETE flag.
This way, we don't need to look at getkeys_proc or VARIABLE_FLAGS at all
(see the sketch below). Probably with the exception of modules, which may still not be using key-specs.
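A toy sketch of that decision (illustrative types, not the Redis internals):
```
#include <stdbool.h>

typedef struct {
    bool incomplete;          /* key-spec can't be described statically */
    bool covered_by_legacy;   /* key-spec fits the legacy [first,last,step] range */
} keySpecSketch;

/* A command needs "movablekeys" only if some key-spec is INCOMPLETE or
 * falls outside the legacy range; otherwise COMMAND GETKEYS isn't required. */
static bool needsMovableKeys(const keySpecSketch *specs, int num_specs) {
    for (int i = 0; i < num_specs; i++) {
        if (specs[i].incomplete || !specs[i].covered_by_legacy) return true;
    }
    return false;
}
```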
In this PR, we removed `populateCommandMovableKeys` and put its logic in
`populateCommandLegacyRangeSpec`.
In order to properly serve both old and new modules, we must probably keep relying on
CMD_MODULE_GETKEYS, but do that only for modules that don't declare key-specs.
For ones that do, we need to take the same approach we take with native redis commands.
This approach was proposed by Oran. Fixes #10833
Co-authored-by: Oran Agra <oran@redislabs.com>
The important part is that read-only scripts (not just EVAL_RO
and FCALL_RO, but also ones with `no-writes` executed by normal EVAL or
FCALL), will now be permitted to run during CLIENT PAUSE WRITE (unlike
before where only the _RO commands would be processed).
Other than that, some errors like OOM, READONLY, MASTERDOWN are now
handled by processCommand, rather than the command itself affects the
error string (and even error code in some cases), and command stats.
Besides that, now the `may-replicate` commands, PFCOUNT and PUBLISH, will
be considered `write` commands in scripts and will be blocked in all
read-only scripts just like other write commands.
They'll also be blocked in EVAL_RO (i.e. even for scripts without the
`no-writes` shebang flag).
This commit also hides the `may_replicate` flag from the COMMAND command
output. This is a **breaking change**.
Background about may_replicate:
We don't want to expose a no-may-replicate flag or alike to scripts, since we
consider the may-replicate thing an internal concern of redis, that we may
some day get rid of.
In fact, the may-replicate flag was initially introduced to flag EVAL (since
we didn't know what it's gonna do ahead of execution, before function-flags
existed), and PUBLISH and PFCOUNT, both because they have side effects
which may some day be fixed differently.
code changes:
The changes in eval.c are mostly code re-ordering:
- evalCalcFunctionName is extracted out of evalGenericCommand
- evalExtractShebangFlags is extracted out of luaCreateFunction
- evalGetCommandFlags is new code
* Fix broken protocol when redis can't persist to RDB (general commands, not
modules), excessive newline. regression of #10372 (7.0 RC3)
* Fix broken protocol when Redis can't persist to AOF (modules and
scripts), missing newline.
* Fix bug in OOM check of EVAL scripts called from RM_Call.
set the cached OOM state for scripts before executing module commands too,
so that it can serve scripts that are executed by modules.
i.e. in the past EVAL executed by RM_Call could have either falsely
failed or falsely succeeded because of a wrong cached OOM state flag.
* Fix bugs with RM_Yield:
1. SHUTDOWN should only accept the NOSAVE mode
2. Avoid eviction during yield command processing.
3. Avoid processing master client commands while yielding from another client
* Add two more checks to RM_Call script mode.
1. READONLY You can't write against a read only replica
2. MASTERDOWN Link with MASTER is down and `replica-serve-stale-data` is set to `no`
* Add new RM_Call flag to let redis automatically refuse `deny-oom` commands
while over the memory limit.
* Add tests to cover various errors from Scripts, Modules, Modules
calling scripts, and Modules calling commands in script mode.
Add tests:
* Looks like the MISCONF error was completely uncovered by the tests,
add tests for it, including from scripts, and modules
* Add tests for NOREPLICAS from scripts
* Add tests for the various errors in module RM_Call, including RM_Call that
calls EVAL, and RM_Call in "eval mode". That includes:
NOREPLICAS, READONLY, MASTERDOWN, MISCONF
Move the client flags to a more cache-friendly position within the client struct;
we regain the 2% of CPU cycles lost since v6.2 (from 630532.57 to 647449.80 ops/sec).
The loss was due to a higher rate of calls to getClientType caused by the changes in #9166 and #10020
The amount of `server.stat_net_output_bytes/server.stat_net_input_bytes`
is actually the sum of replication flow and users' data flow.
It may cause confusions like this:
"Why does my server get such a large output_bytes while I am doing nothing? ".
After discussions and revisions, now here is the change about what this
PR brings (final version before merge):
- 2 server variables to count the network bytes during replication,
including fullsync and propagate bytes.
- `server.stat_net_repl_output_bytes`/`server.stat_net_repl_input_bytes`
- 3 info fields to print the input and output of repl bytes and instantaneous
value of total repl bytes.
- `total_net_repl_input_bytes` / `total_net_repl_output_bytes`
- `instantaneous_repl_total_kbps`
- 1 new API `rioCheckType()` to check the type of rio. So we can use this
to distinguish between diskless and diskbased replication
- 2 new counting items to keep network statistics consistent between master
and slave
- the RDB portion during diskless replication, in `rdbLoadProgressCallback()`
- the first line of the full sync payload, in `readSyncBulkPayload()`
Co-authored-by: Oran Agra <oran@redislabs.com>
To easily distinguish between sharded channel message and a global
channel message, introducing `smessage` (instead of `message`) as
message bulk for sharded channel publish message.
This is gonna be a breaking change in 7.0.1!
Background:
Sharded pubsub was introduced in redis 7.0, but after the release we quickly
realized that it's problematic that the client can't distinguish
between normal (global) pubsub messages and sharded ones.
This is important because the same connection can subscribe to both,
but messages sent to one pubsub system are not propagated to the
other (they're completely separate), so if one connection is used to
subscribe to both, we need to assist the client library to know which
message it got so it can forward it to the correct callback.
Remove some dead code in object.c, ziplist is no longer used in 7.0
Some backgrounds:
zipmap - hash: replaced by ziplist in #285
ziplist - hash: replaced by listpack in #8887
ziplist - zset: replaced by listpack in #9366
ziplist - list: replaced by quicklist (listpack) in #2143 / #9740
Moved the location of the ziplist.h include in server.c
Sometimes the code uses `scriptIsRunning()` and other times it uses `server.in_script`.
We should use the `scriptIsRunning()` method consistently throughout the code base.
Removed server.in_script since it's no longer used / needed.
This fixes a possible regression in Redis 7.0.0, in which doing CONFIG SET
on a TLS config would not reload the configuration in case the new config is
the same file as before.
A volatile configuration is a configuration value which is a reference to the
configuration data and not the configuration data itself. In such a case Redis
doesn't know if the config data changed under the hood and can't assume a
change happens only when the config value changes. Therefore it needs to
be applied even when setting a config value to the same value as it was before.
Fix #10552
We no longer piggyback getkeys_proc to hold the RedisModuleCommand struct, when it exists
Others:
Use `doesCommandHaveKeys` in `RM_GetCommandKeysWithFlags` and `getKeysSubcommandImpl`.
It causes a very minor behavioral change in commands that don't have actual keys, but have a spec
with `CMD_KEY_NOT_KEY`.
For example, before this change `COMMAND GETKEYS SPUBLISH` would return
`Invalid arguments specified for command`, but now it returns `The command has no key arguments`
Changes:
- When AOF is enabled **after** startup, the data accumulated during `AOF_WAIT_REWRITE`
will only be stored in a temp INCR AOF file. Only after the first AOFRW is successful will we
add it to the manifest file.
Before this fix, the manifest referred to the temp file, which could cause a restart during that
time to load it without its base.
- Add the `aof_rewrites_consecutive_failures` info field for the aofrw limiting implementation.
Now we can guarantee that these behaviors of MP-AOF are the same as before (past redis releases):
- When AOF is enabled after startup, the data accumulated during `AOF_WAIT_REWRITE` will only
be stored in a visible place. Only after the first AOFRW is successful will we add it to the manifest file.
- When disabling AOF, we did not delete the AOF file in the past, so there's no need to change that
behavior now (yet).
- When toggling AOF off and then on (could be as part of a full-sync), a crash or restart before the
first rewrite is completed would result in the previous version being loaded (might not be the right thing,
but that's what we always had).
The SHUTDOWN command has various flags to change its default behavior,
but in some cases establishing a connection to redis is complicated and it's easier
for the management software to use signals. However, so far the signals could only
trigger the default shutdown behavior.
Here we introduce the option to control shutdown arguments for SIGTERM and SIGINT.
New config options:
`shutdown-on-sigint [nosave | save] [now] [force]`
`shutdown-on-sigterm [nosave | save] [now] [force]`
Implementation:
Support MULTI_ARG_CONFIG on createEnumConfig to support multiple enums to be applied as bit flags.
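A toy sketch of the idea behind a multi-arg enum config (illustrative names, not the actual createEnumConfig code): each accepted token maps to a bit, and the parsed value is the OR of all selected bits.
```
#include <string.h>

#define SHUTDOWN_NOSAVE (1<<0)
#define SHUTDOWN_SAVE   (1<<1)
#define SHUTDOWN_NOW    (1<<2)
#define SHUTDOWN_FORCE  (1<<3)

/* Parse something like "nosave now force" (already split into tokens)
 * into a bit-flag combination. */
static int parseShutdownFlags(int argc, char **argv) {
    int flags = 0;
    for (int i = 0; i < argc; i++) {
        if (!strcmp(argv[i], "nosave"))     flags |= SHUTDOWN_NOSAVE;
        else if (!strcmp(argv[i], "save"))  flags |= SHUTDOWN_SAVE;
        else if (!strcmp(argv[i], "now"))   flags |= SHUTDOWN_NOW;
        else if (!strcmp(argv[i], "force")) flags |= SHUTDOWN_FORCE;
        else return -1;  /* unknown token */
    }
    return flags;
}
```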
Co-authored-by: Oran Agra <oran@redislabs.com>
Missing a typeof, we will get errors like this:
- multiple definition of `replicationErrorBehavior'
- ld: error: duplicate symbol: replicationErrorBehavior
Introduced in #10504
* Till now, replicas that were unable to persist, would still execute the commands
they got from the master, now they'll panic by default, and we add a new
`replica-ignore-disk-errors` config to change that.
* Till now, when a command failed on a replica or AOF-loading, it only logged a
warning and a stat, we add a new `propagation-error-behavior` config to allow
panicking in that state (may become the default one day)
Note that commands that fail on the replica can either indicate a bug that could
cause data inconsistency between the replica and the master, or they could be
in some cases (specifically in previous versions), a result of a command (e.g. EVAL)
that failed on the master, but still had to be propagated to fail on the replica as well.
Adds the `allow-cross-slot-keys` flag to Eval scripts and Functions to allow
scripts to access keys from multiple slots.
The default behavior is now that they are not allowed to do that (unlike before).
This is a breaking change for 7.0 release candidates (to be part of 7.0.0), but
not for previous redis releases since EVAL without shebang isn't doing this check.
Note that the check is done on both the keys declared by the EVAL / FCALL command
arguments, and also the ones used by the script when making a `redis.call`.
A note about the implementation: there seems to have been some confusion
about allowing access to non-local keys. I thought I missed something in our
wider conversation, but Redis scripts do block access to non-local keys.
So the issue was just about cross slots being accessed.
1. Disk error and slave count checks didn't flag the transactions or count correctly in command stats (regression from #10372, 7.0 RC3)
2. RM_Call will reply the same way Redis does, in case of a non-existing command or an arity error
3. RM_WrongArity will consider the full command name
4. Use lowercase 'u' in "unknown subcommand" (to align with "unknown command")
Followup work of #10127
This PR unifies all the places that test if the current client is the
master client or AOF client, and uses a method to test that on all of
these.
Other than some refactoring, these are the actual implications:
- Replicas **don't** ignore disk error when processing commands not
coming from their master.
**This is important for PING to be used for health check of replicas**
- SETRANGE, APPEND, SETBIT, BITFIELD don't do proto_max_bulk_len check for AOF
- RM_Call in SCRIPT_MODE ignores disk error when coming from master /
AOF
- RM_Call in cluster mode ignores slot check when processing AOF
- Scripts ignore disk error when processing AOF
- Scripts **don't** ignore disk error on a replica, if the command comes
from clients other than the master
- SCRIPT KILL won't kill script coming from AOF
- Scripts **don't** skip OOM check on replica if the command comes from
clients other than the master
Note that Script, AOF, and module clients don't reach processCommand,
which is why some of the changes don't actually have any implications.
Note: reverting the change done to processCommand in 2f4240b9d9;
it should be dead code due to the above-mentioned fact.
Add a configuration option to attach an operating system-specific identifier to Redis sockets, supporting advanced network configurations using iptables (Linux) or ipfw (FreeBSD).
since PUBLISH and SPUBLISH use different dictionaries for channels and clients,
and we already have an API for PUBLISH, it only makes sense to have one for SPUBLISH
Add test coverage and unifying some test infrastructure.
Add an optional keyspace event when new keys are added to the db.
This is useful for applications where clients need to be aware of the redis keyspace.
Such an application can SCAN once at startup and then listen for "new" events (plus
others associated with DEL, RENAME, etc).
Apparently, some modules can afford deprecating command arguments
(something that was never done in Redis, AFAIK), so in order to represent
this piece of information, we added the `deprecated_since` field to redisCommandArg
(in symmetry to the already existing `since` field).
This commit adds `const char *deprecated_since` to `RedisModuleCommandArg`,
which is technically a breaking change, but since 7.0 was not released yet, we decided to let it slide
Durability of the database is a big and old topic; in this regard Redis uses AOF to
support it, and the `appendfsync=always` policy is the strictest level, guaranteeing
all data is both written and synced to disk before replying success to the client.
But some cases have been overlooked that could break durability.
1. The clearest one is about threaded-io mode:
we should also set the client's write handler with `ae_barrier` in
`handleClientsWithPendingWritesUsingThreads`, or the write handler would be
called after the read handler in the next event loop, meaning the write command result
could be replied to the client before being flushed to the AOF.
2. About blocked clients (mostly blocked by modules):
in `beforeSleep()`, `handleClientsBlockedOnKeys()` should be called before
`flushAppendOnlyFile()`, in case the unblocked clients modify data and send a reply
without persistence.
3. When handling `ProcessingEventsWhileBlocked`:
normally it takes place when a lua/function/module times out, and we give users a chance
to kill the slow operation, but we should call `flushAppendOnlyFile()` before
`handleClientsWithPendingWrites()`, in case the other clients from the last event loop get
acknowledged before data persistence.
For instance:
```
in the same event loop
client A executes set foo bar
client B executes eval "for var=1,10000000,1 do end" 0
```
after the script times out, client A will get `OK` but lose data after a restart (redis killed during the
timeout) if we don't flush the write command to the AOF.
4. A more complex case about `ProcessingEventsWhileBlocked`:
a lua timeout inside a transaction, for example
`MULTI; set foo bar; eval "for var=1,10000000,1 do end" 0; EXEC`, where the client will get the set
command's result before the whole transaction is done, which breaks atomicity too.
Fortunately, it's already fixed by #5428 (although that's not its original purpose, just a side
effect : )), but the module timeout should be fixed too.
Cases 1, 2, 3 are fixed in this commit; the module issue in case 4 needs a follow-up PR.
Add field to COMMAND DOCS response to denote the name of the module
that added that command.
COMMAND LIST can filter by module, but if you get the full commands list,
you may still wanna know which command belongs to which module.
The alternative would be to do MODULE LIST, and then multiple calls to COMMAND LIST
This feature adds the ability to add four different types (Bool, Numeric,
String, Enum) of configurations to a module to be accessed via the redis
config file, and the CONFIG command.
**Configuration Names**:
We impose a restriction that a module configuration always starts with the
module name and contains a '.' followed by the config name. If a module passes
"config1" as the name to a register function, it will be registered as MODULENAME.config1.
**Configuration Persistence**:
Module Configurations exist only as long as a module is loaded. If a module is
unloaded, the configurations are removed.
There is now also a minimal core API for removal of standardConfig objects
from configs by name.
**Get and Set Callbacks**:
Storage of config values is owned by the module that registers them, and provides
callbacks for Redis to access and manipulate the values.
This is exposed through a GET and SET callback.
The get callback returns a typed value of the config to redis. The callback takes
the name of the configuration, and also a privdata pointer. Note that these only
take the CONFIGNAME portion of the config, not the entire MODULENAME.CONFIGNAME.
```
typedef RedisModuleString * (*RedisModuleConfigGetStringFunc)(const char *name, void *privdata);
typedef long long (*RedisModuleConfigGetNumericFunc)(const char *name, void *privdata);
typedef int (*RedisModuleConfigGetBoolFunc)(const char *name, void *privdata);
typedef int (*RedisModuleConfigGetEnumFunc)(const char *name, void *privdata);
```
Configs must also specify a set callback, i.e. what to do on a CONFIG SET XYZ 123
or when loading configurations from the cli/.conf file, matching these typedefs. *name* is
again just the CONFIGNAME portion, *val* is the parsed value from the core,
*privdata* is the registration time privdata pointer, and *err* is for providing errors to a client.
```
typedef int (*RedisModuleConfigSetStringFunc)(const char *name, RedisModuleString *val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetNumericFunc)(const char *name, long long val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetBoolFunc)(const char *name, int val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetEnumFunc)(const char *name, int val, void *privdata, RedisModuleString **err);
```
Modules can also specify an optional apply callback that will be called after
value(s) have been set via CONFIG SET:
```
typedef int (*RedisModuleConfigApplyFunc)(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err);
```
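For example, a module with two related numeric configs might validate them together in the apply callback. This is only an illustrative sketch; the config semantics and helper names (`min_val`, `max_val`, `applyRangeCheck`) are hypothetical:
```
#include "redismodule.h"

/* Hypothetical: the set callbacks for "mymodule.min" and "mymodule.max"
 * store into these, and the apply callback validates the pair as a whole. */
static long long min_val = 0, max_val = 100;

int applyRangeCheck(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err) {
    REDISMODULE_NOT_USED(ctx);
    REDISMODULE_NOT_USED(privdata);
    if (min_val > max_val) {
        *err = RedisModule_CreateString(NULL, "min must be <= max", 18);
        return REDISMODULE_ERR;
    }
    return REDISMODULE_OK;
}
```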
**Flags:**
We expose 7 new flags to the module, which are used as part of the config registration.
```
#define REDISMODULE_CONFIG_MODIFIABLE 0 /* This is the default for a module config. */
#define REDISMODULE_CONFIG_IMMUTABLE (1ULL<<0) /* Can this value only be set at startup? */
#define REDISMODULE_CONFIG_SENSITIVE (1ULL<<1) /* Does this value contain sensitive information */
#define REDISMODULE_CONFIG_HIDDEN (1ULL<<4) /* This config is hidden in `config get <pattern>` (used for tests/debugging) */
#define REDISMODULE_CONFIG_PROTECTED (1ULL<<5) /* Becomes immutable if enable-protected-configs is enabled. */
#define REDISMODULE_CONFIG_DENY_LOADING (1ULL<<6) /* This config is forbidden during loading. */
/* Numeric Specific Configs */
#define REDISMODULE_CONFIG_MEMORY (1ULL<<7) /* Indicates if this value can be set as a memory value */
```
**Module Registration APIs**:
```
int (*RedisModule_RegisterBoolConfig)(RedisModuleCtx *ctx, char *name, int default_val, unsigned int flags, RedisModuleConfigGetBoolFunc getfn, RedisModuleConfigSetBoolFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterNumericConfig)(RedisModuleCtx *ctx, const char *name, long long default_val, unsigned int flags, long long min, long long max, RedisModuleConfigGetNumericFunc getfn, RedisModuleConfigSetNumericFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterStringConfig)(RedisModuleCtx *ctx, const char *name, const char *default_val, unsigned int flags, RedisModuleConfigGetStringFunc getfn, RedisModuleConfigSetStringFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterEnumConfig)(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, const char **enum_values, const int *int_values, int num_enum_vals, RedisModuleConfigGetEnumFunc getfn, RedisModuleConfigSetEnumFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_LoadConfigs)(RedisModuleCtx *ctx);
```
The module name, followed by a ".", will be automatically prepended to the config name.
**What RM_Register[...]Config does**:
A RedisModule struct now keeps a list of ModuleConfig objects which look like:
```
typedef struct ModuleConfig {
sds name; /* Name of config without the module name appended to the front */
void *privdata; /* Optional data passed into the module config callbacks */
union get_fn { /* The get callback specified by the module */
RedisModuleConfigGetStringFunc get_string;
RedisModuleConfigGetNumericFunc get_numeric;
RedisModuleConfigGetBoolFunc get_bool;
RedisModuleConfigGetEnumFunc get_enum;
} get_fn;
union set_fn { /* The set callback specified by the module */
RedisModuleConfigSetStringFunc set_string;
RedisModuleConfigSetNumericFunc set_numeric;
RedisModuleConfigSetBoolFunc set_bool;
RedisModuleConfigSetEnumFunc set_enum;
} set_fn;
RedisModuleConfigApplyFunc apply_fn;
RedisModule *module;
} ModuleConfig;
```
It also registers a standardConfig in the configs array, with a pointer to the
ModuleConfig object associated with it.
**What happens on a CONFIG GET/SET MODULENAME.MODULECONFIG:**
For CONFIG SET, we do the same parsing as is done in config.c and pass that
as the argument to the module set callback. For CONFIG GET, we call the
module get callback and return that value to config.c to return to a client.
**CONFIG REWRITE**:
Starting up a server with module configurations in a .conf file but no module load
directive will fail. The flip side is also true, specifying a module load and a bunch
of module configurations will load those configurations in using the module defined
set callbacks on a RM_LoadConfigs call. Configs being rewritten works the same
way as it does for standard configs, as the module has the ability to specify a
default value. If a module is unloaded with configurations specified in the .conf file
those configurations will be commented out from the .conf file on the next config rewrite.
**RM_LoadConfigs:**
`RedisModule_LoadConfigs(RedisModuleCtx *ctx);`
This last API is used to make configs available within the onLoad() after they have
been registered. The expected usage is that a module will register all of its configs,
then call LoadConfigs to trigger all of the set callbacks, and then can error out if any
of them were malformed. LoadConfigs will attempt to set all configs registered to
either a .conf file argument/loadex argument or their default value if an argument is
not specified. **LoadConfigs is a required function if configs are registered.**
Also note that LoadConfigs **does not** call the apply callbacks, but a module
can do that directly after the LoadConfigs call.
**New Command: MODULE LOADEX [CONFIG NAME VALUE] [ARGS ...]:**
This command provides the ability to provide startup context information to a module.
LOADEX stands for "load extended" similar to GETEX. Note that provided config
names need the full MODULENAME.MODULECONFIG name. Any additional
arguments a module might want are intended to be specified after ARGS.
Everything after ARGS is passed to onLoad as RedisModuleString **argv.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Madelyn Olson <matolson@amazon.com>
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
To remove `pending_querybuf`, the key point is reusing `querybuf`; it means the master client's `querybuf` is not only used to parse commands, but also to proxy data to sub-replicas.
1. add a new variable `repl_applied` for the master client to record how much data has been applied (propagated via `replicationFeedStreamFromMasterStream()`) but not yet trimmed from `querybuf`.
2. don't sdsrange `querybuf` in `commandProcessed()`; we trim it to `repl_applied` after the whole replication pipeline has been processed, to avoid fragmented `sdsrange` calls. Here are some scenarios in which we cannot trim to `qb_pos`:
* we don't receive complete command from master
* master client blocked because of client pause
* IO threads operate read, master client flagged with CLIENT_PENDING_COMMAND
In these scenarios, `qb_pos` points to part of the current command or the beginning of the next command, and the current command is not applied yet, so `repl_applied` is not equal to `qb_pos`.
Some other notes:
* Do not do the big-arg optimization on the master client, since we can only sdsrange `querybuf` after the data has been sent to replicas.
* Set `qb_pos` and `repl_applied` to 0 when we `freeClient` in `replicationCacheMaster`.
* Rewrite `processPendingCommandsAndResetClient` into `processPendingCommandAndInputBuffer`, letting `processInputBuffer` be called successively after `processCommandAndResetClient`.
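A toy illustration of the `repl_applied` vs `qb_pos` distinction described above (plain C buffer instead of the real sds querybuf):
```
#include <string.h>

/* qb_pos marks how far command parsing got; repl_applied marks how much of
 * the buffer has already been propagated to sub-replicas. Trimming must not
 * go past repl_applied, otherwise not-yet-propagated bytes would be lost. */
typedef struct {
    char buf[4096];
    size_t len;           /* bytes currently in the buffer */
    size_t qb_pos;        /* parsed up to here */
    size_t repl_applied;  /* propagated to sub-replicas up to here */
} masterBufSketch;

static void trimMasterBuf(masterBufSketch *qb) {
    size_t trim = qb->repl_applied;   /* note: NOT qb->qb_pos */
    memmove(qb->buf, qb->buf + trim, qb->len - trim);
    qb->len -= trim;
    qb->qb_pos -= trim;
    qb->repl_applied = 0;
}
```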
The PR extends RM_Call with 3 new capabilities using new flags that
are given to RM_Call as part of the `fmt` argument.
It aims to assist modules that are getting a list of commands to be
executed from the user (not hard coded as part of the module logic),
think of a module that implements a new scripting language...
* `S` - Run the command in script mode; this means that it will raise an
error if a command which is not allowed inside a script (flagged with the
`deny-script` flag) is invoked (like SHUTDOWN). In addition, in script mode,
write commands are not allowed if there are not enough good replicas (as
configured with `min-replicas-to-write`) and/or a disk error happened.
* `W` - No-writes mode, Redis will reject any command that is marked with the `write`
flag. Again, can be useful for modules that implement a new scripting language
and want to prevent any write commands.
* `E` - Return errors as RedisModuleCallReply. Today the errors that happened
before the command was invoked (like unknown commands or acl error) return
a NULL reply and set errno. This might be missing important information about
the failure and it is also impossible to just pass the error to the user using
RM_ReplyWithCallReply. This new flag allows you to get a RedisModuleCallReply
object with the relevant error message and treat it as if it was an error that was
raised by the command invocation.
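A minimal sketch of how a module command might use the new flags (the command being run and the error handling are only illustrative):
```
#include "redismodule.h"

/* Runs GET on the given key in script mode ('S'), rejecting writes ('W'),
 * and asking for errors as a reply object ('E') instead of NULL + errno. */
int MyGet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);

    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, "GET", "sSWE", argv[1]);
    if (reply == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR RM_Call failed");

    /* With 'E', pre-invocation failures (unknown command, ACL, OOM, ...)
     * also arrive as an error reply, so we can forward them verbatim. */
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}
```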
Tests were added to verify the new code paths.
In addition small refactoring was done to share some code between modules,
scripts, and `processCommand` function:
1. `getAclErrorMessage` was added to `acl.c` to unify the log message extraction
from the acl result
2. `checkGoodReplicasStatus` was added to `replication.c` to check the status of
good replicas. It is used on `scriptVerifyWriteCommandAllow`, `RM_Call`, and
`processCommand`.
3. `writeCommandsGetDiskErrorMessage` was added to `server.c` to get the error
message on persistence failure. Again it is used on `scriptVerifyWriteCommandAllow`,
`RM_Call`, and `processCommand`.
When rewriting the config file, we need to read the old config file first,
but CONFIG_MAX_LINE is 1024, so if some lines are longer than that,
it will generate a wrong config file, and redis cannot reboot from
the new config file.
Rename CONFIG_MAX_LINE to CONFIG_READ_LEN
Currently sort and sort_ro can access external keys via `GET` and `BY`.
In order to make sure the user cannot violate the authorization ACL
rules, the decision is to reject external key access patterns unless the ACL allows
SORT full access to all keys.
I.e. for backwards compatibility, SORT with GET/BY keeps working, but
if the ACL has restrictions on certain keys, these features get permission denied.
### Implemented solution
We have discussed several potential solutions and decided to only allow the GET and BY
arguments when the user has all key permissions with the SORT command. The reasons
being that SORT with GET or BY is problematic anyway, for instance it is not supported in
cluster mode since it doesn't declare keys, and we're not sure the combination of that feature
with ACL key restriction is really required.
**HOWEVER** If in the fullness of time we identify a real need for fine-grained access
support for SORT, we would implement the complete solution, which is the alternative
described below.
### Alternative (complete solution):
Check the SORT ACL rules after executing it and before committing output (either via STORE or
to the COB). It would require making several changes to the sort command itself, and would
potentially cause performance degradation since we would have to collect all the GET keys
instead of just applying them to a temp array and then scan the accessed keys against the
ACL selectors. This solution could include an optimization to avoid the overhead of collecting
the key names, in case the ACL rules grant SORT full key-access, or if the ACL key pattern
literal matches the one used in GET/BY. It would also mean that authorization would be
O(nlogn) since we would have to complete most of the command execution before we can
perform verification.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
In a benchmark we noticed we spend a relatively long time updating the client
memory usage, leading to performance degradation.
Before #8687 this was performed in the client's cron and didn't affect performance.
But since introducing client eviction we need to perform this after filling the input
buffers and after processing commands. This also led me to write this code to be
thread safe and perform it in the i/o threads.
It turns out that the main performance issue here is related to atomic operations
being performed while updating the total clients memory usage stats used for client
eviction (`server.stat_clients_type_memory[]`). This update needed to be atomic
because `updateClientMemUsage()` was called from the IO threads.
In this commit I make sure to call `updateClientMemUsage()` only from the main thread.
In case of threaded IO I call it for each client during the "fan-in" phase of the read/write
operation. This also means I could chuck the `updateClientMemUsageBucket()` function
which was called during this phase and embed it into `updateClientMemUsage()`.
Profiling shows this makes `updateClientMemUsage()` (on my x86_64 linux) roughly x4 faster.
For an integer string like "123456789012345678901", which would cause
an overflow failure in the string2ll() conversion, we can compare its length at
the beginning to avoid the extra work.
* move LONG_STR_SIZE to be declared in util.h, next to MAX_LONG_DOUBLE_CHARS
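A minimal sketch of the idea (not the actual util.c code):
```
#include <stddef.h>

/* A signed 64-bit integer needs at most 20 characters including the sign,
 * so LONG_STR_SIZE (21, counting the terminating NUL) bounds any valid input. */
#define LONG_STR_SIZE 21

/* Reject over-long strings before running the digit-by-digit overflow
 * checks in string2ll(); "123456789012345678901" fails here immediately. */
static int mayBeLongLong(const char *s, size_t len) {
    (void)s;
    return len != 0 && len < LONG_STR_SIZE;
}
```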