Since PUBLISH and SPUBLISH use different dictionaries for channels and clients,
and we already have an API for PUBLISH, it only makes sense to have one for SPUBLISH.
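A minimal, hedged sketch of how a module might use a shard-channel publish API next to the existing one (the helper, channel and the `RedisModule_PublishMessageShard` name are assumptions, not taken from this commit):
```
/* Hypothetical helper, called from a module command handler: publish the same
 * payload to a regular channel and to the matching shard channel.
 * RedisModule_PublishMessageShard is assumed to mirror RedisModule_PublishMessage. */
static void PublishBoth(RedisModuleCtx *ctx, RedisModuleString *chan, RedisModuleString *msg) {
    RedisModule_PublishMessage(ctx, chan, msg);      /* reaches SUBSCRIBE/PSUBSCRIBE clients */
    RedisModule_PublishMessageShard(ctx, chan, msg); /* reaches SSUBSCRIBE clients on this shard */
}
```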
Add test coverage and unify some test infrastructure.
The tests verify that loading a binary payload to the Lua interpreter raises an error.
The Lua code modification was done here: fdf9d45509
which forces the Lua interpreter to always use the text parser.
Since #9166, we have been calling prepareReplicasToWrite several times when propagating
a single write command to the replication stream (once per argument, the same as we do for
normal clients), which is not necessary. Now we call it only once per command,
at the beginning of feeding the replication stream.
This reduces CPU consumption and slightly improves performance,
especially when there are many replicas.
Add APIs to allow modules to compute the memory consumption of opaque objects owned by Redis.
Without these, the mem_usage callbacks of module data types are useless in many cases.
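For illustration, a hedged sketch of a module data type's mem_usage callback built on this kind of API (`RedisModule_MallocSize` / `RedisModule_MallocSizeString` are assumed to be among the added functions; `MyValue` and its fields are made up):
```
/* Hypothetical module value holding a Redis-owned string plus a raw buffer. */
typedef struct MyValue {
    RedisModuleString *label;   /* allocated by Redis, owned by the module */
    void *buf;                  /* RedisModule_Alloc'd buffer */
} MyValue;

static size_t MyValueMemUsage(const void *value) {
    const MyValue *v = value;
    size_t bytes = sizeof(*v);
    if (v->label) bytes += RedisModule_MallocSizeString(v->label); /* opaque Redis object */
    if (v->buf)   bytes += RedisModule_MallocSize(v->buf);
    return bytes;
}
```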
Other changes:
Fix streamRadixTreeMemoryUsage to include the size of the rax structure itself
By the error convention, there is supposed to be a space between the error code and the message.
While looking at some Lua stuff I noticed that interpreter errors were not adding the space,
so some clients would try to map the detailed error message into the error code.
We have tests that hit this condition, but they were just checking that the string "starts" with ERR.
I updated some other tests with similar incorrect string checking. This isn't complete though, as
there are other ways we check for ERR that I didn't fix.
Produces some fun output like:
```
# Errorstats
errorstat_ERR:count=1
errorstat_ERRuser_script_1_:count=1
```
This PR fixes the following minor errors before the Redis 7 release:
The ZRANGEBYLEX command is deprecated as of 6.2.0 and can be replaced by ZRANGE with the
BYLEX argument, but the documentation incorrectly says "by ZRANGE with the BYSCORE argument".
Fix an incorrect comment on the zpopmaxCommand function.
The comments on the zmpopCommand and bzmpopCommand functions are not consistent with the documentation; fix them.
Co-authored-by: Ubuntu <lucas.guang.yang1@huawei.com>
We can observe that when adding to a stream without an explicit ID there is duplicate work
on sds creation/freeing/sdslen that costs ~11% of the CPU cycles.
This PR avoids it by not freeing the sds after the first reply.
The expected reduction in CPU cycles is around 9-10%.
Additionally, we now pre-allocate the sds to the right size, to avoid realloc;
this brought another ~10% improvement.
Co-authored-by: Oran Agra <oran@redislabs.com>
Add an optional keyspace event when new keys are added to the db.
This is useful for applications where clients need to be aware of the Redis keyspace.
Such an application can SCAN once at startup and then listen for "new" events (plus
others associated with DEL, RENAME, etc).
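A hedged example of the intended flow (this assumes the new event class is enabled via an `n` flag of `notify-keyspace-events` and fires a `new` keyevent; the exact flag letter and channel name should be checked against the final docs):
```
CONFIG SET notify-keyspace-events "En"
PSUBSCRIBE __keyevent@0__:new
# the application SCANs once at startup, then each newly created key name
# arrives as a message on the channel above
```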
Apparently, some modules can afford to deprecate command arguments
(something that was never done in Redis, AFAIK), so in order to represent
this piece of information, we added the `deprecated_since` field to redisCommandArg
(in symmetry to the already existing `since` field).
This commit adds `const char *deprecated_since` to `RedisModuleCommandArg`,
which is technically a breaking change, but since 7.0 was not released yet, we decided to let it slide.
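A hedged sketch of how a module's command-argument table might fill in the new field (argument names and version strings are made up; this assumes the table is later attached via RedisModule_SetCommandInfo):
```
/* Hypothetical argument table: the second argument is kept for compatibility
 * but marked as deprecated since a given (made-up) module version. */
RedisModuleCommandArg args[] = {
    {.name = "key",    .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0},
    {.name = "legacy", .type = REDISMODULE_ARG_TYPE_STRING,
     .since = "1.0.0", .deprecated_since = "2.0.0",
     .flags = REDISMODULE_CMD_ARG_OPTIONAL},
    {0}
};
```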
Allow specifying an ACL log reason, which is shown in the log. Right now it always shows "unknown", which is a little bit cryptic. This is a breaking change, but this API was added as part of 7 so it seems ok to stabilize it still.
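A hedged sketch of what specifying the reason could look like from a module (the `RedisModule_ACLAddLogEntry` call and `REDISMODULE_ACL_LOG_*` constants are assumed; the helper, key name and how `user` is obtained are made up):
```
/* Hypothetical: record a denied key access in the ACL log, so ACL LOG shows
 * "key" as the reason instead of "unknown". `user` is a RedisModuleUser
 * obtained elsewhere. */
static void LogDeniedKey(RedisModuleCtx *ctx, RedisModuleUser *user) {
    RedisModuleString *obj = RedisModule_CreateString(ctx, "denied-key", 10);
    RedisModule_ACLAddLogEntry(ctx, user, obj, REDISMODULE_ACL_LOG_KEY);
    RedisModule_FreeString(ctx, obj);
}
```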
* Extending the use of hashTypeGetValue.
The functions hashTypeExists, hashTypeGetValueLength and addHashFieldToReply
share a similar pattern of calling hashTypeGetFromHashTable or
hashTypeGetFromZipList depending on the underlying data structure. What
those functions are doing is exactly what hashTypeGetValue does, so they
were changed to use the existing hashTypeGetValue function, making
the code more consistent.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Database durability is a big and old topic. In this regard Redis uses AOF to
support it, and the `appendfsync=always` policy is the strictest level, guaranteeing that
all data is both written and synced to disk before a success reply is sent to the client.
But some cases have been overlooked, and they could break durability.
1. The clearest one is about threaded-io mode:
we should also install the client's write handler with `ae_barrier` in
`handleClientsWithPendingWritesUsingThreads`, otherwise the write handler would be
called after the read handler in the next event loop, meaning a write command's result
could be replied to the client before being flushed to the AOF.
2. About blocked clients (mostly blocked by modules):
in `beforeSleep()`, `handleClientsBlockedOnKeys()` should be called before
`flushAppendOnlyFile()`, otherwise the unblocked clients could modify data and send a reply
without the data being persisted (see the ordering sketch below).
3. When handling `ProcessingEventsWhileBlocked`:
this normally takes place when a Lua/function/module call times out, and we give users a chance
to kill the slow operation, but we should call `flushAppendOnlyFile()` before
`handleClientsWithPendingWrites()`, otherwise the other clients from the last event loop could get
an acknowledgement before their data is persisted.
For instance:
```
in the same event loop
client A executes set foo bar
client B executes eval "for var=1,10000000,1 do end" 0
```
After the script times out, client A will get `OK` but lose its data after a restart (if Redis is
killed during the timeout) unless we flush the write command to the AOF.
4. A more complex case of `ProcessingEventsWhileBlocked`
is a Lua timeout inside a transaction, for example
`MULTI; set foo bar; eval "for var=1,10000000,1 do end" 0; EXEC`: the client would get the SET
command's result before the whole transaction is done, which breaks atomicity too.
Fortunately, this is already fixed by #5428 (although that was a side
effect rather than its original purpose), but the module timeout case should be fixed too.
Cases 1, 2 and 3 are fixed in this commit; the module issue in case 4 needs a follow-up PR.
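A heavily simplified, hedged sketch of the ordering that cases 2 and 3 rely on (the real `beforeSleep()` and `ProcessingEventsWhileBlocked` paths do much more; only the relative order of these three calls is illustrated):
```
/* beforeSleep(), heavily simplified to show only the relative order: */
handleClientsBlockedOnKeys();       /* unblocked clients may produce more writes...   */
flushAppendOnlyFile(0);             /* ...so with appendfsync=always, flush/fsync now */
handleClientsWithPendingWrites();   /* only then are replies sent back to clients     */
```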
Add a field to the COMMAND DOCS response to denote the name of the module
that added that command.
COMMAND LIST can filter by module, but if you get the full command list,
you may still want to know which command belongs to which module.
The alternative would be to do MODULE LIST and then multiple calls to COMMAND LIST.
The `auto-aof-rewrite-percentage` config defines at what growth percentage
an automatic AOF rewrite is triggered.
This normally works OK since the size of the AOF file at the end of a rewrite
is stored in `server.aof_rewrite_base_size`.
However, on startup, Redis used to store the entire size of the AOF file in that
variable, resulting in a wrong automatic AOF rewrite trigger (it could have been
triggered much later than desired).
This issue would only affect the first AOFRW after startup; after that, future AOFRWs
would be triggered correctly.
This bug existed in all previous versions of Redis.
This PR unifies the meaning of `server.aof_rewrite_base_size`, which only represents
the size of BASE AOF.
Note that after an AOFRW this size includes the size of the incremental file (all the
commands that executed during the rewrite), so auto-aof-rewrite-percentage is measured as the
growth ratio relative to the size of the AOF right after the rewrite.
However, on startup, it is complicated to know that size, and we compromised on
taking just the size of the base file, this means that the first rewrite after startup can
happen a little bit too soon.
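As a rough worked example (numbers made up): with `auto-aof-rewrite-percentage 100` and a base size of 64 MB recorded after the last rewrite, the next automatic rewrite triggers once the total AOF reaches about 128 MB (100% growth over the base). With the old startup behavior, loading a 128 MB AOF would record 128 MB as the base, pushing the first trigger out to about 256 MB, i.e. much later than desired.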
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: yoav-steinberg <yoav@redislabs.com>
The bug occurred when using REDISMODULE_YIELD_FLAG_CLIENTS:
in that case we would have only set the CLIENTS type flag in
server.busy_module_yield_flags and then cleared that flag when exiting
RM_Yield, so we would never call unblockPostponedClients when the
context is destroyed.
This didn't really have any actual implication, which is why the tests
couldn't (and still can't) catch it: the bug only happens when
using CLIENTS, but in that case we won't have any clients to un-postpone,
i.e. clients will get rejected with a BUSY error rather than being
postponed.
Unrelated:
* Adding tests for nested contexts, just in case.
* Avoid nested RM_Yield calls
Fixes to command arguments in the JSON files:
* Fixes BITFIELD's syntax ("sub-commands" can be repeated, and OVERFLOW is only valid for SET and INCR)
* Improves readability of SET (reordered)
* Fixes GEOSEARCH and GEOSEARCH_RO syntaxes (use `oneof` for mutually exclusive group instead of `optional`)
* Fixes MIGRATE syntax (use `oneof` for mutually exclusive group instead of `optional`)
* Fixes MODULE LOADEX syntax (the `CONFIG` token should be repeated too when using multiple configs)
other:
* make generate-command-help.rb accept a path to commands.json, or read it from stdin (e.g. `generate-commands-json.py | generate-command-help.rb -`)
Fixed a bug where using the `hincrbyfloat` or `hincrby` commands to make the field or value exceed
`hash_max_listpack_value` did not change the object encoding of the hash structure.
Add a length check for the field and value: check the length of the value first, and if it does not
exceed `hash_max_listpack_value`, then check the length of the field.
If the field or value is too long, it reduces the efficiency of the listpack, and the object encoding
would become hashtable after an AOF restart anyway, so this also keeps the encoding the same before and after an AOF restart.
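A hedged illustration of the scenario (key names and values chosen arbitrarily):
```
CONFIG SET hash-max-listpack-value 8
HSET myhash f 1
OBJECT ENCODING myhash              # "listpack"
HINCRBYFLOAT myhash f 3.141592653589793
OBJECT ENCODING myhash              # before the fix: still "listpack" (until an AOF reload);
                                    # after the fix: "hashtable"
```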
Sentinel once in a while experiences a TILT period or a leader election
failure cycle. The problem is that those default timeouts are too big, and once
this happens, it breaks our tests. Suggesting:
- Reduce failover-timeout from 20 to 10 sec (it is actually multiplied by 2,
reaching a 40 sec timeout).
- Change tilt-period from the default of 30 sec to 5 sec. When a TILT period happens
it might lead to a failover in our tests, and might also cause a failover
cycle failure.
Sentinel tests should `wait_for_condition` up to 50 seconds, where needed,
to be stable in case of a single TILT period or failover failure cycle.
In addition, relax the timing configuration for the "manual failover" Sentinel test
(it was modified several months ago as part of an effort to reduce test runtime).
The command JSON documents should just include information about the "arguments" and the "outputs".
I removed all of the 'functional wording' so it's clear.
## Move library metadata to be part of the library payload.
Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**
The PR moves all the library metadata (engine and function name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:
```lua
#!lua name=test
redis.register_function('foo', function() return 1 end)
```
The above script will run on the Lua engine and will create a library called `test`.
## API Changes (compare to 7.0 rc2)
* The `FUNCTION LOAD` command was changed and now simply gets the library payload and extracts the engine and name from it. In addition, the command now returns the library name, which can later be used with `FUNCTION DELETE` and `FUNCTION LIST`.
* The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`.
## Breaking Changes (compare to 7.0 rc2)
* Library description was removed (we can re-add it in the future either as part of the shebang line or an additional line).
* Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.
## Notes
* Loading an RDB file that was generated by rc1 / rc2 **is** supported; Redis will automatically add the shebang to the library payloads (we can probably delete that code after 7.0.3 or so, since there's no need to keep supporting upgrades from an RC build).
* Limit cluster node id length for CLUSTER commands loading
* Cluster node name sanity check for length and values
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
If, for some reason, Redis decides not to execute the script, we need
to pop the function and error handler from the Lua stack. Otherwise, eventually
the Lua stack will explode.
Relevant only for 7.0-rc1 and 7.0-rc2.
* Fix race condition where node loses its last slot and turns into replica
When a node had lost its last slot and found out about it from a SETSLOT command
before the cluster bus PONG from the new owner arrived, the
node didn't turn itself into a replica of the new slot owner.
This commit adds the same logic to the SETSLOT command as already exists
for the cluster bus PONG processing.
* Revert "Fix new / failing cluster slot migration test (#10482)"
This reverts commit 0b21ef8d49.
In this test, the old slot owner finds out that it has lost its last
slot in a nondeterministic way: either from the cluster bus PONG from the
new slot owner or from a SETSLOT command sent by redis-cli. In
both cases, the result should be the same and the old owner should
turn itself into a replica of the new slot owner.
Fix the global `strval` not being reset to NULL after being freed, causing a crash on Alpine
(most likely because the dynamic library loader doesn't re-init globals on reload).
By the way, fix the memory leak caused by using `RedisModule_Free` to free a `RedisModuleString`,
and add a corresponding test.
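A hedged sketch of the pattern the fix boils down to (the variable name follows the description above; the helper and how the context is obtained in the real test module are made up):
```
/* Hypothetical: free the module-level global and reset it, so a later reload
 * of the shared object (where globals may keep stale values) doesn't leave a
 * dangling pointer behind. */
static RedisModuleString *strval = NULL;

static void ResetStrval(RedisModuleCtx *ctx) {
    if (strval) {
        RedisModule_FreeString(ctx, strval);
        strval = NULL;
    }
}
```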
During 11-manual-takeover.tcl, if the killing of the instances happens
too slowly, one of the replicas might be able to promote itself.
I'm not sure why it was slow, but it was observed taking 6 seconds,
which is enough time to do an election.
I was able to verify the error locally by adding a small delay (1 second)
during ASAN CI. A fix is just to disable automated failover until all the
nodes are confirmed dead.
On error, redis-cli was returning `REDIS_ERR` in some cases by mistake. `REDIS_ERR` is `-1`, which becomes `255` as an exit code. This commit changes it to return `1` on errors, for consistency.
This feature adds the ability to add four different types (Bool, Numeric,
String, Enum) of configurations to a module to be accessed via the redis
config file, and the CONFIG command.
**Configuration Names**:
We impose a restriction that a module configuration always starts with the
module name and contains a '.' followed by the config name. If a module passes
"config1" as the name to a register function, it will be registered as MODULENAME.config1.
**Configuration Persistence**:
Module Configurations exist only as long as a module is loaded. If a module is
unloaded, the configurations are removed.
There is now also a minimal core API for removal of standardConfig objects
from configs by name.
**Get and Set Callbacks**:
Storage of config values is owned by the module that registers them, and provides
callbacks for Redis to access and manipulate the values.
This is exposed through a GET and SET callback.
The get callback returns a typed value of the config to redis. The callback takes
the name of the configuration, and also a privdata pointer. Note that these only
take the CONFIGNAME portion of the config, not the entire MODULENAME.CONFIGNAME.
```
typedef RedisModuleString * (*RedisModuleConfigGetStringFunc)(const char *name, void *privdata);
typedef long long (*RedisModuleConfigGetNumericFunc)(const char *name, void *privdata);
typedef int (*RedisModuleConfigGetBoolFunc)(const char *name, void *privdata);
typedef int (*RedisModuleConfigGetEnumFunc)(const char *name, void *privdata);
```
Configs must also specify a set callback, i.e. what to do on a CONFIG SET XYZ 123
or when loading configurations from the cli/.conf file, matching these typedefs. *name* is
again just the CONFIGNAME portion, *val* is the parsed value from the core,
*privdata* is the registration-time privdata pointer, and *err* is for providing errors to a client.
```
typedef int (*RedisModuleConfigSetStringFunc)(const char *name, RedisModuleString *val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetNumericFunc)(const char *name, long long val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetBoolFunc)(const char *name, int val, void *privdata, RedisModuleString **err);
typedef int (*RedisModuleConfigSetEnumFunc)(const char *name, int val, void *privdata, RedisModuleString **err);
```
Modules can also specify an optional apply callback that will be called after
value(s) have been set via CONFIG SET:
```
typedef int (*RedisModuleConfigApplyFunc)(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err);
```
**Flags:**
We expose 7 new flags to the module, which are used as part of the config registration.
```
#define REDISMODULE_CONFIG_MODIFIABLE 0 /* This is the default for a module config. */
#define REDISMODULE_CONFIG_IMMUTABLE (1ULL<<0) /* Can this value only be set at startup? */
#define REDISMODULE_CONFIG_SENSITIVE (1ULL<<1) /* Does this value contain sensitive information */
#define REDISMODULE_CONFIG_HIDDEN (1ULL<<4) /* This config is hidden in `config get <pattern>` (used for tests/debugging) */
#define REDISMODULE_CONFIG_PROTECTED (1ULL<<5) /* Becomes immutable if enable-protected-configs is enabled. */
#define REDISMODULE_CONFIG_DENY_LOADING (1ULL<<6) /* This config is forbidden during loading. */
/* Numeric Specific Configs */
#define REDISMODULE_CONFIG_MEMORY (1ULL<<7) /* Indicates if this value can be set as a memory value */
```
**Module Registration APIs**:
```
int (*RedisModule_RegisterBoolConfig)(RedisModuleCtx *ctx, char *name, int default_val, unsigned int flags, RedisModuleConfigGetBoolFunc getfn, RedisModuleConfigSetBoolFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterNumericConfig)(RedisModuleCtx *ctx, const char *name, long long default_val, unsigned int flags, long long min, long long max, RedisModuleConfigGetNumericFunc getfn, RedisModuleConfigSetNumericFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterStringConfig)(RedisModuleCtx *ctx, const char *name, const char *default_val, unsigned int flags, RedisModuleConfigGetStringFunc getfn, RedisModuleConfigSetStringFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_RegisterEnumConfig)(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, const char **enum_values, const int *int_values, int num_enum_vals, RedisModuleConfigGetEnumFunc getfn, RedisModuleConfigSetEnumFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata);
int (*RedisModule_LoadConfigs)(RedisModuleCtx *ctx);
```
The module name, along with a ".", will be automatically prepended to the name of the config.
**What RM_Register[...]Config does**:
A RedisModule struct now keeps a list of ModuleConfig objects which look like:
```
typedef struct ModuleConfig {
sds name; /* Name of config without the module name appended to the front */
void *privdata; /* Optional data passed into the module config callbacks */
union get_fn { /* The get callback specified by the module */
RedisModuleConfigGetStringFunc get_string;
RedisModuleConfigGetNumericFunc get_numeric;
RedisModuleConfigGetBoolFunc get_bool;
RedisModuleConfigGetEnumFunc get_enum;
} get_fn;
union set_fn { /* The set callback specified by the module */
RedisModuleConfigSetStringFunc set_string;
RedisModuleConfigSetNumericFunc set_numeric;
RedisModuleConfigSetBoolFunc set_bool;
RedisModuleConfigSetEnumFunc set_enum;
} set_fn;
RedisModuleConfigApplyFunc apply_fn;
RedisModule *module;
} ModuleConfig;
```
It also registers a standardConfig in the configs array, with a pointer to the
ModuleConfig object associated with it.
**What happens on a CONFIG GET/SET MODULENAME.MODULECONFIG:**
For CONFIG SET, we do the same parsing as is done in config.c and pass that
as the argument to the module set callback. For CONFIG GET, we call the
module get callback and return that value to config.c to return to a client.
**CONFIG REWRITE**:
Starting up a server with module configurations in a .conf file but no corresponding module load
directive will fail. The flip side is also true: specifying a module load and a bunch
of module configurations will load those configurations in, using the module-defined
set callbacks, on the RM_LoadConfigs call. Config rewriting works the same
way as it does for standard configs, as the module has the ability to specify a
default value. If a module is unloaded with configurations specified in the .conf file,
those configurations will be commented out of the .conf file on the next config rewrite.
**RM_LoadConfigs:**
`RedisModule_LoadConfigs(RedisModuleCtx *ctx);`
This last API is used to make configs available within the onLoad() after they have
been registered. The expected usage is that a module will register all of its configs,
then call LoadConfigs to trigger all of the set callbacks, and then can error out if any
of them were malformed. LoadConfigs will attempt to set all configs registered to
either a .conf file argument/loadex argument or their default value if an argument is
not specified. **LoadConfigs is a required function if configs are registered.**
Also note that LoadConfigs **does not** call the apply callbacks, but a module
can do that directly after the LoadConfigs call.
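A hedged end-to-end sketch of the expected onLoad() usage, following the typedefs, flags, and registration signatures listed above (the module name, config name, and the backing variable are made up):
```
#include "redismodule.h"

static int fastmode = 1;  /* module-owned storage for mymodule.fastmode */

static int GetFastMode(const char *name, void *privdata) {
    REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata);
    return fastmode;
}

static int SetFastMode(const char *name, int val, void *privdata, RedisModuleString **err) {
    REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata); REDISMODULE_NOT_USED(err);
    fastmode = val;
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Registered as "mymodule.fastmode"; the module name is prepended automatically. */
    if (RedisModule_RegisterBoolConfig(ctx, "fastmode", 1, REDISMODULE_CONFIG_MODIFIABLE,
                                       GetFastMode, SetFastMode, NULL, NULL) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Apply values from the .conf file / MODULE LOADEX CONFIG args (or defaults). */
    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```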
**New Command: MODULE LOADEX [CONFIG NAME VALUE] [ARGS ...]:**
This command makes it possible to provide startup context information to a module.
LOADEX stands for "load extended", similar to GETEX. Note that provided config
names need the full MODULENAME.MODULECONFIG name. Any additional
arguments a module might want are intended to be specified after ARGS.
Everything after ARGS is passed to onLoad as RedisModuleString **argv.
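For instance, a hypothetical invocation matching the syntax above (the path, config name, and extra args are made up):
```
MODULE LOADEX /path/to/mymodule.so CONFIG mymodule.fastmode no ARGS --verbose
```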
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Madelyn Olson <matolson@amazon.com>
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
* A missing parenthesis meant that the ubuntu and centos jobs were not
skipped.
* The recently divided freebsd, macos, and valgrind jobs, which are now
split into distinct jobs for redis, modules, sentinel, and cluster, were
all executed, producing a build, but not running anything.
Now they're filtered at the job level.
* iothreads was missing from the skip list defaults, so it was not skipped.