The new module redact test will fail with valgrind:
```
[err]: modules can redact arguments in tests/unit/moduleapi/auth.tcl
Expected 'slowlog reset' to be equal to 'auth.redact 1 (redacted) 3 (redacted)' (context: type eval line 12 cmd {assert_equal {slowlog reset} [lindex [lindex [r slowlog get] 2] 3]} proc ::test)
```
The reason is that with `slowlog-log-slower-than 10000`,
`slowlog get` itself has a chance to exceed 10ms and get logged.
Made two changes to avoid the failure:
1. Change `slowlog-log-slower-than` from 10000 to -1, i.e. disable it.
2. Make the assertion use the previous execution result.
In theory, the second change could be left out, but I
think it is better to include it.
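A negative value disables the slow log entirely (this is documented behavior in redis.conf), so the first change amounts to the equivalent of:
```
127.0.0.1:6379> CONFIG SET slowlog-log-slower-than -1
OK
```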
Implement a new CLUSTER SHARDS command, which provides a flexible and extensible API for topology discovery.
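For illustration, a trimmed reply from a hypothetical single-shard cluster (the node ID and values are illustrative):
```
127.0.0.1:6379> CLUSTER SHARDS
1) 1) "slots"
   2) 1) (integer) 0
      2) (integer) 16383
   3) "nodes"
   4) 1)  1) "id"
          2) "e10b7051d6bf2d5febd39a2be297bbaea6084111"
          3) "port"
          4) (integer) 6379
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "master"
         11) "replication-offset"
         12) (integer) 72156
         13) "health"
         14) "online"
```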
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Currently SORT and SORT_RO can access external keys via `GET` and `BY`.
In order to make sure the user cannot violate the authorization ACL
rules, the decision is to reject external key access patterns unless the ACL allows
SORT full access to all keys.
I.e. for backwards compatibility, SORT with GET/BY keeps working, but
if the ACL has restrictions on certain keys, these features get permission denied.
### Implemented solution
We have discussed several potential solutions and decided to only allow the GET and BY
arguments when the user has all-keys permission together with the SORT command. The reasoning
is that SORT with GET or BY is problematic anyway; for instance, it is not supported in
cluster mode since it doesn't declare keys, and we're not sure the combination of that feature
with ACL key restrictions is really required.
**HOWEVER**, if in the fullness of time we identify a real need for fine-grained access
support for SORT, we would implement the complete solution, which is the alternative
described below.
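For illustration, a hypothetical session (the user name, key patterns, and exact error text are illustrative):
```
127.0.0.1:6379> ACL SETUSER alice on >pass ~mylist* +sort
OK
127.0.0.1:6379> AUTH alice pass
OK
127.0.0.1:6379> SORT mylist BY weight_* GET obj_*
(error) NOPERM ... (GET/BY rejected: this user's ACL does not grant SORT access to all keys)
127.0.0.1:6379> SORT mylist
(empty array)
```
Plain SORT (without GET/BY) keeps working under the restricted ACL.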
### Alternative (complete solution):
Check the SORT ACL rules after executing it and before committing the output (either via STORE or
to the client output buffer, COB). This would require making several changes to the SORT command
itself, and would potentially cause performance degradation, since we would have to collect all
the GET keys and scan them against the ACL selectors, instead of just applying them to a temp array.
This solution could include an optimization to avoid the overhead of collecting
the key names, in case the ACL rules grant SORT full key access, or if the ACL key pattern
literally matches the one used in GET/BY. It would also mean that authorization would be
O(nlogn), since we would have to complete most of the command execution before we can
perform the verification.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
In a benchmark we noticed we spend a relatively long time updating the client
memory usage, leading to performance degradation.
Before #8687 this was performed in the client's cron and didn't affect performance.
But since introducing client eviction we need to perform this after filling the input
buffers and after processing commands. This also led me to write this code to be
thread safe and perform it in the I/O threads.
It turns out that the main performance issue here is related to atomic operations
being performed while updating the total clients memory usage stats used for client
eviction (`server.stat_clients_type_memory[]`). This update needed to be atomic
because `updateClientMemUsage()` was called from the IO threads.
In this commit I make sure to call `updateClientMemUsage()` only from the main thread.
In case of threaded IO I call it for each client during the "fan-in" phase of the read/write
operation. This also means I could drop the `updateClientMemUsageBucket()` function,
which was called during this phase, and embed it into `updateClientMemUsage()`.
Profiling shows this makes `updateClientMemUsage()` (on my x86_64 Linux box) roughly 4x faster.
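A self-contained sketch of the fan-in idea (all names are illustrative; this is not the actual Redis code): the I/O threads do the per-client work, and only the main thread updates the shared per-type totals afterwards, so the totals can be plain, non-atomic counters.
```c
#include <pthread.h>
#include <stdio.h>

#define CLIENTS 8
#define TYPES 2

typedef struct {
    int type;           /* which per-type bucket this client belongs to */
    long mem_used;      /* current memory attributed to this client */
    long last_reported; /* what we last added to the shared totals */
} fake_client;

static fake_client clients[CLIENTS];

/* Updated only by the main thread, so plain longs suffice (no atomics). */
static long total_mem_by_type[TYPES];

/* Pretend per-client I/O work done by an I/O thread. */
static void *io_worker(void *arg) {
    fake_client *c = arg;
    c->mem_used += 1024; /* e.g. the read grew the input buffer */
    return NULL;
}

/* Main-thread-only accounting, in the spirit of updateClientMemUsage(). */
static void update_client_mem_usage(fake_client *c) {
    total_mem_by_type[c->type] += c->mem_used - c->last_reported;
    c->last_reported = c->mem_used;
}

int main(void) {
    pthread_t tids[CLIENTS];
    for (int i = 0; i < CLIENTS; i++) {
        clients[i].type = i % TYPES;
        pthread_create(&tids[i], NULL, io_worker, &clients[i]);
    }
    /* Fan-in: after the I/O threads finish, account for each client serially. */
    for (int i = 0; i < CLIENTS; i++) {
        pthread_join(tids[i], NULL);
        update_client_mem_usage(&clients[i]);
    }
    for (int t = 0; t < TYPES; t++)
        printf("type %d total: %ld bytes\n", t, total_mem_by_type[t]);
    return 0;
}
```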
We need to wait for `sentinelTimer` to kick in and then trigger the reconnect.
As another change, it is better to call `server_set_password` before calling SENTINEL SET auth-pass.
Fixes a problem introduced in #10400.
For an integer string like "123456789012345678901", which would cause
an overflow failure in the string2ll() conversion, we can compare its length at
the beginning to avoid the extra work.
* Move LONG_STR_SIZE to be declared in util.h, next to MAX_LONG_DOUBLE_CHARS.
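A minimal sketch of the length gate (string2ll itself lives in util.c; the helper below is illustrative): a long long needs at most 20 characters, sign included, so longer strings can be rejected without parsing.
```c
#include <stdio.h>
#include <string.h>

#define LONG_STR_SIZE 21 /* bytes for "-9223372036854775808" plus '\0' */

/* Fast pre-check: a valid long long uses at most 20 characters (sign
 * included), so anything longer can be rejected without parsing. */
static int fits_long_long_length(size_t slen) {
    return slen < LONG_STR_SIZE;
}

int main(void) {
    const char *s = "123456789012345678901"; /* 21 digits: must overflow */
    printf("\"%s\" -> %s\n", s,
           fits_long_long_length(strlen(s)) ? "run full parse" : "reject early");
    return 0;
}
```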
It is better to check the monitors list argument instead of server.monitors in the function,
although they are basically the same in this context, so this doesn't have any
impact on the current code.
When updating SENTINEL with the master's new password (command:
`SENTINEL SET mymaster auth-pass some-new-password`),
sentinel might still keep the old connection and avoid reconnecting
with the new password. This is because of wrong logic that tracks
the last ping (pong) time to servers. In fact it worked fine until 8631e64
changed the condition for sending a ping. To resolve it with minimal risk,
let's disconnect the master and replicas once the password/user changes.
Based on earlier work of yz1509.
The following usage outputs an empty line:
```
> redis-cli help set
empty line
```
The reason is that we only call `cliInitHelp`, which initializes
the help, in interactive mode.
When using `redis-cli help xxx` or `redis-cli help ? xxx` non-interactively,
we can't match the command due to the empty `helpEntries`,
so we output an empty line.
In this commit, we call `cliInitHelp` to init the help in that case too.
Note that this means we need to call `cliInitHelp` (COMMAND DOCS)
every time, which I think is acceptable.
So now the output will look like:
```
[redis]# src/redis-cli help get
GET key
summary: Get the value of a key
since: 1.0.0
group: string
[redis]#
```
Fixes #10378
This PR also fixes a redis-cli crash when using `--ldb --eval`:
```
[root]# src/redis-cli --ldb --eval test.lua test 1
Lua debugging session started, please use:
quit -- End the session.
restart -- Restart the script in debug mode again.
help -- Show Lua script debugging commands.
* Stopped at 1, stop reason = step over
-> 1 local num = redis.call('GET', KEYS[1]);
redis-cli: redis-cli.c:718: cliCountCommands: Assertion
`commandTable->element[i]->type == 1' failed.
Aborted
```
Because in ldb mode, `COMMAND DOCS` or `COMMAND` returns
an array with only one element, whose type
is `REDIS_REPLY_STATUS`, containing `<error> Unknown
Redis Lua debugger command or wrong number of arguments`.
So if we are in ldb mode and init the Redis HELP, we
get the wrong response and crash redis-cli.
Now, in ldb mode we don't initialize HELP; it is only initialized
after the Lua debugging session ends.
It was broken in #10043
* fix-replication-comments
The described capability,
`and to schedule a new BGSAVE if there are slaves that attached while a BGSAVE was in progress`,
was moved to `checkChildrenDone()` and `replicationStartPendingFork()`,
but the comment was not updated, which may mislead others.
* remove-misleading-comments
The described capabilities
`to schedule a new BGSAVE if there are slaves that attached while a BGSAVE was in progress`
and
`or when the replication RDB transfer strategy is modified from disk to socket or the other way around`
are no longer correct.
* Stats and latency commands have non-deterministic output.
* The ones about latency should be sent to ALL_NODES (considering
reads from replicas).
* The ones about running scripts and memory usage go only to masters.
* Stats aggregation is SPECIAL (like in INFO).
Deleting a stream while a client is blocked on XREADGROUP should unblock the client.
The idea is that a client blocked via XREADGROUP is different from
any other blocking type in the sense that it depends on the existence of both
the key and the group. Even if the key is deleted and then revived with XADD,
it won't help any clients blocked on XREADGROUP because the group no longer
exists, so they would fail with -NOGROUP anyway.
The conclusion is that it's better to unblock these clients (with an error) upon
the deletion of the key, rather than waiting for the first XADD.
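For illustration, a hypothetical two-client session (the exact error text is illustrative):
```
# Client A
127.0.0.1:6379> XADD s 1-1 field value
"1-1"
127.0.0.1:6379> XGROUP CREATE s g $
OK
127.0.0.1:6379> XREADGROUP GROUP g consumer1 BLOCK 0 STREAMS s >
... blocks ...

# Client B
127.0.0.1:6379> DEL s
(integer) 1

# Client A is unblocked immediately with an error instead of hanging forever
(error) UNBLOCKED the stream key no longer exists
```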
Other changes:
1. Slightly optimize all `serveClientsBlockedOn*` functions by checking `server.blocked_clients_by_type`
2. All `serveClientsBlockedOn*` functions now use a list iterator rather than looking at `listFirst`, relying
on `unblockClient` to delete the head of the list. Before this commit, only `serveClientsBlockedOnStreams`
used to work like that.
3. Bugfix: CLIENT UNBLOCK ERROR should work even if the command doesn't have a timeout_callback
(only relevant to module commands).
In some special commands like EVAL_RO / FCALL_RO we allow no-writes commands.
But may-replicate commands are no-writes too, and that led to a crash when clients were paused for writes.
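Before this fix, a sequence along these lines could crash the server (PUBLISH being a may-replicate command; the session is illustrative):
```
127.0.0.1:6379> CLIENT PAUSE 100000 WRITE
OK
127.0.0.1:6379> EVAL_RO "return redis.call('publish','ch','msg')" 0
```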
`Expected '*table size: 4096*' to match '*table size: 8192*'`
This test failed once on the daily macOS run; the reason is that
the bgsave had not stopped after the kill and the `after 200`.
So there was still a child process and no rehash was triggered.
This commit uses `waitForBgsave` to wait for it to finish.
Normally, `redis-cli` escapes non-printable data received from Redis, using a custom scheme (which is also used to handle quoted input). When using `--json` this is not desired as it is not compatible with RFC 7159, which specifies JSON strings are assumed to be Unicode and how they should be escaped.
This commit changes `--json` to follow RFC 7159, which means that properly encoded Unicode strings in Redis will result in valid Unicode JSON.
However, this introduces a new problem with `--json` and data that is not valid Unicode (e.g., random binary data, text that follows other encoding, etc.). To address this, we add `--quoted-json` which produces JSON strings that follow the original redis-cli quoting scheme.
For example, a value that consists of only null (0x00) bytes will show up as:
* `"\u0000\u0000\u0000"` when using `--json`
* `"\\x00\\x00\\x00"` when using `--quoted-json`
1. Since ZSKIPLIST_P is a float, using it directly inside the condition used to cause floating point code to be emitted (gcc/x86).
2. On some operating systems (e.g. Windows), the largest value returned from random() is 0x7FFF (15 bits), so after the bitwise AND with 0xFFFF, the probability of the less-than comparison in the while loop's condition returning true is no longer equal to ZSKIPLIST_P.
3. In case some library has random() returning values in the range [0, ZSKIPLIST_P*65535], the while loop becomes an infinite loop.
4. On Linux, where RAND_MAX is higher than 0xFFFF, this change actually improves precision (despite not matching the result against a float value).
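A self-contained sketch of the corrected generator (constants as in t_zset.c; the histogram in main is just for demonstration): comparing random() against a precomputed integer threshold keeps the comparison in integer code and is correct for any RAND_MAX.
```c
#include <stdio.h>
#include <stdlib.h>

#define ZSKIPLIST_MAXLEVEL 32
#define ZSKIPLIST_P 0.25

/* Promote to a higher level with probability ZSKIPLIST_P per step. */
int zslRandomLevel(void) {
    static const int threshold = ZSKIPLIST_P * RAND_MAX;
    int level = 1;
    while (random() < threshold) level += 1;
    return (level < ZSKIPLIST_MAXLEVEL) ? level : ZSKIPLIST_MAXLEVEL;
}

int main(void) {
    int counts[ZSKIPLIST_MAXLEVEL + 1] = {0};
    for (int i = 0; i < 1000000; i++) counts[zslRandomLevel()]++;
    /* Expect roughly 750000, 187500, 46875, ... for levels 1, 2, 3, ... */
    for (int level = 1; level <= 5; level++)
        printf("level %d: %d\n", level, counts[level]);
    return 0;
}
```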
In order to resolve some flaky tests which rely heavily on examining the memory footprint,
we introduce the following fixes:
# Fix in client-eviction test - by @yoav-steinberg
Sometimes the libc allocator can use different-size client struct allocations;
this may cause unexpected memory calculations to fail the test.
# Introduce new DEBUG command for disabling reply buffer resizing
In order to eliminate reply buffer resizing during specific tests,
we introduced the ability to disable (and enable) the resizing cron job.
Co-authored-by: yoav-steinberg yoav@redislabs.com
* Moved configuration storage from a list to a hash table
* Configs are returned in a non-deterministic order. It's possible that a client was relying on order (hopefully not).
* Fixed an esoteric bug where if you did a CONFIG SET with an alias and it raised an error, the error message would reference the preferred name for that config.
After introducing #9822 we need to prevent the client reply buffer from shrinking
in order to maintain correct client memory math.
Also add a missing needs:debug tag on one test.
Co-authored-by: Oran Agra <oran@redislabs.com>
This PR fixes 2 issues in Lua scripting:
* Server error reply statistics (some errors were counted twice).
* Error code and error strings returning from scripts (error code was missing / misplaced).
## Statistics
A Lua script is considered part of the user application, a sophisticated transaction,
so we want to count an error even if it is handled silently by the script, but when it is
propagated outwards from the script we don't want to count it twice. On the other hand,
if the script decides to throw an error on its own (using `redis.error_reply`), we want to
count that too.
Besides, since we count the `calls` in the command statistics for the commands the script calls,
we should certainly also count `failed_calls`.
So when a simple `eval "return redis.call('set','x','y')" 0` fails, it should count the failed call
to both SET and EVAL, but the `errorstats` and `total_error_replies` should be counted only once.
The PR changes the error object that is raised on errors. Instead of raising a simple Lua
string, Redis will raise a Lua table in the following format:
```
{
err='<error message (including error code)>',
source='<User source file name>',
line='<line where the error happened>',
ignore_error_stats_update=true/false,
}
```
The `luaPushError` function was modified to construct the new error table as described above.
`luaRaiseError` was renamed to `luaError` and now simply calls `lua_error` to raise
the table on the top of the Lua stack as the error object.
The reason is that since its functionality changed, in case some Redis branch / fork uses it,
it's better to get a compilation error than a bug.
The `source` and `line` fields are enriched by the error handler (if possible), and
`ignore_error_stats_update` is optional; if it's not present, the default value is `false`.
If `ignore_error_stats_update` is true, the error will not be counted in the error stats.
When parsing a Redis call reply, each error is translated to a Lua table in the format described
above, and the `ignore_error_stats_update` field is set to `true` so we do not count errors
twice (we already counted the error when we invoked the command).
The changes in this PR might be considered a breaking change for users of the
Lua `pcall` function. Before, the error was a string, and now it's a table. To keep backward
compatibility, the PR overrides the `pcall` implementation to extract the error message from
the error table and return it.
Example of the error stats update:
```
127.0.0.1:6379> lpush l 1
(integer) 2
127.0.0.1:6379> eval "return redis.call('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value. script: e471b73f1ef44774987ab00bdf51f21fd9f7974a, on @user_script:1.
127.0.0.1:6379> info Errorstats
# Errorstats
errorstat_WRONGTYPE:count=1
127.0.0.1:6379> info commandstats
# Commandstats
cmdstat_eval:calls=1,usec=341,usec_per_call=341.00,rejected_calls=0,failed_calls=1
cmdstat_info:calls=1,usec=35,usec_per_call=35.00,rejected_calls=0,failed_calls=0
cmdstat_lpush:calls=1,usec=14,usec_per_call=14.00,rejected_calls=0,failed_calls=0
cmdstat_get:calls=1,usec=10,usec_per_call=10.00,rejected_calls=0,failed_calls=1
```
## Error message
We can now construct the error message (sent as a reply to the user) from the error table,
so this solves issues where the error message was malformed and the error code appeared
in the middle of the error message:
```diff
127.0.0.1:6379> eval "return redis.call('set','x','y')" 0
-(error) ERR Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479): @user_script:1: OOM command not allowed when used memory > 'maxmemory'.
+(error) OOM command not allowed when used memory > 'maxmemory' @user_script:1. Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479)
```
```diff
127.0.0.1:6379> eval "redis.call('get', 'l')" 0
-(error) ERR Error running script (call to f_8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1): @user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value
+(error) WRONGTYPE Operation against a key holding the wrong kind of value script: 8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1, on @user_script:1.
```
Notice that `redis.pcall` was not changed:
```
127.0.0.1:6379> eval "return redis.pcall('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```
## Other notes
Notice that some commands (like GEOADD) change the `cmd` variable on the client, so we
cannot rely on it to update the command stats. In order to update those stats correctly,
we needed to promote the `realcmd` variable to be located on the client struct.
Tests were added and modified to verify the changes.
Related PR's: #10279, #10218, #10278, #10309
Co-authored-by: Oran Agra <oran@redislabs.com>
Avoid a deferred array reply in genericZrangebyrankCommand() when the consumer type is client,
i.e. any ZRANGE / ZREVRANGE (when a rank is used).
This was a performance regression introduced in #7844 (v6.2), mainly affecting pipelined workloads.
Co-authored-by: Oran Agra <oran@redislabs.com>
This is basically just a subtree pull of the latest (unreleased) hiredis.
Unfortunately, the `sds -> hisds` patch was pulled as a subtree update from a remote branch rather than a local redis change. Because of that, it goes away on every subtree update. It is now applied as a local commit so it should survive in the future.
Adds the ability to track the lag of a consumer group (CG), that is, the number
of entries yet-to-be-delivered from the stream.
The proposed constant-time solution is in the spirit of "best-effort."
Partially addresses #8737.
## Description of approach
We add a new "entries_added" property to the stream. This starts at 0 for a new
stream and is incremented by 1 with every `XADD`. It is essentially an all-time
counter of the entries added to the stream.
Given the stream's length and this counter value, we can trivially find the logical
"entries_added" counter of the first ID if and only if the stream is contiguous.
A fragmented stream contains one or more tombstones generated by `XDEL`s.
The new "xdel_max_id" stream property tracks the latest tombstone.
The CG also tracks its last-delivered ID as an "entries_read" counter and
increments it independently when delivering new messages, unless this
read counter is invalid (-1 means an invalid offset). When the CG's counter is
available, the reported lag is the difference between the added and read counters.
Lastly, this also adds a "first_id" field to the stream structure in order to make
looking it up cheaper in most cases.
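For example (illustrative numbers): a contiguous stream with "entries_added" = 100 and a CG whose "entries_read" = 93 reports a lag of 100 - 93 = 7 entries yet to be delivered.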
## Limitations
There are two cases in which the mechanism isn't able to track the lag.
In these cases, `XINFO` replies with `null` in the "lag" field.
The first case is when a CG is created with an arbitrary last delivered ID,
that isn't "0-0", nor the first or the last entries of the stream. In this case,
it is impossible to obtain a valid read counter (short of an O(N) operation).
The second case is when there are one or more tombstones fragmenting
the stream's entries range.
In both cases, given enough time and assuming that the consumers are
active (reading and acking) and advancing, the CG should be able to
catch up with the tip of the stream and report zero lag.
Once that's achieved, lag tracking would resume as normal (until the
next tombstone is set).
## API changes
* `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
for explicitly specifying the new CG's counter.
* `XGROUP SETID` added with the optional named argument `[ENTRIESREAD entries-read]`
for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command that's used in replication/AOF. It has been added with
the optional named arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
for propagating the CG's offset and maximal tombstone ID of the stream.
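For illustration, a hypothetical session (field order and values illustrative): a group created at ID 0 on a two-entry stream starts with a valid read counter of 0, so its lag of 2 is reported right away.
```
127.0.0.1:6379> XADD s 1-1 field value
"1-1"
127.0.0.1:6379> XADD s 2-2 field value
"2-2"
127.0.0.1:6379> XGROUP CREATE s g 0
OK
127.0.0.1:6379> XINFO GROUPS s
1)  1) "name"
    2) "g"
    3) "consumers"
    4) (integer) 0
    5) "pending"
    6) (integer) 0
    7) "last-delivered-id"
    8) "0-0"
    9) "entries-read"
   10) (integer) 0
   11) "lag"
   12) (integer) 2
```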
## The generic unsolved problem
The current stream implementation doesn't provide an efficient way to obtain the
approximate/exact size of a range of entries. While it could've been nice to have
that ability (#5813) in general, let alone specifically in the context of CGs, the risk
and complexities involved in such implementation are in all likelihood prohibitive.
## A refactoring note
The `streamGetEdgeID` has been refactored to accommodate both the existing seek
of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
argument). Furthermore, this refactoring also migrated the seek logic to use the
`streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
`skip_tombstones` Boolean struct field to control the emission of these.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Avoid sprintf/ll2string in setDeferredAggregateLen()/addReplyLongLongWithPrefix() when we can use shared objects.
In some pipelined workloads this achieves about a 10% improvement.
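A self-contained sketch of the shared-header idea (the constant mirrors OBJ_SHARED_BULKHDR_LEN in server.h; the reply sink is mocked with stdout): small `*<n>\r\n` and `$<n>\r\n` headers are formatted once up front, so the hot path skips integer formatting entirely.
```c
#include <stdio.h>

#define OBJ_SHARED_BULKHDR_LEN 32

static char mbulkhdr[OBJ_SHARED_BULKHDR_LEN][8]; /* "*<n>\r\n" headers */
static char bulkhdr[OBJ_SHARED_BULKHDR_LEN][8];  /* "$<n>\r\n" headers */

static void init_shared_headers(void) {
    for (int i = 0; i < OBJ_SHARED_BULKHDR_LEN; i++) {
        snprintf(mbulkhdr[i], sizeof mbulkhdr[i], "*%d\r\n", i);
        snprintf(bulkhdr[i], sizeof bulkhdr[i], "$%d\r\n", i);
    }
}

/* Emit "<prefix><ll>\r\n", reusing a preformatted header when possible. */
static void add_reply_len(char prefix, long long ll) {
    if (ll >= 0 && ll < OBJ_SHARED_BULKHDR_LEN)
        fputs(prefix == '*' ? mbulkhdr[ll] : bulkhdr[ll], stdout);
    else
        printf("%c%lld\r\n", prefix, ll); /* slow path: format on demand */
}

int main(void) {
    init_shared_headers();
    add_reply_len('*', 3);  /* fast path */
    add_reply_len('$', 5);  /* fast path */
    add_reply_len('$', 40); /* slow path */
    return 0;
}
```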
Co-authored-by: Oran Agra <oran@redislabs.com>
In case a HELLO message is received from another sentinel with the same address as another instance registered in the past but with a different runid, there was cumbersome logic that set the instance's port to 0 in order to mark it as invalid and delete it later. But the deletion happened during the update of instances, in such a way that we might end up accessing an instance that was deleted just before.
We didn't find a good reason to postpone the deletion of an obsolete instance (deletion takes place instantly in other cases) -> let's delete it at once.
There was a mixture of the logic of Sentinel address update with the logic of deletion of Sentinels that match a given address -> split it in two!