Commit Graph

358 Commits

Author SHA1 Message Date
Wen Hui
5dc631d880
Add missing test cases for hash commands (#12851)
We don't have a test for HGETALL against a key that does not exist, so added the
test to the test suite, and along with this, added wrong-type cases for other
missing commands.
2023-12-17 14:02:53 +02:00
Binbin
96e9dec419
Bump codespell from 2.2.4 to 2.2.5 (#12557)
and adjustments.
2023-09-08 16:10:17 +03:00
Chen Tianjie
b26e8e3213
Optimize ZRANGE offset location from linear search to skiplist jump. (#12450)
ZRANGE BYSCORE/BYLEX with the [LIMIT offset count] option was
using every level in the skiplist to jump to the first/last node in range,
but only level[0] to locate the node at the offset, resulting
in sub-optimal performance when using LIMIT:
```
while (ln && offset--) {
    if (reverse) {
        ln = ln->backward;            /* step back one node at a time */
    } else {
        ln = ln->level[0].forward;    /* step forward one node at a time */
    }
}
```
This can be slow when the offset is very big. Instead, we can compute the total
rank of the offset location and use the skiplist to jump to it, an improvement
from O(offset) to O(log rank).

The following shows how this is implemented (when the offset is positive):

Use the skiplist to search for the first element in the range and record its
rank `rank_0`, so we can derive the rank of the target node, `rank_t`.
Meanwhile, record the last visited node that has zsl->level-1
levels, and its rank `rank_1`. Then start from that zsl->level-1 node and
use the skiplist to go forward `rank_t-rank_1` nodes to reach the target node.

It is very similar when the offset is reversed.

Note that if `rank_t` is very close to `rank_0`, we just start from the first
element in range and go node by node; this handles the case where the zsl->level-1
node is too far away and it is quicker to reach the target node by node.
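
A minimal sketch of the rank-based jump (an assumption-level simplification that follows the shape of the existing `zslGetElementByRank()`, not the exact code added by this commit):
```c
/* Jump to the node at `rank` using each level's span to skip whole
 * ranges of nodes, instead of stepping through level[0] one by one. */
zskiplistNode *zslNodeByRank(zskiplist *zsl, unsigned long rank) {
    zskiplistNode *x = zsl->header;
    unsigned long traversed = 0;
    for (int i = zsl->level - 1; i >= 0; i--) {
        while (x->level[i].forward &&
               traversed + x->level[i].span <= rank)
        {
            traversed += x->level[i].span;  /* skip span nodes in one hop */
            x = x->level[i].forward;
        }
        if (traversed == rank) return x;
    }
    return NULL;
}
```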

Here is a test using a randomly generated zset of 10000 elements
(with different positive scores), running a benchmark that compares how
fast the `ZRANGE` command executes before and after the optimization.

The start score is set to 0 and the count is set to 1 to make sure that
most of the time is spent on locating the offset.
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test 0 +inf byscore limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 73386.02 | 74819.82 |
| 1000 | 48084.96 | 73177.73 |
| 2000 | 31156.79 | 72805.83 |
| 5000 | 10954.83 | 71218.21 |

From the results above, we can see that the original code slows down
greatly as the offset gets bigger, while with the optimization the
speed is almost unaffected.

Similar results are generated when testing reversed offsets:
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test +inf 0 byscore rev limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 74505.14 | 71653.67 |
| 1000 | 46829.25 | 72842.75 |
| 2000 | 28985.48 | 73669.01 |
| 5000 | 11066.22 | 73963.45 | 

And the same conclusion is drawn from the tests of ZRANGE BYLEX.
2023-08-31 14:42:08 +03:00
Harkrishn Patro
42985b00ea
Test coverage for incr/decr operation on robj encoding type optimization (#12435)
Additional test coverage for incr/decr operation.

An integer value could be present in raw encoding format due to an operation like APPEND. An INCR/DECR operation following it optimizes the string to the int encoding format.
2023-07-25 16:43:31 -07:00
Harkrishn Patro
34b95f752c
Add test case for APPEND command usage on integer value (#12429)
Add test coverage to validate the object encoding update on APPEND command usage on an integer value
2023-07-24 18:25:50 -07:00
Binbin
6b57241fa8
Revert zrangeGenericCommand negative offset check (#12377)
The negative offset check was added in #9052; we realized
that this is a non-mandatory breaking change and we would
like to add it only in 8.0.

This reverts PR #9052, will be re-introduced later in 8.0.
2023-07-02 20:38:30 +03:00
Binbin
d9c2ef8a72
zrangeGenericCommand add check for negative offset (#9052)
Now we check the offset in zrangeGenericCommand.
With a negative offset, we throw an error and return.

This also resolves the issue of zeroing the destination key
in case of the "store" variant when a negative offset is given
(a sketch of the check follows the example below).
```
127.0.0.1:6379> set key value
OK
127.0.0.1:6379> zrangestore key myzset 0 10 byscore limit -1 10
(integer) 0
127.0.0.1:6379> exists key
(integer) 0
```
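
A hedged sketch of the added validation (assumption: simplified; the real check lives in zrangeGenericCommand() and runs before any range processing or destination-key handling; the error text is hypothetical):
```c
if (offset < 0) {
    addReplyError(c, "negative offset is not supported");  /* hypothetical message */
    return;
}
```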

This change affects the following commands:
- ZRANGE / ZRANGESTORE / ZRANGEBYLEX / ZRANGEBYSCORE
- ZREVRANGE / ZREVRANGEBYSCORE / ZREVRANGEBYLEX
2023-06-20 14:33:17 +03:00
Binbin
d306d86146
Fix ZRANK/ZREVRANK reply_schema description (#12331)
The parameter name is WITHSCORE instead of WITHSCORES.
2023-06-20 11:15:40 +03:00
sundb
b00a235186
Use Reservoir Sampling for random sampling of dict, and fix hang during fork (#12276)
## Issue:
When a dict has a long chain, or the length of the chain is longer than
the number of samples, we will never be able to sample the elements
at the end of the chain using dictGetSomeKeys().
This could mean that SRANDMEMBER hangs in an endless loop.
The most severe case is the pathological one where someone uses SCAN+DEL
or SSCAN+SREM, creating an unevenly distributed dict.
This was amplified by the recent change in #11692, which prevented
down-sizing rehashing while there is a fork.

## Solution
1. Before, we would stop sampling when we reached the maximum number
  of samples, even if there was more data after the current chain.
  Now when we reach the maximum we use the Reservoir Sampling
  algorithm to fairly sample the end of the chain that could not
  otherwise be sampled (see the sketch below).
2. Fix the rehashing code so that, just as it allows rehashing for up-sizing
  during a fork when the ratio is extreme, it allows it for down-sizing as well.
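
A minimal reservoir-sampling sketch (an assumption-level simplification, not the actual dictGetSomeKeys() code), keeping a uniform sample of `count` entries while walking a chain whose length may exceed `count`:
```c
#include <stdlib.h>

/* Classic algorithm R: every entry of the chain ends up in the
 * output with equal probability count/chain_len. */
void reservoir_sample(void **chain, int chain_len, void **out, int count) {
    for (int seen = 0; seen < chain_len; seen++) {
        if (seen < count) {
            out[seen] = chain[seen];        /* fill the reservoir first */
        } else {
            int j = rand() % (seen + 1);    /* uniform in [0, seen] */
            if (j < count) out[j] = chain[seen];
        }
    }
}
```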

Issue was introduced (or became more severe) by #11692

Co-authored-by: Oran Agra <oran@redislabs.com>
2023-06-16 19:13:07 +03:00
Binbin
439b0315c8
Fix SPOP/RESTORE propagation when doing lazy free (#12320)
In SPOP, when COUNT is greater than or equal to the set's size,
we remove the set. In dbDelete, we do DEL or UNLINK
according to the lazy flag. The same choice is also required when propagating.

In RESTORE, we won't store expired keys into the db, see #7472.
When used together with REPLACE, it should emit a DEL or UNLINK
according to the lazy flag, as sketched below.
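
A hedged sketch of the propagation choice (assumption: simplified; the real rewrites live in the SPOP and RESTORE implementations):
```c
/* Mirror what dbDelete() does locally: propagate UNLINK when lazy
 * freeing of user deletes is enabled, DEL otherwise. */
robj *aux = server.lazyfree_lazy_user_del ? shared.unlink : shared.del;
rewriteClientCommandVector(c, 2, aux, key);
```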

This PR also adds tests to cover the propagation. The RESTORE
test will also cover #7472.
2023-06-16 08:14:11 -07:00
Binbin
9dc6f93e9d
Add execution order test for when an unblocked command causes another command to get unblocked (#12324)
* Add execution order test for when an unblocked command causes another command to get unblocked

In #12301, we observed that if the
`while(listLength(server.ready_keys) != 0)`
in handleClientsBlockedOnKeys is changed to
`if(listLength(server.ready_keys) != 0)`,
the order of command execution will change.

It is wrong to change that. It means that if a command
being unblocked causes another command to get unblocked
(like a BLMOVE would do), then the newly unblocked command
will wait to get processed later rather than right away.

Changing it would have no real implication, since we call
handleClientsBlockedOnKeys again in beforeSleep, and Redis
would still behave correctly, but we don't change it.
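
A sketch of the loop shape (assumption: simplified from blocked.c):
```c
/* The `while` matters: serving one blocked client (e.g. a BLMOVE) may
 * push to another key and refill server.ready_keys within the same
 * event; those newly readied clients must be served right away. */
while (listLength(server.ready_keys) != 0) {
    list *l = server.ready_keys;
    /* Fresh list, so signalKeyAsReady() calls made while serving
     * clients land in the next iteration of this loop. */
    server.ready_keys = listCreate();
    /* ... serve all clients blocked on each key in l ... */
    listRelease(l);
}
```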

An example:
1. $rd1 blmove src{t} dst{t} left right 0
2. $rd2 blmove dst{t} src{t} right left 0
3. $rd3 set key1{t}, $rd3 lpush src{t}, $rd3 set key2{t} in a pipeline

The correct order would be:
1. set key1{t}
2. lpush src{t}
3. lmove src{t} dst{t} left right
4. lmove dst{t} src{t} right left
5. set key2{t}

The wrong order would be:
1. set key1{t}
2. lpush src{t}
3. lmove src{t} dst{t} left right
4. set key2{t}
5. lmove dst{t} src{t} right left

This PR adds corresponding test to cover it.

* Add comment near while(listLength(server.ready_keys) != 0)
2023-06-16 15:39:00 +03:00
Wen Hui
1946083410
Removing the duplicate test case (#12310)
Looks like the ZADD test case was copied to create the ZINCRBY test case, but the command was not changed.
2023-06-14 10:03:33 +03:00
Binbin
e7129e43e0
Fix XREADGROUP BLOCK stuck in endless loop (#12301)
For the XREADGROUP BLOCK > scenario, there was an endless loop.
Due to #11012, it kept going: reprocess command -> blockForKeys -> reprocess command

The right fix is to avoid an endless loop in handleClientsBlockedOnKey and handleClientsBlockedOnKeys.
It looks like there was some attempt in handleClientsBlockedOnKeys, but it may not have been sufficient,
and using a similar trick in handleClientsBlockedOnKey is complicated:
stashing the list on the stack and iterating on it after creating a fresh one for future use
is problematic, since the code keeps accessing the global list.

Co-authored-by: Oran Agra <oran@redislabs.com>
2023-06-13 13:27:05 +03:00
Oran Agra
2764dc3768
Optimize MSETNX to avoid double lookup (#11944)
This is a redo of #11594, which got reverted in #11940.
It improves performance by avoiding a double lookup of the key.
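
A hedged, illustrative sketch of the single-lookup idea (not the exact change; MSETNX must first verify that none of the keys exist, then add them without a second lookup):
```c
/* Pass 1: any existing key aborts the whole MSETNX. */
for (int j = 1; j < c->argc; j += 2) {
    if (lookupKeyWrite(c->db, c->argv[j]) != NULL) {
        addReply(c, shared.czero);
        return;
    }
}
/* Pass 2: the keys are known to be absent, so add them directly
 * instead of looking each one up a second time. */
for (int j = 1; j < c->argc; j += 2)
    dbAdd(c->db, c->argv[j], c->argv[j+1]);
addReply(c, shared.cone);
```
Note that passing the same key twice breaks pass 2's assumption; that is the assertion addressed by the CVE-2023-28425 entry below.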
2023-05-28 10:58:29 +03:00
Wen Hui
1a188e4ed6
[BUG] Incorrect error msg for XREAD command (#12238)
XREAD only supports a special ID of $ and XREADGROUP only supports >.
Make sure not to suggest the wrong one when returning an error about unbalanced ID arguments.

Co-authored-by: Oran Agra <oran@redislabs.com>
2023-05-28 08:37:32 +03:00
Wen Hui
d664889992
Adding test case for hvals, hkeys, hexists against wrong type (#12198)
Wrong-type test cases for the HVALS, HKEYS and HEXISTS commands were not covered, so the test cases were added.
2023-05-24 09:34:13 +03:00
Binbin
48757934ff
Performance improvement to ZADD and ZRANGESTORE, convert to skiplist and expand dict in advance (#12185)
For zsets that will eventually be stored with the skiplist encoding (which has a dict),
we can convert them to skiplist ahead of time. This change checks the number
of arguments in the ZADD command, and converts the data structure
if the number of new entries exceeds the listpack-max-entries configuration
(sketched below).
This can cause us to over-allocate memory if there are duplicate entries in the
input, which is unexpected.
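
A hedged sketch of the up-front conversion (assumption: simplified):
```c
/* If the new elements alone exceed the listpack limit, convert to
 * skiplist once, up front, rather than converting mid-insertion. */
if (zobj->encoding == OBJ_ENCODING_LISTPACK &&
    elements > server.zset_max_listpack_entries)
{
    zsetConvert(zobj, OBJ_ENCODING_SKIPLIST);
}
```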

For ZRANGESTORE, we know the size of the zset, so we can expand
the dict in advance, to avoid the temporary dict being rehashed
while it grows.

Simple benchmarks show roughly a 4% improvement in ZADD and 20% in ZRANGESTORE
2023-05-18 15:24:46 +03:00
Binbin
fd566f4050
Fix for set max entries edge case in setTypeCreate / setTypeMaybeConvert (#12183)
In the size judgment in setTypeCreate, we should compare size_hint <= max_entries (a sketch of the fix follows the example below).

This results in the following inconsistencies:
```
127.0.0.1:6379> config set set-max-intset-entries 5 set-max-listpack-entries 5
OK

127.0.0.1:6379> sadd intset_set1 1 2 3 4 5
(integer) 5
127.0.0.1:6379> object encoding intset_set1
"hashtable"
127.0.0.1:6379> sadd intset_set2 1 2 3 4
(integer) 4
127.0.0.1:6379> sadd intset_set2 5
(integer) 1
127.0.0.1:6379> object encoding intset_set2
"intset"

127.0.0.1:6379> sadd listpack_set1 a 1 2 3 4
(integer) 5
127.0.0.1:6379> object encoding listpack_set1
"hashtable"
127.0.0.1:6379> sadd listpack_set2 a 1 2 3
(integer) 4
127.0.0.1:6379> sadd listpack_set2 4
(integer) 1
127.0.0.1:6379> object encoding listpack_set2
"listpack"
```
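
A hedged sketch of the fix (assumption: simplified from setTypeCreate() / setTypeMaybeConvert()):
```c
/* The size hint must use an inclusive comparison, so a hint exactly
 * equal to the configured maximum keeps the compact encoding, matching
 * what incremental SADD calls produce. */
if (size_hint <= server.set_max_intset_entries)  /* was: < */
    return createIntsetObject();
```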

This was introduced in #12019; corresponding tests were added.
2023-05-16 11:32:21 -07:00
Wen Hui
e45272884e
Adding missing test case for smembers scard commands (#12148)
Minor missing test case addition.

SMEMBERS / SCARD against a non-set key
SMEMBERS / SCARD against a non-existing key
2023-05-16 09:26:49 -07:00
JJ Lu
11cf5cbdcc
Fix bug: LPOS RANK LONG_MIN causes overflow (#12167)
Limit the range of RANK to -LONG_MAX ~ LONG_MAX.
Without this limit, passing -9223372036854775808 (LONG_MIN) would effectively
be the same as passing -1, as sketched below.
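
A hedged sketch of the guard (assumption: simplified; the error text is hypothetical):
```c
/* -LONG_MIN overflows: in two's complement it wraps back to LONG_MIN,
 * so the sign flip used for reverse ranks would misbehave; reject it. */
if (rank == LONG_MIN) {
    addReplyError(c, "rank would overflow");  /* hypothetical message */
    return;
}
```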
2023-05-14 09:04:33 +03:00
Wen Hui
42dd98ec19
Adding missing test cases for GET and GETEX (#12125)
Adding test cases for GET and GETEX on an expired or non-existing key,
for better test coverage.
2023-05-07 11:46:11 +03:00
Wen Hui
f32d1817e3
Updating missing test cases for Hash commands (#12116)
Adding missing test case against wrong type for HRANDFIELD HGET HGETALL HDEL HINCRBY HINCRBYFLOAT HSTRLEN.
2023-05-01 21:00:07 +03:00
Wen Hui
091412cf62
add test cases for decr decrby on missing key (#12070)
Minor test case addition for DECR and DECRBY.

Currently DECR and DECRBY do not have test case coverage for the
scenarios where they run on a non-existing key.
2023-04-19 09:55:56 +03:00
Wen Hui
d2db4aa753
Added getrange missing testcase (#12061)
Minor test case addition.
Currently the GETRANGE command does not have test case coverage for the scenario where
an error is returned when the key exists but holds a different type.
Added the missing test cases for the GETRANGE command.
2023-04-18 08:32:10 +03:00
Wen Hui
4375b01cc7
Adding missing test cases for substring (#12039)
There are some missing test cases for the SUBSTR command.
These might already be covered by GETRANGE, but no harm in adding them since they're simple.

Added 3 test cases:

* start > stop
* start and stop both greater than string length
* when no key is present.
2023-04-13 21:48:26 +03:00
Wen Hui
bc82309ceb
Adding missing test cases for linsert command (#12040)
Currently the LINSERT command does not have test case coverage for the following scenarios:
1. When the key does not exist, it is considered an empty list and no operation is performed.
2. An error is returned when the key exists but does not hold a list value.

Added the above two missing test cases for the LINSERT command.
2023-04-13 19:05:41 +03:00
Binbin
45b8eea19f
Add ZREMRANGEBYLEX basics tests to fix reply-schemas daily (#12021)
We do have ZREMRANGEBYLEX tests, but it is a stress test
marked with the slow tag and therefore skipped in the reply-schemas daily.

In the past, the daily was able to succeed; I guess
that was because there were some random command executions,
such as corrupt-dump-fuzzy, which might call it.

These test examples are taken from the ZRANGEBYLEX basics test.
2023-04-11 11:14:16 +03:00
Oran Agra
48e0d47884
Avoid assertion when MSETNX is used with the same key twice (CVE-2023-28425) (#11940)
Using the same key twice in the MSETNX command would trigger an assertion.

This reverts #11594 (introduced in Redis 7.0.8)
2023-03-20 18:50:44 +02:00
Binbin
0b159b34ea
Bump codespell to 2.2.4, fix typos and outdated comments (#11911)
Fix some seen typos and wrong comments.
2023-03-16 08:50:32 +02:00
guybe7
4ba47d2d21
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845.
Since ironing out the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile, the changes of this PR are internal; they are part of the repo, but do not affect the produced build.

### Background
In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.

### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
  languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
  testsuite, see the "Testing" section)

### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier).
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.

Example for `BZPOPMIN`:
```
"reply_schema": {
    "oneOf": [
        {
            "description": "Timeout reached and no elements were popped.",
            "type": "null"
        },
        {
            "description": "The keyname, popped member, and its score.",
            "type": "array",
            "minItems": 3,
            "maxItems": 3,
            "items": [
                {
                    "description": "Keyname",
                    "type": "string"
                },
                {
                    "description": "Member",
                    "type": "string"
                },
                {
                    "description": "Score",
                    "type": "number"
                }
            ]
        }
    ]
}
```

#### Notes
1. It is OK that some commands' reply structure depends on the arguments and it's the caller's responsibility
  to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI,
  where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
  schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
  including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
  is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
  the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
  array, for example)

Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
  (given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
  `union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
3. We can create a fuzzer for RESP3.

### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.

Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
  case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
  compares (member,score) tuples and not just the member name. 

### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`).
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
1. Goes over the req-res files, generated by the redis-servers spawned by the testsuite (see logreqres.c)
2. For each request-response pair, it validates the response against the request's reply_schema
  (obtained from the extended COMMAND DOCS)
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
  the existing redis test suite, rather than attempt to write a fuzzer.

#### Notes about RESP2
1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
  accept RESP3 as the future RESP)
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
  so that we can validate it, we will need to know how to convert the actual reply to the one expected.
   - number and boolean are always strings in RESP2 so the conversion is easy
   - objects (maps) are always a flat array in RESP2
   - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
     handling (so the client will not be totally auto-generated)

Example for ZRANGE:
```
"reply_schema": {
    "anyOf": [
        {
            "description": "A list of member elements",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "string"
            }
        },
        {
            "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
            "notes": "In RESP2 this is returned as a flat array",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "array",
                "minItems": 2,
                "maxItems": 2,
                "items": [
                    {
                        "description": "Member",
                        "type": "string"
                    },
                    {
                        "description": "Score",
                        "type": "number"
                    }
                ]
            }
        }
    ]
}
```

### Other changes
1. Some tests that behave differently depending on the RESP version are now being tested for both RESP
  versions, regardless of the special log-req-res mode ("Pub/Sub PING" for example)
2. Update the history field of CLIENT LIST
3. Added basic tests for commands that were not covered at all by the testsuite

### TODO

- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
  when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
  is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
  of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
  fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
2023-03-11 10:14:16 +02:00
sundb
3fba3ccd96
Skip test for sdsRemoveFreeSpace when mem_allocator is not jemalloc (#11878)
Test `trim on SET with big value` (introduced in #11817) fails under mac M1 with the libc mem_allocator.
The reason is that malloc(33000) will allocate 65536 bytes (> 42000).
This test still passes under Ubuntu with the libc mem_allocator.

```
*** [err]: trim on SET with big value in tests/unit/type/string.tcl
Expected [r memory usage key] < 42000 (context: type source line 471 file /Users/iospack/data/redis_fork/tests/unit/type/string.tcl cmd {assert {[r memory usage key] < 42000}} proc ::test)
```

A simple test under mac M1 with the libc mem_allocator:
```c
void *p = zmalloc(33000);
printf("malloc size: %zu\n", zmalloc_size(p));

/* output: malloc size: 65536 */
```
2023-03-07 09:06:58 +02:00
Oran Agra
b1939b052a
Integer Overflow in RAND commands can lead to assertion (CVE-2023-25155) (#11857)
The issue happens when passing a negative long value whose absolute value is greater
than the max positive value that a long can store.
2023-02-28 15:15:46 +02:00
Binbin
7d5382c0ff
Remove wrong code in list pop timeout test (#11805)
In #9373, we actually needed to replace `$rd $pop blist1{t} blist2{t} 1`
with `bpop_command_two_key $rd $pop blist1{t} blist2{t} 1`, but forgot
to delete the old line.

This doesn't affect the test, because the later assert_error "WRONGTYPE"
is expected (and right). But if we read $rd again, it will get the
wrong result, like 'ERR unknown command 'BLMPOP_LEFT' | 'BLMPOP_RIGHT'.
2023-02-15 07:46:56 +02:00
guybe7
9483ab0b8e
Minor changes around the blockonkeys test module (#11803)
All of the POP commands must not decr length below 0.
So, get_fsl will delete the key if the length is 0 (unless
the caller wished to create if doesn't exist)

Other:
1. Use REDISMODULE_WRITE where needed (POP commands)
2. Use wait_for_blocked_clients in test

Unrelated:
Use quotes instead of curly braces in zset.tcl, for variable expansion
2023-02-14 20:06:30 +02:00
uriyage
46393f9819
Optimization: sdsRemoveFreeSpace to avoid realloc on noop (#11766)
In #7875 (Redis 6.2), we changed the sds alloc to be the usable allocation
size in order to:

> reduce the need for realloc calls by making the sds implicitly take over
the internal fragmentation

This change was done for most sds functions, excluding `sdsRemoveFreeSpace` and
`sdsResize`. The reason is that in some places (e.g. clientsCronResizeQueryBuffer)
we call sdsRemoveFreeSpace when we see excessive free space and want to trim it,
so if we don't trim it exactly to size, the caller may still see excessive free space and
call it again and again.

However, this resulted in some excessive calls to realloc, even when there's no need
and it's going to be a no-op (e.g. when reducing a 15-byte allocation to 13).

It turns out that a call to realloc with jemalloc can be expensive even if it ends up
doing nothing, so this PR adds a check using `je_nallocx`, which is cheap, to avoid
the call to realloc (sketched below).
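
A hedged sketch of the check (assumption: simplified, not the actual sds.c code):
```c
#include <jemalloc/jemalloc.h>
#include <stdlib.h>

/* je_nallocx(size, 0) cheaply reports the real allocation size jemalloc
 * would use for `size`: if it matches the current allocation, realloc()
 * would be a no-op, so skip the expensive call entirely. */
void *shrink_if_useful(void *buf, size_t cur_alloc, size_t newlen) {
    if (je_nallocx(newlen, 0) == cur_alloc)
        return buf;               /* same size class: keep the buffer */
    return realloc(buf, newlen);  /* actually trims the allocation */
}
```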

In addition, this PR unifies sdsResize and sdsRemoveFreeSpace into common
code. The difference between them was that sdsResize would avoid using SDS_TYPE_5,
since it wants to keep the string ready to be resized again, while sdsRemoveFreeSpace
would permit using SDS_TYPE_5 and get optimal memory consumption.
Now both methods take a `would_regrow` argument that makes this more explicit.

The only actual impact is that in clientsCronResizeQueryBuffer we call both sdsResize
and sdsRemoveFreeSpace in different cases, and we now prevent the use of SDS_TYPE_5 in both.

The new test that was added to cover this concern used to pass before this PR as well;
this PR is just a performance optimization and cleanup.

Benchmark:
`redis-benchmark -c 100 -t set  -d 512 -P 10  -n  100000000`
on an i7-9850H with jemalloc, shows an improvement from 1021k ops/sec to 1067k (average of 3 runs),
some 4.5% improvement.

Co-authored-by: Oran Agra <oran@redislabs.com>
2023-01-31 17:26:35 +02:00
Oran Agra
16f408b1a0
Fix range issues in ZRANDMEMBER and HRANDFIELD (CVE-2023-22458) (#11674)
A missing range check in ZRANDMEMBER and HRANDFIELD could lead to a panic due
to protocol limitations.
2023-01-16 13:50:27 +02:00
Oran Agra
1ec82e6e97
Avoid integer overflows in SETRANGE and SORT (CVE-2022-35977) (#11720)
Authenticated users issuing specially crafted SETRANGE and SORT(_RO)
commands can trigger an integer overflow, resulting with Redis attempting
to allocate impossible amounts of memory and abort with an OOM panic.
2023-01-16 13:49:30 +02:00
Gabi Ganam
eef29b68a2
Blocking command with a 0.001 seconds timeout blocks indefinitely (#11688)
Any value in the range [0, 1) turns into 0 when cast from double to long long. This change rounds up instead of down for values that can't be stored precisely as long doubles, as sketched below.
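
A hedged sketch of the rounding fix (assumption: illustrative only, not the exact code):
```c
#include <math.h>

/* Convert a seconds timeout to milliseconds, rounding up so that
 * 0.001s becomes 1ms instead of truncating to 0, which Redis
 * interprets as "block forever". */
long long timeout_to_ms(double timeout_seconds) {
    return (long long)ceil(timeout_seconds * 1000.0);
}
```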
2023-01-08 01:02:48 -08:00
ranshid
383d902ce6
reprocess command when client is unblocked on keys (#11012)
*TL;DR*
---------------------------------------
Following the discussion over the issue [#7551](https://github.com/redis/redis/issues/7551)
We decided to refactor the client blocking code to eliminate some of the code duplications
and to rebuild the infrastructure better for future key blocking cases.


*In this PR*
---------------------------------------
1. reprocess the command once a client becomes unblocked on a key (instead of running
   custom code for the unblocked path that's different from the one that would have run if
   blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
   details in the notes section below)
4. replace '$' in the client argv with the current stream ID, since once we reprocess the stream
   XREAD we need to read from the last msg and not wait for a new msg, in order to prevent an
   endless block loop.
5. Added statistics to the info "Clients" section to report:
   * `total_blocking_keys` - number of blocking keys
   * `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client
      that would like to be unblocked when the key is deleted.
6. Avoid expiring unblocked key during unblock. Previously we used to lookup the unblocked key
   which might have been expired during the lookup. Now we lookup the key using NOTOUCH and
   NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. Deprecated command flags. We decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
   and make an explicit verification in the call() function in order to decide whether a stats update should take place.
   This should simplify the logic and also mitigate existing issues: for example, module calls which are
   triggered as part of AOF loading might still report stats even though they are called during AOF loading.

*Behavior changes*
---------------------------------------------------

1. As this implementation avoids writing dedicated code for handling unblocked streams/lists/zsets,
since we now re-process the command once the client is unblocked, some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
   - The stream key has been deleted (i.e. by calling DEL)
   - The stream and group existed but the key type was changed by overriding it (i.e. with the SET command)
   - The key no longer exists after a SWAPDB with a db which does not contain this key
   - After SWAPDB, when the new db has this key but with a different type.
   
In the new implementation the reported errors will be the same as if the command was processed after the effect:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.

2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might have changed since the command was originally executed, and
it will fail once the client is unblocked.
Another example is the OOM condition check, which might allow the command to run and
block, but fail the command reprocessing once the client is unblocked.

3. One of the changes in this PR is that no command stats are updated while the
command is blocked (all stats are updated once the client is unblocked). This implies
that when many clients are blocked, users will no longer be able to get that information
from the command stats. However, the information can still be gathered from the client list.

**Client blocking**
---------------------------------------------------

The blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same as it does today:

*  add the client to the list of blocked clients on each key
*  keep the key with a matching list node (position in the global blocking clients list for that key)
   in the client's private blocking key dict.
*  flag the client with CLIENT_BLOCKED
*  update blocking statistics
*  register the client on the timeout table

**Key Unblock**
---------------------------------------------------

Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is to keep the signal operation as short as possible, since it is called during command processing.
The main change is that instead of going through a dedicated code path that operates on the blocked command,
we will just call processPendingCommandsAndResetClient.

**ClientUnblock (keys)**
---------------------------------------------------

1. Unblocking clients on keys will be triggered after the command is
   processed and during beforeSleep.
2. The general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
    in case either:
        1. *k* exists AND the *k* type matches the current client blocking type
           OR
        2. *k* exists and *c* is blocked on a module command
           OR
        3. *k* does not exist and *c* was blocked with the flag
           unblock_on_deleted_key
    do:
        1. remove the client from the list of clients blocked on this key
        2. remove the blocking list node from the client blocking key dict
        3. remove the client from the timeout list
        4. queue the client on the unblocked_clients list
        5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle,
which will queue the client for processing in the moduleUnblockedClients list.

**Process Unblocked clients**
---------------------------------------------------

The processing of all unblocked clients is done in beforeSleep, and no change is planned
in that part.

The general schema will be:
For each client *c* in server.unblocked_clients:

        * remove client from the server.unblocked_clients
        * set back the client readHandler
        * continue processing the pending command and input buffer.

*Some notes regarding the new implementation*
---------------------------------------------------

1. Although it was proposed, it is currently difficult to remove the
   read handler from the client while it is blocked.
   The reason is that a blocked client should be unblocked when it is
   disconnected, or we might consume data into the void.

2. While this PR mainly keeps the current blocking logic as-is, there
   might be some future additions to the infrastructure that we would
   like to have:
   - allow non-preemptive blocking of a client - sometimes a new kind of
     blocking can be expected not to be preempted. For example, let's
     imagine we hold some keys on disk, and when a command needs to
     process them it will block until the keys are uploaded.
     In this case we will want the client to not disconnect or be
     unblocked until the process is completed (remove the client read
     handler, prevent client timeout, disable unblock via debug command, etc...).
   - allow generic blocking based on command-declared keys - we might
     want to add a hook before command processing to check if any of the
     declared keys require the command to block. This way it would be
     easier to add new kinds of key-based blocking mechanisms.

Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
2023-01-01 23:35:42 +02:00
Oran Agra
669688a342
fix race in list test with blocking commands (#11627)
I've seen the `BRPOPLPUSH with multiple blocked clients` test hang.
This probably happened because rd2 blocked before rd1 and then was
also released first, while rd1 remained blocked.

```
        r del blist{t} target1{t} target2{t}
        r set target1{t} nolist
        $rd1 brpoplpush blist{t} target1{t} 0
        $rd2 brpoplpush blist{t} target2{t} 0
        r lpush blist{t} foo

        assert_error "WRONGTYPE*" {$rd1 read}
        assert_equal {foo} [$rd2 read]
        assert_equal {foo} [r lrange target2{t} 0 -1]
```
changes:
* added all missing calls for wait_for_blocked_client after issuing blocking commands
* removed some excessive `after 100` calls
* fixed an undetected crossslot error in the BRPOPLPUSH test
* rolled back changes to proto-max-bulk-len so external tests can be rerun
2022-12-18 17:14:14 +02:00
Binbin
20854cb610
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for the SET object.
But we forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, and ZDIFFSTORE to crash.
And we forgot to support it in RM_ScanKey, causing it to hang.

This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.

Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
  It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h reduce the chance of
  accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
  interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
  to handle OBJ_SET scan.
- In RM_ScanKey, improve the hash scan mode: use lpGetValue like zset,
  so they can share code and get better performance.

The zuiFind part fixes #11578

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 17:08:01 +02:00
Binbin
79fe450ebc
Regenerate payloads for cgroups tests using string2printable (#11560)
The test failed with "ERR DUMP payload version or checksum are wrong".
It only fails on CentOS; this is due to the fact that tcl8.5 does not correctly
parse the hexadecimal abbreviation, while on Ubuntu we are using tcl8.6.
2022-12-01 09:11:33 +02:00
guybe7
72e90695ec
Stream consumers: Re-purpose seen-time, add active-time (#11099)
1. "Fixed" the current code so that seen-time/idle actually refers to interaction
  attempts (as documented; breaking change)
2. Added active-time/inactive to refer to successful interaction (what
  seen-time/idle used to be)

At first, I tried to avoid changing the behavior of seen-time/idle but then realized
that, in this case, the odds are that people read the docs and implemented their
code based on the docs (which didn't match the behavior).
For the most part, that would work fine, except that issue #9996 was found.

I was working under the assumption that people relied on the docs, and for
the most part, it could have worked well enough. so instead of fixing the docs,
as I would usually do, I fixed the code to match the docs in this particular case.

Note that, in case the consumer has never read any entries, the values
for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
be -1, meaning here that the consumer was never active.

Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
by XPENDING, XINFO, and other "read-only" stream CG commands (always has been,
even before this PR)

Other changes:
* Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
  create the consumer regardless of whether it was able to perform some reading/claiming
* RDB format change to save the `active_time`, and set it to the same value of `seen_time` in old rdb files.
2022-11-30 14:21:31 +02:00
C Charles
eeca7f2911
Add withscore option to ZRANK and ZREVRANK. (#11235)
Add an option "withscores" to ZRANK and ZREVRANK.

Add `[withscore]` option to both `zrank` and `zrevrank`, like this:
```
z[rev]rank key member [withscore]
```
2022-11-28 11:57:11 +02:00
Binbin
543e0daa63
Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487)
This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip
the `OBJECT REFCOUNT` check with the `needs:debug` flag. Use them to modify all
`OBJECT REFCOUNT` calls, and also update tests/README to be more specific.

The reasoning is that some of these tests could be testing something important,
and along the way also check the refcount, and it would be a shame to skip
the whole test just because the refcount functionality is missing or blocked.
Much like the fact that some Redis variants may not support DEBUG,
we still want to run the majority of the test for coverage and just skip the digest match.
2022-11-22 16:38:27 +02:00
sundb
2168ccc661
Add listpack encoding for list (#11303)
Improve memory efficiency of list keys

## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to perform the conversion; we can think of it as a node inside a
quicklist, but without the 80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each now takes 128 bytes
instead of the 208 it used to take.

## Conversion rules
* Convert listpack to quicklist
  When the listpack length or size reaches the `list-max-listpack-size` limit,
  it will be converted to a quicklist.
* Convert quicklist to listpack
  When a quicklist has only one node, and its length or size is reduced to half
  of the `list-max-listpack-size` limit, it will be converted to a listpack.
  This is done to avoid frequent conversions when we add or remove at the bounding size or length.
    
## Interface changes
1. Add a list entry param to listTypeSetIteratorDirection.
    When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry,
    so when changing the direction, we need to use the current node (listTypeEntry->p) to
    update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement

### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results, we can see:
1. When the list is quicklist-encoded, LRANGE improves performance by <5%.
2. When the list is listpack-encoded, LRANGE improves performance by ~13%;
   the main enhancement is brought by `addListListpackRangeReply()`.

## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
show memory usage down by 35.49%, from 214MB to 138MB.

## Note
1. Add a conversion callback to support doing some work before conversion.
    Since the quicklist iterator decompresses the current node when it is released, we can
    no longer decompress the quicklist after we convert the list.
2022-11-16 20:29:46 +02:00
Viktor Söderqvist
4e472a1a7f
Listpack encoding for sets (#11290)
Small sets with not only integer elements are listpack encoded, by default
up to 128 elements, max 64 bytes per element, with the new configs `set-max-listpack-entries`
and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.

Sets with only integers, even very small sets, are still intset encoded (up to 1G
limit, etc.). Larger sets are hashtable encoded.

This PR increments the RDB version, and has an effect on OBJECT ENCODING

Possible conversions when elements are added:

    intset -> listpack
    listpack -> hashtable
    intset -> hashtable

Note: No conversion happens when elements are deleted. If all elements are
deleted and then added again, the set is deleted and recreated, thus implicitly
converted to a smaller encoding.
2022-11-09 19:50:07 +02:00
Wen Hui
7395e370e6
Fix XSETID with max_deleted_entry_id issue (#11444)
Resolve an edge case where the ID of a stream is updated retroactively
to an ID lower than the already set max_deleted_entry_id.

Currently, if we have a command as below:
**xsetid mystream 1-1 MAXDELETEDID 1-2**
Then we will get the following error:
**(error) ERR The ID specified in XSETID is smaller than the provided max_deleted_entry_id**
Because the provided MAXDELETEDID 1-2 is greater than the input last-id: 1-1

Then we could assume there is a similar situation:
step 1: we add three items to the mystream

**127.0.0.1:6381> xadd mystream 1-1 a 1
"1-1"
127.0.0.1:6381> xadd mystream 1-2 b 2
"1-2"
127.0.0.1:6381> xadd mystream 1-3 c 3
"1-3"**

step 2: we can check the mystream information as below:
**127.0.0.1:6381> xinfo stream mystream
 1) "length"
 2) (integer) 3
 7) "last-generated-id"
 8) "1-3"
 9) "max-deleted-entry-id"
10) "0-0"**

step 3: we delete the items with id 1-2 and 1-3 as below:
**127.0.0.1:6381> xdel mystream 1-2
(integer) 1
127.0.0.1:6381> xdel mystream 1-3
(integer) 1**

step 4: we check the mystream information:
**127.0.0.1:6381> xinfo stream mystream
 1) "length"
 2) (integer) 1
 7) "last-generated-id"
 8) "1-3"
 9) "max-deleted-entry-id"
10) "1-3"**

We can notice that the **max-deleted-entry-id was updated to 1-3**, so right now, if we just run:
**xsetid mystream 1-2**
the above command has the same effect as **xsetid mystream 1-2 MAXDELETEDID 1-3**

So we should return an error to the client: **(error) ERR The ID specified in XSETID is smaller than current max_deleted_entry_id**
2022-11-02 16:16:16 +02:00
guybe7
737a090511
Set errno in case XADD with partial ID fails (#11424)
This is a rare failure mode of a new feature of Redis 7 introduced in #9217
(when the incremental part of the ID overflows).
Till now, the outcome of that error was undetermined (it could easily result in
`Elements are too large to be stored` wrongly, due to an unset `errno`).
2022-10-24 18:27:56 +03:00
filipe oliveira
29380ff77d
optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu-algorithm (#10587)
All commands / use cases that heavily rely on double-to-string conversion
(e.g. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
could benefit from a performance boost by swapping snprintf(buf,len,"%.17g",value) for the
equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures
100% coverage of conversion.

This is a well-studied topic, and projects like MongoDB, RedPanda, and PyTorch leverage libraries
(fmtlib) that use an optimized double-to-string conversion underneath.


The positive impact can be substantial. This PR uses the grisu2 approach (grisu is explained in
https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf, section 5).
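
A hedged usage sketch (assumption: the `fpconv_dtoa` name and behavior follow the linked fpconv repo; Redis' actual call sites may differ):
```c
#include <stdio.h>
#include "fpconv.h"  /* from github.com/night-shift/fpconv */

int main(void) {
    char buf[32];
    /* Old hot path: snprintf(buf, sizeof(buf), "%.17g", 1.5); */
    int len = fpconv_dtoa(1.5, buf);  /* writes digits, returns length */
    buf[len] = '\0';                  /* fpconv_dtoa does not NUL-terminate */
    printf("%s\n", buf);              /* -> 1.5 */
    return 0;
}
```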

Test suite changes:
Despite being compatible, in some cases it produces a different result from printf, and some tests
had to be adjusted.
One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000`
instead of 5e+9, which sounds like a bug?
In other cases, we changed TCL to compare numbers instead of strings to ignore minor rounding
issues (`expr 0.8 == 0.79999999999999999`).
2022-10-15 12:17:41 +03:00