start_server {tags {"introspection"}} {
    test "PING" {
        assert_equal {PONG} [r ping]
        assert_equal {redis} [r ping redis]
        assert_error {*wrong number of arguments for 'ping' command} {r ping hello redis}
    }
    test {CLIENT LIST} {
        r client list
    } {id=* addr=*:* laddr=*:* fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|list user=* redir=-1 resp=*}
    test {CLIENT LIST with IDs} {
        set myid [r client id]
        set cl [split [r client list id $myid] "\r\n"]
        assert_match "id=$myid * cmd=client|list *" [lindex $cl 0]
    }
    test {CLIENT INFO} {
        r client info
    } {id=* addr=*:* laddr=*:* fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|info user=* redir=-1 resp=*}
    test {CLIENT KILL with illegal arguments} {
        assert_error "ERR wrong number of arguments for 'client|kill' command" {r client kill}
        assert_error "ERR syntax error*" {r client kill id 10 wrong_arg}

        assert_error "ERR *greater than 0*" {r client kill id str}
        assert_error "ERR *greater than 0*" {r client kill id -1}
        assert_error "ERR *greater than 0*" {r client kill id 0}

        assert_error "ERR Unknown client type*" {r client kill type wrong_type}

        assert_error "ERR No such user*" {r client kill user wrong_user}

        assert_error "ERR syntax error*" {r client kill skipme yes_or_no}
    }
    test {CLIENT KILL SKIPME YES/NO will kill all clients} {
        # Kill all clients except `me`
        set rd1 [redis_deferring_client]
        set rd2 [redis_deferring_client]
        set connected_clients [s connected_clients]
        assert {$connected_clients >= 3}
        set res [r client kill skipme yes]
        assert {$res == $connected_clients - 1}

        # Kill all clients, including `me`
        set rd3 [redis_deferring_client]
        set rd4 [redis_deferring_client]
        set connected_clients [s connected_clients]
        assert {$connected_clients == 3}
        set res [r client kill skipme no]
        assert_equal $res $connected_clients

        # After killing `me`, the first ping will throw an error
        assert_error "*I/O error*" {r ping}
        assert_equal "PONG" [r ping]

        $rd1 close
        $rd2 close
        $rd3 close
        $rd4 close
    }
    test "CLIENT KILL close the client connection during bgsave" {
        # Start a slow bgsave, trigger an active fork.
        r flushall
        r set k v
        r config set rdb-key-save-delay 10000000
        r bgsave
        wait_for_condition 1000 10 {
            [s rdb_bgsave_in_progress] eq 1
        } else {
            fail "bgsave did not start in time"
        }

        # Kill (close) the connection
        r client kill skipme no

        # In the past, client connections needed to wait for bgsave
        # to end before actually closing, now they are closed immediately.
        assert_error "*I/O error*" {r ping} ;# get the error very quickly
        assert_equal "PONG" [r ping]

        # Make sure the bgsave is still in progress
        assert_equal [s rdb_bgsave_in_progress] 1

        # Stop the child before we proceed to the next test
        r config set rdb-key-save-delay 0
        r flushall
        wait_for_condition 1000 10 {
            [s rdb_bgsave_in_progress] eq 0
        } else {
            fail "bgsave did not stop in time"
        }
    } {} {needs:save}
    test "CLIENT REPLY OFF/ON: disable all commands reply" {
        set rd [redis_deferring_client]

        # These replies were silenced.
        $rd client reply off
        $rd ping pong
        $rd ping pong2

        $rd client reply on
        assert_equal {OK} [$rd read]
        $rd ping pong3
        assert_equal {pong3} [$rd read]

        $rd close
    }
    test "CLIENT REPLY SKIP: skip the next command reply" {
        set rd [redis_deferring_client]

        # The first pong reply was silenced.
        $rd client reply skip
        $rd ping pong

        $rd ping pong2
        assert_equal {pong2} [$rd read]

        $rd close
    }
    test "CLIENT REPLY ON: unset SKIP flag" {
        set rd [redis_deferring_client]

        $rd client reply skip
        $rd client reply on
        assert_equal {OK} [$rd read] ;# OK from CLIENT REPLY ON command

        $rd ping
        assert_equal {PONG} [$rd read]

        $rd close
    }
    test {MONITOR can log executed commands} {
        set rd [redis_deferring_client]
        $rd monitor
        assert_match {*OK*} [$rd read]
        r set foo bar
        r get foo
        set res [list [$rd read] [$rd read]]
        $rd close
        set _ $res
    } {*"set" "foo"*"get" "foo"*}
    test {MONITOR can log commands issued by the scripting engine} {
        set rd [redis_deferring_client]
        $rd monitor
        $rd read ;# Discard the OK
        r eval {redis.call('set',KEYS[1],ARGV[1])} 1 foo bar
        assert_match {*eval*} [$rd read]
        assert_match {*lua*"set"*"foo"*"bar"*} [$rd read]
        $rd close
    }
    test {MONITOR can log commands issued by functions} {
        r function load replace {#!lua name=test
            redis.register_function('test', function() return redis.call('set', 'foo', 'bar') end)
        }
        set rd [redis_deferring_client]
        $rd monitor
        $rd read ;# Discard the OK
        r fcall test 0
        assert_match {*fcall*test*} [$rd read]
        assert_match {*lua*"set"*"foo"*"bar"*} [$rd read]
        $rd close
    }
    test {MONITOR supports redacting command arguments} {
        set rd [redis_deferring_client]
        $rd monitor
        $rd read ;# Discard the OK

        r migrate [srv 0 host] [srv 0 port] key 9 5000
        r migrate [srv 0 host] [srv 0 port] key 9 5000 AUTH user
        r migrate [srv 0 host] [srv 0 port] key 9 5000 AUTH2 user password
        catch {r auth not-real} _
        catch {r auth not-real not-a-password} _
|
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
Since ironing the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
### Background
In #9656 we add a lot of information about Redis commands, but we are missing information about the replies
### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
testsuite, see the "Testing" section)
### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier)
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
Example for `BZPOPMIN`:
```
"reply_schema": {
"oneOf": [
{
"description": "Timeout reached and no elements were popped.",
"type": "null"
},
{
"description": "The keyname, popped member, and its score.",
"type": "array",
"minItems": 3,
"maxItems": 3,
"items": [
{
"description": "Keyname",
"type": "string"
},
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
]
}
```
#### Notes
1. It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility
to know which is the relevant one. this comes after looking at other request-reply systems like OpenAPI,
where the reply schema can also be oneOf and the caller is responsible to know which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. even though RESP3 is structured, we want to use reply
schema for documentation (and possibly to create a fuzzer that validates the replies)
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
including any relation to arguments. for example, for `ZRANGE`'s two schemas we will need to state that one
is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
array, for example)
Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
(given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
`union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
3. We can create a fuzzer for RESP3.
### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.
Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. we cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
compares (member,score) tuples and not just the member name.
### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--reg-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`)
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
        assert_match {*"key"*"9"*"5000"*} [$rd read]
        assert_match {*"key"*"9"*"5000"*"(redacted)"*} [$rd read]
        assert_match {*"key"*"9"*"5000"*"(redacted)"*"(redacted)"*} [$rd read]
        assert_match {*"auth"*"(redacted)"*} [$rd read]
        assert_match {*"auth"*"(redacted)"*"(redacted)"*} [$rd read]
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
Since ironing the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
### Background
In #9656 we added a lot of information about Redis commands, but we are missing information about the replies
### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
testsuite, see the "Testing" section)
### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier)
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
Example for `BZPOPMIN`:
```
"reply_schema": {
"oneOf": [
{
"description": "Timeout reached and no elements were popped.",
"type": "null"
},
{
"description": "The keyname, popped member, and its score.",
"type": "array",
"minItems": 3,
"maxItems": 3,
"items": [
{
"description": "Keyname",
"type": "string"
},
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
]
}
```
#### Notes
1. It is ok that some commands' reply structure depends on the arguments, and it's the caller's responsibility
to know which is the relevant one. This follows other request-reply systems like OpenAPI,
where the reply schema can also be a oneOf and the caller is responsible for knowing which schema is relevant.
2. The reply schemas describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
schema for documentation (and possibly to create a fuzzer that validates the replies)
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
array, for example)
Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
(given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
`union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
3. We can create a fuzzer for RESP3.
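Point 2 can be illustrated with a hand-written Python sketch of what a generated, typed client wrapper might look like for `BZPOPMIN` (all names here are illustrative, not output of any real generator):

```python
from typing import Optional, Tuple

# None on timeout; otherwise (keyname, member, score), per the BZPOPMIN
# reply_schema above. A generated client would emit this union type.
BzpopminReply = Optional[Tuple[str, str, float]]

def parse_bzpopmin(raw: object) -> BzpopminReply:
    """Narrow an untyped decoded reply into the union type."""
    if raw is None:
        return None
    key, member, score = raw  # raises if the shape is unexpected
    return (str(key), str(member), float(score))
```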
### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.
Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
compares (member,score) tuples and not just the member name.
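Point 3 can be seen in a couple of lines of Python (a plain illustration, not part of the validator):

```python
# json-schema's uniqueItems compares whole items; for WITHSCORES replies each
# item is a [member, score] pair, so a member repeated with different scores
# still counts as "unique".
reply = [["m1", 6], ["m1", 7]]
assert len(set(map(tuple, reply))) == len(reply)  # pairs distinct: uniqueItems passes
members = [m for m, _ in reply]
assert len(set(members)) < len(members)           # yet member uniqueness is violated
```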
### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`)
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
2. For each request-response pair, it validates the response against the request's reply_schema
(obtained from the extended COMMAND DOCS)
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
the existing redis test suite, rather than attempt to write a fuzzer.
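As a rough sketch of step 2, here is a hand-rolled checker covering only the json-schema keywords used in the examples in this message (the real validator presumably relies on a full json-schema library; `validates` is an illustrative name):

```python
def validates(schema, value):
    """Tiny json-schema subset: type, oneOf/anyOf, items, minItems/maxItems."""
    if "oneOf" in schema:
        return sum(bool(validates(s, value)) for s in schema["oneOf"]) == 1
    if "anyOf" in schema:
        return any(validates(s, value) for s in schema["anyOf"])
    t = schema.get("type")
    if t == "null":
        return value is None
    if t == "string":
        return isinstance(value, str)
    if t == "number":
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if t == "array":
        if not isinstance(value, list):
            return False
        if not (schema.get("minItems", 0) <= len(value)
                <= schema.get("maxItems", len(value))):
            return False
        items = schema.get("items")
        if isinstance(items, list):   # positional (tuple-style) validation
            return all(validates(s, v) for s, v in zip(items, value))
        if isinstance(items, dict):
            return all(validates(items, v) for v in value)
        return True
    return True  # keywords not modelled here are ignored

# The BZPOPMIN reply_schema from above, as a Python literal:
bzpopmin_schema = {"oneOf": [
    {"type": "null"},
    {"type": "array", "minItems": 3, "maxItems": 3,
     "items": [{"type": "string"}, {"type": "string"}, {"type": "number"}]},
]}
```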
#### Notes about RESP2
1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
accept RESP3 as the future RESP)
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
so that we can validate it, we will need to know how to convert the actual reply to the one expected.
- number and boolean are always strings in RESP2 so the conversion is easy
- objects (maps) are always a flat array in RESP2
- others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
handling (so the client will not be totally auto-generated)
Example for ZRANGE:
```
"reply_schema": {
"anyOf": [
{
"description": "A list of member elements",
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
},
{
"description": "Members and their scores. Returned in case `WITHSCORES` was used.",
"notes": "In RESP2 this is returned as a flat array",
"type": "array",
"uniqueItems": true,
"items": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": [
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
}
]
}
```
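The per-command handling mentioned above amounts to a small response transformer in the test suite; a sketch for `ZRANGE ... WITHSCORES` (the function name is illustrative):

```python
def zrange_withscores_resp2_to_resp3(flat):
    """RESP2 returns WITHSCORES as a flat [m1, s1, m2, s2, ...] array of
    strings; RESP3 nests it as [[m1, score1], ...] with numeric scores."""
    assert len(flat) % 2 == 0
    return [[flat[i], float(flat[i + 1])] for i in range(0, len(flat), 2)]
```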
### Other changes
1. Some tests that behave differently depending on the RESP version are now run under both RESP versions,
regardless of the special log-req-res mode ("Pub/Sub PING" for example)
2. Update the history field of CLIENT LIST
3. Added basic tests for commands that were not covered at all by the testsuite
### TODO
- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args, e.g.
when `SET` returns NULL the condition is `arguments.get||arguments.condition`, for `OK` the condition
is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
        foreach resp {3 2} {
            if {[lsearch $::denytags "resp3"] >= 0} {
                if {$resp == 3} {continue}
            } elseif {$::force_resp3} {
                if {$resp == 2} {continue}
            }
            catch {r hello $resp AUTH not-real not-a-password} _
            assert_match "*\"hello\"*\"$resp\"*\"AUTH\"*\"(redacted)\"*\"(redacted)\"*" [$rd read]
        }

        $rd close
    } {0} {needs:repl}

    test {MONITOR correctly handles multi-exec cases} {
        set rd [redis_deferring_client]
        $rd monitor
        $rd read ; # Discard the OK

        # Make sure multi-exec statements are ordered
        # correctly
        r multi
        r set foo bar
        r exec
        assert_match {*"multi"*} [$rd read]
        assert_match {*"set"*"foo"*"bar"*} [$rd read]
        assert_match {*"exec"*} [$rd read]

        # Make sure we close multi statements on errors
        r multi
        catch {r syntax error} _
        catch {r exec} _

        assert_match {*"multi"*} [$rd read]
        assert_match {*"exec"*} [$rd read]

        $rd close
    }

reprocess command when client is unblocked on keys (#11012)
*TL;DR*
---------------------------------------
Following the discussion over the issue [#7551](https://github.com/redis/redis/issues/7551)
We decided to refactor the client blocking code to eliminate some of the code duplications
and to rebuild the infrastructure better for future key blocking cases.
*In this PR*
---------------------------------------
1. Reprocess the command once a client becomes unblocked on a key (instead of running
custom code for the unblocked path that's different from the one that would have run if
blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
details in the notes section below)
4. Replace '$' in the client argv with the current stream id, since once we reprocess the stream
XREAD we need to read from the last message and not wait for a new one, in order to prevent
an endless block loop.
5. Added statistics to the info "Clients" section to report the:
* `total_blocking_keys` - number of blocking keys
* `total_blocking_keys_on_nokey` - number of blocking keys which have at least one client
that would like to be unblocked when the key is deleted.
6. Avoid expiring unblocked key during unblock. Previously we used to lookup the unblocked key
which might have been expired during the lookup. Now we lookup the key using NOTOUCH and
NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. Deprecated command flags. We decided to remove the CMD_CALL_STATS and CMD_CALL_SLOWLOG
and make an explicit verification in the call() function in order to decide if stats update should take place.
This should simplify the logic and also mitigate existing issues: for example module calls which are
triggered as part of AOF loading might still report stats even though they are called during AOF loading.
*Behavior changes*
---------------------------------------------------
1. As this implementation prevents writing dedicated code handling unblocked streams/lists/zsets,
since we now re-process the command once the client is unblocked some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
- The stream key has been deleted (i.e. calling DEL)
- The stream and group existed but the key type was changed by overriding it (i.e. with the set command)
- The key no longer exists after we swapdb with a db which does not contain this key
- After swapdb when the new db has this key but with a different type.
In the new implementation the reported errors will be the same as if the command was processed after the effect:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might change since the command originally was executed and
will fail once the client is unblocked.
Another example is OOM condition checks which might enable the command to run and
block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are being updated once the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when we have many clients blocked, users will no longer be able to get that information
from the command stats. However the information can still be gathered from the client list.
**Client blocking**
---------------------------------------------------
The blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same as it does today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key)
in the client private blocking key dict.
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table
**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is in order to keep the signal operation as short as possible, since it is called during the command processing.
The main change is that instead of going through a dedicated code path that operates the blocked command
we will just call processPendingCommandsAndResetClient.
**ClientUnblock (keys)**
---------------------------------------------------
1. Unblocking clients on keys will be triggered after the command is
processed and during beforeSleep
2. The general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
  in case either:
    1. *k* exists AND the *k* type matches the current client blocking type
    OR
    2. *k* exists and *c* is blocked on a module command
    OR
    3. *k* does not exist and *c* was blocked with the flag
       unblock_on_deleted_key
  do:
    1. remove the client from the list of clients blocked on this key
    2. remove the blocking list node from the client blocking key dict
    3. remove the client from the timeout list
    4. queue the client on the unblocked_clients list
    5. *NEW*: call processCommandAndResetClient(c);
```
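The loop above can be modelled in a few lines of Python (a toy model with assumed names, not Redis' actual C structures):

```python
from collections import defaultdict

blocked_on = defaultdict(list)    # key -> clients blocked on that key
unblocked_clients = []            # drained later, in beforeSleep

def block_for_keys(client, keys):
    """blockForKeys: register the client on every key it waits for."""
    for k in keys:
        blocked_on[k].append(client)
    client["blocking_keys"] = set(keys)

def handle_ready_key(db, key):
    """One readyList entry: unblock eligible clients; their command is
    reprocessed from scratch instead of running a dedicated unblocked path."""
    for client in list(blocked_on[key]):
        if key in db or client.get("unblock_on_deleted_key"):
            blocked_on[key].remove(client)        # 1. drop from key's client list
            client["blocking_keys"].discard(key)  # 2. drop from client's key dict
            unblocked_clients.append(client)      # 4. queue for reprocessing
```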
*NOTE:* for module blocked clients we will still call the moduleUnblockClientByHandle
which will queue the client for processing in moduleUnblockedClients list.
**Process Unblocked clients**
---------------------------------------------------
The process of all unblocked clients is done in the beforeSleep and no change is planned
in that part.
The general schema will be:
For each client *c* in server.unblocked_clients:
* remove client from the server.unblocked_clients
* set back the client readHandler
* continue processing the pending command and input buffer.
*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the
read handler from the client while it is blocked.
The reason is that a blocked client should be unblocked when it is
disconnected, or we might consume data into void.
2. While this PR mainly keep the current blocking logic as-is, there
might be some future additions to the infrastructure that we would
like to have:
- allow non-preemptive blocking of clients - sometimes a new kind of
blocking can be expected not to be preempted. For example, imagine
we hold some keys on disk and when a command needs to process them
it blocks until the keys are loaded. In this case we would want the
client to not disconnect or be unblocked until the process is
completed (remove the client read handler, prevent client timeout,
disable unblock via debug command, etc.).
- allow generic blocking based on command-declared keys - we might
want to add a hook before command processing to check if any of the
declared keys require the command to block. This way it would be
easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>

    test {MONITOR log blocked command only once} {
        # need to reconnect in order to reset the clients state
        reconnect

        set rd [redis_deferring_client]
        set bc [redis_deferring_client]
        r del mylist

        $rd monitor
        $rd read ; # Discard the OK

        $bc blpop mylist 0
        wait_for_blocked_clients_count 1
        r lpush mylist 1
        wait_for_blocked_clients_count 0
        r lpush mylist 2

        # we expect to see the blpop on the monitor first
        assert_match {*"blpop"*"mylist"*"0"*} [$rd read]

        # we scan out all the info commands on the monitor
        set monitor_output [$rd read]
        while { [string match {*"info"*} $monitor_output] } {
            set monitor_output [$rd read]
        }

        # we expect to locate the lpush right when the client was unblocked
        assert_match {*"lpush"*"mylist"*"1"*} $monitor_output

        # we scan out all the info commands
        set monitor_output [$rd read]
        while { [string match {*"info"*} $monitor_output] } {
            set monitor_output [$rd read]
        }

        # we expect to see the next lpush and not duplicate blpop command
        assert_match {*"lpush"*"mylist"*"2"*} $monitor_output

        $rd close
        $bc close
    }

    test {CLIENT GETNAME should return NIL if name is not assigned} {
        r client getname
    } {}

    test {CLIENT LIST shows empty fields for unassigned names} {
        r client list
    } {*name= *}

    test {CLIENT SETNAME does not accept spaces} {
        catch {r client setname "foo bar"} e
        set e
    } {ERR*}

    test {CLIENT SETNAME can assign a name to this connection} {
        assert_equal [r client setname myname] {OK}
        r client list
    } {*name=myname*}

    test {CLIENT SETNAME can change the name of an existing connection} {
        assert_equal [r client setname someothername] {OK}
        r client list
    } {*name=someothername*}

    test {After CLIENT SETNAME, connection can still be closed} {
        set rd [redis_deferring_client]
        $rd client setname foobar
        assert_equal [$rd read] "OK"
        assert_match {*foobar*} [r client list]
        $rd close
        # Now the client should no longer be listed
        wait_for_condition 50 100 {
            [string match {*foobar*} [r client list]] == 0
        } else {
            fail "Client still listed in CLIENT LIST after SETNAME."
        }
    }

    test {CLIENT SETINFO can set a library name to this connection} {
        r CLIENT SETINFO lib-name redis.py
        r CLIENT SETINFO lib-ver 1.2.3
        r client info
    } {*lib-name=redis.py lib-ver=1.2.3*}

    test {CLIENT SETINFO invalid args} {
        assert_error {*wrong number of arguments*} {r CLIENT SETINFO lib-name}
        assert_error {*cannot contain spaces*} {r CLIENT SETINFO lib-name "redis py"}
        assert_error {*newlines*} {r CLIENT SETINFO lib-name "redis.py\n"}
        assert_error {*Unrecognized*} {r CLIENT SETINFO badger hamster}
        # test that all of these didn't affect the previously set values
        r client info
    } {*lib-name=redis.py lib-ver=1.2.3*}

    test {RESET does NOT clean library name} {
        r reset
        r client info
    } {*lib-name=redis.py*} {needs:reset}

    test {CLIENT SETINFO can clear library name} {
        r CLIENT SETINFO lib-name ""
        r client info
    } {*lib-name= *}

    test {CONFIG save params special case handled properly} {
        # No "save" keyword - defaults should apply
        start_server {config "minimal.conf"} {
            assert_match [r config get save] {save {3600 1 300 100 60 10000}}
        }

        # First "save" keyword overrides hard coded defaults
        start_server {config "minimal.conf" overrides {save {100 100}}} {
            # Defaults
            assert_match [r config get save] {save {100 100}}
        }

        # First "save" keyword appends default from config file
        start_server {config "default.conf" overrides {save {900 1}} args {--save 100 100}} {
            assert_match [r config get save] {save {900 1 100 100}}
        }

        # Empty "save" keyword resets all
        start_server {config "default.conf" overrides {save {900 1}} args {--save {}}} {
            assert_match [r config get save] {save {}}
        }
    } {} {external:skip}

    test {CONFIG sanity} {
        # Do CONFIG GET, CONFIG SET and then CONFIG GET again
        # Skip immutable configs, one with no get, and other complicated configs
        set skip_configs {
            rdbchecksum
            daemonize
            io-threads-do-reads
            tcp-backlog
            always-show-logo
            syslog-enabled
            cluster-enabled
            disable-thp
            aclfile
            unixsocket
            pidfile
            syslog-ident
            appendfilename
Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788)
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
* double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files. They are classified into two types: one is the `BASE` type,
which represents the full amount of data (in AOF or RDB format) after each AOFRW; there is at most
one `BASE` file. The second is the `INCR` type, of which there may be more than one; they represent the
incremental commands since the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
6. Remove the `aof_rewrite_buffer_length` field in info.
7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
It also gives users the opportunity to preserve the history AOFs. just for testing use now.
8. Add AOFRW limiting measure. When the AOFRW failures reaches the threshold (3 times now),
we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
9. Support upgrading (loading) data from older redis versions.
10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
`aof-load-truncated` is enabled.
Co-authored-by: Oran Agra <oran@redislabs.com>
            appenddirname
            supervised
            syslog-facility
            databases
            io-threads
            logfile
            unixsocketperm
            replicaof
            slaveof
            requirepass
Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.
There was some discussion about cpu affinity cases in the issue:
https://github.com/antirez/redis/issues/2863
So this patch implements cpu affinity settings via redis.conf; we can
then configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist by cpu list.
Examples of cpulist in redis.conf:
server_cpulist 0-7:2 means cpu affinity 0,2,4,6
bio_cpulist 1,3 means cpu affinity 1,3
aof_rewrite_cpulist 8-11 means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11 means cpu affinity 1,10,11
Test on linux/freebsd, both work fine.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
            server_cpulist
            bio_cpulist
            aof_rewrite_cpulist
            bgsave_cpulist
Add 'set-proc-title' config so that this mechanism can be disabled (#3623)
If the option `set-proc-title` is no, then do nothing for the proc title.
The reason has been explained long ago, see following:
We updated redis to 2.8.8, then found there were some side effects when
redis always changes the process title.
We run several slave instances on one computer, and all these slaves
listen on unix sockets only, so ps will show:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server *:0
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server *:0
for redis 2.6 the output of ps is like following:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server /etc/redis/a.conf
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server /etc/redis/b.conf
The latter is more informative in our case. The situation
is worse when we manage the config and process running
state with Salt. Salt checks the process by running "ps |
grep SIG" (for Gentoo systems) to check the running
state, where SIG is the string to search for when
looking for the service process with ps. Previously, we
defined SIG as "/usr/sbin/redis-server
/etc/redis/a.conf". Since the ps output is identical in
our case, we have no way to check the state of a
specified redis instance.
So, for our case, we prefer the old behavior, i.e. do
not change the process title for the main redis process,
or add an option such as "set-proc-title [yes|no]" to
control this behavior.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
            set-proc-title
            cluster-config-file
            cluster-port
            oom-score-adj
            oom-score-adj-values
Protected configs and sensitive commands (#9920)
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
Currently we add this flag to `dbfilename`, and `dir` configs,
all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command
These have a default value set to `no`, so that these features are not
exposed by default to client connections, and can only be set by modifying the config file.
Users can change each of these to either `yes` (allow all access), or `local` (allow access from
local TCP connections and unix domain connections)
Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
I.e. we don't consider DEBUG command being blocked as an issue (people shouldn't have been using it),
and the few configs we protected are unlikely to have been set at runtime anyway.
On the other hand, it's likely that some users who use modules load them from the config file anyway.
Note that this is the whole point of this PR: for redis to be more secure by default and to reduce the attack surface for
innocent users, so secure defaults will necessarily mean a breaking change.
            enable-protected-configs
            enable-debug-command
            enable-module-command
            dbfilename
            logfile
            dir
            socket-mark-id
            req-res-logfile
            client-default-resp
        }

        if {!$::tls} {
            append skip_configs {
                tls-prefer-server-ciphers
                tls-session-cache-timeout
                tls-session-cache-size
                tls-session-caching
                tls-cert-file
                tls-key-file
                tls-client-cert-file
                tls-client-key-file
                tls-dh-params-file
                tls-ca-cert-file
                tls-ca-cert-dir
                tls-protocols
                tls-ciphers
                tls-ciphersuites
                tls-port
            }
        }

        set configs {}
        foreach {k v} [r config get *] {
            if {[lsearch $skip_configs $k] != -1} {
                continue
            }
            dict set configs $k $v
            # try to set the config to the same value it already has
            r config set $k $v
        }

        set newconfigs {}
        foreach {k v} [r config get *] {
            if {[lsearch $skip_configs $k] != -1} {
                continue
            }
            dict set newconfigs $k $v
        }

        dict for {k v} $configs {
            set vv [dict get $newconfigs $k]
            if {$v != $vv} {
                fail "config $k mismatch, expecting $v but got $vv"
            }
        }
    }

    # Do a force-all config rewrite and make sure we're able to parse
    # it.
    test {CONFIG REWRITE sanity} {
        # Capture state of config before
        set configs {}
        foreach {k v} [r config get *] {
            dict set configs $k $v
        }

        # Rewrite entire configuration, restart and confirm the
        # server is able to parse it and start.
        assert_equal [r debug config-rewrite-force-all] "OK"
        restart_server 0 true false
        wait_done_loading r

        # Verify no changes were introduced
        dict for {k v} $configs {
            assert_equal $v [lindex [r config get $k] 1]
        }
    } {} {external:skip}

    test {CONFIG REWRITE handles save and shutdown properly} {
        r config set save "3600 1 300 100 60 10000"
        r config set shutdown-on-sigterm "nosave now"
        r config set shutdown-on-sigint "save"
        r config rewrite
        restart_server 0 true false
        assert_equal [r config get save] {save {3600 1 300 100 60 10000}}
        assert_equal [r config get shutdown-on-sigterm] {shutdown-on-sigterm {nosave now}}
        assert_equal [r config get shutdown-on-sigint] {shutdown-on-sigint save}

        r config set save ""
        r config set shutdown-on-sigterm "default"
        r config rewrite
        restart_server 0 true false
        assert_equal [r config get save] {save {}}
        assert_equal [r config get shutdown-on-sigterm] {shutdown-on-sigterm default}

        start_server {config "minimal.conf"} {
            assert_equal [r config get save] {save {3600 1 300 100 60 10000}}
            r config set save ""
            r config rewrite
            restart_server 0 true false
            assert_equal [r config get save] {save {}}
        }
    } {} {external:skip}

    test {CONFIG SET with multiple args} {
        set some_configs {maxmemory 10000001 repl-backlog-size 10000002 save {3000 5}}

        # Backup
        set backups {}
        foreach c [dict keys $some_configs] {
            lappend backups $c [lindex [r config get $c] 1]
        }

        # Set multiple configs in one call and verify
        assert_equal [eval "r config set $some_configs"] "OK"
        dict for {c val} $some_configs {
            assert_equal [lindex [r config get $c] 1] $val
        }

        # Restore backup
        assert_equal [eval "r config set $backups"] "OK"
    }

    test {CONFIG SET rollback on set error} {
        # This test passes an invalid percent value to maxmemory-clients which should cause an
        # input verification failure during the "set" phase before trying to apply the
        # configuration. We want to make sure the correct failure happens and everything
        # is rolled back.
        # backup maxmemory config
        set mm_backup [lindex [r config get maxmemory] 1]
        set mmc_backup [lindex [r config get maxmemory-clients] 1]
        set qbl_backup [lindex [r config get client-query-buffer-limit] 1]
        # Set some value to maxmemory
        assert_equal [r config set maxmemory 10000002] "OK"
        # Set another value to maxmemory together with another invalid config
        assert_error "ERR CONFIG SET failed (possibly related to argument 'maxmemory-clients') - percentage argument must be less or equal to 100" {
            r config set maxmemory 10000001 maxmemory-clients 200% client-query-buffer-limit invalid
        }
        # Validate we rolled back to original values
        assert_equal [lindex [r config get maxmemory] 1] 10000002
        assert_equal [lindex [r config get maxmemory-clients] 1] $mmc_backup
        assert_equal [lindex [r config get client-query-buffer-limit] 1] $qbl_backup
        # Make sure we revert back to the previous maxmemory
        assert_equal [r config set maxmemory $mm_backup] "OK"
    }

    test {CONFIG SET rollback on apply error} {
        # This test tries to configure a used port number in redis. This is expected
        # to pass the `CONFIG SET` validity checking implementation but fail on
        # actual "apply" of the setting. This will validate that after an "apply"
        # failure we rollback to the previous values.
        proc dummy_accept {chan addr port} {}

        set some_configs {maxmemory 10000001 port 0 client-query-buffer-limit 10m}

        # On Linux we also set the oom score adj which has an apply function. This is
        # used to verify that even successful applies are rolled back if some other
        # config's apply fails.
        set oom_adj_avail [expr {!$::external && [exec uname] == "Linux"}]
        if {$oom_adj_avail} {
            proc get_oom_score_adj {} {
                set pid [srv 0 pid]
                set fd [open "/proc/$pid/oom_score_adj" "r"]
                set val [gets $fd]
                close $fd
                return $val
            }
            set some_configs [linsert $some_configs 0 oom-score-adj yes oom-score-adj-values {1 1 1}]
            set read_oom_adj [get_oom_score_adj]
        }

        # Backup
        set backups {}
        foreach c [dict keys $some_configs] {
            lappend backups $c [lindex [r config get $c] 1]
        }

        set used_port [find_available_port $::baseport $::portcount]
        dict set some_configs port $used_port

        # Run a dummy server on used_port so we know we can't configure redis to
        # use it. It's ok for this to fail because that means used_port is invalid
        # anyway
        catch {socket -server dummy_accept -myaddr 127.0.0.1 $used_port} e
        if {$::verbose} { puts "dummy_accept: $e" }

        # Try to listen on the used port, pass some more configs to make sure the
        # returned failure message is for the first bad config and everything is rolled back.
        assert_error "ERR CONFIG SET failed (possibly related to argument 'port') - Unable to listen on this port*" {
            eval "r config set $some_configs"
        }

        # Make sure we reverted back to previous configs
        dict for {conf val} $backups {
            assert_equal [lindex [r config get $conf] 1] $val
        }

        if {$oom_adj_avail} {
            assert_equal [get_oom_score_adj] $read_oom_adj
        }

        # Make sure we can still communicate with the server (on the original port)
        set r1 [redis_client]
        assert_equal [$r1 ping] "PONG"
        $r1 close
    }

    test {CONFIG SET duplicate configs} {
        assert_error "ERR *duplicate*" {r config set maxmemory 10000001 maxmemory 10000002}
    }

    test {CONFIG SET set immutable} {
        assert_error "ERR *immutable*" {r config set daemonize yes}
    }

    test {CONFIG GET hidden configs} {
        set hidden_config "key-load-delay"

        # When we use a pattern we shouldn't get the hidden config
        assert {![dict exists [r config get *] $hidden_config]}

        # When we explicitly request the hidden config we should get it
        assert {[dict exists [r config get $hidden_config] "$hidden_config"]}
    }

    test {CONFIG GET multiple args} {
        set res [r config get maxmemory maxmemory* bind *of]

        # Verify there are no duplicates in the result
        assert_equal [expr [llength [dict keys $res]]*2] [llength $res]

        # Verify we got both name and alias in result
        assert {[dict exists $res slaveof] && [dict exists $res replicaof]}

        # Verify pattern found multiple maxmemory* configs
        assert {[dict exists $res maxmemory] && [dict exists $res maxmemory-samples] && [dict exists $res maxmemory-clients]}

        # Verify we also got the explicit config
        assert {[dict exists $res bind]}
    }

    test {redis-server command line arguments - error cases} {
        catch {exec src/redis-server --port} err
        assert_match {*'port'*wrong number of arguments*} $err

        catch {exec src/redis-server --port 6380 --loglevel} err
        assert_match {*'loglevel'*wrong number of arguments*} $err

        # Take `6379` and `6380` as the port option value.
        catch {exec src/redis-server --port 6379 6380} err
        assert_match {*'port "6379" "6380"'*wrong number of arguments*} $err

        # Take `--loglevel` and `verbose` as the port option value.
        catch {exec src/redis-server --port --loglevel verbose} err
        assert_match {*'port "--loglevel" "verbose"'*wrong number of arguments*} $err

        # Take `--bla` as the port option value.
        catch {exec src/redis-server --port --bla --loglevel verbose} err
        assert_match {*'port "--bla"'*argument couldn't be parsed into an integer*} $err

        # Take `--bla` as the loglevel option value.
        catch {exec src/redis-server --logfile --my--log--file --loglevel --bla} err
        assert_match {*'loglevel "--bla"'*argument(s) must be one of the following*} $err

        # Using MULTI_ARG's own check, empty option value
        catch {exec src/redis-server --shutdown-on-sigint} err
        assert_match {*'shutdown-on-sigint'*argument(s) must be one of the following*} $err
        catch {exec src/redis-server --shutdown-on-sigint "now force" --shutdown-on-sigterm} err
        assert_match {*'shutdown-on-sigterm'*argument(s) must be one of the following*} $err

        # Something like `redis-server --some-config --config-value1 --config-value2 --loglevel debug` would break,
        # because if you want to pass a value to a config starting with `--`, it can only be a single value.
        catch {exec src/redis-server --replicaof 127.0.0.1 abc} err
        assert_match {*'replicaof "127.0.0.1" "abc"'*Invalid master port*} $err
        catch {exec src/redis-server --replicaof --127.0.0.1 abc} err
        assert_match {*'replicaof "--127.0.0.1" "abc"'*Invalid master port*} $err
        catch {exec src/redis-server --replicaof --127.0.0.1 --abc} err
        assert_match {*'replicaof "--127.0.0.1"'*wrong number of arguments*} $err
    } {} {external:skip}

    test {redis-server command line arguments - allow passing option name and option value in the same arg} {
        start_server {config "default.conf" args {"--maxmemory 700mb" "--maxmemory-policy volatile-lru"}} {
            assert_match [r config get maxmemory] {maxmemory 734003200}
            assert_match [r config get maxmemory-policy] {maxmemory-policy volatile-lru}
        }
    } {} {external:skip}

    test {redis-server command line arguments - wrong usage that we support anyway} {
        start_server {config "default.conf" args {loglevel verbose "--maxmemory '700mb'" "--maxmemory-policy 'volatile-lru'"}} {
            assert_match [r config get loglevel] {loglevel verbose}
            assert_match [r config get maxmemory] {maxmemory 734003200}
            assert_match [r config get maxmemory-policy] {maxmemory-policy volatile-lru}
        }
    } {} {external:skip}

    test {redis-server command line arguments - allow option value to use the `--` prefix} {
        start_server {config "default.conf" args {--proc-title-template --my--title--template --loglevel verbose}} {
            assert_match [r config get proc-title-template] {proc-title-template --my--title--template}
            assert_match [r config get loglevel] {loglevel verbose}
        }
    } {} {external:skip}

    test {redis-server command line arguments - option name and option value in the same arg and `--` prefix} {
        start_server {config "default.conf" args {"--proc-title-template --my--title--template" "--loglevel verbose"}} {
            assert_match [r config get proc-title-template] {proc-title-template --my--title--template}
            assert_match [r config get loglevel] {loglevel verbose}
        }
    } {} {external:skip}
|
|
|
|
|
redis-server command line arguments support take one bulk string with spaces for MULTI_ARG configs parsing. And allow options value to use the -- prefix (#10660)
## Take one bulk string with spaces for MULTI_ARG configs parsing
Currently redis-server looks for arguments that start with `--`,
and anything in between them is considered arguments for the config.
like: `src/redis-server --shutdown-on-sigint nosave force now --port 6380`
MULTI_ARG configs behave differently for CONFIG command, vs the command
line argument for redis-server.
i.e. CONFIG command takes one bulk string with spaces in it, while the
command line takes an argv array with multiple values.
In this PR, in config.c, if `argc > 1` we can take them as is,
and if the config is a `MULTI_ARG` and `argc == 1`, we will split it by spaces.
So both of these will be the same:
```
redis-server --shutdown-on-sigint nosave force now --shutdown-on-sigterm nosave force
redis-server --shutdown-on-sigint nosave "force now" --shutdown-on-sigterm nosave force
redis-server --shutdown-on-sigint nosave "force now" --shutdown-on-sigterm "nosave force"
```
## Allow options value to use the `--` prefix
Currently it decides to switch to the next config, as soon as it sees `--`,
even if there was not a single value provided yet to the last config,
this makes it impossible to define a config value that has `--` prefix in it.
For instance, if we want to set the logfile to `--my--log--file`,
like `redis-server --logfile --my--log--file --loglevel verbose`,
current code will handle that incorrectly.
In this PR, now we allow a config value that has `--` prefix in it.
**But note that** something like `redis-server --some-config --config-value1 --config-value2 --loglevel debug`
would not work, because if you want to pass a value to a config starting with `--`, it can only be a single value.
like: `redis-server --some-config "--config-value1 --config-value2" --loglevel debug`
An example (using `--` prefix config value):
```
redis-server --logfile --my--log--file --loglevel verbose
redis-cli config get logfile loglevel
1) "loglevel"
2) "verbose"
3) "logfile"
4) "--my--log--file"
```
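The splitting rule this implies can be sketched as: a token starting with `--` opens a new option, unless the current option has no value yet, in which case the token is taken as that option's value. This is an assumption-laden Python sketch (`parse_cli` is a made-up name, not Redis code):

```python
# Hypothetical sketch of the option-splitting rule described above.
# Illustrative only; the real parsing is C code in redis-server.
def parse_cli(tokens):
    options, cur = {}, None
    for tok in tokens:
        if tok.startswith("--") and (cur is None or options[cur]):
            # Start a new option: either we have no current option,
            # or the current one already received at least one value.
            cur = tok[2:]
            options[cur] = []
        else:
            # Otherwise the token (even a "--..." one) is a value.
            options[cur].append(tok)
    return options

print(parse_cli(["--logfile", "--my--log--file", "--loglevel", "verbose"]))
```

Note how this also reproduces the documented limitation: in `--some-config --config-value1 --config-value2`, only the first `--` token becomes a value, and the second opens a new (bogus) option.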
### Potentially breaking change
`redis-server --save --loglevel verbose` used to work the same as `redis-server --save "" --loglevel verbose`;
now it will error!
2022-05-11 04:33:35 -04:00
|
|
|
test {redis-server command line arguments - save with empty input} {
|
redis-server command line arguments allow passing config name and value in the same arg (#10866)
This commit has two topics.
## Passing config name and value in the same arg
In #10660 (Redis 7.0.1), when we added support for config values that start with a `--` prefix (one of the two topics of that PR),
we broke another pattern: `redis-server redis.config "name value"`, passing both the config name
and its value in the same arg, see #10865.
This wasn't an intended change (i.e. we didn't realize this pattern used to work).
Although this is wrong usage, we'd still like to fix it.
Now we support something like:
```
src/redis-server redis.conf "--maxmemory '700mb'" "--maxmemory-policy volatile-lru" --proc-title-template --my--title--template --loglevel verbose
```
## Changes around --save
Also in this PR, we undo the breaking change we made in #10660 on purpose.
1. `redis-server redis.conf --save --loglevel verbose` (missing `save` argument before another argument).
In 7.0.1, it threw a wrong-arg error.
Now it will work and reset the save, similar to how it used to be in 7.0.0 and 6.2.x.
2. `redis-server redis.conf --loglevel verbose --save` (missing `save` argument as the last argument).
In 6.2, it did not reset the save, which was a bug (inconsistent with the previous bullet).
Now we will make it work and reset the save as well (a bug fix).
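The "name and value in one arg" fix can be sketched as a pre-pass that splits any `--`-prefixed arg containing a space using shell-like quoting. A hedged Python sketch (`expand_args` is a made-up name; the real logic is in `config.c`):

```python
import shlex

# Hypothetical sketch of the fix described above: an arg such as
# "--maxmemory '700mb'" carries both the config name and its value,
# so it is split with shell-like quoting rules before normal parsing.
def expand_args(args):
    out = []
    for arg in args:
        if arg.startswith("--") and " " in arg:
            # shlex honors the inner quotes, e.g. '700mb' stays one token.
            out.extend(shlex.split(arg))
        else:
            out.append(arg)
    return out

print(expand_args(["--maxmemory '700mb'",
                   "--maxmemory-policy volatile-lru",
                   "--loglevel", "verbose"]))
```

After this pass the argv looks as if the user had written each name and value as separate arguments.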
2022-06-26 07:36:39 -04:00
|
|
|
start_server {config "default.conf" args {--save --loglevel verbose}} {
|
|
|
|
assert_match [r config get save] {save {}}
|
|
|
|
assert_match [r config get loglevel] {loglevel verbose}
|
|
|
|
}
|
|
|
|
|
|
|
|
start_server {config "default.conf" args {--loglevel verbose --save}} {
|
|
|
|
assert_match [r config get save] {save {}}
|
|
|
|
assert_match [r config get loglevel] {loglevel verbose}
|
|
|
|
}
|
|
|
|
|
|
|
|
start_server {config "default.conf" args {--save {} --loglevel verbose}} {
|
|
|
|
assert_match [r config get save] {save {}}
|
|
|
|
assert_match [r config get loglevel] {loglevel verbose}
|
|
|
|
}
|
|
|
|
|
|
|
|
start_server {config "default.conf" args {--loglevel verbose --save {}}} {
|
|
|
|
assert_match [r config get save] {save {}}
|
|
|
|
assert_match [r config get loglevel] {loglevel verbose}
|
|
|
|
}
|
|
|
|
|
|
|
|
start_server {config "default.conf" args {--proc-title-template --save --save {} --loglevel verbose}} {
|
|
|
|
assert_match [r config get proc-title-template] {proc-title-template --save}
|
|
|
|
assert_match [r config get save] {save {}}
|
|
|
|
assert_match [r config get loglevel] {loglevel verbose}
|
|
|
|
}
|
|
|
|
|
|
|
|
} {} {external:skip}
|
|
|
|
|
|
|
|
test {redis-server command line arguments - take one bulk string with spaces for MULTI_ARG configs parsing} {
|
|
|
|
start_server {config "default.conf" args {--shutdown-on-sigint nosave force now --shutdown-on-sigterm "nosave force"}} {
|
|
|
|
assert_match [r config get shutdown-on-sigint] {shutdown-on-sigint {nosave now force}}
|
|
|
|
assert_match [r config get shutdown-on-sigterm] {shutdown-on-sigterm {nosave force}}
|
|
|
|
}
|
|
|
|
} {} {external:skip}
|
|
|
|
|
|
|
|
# Config file at this point is at a weird state, and includes all
|
|
|
|
# known keywords. Might be a good idea to avoid adding tests here.
|
|
|
|
}
|
Protected configs and sensitive commands (#9920)
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
Currently we add this flag to `dbfilename`, and `dir` configs,
all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command
These have a default value set to `no`, so that these features are not
exposed by default to client connections, and can only be set by modifying the config file.
Users can change each of these to either `yes` (allow all access), or `local` (allow access from
local TCP connections and unix domain connections)
Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
I.e. we don't consider blocking the DEBUG command an issue (people shouldn't have been using it),
and the few configs we protected are unlikely to have been set at runtime anyway.
On the other hand, it's likely that some users who use modules load them from the config file anyway.
Note that this is the whole point of this PR: for redis to be more secure by default and to reduce the attack surface on
innocent users, so secure defaults will necessarily mean a breaking change.
2021-12-19 03:46:16 -05:00
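The three-way `no`/`local`/`yes` gate described above can be sketched as a simple check against the connection's locality. This is an illustrative Python sketch with made-up names, not Redis's actual C implementation:

```python
# Hypothetical sketch of the protected-config/command gate described
# above. "local" means loopback TCP or unix-domain-socket clients only.
def access_allowed(setting, conn_is_local):
    if setting == "yes":
        return True           # allow all client connections
    if setting == "local":
        return conn_is_local  # allow only local connections
    return False              # "no" (the default): block everyone

# Default ("no"): even a local client is refused.
assert access_allowed("no", conn_is_local=True) is False
# "local": loopback/unix clients pass, remote clients don't.
assert access_allowed("local", conn_is_local=True) is True
assert access_allowed("local", conn_is_local=False) is False
```

The tests below exercise exactly these three cases: `no` rejects everything, and `local` accepts a loopback client while rejecting a non-loopback one.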
|
|
|
|
|
|
|
start_server {tags {"introspection external:skip"} overrides {enable-protected-configs {no} enable-debug-command {no}}} {
|
|
|
|
test {cannot modify protected configuration - no} {
|
|
|
|
assert_error "ERR *protected*" {r config set dir somedir}
|
|
|
|
assert_error "ERR *DEBUG command not allowed*" {r DEBUG HELP}
|
|
|
|
} {} {needs:debug}
|
|
|
|
}
|
|
|
|
|
|
|
|
start_server {config "minimal.conf" tags {"introspection external:skip"} overrides {protected-mode {no} enable-protected-configs {local} enable-debug-command {local}}} {
|
|
|
|
test {cannot modify protected configuration - local} {
|
|
|
|
# verify that for local connection it doesn't error
|
|
|
|
r config set dbfilename somename
|
|
|
|
r DEBUG HELP
|
|
|
|
|
|
|
|
# Get a non-loopback address of this instance for this test.
|
|
|
|
set myaddr [get_nonloopback_addr]
|
|
|
|
if {$myaddr != "" && ![string match {127.*} $myaddr]} {
|
|
|
|
# Non-loopback client should fail
|
|
|
|
set r2 [get_nonloopback_client]
|
|
|
|
assert_error "ERR *protected*" {$r2 config set dir somedir}
|
|
|
|
assert_error "ERR *DEBUG command not allowed*" {$r2 DEBUG HELP}
|
|
|
|
}
|
|
|
|
} {} {needs:debug}
|
Allow most CONFIG SET during loading, block some commands in async-loading (#9878)
## background
Till now CONFIG SET was blocked during loading.
(In the not so distant past, GET was disallowed too)
We recently (not released yet) added an async-loading mode, see #9323,
and during that time it'll serve CONFIG SET and any other command.
And now we realized (#9770) that some configs, and commands are dangerous
during async-loading.
## changes
* Allow most CONFIG SET during loading (both on async-loading and normal loading)
* Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
* Block a few config during loading (`appendonly`, `repl-diskless-load`, and `dir`)
* Block a few commands during loading (list below)
## the blocked commands:
* SAVE - obviously we don't wanna start a foreground save during loading 8-)
* BGSAVE - we don't mind to schedule one, but we don't wanna fork now
* BGREWRITEAOF - we don't mind to schedule one, but we don't wanna fork now
* MODULE - we obviously don't wanna unload a module during replication / rdb loading
(MODULE HELP and MODULE LIST are not blocked)
* SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
requests now.
* REPLICAOF / SLAVEOF - we're in the middle of replicating; maybe it makes sense to let
the user abort it, but they couldn't do that so far, and we don't wanna take any risk of bugs due to odd state.
* CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
## other fixes
* processEventsWhileBlocked had an issue when nested; this could happen with a busy script
during async loading (new), but also with a busy script during AOF loading (old). This led to a crash in
the scenario described in #6988
2021-12-22 07:11:16 -05:00
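The gating rules above can be sketched as a lookup against the blocked command and config sets. This is an illustrative Python sketch with invented names; the real checks are spread through Redis's command-dispatch path, and sub-command exceptions (e.g. MODULE HELP/LIST) are deliberately omitted here:

```python
# Hypothetical sketch of loading-time gating described above
# (simplified: MODULE HELP/LIST and CLUSTER subcommand allowances
# from the PR are not modeled).
BLOCKED_COMMANDS_WHILE_LOADING = {"save", "bgsave", "bgrewriteaof",
                                  "module", "sync", "psync",
                                  "replicaof", "slaveof"}
BLOCKED_CONFIGS_WHILE_LOADING = {"appendonly", "repl-diskless-load", "dir"}

def check_command(loading, cmd, args=()):
    cmd = cmd.lower()
    if not loading:
        return "OK"
    if cmd in BLOCKED_COMMANDS_WHILE_LOADING:
        return "LOADING Redis is loading the dataset in memory"
    if (cmd == "config" and len(args) >= 2 and args[0].lower() == "set"
            and args[1].lower() in BLOCKED_CONFIGS_WHILE_LOADING):
        return "LOADING Redis is loading the dataset in memory"
    return "OK"   # most commands, and most CONFIG SETs, are allowed

# Mirrors the test below: loglevel is settable during loading, dir is not.
assert check_command(True, "CONFIG", ("SET", "loglevel", "debug")) == "OK"
assert check_command(True, "CONFIG", ("SET", "dir", "asdf")).startswith("LOADING")
assert check_command(True, "SAVE").startswith("LOADING")
```

This matches the `config during loading` test further down, which expects `config set loglevel debug` to succeed and `config set dir asdf` to fail with a `LOADING` error.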
|
|
|
}
|
|
|
|
|
|
|
|
test {config during loading} {
|
|
|
|
start_server [list overrides [list key-load-delay 50 loading-process-events-interval-bytes 1024 rdbcompression no save "900 1"]] {
|
|
|
|
# create a big rdb that will take long to load. it is important
|
|
|
|
# for keys to be big since the server processes events only once in 2mb.
|
|
|
|
# 100mb of rdb, 100k keys will load in more than 5 seconds
|
|
|
|
r debug populate 100000 key 1000
|
|
|
|
|
|
|
|
restart_server 0 false false
|
|
|
|
|
|
|
|
# make sure it's still loading
|
|
|
|
assert_equal [s loading] 1
|
|
|
|
|
|
|
|
# verify some configs are allowed during loading
|
|
|
|
r config set loglevel debug
|
|
|
|
assert_equal [lindex [r config get loglevel] 1] debug
|
|
|
|
|
|
|
|
# verify some configs are forbidden during loading
|
|
|
|
assert_error {LOADING*} {r config set dir asdf}
|
|
|
|
|
|
|
|
# make sure it's still loading
|
|
|
|
assert_equal [s loading] 1
|
|
|
|
|
|
|
|
# no need to keep waiting for loading to complete
|
|
|
|
exec kill [srv 0 pid]
|
|
|
|
}
|
|
|
|
} {} {external:skip}
|
Fix bugs in CONFIG REWRITE, omitting rename-command and include lines, and inserting comments around module and acl configs (#10761)
A regression from #10285 (redis 7.0).
CONFIG REWRITE would put lines with `include`, `rename-command`,
`user`, `loadmodule`, and any module-specific config in a comment.
ACL `user`, `loadmodule` and module-specific configs would be
re-inserted at the end (instead of updating the existing lines), so the only
implication is a messy config file full of comments.
But for `rename-command` and `include`, the implication would be that
they're now missing, so a server restart would lose them.
Co-authored-by: Oran Agra <oran@redislabs.com>
2022-06-02 01:36:55 -04:00
|
|
|
|
|
|
|
test {CONFIG REWRITE handles rename-command properly} {
|
|
|
|
start_server {tags {"introspection"} overrides {rename-command {flushdb badger}}} {
|
|
|
|
assert_error {ERR unknown command*} {r flushdb}
|
|
|
|
|
|
|
|
r config rewrite
|
|
|
|
restart_server 0 true false
|
|
|
|
|
|
|
|
assert_error {ERR unknown command*} {r flushdb}
|
|
|
|
}
|
|
|
|
} {} {external:skip}
|
|
|
|
|
|
|
|
test {CONFIG REWRITE handles alias config properly} {
|
|
|
|
start_server {tags {"introspection"} overrides {hash-max-listpack-entries 20 hash-max-ziplist-entries 21}} {
|
|
|
|
assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 21}
|
|
|
|
assert_equal [r config get hash-max-ziplist-entries] {hash-max-ziplist-entries 21}
|
|
|
|
r config set hash-max-listpack-entries 100
|
|
|
|
|
|
|
|
r config rewrite
|
|
|
|
restart_server 0 true false
|
|
|
|
|
|
|
|
assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 100}
|
|
|
|
}
|
|
|
|
# test the order doesn't matter
|
|
|
|
start_server {tags {"introspection"} overrides {hash-max-ziplist-entries 20 hash-max-listpack-entries 21}} {
|
|
|
|
assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 21}
|
|
|
|
assert_equal [r config get hash-max-ziplist-entries] {hash-max-ziplist-entries 21}
|
|
|
|
r config set hash-max-listpack-entries 100
|
|
|
|
|
|
|
|
r config rewrite
|
|
|
|
restart_server 0 true false
|
|
|
|
|
|
|
|
assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 100}
|
|
|
|
}
|
|
|
|
} {} {external:skip}
|