# logreqres:skip because it seems many of these tests rely heavily on RESP2
start_server {tags {"tracking network logreqres:skip"}} {
# Create a deferred client we'll use to redirect invalidation
# messages to.
set rd_redirection [redis_deferring_client]
$rd_redirection client id
set redir_id [$rd_redirection read]
$rd_redirection subscribe __redis__:invalidate
$rd_redirection read ; # Consume the SUBSCRIBE reply.
# Create another client that's not used as a redirection client
# We should always keep this client's buffer clean
set rd [redis_deferring_client]
# Client to be used for SET and GET commands
# We don't read this client's buffer
set rd_sg [redis_client]
proc clean_all {} {
uplevel {
# We should turn r's TRACKING off first. If r is in RESP3,
# r FLUSHALL will send us tracking-redir-broken or other
# push messages which will not be consumed.
r CLIENT TRACKING off
$rd QUIT
$rd_redirection QUIT
set rd [redis_deferring_client]
set rd_redirection [redis_deferring_client]
$rd_redirection client id
set redir_id [$rd_redirection read]
$rd_redirection subscribe __redis__:invalidate
$rd_redirection read ; # Consume the SUBSCRIBE reply.
r FLUSHALL
r HELLO 2
r config set tracking-table-max-keys 1000000
}
}
test {Clients are able to enable tracking and redirect it} {
r CLIENT TRACKING on REDIRECT $redir_id
} {*OK}
test {The other connection is able to get invalidations} {
r SET a{t} 1
r SET b{t} 1
r GET a{t}
r INCR b{t} ; # This key should not be notified, since it wasn't fetched.
r INCR a{t}
set keys [lindex [$rd_redirection read] 2]
assert {[llength $keys] == 1}
assert {[lindex $keys 0] eq {a{t}}}
}
test {The client is now able to disable tracking} {
# Make sure to add a few more keys in the tracking list
# so that we can check for leaks, as a side effect.
r MGET a{t} b{t} c{t} d{t} e{t} f{t} g{t}
r CLIENT TRACKING off
} {*OK}
test {Clients can enable the BCAST mode with the empty prefix} {
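# In BCAST mode with no prefix, invalidation messages are sent for
# every modified key, even keys this client never read.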
r CLIENT TRACKING on BCAST REDIRECT $redir_id
} {*OK*}
test {The connection gets invalidation messages about all the keys} {
r MSET a{t} 1 b{t} 2 c{t} 3
set keys [lsort [lindex [$rd_redirection read] 2]]
assert {$keys eq {a{t} b{t} c{t}}}
}
test {Clients can enable the BCAST mode with prefixes} {
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX a: PREFIX b:
r MULTI
r INCR a:1{t}
r INCR a:2{t}
r INCR b:1{t}
r INCR b:2{t}
# we should not get this key
r INCR c:1{t}
r EXEC
# Because of the internals, we know we are going to receive
# two separate notifications for the two different prefixes.
set keys1 [lsort [lindex [$rd_redirection read] 2]]
set keys2 [lsort [lindex [$rd_redirection read] 2]]
set keys [lsort [list {*}$keys1 {*}$keys2]]
assert {$keys eq {a:1{t} a:2{t} b:1{t} b:2{t}}}
}
test {Adding prefixes to BCAST mode works} {
r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX c:
r INCR c:1234
set keys [lsort [lindex [$rd_redirection read] 2]]
assert {$keys eq {c:1234}}
}
test {Tracking NOLOOP mode in standard mode works} {
r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id NOLOOP
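# NOLOOP suppresses invalidations for writes performed by this
# connection itself; only writes from other clients are reported.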
r MGET otherkey1{t} loopkey{t} otherkey2{t}
$rd_sg SET otherkey1{t} 1; # We should get this
r SET loopkey{t} 1 ; # We should not get this
$rd_sg SET otherkey2{t} 1; # We should get this
# Because of the internals, we know we are going to receive
# two separate notifications for the two different keys.
set keys1 [lsort [lindex [$rd_redirection read] 2]]
set keys2 [lsort [lindex [$rd_redirection read] 2]]
set keys [lsort [list {*}$keys1 {*}$keys2]]
assert {$keys eq {otherkey1{t} otherkey2{t}}}
}
test {Tracking NOLOOP mode in BCAST mode works} {
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP
$rd_sg SET otherkey1 1; # We should get this
r SET loopkey 1 ; # We should not get this
$rd_sg SET otherkey2 1; # We should get this
# Because $rd_sg sends commands synchronously, we know we are
# going to receive two separate notifications.
set keys1 [lsort [lindex [$rd_redirection read] 2]]
set keys2 [lsort [lindex [$rd_redirection read] 2]]
set keys [lsort [list {*}$keys1 {*}$keys2]]
assert {$keys eq {otherkey1 otherkey2}}
}
test {Tracking gets notification of expired keys} {
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP
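# BCAST invalidations are also sent when a key is removed by
# expiration, not only when it is written.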
r SET mykey myval px 1
r SET mykeyotherkey myval ; # We should not get it
after 1000
set keys [lsort [lindex [$rd_redirection read] 2]]
assert {$keys eq {mykey}}
}
test {Tracking gets notification of lazy expired keys} {
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP
# Use MULTI/EXEC to expose a race where the key gets two invalidations
# in the same event loop: one by the client itself, filtered by NOLOOP, and
# a second one by the lazy expire
r MULTI
r SET mykey{t} myval px 1
r SET mykeyotherkey{t} myval ; # We should not get it
r DEBUG SLEEP 0.1
r GET mykey{t}
r EXEC
set keys [lsort [lindex [$rd_redirection read] 2]]
assert {$keys eq {mykey{t}}}
} {} {needs:debug}
test {HELLO 3 reply is correct} {
set reply [r HELLO 3]
assert_equal [dict get $reply proto] 3
}
test {HELLO without protover} {
set reply [r HELLO 3]
assert_equal [dict get $reply proto] 3
set reply [r HELLO]
assert_equal [dict get $reply proto] 3
set reply [r HELLO 2]
assert_equal [dict get $reply proto] 2
set reply [r HELLO]
assert_equal [dict get $reply proto] 2
# restore RESP3 for next test
r HELLO 3
}
test {RESP3 based basic invalidation} {
r CLIENT TRACKING off
r CLIENT TRACKING on
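# In default (non-BCAST) mode the server remembers the keys read by this
# connection; when another client ($rd_sg) modifies one of them, the
# invalidation is pushed on this same RESP3 connection, so [r read] returns it.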
$rd_sg SET key1 1
r GET key1
$rd_sg SET key1 2
r read
} {invalidate key1}
test {RESP3 tracking redirection} {
r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id
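# With REDIRECT, the invalidation is not pushed on this connection; it is
# delivered to $rd_redirection via the __redis__:invalidate channel.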
$rd_sg SET key1 1
r GET key1
$rd_sg SET key1 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key1}}
}
test {Invalidations of previous keys can be redirected after switching to RESP3} {
r HELLO 2
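# key1 is read below while r speaks RESP2; its tracking entry survives the
# switch to RESP3, so the invalidation still reaches the redirection client.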
$rd_sg SET key1 1
r GET key1
r HELLO 3
$rd_sg SET key1 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key1}}
}
test {Invalidations of new keys can be redirected after switching to RESP3} {
r HELLO 3
$rd_sg SET key1 1
r GET key1
$rd_sg SET key1 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key1}}
}
test {Invalid keys should not be tracked for scripts in NOLOOP mode} {
$rd_sg CLIENT TRACKING off
$rd_sg CLIENT TRACKING on NOLOOP
$rd_sg HELLO 3
$rd_sg SET key1 1
assert_equal "1" [$rd_sg GET key1]
# For a write command in a script, the written key should not be tracked when the NOLOOP flag is set
$rd_sg eval "return redis.call('set', 'key1', '2')" 1 key1
assert_equal "2" [$rd_sg GET key1]
$rd_sg CLIENT TRACKING off
}
test {Tracking only occurs for scripts when a command calls a read-only command} {
r CLIENT TRACKING off
r CLIENT TRACKING on
$rd_sg MSET key2{t} 1 key2{t} 1
# If a script doesn't call any read command, don't track any keys
r EVAL "redis.call('set', 'key3{t}', 'bar')" 2 key1{t} key2{t}
$rd_sg MSET key2{t} 2 key1{t} 2
assert_equal "PONG" [r ping]
# If a script calls a read command, only the read keys are tracked
r EVAL "redis.call('get', 'key2{t}')" 2 key1{t} key2{t}
$rd_sg MSET key2{t} 2 key3{t} 2
assert_equal {invalidate key2{t}} [r read]
assert_equal "PONG" [r ping]
# RO variants work like the normal variants
# If a RO script doesn't call any read command, don't track any keys
r EVAL_RO "redis.call('ping')" 2 key1{t} key2{t}
$rd_sg MSET key2{t} 2 key1{t} 2
assert_equal "PONG" [r ping]
# If a RO script calls a read command, only the read keys are tracked
r EVAL_RO "redis.call('get', 'key2{t}')" 2 key1{t} key2{t}
$rd_sg MSET key2{t} 2 key3{t} 2
assert_equal {invalidate key2{t}} [r read]
assert_equal "PONG" [r ping]
}
test {RESP3 Client gets tracking-redir-broken push message after cached key changed when redirection client is terminated} {
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1
$rd_redirection QUIT
assert_equal OK [$rd_redirection read]
$rd_sg SET key1 2
set MAX_TRIES 100
set res -1
for {set i 0} {$i <= $MAX_TRIES && $res < 0} {incr i} {
set res [lsearch -exact [r PING] "tracking-redir-broken"]
}
assert {$res >= 0}
# Consume PING reply
assert_equal PONG [r read]
# Reinstantiating after QUIT
set rd_redirection [redis_deferring_client]
$rd_redirection CLIENT ID
set redir_id [$rd_redirection read]
$rd_redirection SUBSCRIBE __redis__:invalidate
$rd_redirection read ; # Consume the SUBSCRIBE reply
}
test {Different clients can redirect to the same connection} {
r CLIENT TRACKING on REDIRECT $redir_id
$rd CLIENT TRACKING on REDIRECT $redir_id
assert_equal OK [$rd read] ; # Consume the TRACKING reply
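# Both r and $rd now redirect invalidations to the same $rd_redirection
# client, which receives one message per modified key.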
$rd_sg MSET key1{t} 1 key2{t} 1
r GET key1{t}
$rd GET key2{t}
assert_equal 1 [$rd read] ; # Consume the GET reply
$rd_sg INCR key1{t}
$rd_sg INCR key2{t}
set res1 [lindex [$rd_redirection read] 2]
set res2 [lindex [$rd_redirection read] 2]
assert {$res1 eq {key1{t}}}
assert {$res2 eq {key2{t}}}
}
test {Different clients using different protocols can track the same key} {
$rd HELLO 3
set reply [$rd read] ; # Consume the HELLO reply
assert_equal 3 [dict get $reply proto]
$rd CLIENT TRACKING on
assert_equal OK [$rd read] ; # Consume the TRACKING reply
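# $rd uses RESP3 without REDIRECT, so its invalidation arrives as a push on
# its own connection, while r keeps redirecting to $rd_redirection.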
$rd_sg set key1 1
r GET key1
$rd GET key1
assert_equal 1 [$rd read] ; # Consume the GET reply
$rd_sg INCR key1
set res1 [lindex [$rd_redirection read] 2]
$rd PING ; # Non-redirecting client has to talk to the server in order to get the invalidation message
set res2 [lindex [split [$rd read] " "] 1]
assert_equal PONG [$rd read] ; # Consume the PING reply, which comes together with the invalidation message
assert {$res1 eq {key1}}
assert {$res2 eq {key1}}
}
test {No invalidation message when using OPTIN option} {
r CLIENT TRACKING on OPTIN REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1 ; # This key should not be notified, since OPTIN is on and CLIENT CACHING yes wasn't called
$rd_sg SET key1 2
# Prepare a message to consume on $rd_redirection so we don't get blocked
r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key2 1
r GET key2 ; # This key should be notified
$rd_sg SET key2 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key2}}
}
test {Invalidation message sent when using OPTIN option with CLIENT CACHING yes} {
r CLIENT TRACKING on OPTIN REDIRECT $redir_id
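# With OPTIN, a read command's keys are tracked only when it is
# immediately preceded by CLIENT CACHING yes.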
$rd_sg SET key1 3
r CLIENT CACHING yes
r GET key1
$rd_sg SET key1 4
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key1}}
}
test {Invalidation message sent when using OPTOUT option} {
r CLIENT TRACKING off
r CLIENT TRACKING on OPTOUT REDIRECT $redir_id
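# With OPTOUT, read keys are tracked by default; CLIENT CACHING no
# excludes the keys of the next command (see the following test).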
$rd_sg SET key1 1
r GET key1
$rd_sg SET key1 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key1}}
}
test {No invalidation message when using OPTOUT option with CLIENT CACHING no} {
$rd_sg SET key1 1
r CLIENT CACHING no
r GET key1 ; # This key should not be notified, since OPTOUT is on and CLIENT CACHING no was called
$rd_sg SET key1 2
# Prepare a message to consume on $rd_redirection so we don't get blocked
$rd_sg SET key2 1
r GET key2 ; # This key should be notified
$rd_sg SET key2 2
set res [lindex [$rd_redirection read] 2]
assert {$res eq {key2}}
}
test {Able to redirect to a RESP3 client} {
$rd_redirection UNSUBSCRIBE __redis__:invalidate ; # Need to unsub first before we can do HELLO 3
set res [$rd_redirection read] ; # Consume the UNSUBSCRIBE reply
assert_equal {__redis__:invalidate} [lindex $res 1]
$rd_redirection HELLO 3
set res [$rd_redirection read] ; # Consume the HELLO reply
assert_equal [dict get $res proto] 3
$rd_redirection SUBSCRIBE __redis__:invalidate
set res [$rd_redirection read] ; # Consume the SUBSCRIBE reply
assert_equal {__redis__:invalidate} [lindex $res 1]
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1
$rd_sg INCR key1
set res [lindex [$rd_redirection read] 1]
assert {$res eq {key1}}
$rd_redirection HELLO 2
set res [$rd_redirection read] ; # Consume the HELLO reply
assert_equal [dict get $res proto] 2
}
test {After switching from normal tracking to BCAST mode, no invalidation message is produced for pre-BCAST keys} {
r CLIENT TRACKING off
r HELLO 3
r CLIENT TRACKING on
$rd_sg SET key1 1
r GET key1
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST
$rd_sg INCR key1
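# In BCAST mode with no prefix the INCR above is broadcast, so exactly one
# invalidation is expected here; the key tracked before switching to BCAST
# must not produce a second one.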
set inv_msg [r PING]
set ping_reply [r read]
assert {$inv_msg eq {invalidate key1}}
assert {$ping_reply eq {PONG}}
}
test {BCAST with prefix collisions throw errors} {
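# Prefixes registered by a single client must not overlap (one being a
# prefix of another); such registrations are rejected with an error.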
set r [redis_client]
catch {$r CLIENT TRACKING ON BCAST PREFIX FOOBAR PREFIX FOO} output
assert_match {ERR Prefix 'FOOBAR'*'FOO'*} $output
catch {$r CLIENT TRACKING ON BCAST PREFIX FOO PREFIX FOOBAR} output
assert_match {ERR Prefix 'FOO'*'FOOBAR'*} $output
$r CLIENT TRACKING ON BCAST PREFIX FOO PREFIX BAR
catch {$r CLIENT TRACKING ON BCAST PREFIX FO} output
assert_match {ERR Prefix 'FO'*'FOO'*} $output
catch {$r CLIENT TRACKING ON BCAST PREFIX BARB} output
assert_match {ERR Prefix 'BARB'*'BAR'*} $output
$r CLIENT TRACKING OFF
}
test {HDEL delivers invalidate message after response in the same connection} {
r CLIENT TRACKING off
r HELLO 3
r CLIENT TRACKING on
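# myhash is read and then written by this same connection (default mode, no
# NOLOOP), so HDEL invalidates it; the push message must arrive only after
# the HDEL reply.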
r HSET myhash f 1
r HGET myhash f
set res [r HDEL myhash f]
assert_equal $res 1
set res [r read]
assert_equal $res {invalidate myhash}
}
test {Tracking invalidation message is not interleaved with multiple keys response} {
r CLIENT TRACKING off
r HELLO 3
r CLIENT TRACKING on
# We need to disable active expire, so we can trigger lazy expire
r DEBUG SET-ACTIVE-EXPIRE 0
r MULTI
r MSET x{t} 1 y{t} 2
r PEXPIRE y{t} 100
r GET y{t}
r EXEC
after 110
# Read expired key y{t}, generate invalidate message about this key
set res [r MGET x{t} y{t}]
assert_equal $res {1 {}}
# Consume the invalidate message which is after command response
set res [r read]
assert_equal $res {invalidate y{t}}
r DEBUG SET-ACTIVE-EXPIRE 1
} {OK} {needs:debug}
test {Tracking invalidation message is not interleaved with transaction response} {
r CLIENT TRACKING off
r HELLO 3
r CLIENT TRACKING on
r MSET a{t} 1 b{t} 2
r GET a{t}
# Start a transaction, make a{t} generate an invalidate message
r MULTI
r INCR a{t}
r GET b{t}
set res [r EXEC]
assert_equal $res {2 2}
set res [r read]
# Consume the invalidate message which is after command response
assert_equal $res {invalidate a{t}}
}
test {Tracking invalidation message of eviction keys should be before response} {
# Get the current memory limit and calculate a new limit.
r CLIENT TRACKING off
r HELLO 3
r CLIENT TRACKING on
# make sure the previous test is really done before sampling used_memory
wait_lazyfree_done r
set used [expr {[s used_memory] - [s mem_not_counted_for_evict]}]
set limit [expr {$used+100*1024}]
set old_policy [lindex [r config get maxmemory-policy] 1]
r config set maxmemory $limit
# We set policy volatile-random, so only keys with ttl will be evicted
r config set maxmemory-policy volatile-random
# Add a volatile key and track it.
r setex volatile-key 10000 x
r get volatile-key
# We use SETBIT here so we can create a big key and push used_memory
# above maxmemory; the next command will then evict volatile keys. We
# can't use SET, since it would need an equally big input buffer, which would fail.
r setbit big-key 1600000 0 ;# this will consume 200kb
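# (1,600,000 bits is about 200,000 bytes of payload, well above the 100KB of
# headroom added to maxmemory above, so eviction has to kick in.)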
# volatile-key is evicted before response.
set res [r getbit big-key 0]
assert_equal $res {invalidate volatile-key}
set res [r read]
assert_equal $res 0
r config set maxmemory-policy $old_policy
r config set maxmemory 0
}
test {Unblocked BLMOVE gets notification after response} {
r RPUSH list2{t} a
$rd HELLO 3
$rd read
$rd CLIENT TRACKING on
$rd read
# Tracking key list2{t}
$rd LRANGE list2{t} 0 -1
$rd read
# We block on list1{t}
$rd BLMOVE list1{t} list2{t} left left 0
wait_for_blocked_clients_count 1
# unblock $rd, list2{t} gets element and generate invalidation message
r rpush list1{t} foo
assert_equal [$rd read] {foo}
assert_equal [$rd read] {invalidate list2{t}}
}
test {Tracking gets notification on tracking table key eviction} {
r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id NOLOOP
r MSET key1{t} 1 key2{t} 2
# Let the server track the two keys for us
r MGET key1{t} key2{t}
# Force the eviction of all the keys but one:
r config set tracking-table-max-keys 1
# Note that we may have other keys in the table for this client,
# since we disabled/enabled tracking multiple times with the same
# ID, and tracking does not do ID cleanups for performance reasons.
# So we check that eventually we'll receive one or the other key,
# otherwise the test will die on timeout.
while 1 {
set keys [lindex [$rd_redirection read] 2]
if {$keys eq {key1{t}} || $keys eq {key2{t}}} break
}
# We should receive an invalidation notification for one of
# the two keys (only one may remain in the table)
assert {$keys eq {key1{t}} || $keys eq {key2{t}}}
}
test {Invalidation message received for flushall} {
clean_all
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1
$rd_sg FLUSHALL
set msg [$rd_redirection read]
assert {[lindex $msg 2] eq {}}
}
test {Invalidation message received for flushdb} {
clean_all
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1
$rd_sg FLUSHDB
set msg [$rd_redirection read]
assert {[lindex $msg 2] eq {}}
}
test {Test ASYNC flushall} {
clean_all
r CLIENT TRACKING on REDIRECT $redir_id
r GET key1
r GET key2
assert_equal [s 0 tracking_total_keys] 2
$rd_sg FLUSHALL ASYNC
assert_equal [s 0 tracking_total_keys] 0
assert_equal [lindex [$rd_redirection read] 2] {}
}
test {flushdb tracking invalidation message is not interleaved with transaction response} {
clean_all
r HELLO 3
r CLIENT TRACKING on
r SET a{t} 1
r GET a{t}
r MULTI
r FLUSHDB
set res [r EXEC]
assert_equal $res {OK}
# Consume the invalidate message which is after command response
r read
} {invalidate {}}
# Keys are defined to be evicted 100 at a time by default.
# If after eviction the number of keys still surpasses the limit
# defined in tracking-table-max-keys, we increase the eviction
# effort to 200, then 300, and so on.
# This test exercises this effort incrementation.
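# With the numbers used below (250 tracked keys, limit 1), a single pass at
# the default effort of 100 cannot get under the limit, so the effort should
# keep growing until the 249 invalidation messages the loop below expects
# have been delivered.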
test {Server is able to evacuate enough keys when num of keys surpasses limit by more than defined initial effort} {
clean_all
set NUM_OF_KEYS_TO_TEST 250
set TRACKING_TABLE_MAX_KEYS 1
r CLIENT TRACKING on REDIRECT $redir_id
for {set i 0} {$i < $NUM_OF_KEYS_TO_TEST} {incr i} {
$rd_sg SET key$i $i
r GET key$i
}
r config set tracking-table-max-keys $TRACKING_TABLE_MAX_KEYS
# If not enough keys are evicted, we won't get enough invalidation
# messages, and "$rd_redirection read" will block.
# If too many keys are evicted, we will get too many invalidation
# messages, and the assert will fail.
for {set i 0} {$i < $NUM_OF_KEYS_TO_TEST - $TRACKING_TABLE_MAX_KEYS} {incr i} {
$rd_redirection read
}
$rd_redirection PING
assert {[$rd_redirection read] eq {pong {}}}
}
test {Tracking info is correct} {
clean_all
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
$rd_sg SET key2 2
r GET key1
r GET key2
$rd CLIENT TRACKING on BCAST PREFIX prefix:
assert [string match *OK* [$rd read]]
$rd_sg SET prefix:key1 1
$rd_sg SET prefix:key2 2
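# A sketch of the expected accounting, assuming the usual semantics of these
# INFO fields: tracking_total_keys counts keys in the invalidation table
# (key1 and key2), tracking_total_items counts key->client entries (one
# client per key, so also 2), tracking_total_prefixes counts the single
# BCAST prefix "prefix:", and tracking_clients counts both r and $rd.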
set info [r info]
regexp "\r\ntracking_total_items:(.*?)\r\n" $info _ total_items
regexp "\r\ntracking_total_keys:(.*?)\r\n" $info _ total_keys
regexp "\r\ntracking_total_prefixes:(.*?)\r\n" $info _ total_prefixes
regexp "\r\ntracking_clients:(.*?)\r\n" $info _ tracking_clients
assert {$total_items == 2}
assert {$total_keys == 2}
assert {$total_prefixes == 1}
assert {$tracking_clients == 2}
}
test {CLIENT GETREDIR provides correct client id} {
set res [r CLIENT GETREDIR]
assert_equal $redir_id $res
r CLIENT TRACKING off
set res [r CLIENT GETREDIR]
assert_equal -1 $res
r CLIENT TRACKING on
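# With tracking on but no REDIRECT, invalidations are delivered to this
# client itself, which GETREDIR reports as 0.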
set res [r CLIENT GETREDIR]
assert_equal 0 $res
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking off} {
r CLIENT TRACKING off
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {off} $flags
set redirect [dict get $res redirect]
assert_equal {-1} $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking on} {
r CLIENT TRACKING on
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on} $flags
set redirect [dict get $res redirect]
assert_equal {0} $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking on with options} {
r CLIENT TRACKING on REDIRECT $redir_id noloop
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on noloop} $flags
set redirect [dict get $res redirect]
assert_equal $redir_id $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking optin} {
r CLIENT TRACKING off
r CLIENT TRACKING on optin
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on optin} $flags
set redirect [dict get $res redirect]
assert_equal {0} $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
r CLIENT CACHING yes
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on optin caching-yes} $flags
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking optout} {
r CLIENT TRACKING off
r CLIENT TRACKING on optout
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on optout} $flags
set redirect [dict get $res redirect]
assert_equal {0} $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
r CLIENT CACHING no
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on optout caching-no} $flags
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking bcast mode} {
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST PREFIX foo PREFIX bar
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on bcast} $flags
set redirect [dict get $res redirect]
assert_equal {0} $redirect
set prefixes [lsort [dict get $res prefixes]]
assert_equal {bar foo} $prefixes
r CLIENT TRACKING off
r CLIENT TRACKING on BCAST
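# BCAST with no PREFIX means "broadcast every key"; TRACKINGINFO reports
# this as a single empty prefix.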
set res [r client trackinginfo]
set prefixes [dict get $res prefixes]
assert_equal {{}} $prefixes
}
test {CLIENT TRACKINGINFO provides reasonable results when tracking redir broken} {
clean_all
r HELLO 3
r CLIENT TRACKING on REDIRECT $redir_id
$rd_sg SET key1 1
r GET key1
$rd_redirection QUIT
assert_equal OK [$rd_redirection read]
$rd_sg SET key1 2
set res [lsearch -exact [r read] "tracking-redir-broken"]
assert {$res >= 0}
set res [r client trackinginfo]
set flags [dict get $res flags]
assert_equal {on broken_redirect} $flags
set redirect [dict get $res redirect]
assert_equal $redir_id $redirect
set prefixes [dict get $res prefixes]
assert_equal {} $prefixes
}
test {Regression test for #11715} {
# This issue manifests when the tracking table exceeds the max key limit,
# so keys get invalidated to bring Redis below the limit, but no command is
# then executed. This can occur in several ways, but the simplest is through
# MULTI-EXEC, which queues commands.
clean_all
r config set tracking-table-max-keys 2
# The cron will invalidate keys if we're above the limit, so disable it.
r debug pause-cron 1
# Set up a client that is tracking 2 keys and start a MULTI; this
# sets up the crash for later.
$rd HELLO 3
$rd read
$rd CLIENT TRACKING on
assert_match "OK" [$rd read]
$rd mget "1{tag}" "2{tag}"
assert_match "{} {}" [$rd read]
$rd multi
assert_match "OK" [$rd read]
# Reduce the tracking table keys to 1, this doesn't immediately take effect, but
# instead will apply on the next command.
r config set tracking-table-max-keys 1
# This command will get queued, so make sure this command doesn't crash.
$rd ping
$rd exec
# Validate we got some invalidation message and then the command was queued.
assert_match "invalidate *{tag}" [$rd read]
assert_match "QUEUED" [$rd read]
assert_match "PONG" [$rd read]
r debug pause-cron 0
} {OK} {needs:debug}
foreach resp {3 2} {
test "RESP$resp based basic invalidation with client reply off" {
# This entire test is mostly irrelevant for RESP2, but we run it anyway just for some extra coverage.
clean_all
$rd hello $resp
$rd read
$rd client tracking on
$rd read
$rd_sg set foo bar
$rd get foo
$rd read
$rd client reply off
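# Even with replies disabled, a RESP3 client should still get the
# invalidation push for foo; in RESP2 there is simply nothing to read.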
$rd_sg set foo bar2
if {$resp == 3} {
assert_equal {invalidate foo} [$rd read]
} elseif {$resp == 2} { } ;# Just coverage
# Verify things didn't get messed up and no unexpected reply was pushed to the client.
$rd client reply on
assert_equal {OK} [$rd read]
$rd ping
assert_equal {PONG} [$rd read]
}
}
test {RESP3 based basic redirect invalidation with client reply off} {
clean_all
set rd_redir [redis_deferring_client]
$rd_redir hello 3
$rd_redir read
$rd_redir client id
set rd_redir_id [$rd_redir read]
$rd client tracking on redirect $rd_redir_id
$rd read
$rd_sg set foo bar
$rd get foo
$rd read
$rd_redir client reply off
$rd_sg set foo bar2
assert_equal {invalidate foo} [$rd_redir read]
# Verify things didn't get messed up and no unexpected reply was pushed to the client.
$rd_redir client reply on
assert_equal {OK} [$rd_redir read]
$rd_redir ping
assert_equal {PONG} [$rd_redir read]
$rd_redir close
}
test {RESP3 based basic tracking-redir-broken with client reply off} {
clean_all
$rd hello 3
$rd read
$rd client tracking on redirect $redir_id
$rd read
$rd_sg set foo bar
$rd get foo
$rd read
$rd client reply off
$rd_redirection quit
$rd_redirection read
$rd_sg set foo bar2
set res [lsearch -exact [$rd read] "tracking-redir-broken"]
assert_morethan_equal $res 0
# Verify things didn't get messed up and no unexpected reply was pushed to the client.
$rd client reply on
assert_equal {OK} [$rd read]
$rd ping
assert_equal {PONG} [$rd read]
}
$rd_redirection close
$rd_sg close
$rd close
}

# Just some extra coverage for --log-req-res, because we do not
# run the full tracking unit in that mode
start_server {tags {"tracking network"}} {
test {Coverage: Basic CLIENT CACHING} {
set rd_redirection [redis_deferring_client]
$rd_redirection client id
set redir_id [$rd_redirection read]
assert_equal {OK} [r CLIENT TRACKING on OPTIN REDIRECT $redir_id]
assert_equal {OK} [r CLIENT CACHING yes]
r CLIENT TRACKING off
} {OK}
test {Coverage: Basic CLIENT REPLY} {
r CLIENT REPLY on
} {OK}
test {Coverage: Basic CLIENT TRACKINGINFO} {
r CLIENT TRACKINGINFO
} {flags off redirect -1 prefixes {}}
test {Coverage: Basic CLIENT GETREDIR} {
r CLIENT GETREDIR
} {-1}
}