While loading data, it is not safe to generate keys with DEBUG
POPULATE: the generated keys may collide with keys being loaded,
causing a panic on the duplicate key.
Move some declarations from cluster.h to cluster_legacy.h.
The items moved are specific to the legacy clustering
implementation and DO NOT require any other refactoring
other than moving them from one file to another.
Signed-off-by: Josh Hershberg <yehoshua@redis.com>
Make redis-cli --bigkeys and --memkeys usable on replicas in cluster
mode by sending the READONLY command. This is done only when -c is also
given. We don't detect whether a node is a master or a replica, so we
send READONLY in both cases; READONLY has no effect on masters.
Release notes:
Make redis-cli --bigkeys and --memkeys usable on cluster replicas
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Oran Agra <oran@redislabs.com>
Add documentation of the URI format in the `--help` output of
`redis-cli` and `redis-benchmark`.
In particular, it's good for users to know that they need to specify
"default" as the username when authenticating without a username. Other
details of the URI format are described too, like scheme and dbnum.
It used to be possible to connect to Redis using a URL with an empty
username, like `redis-cli -u redis://:PASSWORD@localhost:6379/0`. This
was broken in 6.2 (#8048), and there was a discussion about it in #9186.
Now, users need to specify "default" as the username, e.g.
`redis-cli -u redis://default:PASSWORD@localhost:6379/0`, so it's better
to document it.
Refer to #12746 for more details.
We have some test cases for swapdb with watched keys, but we were
missing separate basic swapdb test cases, the unhappy path, and flushdb
after swapdb. So the test cases were added in keyspace.tcl.
CI reports that this test failed; the reason is that during command
processing the node processed PING/PONG, resulting in a ping_sent or
pong_received mismatch.
Change to use MULTI to avoid the timing issue. The test was introduced
in #12224.
This is currently harmless: since we have already cleared the dict
before, clearing resets the rehashidx to -1, and in incrementallyRehash
we call dictIsRehashing to check.
Still, it would be nice to empty the list to avoid meaningless attempts,
and the code is also unified to reduce misunderstandings.
When using the DB iterator, it uses dictInitSafeIterator to init an old
safe dict iterator. When dbIteratorNext is used, it jumps to the next
slot's dict once we are done with the current dict. During this process
we never call dictResumeRehashing, which causes the dict's pauserehash
to always be > 0. As a result, dictRehashMilliseconds returns
immediately, leaving slot dicts in a state where rehashing can never
complete.
In the "expire scan should skip dictionaries with lot's of empty buckets" test,
adding a `keys *` reproduces the problem reliably: `keys *` calls
dbIteratorNext, triggering a traversal of all slot dicts.
Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods that
call dictResetIterator.
Issue was introduced in #11695.
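A minimal sketch of the idea, with a simplified iterator struct (the real code uses dbReleaseIterator/dbIteratorInitNextSafeIterator and the dbIterator type from #11695):
```c
typedef struct {
    dict **dicts;       /* one dict per slot */
    int num_slots;      /* e.g. 16384 in cluster mode */
    int slot;           /* current slot, -1 when exhausted */
    dictIterator di;    /* safe iterator over dicts[slot] */
} dbIteratorSketch;

/* Sketch of the fix: when advancing to the next slot's dict, reset the
 * old safe iterator first so dictResetIterator() decrements pauserehash
 * on the dict we are leaving, letting its rehashing resume. */
static void dbIteratorNextDict(dbIteratorSketch *it) {
    if (it->di.d) dictResetIterator(&it->di);  /* resumes rehashing */
    while (++it->slot < it->num_slots) {
        if (dictSize(it->dicts[it->slot]) > 0) {
            dictInitSafeIterator(&it->di, it->dicts[it->slot]);
            return;
        }
    }
    it->slot = -1; /* iteration finished */
}
```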
This PR introduces a new macro, serverAssertWithInfoDebug, to do complex assertions only for debugging. The main intention is to allow running complex operations during tests without impacting runtime performance. This assertion is enabled when setting DEBUG_ASSERTIONS.
The DEBUG_ASSERTIONS flag is set for the daily and CI variants of `test-sanitizer-address`.
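One plausible shape for such a macro (a sketch, assuming it simply forwards to the existing serverAssertWithInfo when DEBUG_ASSERTIONS is set):
```c
/* Sketch: expensive assertions compile to a no-op in normal builds and
 * only run when the build defines DEBUG_ASSERTIONS (e.g. in CI). */
#ifdef DEBUG_ASSERTIONS
#define serverAssertWithInfoDebug(_c, _o, _e) serverAssertWithInfo(_c, _o, _e)
#else
#define serverAssertWithInfoDebug(_c, _o, _e) ((void)0)
#endif
```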
A recently added test fails with a timeout under valgrind in GH actions.
Locally with valgrind the test finishes within 1.5 seconds. Couldn't find
any issue due to lack of reproducibility. Increasing the timeout and
adding an additional log to the test to understand how many keys
were left at the end.
Redis 7.2 (#9406) introduced a new modules event, `RedisModuleEvent_Key`.
This new event allows the module to read the key data just before it is removed
from the database (either deleted, expired, evicted, or overwritten).
When the key was removed from the database by active expire or eviction,
the new event was not called as part of an execution unit. This can cause an
issue if the module registers a post notification job inside the event: the job will
not be executed atomically with the expiration/eviction operation and will not be
replicated inside a MULTI/EXEC. Moreover, the post notification job will be executed
right after the event, when it is still not safe to perform any write operation. This
violates the promise that a post notification job will be called atomically with the
operation that triggered it and **only when it is safe to write**.
This PR fixes the issue by wrapping each expiration/eviction of a key in an execution
unit. This makes sure the entire operation runs atomically and all the post notification
jobs are executed at the end, where it is safe to write.
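A minimal sketch of the shape of the fix; the execution-unit wrapper names and signatures here are simplified assumptions, not the exact server internals:
```c
/* Sketch: wrap the whole eviction in one execution unit so any post
 * notification jobs registered by module events run at its end, when
 * writing is safe again, and propagate inside the same MULTI/EXEC. */
static void evictKeySketch(redisDb *db, robj *key) {
    enterExecutionUnit();    /* hypothetical wrapper: begin atomic unit */
    notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted", key, db->id);
    dbDelete(db, key);
    exitExecutionUnit();     /* post jobs + propagation happen here */
}
```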
Tests were modified to verify the fix.
This change overcomes many stability issues experienced with the
vmactions action.
We need to limit VMs to 8GB for better stability, as the 13GB default
seems to hang them occasionally.
Shell code has been simplified since this action seems to use `bash -e`,
which will abort on non-zero exit codes anyway.
Changes the `gettimeofday` callers by removing an unused optional
output argument. This brings two benefits:
- simpler code, discarding an unnecessary arg.
- possibly faster, due to the implementation in the kernel.
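For example, a typical microsecond-time helper after the change (a sketch along the lines of Redis's ustime()):
```c
#include <sys/time.h>

/* Return microsecond-resolution UNIX time. Passing NULL for the
 * obsolete struct timezone argument skips filling an output we never
 * read, and may let the libc/kernel take a slightly faster path. */
static long long ustime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((long long)tv.tv_sec) * 1000000 + tv.tv_usec;
}
```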
Clarify the errors related to cluster mode in scripts, and return the command that encountered an execution error along with the specific error message.
---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Currently, we do not write the following sensitive commands into the ~/.rediscli_history file:
ACL SETUSER username [rule [rule ...]]
AUTH [username] password
HELLO [AUTH username password]
MIGRATE host port <key | ""> destination-db timeout [[AUTH password | AUTH2 username password]]
CONFIG SET masterauth master-password
CONFIG SET masteruser username
CONFIG SET requirepass foobared
However, we still write the following sensitive commands into the ~/.rediscli_history file:
ACL GETUSER username
SENTINEL CONFIG SET sentinel-pass password
SENTINEL CONFIG SET sentinel-user username
SENTINEL SET mastername auth-pass password
SENTINEL SET mastername auth-user username
This change adds the commands of the second list to those skipped from being written to the history file.
Reverts the skipping of defrag tests in cluster mode (done in #12672).
Instead it skips only some defrag tests that are relevant for cluster mode.
The tests now run well after investigating and making the changes in #12674 and #12694.
Co-authored-by: Oran Agra <oran@redislabs.com>
Optimize the performance of SCAN commands when a match pattern can only contain keys from a
single slot in cluster mode. This can happen when the pattern contains a hash tag before any
wildcard matchers, or when the pattern contains no matchers at all (an exact key).
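A rough sketch of how such a check can look, assuming Redis's crc16() and the first-complete-hash-tag rule used by keyHashSlot(); patternFixedSlot() is a hypothetical helper:
```c
#include <stdint.h>
#include <stddef.h>

uint16_t crc16(const char *buf, int len); /* from Redis's crc16.c */

/* Sketch: if the pattern pins a complete hash tag before any glob
 * matcher, return 1 and the single slot all matching keys live in. */
static int patternFixedSlot(const char *pattern, int *slot) {
    const char *open = NULL;
    for (const char *p = pattern; *p; p++) {
        /* A matcher (or escape) before the tag completes: give up. */
        if (*p == '*' || *p == '?' || *p == '[' || *p == '\\') break;
        if (*p == '{' && !open) { open = p; continue; }
        if (*p == '}' && open && p > open + 1) {
            *slot = crc16(open + 1, (int)(p - open - 1)) & 0x3FFF;
            return 1; /* non-empty {tag} found before any matcher */
        }
    }
    return 0; /* slot not determined by the pattern alone */
}
```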
In #12476 server.stat_client_qbuf_limit_disconnections was added. It is written in readQueryFromClient, which may be called by multiple threads when io-threads and io-threads-do-reads are enabled. Somehow we missed making it an atomic variable.
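Redis wraps this in its own atomicvar.h macros; a plain C11 sketch of the same idea:
```c
#include <stdatomic.h>

/* Sketch: a monotonically increasing stat counter that multiple I/O
 * threads may bump concurrently; relaxed ordering is sufficient since
 * no other memory depends on its value. */
static _Atomic long long stat_client_qbuf_limit_disconnections;

static void recordQbufLimitDisconnection(void) {
    atomic_fetch_add_explicit(&stat_client_qbuf_limit_disconnections, 1,
                              memory_order_relaxed);
}
```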
When looking up a key while redis is rehashing, we currently look up both tables (ht[0] and ht[1]).
But if we compare the key's bucket index to rehashidx, and index < rehashidx, we can conclude:
1. the dict is rehashing (rehashidx is -1 when it is not rehashing)
2. the key can't be in ht[0], so we can skip it and continue directly to ht[1]
The possible performance gain here is not in skipping the loop over the linked list (which is empty),
but rather the lookup in the table (which could be a cache miss).
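A minimal sketch of the idea; field and helper names approximate the classic two-table dict layout, not the exact current code:
```c
/* Sketch: during rehashing, buckets of ht[0] below rehashidx have
 * already been migrated to ht[1], so reading them is a pointless
 * (and potentially cache-missing) memory access. */
dictEntry *dictFindSketch(dict *d, const void *key) {
    uint64_t h = dictHashKey(d, key);
    for (int table = 0; table <= 1; table++) {
        unsigned long idx = h & d->ht[table].sizemask;
        if (table == 0 && dictIsRehashing(d) && (long)idx < d->rehashidx)
            continue; /* bucket already moved to ht[1]: skip the read */
        for (dictEntry *he = d->ht[table].table[idx]; he; he = he->next)
            if (dictCompareKeys(d, key, he->key)) return he;
        if (!dictIsRehashing(d)) break; /* ht[1] empty when not rehashing */
    }
    return NULL;
}
```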
---------
Co-authored-by: zhangshihua003 <zhangshihua003@ke.com>
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: judeng <abc3844@126.com>
This PR optimizes the time complexity of findSlotByKeyIndex from O(log^2(N)) to O(log(N)) by using the
tree structure of the binary index tree to find a slot in a single search of the index.
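A minimal sketch of the classic one-descent search on a Fenwick (binary indexed) tree; names are illustrative, not the actual findSlotByKeyIndex code:
```c
/* tree[] is a standard 1-indexed Fenwick tree over per-slot key counts.
 * Find the slot holding the k-th key by descending the implicit tree
 * once: O(log n), instead of a binary search that performs an O(log n)
 * prefix-sum query at every step (O(log^2 n)). */
static int findKthSlot(const long long *tree, int n, long long k) {
    int pos = 0, step;
    for (step = 1; step * 2 <= n; step *= 2); /* highest power of two <= n */
    for (; step > 0; step /= 2) {
        if (pos + step <= n && tree[pos + step] < k) {
            pos += step;       /* whole subtree lies before the k-th key */
            k -= tree[pos];    /* skip the keys it accounts for */
        }
    }
    return pos; /* 0-based: pos keys lie strictly before the answer slot */
}
```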
As part of #11695, independent dictionaries were introduced per slot.
The time complexity to discover the total number of buckets across all dictionaries
increased to O(N) with the straightforward implementation of iterating over
all dictionaries and adding the dictBuckets of each.
To optimize the time complexity, we can maintain a global counter at the
db level that keeps track of the bucket count, and update it on the start
and end of rehashing.
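A minimal sketch of the bookkeeping, assuming hypothetical rehash start/end hooks:
```c
/* Sketch: a db-level bucket counter that is O(1) to read. Rehashing
 * temporarily doubles a dict's tables: ht[1] is allocated at the start
 * and ht[0] is freed at the end, so the counter is adjusted at exactly
 * those two boundaries instead of being recomputed over 16k dicts. */
typedef struct { unsigned long long bucket_count; } dbStatsSketch;

static void onRehashStart(dbStatsSketch *s, unsigned long new_table_size) {
    s->bucket_count += new_table_size;   /* ht[1] just allocated */
}

static void onRehashEnd(dbStatsSketch *s, unsigned long old_table_size) {
    s->bucket_count -= old_table_size;   /* ht[0] freed after migration */
}
```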
---------
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Fixing issues described in #12672, started after #11695.
Related to #12674.
Fixes the `defrag didn't stop` issue.
For some layouts of keys in memory,
defrag_later_item_in_progress was not getting reset once we finish
defragging the later items and move to the next slot. This stopped
the scan from happening in the later slots, and the defrag never stopped.
Fix some outdated comments and add a comment for moduleNotifyKeyspaceEvent,
which we added in #11084, since its behavior seems a bit implicit.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
Test failure on freebsd CI:
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
```
Observation:
Expiry of keys might happen before the empty buckets are formed, which defeats the validation of the expiry skip logic.
Solution:
Disable expiration until the empty buckets are formed.
Fixing issues started after #11695, when the defrag tests began being executed in cluster mode too.
For some reason, it looks like the defragmentation is over too quickly, before the test is able to
detect that it's running.
So now, instead of waiting to see that it's active, we wait to see that it did some work:
```
[err]: Active defrag big list: cluster in tests/unit/memefficiency.tcl
defrag not started.
[err]: Active defrag big keys: cluster in tests/unit/memefficiency.tcl
defrag didn't stop.
```
Temporarily disabling a few of the defrag tests in cluster mode to make the daily run stable:
Active defrag eval scripts
Active defrag big keys
Active defrag big list
Active defrag edge case
The dictionary iterator logic in the `tryResizeHashTables` method picks the next
(incorrect) dictionary while the cursor is at a given slot. This could lead to some
dictionary/slot getting skipped from resizing.
Also stabilize the test.
The problem was introduced recently in #11695.
Before this commit, Unix socket setup performed chmod(2) on the socket
file after calling listen(2). Depending on what umask is used, this
could leave the file with the wrong permissions for a short period of
time. As a result, another process could exploit this race condition and
establish a connection that would otherwise not be possible.
We now make sure the socket permissions are set up prior to calling
listen(2).
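A minimal sketch of the fixed ordering (error handling trimmed; `perm` stands for the configured unixsocketperm value):
```c
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>

/* Sketch: apply the restrictive mode before listen(2) makes the socket
 * connectable, closing the window in which a default-umask file could
 * be connected to by a process that should be denied. */
static int unixServerSketch(const char *path, mode_t perm, int backlog) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa;
    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    if (fd == -1 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1) return -1;
    if (perm && chmod(sa.sun_path, perm) == -1) return -1; /* before listen() */
    return listen(fd, backlog) == -1 ? -1 : fd;
}
```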
(cherry picked from commit 1119ecae6fd8796fa337df2212f09173ab6c7b0a)
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Using heap allocation during signal handlers is unsafe.
This PR's purpose is to replace all the heap allocations done within the signal
handlers raised upon server crash and assertions.
These were added in #12453.
writeStacktraces(): allocates the stacktraces output array on the calling thread's
stack and assigns the address to a global variable.
It calls `ThreadsManager_runOnThreads()`, which invokes `collect_stacktrace_data()`
on each thread: each thread writes to a different location in the above array, so the
writes need no synchronization.
get_ready_to_signal_threads_tids(): instead of allocating the `tids` array, it receives it
as a fixed-size array parameter, allocated on the stack of the calling function, and
returns the number of valid threads. The array size is hard-coded to 50.
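A minimal sketch of that calling convention (Linux-specific gettid; the body is illustrative, the real code enumerates all the process's threads):
```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

#define MAX_THREADS 50 /* the hard-coded cap mentioned above */

/* Sketch: the caller owns a fixed-size, stack-allocated array; we fill
 * it and return the count, so the crash handler never calls malloc(). */
static size_t get_tids_sketch(pid_t tids[MAX_THREADS]) {
    size_t n = 0;
    /* The real code walks the process's threads; record just ourselves
     * here to keep the sketch self-contained. */
    tids[n++] = (pid_t)syscall(SYS_gettid);
    return n;
}
```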
`ThreadsManager_runOnThreads()`: to avoid allocating the outputs array, the
**callback signature** was changed: it now returns void. The function's own return type
has also changed to int: it returns 1 on success and 0 otherwise.
Other unsafe calls will be handled in following PRs.
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a DB with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we pack the slot id into the LSBs of the cursor so it can be passed around between the client and the server (see the cursor-packing sketch after this list). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way the DBSIZE operation stays O(1). The same is kept for an O(1) expires count as well.
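A minimal sketch of the cursor packing mentioned in the scan API bullet (constant and helper names are illustrative):
```c
#include <stdint.h>

#define SLOT_BITS 14                        /* 16384 = 2^14 cluster slots */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

/* Sketch: the slot id lives in the cursor's low bits and the per-dict
 * scan cursor in the high bits, so one 64-bit value round-trips through
 * the client. Passing a bare slot id as the cursor starts a scan there. */
static inline uint64_t cursorPack(uint64_t dictCursor, int slot) {
    return (dictCursor << SLOT_BITS) | (uint64_t)slot;
}
static inline int cursorSlot(uint64_t cursor) { return (int)(cursor & SLOT_MASK); }
static inline uint64_t cursorDictCursor(uint64_t cursor) { return cursor >> SLOT_BITS; }
```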
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar, because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
When a server in the test suite crashes and is restarted by restart_server, we didn't clean its pid from the list.
We can see that when the corrupt-dump-fuzzer hangs, it has a long list of servers to clean, but in fact they're all already dead.
In #11568 we removed the NOSCRIPT flag from commands and kept the BLOCKING flag,
aiming to allow them in scripts and let them implicitly behave in the non-blocking way.
In that sense, the old behavior was to allow LPOP and reject BLPOP, and the new behavior
is to allow BLPOP too, and fail it only in case it ends up blocking.
So likewise, so far we allowed XREAD and rejected XREAD BLOCK, and we will now allow
that too, and only reject it if it ends up blocking.
In #10536 we introduced the assert; some older server versions
(like 7.0) don't gossip shard_id, so we will not add the node to
cluster->shards, and node->shard_id is filled in randomly and may not
be found here.
As a result, if we add a 7.2 node to a 7.0 cluster and allocate slots
to the 7.2 node, the 7.2 node will crash when it hits this assert. Somewhat
like #12538.
In this PR, we remove the assert and replace it with an unconditional removal.
When entering pubsub mode and using redis-cli's CLI-only
connect command, we need to reset pubsub_mode because
we switch to a different connection.
This affects the prompt when the connection is successful,
and redis-cli will crash when the connect fails:
```
127.0.0.1:6379> subscribe ch
1) "subscribe"
2) "ch"
3) (integer) 1
127.0.0.1:6379(subscribed mode)> connect 127.0.0.1 6380
127.0.0.1:6380(subscribed mode)> ping
PONG
127.0.0.1:6380(subscribed mode)> connect a b
Could not connect to Redis at a:0: Name or service not known
Segmentation fault
```
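A minimal sketch of the fix inside the CLI's connection switching (the config field name is approximate):
```c
/* Sketch: the `connect` command creates a brand-new connection, and
 * subscriptions don't carry over, so the CLI's subscribed-mode flag
 * must be cleared before the prompt is rebuilt. */
static void cliSwitchConnectionSketch(void) {
    config.pubsub_mode = 0;  /* approximate field name in redis-cli */
    cliRefreshPrompt();      /* drop the "(subscribed mode)" suffix */
}
```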