## Issue
During client input/output buffer processing, memory usage is
incrementally updated to track clients that go beyond the
`maxmemory-clients` threshold, so they can be evicted. However, this
additional tracking wastes CPU cycles when no client eviction is
required. That is the case in two situations.
* `maxmemory-clients` is set to `0`, which means no client eviction at all
(applies to all clients)
* The `CLIENT NO-EVICT` flag is set to `ON`, which exempts that particular
client from eviction.
## Solution
* Disable client memory usage tracking during the read/write flow when
`maxmemory-clients` is set to `0` or `client no-evict` is `on`.
Memory usage is then tracked only in `clientCron`, i.e. it is updated
periodically (see the sketch after this list).
* Clean up the clients from the memory usage buckets when client eviction
is disabled.
* When the `maxmemory-clients` config is enabled or disabled at runtime,
we immediately update the memory usage buckets for all clients (in testing,
scanning 80,000 clients took about 20ms)
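A minimal sketch of the fast-path check (the names follow the Redis source, but the exact shape is an assumption):
```c
/* Skip per-I/O memory accounting entirely when eviction can't apply;
 * clientCron still refreshes the usage periodically. */
int updateClientMemUsageAndBucket(client *c) {
    if (server.maxmemory_clients == 0 || (c->flags & CLIENT_NO_EVICT))
        return 0; /* tracking deferred to clientCron */
    /* ... otherwise do the incremental accounting and move the client
     * to its new memory-usage bucket ... */
    return 1;
}
```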
Benchmarks show that this can improve performance by about 5% in
certain situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
In most cases when a listpack or intset is converted to a dict, the conversion
is triggered when adding an element. The extra element is added after conversion
to dict (in all cases except when the conversion is triggered by
set-max-intset-entries being reached).
If set-max-listpack-entries is set to a power of two, let's say 128, when
adding the 129th element, the 128 element listpack is first converted to a dict
with a hashtable presized for 128 elements. After converting to dict, the 129th
element is added to the dict which immediately triggers incremental rehashing
to size 256.
This commit instead presizes the dict to one more element, with the assumption
that conversion to dict is followed by adding another element, so the dict
doesn't immediately need rehashing.
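A sketch of the presize (dictTryExpand/lpLength per the Redis source; the exact call site is an assumption):
```c
/* Convert a listpack set to a dict, presized for one extra element so
 * that adding the element that triggered the conversion doesn't
 * immediately kick off incremental rehashing (e.g. 128 -> 256). */
dict *d = dictCreate(&setDictType);
if (dictTryExpand(d, lpLength(lp) + 1) != DICT_OK) {
    /* handle allocation failure */
}
```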
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
There is an issue with --sentinel:
```
[root]# src/redis-server sentinel.conf --sentinel --loglevel verbose
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 352
>>> 'sentinel "--loglevel" "verbose"'
Unrecognized sentinel configuration statement
```
This is because in #10660 (Redis 7.0.1), the `--` prefix change broke it.
In this PR, we will handle `--sentinel` the same as we did for `--save`
in #10866. i.e. it's a pseudo config option with no value.
This is take 2 of the `GEOSEARCH BYBOX` optimization based on the haversine
distance formula for the case where the longitude diff is 0.
The first take was in #11535 .
- Given a longitude diff of 0, the asin(sqrt(a)) term of the haversine reduces to asin(sin(abs(u))).
- arcsin(sin(x)) equals x when x ∈ [−π/2, π/2].
- Given that latitude is within [−π/2, π/2], we can simplify arcsin(sin(x)) to x.
On the sample dataset with 60M datapoints, we've measured 55% increase
in the achievable ops/sec.
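In code, the same-longitude case collapses to a plain latitude difference; a sketch assuming the existing deg_rad() and EARTH_RADIUS_IN_METERS helpers (and math.h's fabs):
```c
/* Distance between two points on the same meridian: no haversine,
 * no asin/sqrt, just the absolute latitude difference in radians. */
double geohashGetLatDistance(double lat1d, double lat2d) {
    return EARTH_RADIUS_IN_METERS * fabs(deg_rad(lat2d) - deg_rad(lat1d));
}
```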
This mechanism aims to reduce calls to malloc and free when
preparing the arguments the script sends to Redis commands.
It was originally implemented in 48c49c4
and 4f68655, and was removed in #10220 (on the assumption that it wasn't
needed and had no impact), but that turned out to be wrong: it does
provide some 5% performance improvement.
The implementation is a little simplistic: it assumes consecutive
calls use the same size at the same arg index, but that's arguably
sufficient since it's only aimed at caching very small things.
We could even consider always pre-allocating args to the full
LUA_CMD_OBJCACHE_MAX_LEN (64 bytes) rather than the right size for the argument;
that would increase the chance they can be reused.
But in some way this is already happening, since we're using
sdsalloc, which in turn uses s_malloc_usable and takes ownership
of the full size of the allocation, so we are padded to the allocator
bucket size.
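A rough sketch of the mechanism, modeled on the pre-#10220 code (names, sizes, and the surrounding loop variables j/obj_s/obj_len are assumptions):
```c
#define LUA_CMD_OBJCACHE_SIZE 32
#define LUA_CMD_OBJCACHE_MAX_LEN 64
static robj *cached_objects[LUA_CMD_OBJCACHE_SIZE];
static size_t cached_objects_len[LUA_CMD_OBJCACHE_SIZE];

/* While building argv[j] from a Lua string of length obj_len: reuse the
 * sds cached for this arg index if it's big enough, else allocate. */
if (j < LUA_CMD_OBJCACHE_SIZE && cached_objects[j] &&
    cached_objects_len[j] >= obj_len)
{
    sds s = cached_objects[j]->ptr;
    argv[j] = cached_objects[j];
    cached_objects[j] = NULL;
    memcpy(s, obj_s, obj_len + 1);
    sdssetlen(s, obj_len);
} else {
    argv[j] = createStringObject(obj_s, obj_len);
}
```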
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: sundb <sundbcn@gmail.com>
GEODIST used snprintf("%.4f") for the reply in addReplyDoubleDistance,
which was slow. This PR optimizes it, without breaking compatibility, by following
the approach of ll2string with some changes to match the use case of distance
and precision: we multiply by 10000, format it as an integer, and then insert
a decimal point. This achieves about a 35% increase in the achievable ops/sec.
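A self-contained sketch of the fixed-point idea (the real change adapts ll2string; this toy handles non-negative distances with exactly 4 decimal digits):
```c
#include <stddef.h>

static size_t dist2string(char *buf, size_t bufsize, double d) {
    unsigned long long v = (unsigned long long)(d * 10000.0 + 0.5);
    unsigned long long ip = v / 10000, fp = v % 10000;
    char tmp[32];
    int n = 0;
    /* fractional part: always 4 digits, written backwards */
    for (int i = 0; i < 4; i++) { tmp[n++] = (char)('0' + fp % 10); fp /= 10; }
    tmp[n++] = '.';
    do { tmp[n++] = (char)('0' + ip % 10); ip /= 10; } while (ip);
    if ((size_t)n + 1 > bufsize) return 0;
    for (int i = 0; i < n; i++) buf[i] = tmp[n - 1 - i]; /* reverse */
    buf[n] = '\0';
    return (size_t)n;
}
```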
Co-authored-by: Oran Agra <oran@redislabs.com>
The test failed with `ERR DUMP payload version or checksum are wrong`,
and only on CentOS. This is because tcl8.5 does not correctly parse the
hexadecimal abbreviation, while on Ubuntu we are using tcl8.6.
Profiling EVALSHA we see that luaReplyToRedisReply takes 8.73% out of the
56.90% of luaCallFunction CPU cycles.
Using addReplyStatusLength instead of directly composing the protocol avoids
sdscatprintf and addReplySds (which imply multiple sdslen calls).
The new approach drops luaReplyToRedisReply CPU cycles to 3.77%.
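The before/after shape, per the description (a sketch, not the exact diff):
```c
/* before: compose the status reply by hand (sdscatprintf + addReplySds
 * imply several sdslen calls) */
addReplySds(c, sdscatprintf(sdsempty(), "+%s\r\n", (char*)str));

/* after: let the reply API add the "+...\r\n" framing in one call */
addReplyStatusLength(c, str, len);
```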
This test sets the master ping interval to 1 hour, in order to avoid
pings in the replication stream incrementing the replication offset.
However, it didn't increase repl-timeout, so on slow machines
where the test took more than 60 seconds, the replicas would drop
and reconnect.
```
*** [err]: PSYNC2: Partial resync after restart using RDB aux fields in tests/integration/psync2.tcl
Replica didn't partial sync
```
The test would detect 4 additional partial syncs where it expects
only one.
1. "Fixed" the current code so that seen-time/idle actually refers to interaction
attempts (as documented; breaking change)
2. Added active-time/inactive to refer to successful interaction (what
seen-time/idle used to be)
At first, I tried to avoid changing the behavior of seen-time/idle, but then realized
that, in this case, the odds are that people read the docs and implemented their
code based on them (and the docs didn't match the behavior). Working under that
assumption, relying on the docs would have worked well enough for the most part,
except that issue #9996 was found. So instead of fixing the docs, as I would
usually do, I fixed the code to match the docs in this particular case.
Note that, in case the consumer has never read any entries, the values
for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
be -1, meaning here that the consumer was never active.
Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
by XPENDING, XINFO, and other "read-only" stream CG commands (this has always
been the case, even before this PR)
Other changes:
* Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
create the consumer regardless of whether it was able to perform some reading/claiming
* RDB format change to save the `active_time`, setting it to the same value as `seen_time` when loading old RDB files.
Add a new module event, `RedisModule_Event_Key`. This event is fired
when a key is removed from the keyspace.
The event includes an open key that can be used for reading the key before
it is removed. Modules can also extract the key name, and use RM_OpenKey
or RM_Call to access the key from within the event, but shouldn't modify
anything from within it.
The following sub events are available:
- `REDISMODULE_SUBEVENT_KEY_DELETED`
- `REDISMODULE_SUBEVENT_KEY_EXPIRED`
- `REDISMODULE_SUBEVENT_KEY_EVICTED`
- `REDISMODULE_SUBEVENT_KEY_OVERWRITE`
The data pointer can be cast to a RedisModuleKeyInfo structure
with the following fields:
```
RedisModuleKey *key; // Opened Key
```
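A hedged usage sketch (the callback shape follows the generic server-event API; RedisModule_ValueLength and the event constant spelling are assumptions):
```c
static void KeyEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                             uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e);
    RedisModuleKeyInfo *ki = data;
    if (sub == REDISMODULE_SUBEVENT_KEY_EXPIRED) {
        /* ki->key is already open for reading; don't modify anything here */
        RedisModule_Log(ctx, "notice", "expired key had length %zu",
                        RedisModule_ValueLength(ki->key));
    }
}

/* in RedisModule_OnLoad: */
RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key, KeyEventCallback);
```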
### internals
* We also add two dict functions (see the usage sketch after this list):
`dictTwoPhaseUnlinkFind` finds an element in the table and also returns the plink of the entry.
The entry is returned if the element is found; otherwise NULL is returned. The caller should
later call `dictTwoPhaseUnlinkFree` with it in order to unlink and release it.
These two functions must be used as a pair: `dictTwoPhaseUnlinkFind` pauses rehashing and
`dictTwoPhaseUnlinkFree` resumes it.
* We rename `dbOverwrite` to `dbReplaceValue`, which just replaces the value of the key and
doesn't fire any events. The "overwrite" part (which emits events) happens only when called
from `setKey`; the other callers of dbOverwrite were ones that just update the value in place
(INCR*, SPOP, and dbUnshareStringValue). This should have no real impact, since
`moduleNotifyKeyUnlink` and `signalDeletedKeyAsReady` wouldn't have mattered in these cases
anyway (i.e. module keys and stream keys had no direct calls to dbOverwrite)
* Since we allow calling RM_OpenKey from within these callbacks, we temporarily disable lazy expiry.
* We also temporarily disable lazy expiry when we are in unlink/unlink2 callback and keyspace
notification callback.
* Move special definitions to the top of redismodule.h
This is needed to resolve compilation errors with RedisModuleKeyInfoV1
that carries a RedisModuleKey member.
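A hedged usage sketch of the two-phase pair (the plink/table out-params follow the description; the notifyRemoval callback is illustrative):
```c
dictEntry **plink;
int table;
dictEntry *de = dictTwoPhaseUnlinkFind(d, key, &plink, &table);
if (de) {
    /* rehash is paused here, so 'de' stays valid while we fire the
     * removal event and read the value */
    robj *val = dictGetVal(de);
    notifyRemoval(key, val);                      /* illustrative callback */
    dictTwoPhaseUnlinkFree(d, de, plink, table);  /* unlink + release, resume rehash */
}
```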
Co-authored-by: Oran Agra <oran@redislabs.com>
As discussed in #10981, we see a performance degradation
between Redis v6.2 and v7.0 on the EVAL command.
After profiling the current unstable branch we can see that we call the
expensive function evalCalcFunctionName twice.
The current "fix" is to basically avoid calling evalCalcFunctionName and
even dictFind(lua_scripts) twice for the same command.
Instead we cache the current script's dictEntry (for both Eval and Functions)
in the current client so we don't have to repeat these calls.
The exception is when doing an EVAL on a new script that's not yet
in the script cache; in that case we will call evalCalcFunctionName (and even
evalExtractShebangFlags) twice.
Co-authored-by: Oran Agra <oran@redislabs.com>
Before this PR, we used sdsMakeRoomFor() to expand the size of the hyperloglog
string (sparse representation). Because sdsMakeRoomFor() uses a greedy
strategy (allocating about twice what we need), the memory allocated for the
hyperloglog could exceed `server.hll_sparse_max_bytes`, and any memory
beyond that limit is wasted.
In this pull request we tone down the greediness of the allocation growth, and also
make sure it never requests more than `server.hll_sparse_max_bytes`.
This could in theory mean the hyperloglog string is too small for the
increment we need; that should be fine, since in that case we promote the
hyperloglog to the dense representation (an assertion was added to make sure).
This PR also adds some tests and fixes some typos and indentation issues.
redis-benchmark: when trying to get the CONFIG before a benchmark,
avoid printing a warning on most errors (e.g. a NOPERM error) and
avoid aborting the benchmark on NOPERM.
Keep the warning only when we abort the benchmark on a NOAUTH error.
On the replica, the key expired before the master's `INCR` arrived, so INCR
created a new key on the replica and the test failed.
```
*** [err]: Replication of an expired key does not delete the expired key in tests/integration/replication-4.tcl
Expected '0' to be equal to '1' (context: type eval line 13 cmd {assert_equal 0 [$slave exists k]} proc ::test)
```
This test is very likely to produce a false positive if `wait_for_ofs_sync`
takes longer than the expiration time, so give it a few more chances.
The test was introduced in #9572.
Add a `WITHSCORE` option to both ZRANK and ZREVRANK, like this:
```
z[rev]rank key member [withscore]
```
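For example (a sketch; the reply is an array of rank and score, RESP2 shown):
```
> ZADD myzset 1 "a" 2 "b" 3 "c"
(integer) 3
> ZRANK myzset "c" WITHSCORE
1) (integer) 2
2) "3"
> ZREVRANK myzset "c" WITHSCORE
1) (integer) 0
2) "3"
```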
In scenarios where we have large datasets and the elements are not
contained within the range, we make spurious calls to sdsdup and sdsfree.
I.e. instead of pre-creating an sds before we know whether we're going to use it,
change the role of geoAppendIfWithinShape to just do geoWithinShape,
and let the caller create the string only when needed.
Co-authored-by: Oran Agra <oran@redislabs.com>
Add an error message when the PID file fails to be written. This has historically been considered a best-effort failure, but we didn't even report the failure.
It looks like it will generate a warning in FreeBSD:
```
./server.h:105:9: warning: 'member2struct' macro redefined [-Wmacro-redefined]
#define member2struct(struct_name, member_name, member_addr) \
^
/usr/include/sys/param.h:365:9: note: previous definition is here
#define member2struct(s, m, x) \
^
```
Add a `redis_` prefix to it to avoid the warning. The macro was introduced in #11511.
The cluster-announce-port / cluster-announce-bus-port / cluster-announce-tls-port configs should take effect at runtime.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
When we're shrinking the hash table, we don't need to hash the keys.
Since the table sizes are powers of two, we can simply mask the bucket
index in the larger table to get the bucket index in the smaller table. We
avoid loading the keys into memory and save CPU time.
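A sketch of the trick: both sizes are powers of two, so the smaller table's mask is a suffix of the larger one's and no key has to be re-hashed:
```c
/* old_idx == hash & (old_size - 1); new_size divides old_size, so
 * hash & (new_size - 1) == old_idx & (new_size - 1). */
unsigned long new_idx = old_idx & (new_size - 1);
```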
### Summary of API additions
* `RedisModule_AddPostNotificationJob` - a new API to call inside a key-space
notification (and in more locations in the future) that allows adding a post job, as described below.
* A new module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`,
allows disabling Redis' protection against nested key-space notifications.
* `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options, so a module
can check whether a given option is supported by the currently running Redis instance.
### Background
This PR is a proposal for handling write operations inside module key-space notifications.
After a lot of discussion we came to the conclusion that a module should not perform any write
operations inside a key-space notification.
Some examples of issues that such write operations can cause are described in the following links:
* Bad replication order - https://github.com/redis/redis/pull/10969
* Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
* Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054
There are probably more issues that are yet to be discovered. The underlying problem with writing
inside a key-space notification is that the notification runs synchronously, which means the
notification code is executed in the middle of Redis logic (command logic, eviction, expiry).
Redis **assumes** the data does not change while running that logic, and such changes
can crash Redis or cause unexpected behaviour.
The solution is to state that modules **should not** perform any write command inside a key-space
notification (we can choose whether or not we want to enforce it). To still cover the use case where a
module wants to perform a write operation as a reaction to key-space notifications, we introduce
a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be
called by Redis when the following conditions hold:
* It is safe to perform any write operation.
* The job will be called atomically alongside the operation that triggered it (in our case, a key-space
notification).
A module can use this new API to safely perform any write operation and still achieve atomicity
between the notification and the write.
Although the API is currently used for key-space notifications, it is written in a generic
way so that in the future we will be able to use it in other places (server events, for example).
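A hedged usage sketch, modeled on the module-API tests (the counter key name is made up):
```c
/* Runs after the current execution unit, when writes are safe. */
static void IncrExpiredCounter(RedisModuleCtx *ctx, void *pd) {
    REDISMODULE_NOT_USED(pd);
    RedisModuleCallReply *rep =
        RedisModule_Call(ctx, "INCR", "!c", "events:expired");
    if (rep) RedisModule_FreeCallReply(rep);
}

/* Key-space notification: writing here directly would run in the middle
 * of Redis logic, so defer the write instead. */
static int OnKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                      RedisModuleString *key) {
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    REDISMODULE_NOT_USED(key);
    RedisModule_AddPostNotificationJob(ctx, IncrExpiredCounter, NULL, NULL);
    return REDISMODULE_OK;
}
```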
### Technical Details
Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list
of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution
unit ends (whether it's a command, eviction, or active expiry). In order to trigger those callbacks
atomically with the notification effect, we call them in `postExecutionUnitOperations`
(which was `propagatePendingCommands` before this PR). The new function fires the post jobs
and then calls `propagatePendingCommands`.
If a callback performs more operations that trigger more key-space notifications, those
notifications might register more callbacks. Those callbacks will be added to the end
of the `modulePostExecUnitJobs` list and will be invoked atomically after the current callback ends.
This raises the concern of entering an infinite loop. We consider an infinite loop a logical bug
that needs to be fixed in the module; an attempt to protect against infinite loops by halting
execution could violate the feature's correctness, so **Redis will make no attempt
to protect the module from infinite loops**.
In addition, key-space notifications are currently not nested. Some modules might want to allow
nesting of key-space notifications. To allow that and keep backward compatibility, we introduce a
new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`.
Setting this option disables the Redis key-space notification nesting protection and
passes that responsibility on to the module.
### Redis infrastructure
This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept,
which is called after each atomic unit of execution.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Optimize geohashGetDistanceIfInRectangle when there are many misses.
It calls geohashGetDistance 3 times; the first two calls produce intermediate results.
This PR focuses on optimizing those two intermediate calls:
1. Reduce the expensive computation in the intermediate geohashGetDistance calls that share the same longitude.
2. Avoid the expensive lon_distance calculation if the lat_distance check already fails.
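A sketch of the resulting check order (variable names are assumptions; geohashGetLatDistance is the cheap same-longitude distance):
```c
/* Cheap latitude-only check first; bail out before any longitude math. */
double lat_distance = geohashGetLatDistance(y1, y2);
if (lat_distance > height_m / 2) return 0;
double lon_distance = geohashGetDistance(x1, y1, x2, y1);
if (lon_distance > width_m / 2) return 0;
*distance = geohashGetDistance(x1, y1, x2, y2);
return 1;
```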
Co-authored-by: Oran Agra <oran@redislabs.com>
In #11511 we introduced member_offset, which triggers a sanitizer warning:
```
multi.c:390:26: runtime error: member access within null pointer of type 'watchedKey' (aka 'struct watchedKey')
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior multi.c:390:26
```
We can use offsetof() from stddef.h, which is part of the standard library,
just to avoid this UB :) The sanitizer should not complain after we change
this.
1. Use offsetof() instead of member_offset, so we can delete that macro now.
2. Change the (uint8_t*) cast to (char*).
This does not matter much, but according to the standard we are only allowed
to cast a pointer to its own type, char*, or void*. Let's try to follow
the rules.
This change was suggested by tezc and the comments is also from him.
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Fix a compile warning for the SHA1Transform() method under Alpine with GCC 12.
Warning:
```
In function 'SHA1Update',
inlined from 'SHA1Final' at sha1.c:187:9:
sha1.c:144:13: error: 'SHA1Transform' reading 64 bytes from a region of size 0 [-Werror=stringop-overread]
144 | SHA1Transform(context->state, &data[i]);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sha1.c:144:13: note: referencing argument 2 of type 'const unsigned char[64]'
sha1.c: In function 'SHA1Final':
sha1.c:56:6: note: in a call to function 'SHA1Transform'
56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
| ^~~~~~~~~~~~~
```
This warning is a false positive, because the loop condition already guarantees that there are at least 64 bytes after position `i`:
```c
for ( ; i + 63 < len; i += 64) {
SHA1Transform(context->state, &data[i]);
}
```
Reference: e1d7d3e40a
The SENTINEL DEBUG command can be called with no arguments, in which case it
displays all configurable parameters and their values.
Update the command arguments in the docs (JSON file) to indicate that
the arguments are optional.
In the unwatchAllKeys() function, we traverse all the keys watched by the client,
and for each key we remove the client from the list of clients watching that key.
This is implemented with listSearchKey, which traverses the list of clients.
If we could reach the node in the list of clients from the watchedKey in O(1) time,
we would not need to call listSearchKey at all.
Changes in this PR: embed the node of the per-key client list (from the db) inside the
watchedKey structure. This way, for every key watched by the client, we can get the
watchedKey structure and then reach the node in the db->watched_keys client list
directly, to remove it from that list.
From the perspective of the list of clients watching the key, the list node lives inside a
watchedKey structure, so we can get to the watchedKey struct from the list node by
struct-member offset math (see the sketch below). And because node->value is no longer
used, we can point node->value to the list itself, so we don't need to fetch the list of
clients from the dict.
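A sketch of the resulting layout and the O(1) recovery (field names are assumptions):
```c
#include <stddef.h>

typedef struct watchedKey {
    listNode node;  /* embedded node of db->watched_keys[key]'s client list */
    robj *key;
    redisDb *db;
} watchedKey;

/* Recover the enclosing watchedKey from its list node, with no list search. */
static watchedKey *watchedKeyFromNode(listNode *ln) {
    return (watchedKey *)((char *)ln - offsetof(watchedKey, node));
}
```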
Technically, these commands were deprecated as of 2.6.12, with the
introduction of the respective arguments to SET.
In reality, the deprecation note will only be added in 7.2.0.
This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip
the `OBJECT REFCOUNT` check via the `needs:debug` flag. It uses them for all
`OBJECT REFCOUNT` calls and also updates tests/README to be more specific.
The reasoning is that some of these tests are testing something important and only
incidentally check the refcount; it would be a shame to skip the whole test just
because refcount functionality is missing or blocked. This is much like the fact that
some Redis variants may not support DEBUG, and still we want to run the majority of
the tests for coverage and just skip the digest match.
Currently, according to the comments, if the truncated file is not the last file,
it is considered a fatal error: the return code is updated to AOF_FAILED,
and the server exits without any error message to the client.
Similar to other error situations, this PR adds an explicit error message
for this case, so the user knows clearly what happened.
This payload produces a set with duplicate elements (listpack encoding):
```
restore _key 0 "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B\xEC"
smembers key
1) "6"
2) "_5"
3) "4"
4) "_1"
5) "_3" ---> dup
6) "0"
7) "_9"
8) "_3" ---> dup
9) "8"
10) "2"
```
Sets like this cause SDIFF to hang: SDIFF generated a broken
protocol and left the client hanging (it expected ten elements, but only
got nine due to the duplication).
If we set `sanitize-dump-payload` to yes, we are able to find
the duplicate elements and report "ERR Bad data format".
Discovered and discussed in #11290.
This PR also improves the printouts when the corrupt-dump-fuzzer hangs;
it will print the commands and the payload, for example:
```
Testing integration/corrupt-dump-fuzzer
[TIMEOUT]: clients state report follows.
sock6 => (SPAWNED SERVER) pid:28884
Killing still running Redis server 28884
commands caused test to hang:
SDIFF __key
payload that caused test to hang: "\x14\balabala"
```
Co-authored-by: Oran Agra <oran@redislabs.com>
Apparently we used to set `loglevel debug` for TLS in spawn_instance,
i.e. cluster and sentinel tests used to run with debug logging, but only when TLS mode was enabled.
This was probably a leftover from when the TLS mode tests were created,
and it caused a new test created for #11214 to fail in TLS mode.
At the same time, in order to better distinguish the tests, change the
name of `test-centos7-tls` to `test-centos7-tls-module`, change the name
of `test-centos7-tls-no-tls` to `test-centos7-tls-module-no-tls`.
Note that in `test-centos7-tls-module`, we did not pass `--tls-module`
in sentinel test because it is not supported, see 4faddf1, added in #9320.
So only `test-ubuntu-tls` fails in daily CI.
Co-authored-by: Oran Agra <oran@redislabs.com>
The following example will create an empty set (listpack encoding):
```
> RESTORE key 0
"\x14\x25\x25\x00\x00\x00\x00\x00\x02\x01\x82\x5F\x37\x03\x06\x01\x82\x5F\x35\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x31\x03\x82\x5F\x39\x03\x04\xA9\x08\x01\xFF\x0B\x00\xA3\x26\x49\xB4\x86\xB0\x0F\x41"
OK
> SCARD key
(integer) 0
> SRANDMEMBER key
Error: Server closed the connection
```
In the spirit of #9297, skip empty sets when loading RDB_TYPE_SET_LISTPACK.
The issue was introduced in #11290.
Introduce Shard IDs to logically group nodes in cluster mode.
1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
2. Added a new PING extension to propagate "shard_id"
3. Handled upgrade from pre-7.2 releases automatically
4. Refactored PING extension assembling/parsing logic
Behavior of Shard IDs:
Replicas will always follow the shards of their reported primaries. If a primary updates its shard ID, the replica will follow. (This need not hold for cluster v2.) This is not an expected use case.
Improve memory efficiency of list keys
## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to decide when to convert. We can think of the listpack as a single quicklist
node, but without the ~80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each now takes 128 bytes
instead of the 208 it used to take.
## Conversion rules
* Convert listpack to quicklist
When the listpack length or size reaches the `list-max-listpack-size` limit,
it will be converted to a quicklist.
* Convert quicklist to listpack
When a quicklist has only one node, and its length or size is reduced to half
of the `list-max-listpack-size` limit, it will be converted to a listpack.
This is done to avoid frequent conversions when we add or remove elements right at the boundary size or length; a sketch of both rules follows.
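A rough sketch of the two rules (helper names are illustrative, not the real functions):
```c
/* Grow: a listpack that reaches the limit becomes a quicklist. */
if (entries > server.list_max_listpack_size)
    convertListpackToQuicklist(o);   /* illustrative helper */

/* Shrink: a single-node quicklist converts back only once it drops to
 * half the limit, so pushes/pops at the boundary don't flap encodings. */
if (quicklistNodes(ql) == 1 && entries <= server.list_max_listpack_size / 2)
    convertQuicklistToListpack(o);   /* illustrative helper */
```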
## Interface changes
1. Add a list entry param to listTypeSetIteratorDirection.
When the list encoding is listpack, `listTypeIterator->lpi` points to the entry after the
current one, so when changing the direction we need to use the current node (listTypeEntry->p)
to update `listTypeIterator->lpi` to the next node in the reverse direction.
## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement
From the benchmark results:
1. When the list is quicklist-encoded, LRANGE improves performance by <5%.
2. When the list is listpack-encoded, LRANGE improves performance by ~13%;
the main enhancement comes from `addListListpackRangeReply()`.
## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
show memory usage down by 35.49%, from 214MB to 138MB.
## Note
1. Add a conversion callback to support doing some work before conversion.
Since the quicklist iterator decompresses the current node when it is released, we can
no longer decompress the quicklist after we convert the list.
Both functions and eval are marked as "no-monitor", since we want to explicitly feed the script command to the monitor before the commands generated by the script. Note that we want this behavior generally, so that commands can redact arguments before being added to the monitor.
The test introduced in #11482 fails on ARM (extra CI):
```
*** [err]: RESP2: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
Expected '-nan' to be equal to 'nan' (context: type eval line 3 cmd
{assert_equal "-nan" [r rw.double 0 0]} proc ::test)
*** [err]: RESP3: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
Expected ',-nan' to be equal to ',nan' (context: type eval line 8 cmd
{assert_equal ",-nan" [r rw.double 0 0]} proc ::test)
```
It looks like there is no negative nan on ARM.
In moduleFireServerEvent we change the real client's DB to 0 in freeClient when the event is REDISMODULE_EVENT_CLIENT_CHANGE.
This results in a crash if the client is blocked on a key in a DB other than 0.
The DB change is not necessary, even for a module client, as we set its DB to 0 in both createClient and moduleReleaseTempClient.
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
The test introduced in #11482 fails on mac:
```
*** [err]: RESP3: RM_ReplyWithDouble: inf in tests/unit/moduleapi/reply.tcl
Expected 'Inf' to be equal to 'inf'
(context: type eval line 6 cmd {assert_equal Inf [r rw.double inf]} proc ::test)
```
It looks like the mac platform returns inf instead of Inf in this case, so this PR
uses readraw to verify the protocol.
Adding a test to cover the already existing behavior of NAN replies,
to accompany the PR that adds them to the RESP3 spec:
https://github.com/redis/redis-specifications/pull/10
This PR also covers Inf replies that are already in the spec, as well as RESP2 coverage.
Fix a few issues with the recent #11463:
* use exitFromChild instead of exit
* the test should ignore defunct processes, since that's what we expect to
happen to these child processes when the parent dies
* fix a typo
Co-authored-by: Binbin <binloveplay1314@qq.com>
Small sets whose elements are not all integers are listpack-encoded, by default
up to 128 elements with a max of 64 bytes per element, controlled by the new configs
`set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small
sets compared to using a hashtable.
Sets with only integers, even very small sets, are still intset-encoded (up to the 1G
limit, etc.). Larger sets are hashtable-encoded.
This PR increments the RDB version and has an effect on OBJECT ENCODING.
Possible conversions when elements are added:
intset -> listpack
listpack -> hashtable
intset -> hashtable
Note: No conversion happens when elements are deleted. If all elements are
deleted and then added again, the set is deleted and recreated, thus implicitly
converted to a smaller encoding.
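For example, with the default limits, the conversions are observable via OBJECT ENCODING (a sketch):
```
> SADD s 1 2 3
(integer) 3
> OBJECT ENCODING s
"intset"
> SADD s "x"
(integer) 1
> OBJECT ENCODING s
"listpack"
```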
Clients should not use this command.
Instead, clients should simply close the connection when it's no longer needed.
Terminating a connection on the client side is preferable, as it eliminates `TIME_WAIT`
lingering sockets on the server side.