- The argument `u` for `ar` is ignored (and generates warnings since `D` became the default).
All it does is avoid updating unchanged objects, which shouldn't have any impact on our build.
- Enable `LUA_USE_MKSTEMP` to force the use of `mkstemp()` instead of `tmpnam()` (which is dead
code in Redis anyway); see the sketch after this list.
- Remove unused variable `c` in `f_parser()`
- Remove misleadingly indented space in `luaL_loadfile()` and `addfield()`
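For reference, a sketch of what `LUA_USE_MKSTEMP` switches in Lua's `luaconf.h` (paraphrased from Lua 5.1; the exact macro body may differ):

```c
/* Paraphrased from Lua 5.1's luaconf.h: with LUA_USE_MKSTEMP defined,
 * lua_tmpnam() obtains a unique name via mkstemp() instead of the
 * race-prone tmpnam(). */
#if defined(LUA_USE_MKSTEMP)
#include <unistd.h>
#define LUA_TMPNAMBUFSIZE 32
#define lua_tmpnam(b,e) { \
    strcpy(b, "/tmp/lua_XXXXXX"); \
    e = mkstemp(b); \
    if (e != -1) close(e); \
    e = (e == -1); }
#else
#define LUA_TMPNAMBUFSIZE L_tmpnam
#define lua_tmpnam(b,e) { e = (tmpnam(b) == NULL); }
#endif
```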
Co-authored-by: Oran Agra <oran@redislabs.com>
There's a rare case which leads to stagnation in the defragger, causing
it to keep scanning the keyspace while doing nothing (not moving any
allocation). This happens when all the allocator slabs of a certain bin
have the same % utilization, but the slab from which new allocations are
made has a lower utilization.
This commit fixes it by removing the current slab from the overall
average utilization of the bin. It also eliminates any precision loss in
the utilization calculation, moves the decision about defragging into
jemalloc, and adds a test that consistently reproduces this issue.
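A hedged sketch of the decision; the bin/slab accounting names here are hypothetical, not jemalloc's actual structs:

```c
#include <stddef.h>

/* Illustrative only. Decide whether an allocation's slab should be
 * defragged by comparing its utilization against the bin average, with
 * the slab currently serving new allocations removed from that average
 * (otherwise, when all slabs are equally utilized, nothing ever moves). */
static int shouldDefrag(size_t slab_used, size_t slab_slots,
                        size_t cur_used,  size_t cur_slots,
                        size_t bin_used,  size_t bin_slots,
                        int is_current_slab) {
    if (is_current_slab) return 0;     /* new allocations land here */
    bin_used  -= cur_used;             /* drop the current slab from */
    bin_slots -= cur_slots;            /* the bin-wide average       */
    /* Cross-multiplying keeps everything in integers, avoiding the
     * precision loss of comparing two floating-point ratios:
     *   slab_used / slab_slots  <  bin_used / bin_slots             */
    return slab_used * bin_slots < bin_used * slab_slots;
}
```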
obuf based eviction tests now run until eviction occurs, instead of assuming a certain
amount of writes will fill the obuf enough for eviction to occur.
This handles the kernel buffering written data and emptying the obuf even though
no one actually reads from it.
The tests have a new timeout of 20sec: if the test doesn't pass after 20 sec it'll fail.
Hopefully this is enough for our slow CI targets.
This also eliminates the need to skip some tests in TLS.
Tracking invalidation messages were sometimes sent in inconsistent order,
before the command's reply rather than after.
In addition to that, they were sometimes embedded inside other commands'
responses, like MULTI-EXEC and MGET.
Implement createPipe() to combine creating a pipe and setting its flags, and reduce
system calls by preferring pipe2() over pipe().
Without createPipe(), we have to call pipe() to create a pipe and then call anet.c
helpers (like anetCloexec() and anetNonBlock()) to set each flag separately,
which costs extra system calls; pipe2() lets us combine these steps and make
pipe creation more self-contained in createPipe().
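A minimal sketch of the helper, assuming a Linux-style `pipe2()` feature probe (the `HAVE_PIPE2` guard and fallback details here are illustrative):

```c
#include <unistd.h>
#include <fcntl.h>

/* Sketch: create a pipe with O_CLOEXEC and O_NONBLOCK set on both ends.
 * With pipe2() this is a single system call; the fallback needs pipe()
 * plus two fcntl() calls per fd. */
static int createPipe(int pipefd[2]) {
#ifdef HAVE_PIPE2
    return pipe2(pipefd, O_CLOEXEC | O_NONBLOCK);
#else
    if (pipe(pipefd) == -1) return -1;
    for (int i = 0; i < 2; i++) {
        if (fcntl(pipefd[i], F_SETFD, FD_CLOEXEC) == -1 ||
            fcntl(pipefd[i], F_SETFL, O_NONBLOCK) == -1) {
            close(pipefd[0]);
            close(pipefd[1]);
            return -1;
        }
    }
    return 0;
#endif
}
```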
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Oran Agra <oran@redislabs.com>
When queuing a command inside MULTI we duplicated the argv (meaning an alloc
and a memcpy). This isn't needed since we can use the previously allocated
argv and just reset the client object's argv to NULL. This saves some
memory and is a minor optimization in heavy MULTI/EXEC traffic, especially
if there are lots of arguments.
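A hedged sketch of the idea, with simplified types (names only approximate multi.c):

```c
typedef struct { int argc; void **argv; } queuedCmd, clientT;

/* Hand the client's existing argv to the queued command instead of
 * allocating and memcpy-ing a fresh vector. */
static void queueWithoutCopy(queuedCmd *mc, clientT *c) {
    mc->argc = c->argc;
    mc->argv = c->argv;    /* take ownership, no duplication */
    /* Detach the vector from the client, so resetting the client won't
     * free (or double-free) memory the MULTI queue now owns. */
    c->argv = NULL;
    c->argc = 0;
}
```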
The new value indicates how long Redis waits to
acquire the GIL after sleep. This can help identify
problems where a module performs some background
operation for a long time (with the GIL held) and
blocks the Redis main thread.
Scenario:
1. a client blocks on the command `XREAD BLOCK 0 STREAMS mystream $`
2. a module calls `XADD mystream * field value` via Lua from a timer callback
3. the client receives the response after some latency of up to 100ms
Reason:
When `XADD` signals the key `mystream` as ready, `beforeSleep` in the next eventloop calls
`handleClientsBlockedOnKeys` to unblock the client and add pending data to write, but doesn't
actually install a write handler, so Redis then blocks in `aeApiPoll` for up to 100ms (with the `hz`
config at its default of 10); the pending data is only sent in yet another eventloop by
`handleClientsWithPendingWritesUsingThreads`.
Calling `handleClientsBlockedOnKeys` before `handleClientsWithPendingWritesUsingThreads`
in `beforeSleep` solves the problem.
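Sketched, the fix is just an ordering change in `beforeSleep` (simplified excerpt; everything else is elided):

```c
void handleClientsBlockedOnKeys(void);
int handleClientsWithPendingWritesUsingThreads(void);

void beforeSleep(void) {
    /* Unblock clients waiting on ready keys FIRST, so the replies they
     * produce are still pending when writes are flushed below... */
    handleClientsBlockedOnKeys();
    /* ...and therefore go out in this same event loop iteration, instead
     * of sitting until aeApiPoll() times out (up to 1000/hz ms later). */
    handleClientsWithPendingWritesUsingThreads();
}
```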
* Reduce the delay between publishes so there's less time to drain the obufs.
* More subscribed clients, to buffer more data per publish.
* Make sure the main connection isn't evicted (it has a large qbuf).
Changes in #9528 led to a memory leak if the command implementation
used rewriteClientCommandArgument inside MULTI-EXEC.
Adding an explicit test for that case, since the test that uncovered it
didn't specifically target this scenario.
When Lua calls our C code, by default, the Lua stack has room for 10
elements. In most cases this is more than enough, but sometimes it's not,
and the caller must verify the Lua stack size before pushing elements.
In 3 places in the code there was no verification of the Lua stack size.
On specific inputs this missing verification could have led to invalid
memory writes:
1. In 'luaReplyToRedisReply', one might return a nested reply that will
explode the Lua stack.
2. In 'redisProtocolToLuaType', the Redis reply might be deep enough
to explode the Lua stack (notice that currently there is no such
command in Redis that returns such a nested reply, but modules might
do it).
3. In 'ldbRedis', one might give a command with enough arguments to
explode the Lua stack (all the arguments will be pushed to the Lua
stack).
This commit solves all 3 issues by calling 'lua_checkstack' to
verify that there is enough room in the Lua stack to push elements. In
case 'lua_checkstack' returns an error (there is not enough room in the
Lua stack and it's not possible to increase the stack), we do the
following:
1. In 'luaReplyToRedisReply', we return an error to the user.
2. In 'redisProtocolToLuaType', we exit with a panic (we assume this
scenario is rare because it can only happen with a module).
3. In 'ldbRedis', we return an error.
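The guard itself is the standard Lua C API call; a sketch of its use in a push path (the helper is illustrative):

```c
#include <lua.h>

/* Before pushing onto the Lua stack, make sure there is room.
 * lua_checkstack() grows the stack if possible and returns 0 when it
 * cannot; pushing without this check writes out of bounds once the
 * default headroom is exhausted (e.g. on deeply nested replies). */
static int pushSafely(lua_State *lua, const char *s) {
    if (!lua_checkstack(lua, 1)) {
        /* Can't grow: back out instead of corrupting memory. The actual
         * reaction differs per call site (error / panic / ldb error). */
        return 0;
    }
    lua_pushstring(lua, s);
    return 1;
}
```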
A recently merged PR introduced a leak when loading AOF files.
This was because argv_len wasn't set, so rewriteClientCommandArgument
would shrink the argv array and update argc to a smaller value.
The protocol parsing in 'ldbReplParseCommand' (Lua debugging)
assumed protocol correctness. This means that if the following
is given:
*1
$100
test
the parser will try to read an additional 94 unallocated bytes past
the client buffer.
This commit fixes the issue by validating that there are actually enough
bytes to read. It also limits the amount of data that can be sent by
the debugger client to 1M, so the client cannot exhaust
the memory.
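A hedged sketch of the validation (variable and function names are illustrative, not the actual ldbReplParseCommand code):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative: before consuming a bulk of 'bulklen' bytes at 'p',
 * verify the client actually sent that many bytes, instead of trusting
 * the "$<len>" header as the old parser did. 'end' points one past the
 * last received byte. */
static int readBulk(char *dst, const char *p, long bulklen,
                    const char *end) {
    if (bulklen < 0 || bulklen + 2 > end - p)
        return 0;              /* declared length exceeds received data */
    memcpy(dst, p, (size_t)bulklen);
    return 1;                  /* caller advances p by bulklen + 2 (CRLF) */
}
```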
Co-authored-by: meir@redislabs.com <meir@redislabs.com>
- fix possible heap corruption in ziplist and listpack caused by trying to
allocate more than the maximum size of 4GB (see the sketch after this list).
- prevent ziplist (hash and zset) from reaching a size above 1GB; it will be
converted to HT encoding instead, since that's not a useful size for a ziplist.
- prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous
listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- List type (ziplist in quicklist) was truncating strings that were over 4GB;
now it'll respond with an error.
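A hedged sketch of the kind of guard this adds; the 1GB constant comes from the text above, while the helper name and standalone form are illustrative:

```c
#include <stddef.h>

#define SIZE_SAFETY_LIMIT (1024 * 1024 * 1024)  /* 1GB cap from the fix */

/* Illustrative: reject (or trigger conversion / a new node) before an
 * append could push the encoded blob past the cap, instead of letting
 * the 32-bit internal size fields wrap around and corrupt the heap. */
static int safeToAdd(size_t curlen, size_t addlen) {
    return curlen + addlen <= SIZE_SAFETY_LIMIT;
}
```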
Co-authored-by: sundb <sundbcn@gmail.com>
This change sets a low limit for multibulk and bulk length in the
protocol for unauthenticated connections, so that they can't easily
cause Redis to allocate massive amounts of memory by sending just a few
characters on the network.
The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
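Sketched, the idea is a pre-auth clamp in the protocol reader; the macro and function names are illustrative, while the 10 / 16KB values come from the text:

```c
/* Illustrative pre-authentication limits (values per the change above). */
#define PROTO_AUTH_MAX_MULTIBULK 10
#define PROTO_AUTH_MAX_BULK      (16 * 1024)

/* In the multibulk parser: an unauthenticated connection may not declare
 * huge argument counts or sizes, so a handful of bytes can no longer
 * make the server pre-allocate gigabytes. */
static int checkProtoLimits(int authenticated, long multibulklen, long bulklen) {
    if (authenticated) return 1;   /* full limits apply after AUTH */
    return multibulklen <= PROTO_AUTH_MAX_MULTIBULK &&
           bulklen <= PROTO_AUTH_MAX_BULK;
}
```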
The redis-cli command line tool and redis-sentinel service may be vulnerable
to integer overflow when parsing specially crafted large multi-bulk network
replies. This is a result of a vulnerability in the underlying hiredis
library which does not perform an overflow check before calling the calloc()
heap allocation function.
This issue only impacts systems with heap allocators that do not perform their
own overflow checks. Most modern systems do and are therefore not likely to
be affected. Furthermore, by default redis-sentinel uses the jemalloc allocator
which is also not vulnerable.
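The missing check boils down to an overflow guard before multiplying; a generic sketch of what a calloc() wrapper needs on such allocators:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch: if nmemb * size would wrap around size_t, a crafted multi-bulk
 * count could cause a tiny allocation followed by huge writes; reject
 * the product before asking the allocator. */
static void *checkedCalloc(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;               /* product would overflow size_t */
    return calloc(nmemb, size);
}
```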
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
The vulnerability involves changing the set-max-intset-entries
configuration parameter from its default to a very large value and constructing
specially crafted commands to manipulate sets.
Since we measure the COW size in this test by changing some keys and reading
the reported COW size, we need to ensure that the "dismiss mechanism" (#8974)
will not free memory and reduce the COW size.
For that, this commit changes the size of the keys to 512B (less than a page),
and because some keys may fall into the same page, we modify ten keys
on each iteration and check for at least a 50% change in the COW size.
Note that this breaks compatibility, because in the past doing:
DECRBY x -9223372036854775808
would succeed (and create an invalid result), whereas now it returns an error.
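Sketched, the new rejection amounts to the following (DECRBY is implemented as an increment by the negated value; the helper is illustrative):

```c
#include <limits.h>

/* Negating LLONG_MIN is undefined behavior (its positive value is
 * unrepresentable in a long long), so it must be rejected up front
 * instead of silently producing a wrapped result. */
static int safeNegate(long long decr, long long *incr) {
    if (decr == LLONG_MIN) return 0;   /* -LLONG_MIN overflows: error out */
    *incr = -decr;
    return 1;
}
```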
Remove the hard coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX.
When a client sends an m-bulk that's larger than 1024, we initially only allocate
the argv array for 1024 arguments, and gradually grow that allocation as arguments
are received.
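A hedged sketch of the gradual growth, with generic types (the real code lives in the query buffer parser and differs in detail):

```c
#include <stdlib.h>

/* Illustrative growth policy: allocate argv for at most 1024 args up
 * front, then double as arguments actually arrive, capped by the count
 * the client declared. Error handling is elided for brevity. */
static void **growArgv(void **argv, size_t *cap, size_t need, long declared) {
    if (need < *cap) return argv;          /* still room, nothing to do */
    size_t newcap = *cap ? *cap * 2 : 1024;
    if (newcap > (size_t)declared) newcap = (size_t)declared;
    *cap = newcap;
    return realloc(argv, newcap * sizeof(void *));
}
```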
1. Remove forward declarations from header files for functions that do not exist:
hmsetCommand and rdbSaveTime.
2. Minor phrasing fixes in #9519.
3. Add missing sdsfree(title) and fix a typo in redis-benchmark.
4. Modify some error comments in some zset commands.
5. Fix a copy-paste bug in a comment in syncWithMaster about `ip-address`.
Fixing CI test issues introduced in #8687:
- valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
- adding DEBUG pause-cron so tests aren't time dependent
- skipping a test that depends on socket buffers / events, which isn't compatible with TLS
- making sure the client got subscribed by not using a deferring client
In the `HRANDFIELD`, `SRANDMEMBER` and `ZRANDMEMBER` commands,
there are some strategies that could in some rare cases return an unfair random result.
These cases are where a small dict happens to be hashed unevenly.
Specifically, when `count*ZRANDMEMBER_SUB_STRATEGY_MUL > size`,
using `dictGetRandomKey` to randomize from a dict will produce an unfair random result.
Minor optimization of getMaxmemoryState: when server.maxmemory is not set,
don't count AOF and replica buffers.
Co-authored-by: Viktor Söderqvist <viktor@zuiderkwast.se>
This commit makes it possible to explicitly trim the allocation of a
RedisModuleString.
Currently, Redis automatically trims strings that have been retained by
a module command when it returns. However, this is not thread safe and
may result in corruption in threaded modules.
Supporting explicit trimming offers a backwards compatible workaround to
this problem.
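A usage sketch from a module's point of view, assuming the feature is exposed as RedisModule_TrimStringAllocation() (the exact name and signature are an assumption; check the module API headers):

```c
#include "redismodule.h"

/* Hedged usage sketch; RedisModule_TrimStringAllocation's signature is
 * assumed from the feature description above. */
static RedisModuleString *makeRetainedString(RedisModuleCtx *ctx,
                                             const char *buf, size_t len) {
    RedisModuleString *s = RedisModule_CreateString(ctx, buf, len);
    RedisModule_RetainString(ctx, s);  /* keep it past the command's return */
    /* Explicitly release allocation slack now, from a context we control,
     * instead of relying on the automatic (thread-unsafe) trim-on-return. */
    RedisModule_TrimStringAllocation(s);
    return s;
}
```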