The flag should be set before TLS negotiation begins, to avoid a race
condition where a fork+exec that happens before negotiation completes ends up leaking the
file descriptor.
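A minimal sketch of setting the flag, assuming a plain fcntl() call right after the socket is accepted and before any TLS handshake (the helper name is illustrative):

```c
#include <fcntl.h>

/* Mark the accepted socket close-on-exec before TLS negotiation starts,
 * so a concurrent fork+exec cannot inherit (and leak) the descriptor. */
static int setCloexec(int fd) {
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1) return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}
```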
Fixes a regression introduced due to a new (safer) way of rewriting configuration files. In the past the file was simply overwritten (same inode), but now Redis creates a new temporary file and later renames it over the old one.
The temp file typically gets created with 0600 permissions, so we later fchmod it to fix that. Unlike open with O_CREAT, fchmod doesn't consider the umask, so we have to apply it explicitly.
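A minimal sketch of the flow described above (file names, mode, and the helper are illustrative, not the actual config-rewrite code): because fchmod() ignores the process umask, the umask has to be applied by hand before the temp file is renamed into place.

```c
#include <stdio.h>
#include <sys/stat.h>

/* The temp file is typically created with 0600; fchmod() does not honor the
 * umask, so read it and apply it explicitly before renaming over the target. */
static int finishConfigRewrite(int tmpfd, const char *tmpname, const char *target) {
    mode_t mask = umask(0777);   /* read the current umask ... */
    umask(mask);                 /* ... and immediately restore it */
    if (fchmod(tmpfd, 0644 & ~mask) == -1) return -1;
    return rename(tmpname, target);
}
```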
Fixes #8369
The output of the COMMAND command was wrong for some commands.
Clients can use firstkey, lastkey, step to find (some) key name arguments, and the
"movablekeys" flag to know that they can't know all (or any) of the key name arguments.
These commands had the wrong output:
1. GEORADIUS*_RO used to have "movablekeys" (which it doesn't really need)
2. XREAD and XREADGROUP used to have (1,1,1), but that's completely wrong.
3. Z*STORE used to have (0,0,0), but it can at least give the index of the dstkey (1,1,1)
Fix broken formatting in `RM_Call` and `RM_CreateDataType`,
`RM_SubscribeToServerEvent` (nested lists, etc. in list items).
Unhide docs of `RM_LoadDataTypeFromString` and
`RM_SaveDataTypeToString` by removing blank line between docs and
function.
Clarification added to `RM__Assert`: recommendation to use the
`RedisModule_Assert` macro instead.
All names containing underscores (variable and macro names) are
wrapped in backticks (if not already wrapped in backticks). This
prevents underscore from being interpreted as italics in some
cases.
Names including a wildcard star, e.g. RM_Defrag*(), are wrapped in
backticks (and RM replaced by RedisModule in this case). This
prevents the * from being interpreted as an italics marker.
A list item with a sublist, a paragraph and another sublist is a
combination which seems impossible to achieve with Redcarpet
markdown, so the one occurrence of this is rewritten.
Various trivial changes (typos, backticks, etc.).
Ruby script:
* Replace `RM_Xyz` with `RedisModule_Xyz` in docs. (RM is correct
when referring to the C code but RedisModule is correct in the
API docs.)
* Automatic backquotes around C functions like `malloc()`.
* Turn URLs into links. The link text is the URL itself.
* Don't add backticks inside bold (**...**)
Sentinel uses execve to run scripts, so it needs to use FD_CLOEXEC
on all file descriptors, so that they're not accessible by the script it runs.
This commit includes a change to the sentinel tests, which verifies no
FDs are left open when the script is executed.
This was a regression from #7625 (only in 6.2 RC2).
This makes it possible again to implement blocking list and zset
commands using the modules API.
This commit also includes a test case for the reverse: A module
unblocks a client blocked on BLPOP by inserting elements using
RedisModule_ListPush(). This already works, but it was untested.
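A minimal sketch of that reverse direction, assuming a hypothetical MYMOD.PUSH command (registration in RedisModule_OnLoad omitted): pushing with RedisModule_ListPush() unblocks a client blocked on BLPOP for the same key, just like a regular RPUSH would.

```c
#include "redismodule.h"

/* MYMOD.PUSH <key> <element>: push to the tail of a list; any client blocked
 * on BLPOP for <key> is unblocked as if a regular RPUSH had been issued. */
int PushCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);
    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
    int rc = RedisModule_ListPush(key, REDISMODULE_LIST_TAIL, argv[2]);
    RedisModule_CloseKey(key);
    if (rc != REDISMODULE_OK)
        return RedisModule_ReplyWithError(ctx, "ERR key holds the wrong kind of value");
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```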
Previously, invalid configuration errors were not very specific and in some cases were hard to understand.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
* Adds ASYNC and SYNC arguments to SCRIPT FLUSH
* Adds SYNC argument to FLUSHDB and FLUSHALL
* Adds new config to control the default behavior of FLUSHDB, FLUSHALL and SCRIPT FLUSH.
The new behavior is as follows:
* FLUSH[ALL|DB],SCRIPT FLUSH: Determine sync or async according to the
value of lazyfree-lazy-user-flush.
* FLUSH[ALL|DB],SCRIPT FLUSH ASYNC: Always flushes the database in an async manner.
* FLUSH[ALL|DB],SCRIPT FLUSH SYNC: Always flushes the database in a sync manner.
The prefix is changed from `RM_` to `module` on the following
internal functions, to prevent them from appearing in the API docs:
RM_LogRaw -> moduleLogRaw
RM_FreeCallReplyRec -> moduleFreeCallReplyRec
RM_ZsetAddFlagsToCoreFlags -> moduleZsetAddFlagsToCoreFlags
RM_ZsetAddFlagsFromCoreFlags -> moduleZsetAddFlagsFromCoreFlags
Fixes markdown formatting errors and some functions not showing
up in the generated documentation at all.
Ruby script (gendoc.rb) fixes:
* Modified automatic insertion of backquotes:
  * Don't add backquotes around names which are already preceded by a
backquote. Fixes for example \`RedisModule_Reply\*\`, which, turning
into \`\`RedisModule_Reply\`\*\`, messes up the formatting.
* Add backquotes around types such as RedisModuleString (in addition
to function names `RedisModule_[A-z()]*` and macro names
`REDISMODULE_[A-z]*`).
* Require 4 spaces indentation for disabling automatic backquotes, i.e.
code blocks. Fixes continuations of list items (indented 2 spaces).
* More permissive extraction of doc comments:
* Allow doc comments starting with `/**`.
* Make space before `*` on each line optional.
* Make space after `/*` and `/**` optional (needed when appearing on
its own line).
Markdown fixes in module.c:
* Fix code blocks not indented enough (4 spaces needed).
* Add blank line before code blocks and lists where missing (needed).
* Enclose special markdown characters `_*^<>` in backticks to prevent them
from messing up formatting.
* Lists with `1)` changed to `1.` for proper markdown lists.
* Remove excessive indentation which causes text to be unintentionally
rendered as code blocks.
* Other minor formatting fixes.
Other fixes in module.c:
* Remove blank lines between doc comment and function definition. A blank
line here makes the Ruby script exclude the function in docs.
* Change zunionInterDiffGenericCommand to use lookupKeyRead if dstkey is null
* Change zrangeGenericCommand to use lookupKeyWrite if dstkey isn't null
ZRANGESTORE and ZUNION, ZINTER, ZDIFF are all new commands (6.2 RC1 and RC2).
In redis 6.0 the ZRANGE was using lookupKeyRead, and ZUNIONSTORE / ZINTERSTORE were using lookupKeyWrite.
So these bugs were introduced in 6.2 and will be resolved before it is released.
The implications of this bug are also not big:
The sole difference between lookupKeyRead and lookupKeyWrite is for commands executed on a replica, which are not received from its master client. (for the master, and for the master client on the replica, these two functions behave the same)!
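A minimal sketch of the intended lookup choice (the helper is illustrative and assumes Redis's internal server.h types; the real change is inside the generic zset command functions):

```c
#include "server.h"

/* The STORE variants are writes, so when a destination key is present every
 * source key is looked up with lookupKeyWrite; otherwise lookupKeyRead. */
static robj *lookupZsetSource(client *c, robj *srckey, robj *dstkey) {
    return dstkey ? lookupKeyWrite(c->db, srckey)
                  : lookupKeyRead(c->db, srckey);
}
```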
This fixes three issues:
1. Using debug SLEEP was impacting the subsequent test, and causing it to pass reliably even though it should have failed. There was exactly 5 seconds of artificial pause (after 1000, wait 3000, wait 1000) between the debug sleep 5 and when we needed to unblock the client in the subsequent test. Now the test properly makes sure the client is unblocked, and the subsequent test is fixed.
2. Minor, the client pause types were using & comparisons instead of ==, since it was previously a flag.
3. Test is faster now that some of the hand wavy time is removed.
When the server state changes and blocked clients are being dropped, the
paused clients should not be dropped, they're safe to keep since unlike
other blocked types, these commands are not halfway through processing,
and the commands they sent may get rejected according to the new server
state.
- the last COW report wasn't always read from the pipe
(receiveLastChildInfo wasn't used)
- but in fact, there's no reason not to always try to drain that pipe,
so I'm unifying receiveLastChildInfo with receiveChildInfo
- adjust threshold of the COW test when run in accurate mode
- add some prints in case this test fails again
- fix indentation, page size, and PID! in MacOS proc info
p.s. it seems that pri_pages_dirtied is always 0
This PR adds another trimming strategy to XADD and XTRIM named MINID
(complements the existing MAXLEN).
It also adds a new LIMIT argument that allows incremental trimming by repeated
calls (rather than all at once).
This provides the ability to trim all records older than a certain ID (which makes it
possible for the user to trim by age too).
Example:
XTRIM mystream MINID ~ 1608540753 will trim entries with id < 1608540753,
but might not trim all (because of the ~ modifier)
The purpose is to ease the use of streams. Many users use streams as logs and
the common case is wanting a log
of the last X seconds rather than a log that contains at most X entries (new
MINID vs existing MAXLEN)
The new LIMIT modifier is only supported when the trim strategy uses ~.
i.e. when the user asked for exact trimming, it all happens in one go (no
possibility for incremental trimming).
However, when ~ is provided, we trim full rax nodes, up to the limit number
of records.
The default limit is 100*stream_node_max_entries (used when LIMIT is not
provided).
I.e. this is a behavior change (even if the existing MAXLEN strategy is used).
An explicit limit of 0 means unlimited (but note that it's not the default).
Other changes:
Refactor arg parsing code for XADD and XTRIM to use common code.
Older arm64 Linux kernels have a bug that could lead to data corruption during
background save under the following scenario:
1) jemalloc uses MADV_FREE on a page,
2) jemalloc reuses and writes the page,
3) Redis forks the background save process, and
4) Linux performs page reclamation.
Under these conditions, Linux will reclaim the page wrongfully and the
background save process will read zeros when it tries to read the page.
The bug has been fixed in Linux with commit:
ff1712f953e27f0b0718762ec17d0adb15c9fd0b ("arm64: pgtable: Ensure dirty bit is
preserved across pte_wrprotect()")
This commit adds an ignore-warnings config; when the warning is not listed there, Redis will
print a warning and exit on startup (default behavior).
Co-authored-by: Oran Agra <oran@redislabs.com>
Add INFO field, rdb_active_cow_size, to report COW of a live fork child while
it's active.
- once per 1024 keys check the time, and if more than one second has passed since
the last report, send a report to the parent via the pipe (see the sketch after this list).
- refactor the child_info_data struct; it's an implementation detail that
shouldn't be in the server struct, and is not used to communicate data between
caller and callee
- remove the magic value from that struct (not sure what it was good for), and
instead add handling of short reads.
- add another value to the structure, cow_type, to indicate if the report is
for the new rdb_active_cow_size field, or it's the last report of a
successful operation
- add new Module API to report the active COW
- add more asserts variants to test.tcl
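A minimal sketch of the periodic reporting described in the first item above (the helper and the payload are illustrative, not the actual child-info code):

```c
#include <time.h>
#include <unistd.h>

/* Once every 1024 keys, check the clock; if more than a second passed since
 * the last report, push the current COW size to the parent over the pipe,
 * which updates the rdb_active_cow_size INFO field. */
static void maybeReportCow(int pipe_fd, size_t keys_done, size_t cow_bytes) {
    static time_t last_report = 0;
    if (keys_done % 1024 != 0) return;
    time_t now = time(NULL);
    if (now - last_report < 1) return;
    last_report = now;
    if (write(pipe_fd, &cow_bytes, sizeof(cow_bytes)) != (ssize_t)sizeof(cow_bytes)) {
        /* short write: the parent handles short reads, nothing else to do here */
    }
}
```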
This is a refactoring commit, not supposed to have any actual impact.
It does the following:
- keep just one server struct fork child pid variable instead of 3
- have one server struct variable indicating the purpose of the current fork
child.
- redisFork is now responsible for updating the server struct with the pid,
which means it can be the one that calls updateDictResizePolicy
- move child info pipe handling into redisFork instead of having them
repeated outside
- there are two classes of fork purposes, mutually exclusive group (AOF, RDB,
Module), and one that can create several forks to coexist in parallel (LDB,
but maybe Modules some day too, Module API allows for that).
- minor fix to killRDBChild:
unlike killAppendOnlyChild and TerminateModuleForkChild, the killRDBChild
doesn't clear the pid variable or call wait4, so checkChildrenDone does
the cleanup for it.
This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe,
updateDictResizePolicy, which didn't do any harm, but were unnecessary.
Add the ZRANGESTORE command, and improve the ZRANGE command so that Z[REV]RANGE[BYSCORE|BYLEX] can be deprecated.
Syntax for the new ZRANGESTORE command:
ZRANGESTORE <dst> <src> <min> <max> [BYSCORE | BYLEX] [REV] [LIMIT offset count]
New syntax for ZRANGE:
ZRANGE <key> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count]
Old syntax for ZRANGE:
ZRANGE <key> <start> <stop> [WITHSCORES]
Other ZRANGE commands remain unchanged.
The implementation uses common code for all of these, by utilizing a consumer interface that in one
command responds to the client, and in the other command stores into a zset key.
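A minimal sketch of that consumer interface (struct and function names are illustrative): the shared range walker emits each member/score pair through a callback, and ZRANGE vs ZRANGESTORE just install different consumers.

```c
#include <stdio.h>

/* One consumer replies to the client (ZRANGE), the other stores the element
 * into the destination zset (ZRANGESTORE); the walker only sees the interface. */
typedef struct rangeConsumer {
    void *privdata;   /* the client, or the destination key */
    void (*emit)(struct rangeConsumer *c, const char *member, double score);
} rangeConsumer;

static void emitToReply(rangeConsumer *c, const char *member, double score) {
    printf("reply: %s %g\n", member, score);    /* stands in for addReply*() */
}

static void emitToStore(rangeConsumer *c, const char *member, double score) {
    printf("store: %s %g\n", member, score);    /* stands in for zsetAdd() */
}

/* Shared range code: iterate the matching entries and emit each one. */
static void genericRangeSketch(rangeConsumer *c) {
    c->emit(c, "a", 1.0);
    c->emit(c, "b", 2.0);
}
```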
Co-authored-by: Oran Agra <oran@redislabs.com>
New command: XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT <count>] [JUSTID]
The purpose is to claim entries from a stale consumer without the usual
XPENDING+XCLAIM combo which takes two round trips.
The syntax for XAUTOCLAIM is similar to SCAN: a cursor (a stream ID) is returned
by each call and should be used as the start for the next call. 0-0 means the scan is complete.
This PR extends the deferred reply mechanism for any bulk string (not just counts)
This PR carries some unrelated test code changes:
- Renames the term "client" into "consumer" in the stream-cgroups test
- And also changes DEBUG SLEEP into "after"
Co-authored-by: Oran Agra <oran@redislabs.com>
Instead of asking for the extra new space it wanted, it asked to grow the
string by the size it already has too.
i.e. a string of 1000 bytes, needing to grow by 10 bytes, would have been
asking for an **additional** 1010 bytes.
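A minimal sketch of the bug, assuming the caller grows the buffer with sdsMakeRoomFor(), which takes the number of *additional* bytes needed:

```c
#include "sds.h"

static sds growForAppend(sds s, size_t needed) {
    /* Buggy: for a 1000-byte string needing 10 more bytes this requests an
     * additional 1010 bytes on top of what is already allocated:
     *     return sdsMakeRoomFor(s, sdslen(s) + needed);
     * Fixed: request only the extra space actually needed. */
    return sdsMakeRoomFor(s, needed);
}
```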
Turns out the RDB checksum in Redis 6.0 on big-endian is broken.
It always returned 0, so the RDB files are generated as if checksum is
disabled, and will load fine on little-endian and on big-endian.
But it won't be able to load RDB files generated on little-endian or by older versions.
Similarly DUMP and RESTORE will work on the same version (0==0),
but will be unable to exchange dump payloads with little-endian or old versions.
When a Lua script returns a map to redis (a feature which was added in
redis 6 together with RESP3), it would have returned the value first and
the key second.
If the client was using RESP2, it was getting them out of order, and if
the client was in RESP3, it was getting a map of value => key.
This was happening regardless of the Lua script using redis.setresp(3)
or not.
This also affects a case where the script was returning a map which it got
from redis by doing something like: redis.setresp(3); return redis.call()
This fix is a breaking change for redis 6.0 users who happened to rely
on the wrong order (either ones that used redis.setresp(3), or ones that
returned a map explicitly).
This commit also includes other two changes in the tests:
1. The test suite now handles RESP3 maps as dicts rather than nested
lists
2. Remove some redundant (duplicate) tests from tracking.tcl
* man-like consistent long formatting
* Uppercases commands, subcommands and options
* Adds 'HELP' to HELP for all
* Lexicographical order
* Uses value notation and other .md likeness
* Moves const char *help to top
* Keeps it under 80 chars
* Misc help typos, consistent conjugation (i.e. return and not returns)
* Uses addReplySubcommandSyntaxError(c) all over
Signed-off-by: Itamar Haber <itamar@redislabs.com>
This PR not only fixes the problem that swapdb does not make the
transaction fail, but also optimizes the FLUSHALL and FLUSHDB command to
set the CLIENT_DIRTY_CAS flag to avoid unnecessary traversal of clients.
FLUSHDB was changed to first iterate on all watched keys, and then on the
clients watching each key.
Instead of iterating through all clients, and for each iterating on its watched keys.
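A minimal sketch of the new iteration order (the helper name is illustrative and assumes Redis's internal server.h types): one pass over the db's watched_keys dict, marking every watcher of each key, instead of a pass over all connected clients.

```c
#include "server.h"

/* db->watched_keys maps each watched key to the list of clients watching it,
 * so flagging all watchers is one walk over that dict. */
static void markAllWatchersDirty(redisDb *db) {
    dictIterator *di = dictGetSafeIterator(db->watched_keys);
    dictEntry *de;
    while ((de = dictNext(di)) != NULL) {
        list *clients = dictGetVal(de);
        listIter li;
        listNode *ln;
        listRewind(clients, &li);
        while ((ln = listNext(&li)) != NULL) {
            client *c = listNodeValue(ln);
            c->flags |= CLIENT_DIRTY_CAS;
        }
    }
    dictReleaseIterator(di);
}
```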
Co-authored-by: Oran Agra <oran@redislabs.com>
If the AOF file contains a long Lua script that timed out, then the `evalCommand` calls
`blockingOperationEnds` which sets `server.blocked_last_cron` to 0. Later on,
the AOF `whileBlockedCron` function asserts that this value is not 0.
The fix allows nesting call to `blockingOperationStarts` and `blockingOperationEnds`.
The issue was first introduced in this commit: 9ef8d2f67 (Redis 6.2 RC1)
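A minimal sketch of the nesting fix (the counter is illustrative, not the exact code): only the outermost Starts/Ends pair touches server.blocked_last_cron, so an inner pair from a timed-out script during AOF loading can't zero it out.

```c
#include "server.h"

static int blocked_nesting = 0;

void blockingOperationStarts(void) {
    if (blocked_nesting++ == 0)
        server.blocked_last_cron = server.mstime;
}

void blockingOperationEnds(void) {
    if (--blocked_nesting == 0)
        server.blocked_last_cron = 0;
}
```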
This is a recent problem, introduced by 7471743 (redis 6.0)
The implications are:
The sole difference between lookupKeyRead and lookupKeyWrite is for commands
executed on a replica, which are not received from its master client. (for the master,
and for the master client on the replica, these two functions behave the same)!
Since SORT is a write command, this bug only implicates a writable-replica.
And these are its implications:
- SORT STORE will behave as it did before the above mentioned commit (like before
redis 6.0). On a writable-replica an already logically expired key would have
appeared missing. (the store dest key would be deleted, instead of being populated
with the data from the already logically expired key)
- SORT (the non store variant, which in theory could have been executed on
read-only-replica if it weren't for the write flag), will (in redis 6.0) have a new bug
and return the data from the already logically expired key instead of empty response.
New command flags similar to what SADD already has.
Co-authored-by: huangwei03 <huangwei03@kuaishou.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
In #7726 (part of 6.2), we added a mechanism for whileBlockedCron, this
mechanism has an assertion to make sure the timestamp in
whileBlockedCron was always set correctly before the blocking operation
starts.
I now found (thanks to our CI) two bugs in that area:
1) CONFIG RESETSTAT (if it was allowed during loading) would have
cleared this var
2) the call stopLoading (which calls whileBlockedCron) was made too
early, while the rio is still in use, in which case the update_cksum
(rdbLoadProgressCallback) may still be called and whileBlockedCron
can assert.
getRDB is "designed" to work in two modes: one for redis-cli --rdb and
one for redis-cli --cluster backup.
in the latter case it uses the hiredis connection from the cluster nodes
and it used to free it without nullifying the context, so a later
attempt to free the context would crash.
I suppose the reason it seems to want to free the hiredis context ASAP
is that it wants to disconnect the replica link, so that replication
buffers will not be accumulated.
The crash log attempts to print the current client info, and when it
does that it attempts to check if the first argument happens to be a key
but it did so for commands with no arguments too, which caused the crash
log to crash halfway and not reach its end.
This commit pushes forward the observability of overall error statistics and command statistics within redis-server:
It extends INFO COMMANDSTATS to have
- failed_calls - so we can keep track of errors that happen from the command itself, broken down by command.
- rejected_calls - so we can keep track of errors that were triggered outside the command processing per se
Adds a new section to INFO, named ERRORSTATS, that enables keeping track of the different errors that
occur within redis (within processCommand and call) based on the reply error prefix (the first word
after the "-", up to the first space).
This commit also fixes RM_ReplyWithError so that it can be correctly identified as an error reply.
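A minimal sketch of the ERRORSTATS prefix extraction described above (the helper is illustrative): take the first space-delimited word after the leading "-".

```c
#include <string.h>

/* "-WRONGTYPE Operation against a key ..." -> prefix "WRONGTYPE" (length 9). */
static size_t errorPrefixLen(const char *reply, const char **prefix) {
    if (reply[0] == '-') reply++;
    *prefix = reply;
    const char *space = strchr(reply, ' ');
    return space ? (size_t)(space - reply) : strlen(reply);
}
```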
Recently efaf09ee4 started using addReplyErrorSds in place of
addReplySds; the latter takes ownership of the string but the former did
not.
This introduced memory leaks when a script returns an error to redis,
and also in clusterRedirectClient (two new usages of
addReplyErrorSds, which was mostly unused till now).
This commit changes two things:
1. change addReplyErrorSds to take ownership of the error string.
2. scripting.c doesn't actually need to use addReplyErrorSds, it's a
perfect match for addReplyErrorFormat (replaces newlines with spaces)
Adds: `L/RPOP <key> [count]`
Implements no. 2 of the following strategies:
1. Loop on listTypePop - this would result in multiple calls for memory freeing and allocating (see 769167a079)
2. Iterate the range to build the reply, then call quicklistDelRange - this requires two iterations and **is the current choice**
3. Refactor quicklist to have a pop variant of quicklistDelRange - probably optimal but more complex
Also:
* There's a historical check for NULL after calling listTypePop that was converted to an assert.
* This refactors common logic shared between LRANGE and the new form of LPOP/RPOP into addListRangeReply (adds test for b/w compat)
* Consequently, it may have made sense to have `LRANGE l -1 -2` and `LRANGE l 9 0` be legit and return a reverse reply. Due to historical reasons that would be, however, a breaking change.
* Added minimal comments to existing commands to adhere to the style, make core dev life easier and get commit karma, naturally.
The commit deals with the syncWithMaster and the ugly state machine in it.
It attempts to clean it a bit, but more importantly it uses pipeline for
part of the work (rather than 7 round trips, we now have 4).
i.e. the connect and PING are separate, then AUTH + 3 REPLCONF in one pipeline,
and finally the PSYNC (must be separate since the master has to have an empty
output buffer).
This is just a refactoring commit.
This function was never actually used as a synchronous one (doing both send and
receive); it was always used in only one of the two modes, which meant it
had to take extra arguments that are not relevant for the other.
Besides that, a tool that sends a synchronous command is not something
we want in our toolbox (synchronous IO in a single threaded app is evil).
sendSynchronousCommand was now refactored into separate sending and
receiving APIs, and the sending part has two variants, one taking varargs,
and the other taking argc+argv (and an optional length array which means
you can use binary sds strings).
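A rough sketch of the resulting API shape (signatures are approximations of the description above, not copied from replication.c): the caller can now pipeline several send calls and read the replies back afterwards.

```c
#include <stddef.h>

typedef struct connection connection;   /* opaque, as in connection.h */
typedef char *sds;                       /* as in sds.h */

/* Varargs sender: NULL-terminated list of C-string arguments. */
char *sendCommand(connection *conn, ...);
/* argc/argv sender: the optional length array allows binary sds arguments. */
char *sendCommandArgv(connection *conn, int argc, char **argv, size_t *argv_lens);
/* Separate receiver: read a single reply. */
sds receiveSynchronousResponse(connection *conn);
```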
As discussed in https://github.com/antirez/redis/issues/7364, it is good
to have a HELLO command variant, which does not switch the current proto
version of a redis server.
While `HELLO` will work, it introduced a certain difficulty in parsing
options of the command. We would need to offset the index of the authentication
and setname options by -1.
So 0 is marked as a special version meaning non-switching, and we do not
need to change the code much.
When a database on a 64 bit build grows past 2^31 keys, the underlying hash table expands to 2^32 buckets. After this point, the algorithms for selecting random elements only return elements from half of the available buckets because they use random() which has a range of 0 to 2^31 - 1. This causes problems for eviction policies which use dictGetSomeKeys or dictGetRandomKey. Over time they cause the hash table to become unbalanced because, while new keys are spread out evenly across all buckets, evictions come from only half of the available buckets. Eventually this half of the table starts to run out of keys and it takes longer and longer to find candidates for eviction. This continues until no more evictions can happen.
This solution addresses this by using a 64 bit PRNG instead of libc random().
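A minimal sketch of the idea, using splitmix64 as a stand-in 64-bit generator (the generator actually adopted may differ): a 64-bit value can index any bucket of a table with 2^32 or more slots, while random()'s 0..2^31-1 range covers only half of them.

```c
#include <stdint.h>

/* splitmix64: a tiny stand-in 64-bit PRNG. */
static uint64_t splitmix64(uint64_t *state) {
    uint64_t z = (*state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* Bucket selection over a power-of-two table: with random() the upper half of
 * a 2^32-bucket table is never chosen; with 64 bits every bucket is reachable. */
static unsigned long randomBucket(uint64_t *state, unsigned long table_size) {
    return (unsigned long)(splitmix64(state) & (uint64_t)(table_size - 1));
}
```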
Co-authored-by: Greg Femec <gfemec@google.com>
Turns out that when the fork child crashes, the crash log was deleting
the pidfile from the disk (although the parent is still running).
Now we set the pidfile of the fork process to NULL so the fork process
will never delete it.
Normally IO threads should simply read data from the socket into the
buffer and attempt to parse it.
If a protocol error is detected, a reply is generated which may result
with installing a write handler which is not thread safe. This fix
delays that until the client is processed back in the main thread.
Fixes #8220
In the distant history there was only the read flag for commands, and whatever
command didn't have the read flag was a write one.
Then we added the write flag, but some portions of the code still used !read.
Also some commands that don't work on the keyspace at all still have the read
flag.
Changes in this commit:
1. remove the read-only flag from TIME, ECHO, ROLE and LASTSAVE
2. EXEC command used to decide if it should propagate a MULTI by looking at
the command flags (!read & !admin).
When I was about to change it to look at the write flag instead, I realized
that this would cause it not to propagate a MULTI for PUBLISH, EVAL, and
SCRIPT; all 3 are not marked as either a read command or a write one (as
they should be), but all 3 are calling forceCommandPropagation.
So instead of introducing a new flag to denote a command that "writes" but
not into the keyspace, and still needs propagation, I decided to rely on
forceCommandPropagation, and just fix the code to propagate MULTI when
needed rather than depending on the command flags at all.
The implication of my change then is that now it won't decide to propagate
MULTI when it sees one of these: SELECT, PING, INFO, COMMAND, TIME and
other commands which are neither read nor write.
3. Changing getNodeByQuery and clusterRedirectBlockedClientIfNeeded in
cluster.c to look at !write rather than read flag.
This should have no implications, since these code paths are only reachable
for commands which access keys, and these are always marked as either read
or write.
This commit improves MULTI propagation tests, for modules and a bunch of
other special cases, all of which used to pass already before this commit.
The only test change that uncovered a change of behavior is the
one that DELs a non-existing key: it used to propagate an empty
multi-exec block, and no longer does.
In response to the large client query buffer optimization introduced in 1898e6c: the calculation of the amount of
remaining bytes we need to write to the query buffer was wrong; as a result we were unnecessarily
growing the client query buffer by sdslen(c->querybuf) every time. This fix corrects that behavior.
Please note that the behavior prior to the before-mentioned change was correctly calculating the remaining
additional bytes, and this change makes that calculation consistent again.
Useful context, the argument of size `ll` starts at qb_pos (which is now the beginning of the sds), but much of it
may have already been read from the socket, so we only need to grow the sds for the remainder of it.
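A minimal sketch of the corrected calculation (the helper is illustrative): only the part of the ll-byte bulk argument that hasn't been read yet needs extra room, not ll plus what's already in the buffer.

```c
#include "sds.h"

/* The buffer already holds sdslen(querybuf) bytes of the bulk argument (the
 * argument starts at qb_pos, i.e. at the start of the sds), so grow it only
 * by the remainder. The buggy version always grew by the full size again. */
static sds growQueryBufForBulk(sds querybuf, long long bulklen) {
    if ((size_t)bulklen <= sdslen(querybuf)) return querybuf;  /* already fits */
    return sdsMakeRoomFor(querybuf, (size_t)bulklen - sdslen(querybuf));
}
```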
If we only have one node in the cluster, or before 8fdc857, we don't know the IP of 'myself', so we should use config.hostip for it.
However, we should use the IP from the command response to update node->ip if it exists and is different from config.hostip;
otherwise, when there's more than one node in the cluster, if we use -h with a virtual IP or DNS name, benchmark doesn't show the real IP and port of 'myself' even though it could get the right IP and port from the CLUSTER NODES command.
Fix wrong server dirty increment in
* spopWithCountCommand
* hsetCommand
* ltrimCommand
* pfaddCommand
Some didn't increment by the number of fields affected (just one per command);
others had excessive increments.