A single client pointer is added to the server struct. It is
initialized by the first RM_Call() and reused for every subsequent
RM_Call(), except when it is already in use, which is the case for
(recursive) module-to-module calls. For those, a new
"fake" client is created each time.
Other changes:
* Avoid allocating a dict iterator in pubsubUnsubscribeAllChannels
when not needed
Originally this was limited to IPv6 address length, but in practice it has
been used for host names, and now that Sentinel accepts those as well
we need to be able to store full hostnames.
Fixes #8507
* Adding current_save_keys_total and current_save_keys_processed info fields.
Present in replication, BGSAVE and AOFRW.
* Changing RM_SendChildCOWInfo() to RM_SendChildHeartbeat(double progress)
* Adding new info field current_fork_perc. Present in Replication, BGSAVE, AOFRW,
and module forks.
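For a module fork, progress can be reported from the child with the new call; a
minimal sketch (process_item() and total are hypothetical stand-ins for the
real work):
```c
/* Sketch: a module fork child reporting progress. */
void do_fork_work(size_t total) {
    int pid = RedisModule_Fork(NULL, NULL);
    if (pid == 0) { /* child */
        for (size_t i = 0; i < total; i++) {
            process_item(i);                 /* hypothetical work */
            if ((i & 1023) == 0)             /* throttle: once per 1024 items */
                RedisModule_SendChildHeartbeat((double)i / total); /* 0..1 */
        }
        RedisModule_ExitFromChild(0);
    }
}
```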
This commit enables tracking time of the background tasks and on replies,
opening the door for properly tracking commands that rely on blocking / background
work via the slowlog, latency history, and commandstats.
Some notes:
- The time spent blocked waiting for key changes, or blocked on synchronous
replication is not accounted for.
- **This commit does not affect latency tracking of commands that are non-blocking
or do not have background work.** (meaning that it all stays the same, except for
`BZPOPMIN`, `BZPOPMAX`, `BRPOP`, `BLPOP`, etc... and module commands that rely
on background threads).
- Specifically, for the latency history we've added a new event class named
`command-unblocking` that will enable latency monitoring on commands that spawn
background threads to do the work.
- For blocking commands, we now consider the total time of a command as the
time spent in call() + the time spent replying when unblocked.
- For module commands that rely on background threads, we now consider the
total time of a command as the time spent in call() (main thread) + the time spent on
the background thread (if marked within `RedisModule_MeasureTimeStart()` and
`RedisModule_MeasureTimeEnd()`) + the time spent replying (main thread).
To test for this feature we've added a `unit/moduleapi/blockonbackground` test that relies on
a module that blocks the client and sleeps on the background for a given time.
- check blocked command that uses RedisModule_MeasureTimeStart() is tracking background time
- check blocked command that uses RedisModule_MeasureTimeStart() is tracking background time even in timeout
- check blocked command with multiple calls RedisModule_MeasureTimeStart() is tracking the total background time
- check blocked command without calling RedisModule_MeasureTimeStart() is not reporting background time
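A minimal sketch of such a module command; the measure-time calls are shown in
their blocked-client form (RedisModule_BlockedClientMeasureTimeStart/End), and
the command name and threading details are illustrative:
```c
#include "redismodule.h"
#include <pthread.h>
#include <unistd.h>

/* Background worker: the time between the Measure* calls is credited
 * to the command's background time. */
static void *BlockSleep_Thread(void *arg) {
    RedisModuleBlockedClient *bc = arg;
    RedisModule_BlockedClientMeasureTimeStart(bc);
    usleep(100 * 1000);                          /* simulated background work */
    RedisModule_BlockedClientMeasureTimeEnd(bc);
    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);
    RedisModule_ReplyWithSimpleString(ctx, "OK");
    RedisModule_FreeThreadSafeContext(ctx);
    RedisModule_UnblockClient(bc, NULL);
    return NULL;
}

/* BLOCK.SLEEP (hypothetical command name): block the client, work in
 * the background, then reply from the worker thread. */
int BlockSleep_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    RedisModuleBlockedClient *bc =
        RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);
    pthread_t tid;
    if (pthread_create(&tid, NULL, BlockSleep_Thread, bc) != 0) {
        RedisModule_AbortBlock(bc);
        return RedisModule_ReplyWithError(ctx, "ERR failed to spawn thread");
    }
    pthread_detach(tid);
    return REDISMODULE_OK;
}
```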
New commands:
`HRANDFIELD <key> [<count> [WITHVALUES]]`
`ZRANDMEMBER <key> [<count> [WITHSCORES]]`
Algorithms are similar to the one in SRANDMEMBER.
Both return a simple bulk response when no arguments are given, and an array otherwise.
In case values/scores are requested, RESP2 returns a long array, and RESP3 a nested array.
Note: in all 3 commands, the only option that also provides random order is a negative count.
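For example (hypothetical key):
HRANDFIELD h -> one random field, as a bulk string
HRANDFIELD h -5 WITHVALUES -> 5 field-value pairs, possibly with repeats, in random order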
Changes to SRANDMEMBER
* Optimization when count is 1: we can use the more efficient non-unique random algorithm
* Optimization: work with sds strings rather than robj
Other changes:
* zzlGetScore: when zset needs to convert string to double, we use safer memcpy (in
case the buffer is too small)
* Solve a "bug" in SRANDMEMBER test: it intended to test a positive count (case 3 or
case 4) and by accident used a negative count
Co-authored-by: xinluton <xinluton@qq.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
If the `set-proc-title` option is no, then do nothing to the proc title.
The reason was explained long ago; see the following:
We updated redis to 2.8.8 and found some side effects when
redis always changes the process title.
We run several slave instances on one computer, and all these slaves
listen on a unix socket only, so ps shows:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server *:0
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server *:0
for redis 2.6 the output of ps is like following:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server /etc/redis/a.conf
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server /etc/redis/b.conf
The latter is more informative in our case. The situation is worse when we
manage the config and process running state with salt. Salt checks the process
by running "ps | grep SIG" (on a Gentoo system), where SIG is the string to
search for when looking for the service process with ps. Previously, we defined
SIG as "/usr/sbin/redis-server /etc/redis/a.conf". Since the ps output is
identical in our case, we have no way to check the state of a specific redis
instance.
So, for our case, we prefer the old behavior, i.e., do not change the process
title for the main redis process, or add an option such as
"set-proc-title [yes|no]" to control this behavior.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
This commit introduces two new commands and two options for an existing command:
GETEX <key> [PERSIST] [EX seconds] [PX milliseconds] [EXAT seconds-timestamp]
[PXAT milliseconds-timestamp]
The getexCommand() function implements the extended options and variants of the GET
command. Unlike the GET command, this command is not read-only. Only one of the options
can be used at a given time.
1. PERSIST removes any TTL associated with the key.
2. EX Set expiry TTL in seconds.
3. PX Set expiry TTL in milliseconds.
4. EXAT Same as EX, but instead of specifying the number of seconds representing the
TTL (time to live), it takes an absolute Unix timestamp.
5. PXAT Same as PX, but instead of specifying the number of milliseconds representing the
TTL (time to live), it takes an absolute Unix timestamp.
The command returns either a bulk string, an error, or nil.
GETDEL <key>
Deletes the key after getting its value.
SET key value [NX] [XX] [KEEPTTL] [GET] [EX <seconds>] [PX <milliseconds>]
[EXAT <seconds-timestamp>] [PXAT <milliseconds-timestamp>]
Two new options added here are EXAT and PXAT.
Key implementation notes
- `SET` with `PX/EX/EXAT/PXAT` is always translated to `PXAT` in `AOF`. When relative time is
specified (`PX/EX`), replication will always use `PX`.
- `setexCommand` and `psetexCommand` no longer need translation in `feedAppendOnlyFile`,
as they are modified to invoke `setGenericCommand` with the appropriate flags, which takes
care of the correct AOF translation.
- `GETEX` without any optional argument behaves like `GET`.
- `GETEX` is never propagated as-is; it is propagated as `PEXPIRE[AT]` or `PERSIST`.
- `GETDEL` is propagated as `DEL`.
- Combined the validation for `SET` and `GETEX` arguments.
- Test cases to validate AOF/Replication propagation
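For example (hypothetical key and value):
GETEX mykey EX 10 -> returns the value and sets a 10-second TTL
GETEX mykey PERSIST -> returns the value and removes any TTL
GETEX mykey -> returns the value, TTL untouched (like GET)
GETDEL mykey -> returns the value and deletes the key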
In some scenarios, such as remote backup, we only want to get a snapshot of the
remote redis server's db. Currently, redis-cli acts as a replica and
sends SYNC to redis, but redis still accumulates the replication stream
in the replica client's output buffer, which may consume vast
memory or fail to transfer the RDB because of client-output-buffer-limit.
In this commit, we add 'replconf rdb-only 0|1'; redis doesn't send the
incremental replication stream to replicas that send 'replconf rdb-only 1',
so we can reduce memory usage and improve the success rate of fetching the RDB.
This commit fixes a well known and annoying issue in Sentinel mode.
Cause of this issue:
Currently, the Redis rewrite process works well in server mode; however, in sentinel mode,
the sentinel config has different semantics for different configurations (in the example configuration
https://github.com/redis/redis/blob/unstable/sentinel.conf we put comments on these).
However, the rewrite process treats the sentinel config as a single option. During the rewrite
process, it messes up the lines and comments.
Approaches:
In order to solve this issue, we need to differentiate the sentinel sub-config options;
for example, sentinel monitor <master-name> <ip> <redis-port> <quorum>
can be treated as the sentinel monitor option, instead of the sentinel option.
This commit also fixes a dependency issue when putting configurations in sentinel.conf.
For example, before this commit we had to put
`sentinel monitor <master-name> <ip> <redis-port> <quorum>` before
`sentinel auth-pass <master-name> <password>` for a single master,
otherwise the server could not start and returned an error. This commit fixes this: as long as
the monitored master is configured, regardless of the order, the sentinel can start and run properly.
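For example, after this change the following sentinel.conf ordering (hypothetical
master name and address) starts fine, even though auth-pass precedes monitor:
sentinel auth-pass mymaster mysecret
sentinel monitor mymaster 127.0.0.1 6379 2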
Fixes a regression introduced due to a new (safer) way of rewriting configuration files. In the past the file was simply overwritten (same inode), but now Redis creates a new temporary file and later renames it over the old one.
The temp file typically gets created with 0600 permissions so we later fchmod it to fix that. Unlike open with O_CREAT, fchmod doesn't consider umask so we have to do that explicitly.
Fixes #8369
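A sketch of the rewrite-and-rename flow with the umask fix applied (file names
and the 0644 base mode are illustrative):
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int rewriteConfigSketch(const char *path, const char *data) {
    char tmp[] = "redis_conf_rewrite.XXXXXX";
    int fd = mkstemp(tmp);               /* temp file is created as 0600 */
    if (fd == -1) return -1;
    mode_t mask = umask(0);              /* read the current umask ... */
    umask(mask);                         /* ... and restore it right away */
    dprintf(fd, "%s", data);             /* write the rewritten config */
    fchmod(fd, 0644 & ~mask);            /* fchmod ignores umask: apply it */
    close(fd);
    return rename(tmp, path);            /* atomically replace the old file */
}
```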
* Adds ASYNC and SYNC arguments to SCRIPT FLUSH
* Adds SYNC argument to FLUSHDB and FLUSHALL
* Adds a new config to control the default behavior of FLUSHDB, FLUSHALL and SCRIPT FLUSH.
the new behavior is as follows:
* FLUSH[ALL|DB],SCRIPT FLUSH: Determine sync or async according to the
value of lazyfree-lazy-user-flush.
* FLUSH[ALL|DB],SCRIPT FLUSH ASYNC: Always flushes the database in an async manner.
* FLUSH[ALL|DB],SCRIPT FLUSH SYNC: Always flushes the database in a sync manner.
- the last COW report wasn't always read from the pipe
(receiveLastChildInfo wasn't used)
- in fact, there's no reason not to always try to drain that pipe,
so I'm unifying receiveLastChildInfo with receiveChildInfo
- adjust threshold of the COW test when run in accurate mode
- add some prints in case this test fails again
- fix indentation, page size, and PID! in MacOS proc info
p.s. it seems that pri_pages_dirtied is always 0
Older arm64 Linux kernels have a bug that could lead to data corruption during
background save under the following scenario:
1) jemalloc uses MADV_FREE on a page,
2) jemalloc reuses and writes the page,
3) Redis forks the background save process, and
4) Linux performs page reclamation.
Under these conditions, Linux will reclaim the page wrongfully and the
background save process will read zeros when it tries to read the page.
The bug has been fixed in Linux with commit:
ff1712f953e27f0b0718762ec17d0adb15c9fd0b ("arm64: pgtable: Ensure dirty bit is
preserved across pte_wrprotect()")
This commit adds an ignore-warnings config: when the kernel fix is not found, redis will
print a warning and exit on startup (the default behavior), unless the warning is ignored.
Co-authored-by: Oran Agra <oran@redislabs.com>
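To suppress the startup check on a kernel known to be fixed, the warning can be
listed in the config (assuming the default warning name for this check):
ignore-warnings ARM64-COW-BUG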
Add INFO field, rdb_active_cow_size, to report COW of a live fork child while
it's active.
- once every 1024 keys, check the time, and if more than one second has passed since
the last report, send a report to the parent via the pipe.
- refactor the child_info_data struct; it's an implementation detail that
shouldn't be in the server struct, and shouldn't be used to communicate data between
caller and callee
- remove the magic value from that struct (not sure what it was good for), and
instead add handling of short reads.
- add another value to the structure, cow_type, to indicate if the report is
for the new rdb_active_cow_size field, or it's the last report of a
successful operation
- add new Module API to report the active COW
- add more asserts variants to test.tcl
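A sketch of the child-side reporting cadence (sendChildInfoToParent() and the
CHILD_INFO_TYPE_* constant are hypothetical names):
```c
#include <time.h>

void maybeReportCow(size_t keys_processed) {
    static time_t last_report = 0;
    if ((keys_processed & 1023) != 0) return; /* check time once per 1024 keys */
    time_t now = time(NULL);
    if (now - last_report >= 1) {             /* at most one report per second */
        sendChildInfoToParent(CHILD_INFO_TYPE_CURRENT_INFO, keys_processed);
        last_report = now;
    }
}
```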
This is a refactoring commit that isn't supposed to have any actual impact.
It does the following:
- keep just one server struct fork child pid variable instead of 3
- have one server struct variable indicating the purpose of the current fork
child.
- redisFork is now responsible for updating the server struct with the pid,
which means it can be the one that calls updateDictResizePolicy
- move child info pipe handling into redisFork instead of having them
repeated outside
- there are two classes of fork purposes: a mutually exclusive group (AOF, RDB,
Module), and one that can create several forks to coexist in parallel (LDB,
but maybe Modules some day too; the Module API allows for that).
- minor fix to killRDBChild:
unlike killAppendOnlyChild and TerminateModuleForkChild, the killRDBChild
doesn't clear the pid variable or call wait4, so checkChildrenDone does
the cleanup for it.
This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe,
and updateDictResizePolicy, which didn't do any harm but were unnecessary.
Add the ZRANGESTORE command, and improve the ZRANGE command to deprecate Z[REV]RANGE[BYSCORE|BYLEX].
Syntax for the new ZRANGESTORE command:
ZRANGESTORE <dst> <src> <min> <max> [BYSCORE | BYLEX] [REV] [LIMIT offset count]
New syntax for ZRANGE:
ZRANGE <key> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count]
Old syntax for ZRANGE:
ZRANGE <key> <start> <stop> [WITHSCORES]
Other ZRANGE commands remain unchanged.
The implementation uses common code for all of these, by utilizing a consumer interface that in one
command responds to the client, and in the other stores the result in a zset key.
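For example (hypothetical keys):
ZRANGE src (1 5 BYSCORE -> members with 1 < score <= 5
ZRANGE src [c [a BYLEX REV -> reverse lexicographic range
ZRANGESTORE dst src 0 -1 REV -> stores the reversed range in dst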
Co-authored-by: Oran Agra <oran@redislabs.com>
New command: XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT <count>] [JUSTID]
The purpose is to claim entries from a stale consumer without the usual
XPENDING+XCLAIM combo, which takes two round trips.
The syntax for XAUTOCLAIM is similar to SCAN: a cursor (a stream ID) is returned
by each call and should be used as the start for the next call; 0-0 means the scan is complete.
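For example (hypothetical stream and group), claiming entries idle for at least
an hour, 100 at a time:
XAUTOCLAIM mystream mygroup worker-1 3600000 0-0 COUNT 100
The first element of the reply is the next cursor; pass it as <start> in the
following call, and stop once it returns 0-0.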
This PR extends the deferred reply mechanism for any bulk string (not just counts)
This PR carries some unrelated test code changes:
- Renames the term "client" into "consumer" in the stream-cgroups test
- And also changes DEBUG SLEEP into "after"
Co-authored-by: Oran Agra <oran@redislabs.com>
This PR not only fixes the problem that SWAPDB does not make the
transaction fail, but also optimizes how the FLUSHALL and FLUSHDB commands
set the CLIENT_DIRTY_CAS flag, to avoid unnecessary traversal of clients.
FLUSHDB was changed to first iterate over all watched keys, and then over the
clients watching each key,
instead of iterating over all clients and, for each, over its watched keys.
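A sketch of the new iteration order (the function name is hypothetical;
db->watched_keys maps each key to the list of clients watching it):
```c
/* Flag every client watching any key in the flushed DB. */
void touchAllWatchedKeysSketch(redisDb *db) {
    if (dictSize(db->watched_keys) == 0) return;
    dictIterator *di = dictGetIterator(db->watched_keys);
    dictEntry *de;
    while ((de = dictNext(di)) != NULL) {
        list *clients = dictGetVal(de);        /* clients watching this key */
        listIter li;
        listNode *ln;
        listRewind(clients, &li);
        while ((ln = listNext(&li)) != NULL)
            ((client *)listNodeValue(ln))->flags |= CLIENT_DIRTY_CAS;
    }
    dictReleaseIterator(di);
}
```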
Co-authored-by: Oran Agra <oran@redislabs.com>
If the AOF file contains a long Lua script that timed out, then the `evalCommand` calls
`blockingOperationEnds`, which sets `server.blocked_last_cron` to 0. Later on,
the AOF `whileBlockedCron` function asserts that this value is not 0.
The fix allows nesting calls to `blockingOperationStarts` and `blockingOperationEnds`.
The issue was first introduced in this commit: 9ef8d2f67 (Redis 6.2 RC1)
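A sketch of the nesting-aware guards (the counter field name is illustrative;
only the outermost start/end touches blocked_last_cron):
```c
void blockingOperationStarts(void) {
    if (server.blocking_op_nesting++ == 0)
        server.blocked_last_cron = server.mstime;  /* outermost start only */
}

void blockingOperationEnds(void) {
    if (--server.blocking_op_nesting == 0)
        server.blocked_last_cron = 0;              /* outermost end only */
}
```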
This commit pushes forward the observability of overall error statistics and command statistics within redis-server:
It extends INFO COMMANDSTATS to have
- failed_calls - so we can keep track of errors that happen within the command itself, broken down by command.
- rejected_calls - so we can keep track of errors that were triggered outside the command processing per se.
Adds a new section to INFO, named ERRORSTATS, that enables keeping track of the different errors that
occur within redis (within processCommand and call) based on the reply error prefix (the first word
after the "-", up to the first space).
This commit also fixes RM_ReplyWithError so that it can be correctly identified as an error reply.
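The new section then looks roughly like this (error counts illustrative):
# Errorstats
errorstat_ERR:count=3
errorstat_WRONGTYPE:count=1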
Adds: `L/RPOP <key> [count]`
Implements no. 2 of the following strategies:
1. Loop on listTypePop - this would result in multiple calls for memory freeing and allocating (see 769167a079)
2. Iterate the range to build the reply, then call quickListDelRange - this requires two iterations and **is the current choice**
3. Refactor quicklist to have a pop variant of quickListDelRange - probably optimal but more complex
Also:
* There's a historical check for NULL after calling listTypePop that was converted to an assert.
* This refactors common logic shared between LRANGE and the new form of LPOP/RPOP into addListRangeReply (adds test for b/w compat)
* Consequently, it may have made sense to have `LRANGE l -1 -2` and `LRANGE l 9 0` be legit and return a reverse reply. Due to historical reasons that would be, however, a breaking change.
* Added minimal comments to existing commands to adhere to the style, make core dev life easier and get commit karma, naturally.
This commit deals with syncWithMaster and the ugly state machine in it.
It attempts to clean it up a bit, but more importantly it uses pipelining for
part of the work (rather than 7 round trips, we now have 4).
i.e. the connect and PING are separate, then AUTH + 3 REPLCONF in one pipeline,
and finally the PSYNC (must be separate since the master has to have an empty
output buffer).
In the distant history there was only the read flag for commands, and any
command that didn't have the read flag was a write one.
Then we added the write flag, but some portions of the code still used !read.
Also, some commands that don't work on the keyspace at all still have the read
flag.
Changes in this commit:
1. remove the read-only flag from TIME, ECHO, ROLE and LASTSAVE
2. The EXEC command used to decide if it should propagate a MULTI by looking at
the command flags (!read & !admin).
When I was about to change it to look at the write flag instead, I realized
that this would cause it not to propagate a MULTI for PUBLISH, EVAL, and
SCRIPT; all 3 are not marked as either a read command or a write one (as
they should be), but all 3 call forceCommandPropagation.
So instead of introducing a new flag to denote a command that "writes" but
not into the keyspace and still needs propagation, I decided to rely on
forceCommandPropagation, and just fix the code to propagate MULTI when
needed rather than depending on the command flags at all.
The implication of my change is that now it won't decide to propagate
MULTI when it sees one of these: SELECT, PING, INFO, COMMAND, TIME and
other commands which are neither read nor write.
3. Changing getNodeByQuery and clusterRedirectBlockedClientIfNeeded in
cluster.c to look at !write rather than read flag.
This should have no implications, since these code paths are only reachable
for commands which access keys, and these are always marked as either read
or write.
This commit improves the MULTI propagation tests, for modules and a bunch of
other special cases, all of which already passed before this commit.
The only test change that uncovered a change of behavior is the
one that DELs a non-existing key; it used to propagate an empty
multi-exec block, and no longer does.
Add a new set of defrag functions that take a defrag context and allow
defragmenting memory blocks and RedisModuleStrings.
Modules can register a defrag callback which will be invoked when the
defrag process handles globals.
Modules with custom data types can also register a datatype-specific
defrag callback which is invoked for keys that require defragmentation.
The callback and associated functions support both one-step and
multi-step options, depending on the complexity of the key as exposed by
the free_effort callback.
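A sketch of a datatype defrag callback ("MyObj" and its fields are hypothetical;
the callback is wired through the defrag field of RedisModuleTypeMethods, and
global defrag uses RedisModule_RegisterDefragFunc):
```c
#include "redismodule.h"

typedef struct { RedisModuleString *name; } MyObj;  /* hypothetical type */

int MyTypeDefrag(RedisModuleDefragCtx *ctx, RedisModuleString *key,
                 void **value) {
    REDISMODULE_NOT_USED(key);
    MyObj *o = *value, *moved;
    if ((moved = RedisModule_DefragAlloc(ctx, o)) != NULL)
        *value = o = moved;            /* the block was relocated */
    RedisModuleString *s;
    if ((s = RedisModule_DefragRedisModuleString(ctx, o->name)) != NULL)
        o->name = s;                   /* the string was relocated */
    return 0;  /* 0 = done; return 1 to be called again (multi-step) */
}
```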
Add commands to query geospatial data with bounding box.
Two new commands that replace the existing 4 GEORADIUS* commands.
GEOSEARCH key [FROMMEMBER member] [FROMLOC long lat] [BYRADIUS radius
unit] [BYBOX width height unit] [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT
count] [ASC|DESC]
GEOSEARCHSTORE dest_key src_key [FROMMEMBER member] [FROMLOC long lat]
[BYRADIUS radius unit] [BYBOX width height unit] [WITHCOORD] [WITHDIST]
[WITHHASH] [COUNT count] [ASC|DESC] [STOREDIST]
- Add two shape types, CIRCULAR_TYPE and RECTANGLE_TYPE, to support the different searches
- To judge whether a point is within the rectangle, refer to:
geohashGetDistanceIfInRectangle
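For example (hypothetical key and member):
GEOSEARCH points FROMMEMBER palermo BYBOX 400 400 km ASC COUNT 10 WITHCOORD WITHDIST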
This adds new `tls-client-cert-file` and `tls-client-key-file`
configuration directives which make it possible to use different
certificates for the TLS-server and TLS-client functions of Redis.
These directives are optional. If they are not specified, the `tls-cert-file`
and `tls-key-file` directives are used for TLS client functions as well.
Also, `utils/gen-test-certs.sh` now creates additional server-only and client-only certs and will skip intensive operations if target files already exist.
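For example (hypothetical paths), a node can present one certificate when acting
as a server and another when acting as a client (e.g. as a replica connecting to
its master):
tls-cert-file /etc/redis/server.crt
tls-key-file /etc/redis/server.key
tls-client-cert-file /etc/redis/client.crt
tls-client-key-file /etc/redis/client.key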
This adds a copy callback for module data types, in order to make
modules compatible with the new COPY command.
The callback is optional and COPY will fail for keys with data types
that do not implement it.
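A sketch of such a copy callback for a hypothetical "MyObj" type, wired through
the .copy field of RedisModuleTypeMethods:
```c
#include "redismodule.h"

typedef struct { long long payload; } MyObj;   /* hypothetical type */

/* Returning NULL makes the COPY command fail for this key. */
void *MyTypeCopy(RedisModuleString *fromkey, RedisModuleString *tokey,
                 const void *value) {
    REDISMODULE_NOT_USED(fromkey);
    REDISMODULE_NOT_USED(tokey);
    MyObj *dst = RedisModule_Alloc(sizeof(*dst));
    *dst = *(const MyObj *)value;   /* deep-copy any owned pointers too */
    return dst;
}
```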
First, if the ziplist header is surely inside the ziplist, do fast-path
decoding rather than the careful one.
In that case, streamline the encoding if-else chain to be executed only
once, with the encoding validity tested at the end.
encourage inlining
likely / unlikely hints for speculative execution
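A sketch of the hint pattern (macro definitions are the common GCC/Clang idiom;
the decoder below is a hypothetical illustration, not the ziplist code):
```c
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Keep the hot path straight-line, mark the rare case cold. */
static inline int decode_len(const unsigned char *p, unsigned int *len) {
    if (unlikely(p[0] == 254)) {    /* rare 5-byte length header */
        *len = p[1] | (p[2] << 8) | (p[3] << 16) | ((unsigned)p[4] << 24);
        return 5;
    }
    *len = p[0];                    /* common 1-byte header */
    return 1;
}
```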
Assertions used _exit(1) to tell the compiler that the code after them is
not reachable and to get rid of warnings.
But in some cases assertions are placed inside tight loops, where any
piece of code can slow down execution (code cache and other
reasons), so instead we use either abort() or, better yet, the unreachable
builtin.
If RESTORE passes successfully with full sanitization, we can't afford
to crash later on an assertion due to duplicate records in a hash when
converting it from ziplist to dict.
This means that when doing full sanitization, we must make sure there
are no duplicate records in any of the collections.
The test creates keys with various encodings, DUMPs them, corrupts the payload,
and RESTOREs it.
It utilizes the recently added use-exit-on-panic config to distinguish between
asserts and segfaults.
If the restore succeeds, it runs random commands on the key to attempt to
trigger a crash.
It runs in two modes, one with deep sanitation enabled and one without.
In the first one we don't expect any assertions or segfaults, in the second one
we expect assertions, but no segfaults.
We also check for leaks and invalid reads using valgrind, and if we find them
we print the commands that lead to that issue.
Changes in the code (other than the test):
- Replace a few NPD (null pointer dereference) flows and divisions by zero with
assertions, so that they don't fail the test (since we set the server to use
`exit` rather than `abort` on assertion).
- Fix quite a lot of flows in rdb.c that could have led to memory leaks in the
RESTORE command (since it now responds with an error rather than panicking)
- Add a DEBUG flag for SET-SKIP-CHECKSUM-VALIDATION so that the test doesn't need
to bother with faking a valid checksum
to bother with faking a valid checksum
- Remove a pile of code in serverLogObjectDebugInfo which is actually unsafe to
run in the crash report (see comments in the code)
- fix a missing boundary check in lzf_decompress
test suite infra improvements:
- be able to run valgrind checks before the process terminates
- rotate log files when restarting servers
When loading an encoded payload we will at least do a shallow validation to
check that the size that's encoded in the payload matches the size of the
allocation.
This lets us later use this encoded size to make sure the various offsets
inside encoded payload don't reach outside the allocation, if they do, we'll
assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation which runs on all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruptions early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitation, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being
slowed down by sanitation.
One way this was happening is when a module issued an RM_Call which would inject MULTI.
If the module command that did so was itself issued by something else that already
added MULTI (e.g. another module, or a Lua script), it would have caused nested MULTI.
In fact, the MULTI state in the client or the MULTI_EMITTED flag in the context isn't
the right indication of whether we need to propagate MULTI, because on nested calls
(possibly a module action called by a keyspace event of another module action), these
flags aren't retained / reflected.
Instead, there's now a global propagate_in_transaction flag for that.
In addition to that, we now have global in_eval and in_exec flags, to serve the flags
of RM_GetContextFlags, since their dependence on the current client is wrong for the same
reasons mentioned above.
As we know, redis may reject users' requests or evict some keys if
used memory is over maxmemory. Dictionary expansion may make
things worse: some big dictionaries, such as the main db and expires dicts,
may eat huge memory at once when allocating a new big hash table, ending up
far above maxmemory after expanding.
There are related issues: #4213 and #4583.
In more detail: when a dict in redis expands, we allocate a new big
ht[1] that is generally double the size of ht[0]. The size of ht[1] will be
very big if ht[0] is already big. For the db dict, if we have more than
64 million keys, ht[1] costs 1GB when the dict expands.
If the sum of used memory and the new hash table the dict needs exceeds
maxmemory, we shouldn't allow the dict to expand. Because, if we
enable keys eviction, we still couldn't add many more keys after
eviction and rehashing; what's worse, redis will keep fewer keys when
only a little memory remains for storing the new hash table instead
of users' data. Moreover, users can't write data to redis at all if
keys eviction is disabled.
What this commit changes:
Add a new member function expandAllowed to the dict type; it provides a way
for the caller to allow expansion or not. We expose two parameters to this
function: the additional memory needed for expanding and the dict's current
load factor, from which callers can implement a decision function.
For the main db and expires dict types, these dictionaries may be very
big and cost huge memory to expand, so we implement a judgement
function: we temporarily prevent the dict from expanding if used memory would
be over maxmemory after the dict expands, but to guarantee the performance
of redis, we still allow the dict to expand if its load factor exceeds the
safe load factor.
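A sketch of the judgement function wired into the db dict type
(SAFE_LOAD_FACTOR and overMaxmemoryAfterAlloc() stand in for the real
threshold and memory check):
```c
static int dictExpandAllowed(size_t moreMem, double usedRatio) {
    if (usedRatio <= SAFE_LOAD_FACTOR)
        return !overMaxmemoryAfterAlloc(moreMem); /* block if over maxmemory */
    return 1; /* load factor beyond safe: allow expansion for performance */
}
```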
Add test cases to verify that we don't allow the main db to expand when the
remaining memory is not enough, thereby avoiding keys eviction.
Other changes:
Changed how the new hash table size is computed on expansion. Before this commit,
the size was double the dict's used count, later passed through _dictNextPower.
Since we actually aim to keep the dict load factor between 0.5 and 1.0, we now
replace *2 with +1: since the first check is that used >= size, the outcome
will usually be the same as _dictNextPower(used+1). The only case where
it differs is when dict_can_resize is false during fork, in which case
_dictNextPower(used*2) would cause the dict to jump to *4 (i.e.
_dictNextPower(1025*2) returns 4096).
Fix the rehash test cases accordingly, due to the changed algorithm for the new
hash table size on expansion.
When the replica diskless-load type is swapdb in cluster mode, we didn't back up
the keys-to-slots map, so we would lose it if the sync failed.
Now we back up the keys-to-slots map first, and restore it properly on failure.
This commit includes a refactoring/cleanup of the backups mechanism (moving it to db.c and restructuring it a bit).
Co-authored-by: Oran Agra <oran@redislabs.com>
SELECT used to read the index into a `long` variable and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, which also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now return the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
doesn't really matter how it behaved in the past (new command), new behavior is
like the above two.
Fixes #7923.
This PR appropriates the special `&` symbol (because `@` and `*` are taken),
followed by a literal value or pattern for describing the Pub/Sub patterns that
an ACL user can interact with. It is similar to the existing key patterns
mechanism in function (additive) and implementation (copy-pasta). It also adds
the allchannels and resetchannels ACL keywords, naturally.
The default user is given allchannels permissions, whereas new users get
whatever is defined by the acl-pubsub-default configuration directive. For
backward compatibility in 6.2, the default of this directive is allchannels but
this is likely to be changed to resetchannels in the next major version for
stronger default security settings.
Unless allchannels is set for the user, channel access permissions are checked
as follows:
* Calls to both PUBLISH and SUBSCRIBE will fail unless a pattern matching the
channel name(s) given as arguments exists for the user.
* Calls to PSUBSCRIBE will fail unless the pattern(s) provided as an argument
literally exist(s) in the user's list.
Such failures are logged to the ACL log.
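For example (hypothetical user), restricting a user to job channels only:
ACL SETUSER worker resetchannels &jobs:*
worker may then SUBSCRIBE to jobs:high, and PSUBSCRIBE jobs:* (the pattern
literally exists in the list), but PSUBSCRIBE jobs:h* fails.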
Runtime changes to channel permissions for a user with existing subscribing
clients cause said clients to disconnect unless the new permissions permit the
connections to continue. Note, however, that PSUBSCRIBErs' patterns are matched
literally, so given the change bar:* -> b*, pattern subscribers to bar:* will be
disconnected.
Notes/questions:
* UNSUBSCRIBE, PUNSUBSCRIBE and PUBSUB remain unprotected due to lack of reasons
for touching them.
Fix: when oom-score-adj-values is provided in the config file after
oom-score-adj yes, it used to take immediate effect, before the base values
were read by readOOMScoreAdj, resulting in an error (out-of-range score
due to an uninitialized value). Delay the action until the real call is made
by main().
Since the values are clamped to -1000..1000, and they're
applied as an offset from the value at startup (which may be -1000), we
need to allow the offsets to reach to +2000 so that a value of +1000 is
achievable in case the value at startup was -1000.
Adding an option for absolute values rather than relative ones.
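For example (values illustrative; "absolute" is the mode name as I understand
the new option):
oom-score-adj absolute
oom-score-adj-values 0 200 800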