Commit Graph

480 Commits

杨博东
229327ad8b
Avoid redundant calls to signalKeyAsReady (#7625)
signalKeyAsReady has some overhead (namely dictFind) so we should
only call it when there are clients blocked on the relevant type (BLOCKED_*)
2020-08-11 08:18:09 +03:00
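
A minimal sketch of the guard described in the commit above, using illustrative names and a toy counter rather than the actual Redis internals (the real code tracks blocked clients per BLOCKED_* type):

    #include <stdio.h>

    static int clients_blocked_on_keys = 0;   /* e.g. bumped by BLPOP/BRPOP/BZPOPMIN handlers */

    static void signalKeyAsReady(const char *key) {
        /* The real function performs a dictFind() into the blocking-keys dict;
         * that lookup is the overhead worth skipping. */
        printf("key '%s' signaled as ready\n", key);
    }

    /* Call-site guard: only pay for the lookup when a blocked client could be woken. */
    static void signalKeyAsReadyIfNeeded(const char *key) {
        if (clients_blocked_on_keys == 0) return;   /* nobody is blocked on this type */
        signalKeyAsReady(key);
    }

    int main(void) {
        signalKeyAsReadyIfNeeded("mylist");   /* skipped: no blocked clients */
        clients_blocked_on_keys = 1;
        signalKeyAsReadyIfNeeded("mylist");   /* now the real signaling runs */
        return 0;
    }
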
Wang Yuan
1ef014ee6b
Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323)
Before this fix we were attempting to select a DB before creating the DB, see: #7323

This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.

As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate
memory for server.db in 'initServer()', but we call 'createClient()', which calls 'selectDb()',
in 'moduleInitModulesSystem()', before the databases were created. Instead, we should call
'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
2020-08-08 14:36:41 +03:00
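
A toy ordering sketch (stub functions, not the real server.c / module.c code) of why the reused module client has to be created only after initServer() has allocated server.db:

    #include <stdio.h>

    /* Stubs standing in for the functions named in the commit message. */
    static void moduleInitModulesSystem(void)  { puts("module system initialized (no client created here anymore)"); }
    static void initServer(void)               { puts("initServer: server.db allocated"); }
    static void createModuleReusedClient(void) { puts("createClient()+selectDb(0) for moduleFreeContextReusedClient is now safe"); }

    int main(void) {
        moduleInitModulesSystem();     /* still runs before initServer(), as before */
        initServer();                  /* server.db becomes a valid pointer here */
        createModuleReusedClient();    /* the fix: create the dummy client only now */
        return 0;
    }
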
Oran Agra
c17e597d05
Accelerate diskless master connections, and general re-connections (#6271)
Diskless master has some inherent latencies.
1) fork starts with delay from cron rather than immediately
2) replica is put online only after an ACK, but the ACK
   was sent only once a second.
3) even if the ACK arrived immediately, it would not
   register if cron hadn't yet detected that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect; it waits for the replication cron (once per second).
If it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

If it disconnected during the RDB transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for the replication cron re-connection).

changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster, which is called
   from the REPLICAOF command but also in 3 places in cluster.c; in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one out of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that the
pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone is
calling replicationStartPendingFork that handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug, which is that max_idle should have used >= and
not >; this one-second delay has a big impact on my new test.
2020-08-06 16:53:06 +03:00
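
A small sketch of point 3 above (immediate re-connect on an unexpected disconnection), with illustrative types and stubs rather than the real replication.c state machine:

    #include <stdio.h>

    typedef enum { REPL_STATE_CONNECT, REPL_STATE_CONNECTED } replState;
    static replState repl_state = REPL_STATE_CONNECTED;

    /* Stub for the real connection attempt to the master. */
    static void connectWithMaster(void) {
        repl_state = REPL_STATE_CONNECTED;
        puts("reconnect attempt started immediately");
    }

    static void handleMasterLinkBroken(void) {
        repl_state = REPL_STATE_CONNECT;
        /* Before: return here and let replicationCron() retry up to a second later.
         * After:  retry right away, so the backlog on the master is less likely to be lost. */
        connectWithMaster();
    }

    int main(void) {
        handleMasterLinkBroken();
        return 0;
    }
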
Oran Agra
90b717e723 Assertion and panic, print crash log without generating SIGSEGV
This makes it possible to add tests that generate assertions, and run
them with valgrind, making sure that there are no memory violations
prior to the assertion.

New config options:
- crash-log-enabled - can be disabled for cleaner core dumps
- crash-memcheck-enabled - useful for faster termination after a crash
- use-exit-on-panic - to be used by the test suite so that valgrind can
  detect leaks and memory corruptions

Other changes:
- Crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. in
  both SIGSEGV and assert / panic
- Assertion and panic won't print registers and code around EIP (which
  was useless), but will do a fast memory test (which may still indicate
  that the assertion was due to memory corruption)

I had to reshuffle code in order to re-use it, so I extracted some code
into functions without actually doing any changes to the code:
- logServerInfo
- logModulesInfo
- doFastMemoryTest (with the exception of it being conditional)
- dumpCodeAroundEIP

changes to the crash report on segfault:
- logRegisters is called right after the stack trace (before the INFO section),
  just in order to have more re-usable code
- stack trace skips the first two items on the stack (the crash log and
  signal handler functions)
2020-08-06 16:47:27 +03:00
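
A sketch of the use-exit-on-panic idea (illustrative flag and report function, not the actual debug.c): under the test suite and valgrind, a clean exit() after the crash report is preferable to re-raising a signal, so leak and corruption checking can still run:

    #include <stdio.h>
    #include <stdlib.h>

    static int use_exit_on_panic = 1;   /* stand-in for the new config option */

    static void printCrashReport(void) {
        /* The real report logs the stack trace, server INFO, module info and
         * optionally runs the fast memory test. */
        puts("=== CRASH REPORT ===");
    }

    static void panicAndDie(const char *msg) {
        fprintf(stderr, "PANIC: %s\n", msg);
        printCrashReport();
        if (use_exit_on_panic)
            exit(1);   /* clean exit: valgrind can still report leaks/corruption */
        abort();       /* default: raise SIGABRT so a core dump is produced */
    }

    int main(void) {
        panicAndDie("example assertion failure");
        return 0;
    }
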
Tyson Andre
f11f26cc53
Add a ZMSCORE command returning an array of scores. (#7593)
Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`

This is an extension of #2359
amended by Tyson Andre to work with the changed unstable API,
add more tests, and consistently return an array.

- It seemed as if it would be more likely to get reviewed
  after updating the implementation.

Currently, using MULTI commands or Lua scripting to call ZSCORE multiple times
would almost certainly be less efficient than a native ZMSCORE,
for the following reasons:

- Need to fetch the set from the string every time instead of reusing the C
  pointer.
- Using pipelining or multi-commands would result in more bytes sent by
  the client for the repeated `ZMSCORE KEY` sections.
- Need to specially encode the data and decode it from the client
  for lua-based solutions.
- The fastest solution I've seen for large sets (thousands or millions)
  involves lua and a variadic ZADD, then a ZINTERSECT, then a ZRANGE 0 -1,
  then UNLINK of a temporary set (or lua). This is still inefficient.

Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
2020-08-04 17:49:33 +03:00
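
A rough sketch of the ZMSCORE reply loop over a toy in-memory sorted set (illustrative lookup helper, not the real t_zset.c): one reply entry per requested member, a score or a nil, in request order:

    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *member; double score; } zentry;

    /* Hypothetical lookup helper standing in for the real score lookup. */
    static int lookupScore(const zentry *zs, size_t n, const char *member, double *score) {
        for (size_t i = 0; i < n; i++) {
            if (strcmp(zs[i].member, member) == 0) { *score = zs[i].score; return 0; }
        }
        return -1;
    }

    int main(void) {
        const zentry board[] = { {"alice", 1.0}, {"bob", 2.0} };
        const char *members[] = { "alice", "carol", "bob" };   /* ZMSCORE board alice carol bob */
        for (size_t i = 0; i < 3; i++) {
            double score;
            if (lookupScore(board, 2, members[i], &score) == 0)
                printf("%zu) \"%g\"\n", i + 1, score);   /* score for a present member */
            else
                printf("%zu) (nil)\n", i + 1);           /* nil for a missing member */
        }
        return 0;
    }
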
Arun Ranganathan
f6cad30bb6
Show threading configuration in INFO output (#7446)
Co-authored-by: Oran Agra <oran@redislabs.com>
2020-07-29 08:46:44 +03:00
Jiayuan Chen
f31260b044
Add optional tls verification (#7502)
Adds an `optional` value to the previously boolean `tls-auth-clients` configuration keyword.

Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
2020-07-28 10:45:21 +03:00
grishaf
4126ca466f
Fix prepareForShutdown function declaration (#7566) 2020-07-26 08:27:30 +03:00
Meir Shpilraien (Spielrein)
8d82639319
This PR introduces a new loaded keyspace event (#7536)
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
2020-07-23 12:38:51 +03:00
Yossi Gottlieb
3e6f2b1a45
TLS: Session caching configuration support. (#7420)
* TLS: Session caching configuration support.
* TLS: Remove redundant config initialization.
2020-07-10 11:33:47 +03:00
Oran Agra
5977a94842
RESTORE ABSTTL won't store expired keys into the db (#7472)
Similarly to EXPIREAT with a TTL in the past, which implicitly deletes the
key and returns success, RESTORE should not store keys that are already
expired into the db.
When used together with REPLACE it should emit a DEL to the keyspace
notification and the replication stream.
2020-07-10 10:02:37 +03:00
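
A small sketch of the check described above (hypothetical helpers, not the real restoreCommand in cluster.c): with ABSTTL, a TTL already in the past means the key is never added, and when REPLACE removed an existing key a DEL is still propagated:

    #include <stdio.h>
    #include <time.h>

    /* Stand-ins for the real helpers; mstime()/dbAdd()/propagate() differ in Redis. */
    static long long nowMs(void) { return (long long)time(NULL) * 1000; }
    static void addKey(const char *k)       { printf("added key %s\n", k); }
    static void propagateDel(const char *k) { printf("propagated DEL %s (REPLACE removed the old value)\n", k); }

    static void restoreWithAbsttl(const char *key, long long absttl_ms, int replace, int existed) {
        if (absttl_ms != 0 && absttl_ms <= nowMs()) {
            /* Behave like EXPIREAT with a TTL in the past: report success but do not store. */
            if (replace && existed) propagateDel(key);  /* keyspace notification + replication */
            printf("key %s already expired, not stored\n", key);
            return;
        }
        addKey(key);   /* normal path: store the key (full REPLACE handling omitted here) */
    }

    int main(void) {
        restoreWithAbsttl("stale", 1000 /* far in the past */, 1, 1);
        restoreWithAbsttl("fresh", nowMs() + 60000, 0, 0);
        return 0;
    }
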
Salvatore Sanfilippo
6bbbdd26f4
Merge pull request #7390 from oranagra/exec_fails_abort
EXEC always fails with EXECABORT and multi-state is cleared
2020-06-23 13:12:52 +02:00
Oran Agra
65a3307bc9 EXEC always fails with EXECABORT and multi-state is cleared
In order to support the use of multi-exec in pipeline, it is important that
MULTI and EXEC are never rejected and it is easy for the client to know if the
connection is still in multi state.

It was easy to make sure MULTI and DISCARD never fail (done by previous
commits) since these only change the client state and don't do any actual
change in the server, but EXEC is a different story.

Since in the past it was possible for clients to handle some EXEC errors and
retry the EXEC, we now can't afford to return any error on EXEC other than
EXECABORT, which now carries with it the real reason for the abort too.

Other fixes in this commit:
- Some checks that were performed at the time of queuing need to be re-
  validated when EXEC runs, for instance if the transaction contains write
  commands, it needs to be aborted. There was one check that was already done
  in execCommand (-READONLY), but other checks were missing: -OOM, -MISCONF,
  -NOREPLICAS, -MASTERDOWN
- When a command is rejected by processCommand it was rejected with addReply,
  which was not recognized as an error in case the bad command came from the
  master. This will make it possible to count or MONITOR these errors in the future.
- make it easier for tests to create additional (non deferred) clients.
- add tests for the fixes of this commit.
2020-06-23 12:01:33 +03:00
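
A compressed sketch of the EXEC-time re-validation described above (illustrative flags and checks, not the real multi.c): any problem found while queuing, or re-checked at EXEC time, aborts the whole transaction with a single -EXECABORT error and clears the MULTI state:

    #include <stdio.h>

    struct txn {
        int dirty;            /* set while queuing, e.g. an unknown command was queued */
        int contains_writes;  /* re-checked at EXEC time against -OOM/-MISCONF/... conditions */
        int queued;
    };

    static void discardTransaction(struct txn *t) { t->dirty = t->contains_writes = t->queued = 0; }

    static void execCommand(struct txn *t, int oom_now) {
        if (t->dirty || (t->contains_writes && oom_now)) {
            /* One error family only, so pipelined clients always know the MULTI state is gone. */
            printf("-EXECABORT Transaction discarded because of previous errors.\n");
            discardTransaction(t);
            return;
        }
        printf("executing %d queued commands\n", t->queued);
        discardTransaction(t);
    }

    int main(void) {
        struct txn t = { .dirty = 0, .contains_writes = 1, .queued = 3 };
        execCommand(&t, 1 /* OOM condition detected at EXEC time */);
        return 0;
    }
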
antirez
e63a5ba122 LPOS: implement the final design. 2020-06-10 12:49:15 +02:00
Paul Spooren
a7936ef96d LRANK: Add command (the command will be renamed LPOS).
The `LRANK` command returns the index (position) of a given element
within a list. Using the `direction` argument it is possible to specify
going from head to tail (ascending, 1) or from tail to head (descending,
-1). Only the first found index is returned. The complexity is O(N).

When using lists as a queue it can be of interest to know at what position a
given element is, for instance to monitor a job progressing through a
work queue. This came up within the Python `rq` project, which is based
on Redis [0].

[0]: https://github.com/rq/rq/issues/1197

Signed-off-by: Paul Spooren <mail@aparcar.org>
2020-06-10 12:07:40 +02:00
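
A minimal sketch of the head-to-tail / tail-to-head scan the command description implies (a plain C array standing in for the list; not the actual t_list.c implementation):

    #include <stdio.h>
    #include <string.h>

    /* Returns the index of the first match scanning forward (direction 1)
     * or backward (direction -1), or -1 if the element is not present. O(N). */
    static long listIndexOf(const char **list, long len, const char *elem, int direction) {
        long i = (direction >= 0) ? 0 : len - 1;
        long step = (direction >= 0) ? 1 : -1;
        for (; i >= 0 && i < len; i += step) {
            if (strcmp(list[i], elem) == 0) return i;
        }
        return -1;
    }

    int main(void) {
        const char *queue[] = { "job:1", "job:2", "job:3", "job:2" };
        printf("%ld\n", listIndexOf(queue, 4, "job:2", 1));    /* 1: first match from the head */
        printf("%ld\n", listIndexOf(queue, 4, "job:2", -1));   /* 3: first match from the tail */
        return 0;
    }
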
antirez
6a16a636bf Replication: showLatestBacklog() refactored out. 2020-05-28 10:08:16 +02:00
antirez
22472fe5a1 Remove the meaningful offset feature.
After a closer look, the Redis core developers all believe that this was
too fragile, caused many bugs that we didn't expect and that were very
hard to track. Better to find an alternative solution that is simpler.
2020-05-27 12:06:33 +02:00
antirez
adc5df1bc3 Make disconnectSlaves() synchronous in the base case.
Otherwise we run into that:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

We disconnect all the replicas and free the replication backlog in
certain replication paths, and the code that frees the replication
backlog expects that no replica is connected.

However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().
2020-05-22 19:29:09 +02:00
antirez
1a7cd2c0e2 Cache master without checking of deferred close flags.
The context is issue #7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, to later
PSYNC incrementally, if such flags are set. However we also don't want
the master client to be cached with such flags (would be closed
immediately after being restored). And yet we want a way to understand
if a master was closed because of a protocol error, and in that case
prevent the caching.
2020-05-15 10:19:13 +02:00
antirez
bc4667acbc Track events processed while blocked globally.
Related to #7234.
2020-05-14 10:06:27 +02:00
Titouan Christophe
1f246bfcc8 make struct user anonymous (only typedefed)
This works because this struct is never referenced by its tag name,
but always through its typedef.

This prevents a conflict with struct user from <sys/user.h>
when compiling against uclibc.

Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
2020-05-05 11:35:03 +02:00
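
A tiny illustration of the change (generic example, not the actual Redis definition): dropping the struct tag avoids any clash with struct user from <sys/user.h> on uclibc, since the code only ever uses the typedef:

    /* Before: the tag "user" could collide with struct user from <sys/user.h>. */
    /* typedef struct user { int flags; } user; */

    /* After: anonymous struct, referenced only through its typedef. */
    typedef struct {
        int flags;
    } user;

    int main(void) {
        user u = { .flags = 0 };   /* always used by type name, never as "struct user" */
        return u.flags;
    }
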
Salvatore Sanfilippo
1aee086a09
Merge pull request #7192 from hwware/trackingprefix
Client Side Caching: Add Number of Tracking Prefix Stats in Server Info
2020-05-04 11:06:44 +02:00
hwware
e697ef6dec Client Side Caching: Add Tracking Prefix Number Stats in Server Info 2020-05-02 19:20:44 -04:00
zhenwei pi
1a0deab2a5 Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.

There was some discussion about cpu affinity cases in the issue:
https://github.com/antirez/redis/issues/2863

So this patch implements cpu affinity settings via redis.conf; we can then
configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist as cpu lists.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Tested on linux/freebsd, both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
2020-05-02 21:19:47 +08:00
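
A sketch of what applying one of those cpulist values boils down to on Linux (simplified parsing with no error handling, glibc pthread affinity API; not the code from the patch):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse a list like "0-7:2" or "1,10-11" into a cpu_set_t. */
    static void parseCpuList(const char *list, cpu_set_t *set) {
        CPU_ZERO(set);
        char *copy = strdup(list), *save = NULL;
        for (char *tok = strtok_r(copy, ",", &save); tok; tok = strtok_r(NULL, ",", &save)) {
            int lo, hi, step = 1;
            if (sscanf(tok, "%d-%d:%d", &lo, &hi, &step) >= 2) { /* range, optional step */ }
            else { sscanf(tok, "%d", &lo); hi = lo; }            /* single cpu */
            for (int c = lo; c <= hi; c += step) CPU_SET(c, set);
        }
        free(copy);
    }

    int main(void) {
        cpu_set_t set;
        parseCpuList("0-7:2", &set);   /* cpus 0,2,4,6 as in the redis.conf example above */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        for (int c = 0; c < 8; c++) if (CPU_ISSET(c, &set)) printf("cpu %d\n", c);
        return 0;
    }
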
Oran Agra
4447ddc8bb Keep track of meaningful replication offset in replicas too
Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the trailing pings), and both
trim that tail from the replication backlog and from the offset
they try to use for psync.

The implication is that even if a replica missed some pings, or has
excessive pings that the promoted replica trimmed, it'll still be able to
psync (avoid a full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings

Background:
The meaningful offset on the master was added recently to solve a problem where
the master is left all alone, injecting PINGs into its backlog when no one is
listening, and then gets demoted and tries to replicate from a replica that didn't
have any of the PINGs (or at least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all, and also no network issues, just many pings in the
tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
remains a replica of A. When A gets demoted, it trims the pings from its
backlog, and successfully replicates from B. However, C is still aware of
these PINGs; when it disconnects and re-connects to A, it'll ask for something
that's no longer in the backlog (since A trimmed the tail of its backlog),
and be forced to do a full sync (something it didn't have to do before the
meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it
turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

so the issue here is that the meaningful offset feature saved the day for the
demoted master (since it needs to sync from a replica that didn't get the last
ping), but it didn't help one of the other replicas which did get the last ping.
2020-04-27 15:52:23 +02:00
antirez
8a7f255cd0 LCS -> STRALGO LCS.
STRALGO should be a container for mostly read-only string
algorithms in Redis. The algorithms should have two main
characteristics:

1. They should be non trivial to compute, and often not part of
programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C
implementations.

Next thing I would love to see? A small strings compression algorithm.
2020-04-24 16:54:32 +02:00
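
For reference, the textbook dynamic-programming LCS that STRALGO LCS is built around (a generic O(n*m) sketch, not the Redis implementation, which also supports options like LEN and IDX):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Length of the longest common subsequence of a and b, O(n*m) time and space. */
    static size_t lcsLength(const char *a, const char *b) {
        size_t n = strlen(a), m = strlen(b);
        size_t *dp = calloc((n + 1) * (m + 1), sizeof(size_t));
        for (size_t i = 1; i <= n; i++) {
            for (size_t j = 1; j <= m; j++) {
                if (a[i-1] == b[j-1])
                    dp[i*(m+1)+j] = dp[(i-1)*(m+1)+(j-1)] + 1;       /* extend the match */
                else {
                    size_t up = dp[(i-1)*(m+1)+j], left = dp[i*(m+1)+(j-1)];
                    dp[i*(m+1)+j] = up > left ? up : left;           /* best of skipping a char */
                }
            }
        }
        size_t len = dp[n*(m+1)+m];
        free(dp);
        return len;
    }

    int main(void) {
        /* STRALGO LCS KEYS key1 key2 on "ohmytext" / "mynewtext" reports "mytext" (length 6). */
        printf("%zu\n", lcsLength("ohmytext", "mynewtext"));
        return 0;
    }
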
antirez
94f2e7f9f9 Tracking: NOLOOP internals implementation. 2020-04-21 10:51:46 +02:00
antirez
6ad44b76b9 Use the special static refcount for stack objects. 2020-04-09 16:25:30 +02:00
antirez
d88f52ee7d RDB: refactor some RDB loading code into dbAddRDBLoad(). 2020-04-09 16:21:48 +02:00
antirez
399a6b2b47 incrRefCount(): abort on statically allocated object. 2020-04-09 16:20:41 +02:00
antirez
30adc62232 RDB: load files faster avoiding useless free+realloc.
Reloading of the RDB generated by

    DEBUG POPULATE 5000000
    SAVE

is now 25% faster.

This commit also prepares the ability to have more flexibility when
loading stuff from the RDB, since we no longer use dbAdd() but can
control exactly how things are added in the database.
2020-04-09 10:24:46 +02:00
antirez
f69876280c Speedup INFO by counting client memory incrementally.
Related to #5145.

Design note: clients may change type when they turn into replicas or are
moved into the Pub/Sub category and so forth. Moreover the recomputation
of the bytes used is problematic for obvious reasons: it changes
continuously, so as a conservative way to avoid accumulating errors,
each client remembers the contribution it gave to the sum, and removes
it when it is freed or before updating it with the new memory usage.
2020-04-07 12:07:54 +02:00
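
A small sketch of the incremental-sum idea from the design note above (illustrative fields, not the actual server.h layout): each client remembers its last contribution, so INFO only reads a running total instead of rescanning every client:

    #include <stdio.h>
    #include <stddef.h>

    typedef struct client {
        size_t last_memory_usage;   /* contribution this client currently has in the sum */
    } client;

    static size_t total_clients_memory = 0;   /* what INFO reports, updated incrementally */

    static void updateClientMemUsage(client *c, size_t current_usage) {
        total_clients_memory -= c->last_memory_usage;  /* remove the old contribution ... */
        total_clients_memory += current_usage;         /* ... and add the fresh one */
        c->last_memory_usage = current_usage;
    }

    static void freeClientMemUsage(client *c) {
        total_clients_memory -= c->last_memory_usage;  /* forget it when the client is freed */
        c->last_memory_usage = 0;
    }

    int main(void) {
        client c = { 0 };
        updateClientMemUsage(&c, 16384);
        updateClientMemUsage(&c, 20480);   /* usage changed: only the delta affects the sum */
        printf("clients memory: %zu\n", total_clients_memory);
        freeClientMemUsage(&c);
        printf("clients memory: %zu\n", total_clients_memory);
        return 0;
    }
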
Salvatore Sanfilippo
094b47391d
Merge pull request #6243 from soloestoy/expand-lazy-free-server-del
lazyfree: add a new configuration lazyfree-lazy-user-del
2020-04-06 17:27:39 +02:00
antirez
121c51f4f3 Merge branch 'lcs' into unstable 2020-04-06 13:51:55 +02:00
Salvatore Sanfilippo
af5c11874c
Merge pull request #6797 from patpatbear/issue_#6565_memory_borderline
Check OOM at script start to get stable lua OOM state.
2020-04-06 11:59:01 +02:00
Salvatore Sanfilippo
eb9ca33414
Merge pull request #6694 from oranagra/signal_modified_key
modules don't signalModifiedKey in setKey() since that's done (optionally) in RM_CloseKey
2020-04-02 19:00:20 +02:00
antirez
1010c1b43e LCS: initial functionality implemented. 2020-04-01 16:13:18 +02:00
Salvatore Sanfilippo
0c52ce6c8e
Merge pull request #7037 from guybe7/fix_module_replicate_multi
Modules: Test MULTI/EXEC replication of RM_Replicate
2020-03-31 17:00:57 +02:00
Guy Benoish
d6eb3afd13 Modules: Test MULTI/EXEC replication of RM_Replicate
Make sure call() doesn't wrap replicated commands with
a redundant MULTI/EXEC

Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX
2020-03-31 13:55:51 +03:00
antirez
1b4bc60999 Merge branch 'pubsub_patterns_boost' of https://github.com/leeyiw/redis into leeyiw-pubsub_patterns_boost 2020-03-31 12:40:08 +02:00
antirez
9dcf878f1b Fix module commands propagation double MULTI bug.
37a10cef introduced automatic wrapping of MULTI/EXEC for the
alsoPropagate API. However this collides with the built-in mechanism
already present in module.c. To avoid complex changes near Redis 6 GA
this commit introduces the ability to exclude call() MULTI/EXEC wrapping
for alsoPropagate in order to continue to use the old code paths in
module.c.
2020-03-31 11:00:45 +02:00
antirez
dd7e61d77f timeout.c created: move client timeouts code there. 2020-03-27 16:35:03 +01:00
antirez
0e22cb2680 Precise timeouts: cleanup the table on unblock.
Now that this mechanism is the sole one used for blocked clients
timeouts, it is wiser to clean up the table when the client unblocks
for any reason. We use a flag, CLIENT_IN_TO_TABLE, in order to avoid a
radix tree lookup when the client was already removed from the table
because we processed it by scanning the radix tree.
2020-03-27 16:35:03 +01:00
antirez
13e4c2a9b1 Precise timeouts: fix comments after functional change. 2020-03-27 16:35:03 +01:00
antirez
aa9d92d94a Precise timeouts: use only radix tree for timeouts. 2020-03-27 16:35:03 +01:00
antirez
324a8c91d0 Precise timeouts: working initial implementation. 2020-03-27 16:35:03 +01:00
antirez
8d116cc8a2 Precise timeouts: refactor unblocking on timeout. 2020-03-27 16:35:02 +01:00
antirez
57fa355e56 PSYNC2: meaningful offset implemented.
A very commonly signaled operational problem with Redis master-replicas
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, many times it won't be able to
perform a partial resynchronization with the new master, once it rejoins
the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its
offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.

Related to issue #7002 and other discussion we had in the past around
this problem.
2020-03-25 15:26:37 +01:00
Salvatore Sanfilippo
493a7f9823
Merge pull request #6951 from yangbodong22011/feature-bitfield-ro
Added BITFIELD_RO variants for read-only operations.
2020-03-23 11:23:21 +01:00
WuYunlong
f6029fb925 Fix master replica inconsistency for upgrading scenario.
Before this commit, when upgrading a replica, expired keys would not
be loaded, thus causing the replica to have fewer keys in its db. To this point,
the master's and replica's keys are logically consistent. However, before
the keys in the master and replica are physically consistent, that is,
before they have the same dbsize, if the master has a problem and the replica
gets promoted and becomes the new master of that partition, and the new master
updates a key which does not exist on it, but physically exists
on the old master (now the new replica), the old master would refuse to update
the key, thus causing master and replica data to be inconsistent.

How could this happen?
That's all because of the wrong judgement of roles while starting up
the server. We cannot use server.masterhost to judge whether the server
is a master or a replica, since that fails in cluster mode.

When we start the server, we load the RDB and do want to load expired keys,
but do not want to have the ability to actively expire keys, if it is
a replica.
2020-03-18 16:22:07 +08:00