Commit Graph

365 Commits

Author SHA1 Message Date
Oran Agra
c17e597d05
Accelerate diskless master connections, and general re-connections (#6271)
Diskless master has some inherent latencies.
1) the fork starts with a delay from cron rather than immediately
2) the replica is put online only after an ACK, but the ACK
   was sent only once per second
3) even if the ACK arrived immediately, it would not register
   if cron had not yet detected that the fork was done

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect; it waits for the replication cron (once per
second). If it was already online, it may be important to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

If it disconnected during the rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for the replication cron re-connection)

changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster, which is called
   from the REPLICAOF command but also in 3 places in cluster.c; in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica, to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that
the pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone
calls replicationStartPendingFork, which handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug: the max_idle check should have used >= and not >.
This one-second delay has a big impact on my new test.
2020-08-06 16:53:06 +03:00
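
A minimal C sketch of change 1) above, using hypothetical names
(repl_diskless_sync_delay, child_active, start_bgsave) rather than the
real Redis internals:

    #include <stdbool.h>

    static int  repl_diskless_sync_delay = 0;  /* seconds; 0 = no delay */
    static bool child_active = false;          /* a fork is already running */

    static void start_bgsave(void) { /* fork() + rdb transfer goes here */ }

    /* On SYNC: fork right away when no delay is configured, instead of
     * waiting for the next replicationCron() tick (up to 1s later). */
    void on_sync_command(void) {
        if (repl_diskless_sync_delay == 0 && !child_active)
            start_bgsave();
    }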
Oran Agra
f7e7775990
module hook for master link up missing on successful psync (#7584)
Besides, the hooks test was time sensitive: when the replica managed to
reconnect quickly after the client kill, the test would fail.
2020-07-31 13:14:29 +03:00
zhaozhao.zz
13e50935a8
replication: need to handle -NOPERM error after sending ping (#7538) 2020-07-20 22:21:55 +08:00
Oran Agra
a176cb56a3
diskless master disconnect replicas when rdb child failed (#7518)
If the rdb child failed, crashed, or terminated unexpectedly, redis
would have marked the replica clients with repl_put_online_on_ack and
then killed them only after a minute, when no ack was received.

It would not stream anything to these connections, so the only effect of
this bug was a delay of 1 minute in the replicas' attempts to re-connect.
2020-07-14 20:21:59 +03:00
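
A hedged sketch of the fix (hypothetical replica list, not the actual
Redis structures): when the rdb child exits with an error, every replica
still waiting on that transfer is dropped at once, so it re-connects
immediately instead of idling until the ack timeout:

    #include <stddef.h>

    struct replica { struct replica *next; int wait_bgsave_end; };

    void free_replica(struct replica *r);   /* closes the connection */

    void on_rdb_child_exit(struct replica *head, int exit_ok) {
        while (head != NULL) {
            struct replica *next = head->next;   /* save before freeing */
            if (!exit_ok && head->wait_bgsave_end)
                free_replica(head);
            head = next;
        }
    }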
antirez
61074b43a6 Revert "Implements sendfile for redis."
This reverts commit 9cf500a3f6.
2020-06-06 11:42:45 +02:00
antirez
d1e23e04aa Revert "avoid using sendfile if tls-replication is enabled"
This reverts commit b9abecfc4c.
2020-06-06 11:42:41 +02:00
antirez
6a16a636bf Replication: showLatestBacklog() refactored out. 2020-05-28 10:08:16 +02:00
antirez
484af8ed53 Drop useless line from replicationCacheMaster(). 2020-05-27 17:08:51 +02:00
antirez
22472fe5a1 Remove the meaningful offset feature.
After a closer look, the Redis core developers all believe that this was
too fragile, caused many bugs that we didn't expect, and that these were
very hard to track. Better to find an alternative solution that is simpler.
2020-05-27 12:06:33 +02:00
antirez
94c026cd19 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-05-26 23:55:52 +02:00
antirez
7d35939206 Replication: log backlog creation event. 2020-05-26 23:55:18 +02:00
Salvatore Sanfilippo
caf7c50408
Merge pull request #7328 from oranagra/daily_tls_test
avoid using sendfile if tls-replication is enabled
2020-05-26 13:19:55 +02:00
Oran Agra
b9abecfc4c avoid using sendfile if tls-replication is enabled
this obviously broke the tests, but went unnoticed so far since TLS
wasn't often tested.
2020-05-26 13:52:06 +03:00
antirez
92a3ff6168 Clarify what is happening in PR #7320. 2020-05-25 11:47:38 +02:00
zhaozhao.zz
eec769be59 PSYNC2: second_replid_offset should be real meaningful offset
After adjustMeaningfulReplOffset(), all the other related variables
should be updated, including server.second_replid_offset.

Otherwise an old-version redis like 5.0 may receive wrong data from the
replication stream, because redis 5.0 can sync with redis 6.0 but
doesn't know about the meaningful offset.
2020-05-25 11:17:54 +08:00
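
A small sketch of the invariant this commit restores, with hypothetical
names (the exact rule in Redis differs): a trimmed primary offset must
drag every derived offset down with it:

    /* After trimming master_repl_offset down to the meaningful offset,
     * second_replid_offset must not point past the new end of history. */
    void adjust_offsets(long long *master_repl_offset,
                        long long *second_replid_offset,
                        long long meaningful_offset) {
        if (*second_replid_offset > meaningful_offset)
            *second_replid_offset = meaningful_offset;
        *master_repl_offset = meaningful_offset;
    }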
antirez
adc5df1bc3 Make disconnectSlaves() synchronous in the base case.
Otherwise we run into this:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

We disconnect all the replicas and free the replication backlog in
certain replication paths, and the code that frees the replication
backlog expects that no replica is connected.

However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().
2020-05-22 19:29:09 +02:00
antirez
07c6bee78f Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-05-22 16:31:05 +02:00
antirez
b407590cee Fix #7306 less aggressively.
Citing from the issue:

btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:

    CLIENT KILL TYPE master

May crash: clientCommand() starts scanning the linked list of clients,
looking for clients to kill. However it finds the master and kills it
by calling freeClient(), but this in turn calls replicationCacheMaster()
which may now also call disconnectSlaves(). So the linked list iterator
of clientCommand() will no longer be valid.
2020-05-22 16:29:53 +02:00
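
The iterator-invalidation hazard described above is generic; a minimal
sketch (hypothetical types, not the Redis client list) of the safe
pattern, flag now and free later from the top level:

    #include <stddef.h>

    struct client { struct client *next; int flags; };
    #define CLIENT_CLOSE_ASAP (1<<0)

    /* Freeing a node mid-scan may recursively disconnect other clients
     * and invalidate the iterator; only mark it, and let a later,
     * top-level pass do the actual free. */
    void kill_matching_clients(struct client *head, int match_flags) {
        for (struct client *c = head; c != NULL; c = c->next)
            if (c->flags & match_flags)
                c->flags |= CLIENT_CLOSE_ASAP;
    }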
Salvatore Sanfilippo
40161553e2
Merge pull request #7096 from ShooterIT/sendfile
Implements sendfile for redis.
2020-05-22 13:55:46 +02:00
Salvatore Sanfilippo
285817b28a
Merge pull request #7305 from madolson/unstable-connection
EAGAIN not handled for TLS during diskless load
2020-05-22 12:25:40 +02:00
Qu Chen
42f5da5d2d Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid replication offset mismatch between master and chained replicas. 2020-05-21 18:42:10 -07:00
Madelyn Olson
5109f16b77 EAGAIN for tls during diskless load 2020-05-21 15:20:59 -07:00
antirez
624742d9b4 Remove the client from CLOSE_ASAP list before caching the master.
This was broken in 1a7cd2c: we identified a crash in the CI. What
was happening before the fix was likely the following:

1. The client gets in the async free list.
2. However freeClient() gets called again against the same client
   which is a master.
3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
   list.
5. The master client that was cached was immediately removed since it
   was still in the list.
6. Redis accessed a freed cached master.

This is how the crash looked:

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)

------ STACK TRACE ------
EIP:
src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]

And the 0xffff address access likely comes from accessing an SDS that is
set to NULL (we read the header at offset -1).
2020-05-16 17:15:35 +02:00
antirez
1a7cd2c0e2 Cache master without checking deferred close flags.
The context is issue #7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, to later
PSYNC incrementally, if such flags are set. However we also don't want
the master client to be cached with such flags (would be closed
immediately after being restored). And yet we want a way to understand
if a master was closed because of a protocol error, and in that case
prevent the caching.
2020-05-15 10:19:13 +02:00
Oran Agra
4447ddc8bb Keep track of meaningful replication offset in replicas too
Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the trailing pings), and both
trim that tail from the replication backlog and from the offset
they try to use for psync.

The implication is that if a replica missed some pings, or even has
excessive pings that the promoted replica lacks, it'll still be able to
psync (avoid a full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings

Background:
The meaningful offset on the master was added recently to solve a problem where
the master is left all alone, injecting PINGs into its backlog when no one is
listening, and then gets demoted and tries to replicate from a replica that didn't
have any of the PINGs (or at least not the last ones).

however, consider this case:
master A has two replicas (B and C) replicating directly from it.
there's no traffic at all, and also no network issues, just many pings in the
tail of the backlog. now B gets promoted, A becomes a replica of B, and C
remains a replica of A. when A gets demoted, it trims the pings from its
backlog and successfully replicates from B. however, C is still aware of
these PINGs; when it disconnects and re-connects to A, it'll ask for something
that's no longer in the backlog (since A trimmed the tail of its backlog),
and be forced to do a full sync (something it didn't have to do before the
meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there;
it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

so the issue here is that the meaningful offset feature saved the day for the
demoted master (since it needed to sync from a replica that didn't get the last
ping), but it didn't help one of the other replicas, which did get the last ping.
2020-04-27 15:52:23 +02:00
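
A sketch of the trimming arithmetic implied above, for a circular
backlog with hypothetical field names (assuming delta <= histlen <= size):

    /* Logically drop the last 'delta' bytes (the trailing PINGs) by
     * rewinding the write index and shrinking the history length. */
    void backlog_rewind(long long *idx, long long *histlen,
                        long long size, long long delta) {
        *idx = ((*idx - delta) % size + size) % size;  /* wrap backwards */
        *histlen -= delta;
    }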
ShooterIT
9cf500a3f6 Implements sendfile for redis. 2020-04-14 23:56:34 +08:00
zhaozhao.zz
36ee294e83 PSYNC2: reset backlog_idx and master_repl_offset correctly 2020-03-28 20:59:01 +08:00
antirez
97f1c808ce PSYNC2: fix backlog_idx when adjusting for meaningful offset
See #7002.
2020-03-27 16:20:02 +01:00
antirez
57fa355e56 PSYNC2: meaningful offset implemented.
A very commonly signaled operational problem with Redis master-replica
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, many times it won't be able to
perform a partial resynchronization with the new master once it rejoins
the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its
offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.

Related to issue #7002 and other discussion we had in the past around
this problem.
2020-03-25 15:26:37 +01:00
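
A sketch of the core bookkeeping (hypothetical names; the Redis field
was called master_repl_meaningful_offset): the offset advances for every
byte fed to the replication stream, but the "meaningful" mark only
advances for non-PING traffic:

    static long long master_repl_offset = 0;  /* includes trailing PINGs   */
    static long long meaningful_offset  = 0;  /* last non-PING byte offset */

    void feed_repl_stream(long long nbytes, int is_ping) {
        master_repl_offset += nbytes;
        if (!is_ping)
            meaningful_offset = master_repl_offset;  /* PSYNC uses this */
    }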
antirez
c80d81c80a Improve comments of replicationCacheMasterUsingMyself(). 2020-03-23 16:17:35 +01:00
antirez
2091d73ef5 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-03-12 13:25:01 +01:00
Salvatore Sanfilippo
13707f988b
Merge pull request #6687 from jtru/systemd-integration-fixes
Signal systemd readiness after Partial Resync
2020-03-06 13:15:10 +01:00
antirez
3bab69e921 Make sync RDB deletion configurable. Default to no. 2020-03-04 17:44:21 +01:00
antirez
d16bb64fa4 Check that the file exists in removeRDBUsedToSyncReplicas(). 2020-03-04 12:55:49 +01:00
antirez
ff024e3064 Log RDB deletion in persistence-less instances. 2020-03-04 11:19:55 +01:00
antirez
f0acdee4c5 Introduce bg_unlink(). 2020-03-04 11:10:54 +01:00
antirez
5e2319c326 Remove RDB files used for replication in persistence-less instances. 2020-03-03 14:58:15 +01:00
Hengjian Tang
1ab5508324 modify the read buffer size to match the write buffer size PROTO_IOBUF_LEN defined earlier 2020-02-25 15:55:28 +08:00
Salvatore Sanfilippo
9c00bdd86e
Merge pull request #6822 from guybe7/diskless_load_module_hook_fix
Diskless-load emptyDb-related fixes
2020-02-06 13:10:00 +01:00
Guy Benoish
92dc5e1fa4 Diskless-load emptyDb-related fixes
1. Call emptyDb even in case of diskless-load: We want modules
   to get the same FLUSHDB event as disk-based replication.
2. Do not fire any module events when flushing the backups array.
3. Delete redundant call to signalFlushedDb (Called from emptyDb).
2020-02-06 16:48:02 +05:30
Salvatore Sanfilippo
5558c0e4cf
Merge pull request #6848 from oranagra/opt_use_diskless_load_calls
reduce repeated calls to use_diskless_load
2020-02-06 10:30:39 +01:00
Oran Agra
aac6a4cf13 move restartAOFAfterSYNC from replicaofCommand to replicationUnsetMaster
replicationUnsetMaster can be called from other places, not just
replicaofCommand, and all of these need to restart AOF
2020-02-06 10:14:32 +02:00
Oran Agra
485d5d4a18 reduce repeated calls to use_diskless_load
this function possibly iterates over the module list
2020-02-06 09:41:45 +02:00
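
The optimization is plain hoisting; a sketch with hypothetical names:

    int use_diskless_load(void);   /* may scan the module list: not cheap */

    void sync_with_master(void) {
        int diskless = use_diskless_load();   /* computed once per sync */
        /* ... every later branch tests 'diskless' instead of calling
         * use_diskless_load() again ... */
        (void)diskless;
    }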
ShooterIT
2bc8db9ca5 Rename rdb asynchronously 2019-12-31 21:45:32 +08:00
Johannes Truschnigg
dfb598cf33 Signal systemd readiness after Partial Resync
"Partial Resynchronization" is a special variant of replication success
that we have to tell systemd about if it is managing redis-server via a
Type=notify service unit.
2019-12-19 21:47:24 +01:00
Johannes Truschnigg
641c64ada1 Use libsystemd's sd_notify for communicating redis status to systemd
Instead of replicating a subset of libsystemd's sd_notify(3) internally,
use the dynamic library provided by systemd to communicate with the
service manager.

When systemd supervision was auto-detected or configured, communicate
the actual server status (i.e. "Loading dataset", "Waiting for
master<->replica sync") to systemd, instead of declaring readiness right
after initializing the server process.
2019-11-19 18:55:44 +02:00
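
For reference, the libsystemd API this commit switches to is sd_notify
from <systemd/sd-daemon.h> (link with -lsystemd); the status strings
below are the ones quoted above:

    #include <systemd/sd-daemon.h>

    void report_to_systemd(void) {
        sd_notify(0, "STATUS=Loading dataset");
        sd_notify(0, "STATUS=Waiting for master<->replica sync");
        sd_notify(0, "READY=1");   /* only once actually ready to serve */
    }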
Oran Agra
51c3ff8d75 Modules hooks: complete missing hooks for the initial set of hooks
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way hooks test work, and add tests for all of the above

startLoading() now gets a flag indicating what is being loaded.
stopLoading() now gets an indication of success or failure.
Adding startSaving() and stopSaving() with similar args and role.
2019-10-29 17:59:09 +02:00
Wander Hillen
fb1f4f4e7e
Merge branch 'unstable' into minor-typos 2019-10-25 10:18:26 +02:00
Yossi Gottlieb
0db3b0a0ff Merge remote-tracking branch 'upstream/unstable' into tls 2019-10-16 17:08:07 +03:00
Salvatore Sanfilippo
8e3d910d4e
Merge pull request #6429 from charsyam/feature/typo-slave
[trivial] fix typos salves to slaves in replication.c
2019-10-10 14:56:43 +02:00
antirez
747be463d2 Cluster: fix memory leak of cached master.
This is what happened:

1. Instance starts, is a slave in the cluster configuration, but
actually server.masterhost is not set, so technically the instance
is acting like a master.

2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
the instance is a master, in case it is logically a slave and cluster
mode is enabled. So now we have a cached master even if the instance
is practically configured as a master (from the POV of the
server.masterhost value and so forth).

3. clusterCron() sees that the instance needs to replicate from its
master, because logically it is a slave, so it calls
replicationSetMaster() which in turn calls
replicationCacheMasterUsingMyself(): before this commit, this call would
overwrite the old cached master, creating a memory leak.
2019-10-10 10:23:34 +02:00
Yossi Gottlieb
61733ded14 TLS: Configuration options.
Add configuration options for TLS protocol versions, ciphers/cipher
suites selection, etc.
2019-10-07 21:07:27 +03:00
Oran Agra
5a47794606 diskless replication rdb transfer uses a pipe, and writes to sockets from the parent process.
misc:
- handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 for aeProcessEvents
- fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
- add key-load-delay config for testing
- trim connShutdown which is no longer needed
- rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
- don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
- clean up a bad optimization in rio.c, add another one
2019-10-07 21:06:30 +03:00
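
A condensed sketch of the pipe arrangement (hypothetical function,
single replica): the rdb child writes the payload into a pipe, and the
parent, which owns the replica sockets and any TLS state, relays it:

    #include <unistd.h>

    void relay_rdb(int pipe_rd, int replica_fd) {
        char buf[16 * 1024];
        ssize_t n;
        while ((n = read(pipe_rd, buf, sizeof(buf))) > 0) {
            /* sketch: a real relay must handle partial writes and
             * non-blocking sockets; here we just bail out on trouble */
            if (write(replica_fd, buf, (size_t)n) != n) return;
        }
        /* n == 0: child closed the pipe => transfer complete */
    }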
Yossi Gottlieb
b087dd1db6 TLS: Connections refactoring and TLS support.
* Introduce a connection abstraction layer for all socket operations and
integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
2019-10-07 21:06:13 +03:00
charsyam
bea0384f5d fix typo salves to slaves 2019-10-07 23:48:11 +09:00
antirez
de1f82aa33 Function renamed hasForkChild() -> hasActiveChildProcess(). 2019-09-27 12:03:09 +02:00
Salvatore Sanfilippo
6129758558
Merge branch 'unstable' into modules_fork 2019-09-27 11:24:06 +02:00
Salvatore Sanfilippo
959fb5cf68
Merge pull request #6235 from oranagra/module_rdb_load_errors
Allow modules to handle RDB loading errors.
2019-09-26 11:52:42 +02:00
antirez
0e9b5adbd3 Replication: clarify why repl_put_online_on_ack exists at all. 2019-08-05 17:38:15 +02:00
Oran Agra
40c4183196 Log message when modules prevent diskless-load 2019-07-30 16:32:58 +03:00
Oran Agra
4339706e07 Avoid diskless-load if modules did not declare they handle read errors 2019-07-30 15:11:57 +03:00
Oran Agra
56258c6b7d Module API for Forking
* create module API for forking child processes.
* refactor duplicate code around creating and tracking forks by AOF and RDB.
* child processes listen to SIGUSR1 and die via exitFromChild in order to
  eliminate a valgrind warning about an unhandled signal.
* note that BGSAVE error reply has changed.

valgrind error is:
  Process terminating with default action of signal 10 (SIGUSR1)
2019-07-17 16:40:24 +03:00
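
A sketch of the SIGUSR1 detail above, assuming a hypothetical fork
helper: the child installs a handler that exits cleanly, so being told
to stop no longer looks like death by unhandled signal under valgrind:

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void exit_from_child(int sig) {
        (void)sig;
        _exit(0);   /* leave immediately, skipping atexit handlers */
    }

    pid_t fork_child(void) {
        pid_t pid = fork();
        if (pid == 0)
            signal(SIGUSR1, exit_from_child);   /* child side */
        return pid;
    }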
antirez
7c9f6ebc8f Diskless replica: fix disklessLoadRestoreBackups() bug. 2019-07-10 12:36:26 +02:00
antirez
3bbb9a1413 Diskless replica: refactoring of DBs backups. 2019-07-10 11:42:26 +02:00
antirez
81b18fa3a0 Diskless replica: a few aesthetic changes to replication.c. 2019-07-08 18:32:47 +02:00
Oran Agra
2de544cfcc diskless replication on slave side (don't store rdb to file), plus some other related fixes
The implementation of diskless replication was so far diskless only on the master side.
The slave side was still storing the received rdb file to disk before loading it back in and parsing it.

This commit adds two modes to load rdb directly from socket:
1) when-empty
2) using "swapdb"
the third mode, a diskless slave via flushdb, is risky and currently not included.

other changes:
--------------
distinguish between aof configuration and state, so that we can re-enable aof only when sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
also, CONFIG GET and INFO during rdb loading would have lied

When loading rdb from the network, don't kill the server on short read (that can be a network error)

Fix rdb check when performed on preamble AOF

tests:
run replication tests for diskless slave too
make replication test a bit more aggressive
Add test for diskless load swapdb
2019-07-08 15:37:48 +03:00
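
A sketch of the "swapdb" idea, with hypothetical types: park the current
databases in a backup array while loading from the socket, and on a
short read swap them back instead of killing the server:

    typedef struct dict dict;   /* opaque per-db keyspace */

    void swap_db_arrays(dict **dbs, dict **backup, int num) {
        for (int i = 0; i < num; i++) {
            dict *tmp = dbs[i];     /* after a failed diskless load, a   */
            dbs[i]    = backup[i];  /* second call restores the original */
            backup[i] = tmp;        /* pre-sync databases                */
        }
    }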
antirez
074d24df1e Narrow the effects of PR #6029 to the exact state.
CLIENT PAUSE may be used, in other contexts, for a long time, making all
the slaves time out. Better for now to be more specific about what
should disable sending PINGs.

An alternative to that would be to virtually refresh the slave
interactions when clients are paused, however for now I went for this
more conservative solution.
2019-05-15 12:16:43 +02:00
Salvatore Sanfilippo
caf74e507e
Merge pull request #6029 from chendq8/clientpause
fix cluster failover timeout
2019-05-15 12:03:19 +02:00
chendianqiang
11f2c6b115 stop pings when clients are paused 2019-04-17 21:20:10 +08:00
Salvatore Sanfilippo
fcac342955
Merge pull request #3830 from oranagra/diskless_capa_pr
several bugfixes to diskless replication
2019-03-22 17:41:40 +01:00
antirez
b3408e9a9b More sensible name for function: restartAOFAfterSYNC().
Related to #3829.
2019-03-21 17:21:29 +01:00
antirez
9588fd52ac Mostly aesthetic changes to restartAOF().
See #3829.
2019-03-21 17:18:24 +01:00
Oran Agra
b2e03f8329 diskless replication - notify slave when rdb transfer failed
in diskless replication the master was not notifying the slave that the rdb
transfer terminated on error, letting the slave wait for the replication timeout
2019-03-20 17:46:19 +02:00
oranagra
c9e2900efc bugfix to restartAOF: exit will never happen since retry will go negative.
also reduce an excess sleep
2019-03-20 17:20:07 +02:00
antirez
14b17c3615 replicaofCommand() refactoring: stay within 80 cols. 2019-03-18 11:34:40 +01:00
antirez
8a46d32be2 Make comment in #5911 stay inside 80 cols. 2019-03-10 09:48:06 +01:00
John Sully
5b52bc738b Replicas aren't allowed to run the replicaof command 2019-03-09 11:04:48 -05:00
zhaozhao.zz
ea9d3aefec ACL: add masteruser configuration for replication
In most production environments, normal users' behavior should be
limited.

Now in redis ACL mechanism we can do it like that:

    user default on +@all ~* -@dangerous nopass
    user admin on +@all ~* >someSeriousPassword

Then the default normal user cannot execute dangerous commands like
FLUSHALL/KEYS.

But some admin commands are in the dangerous category too, like PSYNC,
and the configuration above will forbid the replica from syncing with
the master.

Finally I think we could add a new configuration option for replication,
the masteruser option, like this:

    masteruser admin
    masterauth someSeriousPassword

Then the replica will try AUTH admin someSeriousPassword and get the
privilege to execute PSYNC. If masteruser is NULL, the replica will AUTH
with only masterauth, as before.
2019-02-12 17:12:37 +08:00
ArkayZheng
76f20729fc Fix the output bug in rename exceptions. 2019-01-25 21:48:23 +08:00
antirez
4dc69497f5 Refactoring: always kill AOF/RDB child via helper functions. 2019-01-21 11:28:44 +01:00
Salvatore Sanfilippo
adfaf548e3
Merge branch 'unstable' into fixChildInfoPipeFdLeak 2019-01-21 11:20:56 +01:00
Salvatore Sanfilippo
9f939610f3
Merge pull request #5797 from trevor211/fixUpdateDictResizePolicy
Fix update dict resize policy
2019-01-21 11:14:48 +01:00
WuYunlong
440385de14 Fix child info pipe fd leak when child process gets killed. 2019-01-21 17:48:45 +08:00
WuYunlong
f004a3e7ff Update dict resize policy when rdb child process gets killed. 2019-01-21 17:33:18 +08:00
antirez
2c66c525f9 ACL: configure the master connection without user. 2019-01-17 18:33:36 +01:00
antirez
709a6612eb RESP3: addReplyString() -> addReplyProto().
The function naming was totally nuts. Let's fix it as we break PRs
anyway with RESP3 refactoring and changes.
2019-01-09 17:00:30 +01:00
antirez
07bce54093 RESP3: Use new deferred len API in replication.c. 2019-01-09 17:00:29 +01:00
antirez
06a4acb7d3 When replica kills a pending RDB save during SYNC, log it.
This logs what happens in the context of the fix in PR #5367.
2018-10-31 11:47:10 +01:00
Salvatore Sanfilippo
6204d8c139
Merge pull request #5367 from nUl1/fullresync-stopbgsave
Prevent RDB autosave from overwriting full resync results
2018-10-31 11:42:04 +01:00
antirez
3d07ed983e Fix typo in replicationCron() comment. 2018-10-05 18:30:45 +02:00
Andrey Bugaevskiy
466c277b4f Move child termination to readSyncBulkPayload 2018-09-27 19:38:58 +03:00
Andrey Bugaevskiy
98a64523c4 Prevent RDB autosave from overwriting full resync results
During the full database resync we may still have unsaved changes
on the receiving side. This causes a race condition between the
synced data rename/load and the rename of the rdbSave tempfile.
2018-09-19 19:58:39 +03:00
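
A sketch of the race avoidance, with hypothetical helpers: make sure no
autosave child is left that could rename its own tempfile over the rdb
we just received:

    int  bgsave_in_progress(void);   /* hypothetical */
    void kill_bgsave_child(void);    /* hypothetical */
    int  rename_synced_rdb(void);    /* rename(tempfile, "dump.rdb") */

    void finish_full_resync(void) {
        if (bgsave_in_progress())
            kill_bgsave_child();     /* its tempfile must not win the race */
        rename_synced_rdb();
    }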
antirez
61b7a176ef Slave removal: replication.c logs fixed. 2018-09-11 15:32:28 +02:00
antirez
ef2c7a5bbb Slave removal: SLAVEOF -> REPLICAOF. SLAVEOF is now an alias. 2018-09-11 15:32:28 +02:00
Oran Agra
d55598988b fix rare replication stream corruption with disk-based replication
The slave sends \n keepalive messages to the master while parsing the rdb,
and later sends REPLCONF ACK once a second. Rarely, the master receives both
a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n...
and it tries to trim two chars (\r\n) from the query buffer,
trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n...

The master then tries to process a command starting with '3' and replies to
the slave with a bunch of -ERR and one +OK.
Although the slave silently ignores these (it prints a log message), this
corrupts the replication offset at the slave, since the slave increases the
replication offset and the master does not.

Other than the fix in processInlineBuffer, I made several other improvements
while hunting this very rare bug.

- when redis replies with "unknown command" it includes a portion of the
  arguments, not just the command name, so it is easier to understand
  what was received. In my case, on the slave side, it was -ERR, but
  the "arguments" were the interesting part (containing info on the error).
- about a year ago I added code in addReplyErrorLength to print the error to
  the log in case of a reply to a master (since this string isn't actually
  transmitted to the master); now I changed that block to print a similar log
  message to indicate an error being sent from the master to the slave.
  Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received,
  so this will not cause any harm for REPLCONF, and will only indicate problems
  that are going to corrupt the replication stream anyway.
- in two places where c->reply was emptied, I now also reset sentlen.
  This is a precaution (I did not actually see such a problem), since a
  non-zero sentlen would cause corruption to be transmitted on the socket.
2018-07-17 12:51:49 +03:00
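
A sketch of the parsing hazard (hypothetical helper, not
processInlineBuffer itself): a lone '\n' keepalive must consume exactly
one byte; assuming "\r\n" and skipping two eats the '*' of a following
multibulk command:

    #include <stddef.h>

    /* How many leading bytes belong to a newline keepalive:
     * "\r\n" -> 2, bare "\n" -> 1, anything else -> 0. */
    size_t keepalive_len(const char *buf, size_t len) {
        if (len >= 2 && buf[0] == '\r' && buf[1] == '\n') return 2;
        if (len >= 1 && buf[0] == '\n') return 1;
        return 0;
    }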
Oran Agra
bf680b6f8c slave buffers were wasteful and incorrectly counted, causing eviction
A) slave buffers didn't count internal fragmentation and sds unused space;
   this caused them to induce eviction although we didn't mean for it.

B) slave buffers were consuming about twice the memory they actually needed.
- this was mainly due to sdsMakeRoomFor growing to twice as much as needed each time
  but networking.c not storing more than 16k (partially fixed recently in 237a38737).
- besides, it wasn't able to store half of the new string into one buffer and the
  other half into the next (so the above-mentioned fix helped mainly for small items).
- lastly, the sds buffers had up to 30% internal fragmentation that was wasted,
  consumed but not used.

C) inefficient performance due to starting from a small string and reallocing many times.

what i changed:
- creating dedicated buffers for the reply list, counting their size with zmalloc_size
- when creating a new reply node, preallocate it to at least 16k.
- when appending a new reply to the buffer, first fill all the unused space of the
  previous node before starting a new one.

other changes:
- expose mem_not_counted_for_evict info field for the benefit of the test suite
- add a test to make sure slave buffers are counted correctly and that they don't cause eviction
2018-07-16 16:43:42 +03:00
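
A sketch of the append strategy under "what i changed" (hypothetical
node layout): fill the tail node's spare capacity first, and only
allocate a fresh, preallocated node for the leftover:

    #include <string.h>

    struct reply_node { size_t size, used; char buf[]; };

    /* Copy as much of (s, len) as fits into the tail node; returns the
     * number of leftover bytes the caller must put in a new node. */
    size_t fill_tail(struct reply_node *tail, const char *s, size_t len) {
        size_t avail = tail->size - tail->used;
        size_t n = len < avail ? len : avail;
        memcpy(tail->buf + tail->used, s, n);
        tail->used += n;
        return len - n;
    }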
Jack Drogon
93238575f7 Fix typo 2018-07-03 18:19:46 +02:00
antirez
677d10b2a8 Set repl_down_since to zero on state change.
PR #5081 fixes an "interesting" bug about Redis Cluster failover, but in
general about the updating of repl_down_since, which is used to count
the time a slave was left disconnected from its master.

While the fix provided resolves the specific issue, in general the
validity of repl_down_since is limited to states different from
CONNECTED, and the disconnected time is set when the state becomes
DISCONNECTED. However from CONNECTED to other states, the state
machine must always go to DISCONNECTED first. So it makes sense to set
the field to zero (since it is meaningless in that context) when the
state is set to CONNECTED.
2018-07-03 12:42:14 +02:00
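
The rule in code form, with hypothetical names:

    #include <time.h>

    enum repl_state { REPL_CONNECTED, REPL_DISCONNECTED /* ... */ };

    static time_t repl_down_since;

    void set_repl_state(enum repl_state s) {
        if (s == REPL_CONNECTED)
            repl_down_since = 0;            /* meaningless while connected */
        else if (s == REPL_DISCONNECTED)
            repl_down_since = time(NULL);   /* start counting downtime */
    }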
WuYunlong
2e167f7d0e fix server.repl_down_since resetting, so that slaves can fail over
automatically as expected.
2018-06-30 09:39:08 +08:00
antirez
27178a3fde Fix type of argslen in sendSynchronousCommand().
Related to #5037.
2018-06-26 14:38:35 +02:00