# redict/tests/integration/replication.tcl
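# Integration tests for master-slave replication: SLAVEOF, BRPOPLPUSH
# propagation, initial sync and key transfer, FLUSHALL, ROLE output, and
# simultaneous multi-slave synchronization with and without diskless sync.
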
start_server {tags {"repl"}} {
    start_server {} {
        test {First server should have role slave after SLAVEOF} {
            r -1 slaveof [srv 0 host] [srv 0 port]
            wait_for_condition 50 100 {
                [s -1 role] eq {slave} &&
                [string match {*master_link_status:up*} [r -1 info replication]]
            } else {
                fail "Can't turn the instance into a slave"
            }
        }
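
        # The two tests below verify that the LPUSH side effect of
        # BRPOPLPUSH is propagated to the replica, both when the client
        # blocks on an empty list and when the source list already exists.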
        test {BRPOPLPUSH replication, when blocking against empty list} {
            set rd [redis_deferring_client]
            $rd brpoplpush a b 5
            r lpush a foo
            wait_for_condition 50 100 {
                [r debug digest] eq [r -1 debug digest]
            } else {
                fail "Master and slave have different digest: [r debug digest] VS [r -1 debug digest]"
            }
        }

        test {BRPOPLPUSH replication, list exists} {
            set rd [redis_deferring_client]
            r lpush c 1
            r lpush c 2
            r lpush c 3
            $rd brpoplpush c d 5
            after 1000
            assert_equal [r debug digest] [r -1 debug digest]
        }
    }
}
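
# Basic replication behavior: initial SYNC key transfer, link status
# transitions, command propagation from master to slave, and the ROLE
# command output on both sides of the link.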
start_server {tags {"repl"}} {
    r set mykey foo

    start_server {} {
        test {Second server should have role master at first} {
            s role
        } {master}

        test {SLAVEOF should start with link status "down"} {
            r slaveof [srv -1 host] [srv -1 port]
            s master_link_status
        } {down}

        test {The role should immediately be changed to "slave"} {
            s role
        } {slave}

        wait_for_sync r
        test {Sync should have transferred keys from master} {
            r get mykey
        } {foo}

        test {The link status should be up} {
            s master_link_status
        } {up}

        test {SET on the master should immediately propagate} {
            r -1 set mykey bar

            wait_for_condition 500 100 {
                [r 0 get mykey] eq {bar}
            } else {
                fail "SET on master did not propagate to the slave"
            }
        }

        test {FLUSHALL should replicate} {
            r -1 flushall
            if {$::valgrind} {after 2000}
            list [r -1 dbsize] [r 0 dbsize]
        } {0 0}

        test {ROLE in master reports master with a slave} {
            set res [r -1 role]
            lassign $res role offset slaves
            assert {$role eq {master}}
            assert {$offset > 0}
            assert {[llength $slaves] == 1}
            lassign [lindex $slaves 0] master_host master_port slave_offset
            assert {$slave_offset <= $offset}
        }

        test {ROLE in slave reports slave in connected state} {
            set res [r role]
            lassign $res role master_host master_port slave_state slave_offset
            assert {$role eq {slave}}
            assert {$slave_state eq {connected}}
        }
    }
}
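
# Run the multi-slave synchronization test below twice: once with
# disk-based replication (repl-diskless-sync no) and once with diskless,
# socket-based replication (repl-diskless-sync yes).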
foreach dl {no yes} {
    start_server {tags {"repl"}} {
        set master [srv 0 client]
        $master config set repl-diskless-sync $dl
        set master_host [srv 0 host]
        set master_port [srv 0 port]
        set slaves {}
        set load_handle0 [start_write_load $master_host $master_port 3]
        set load_handle1 [start_write_load $master_host $master_port 5]
        set load_handle2 [start_write_load $master_host $master_port 20]
        set load_handle3 [start_write_load $master_host $master_port 8]
        set load_handle4 [start_write_load $master_host $master_port 4]
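
        # The write loads started above keep pushing data to the master
        # while the slaves attach and synchronize, so the initial sync is
        # exercised against a changing dataset (the last start_write_load
        # argument is presumably the duration of each load in seconds).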
        start_server {} {
            lappend slaves [srv 0 client]
            start_server {} {
                lappend slaves [srv 0 client]
                start_server {} {
                    lappend slaves [srv 0 client]
test "Connect multiple slaves at the same time (issue #141), diskless=$dl" {
# Send SALVEOF commands to slaves
[lindex $slaves 0] slaveof $master_host $master_port
[lindex $slaves 1] slaveof $master_host $master_port
[lindex $slaves 2] slaveof $master_host $master_port
# Wait for all the three slaves to reach the "online"
# state from the POV of the master.
set retry 500
while {$retry} {
set info [r -3 info]
if {[string match {*slave0:*state=online*slave1:*state=online*slave2:*state=online*} $info]} {
break
} else {
incr retry -1
after 100
}
}
if {$retry == 0} {
error "assertion:Slaves not correctly synchronized"
}

                        # Wait until the slaves acknowledge they are online,
                        # so that DBSIZE and DEBUG DIGEST below don't fail
                        # because of timing issues.
                        wait_for_condition 500 100 {
                            [lindex [[lindex $slaves 0] role] 3] eq {connected} &&
                            [lindex [[lindex $slaves 1] role] 3] eq {connected} &&
                            [lindex [[lindex $slaves 2] role] 3] eq {connected}
                        } else {
                            fail "Slaves still not connected after some time"
                        }

                        # Stop the write load
                        stop_write_load $load_handle0
                        stop_write_load $load_handle1
                        stop_write_load $load_handle2
                        stop_write_load $load_handle3
                        stop_write_load $load_handle4

                        # Make sure that the slaves and the master have the
                        # same number of keys
                        wait_for_condition 500 100 {
                            [$master dbsize] == [[lindex $slaves 0] dbsize] &&
                            [$master dbsize] == [[lindex $slaves 1] dbsize] &&
                            [$master dbsize] == [[lindex $slaves 2] dbsize]
                        } else {
                            fail "Master and slaves have a different number of keys after too long a time."
                        }

                        # Check digests
                        set digest [$master debug digest]
                        set digest0 [[lindex $slaves 0] debug digest]
                        set digest1 [[lindex $slaves 1] debug digest]
                        set digest2 [[lindex $slaves 2] debug digest]
                        assert {$digest ne 0000000000000000000000000000000000000000}
                        assert {$digest eq $digest0}
                        assert {$digest eq $digest1}
                        assert {$digest eq $digest2}
                    }
                }
            }
        }
    }
}