Same as the original bind fixes (we just missed these the
first time around).
This prevents Redis from sending outgoing connections from the first IP
on an interface when we are bound to a specific IP address (e.g. with
multiple IP aliases on one interface, you want to send from _your_ IP,
not from the first IP on the interface).
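A minimal sketch of the underlying idea, using plain POSIX sockets and a
hypothetical helper (this is not the actual anet.c code): bind the
outgoing socket to the configured local address before connect(), so the
kernel uses that address as the source instead of picking the first IP
on the interface.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    /* Connect to remote_ip:port using local_ip as the source address.
     * Sketch only: error checks are minimal. */
    int connect_from(const char *local_ip, const char *remote_ip, int port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return -1;

        struct sockaddr_in local;
        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_port = 0; /* any source port */
        inet_pton(AF_INET, local_ip, &local.sin_addr);

        /* Without this bind() the kernel picks the source address itself,
         * typically the first IP on the outgoing interface. */
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) == -1) {
            close(fd);
            return -1;
        }

        struct sockaddr_in remote;
        memset(&remote, 0, sizeof(remote));
        remote.sin_family = AF_INET;
        remote.sin_port = htons(port);
        inet_pton(AF_INET, remote_ip, &remote.sin_addr);

        if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) == -1) {
            close(fd);
            return -1;
        }
        return fd;
    }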
This caused BGSAVE to be triggered a second time without any need when
we switch from socket to disk target via the command
CONFIG SET repl-diskless-sync no
and there is already a slave waiting for the BGSAVE to start.
Comments were also clarified to explain what is happening.
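A minimal, runnable sketch of the guard, with made-up structure and
function names standing in for the real server state: only start a
BGSAVE for waiting slaves when no save is already in progress, so the
configuration switch cannot trigger a second one.

    #include <stdio.h>

    /* Minimal stand-ins for the real server state (hypothetical). */
    struct repl_state {
        int rdb_child_pid;          /* -1 when no RDB child is active */
        int slaves_waiting_bgsave;  /* slaves in WAIT_BGSAVE_START */
    };

    /* Only start a new BGSAVE for waiting slaves if none is running. */
    static void maybe_start_bgsave(struct repl_state *st) {
        if (st->rdb_child_pid != -1) return;   /* save already active */
        if (st->slaves_waiting_bgsave > 0) {
            printf("starting BGSAVE for %d waiting slave(s)\n",
                   st->slaves_waiting_bgsave);
            st->rdb_child_pid = 12345;         /* pretend we forked */
        }
    }

    int main(void) {
        struct repl_state st = { .rdb_child_pid = -1,
                                 .slaves_waiting_bgsave = 1 };
        maybe_start_bgsave(&st);  /* starts one save */
        maybe_start_bgsave(&st);  /* no second save: child is active */
        return 0;
    }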
With the fdset rio target, EWOULDBLOCK is returned when we try to write
but the send timeout socket option triggered an error. It is better to
translate the error into something the user can actually recognize as a
timeout.
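A hedged sketch of the translation in plain C (not the actual rio fdset
code): when a write on a socket with SO_SNDTIMEO set fails with
EWOULDBLOCK/EAGAIN, report ETIMEDOUT instead, so the user sees a
recognizable timeout.

    #include <errno.h>
    #include <unistd.h>

    /* Write to a socket that has SO_SNDTIMEO set. If the send timeout
     * fires, the kernel returns EWOULDBLOCK/EAGAIN; translate that into
     * ETIMEDOUT so callers (and logs) see a proper timeout error. */
    static ssize_t write_with_timeout_errno(int fd, const void *buf,
                                            size_t len) {
        ssize_t nwritten = write(fd, buf, len);
        if (nwritten == -1 && (errno == EWOULDBLOCK || errno == EAGAIN))
            errno = ETIMEDOUT;
        return nwritten;
    }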
We need to prevent a child -> slaves transfer from continuing forever.
We use the same timeout as the global replication timeout, which is
documented to also affect I/O operations during bulk transfers.
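A rough sketch, with hypothetical names, of how the transfer loop can
enforce that limit: track the time of the last successful write and
abort once repl_timeout seconds pass without progress.

    #include <time.h>

    /* Return non-zero if no progress was made for repl_timeout seconds.
     * Hypothetical helper, not the actual child code. */
    static int transfer_timed_out(time_t last_progress, int repl_timeout) {
        return (time(NULL) - last_progress) > repl_timeout;
    }

    /* Usage inside the child -> slaves transfer loop (sketch):
     *
     *   time_t last = time(NULL);
     *   while (more_data) {
     *       if (write_chunk(...) > 0) last = time(NULL);
     *       if (transfer_timed_out(last, repl_timeout)) break;  // abort
     *   }
     */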
Performing a socket write() for each RDB rio API write call was
extremely inefficient, so rio now has minimal buffering capabilities.
Writes are accumulated into a buffer, and only when a given limit is
reached are they actually written to the N slave FDs (a sketch of the
idea follows the note below).
Trivia: rio lacked support for buffering since our targets were:
1) Memory buffers.
2) C standard I/O.
Both were buffered already.
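Here is a simplified sketch of the buffering idea with hypothetical
names (not the actual rio.c code): small writes accumulate in a fixed
buffer, and the destination FDs are only touched when the buffer fills
or an explicit flush is requested.

    #include <string.h>
    #include <unistd.h>

    #define WBUF_SIZE (1024*1024)   /* flush threshold, sketch value */

    /* Buffered writer over N file descriptors (hypothetical). */
    typedef struct {
        int *fds;
        int numfds;
        char buf[WBUF_SIZE];
        size_t used;
    } fdset_writer;

    /* Write the buffered bytes to every fd, retrying short writes so
     * each slave receives the full buffer. */
    static int fdset_flush(fdset_writer *w) {
        for (int j = 0; j < w->numfds; j++) {
            size_t off = 0;
            while (off < w->used) {
                ssize_t n = write(w->fds[j], w->buf + off, w->used - off);
                if (n <= 0) return -1;
                off += (size_t)n;
            }
        }
        w->used = 0;
        return 0;
    }

    /* Accumulate small writes; only hit the sockets when the buffer
     * fills, instead of issuing one write() per rio API call. */
    static int fdset_write(fdset_writer *w, const void *p, size_t len) {
        while (len) {
            size_t avail = WBUF_SIZE - w->used;
            size_t chunk = len < avail ? len : avail;
            memcpy(w->buf + w->used, p, chunk);
            w->used += chunk;
            p = (const char *)p + chunk;
            len -= chunk;
            if (w->used == WBUF_SIZE && fdset_flush(w) == -1) return -1;
        }
        return 0;
    }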
This is useful for normal replication in order to refresh the slave
while we are persisting on disk, but with diskless replication the slave
is already receiving data from the child while in WAIT_BGSAVE_END state.
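A sketch of the resulting condition, with hypothetical names (not the
real replicationCron code): the periodic newline is only sent when the
RDB target is the disk.

    #include <unistd.h>

    /* Send the keepalive newline to a pre-sync slave only when the RDB
     * is being written to disk; with a diskless (socket) target the
     * slave in WAIT_BGSAVE_END is already receiving the payload from
     * the child. Sketch only. */
    static void maybe_ping_presync_slave(int slave_fd,
                                         int rdb_target_is_disk) {
        if (rdb_target_is_disk) {
            if (write(slave_fd, "\n", 1) == -1) {
                /* ignore: the serving code will detect the broken link */
            }
        }
    }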
If we turn from diskless to disk-based replication via CONFIG SET, we
need a way to start a BGSAVE if there are slaves already waiting for a
BGSAVE to start. Normally with disk-based replication we do it as soon
as the previous child exits, but when there is a configuration change
via CONFIG SET, we may have slaves in WAIT_BGSAVE_START state without
an RDB background process currently active.
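A minimal sketch of the check, using stand-in structures rather than the
real Redis ones: after the configuration change, scan the slaves for
WAIT_BGSAVE_START and start a disk BGSAVE if no child is running.

    #include <stddef.h>

    #define SLAVE_STATE_WAIT_BGSAVE_START 1  /* sketch value */

    /* Minimal stand-ins (hypothetical, not the real structures). */
    typedef struct slave_client {
        int replstate;
        struct slave_client *next;
    } slave_client;

    /* After CONFIG SET repl-diskless-sync no, check whether any slave
     * is parked in WAIT_BGSAVE_START; if so, and no RDB child exists,
     * a disk-backed BGSAVE must be started right away, because the
     * usual "start when the previous child exits" path will not fire. */
    static int need_bgsave_after_config_change(slave_client *slaves,
                                               int rdb_child_active) {
        if (rdb_child_active) return 0;
        for (slave_client *s = slaves; s != NULL; s = s->next) {
            if (s->replstate == SLAVE_STATE_WAIT_BGSAVE_START) return 1;
        }
        return 0;
    }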
The fdset target is used when we want to write an RDB file directly to
the slaves' sockets. In this setup, as long as there is a single slave
that is still receiving our payload, we want to continue sending instead
of aborting. However, rio calls should abort if no FD is ok.
We also want the errors reported, so that we can signal the parent which
slaves are ok and which are broken; there is therefore a new set of
integers with the state of each FD. Zero means ok; non-zero is the errno
of the failure, if available, or a generic EIO.
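A hedged sketch of that policy with hypothetical names (not the actual
rio.c fdset target): write to every FD, record per-FD errors in an
integer array, and fail the rio call only when every FD is broken.

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Write 'buf' to every destination fd, recording per-fd status in
     * 'state' (0 = ok, otherwise the errno of the failure, or EIO when
     * no errno is available). Keep going as long as at least one fd is
     * still healthy; return -1 only when every fd is broken. */
    static int fdset_write_all(int *fds, int *state, int numfds,
                               const char *buf, size_t len) {
        int broken = 0;
        for (int j = 0; j < numfds; j++) {
            if (state[j] != 0) { broken++; continue; } /* already failed */
            size_t off = 0;
            while (off < len) {
                ssize_t n = write(fds[j], buf + off, len - off);
                if (n <= 0) {
                    state[j] = errno ? errno : EIO; /* remember the cause */
                    broken++;
                    break;
                }
                off += (size_t)n;
            }
        }
        return (broken == numfds) ? -1 : 0;
    }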