Basically: test that we can load cmsgpack and run some sanity checks
to verify pack/unpack works properly. We also have a bonus test for
circular encoding and decoding because I was curious how it worked.
Main reasons for upgrade:
- Remove a warning when building Redis
- Add multi pack/unpack
- Improve memory usage and use Lua allocator properly
- Fix some edge case encoding/decoding bugs
cjson calls isinf, but some Solaris versions don't have isinf
even with the attempted fix we have in deps/Makefile.
We can harmlessly include the Redis solarisfixes.h header to
give cjson a proper isinf.
Note: cjson has a compile-time setting for using its own defined
isinf, but the Redis definition in solarisfixes.h is more complete.
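For illustration, a minimal sketch of the kind of isinf fallback this
is about (not the literal solarisfixes.h contents):

    #include <math.h>

    /* Hypothetical fallback: an infinite value is not NaN, but
     * inf - inf is NaN, while x - x is 0 for any finite x. */
    #ifndef isinf
    #define isinf(x) (!isnan(x) && isnan((x) - (x)))
    #endif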
Fixes antirez#1620
The new cjson has some improvements over our current version, including
increased platform compatibility, a new resource limit to restrict
decode depth, and better invalid number handling.
One minor change was required to deps/Makefile because this version
of cjson doesn't export itself globally, so we added a quick little
define of -DENABLE_CJSON_GLOBAL.
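Roughly, the effect of that define inside lua_cjson.c is to also
register the module table as a global when the module is opened (a
sketch, not the exact 2.1.0 source):

    #include "lua.h"

    int lua_cjson_new(lua_State *l); /* module constructor defined
                                      * elsewhere in lua_cjson.c;
                                      * pushes the module table */

    int luaopen_cjson(lua_State *l)
    {
        lua_cjson_new(l);
    #ifdef ENABLE_CJSON_GLOBAL
        lua_pushvalue(l, -1);        /* duplicate the module table */
        lua_setglobal(l, "cjson");   /* _G.cjson = module table */
    #endif
        return 1;                    /* return the module table */
    }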
cjson now has an optional, higher-performing float parsing interface,
but we are not including it (g_fmt.c, dtoa.c) because it requires
declaring endianness at compile time.
This commit is exactly lua_cjson.c from 2.1.0, with one minor change:
the two Lua includes were altered to use a local search instead of
system-wide importing.
RDB EOF detection relied on the final part of the RDB transfer being
a magic 40-byte EOF marker. However, as the slave is put online
immediately, and because of socket timeouts, the replication stream is
actually contiguous with the RDB file.
This means that to detect the EOF correctly we should either:
1) Scan all the stream searching for the mark. Sucks CPU-wise.
2) Start to send the replication stream only after an acknowledge.
3) Implement a proper chunked encoding.
For now solution "2" was picked, so in the case of diskless replication
the master does not start sending the stream of commands right away. We
wait for the first REPLCONF ACK command from the slave, which certifies
that the slave correctly loaded the RDB file and is ready to receive
more data.
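With solution "2" the mark can only appear at the very end of the
transfer, so the slave only has to compare the tail of what it has
received so far against the mark. A minimal sketch of that tail check
(function name and signature are hypothetical):

    #include <string.h>

    #define RDB_EOF_MARK_SIZE 40

    /* Keep a rolling copy of the last 40 bytes received; report EOF
     * when it matches the mark announced by the master. */
    int rdb_tail_is_eof_mark(const char *buf, size_t len,
                             char lastbytes[RDB_EOF_MARK_SIZE],
                             const char eofmark[RDB_EOF_MARK_SIZE])
    {
        if (len >= RDB_EOF_MARK_SIZE) {
            memcpy(lastbytes, buf + len - RDB_EOF_MARK_SIZE,
                   RDB_EOF_MARK_SIZE);
        } else {
            size_t keep = RDB_EOF_MARK_SIZE - len;
            memmove(lastbytes, lastbytes + len, keep);
            memcpy(lastbytes + keep, buf, len);
        }
        return memcmp(lastbytes, eofmark, RDB_EOF_MARK_SIZE) == 0;
    }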
Same as the original bind fixes (we just missed these the
first time around).
This helps Redis avoid automatically sending
connections from the first IP on an interface when we are bound
to a specific IP address (e.g. with multiple IP aliases on one
interface, you want to send from _your_ IP, not from the first IP
on the interface).
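A minimal sketch of the idea, assuming an IPv4 source address taken
from the bind config (helper name is hypothetical):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* bind(2) the outgoing socket to the configured source address
     * before connect(2), so the kernel does not pick the interface's
     * first IP for us. */
    int connect_from(const char *src_ip, const char *dst_ip, int dst_port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return -1;

        struct sockaddr_in src;
        memset(&src, 0, sizeof(src));
        src.sin_family = AF_INET;
        src.sin_port = 0;  /* any source port, just fix the source IP */
        inet_pton(AF_INET, src_ip, &src.sin_addr);
        if (bind(fd, (struct sockaddr*)&src, sizeof(src)) == -1) {
            close(fd);
            return -1;
        }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(dst_port);
        inet_pton(AF_INET, dst_ip, &dst.sin_addr);
        if (connect(fd, (struct sockaddr*)&dst, sizeof(dst)) == -1) {
            close(fd);
            return -1;
        }
        return fd;
    }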
This caused BGSAVE to be triggered a second time, without any need, when
we switch from socket to disk target via the command
CONFIG SET repl-diskless-sync no
and there is already a slave waiting for the BGSAVE to start.
Comments were also clarified about what is happening.
EWOULDBLOCK with the fdset rio target is returned when we try to write
but the send timeout socket option triggered an error. It is better to
translate the error into something the user can actually recognize as a
timeout.
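A sketch of the translation, assuming a per-FD write wrapper in the
fdset write path (name is hypothetical):

    #include <errno.h>
    #include <unistd.h>

    /* With a send timeout set, a timed out write(2) fails with
     * EAGAIN/EWOULDBLOCK; remap it so callers see a timeout. */
    ssize_t fdset_write_one(int fd, const void *buf, size_t len)
    {
        ssize_t nwritten = write(fd, buf, len);
        if (nwritten == -1 && errno == EWOULDBLOCK) errno = ETIMEDOUT;
        return nwritten;
    }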
We need to prevent a child -> slaves transfer from continuing forever.
We use the same timeout as the global replication timeout, which is
documented to also affect I/O operations during bulk transfers.
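A sketch of applying that timeout to each slave socket via SO_SNDTIMEO
(helper name is hypothetical; timeout_ms is assumed to come from the
repl-timeout config):

    #include <sys/socket.h>
    #include <sys/time.h>

    int set_send_timeout(int fd, long long timeout_ms)
    {
        struct timeval tv;
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
    }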
Performing a socket write() for each RDB rio API write call was
extremely inefficient, so now rio has minimal buffering capabilities.
Writes are accumulated into a buffer, and only when a given limit is
reached are they actually written to the N slave FDs.
Trivia: rio lacked support for buffering since our targets were:
1) Memory buffers.
2) C standard I/O.
Both were buffered already.
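A minimal sketch of the buffering idea (structure, names, and limit are
illustrative, not the actual rio code):

    #include <string.h>
    #include <unistd.h>

    #define FDSET_BUF_LIMIT (256*1024)  /* hypothetical flush threshold */

    struct fdset_rio {
        int *fds; size_t numfds;        /* the N slave FDs */
        char buf[FDSET_BUF_LIMIT];
        size_t buflen;
    };

    /* One write(2) per FD per flush, instead of one per rio write.
     * Per-FD error bookkeeping is omitted for brevity. */
    static int fdset_flush(struct fdset_rio *r)
    {
        for (size_t j = 0; j < r->numfds; j++) {
            if (write(r->fds[j], r->buf, r->buflen) !=
                (ssize_t)r->buflen) return -1;
        }
        r->buflen = 0;
        return 0;
    }

    int fdset_write(struct fdset_rio *r, const void *p, size_t len)
    {
        if (len == 0) return fdset_flush(r);  /* len == 0 means flush */
        while (len) {
            size_t avail = sizeof(r->buf) - r->buflen;
            size_t chunk = len < avail ? len : avail;
            memcpy(r->buf + r->buflen, p, chunk);
            r->buflen += chunk;
            p = (const char *)p + chunk;
            len -= chunk;
            if (r->buflen == sizeof(r->buf) && fdset_flush(r) == -1)
                return -1;
        }
        return 0;
    }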
This is useful for normal replication in order to refresh the slave
when we are persisting on disk, but for diskless replication the
slave is already receiving data while in WAIT_BGSAVE_END state.