In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.
To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also
doing this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.
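As a rough sketch, the function now looks more or less like the
following (simplified: it assumes the surrounding Redis server
internals, and field names, constants and error handling here are
illustrative rather than copied verbatim from replication.c):

    int replicationSetupSlaveForFullResync(client *slave, long long offset) {
        char buf[128];
        int buflen;

        /* Remember the offset the RDB will correspond to, and put the slave
         * in the state where it waits for the BGSAVE to terminate. */
        slave->psync_initial_offset = offset;
        slave->replstate = SLAVE_STATE_WAIT_BGSAVE_END;

        /* Force a SELECT to be re-emitted in the replication stream, since
         * this new slave starts accumulating changes from scratch. */
        server.slaveseldb = -1;

        /* Slaves that used the old SYNC command don't understand this reply. */
        if (!(slave->flags & CLIENT_PRE_PSYNC)) {
            buflen = snprintf(buf, sizeof(buf), "+FULLRESYNC %s %lld\r\n",
                              server.runid, offset);
            if (write(slave->fd, buf, buflen) != buflen) {
                freeClientAsync(slave);
                return C_ERR;
            }
        }
        return C_OK;
    }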
This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental) found by Yuval Inbar from Redis Labs
and that was later found to have even more far reaching effects (the bug
also exists when diskless replication is off).
The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with FULLRESYNC was always the
current master replication offset. This is not ok, because there are
many reasons that may delay the RDB file creation. And... guess what,
the master offset we communicate must be the one of the time the RDB
was created. So for example:
1) When the BGSAVE for replication is delayed since there is one
already in progress but it is not good for replication.
2) When the BGSAVE is not needed because we attach the slave to one
already in progress.
3) When the BGSAVE is delayed because of diskless replication.
In all the above cases the PSYNC reply is wrong and the slave may
reconnect later claiming a wrong offset: this may cause data
corruption later.
Using chained replication where C is slave of B which is in turn slave of
A, if B reconnects the replication link with A but discovers it is no
longer possible to PSYNC, slaves of B must be disconnected and PSYNC
not allowed, since the new B dataset may be completely different after
the synchronization with the master.
Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics was:
1. When a slave lost the connection with its master, its chained slaves
were disconnected ASAP. This is not needed, since after a successful
PSYNC with the master the chained slaves can continue and don't need to
resync in turn.
2. However, after a failed PSYNC the replication backlog was not reset,
so a chained slave was able to PSYNC successfully even if the instance
did a full sync with its master and now contained an entirely different
data set.
Now instead chained slaves are not disconnected when the slave loses the
connection with its master, but only when it is forced to do a full SYNC
with its master. This means that if the slave having chained slaves does
a successful PSYNC, all its slaves can continue without trouble.
See issue #2694 for more details.
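A minimal sketch of the new policy, assuming helpers along the lines of
the replication.c disconnectSlaves() and freeReplicationBacklog()
functions; the wrapper name below is only illustrative:

    /* Called (conceptually) when this instance, acting as a sub-master, is
     * forced to do a full SYNC with its own master: after loading the new
     * RDB the dataset may be entirely different, so chained slaves must not
     * be allowed to PSYNC against the old stream. */
    void onForcedFullSyncWithMaster(void) {
        disconnectSlaves();       /* Force chained slaves to resync with us. */
        freeReplicationBacklog(); /* A stale backlog would allow a bogus PSYNC. */
    }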
This gives us a 24 bytes size class, which is the dict.c dictEntry size,
thus improving the memory efficiency of Redis significantly.
Moreover other tiny classes not aligned to 16 bytes are added that
further reduce the fragmentation of the allocator.
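To make the numbers concrete, here is a minimal standalone sketch of the
dictEntry layout (simplified from dict.h): on a 64-bit build it is three
8-byte words, so with the default LG_QUANTUM of 4 jemalloc would serve
it from a 32 bytes bin, while the new 24 bytes class fits it exactly.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified dictEntry layout from dict.h: key pointer, value union,
     * next pointer for hash table chaining. */
    typedef struct dictEntry {
        void *key;
        union {
            void *val;
            uint64_t u64;
            int64_t s64;
        } v;
        struct dictEntry *next;
    } dictEntry;

    int main(void) {
        /* Prints 24 on a typical 64-bit build: exactly the new size class. */
        printf("sizeof(dictEntry) = %zu\n", sizeof(dictEntry));
        return 0;
    }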
Technically speaking LG_QUANTUM should be 4 on i386 / AMD64 because of
SSE types and other 16 byte types; however we don't use those, and our
jemalloc only targets Redis.
New versions of Jemalloc will have an explicit configure switch in order
to specify the quantum value for a platform without requiring any change
to the Jemalloc source code: we'll switch to this system when available.
This change was originally proposed by Oran Agra (@oranagra) as a change
to the Jemalloc script that generates the size classes defines. We ended
up doing it differently, by changing LG_QUANTUM, since it is apparently
the supported Jemalloc method to obtain a 24 bytes size class; moreover
it also provides us with other potentially useful size classes.
Related to issue #2510.
When empty strings are created, or when sdsMakeRoomFor() is called, we
are likely in an appending pattern. Use at least type 8 SDS strings,
since type 5 does not remember the free allocation size and requires
calling sdsMakeRoomFor() for every new piece appended.
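The idea, sketched with type constants and length thresholds simplified
from sds.c (the real code has more types and a different helper
structure):

    #include <stddef.h>

    #define SDS_TYPE_5  0   /* 1 byte header, length only, no 'free' info. */
    #define SDS_TYPE_8  1   /* 3 bytes header: len, alloc, flags. */
    #define SDS_TYPE_16 2

    /* Smallest header type able to represent a string of 'len' bytes. */
    static char sdsReqType(size_t len) {
        if (len < (1 << 5)) return SDS_TYPE_5;
        if (len < (1 << 8)) return SDS_TYPE_8;
        return SDS_TYPE_16;
    }

    /* When creating an empty string or growing one in sdsMakeRoomFor(),
     * never pick type 5: since it cannot remember the free space left,
     * every further append would need another sdsMakeRoomFor() call. */
    static char sdsReqTypeForAppend(size_t newlen) {
        char type = sdsReqType(newlen);
        if (type == SDS_TYPE_5) type = SDS_TYPE_8;
        return type;
    }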
The previous attempt to process each client at least once every ten
seconds was not a good idea, because:
1. Usually, because of the past minimum of 50 iterations per call, you
got a much better processing period most of the time anyway.
2. However, when there are many clients and a normal setting for
server.hz, the edge case is triggered, and waiting 10 seconds for a
BLPOP that asked for a 1 second timeout is not ok.
3. Moreover, because of the high min-iterations limit of 50, when HZ
was set to a high value, the actual behavior was to process a lot of
clients per second.
Also, the function checking for timeouts called gettimeofday() at each
iteration, which can be costly.
The new implementation tries to process each client once per second,
takes the current time as argument, and does not attempt to process
more than 5 clients per iteration if not needed.
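In code the new approach looks roughly like the sketch below
(simplified: it assumes the surrounding server internals, and constant
and helper names follow redis.c conventions but may not match exactly):

    #define CLIENTS_CRON_MIN_ITERATIONS 5

    void clientsCron(void) {
        /* Try to visit every client about once per second: with the
         * default hz of 10 and N clients we handle N/10 clients per call. */
        int numclients = listLength(server.clients);
        int iterations = numclients / server.hz;
        mstime_t now = mstime();    /* Sampled once, not per client. */

        /* Process at least a few clients per call, but never more than
         * the number of clients we actually have. */
        if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
            iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?
                          numclients : CLIENTS_CRON_MIN_ITERATIONS;

        while (listLength(server.clients) && iterations--) {
            client *c;
            listNode *head;

            /* Rotate the list so that over repeated calls we eventually
             * pick every client. */
            listRotate(server.clients);
            head = listFirst(server.clients);
            c = listNodeValue(head);

            /* The current time is passed in, so the timeout check does
             * not need to call gettimeofday() for every client. */
            if (clientsCronHandleTimeout(c, now)) continue;
            if (clientsCronResizeQueryBuffer(c)) continue;
        }
    }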
So now:
1. The CPU usage of an idle Redis process is the same or better.
2. The CPU usage of a busy Redis process is the same or better.
3. However a non-trivial amount of work may be performed per iteration
when there are many, many clients. In this particular case the user may
want to raise the "HZ" value if needed.
Btw with 4000 clients it was still not possible to notice any actual
latency created by processing 400 clients per second, since the work
performed for each client is pretty small.
This is an attempt to use the refcount feature of the sds.c fork
provided in the Pull Request #2509. A new type, SDS_TYPE_5, is
introduced, having a one byte header with just the string length and no
information about the available additional length at the end of the
string (this means that sdsMakeRoomFor() will be required each time
we want to append something, since the string will always report to have
0 bytes available).
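For reference, a sketch of what such a one byte header can look like
(simplified; the field and macro names are indicative of the sds.c
fork's style rather than copied from it):

    /* One byte header: the 3 least significant bits of 'flags' store the
     * header type, the remaining 5 bits store the string length, so only
     * strings up to 31 bytes can use it, and there is no room to record
     * how much free space is left at the end of the buffer. */
    struct __attribute__ ((__packed__)) sdshdr5 {
        unsigned char flags;   /* 3 lsb: type, 5 msb: string length. */
        char buf[];
    };

    #define SDS_TYPE_BITS 3
    #define SDS_TYPE_5_LEN(f) ((f) >> SDS_TYPE_BITS)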
More work is needed in order to prevent common SDS functions from paying
the cost of this type. For example both sdscatprintf() and sdscatfmt()
should try to upgrade to SDS_TYPE_8 ASAP when appending chars.