The core of this change is the implementation of stream trimming, with the
MAXLEN option of XADD coming almost for free once the trimming
functionality is in place. MAXLEN already works, but to make it more
efficient listpack GC should be implemented; it is currently marked as a
TODO item inside the comments.
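
As an illustration of the intended shape (all names here are hypothetical,
not the actual Redis identifiers), trimming amounts to dropping whole
listpack nodes from the head of the stream while it stays over the cap,
leaving any partially covered node to the future listpack GC:

    /* Sketch only: drop whole head nodes while the stream exceeds
     * 'maxlen'. A node that would take us below the cap is left
     * alone: trimming it exactly is what the pending GC is for. */
    void streamTrimByLength(stream *s, size_t maxlen) {
        while (s->length > maxlen) {
            size_t head = streamHeadNodeEntries(s);  /* hypothetical */
            if (s->length - head < maxlen) break;    /* needs the GC */
            streamDropHeadNode(s);                   /* hypothetical */
            s->length -= head;
        }
    }
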
The listpack max size is a tradeoff between space and time. A 2k cap keeps
the memory usage in the same order of magnitude (5 million entries went
from 96 to 120 MB), but doubles the speed of range queries (because on
average there are half as many entries to scan inside each listpack).
Lower values could be considered, or maybe this parameter should be made
tunable.
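
At the moment the cap would live in a compile-time constant; making it
tunable would mean turning it into a config directive instead. As a sketch
(the identifier is illustrative and may not match the source):

    /* Illustrative compile-time cap on the serialized size of each
     * stream listpack node, per the 2k figure discussed above. */
    #define STREAM_BYTES_PER_LISTPACK 2048
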
We used to store the master ID at the start of the listpack, however using
the radix tree key directly makes more sense in order to create a space
efficient representation: the key in the radix tree is very unlikely to
change anyway, because of how the stream is implemented. Moreover, when
merging nodes, rewriting the merged listpacks is the most sensible
operation in any case, and we can use the iterator and the append-to-stream
function to avoid re-implementing the code needed for merging, as sketched
below.
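
Such a merge could be sketched as follows, walking the source stream with
the iterator and re-appending each entry instead of splicing listpacks byte
by byte (the API shown is an approximation of the real one):

    /* Sketch: merge 'src' into 'dst' by re-appending every entry,
     * letting the normal append path build fresh, well-formed
     * listpacks. Signatures are approximate. */
    void streamMergeInto(stream *dst, stream *src) {
        streamIterator si;
        streamID id;
        int64_t numfields;

        streamIteratorStart(&si, src, NULL, NULL, 0);
        while (streamIteratorGetID(&si, &id, &numfields)) {
            /* Fetch each field/value pair with the iterator and
             * append it to 'dst' via the append-to-stream function. */
        }
        streamIteratorStop(&si);
    }
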
This commit also adds two items at the start of the listpack: the number of
valid items inside the listpack, and the number of items marked as deleted.
This means there is no need to scan a listpack to understand whether it's a
good candidate for garbage collection: the GC can be triggered as soon as
the ratio between valid and deleted items crosses a threshold.
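
For example, an O(1) candidacy check could be sketched like this (the two
accessors are hypothetical stand-ins for reading the counters at the head
of the listpack, and the 1:1 threshold is just an example tuning):

    /* Sketch: the first two listpack items hold the valid and the
     * deleted counts, so deciding GC candidacy needs no scan. */
    int streamListpackNeedsGC(unsigned char *lp) {
        long long valid = lpGetValidCount(lp);     /* hypothetical */
        long long deleted = lpGetDeletedCount(lp); /* hypothetical */
        return deleted > 0 && deleted >= valid;
    }
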
The approach used is to set a fixed header at the start of every listpack
blob (which contains many entries). The header contains a "master" ID and
master fields, initially taken from the first entry inserted in the
listpack, so that the first entry is always well compressed. Every new
entry is then checked against these fields, and if they match, the
SAMEFIELD flag is set in the entry, so that we know to just reuse the
master entry fields. The IDs are always delta-encoded against the master
ID. This approach avoids cascading effects in which entries are encoded
depending on the previous entries, keeping down both the complexity and the
amount of data to rewrite when entries are removed in the middle (which is
a planned feature).
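
Schematically, every listpack blob then looks like this (a simplified
picture of the layout described above, not the exact encoding):

    /* Simplified layout sketch of a stream listpack:
     *
     *   header:  | count | deleted | master ID | master fields ... |
     *   entry 1: | flags | ID delta vs master | fields+values ...  |
     *   entry 2: | flags (SAMEFIELD) | ID delta | values only ...  |
     *   ...
     *
     * An entry with the SAMEFIELD flag set stores no field names and
     * reuses the master fields; its ID is a delta against the master
     * ID found in the header. */
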
blockForKeys() was not freeing the allocation holding the ID when the key
was found already busy. Fortunately the unit test explicitly checked
blocking multiple times on the same key (replicating a regression test from
the blocking lists tests), so the bug was detected by the Redis test leak
checker.
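
The shape of the fix, paraphrased rather than quoted from the diff, is
simply to release the per-key allocation when the key turns out to be
already tracked for this client:

    /* Paraphrased sketch: inside blockForKeys(), when the client is
     * already blocked on this key, free the freshly allocated data
     * (for streams, the copy of the requested ID) instead of leaking
     * it. */
    if (dictAdd(c->bpop.keys, keys[j], key_data) != DICT_OK) {
        zfree(key_data); /* was leaked before this fix */
        continue;
    }
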
XADD was suboptimal in the first incarnation of the command: it was not
able to accept an explicit ID (very useful for replication), nor options
for having capped streams.
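
With this change the command accepts, for example, a server-generated ID,
an explicit ID (the replication case), and a capped stream:

    XADD mystream * field1 value1
    XADD mystream 1507040201000-0 field1 value1
    XADD mystream MAXLEN 1000 * field1 value1
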
The keyspace notification for streams was not implemented.
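
The events follow the usual notification pattern; for example the XADD path
fires an "xadd" event with a call along these lines (a sketch of the
standard notifyKeyspaceEvent() usage, not the literal commit):

    /* Sketch: notify keyspace listeners that a stream received a new
     * entry, using the standard notification helper. */
    notifyKeyspaceEvent(NOTIFY_STREAM, "xadd", c->argv[1], c->db->id);
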
A client may spend a lot of time between invocations of blocking XREAD, for
example because it is busy processing the messages, or for any other
reason. When it comes back, it may provide a message ID low enough that the
server would have to send an unreasonably large number of messages in a
single call. For this reason we set a COUNT when the client blocks with
XREAD, even if no COUNT is given. It is arbitrarily set to 1000: high
enough not to slow down the reception of many messages, but low enough to
avoid blocking.
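
In sketch form (the define name is illustrative and may not match the
source):

    /* Sketch: when serving a client that blocked on XREAD without an
     * explicit COUNT, cap the reply to a fixed batch size, so that a
     * very old last-received ID cannot stall the server in a single
     * call. */
    #define XREAD_BLOCKED_DEFAULT_COUNT 1000

    if (count == 0) count = XREAD_BLOCKED_DEFAULT_COUNT;
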
With lists we need to signal the key as ready only on key creation, but
streams can provide data to listening clients every time a new item is
added. To make this slightly more efficient we now track the different
classes of blocked clients, so that we avoid signaling keys when nobody is
listening. A typical case is when the stream is used as a time series DB
and accessed only by range with XRANGE.
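
Concretely, the write path can skip the ready-keys machinery entirely when
no client is blocked on any stream; something like the following (the field
and constant names are close to, but not guaranteed to match, the real
ones):

    /* Sketch: signal the key as ready only if at least one client is
     * blocked waiting on a stream, so pure XRANGE workloads pay
     * nothing. */
    if (server.blocked_clients_by_type[BLOCKED_STREAM])
        signalKeyAsReady(c->db, c->argv[1]);
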
After a few attempts it looked a lot saner to just store the last item ID
at the end of the serialized listpacks, instead of scanning the last loaded
listpack from head to tail just to fetch it. Basically it's a disk space vs
CPU-and-simplicity tradeoff.
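
On the save side this is just two extra integers after the node payloads; a
sketch under the assumption that the usual rdbSaveLen() writer is used:

    /* Sketch: after serializing the radix tree of listpacks, append
     * the two 64 bit halves of the last ID, so that loading restores
     * it directly instead of scanning the tail listpack. */
    if (rdbSaveLen(rdb, s->last_id.ms) == -1) return -1;
    if (rdbSaveLen(rdb, s->last_id.seq) == -1) return -1;
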
Related to #4483. As suggested by @soloestoy, we can retrieve the SHA1 from
the script body itself. Given that in the new implementation using AUX
fields we ended up copying data around a lot to create new objects and
strings, take this concept to the extreme and trade CPU for space inside
the RDB file.
This is currently needed in order to fix #4483, but it can be useful in
other contexts too, so later we may want to remove the conditionals and
always save/load scripts.
Note that we are using the "lua" AUX field here in order to guarantee
backward compatibility of the RDB file: unknown AUX fields are simply
discarded by past versions of Redis.
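
On load the SHA1 is simply recomputed from the body, so only the body has
to live inside the RDB file; a sketch using the sha1hex() helper Redis
already provides ('body' and 'bodylen' stand for the just-loaded script
string):

    /* Sketch: recompute the script's SHA1 from its body while loading
     * the "lua" AUX field, instead of storing the hash in the file. */
    char sha[41]; /* 40 hex digits plus the null terminator. */
    sha1hex(sha, body, bodylen);
    /* ...then register the script under 'sha' as usual... */
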
Doing the following ended up with a broken server.executable:
1. Start Redis with src/redis-server.
2. Send CONFIG SET DIR /tmp/.
3. Send DEBUG RESTART.
At this point we called execve() with an argv[0] that was no longer related
to the new current working directory, so after the restart the absolute
path of the executable was recomputed in the wrong way. With this fix we
pass the already computed absolute path as argv[0].
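
In sketch form (paraphrasing the restart path, not the literal diff):

    /* Paraphrased sketch: restart using the absolute path computed at
     * startup as argv[0], so that the new process does not depend on
     * the current working directory to locate itself. */
    extern char **environ;

    server.exec_argv[0] = zstrdup(server.executable);
    execve(server.executable, server.exec_argv, environ);
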