After the thread in #6537 and thanks to the suggestions received, this
commit updates the original patch in order to:
1. Solve the problem of updating the time in multiple places by updating
it in call().
2. Avoid introducing a new field and use our cached time instead.
This required some minor refactoring of the function updating the time,
and the introduction of a new cached time in microseconds, in order to
make fewer gettimeofday() calls.
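For illustration, a minimal sketch of the idea (the variable and function
names here are placeholders, not the actual fields in the patch): the time
is sampled once, cached in microseconds, and the millisecond value is
derived from it, so other code paths read the cache instead of calling
gettimeofday():

    #include <sys/time.h>

    /* Placeholder globals standing in for the cached server time fields. */
    long long cached_ustime; /* microseconds */
    long long cached_mstime; /* milliseconds, derived from the value above */

    /* The only place that calls gettimeofday(); call() refreshes this once
     * per command, and everything else just reads the cached values. */
    void updateCachedTime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        cached_ustime = ((long long)tv.tv_sec) * 1000000 + tv.tv_usec;
        cached_mstime = cached_ustime / 1000;
    }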
Calling lookupKey*() multiple times to search for a key within one command
may return different results.
That's because lookupKey*() calls expireIfNeeded(), which deletes the
key once its expire time is reached. So we can get an robj before the
expire time, but NULL after it.
Worse, this can lead to a Redis crash: for example, with
`RPOPLPUSH foo foo` the first lookup returns the list stored at `foo` and
we hold the pointer, but when we look `foo` up again it has expired and
been deleted. Now we hold freed memory, and Redis crashes when
rpoplpushHandlePush() executes.
To fix this, we can refactor the expiration check so that it uses the same
base time, `server.cmd_start_mstime`, instead of calling mstime() every time.
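A minimal sketch of what the refactored check looks like, assuming a single
timestamp taken at command start (the struct and function are illustrative;
only `cmd_start_mstime` comes from the description above):

    /* Compare the key's expire time against a timestamp taken once at the
     * start of the command, so repeated lookups within the same command
     * agree on whether the key is expired. */
    struct serverState { long long cmd_start_mstime; };

    int keyIsExpired(struct serverState *server, long long expire_at_ms) {
        if (expire_at_ms < 0) return 0;   /* key has no TTL */
        return server->cmd_start_mstime >= expire_at_ms;
    }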
Now that the call also invalidates client-side caching slots, it is
important that after an internal flush operation we both send the
notifications to the clients and, at the same time, are able to reclaim
the memory of the tracking table. This may even fix a few edge cases
related to MULTI/EXEC + WATCH during resync (not sure), but in general
it looks more correct.
Otherwise the tracking table would never get garbage collected once there
are no longer any clients with tracking enabled.
Now the invalidation function immediately checks whether any table is
allocated and returns ASAP otherwise, so the overhead when the feature
is not used should be near zero.
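A sketch of the shape of that early return (the names below are made up
for the example, not the actual functions):

    struct trackingState { void *tracking_table; /* NULL if tracking never used */ };

    /* Called from the flush path: when no client ever enabled tracking
     * there is no table, so the whole hook costs a single NULL check. */
    void trackingInvalidateAll(struct trackingState *t) {
        if (t->tracking_table == NULL) return;  /* feature unused: near zero cost */
        /* ... send invalidation messages to the clients ... */
        /* ... free the tracking table and reclaim its memory ... */
    }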
* create module API for forking child processes.
* refactor duplicate code around creating and tracking forks by AOF and RDB.
* child processes listen for SIGUSR1 and die via exitFromChild() in order to
  eliminate a valgrind warning about an unhandled signal.
* note that BGSAVE error reply has changed.
The valgrind error is:
    Process terminating with default action of signal 10 (SIGUSR1)
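Roughly, the child-side handler looks like the sketch below (the handler
and setup names are illustrative; exitFromChild() is the existing helper
mentioned above, stubbed here with _exit()). Exiting explicitly instead of
letting the default signal action kill the process is what silences the
valgrind warning:

    #include <signal.h>
    #include <unistd.h>

    static void sigKillChildHandler(int sig) {
        (void)sig;
        _exit(1);   /* stands in for exitFromChild(1) in this sketch */
    }

    void setupChildSignalHandlers(void) {
        struct sigaction act;
        sigemptyset(&act.sa_mask);
        act.sa_flags = 0;
        act.sa_handler = sigKillChildHandler;
        sigaction(SIGUSR1, &act, NULL);   /* SIGUSR1 now exits cleanly */
    }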
The diskless replication implementation was so far diskless only on the master side.
The slave side was still storing the received rdb file on disk before loading it back in and parsing it.
This commit adds two modes to load the rdb directly from the socket:
1) when-empty
2) using "swapdb"
The third mode, emptying the slave with flushdb before the diskless load, is risky and currently not included.
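The "swapdb" mode roughly follows the shape below (all names in this sketch
are illustrative stubs, not the functions added by the commit): the slave
keeps its current dataset aside while loading from the socket, so a failed
transfer does not leave it empty.

    typedef struct dbBackup dbBackup;   /* opaque handle for the saved dataset */

    dbBackup *backupCurrentDbs(void);
    void restoreDbsFromBackup(dbBackup *b);
    void discardBackup(dbBackup *b);
    void emptyDbs(void);
    int rdbLoadFromSocket(void);        /* returns 0 on success */

    int disklessLoadWithSwapdb(void) {
        dbBackup *backup = backupCurrentDbs();  /* keep the old data aside      */
        emptyDbs();                             /* load target starts empty     */
        if (rdbLoadFromSocket() != 0) {
            restoreDbsFromBackup(backup);       /* failure: old data comes back */
            return -1;
        }
        discardBackup(backup);                  /* success: old data is freed   */
        return 0;
    }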
other changes:
--------------
distinguish between the aof configuration and the aof state, so that we can re-enable aof only when the sync
eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
previously a CONFIG GET or INFO during rdb loading would also have lied
When loading the rdb from the network, don't kill the server on a short read (it can be a network error)
Fix rdb check when performed on preamble AOF
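The short-read change amounts to something like the following sketch (the
function and flag are illustrative, not the actual diff): a short read from
a local file still means corruption and justifies exiting, but when the
source is the replication socket the slave just aborts this sync attempt
and keeps running.

    #include <stdio.h>
    #include <stdlib.h>

    void rdbReportShortRead(int loading_from_socket) {
        if (loading_from_socket) {
            fprintf(stderr, "Short read while loading rdb from socket, "
                            "aborting sync (will retry)\n");
            return;                 /* caller aborts the sync, server stays up */
        }
        fprintf(stderr, "Short read loading rdb file, corrupted dump\n");
        exit(1);                    /* local corruption: exit as before */
    }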
tests:
run the replication tests for the diskless slave too
make the replication tests a bit more aggressive
add a test for diskless load with swapdb
jemalloc 5 doesn't immediately release memory back to the OS; instead there's a decaying
mechanism, which doesn't work when there's no traffic (no allocations).
This is most evident when there's no traffic after flushdb: the RSS will remain high.
To address this:
1) enable jemalloc background purging
2) explicitly purge in flushdb
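A minimal sketch of those two points through jemalloc's mallctl interface
(assuming jemalloc 5; the patch may go through its own wrappers and the
je_ symbol prefix, but the control names below are from jemalloc's
documented interface):

    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* 1) background purging: a jemalloc thread applies the decay curve and
     * returns memory even when the process makes no new allocations. */
    void setJemallocBackgroundThread(bool enable) {
        mallctl("background_thread", NULL, NULL, &enable, sizeof(enable));
    }

    /* 2) explicit purge, e.g. right after flushdb frees a lot of memory. */
    void jemallocPurge(void) {
        unsigned narenas = 0;
        size_t sz = sizeof(narenas);
        char cmd[64];
        if (mallctl("arenas.narenas", &narenas, &sz, NULL, 0) == 0) {
            snprintf(cmd, sizeof(cmd), "arena.%u.purge", narenas);
            mallctl(cmd, NULL, NULL, NULL, 0);
        }
    }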
Add tests to check the basic functionality of this optional keyword; also tested with
a module (redisgraph) and checked quickly with valgrind, no issues found.
This copies the type name canonicalisation code from `typeCommand`; perhaps this would
be better factored out to prevent the two from diverging and both needing to be edited to
add new `OBJ_*` types, but that is a little fiddly with C strings.
The [redis-doc](https://github.com/antirez/redis-doc/blob/master/commands.json) repo
will need to be updated with this new arg if accepted.
A quirk to be aware of here is that the GEO commands are backed by zsets, not their own
type, so they're not distinguishable from other zsets.
Additionally, for sparse types this has the same behaviour as `MATCH` in that it may
return many empty results before giving something, even for large `COUNT`s.
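To make the zset/GEO point concrete, the filter boils down to a comparison
against the same canonical names typeCommand prints, roughly like this
sketch (the enum and helpers are illustrative, not the patch itself):

    #include <string.h>

    /* Illustrative stand-in for the OBJ_* constants. */
    typedef enum { T_STRING, T_LIST, T_SET, T_ZSET, T_HASH, T_STREAM } objType;

    const char *canonicalTypeName(objType t) {
        switch (t) {
        case T_STRING: return "string";
        case T_LIST:   return "list";
        case T_SET:    return "set";
        case T_ZSET:   return "zset";   /* GEO keys live in zsets, so they match "zset" */
        case T_HASH:   return "hash";
        case T_STREAM: return "stream";
        }
        return "unknown";
    }

    /* Keys whose type doesn't match are simply dropped from the reply,
     * which is why a sparse match can return many empty pages, like MATCH. */
    int typeMatches(objType t, const char *wanted) {
        return wanted == NULL || strcmp(canonicalTypeName(t), wanted) == 0;
    }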
The old version could not handle the fact that "STREAMS" is a valid key
name for streams. Now we actually try to parse the command the way the
command implementation would.
Related to #5028 and #4857.
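Simplified, the new parsing follows the command's own grammar, along the
lines of this sketch (a hypothetical helper, not the actual function): the
first "STREAMS" token found while walking the arguments is treated as the
keyword, and everything after it must split evenly into keys followed by
IDs, so a later key literally named "STREAMS" is never mistaken for the
keyword.

    #include <strings.h>

    /* Returns the index of the first key after the STREAMS keyword,
     * or -1 if the command is malformed. */
    int xreadFirstKeyIndex(int argc, char **argv) {
        for (int i = 1; i < argc; i++) {
            if (strcasecmp(argv[i], "STREAMS") == 0) {
                int remaining = argc - i - 1;
                /* After STREAMS there must be N keys followed by N IDs. */
                if (remaining <= 0 || remaining % 2 != 0) return -1;
                return i + 1;
            }
        }
        return -1;
    }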
The loop allocated a buffer for the right number of key positions, then
overflowed it by going past the limit.
Related to #4857, and the cause of the memory violation seen in #5028.
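The shape of the fix is just an explicit bound on the write loop, roughly
as in this illustrative sketch (not the actual patch):

    #include <stdlib.h>

    /* Collect key positions for a command whose keys start at 'first' and
     * repeat every 'step' arguments. The output buffer is sized for exactly
     * that many slots, and the loop is bounded by the same count so it can
     * no longer write past the end. */
    int *getKeyPositions(int argc, int first, int step, int *numkeys) {
        if (first < 1 || first >= argc || step < 1) { *numkeys = 0; return NULL; }
        int count = (argc - first + step - 1) / step;   /* slots allocated */
        int *keys = malloc(sizeof(int) * count);
        int n = 0;
        if (keys == NULL) { *numkeys = 0; return NULL; }
        for (int i = first; i < argc && n < count; i += step)
            keys[n++] = i;
        *numkeys = n;
        return keys;
    }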