STRALGO should be a container for mostly read-only string
algorithms in Redis. The algorithms should have two main
characteristics:
1. They should be non-trivial to compute, and often not part of
programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C
implementations.
Next thing I would love to see? A small strings compression algorithm.
If Redis crashes early, before Lua is set up (for example, if file descriptor 0 is closed before exec), it will crash again trying to print memory statistics.
Related to #5145.
Design note: clients may change type when they turn into replicas or are
moved into the Pub/Sub category and so forth. Moreover the recomputation
of the bytes used is problematic for obvious reasons: it changes
continuously, so as a conservative way to avoid accumulating errors,
each client remembers the contribution it gave to the sum, and removes
it when it is freed or before updating it with the new memory usage.
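A minimal sketch of that bookkeeping, assuming illustrative names
(last_memory_usage, last_memory_type and the per-category sums are not
necessarily the real identifiers):

    /* Sketch: each client caches the last value it contributed to the
     * per-category sum, so totals are corrected incrementally instead
     * of being recomputed from scratch. */
    void updateClientMemUsage(client *c) {
        size_t mem = getClientMemoryUsage(c);   /* current usage */
        int type = getClientType(c);            /* may have changed */

        /* Remove the old contribution from the category it was counted
         * in, then add the fresh one to the (possibly new) category. */
        server.stat_clients_type_memory[c->last_memory_type] -= c->last_memory_usage;
        server.stat_clients_type_memory[type] += mem;

        c->last_memory_type = type;
        c->last_memory_usage = mem;
    }

    void unlinkClient(client *c) {
        /* On free, simply subtract what this client last contributed. */
        server.stat_clients_type_memory[c->last_memory_type] -= c->last_memory_usage;
        /* ... rest of the cleanup ... */
    }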
Example: Client uses a pipe to send the following to a
stale replica:
MULTI
.. do something ...
DISCARD
The replica will reply to the MULTI with -MASTERDOWN and
execute the rest of the commands... A client using a
pipe might not be aware that MULTI failed until it's
too late.
I can't think of a reason why MULTI/EXEC/DISCARD should
not be executed on stale replicas...
Also, enable MULTI/EXEC/DISCARD during loading
Make sure call() doesn't wrap replicated commands with
a redundant MULTI/EXEC
Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX
37a10cef introduced automatic wrapping of MULTI/EXEC for the
alsoPropagate API. However this collides with the built-in mechanism
already present in module.c. To avoid complex changes near Redis 6 GA
this commit introduces the ability to exclude call() MULTI/EXEC wrapping
for alsoPropagate, in order to continue to use the old code paths in
module.c.
Now that this mechanism is the sole one used for blocked clients
timeouts, it is wiser to clean up the table when the client unblocks
for any reason. We use a flag, CLIENT_IN_TO_TABLE, in order to avoid a
radix tree lookup when the client was already removed from the table
because we processed it by scanning the radix tree.
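A sketch of the flag-guarded removal (key encoding details are
simplified; CLIENT_IN_TO_TABLE is the flag from this commit):

    /* Remove a client from the timeout table, if it's there. The flag
     * lets us skip the radix tree lookup entirely when the client was
     * already removed while scanning the tree. */
    void removeClientFromTimeoutTable(client *c) {
        if (!(c->flags & CLIENT_IN_TO_TABLE)) return;  /* cheap fast path */
        unsigned char buf[CLIENT_ST_KEYLEN];
        encodeTimeoutKey(buf, c->bpop.timeout, c);     /* timeout + client */
        raxRemove(server.clients_timeout_table, buf, sizeof(buf), NULL);
        c->flags &= ~CLIENT_IN_TO_TABLE;
    }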
A very commonly reported operational problem with Redis master-replica
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, it often won't be able to
perform a partial resynchronization with the new master once it rejoins
the partition, for the following reason:
1. The master becomes isolated, however it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its
offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.
Related to issue #7002 and other discussion we had in the past around
this problem.
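One possible shape of a fix, as a sketch: track a "meaningful"
replication offset that excludes trailing PINGs, and rewind to it on
demotion. The field and helper names below are assumptions for
illustration, not necessarily the final implementation:

    /* Only non-PING traffic advances the meaningful offset. */
    void propagateToReplicas(robj **argv, int argc, int is_ping) {
        replicationFeedSlaves(server.slaves, server.slaveseldb, argv, argc);
        if (!is_ping)
            server.master_repl_meaningful_offset = server.master_repl_offset;
    }

    /* When the old master rejoins as a replica, trim the trailing
     * PINGs nobody received, so its offset doesn't look more advanced
     * than the new master's, which would force a full resync. */
    void rewindMeaninglessOffset(void) {
        if (server.master_repl_offset > server.master_repl_meaningful_offset)
            server.master_repl_offset = server.master_repl_meaningful_offset;
    }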
Redis refusing to run MULTI or EXEC during script timeout may cause partial
transactions to run.
1) If the client sends MULTI+commands+EXEC in a pipeline without waiting for
a response, but these arrive at the shards partially while there's a busy
script and partially after it eventually finishes, we'll end up running only
part of the transaction (since MULTI was ignored and EXEC would fail).
2) Similarly, if EXEC arrives during a busy script, it'll be ignored and
the client state remains in a transaction.
The third test, which I added for a case where MULTI and EXEC are OK
and only the body arrives during the busy script, was already handled
correctly, since processCommand calls flagTransaction.
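A simplified sketch of the processCommand() change (the helper and the
exact command list are illustrative):

    /* Commands still processed while a script is timed out. MULTI,
     * EXEC and DISCARD are now in the list, so a pipelined transaction
     * is either queued and run whole, or aborted whole. */
    static int allowedDuringBusyScript(struct redisCommand *cmd) {
        return cmd->proc == authCommand    || cmd->proc == replconfCommand ||
               cmd->proc == multiCommand   || cmd->proc == execCommand     ||
               cmd->proc == discardCommand || cmd->proc == shutdownCommand ||
               cmd->proc == scriptCommand;
    }

    /* Inside processCommand() (heavily simplified): */
    int processCommand(client *c) {
        /* ... */
        if (server.lua_timedout && !allowedDuringBusyScript(c->cmd)) {
            flagTransaction(c); /* poison an open MULTI so EXEC fails cleanly */
            addReplyError(c, "-BUSY Redis is busy running a script. "
                             "You can only call SCRIPT KILL or SHUTDOWN NOSAVE.");
            return C_OK;
        }
        /* ... */
    }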
Before this commit, when upgrading a replica, expired keys would not
be loaded, causing the replica to have fewer keys in its db. Up to this
point, the master's and replica's keys are logically consistent.
However, before the keys in the master and replica become physically
consistent, that is, before they have the same dbsize, if the master
has a problem and the replica gets promoted to become the new master of
that partition, the new master may update a key which does not exist on
it but still physically exists on the old master (now the new replica);
the old master would refuse to update the key, leaving master and
replica data inconsistent.
How could this happen?
That's all because of a wrong judgment of roles while starting up the
server. We cannot use server.masterhost to judge whether the server
is a master or a replica, since that fails in cluster mode.
When we start the server, we load the RDB and do want to load expired
keys, but we do not want the ability to actively expire keys if the
instance is a replica.
1. server.repl_no_slaves_since can be set when a MONITOR client disconnects
2. c->repl_ack_time can be set by a newline from a MONITOR client
3. Improved comments
SELECT and HELLO are commands that may be executed by the client
as soon as it connects; there's no reason to block them, preventing the
client from doing the rest of its sequence (which might just be INFO or
CONFIG, etc).
MONITOR, DEBUG, SLOWLOG, TIME and LASTSAVE are all non-data-accessing
commands, which there's no reason to block either.
Checking OOM via `getMaxMemoryState` inside a script might give a
different result than `freeMemoryIfNeededAndSafe` at script start,
because the Lua stack and arguments also consume memory.
This leads to a 'borderline' condition when memory grows near server.maxmemory:
- `freeMemoryIfNeededAndSafe` at script start detects no OOM, no memory freed
- `getMaxMemoryState` inside script detects OOM, script aborted
We solve this 'borderline' issue by saving the OOM state at script
start, to give Lua a stable OOM state (sketched below).
related to issue #6565 and #5250.
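A sketch of the fix as described above (simplified fragments; the
lua_oom field holds the saved state):

    /* At script start, right after freeMemoryIfNeededAndSafe(), sample
     * the OOM state once and keep it for the whole script run: */
    server.lua_oom = (getMaxMemoryState(NULL,NULL,NULL,NULL) == C_ERR);

    /* When redis.call() hits a write command inside the script, use
     * the saved state instead of re-sampling, since the Lua stack and
     * the command arguments would skew the measurement: */
    if (server.lua_oom && (cmd->flags & CMD_DENYOOM)) {
        luaPushError(lua, "-OOM command not allowed when used memory > 'maxmemory'.");
        goto cleanup;
    }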
So the error message `ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context` will become
`ERR 'get' command submitted, but only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context`
Since the refactoring of config.c, the LRU clock was initialized from
config_hz in initServer, but apparently that's too late, since loading
the config file creates objects which call LRU_CLOCK.
This message has been there for ten years but is hardly useful.
Moreover it is likely to fill an entire disk if log rotation is not
configured, for no good reason.
Changes in behavior:
- Change server.stream_node_max_entries from int64_t to long long, so that it can be used by the generic infra
- standard error reply instead of "repl-backlog-size must be 1 or greater" and such
- tls-port and a few TLS booleans were readable (config get) even when USE_OPENSSL was off (now they aren't)
- syslog-enabled, syslog-ident, cluster-enabled, appendfilename, and supervised didn't have a get (now they do)
- pidfile was initialized to NULL in InitServerConfig but had CONFIG_DEFAULT_PID_FILE in rewriteConfig (so the real default was "", but a rewrite would cause it to be set); fixed the rewrite.
- TLS config in server.h was uninitialized (if no tls config args were provided)
Adding tests for sanity and coverage
- add capability for each config to have a callback to check if value is valid and return error string
will enable converting many of the remaining custom configs into generic ones (reducing the x4 repetition for set,get,config,rewrite)
- add capability for each config to run some update code after the config is changed (only for CONFIG SET)
will also enable converting many of the remaining custom configs into generic ones
- add capability to move default values from server.h and server.c to config.c
will reduce many excess lines in server.h and server.c (plus, no need to rebuild the entire code base when a default changes 8-))
other behavior changes:
- fix bug in bool config get (always returning 'yes')
- fix a bug in modifying jemalloc-bg-thread at runtime (didn't call set_jemalloc_bg_thread, due to bad merge conflict resolution (my fault))
- side effect: on a failed attempt to enable activedefrag at runtime, we now respond with -ERR and not with -DISABLED
A random command like SPOP with a count is replicated as
several SREM operations, which are stored in the also_propagate
array to be propagated after the call, but this would break
atomicity.
To keep the command's atomicity, wrap the also_propagate
array with MULTI/EXEC.
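A sketch of the wrapping in call() (simplified; the propagateMulti /
propagateExec helpers are illustrative):

    /* After the command ran, flush the accumulated extra propagations.
     * If there is more than one, sandwich them between MULTI and EXEC
     * so replicas and the AOF apply them atomically. */
    if (server.also_propagate.numops) {
        int multi = server.also_propagate.numops > 1;
        if (multi) propagateMulti(c->db->id);
        for (int j = 0; j < server.also_propagate.numops; j++) {
            redisOp *rop = &server.also_propagate.ops[j];
            propagate(rop->cmd, rop->dbid, rop->argv, rop->argc, rop->target);
        }
        if (multi) propagateExec(c->db->id);
        redisOpArrayFree(&server.also_propagate);
    }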
Is it sufficient... ? -- Yes it is. In standalone mode, we say READY=1
at the comment point; however in replicated mode, we delay sending
READY=1 until the replication sync completes.
This adds Makefile/build-system support for USE_SYSTEMD=(yes|no|*). This
variable's value determines whether or not libsystemd will be linked at
build-time.
If USE_SYSTEMD is set to "yes", make will use PKG_CONFIG to check for
libsystemd's presence, and fail the build early if it isn't
installed/detected properly.
If USE_SYSTEMD is set to "no", libsystemd will *not* be linked, even if
support for it is available on the system redis is being built on.
For any other value that USE_SYSTEMD might assume (e.g. "auto"),
PKG_CONFIG will try to determine libsystemd's presence, and set up the
build process to link against it, if it was indicated as being
installed/available.
This approach has a number of repercussions of its own, most importantly
the following: If you build redis on a system that actually has systemd
support, but no libsystemd-dev package(s) installed, you'll end up
*without* systemd notification/status reporting support in
redis-server. This changes established runtime behaviour.
I'm not sure if the build system and/or the server binary should
indicate this. I'm also wondering if not actually having
systemd-notify-support, but requesting it via the server's config,
should result in a fatal error now.
Instead of replicating a subset of libsystemd's sd_notify(3) internally,
use the dynamic library provided by systemd to communicate with the
service manager.
When systemd supervision was auto-detected or configured, communicate
the actual server status (i.e. "Loading dataset", "Waiting for
master<->replica sync") to systemd, instead of declaring readiness right
after initializing the server process.
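For reference, sd_notify(3) / sd_notifyf(3) are the libsystemd calls
used for this kind of reporting; a minimal sketch of the wrapper, with
an illustrative name and call sites:

    #include <systemd/sd-daemon.h>

    /* sd_notify() is a no-op when not running under systemd, so it is
     * safe to call unconditionally. */
    void reportStatusToSystemd(int ready, const char *status) {
        if (ready) sd_notify(0, "READY=1");
        sd_notifyf(0, "STATUS=%s", status);
    }

    /* e.g. reportStatusToSystemd(0, "Loading dataset");
     *      reportStatusToSystemd(0, "Waiting for master<->replica sync");
     *      reportStatusToSystemd(1, "Ready to accept connections"); */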
Reduce the default minimum effort, so that when fragmentation is just
detected, the impact on latency will be minor.
Reduce the default maximum effort, mainly so that sudden massive
deletions won't trigger an aggressive defrag that would cause latency.
When activedefrag is disabled mid-run, reset the 'running' info field
and clear the scan cursor, so that when it is re-enabled, a fresh new
scan will start.
Clearing the 'running' variable is important since lowering the
defragger tunables mid-scan won't help: the defragger only considers
new thresholds when a new scan starts, and during a scan it can only
become more aggressive (when more severe fragmentation is detected),
never less aggressive. So by temporarily disabling activedefrag, one
can lower the tunables.
Removing the experimental warning.
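A sketch of the mid-run disable handling described above (simplified
from the defrag cycle; variable placement is illustrative):

    /* Called periodically from serverCron(). */
    void activeDefragCycle(void) {
        static unsigned long cursor = 0;
        if (!server.active_defrag_enabled) {
            if (server.active_defrag_running) {
                /* Disabled mid-run: reset so the next enable starts a
                 * fresh scan and re-reads the (possibly lowered)
                 * tunables. */
                server.active_defrag_running = 0; /* the 'running' INFO field */
                cursor = 0;                       /* restart the scan */
            }
            return;
        }
        /* ... incremental scan and defrag continues from 'cursor' ... */
    }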
One problem with the solution proposed so far in #6537 is that key
lookups outside of a command execution via call() still used a cached
time. The cached time needed to be refreshed in multiple places,
especially because of modules callbacks from timers, cluster bus, and
thread safe contexts, that may use RM_Open().
In order to avoid this problem, this commit introduces the ability to
detect if we are inside call(): this way we can use the reference fixed
time only when we are in the context of a command execution or Lua
script, but for the asynchronous lookups, we can still use mstime() to
get a fresh time reference.
After the thread in #6537 and thanks to the suggestions received, this
commit updates the original patch in order to:
1. Solve the problem of updating the time in multiple places by updating
it in call().
2. Avoid introducing a new field but use our cached time.
This required some minor refactoring to the function updating the time,
and the introduction of a new cached time in microseconds in order to
use less gettimeofday() calls.
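A sketch of the resulting mechanism (simplified; the fixed_time_expire
and ustime names follow the description above, the lookup helper is
illustrative):

    #include <sys/time.h>

    /* One gettimeofday() feeds both cached resolutions. */
    void updateCachedTime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        server.ustime = (long long)tv.tv_sec * 1000000 + tv.tv_usec;
        server.mstime = server.ustime / 1000;
    }

    /* call() brackets command execution, so lookups can tell whether a
     * fixed time reference is appropriate. */
    void call(client *c, int flags) {
        if (server.fixed_time_expire++ == 0) updateCachedTime();
        /* ... execute the command ... */
        server.fixed_time_expire--;
    }

    mstime_t expireLookupNow(void) {
        /* Inside a command or Lua script: the stable cached time. For
         * async lookups (module timers, cluster bus, thread safe
         * contexts using RM_Open...): a fresh mstime(). */
        return server.fixed_time_expire ? server.mstime : mstime();
    }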
Calling lookupKey*() many times to search for a key in one command
may return different results.
That's because lookupKey*() calls expireIfNeeded(), which deletes
the key when its expire time is reached. So we can get a robj before
the expire time, but NULL after the expire time.
Worst of all, this may lead to a Redis crash. For example, with
`RPOPLPUSH foo foo`, the first time we get a list from `foo` and
hold the pointer, but when we look up `foo` again it's expired and
deleted. Now we hold freed memory, and when rpoplpushHandlePush()
executes, Redis crashes.
To fix it, we refactor the judgment of whether a key is expired to
use the same base time, `server.cmd_start_mstime`, instead of calling
mstime() every time.
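A sketch of the refactored judgment (simplified):

    /* All expiry checks within one command compare against the same
     * base time, so repeated lookups of the same key agree. */
    int keyIsExpired(redisDb *db, robj *key) {
        mstime_t when = getExpire(db, key);
        if (when < 0) return 0;                   /* no TTL on this key */
        mstime_t now = server.cmd_start_mstime;   /* fixed per command */
        return now > when;
    }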
- Add RM_GetServerInfo and friends
- Add auto memory for new opaque struct
- Add tests for new APIs
other minor fixes:
- add const in various char pointers
- requested_section in modulesCollectInfo was actually not sds but char*
- extract new string2d out of getDoubleFromObject for code reuse
Add module API for
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way hooks test work, and add tests for all of the above
startLoading() now gets a flag indicating what is being loaded.
stopLoading() now gets an indication of success or failure.
Adding startSaving() and stopSaving() with similar args and roles.
As we know, if a module exports module-side data types,
unloading it is not allowed. The same rule applies to
blocked clients in a module: because we use background
threads to implement module blocked clients, it's not
safe to unload a module while there are background
threads running. So it's necessary to check whether any
blocked clients are running in a module when unloading it.
Moreover, after that we can ensure that if there are no
modules, there are no module blocked clients, even after
a module is unloaded. So we can call
moduleHandleBlockedClients only when we have installed
modules.
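A sketch of the unload-time check (simplified; errno values are
illustrative):

    int moduleUnload(sds name) {
        struct RedisModule *module = dictFetchValue(modules, name);
        if (module == NULL) {
            errno = ENOENT;
            return REDISMODULE_ERR;
        }
        if (listLength(module->types)) {
            errno = EBUSY;      /* exports data types */
            return REDISMODULE_ERR;
        }
        if (module->blocked_clients) {
            errno = EAGAIN;     /* background threads may still be running */
            return REDISMODULE_ERR;
        }
        /* ... onUnload callback, dlclose(), etc ... */
        return REDISMODULE_OK;
    }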
This is what happened:
1. Instance starts, is a slave in the cluster configuration, but
actually server.masterhost is not set, so technically the instance
is acting like a master.
2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
the instance is a master, in the case it is logically a slave and the
cluster is enabled. So now we have a cached master even if the instance
is practically configured as a master (from the POV of
server.masterhost value and so forth).
3. clusterCron() sees that the instance requires to replicate from its
master, because logically it is a slave, so it calls
replicationSetMaster() that will in turn call
replicationCacheMasterUsingMyself(): before this commit, this call would
overwrite the old cached master, creating a memory leak.
misc:
- handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
- fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers. (needed to detect the rdb pipe was closed)
- add key-load-delay config for testing
- trim connShutdown which is no longer needed
- rioFdsetWrite -> rioFdWrite - simplified since there's no longer need to write to multiple FDs
- don't detect rdb child exited (don't call wait3) until we detect the pipe is closed
- Cleanup bad optimization from rio.c, add another one
* Introduce a connection abstraction layer for all socket operations and
integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
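A sketch of what the connection abstraction looks like: each transport
(plain TCP, TLS) supplies a vtable of operations, and the rest of the
code only talks to the generic connection (names are illustrative; the
real interface lives in connection.h):

    typedef struct connection connection;

    typedef struct ConnectionType {
        int  (*connect)(connection *conn, const char *addr, int port);
        int  (*write)(connection *conn, const void *data, size_t len);
        int  (*read)(connection *conn, void *buf, size_t len);
        void (*close)(connection *conn);
    } ConnectionType;

    struct connection {
        ConnectionType *type;   /* e.g. CT_Socket or CT_TLS */
        int fd;
        void *private_data;     /* e.g. the SSL* for TLS connections */
    };

    /* Callers never branch on the transport: */
    static inline int connWrite(connection *conn, const void *data, size_t len) {
        return conn->type->write(conn, data, len);
    }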
cluster.c - stack buffer memory alignment
The pointer 'buf' is cast to a more strictly aligned pointer type
evict.c - lazyfree_lazy_eviction was always called
defrag.c - bug in dead code
server.c - casting was missing parenthesis
rax.c - indentation / newline suggested an 'else if' was intended
It seems that since I added the creation of the jemalloc thread, Redis
sometimes fails to start with the following error:
Inconsistency detected by ld.so: dl-tls.c: 493: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
This seems to be due to a race bug in ld.so, in which thread-local
storage (TLS) creation on the thread collides with dlopen.
Move the creation of BIO and jemalloc threads to after modules are loaded.
Plus a small bugfix for trying to disable the jemalloc thread at runtime.
We don't want the API to be usable directly in an unsafe way, without
checking whether there is an active child. Now the safety checks are
moved directly into the function performing the operations.
We can't expect SIGUSR1 to have any specific value range, so let's
define an exit code that we can handle in a special way.
This also fixes an #include <wait.h> that is not standard.
SipHash expects a 128-bit key, and we were indeed generating 128-bits,
but restricting them to hex characters 0-9a-f, effectively giving us
only 4 bits-per-byte of key material, and 64 bits overall.
Now, we skip the hex conversion and supply 128 bits of unfiltered
random data.
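A minimal sketch of the change, assuming a urandom-based helper (the
real code also has a fallback path, omitted here):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read raw random bytes: every bit is key material. */
    void getRandomBytes(unsigned char *p, size_t len) {
        FILE *fp = fopen("/dev/urandom", "r");
        if (fp == NULL || fread(p, len, 1, fp) != 1) {
            fprintf(stderr, "Can't read from /dev/urandom\n");
            exit(1);
        }
        fclose(fp);
    }

    void initHashSeed(void) {
        unsigned char hashseed[16];            /* full 128-bit key */
        getRandomBytes(hashseed, sizeof(hashseed));
        dictSetHashFunctionSeed(hashseed);     /* feeds SipHash */
    }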
* create module API for forking child processes.
* refactor duplicate code around creating and tracking forks by AOF and RDB.
* child processes listen to SIGUSR1 and die via exitFromChild in order to
eliminate a valgrind warning about an unhandled signal.
* note that BGSAVE error reply has changed.
The valgrind error is:
Process terminating with default action of signal 10 (SIGUSR1)
The implementation of the diskless replication was currently diskless only on the master side.
The slave side was still storing the received rdb file to the disk before loading it back in and parsing it.
This commit adds two modes to load rdb directly from socket:
1) when-empty
2) using "swapdb"
The third mode, using a diskless slave via flushdb, is risky and currently not included.
other changes:
--------------
distinguish between aof configuration and state so that we can re-enable aof only when sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt)
also, CONFIG GET and INFO during rdb loading would have lied
When loading an rdb from the network, don't kill the server on a short read (which can be a network error)
Fix rdb check when performed on preamble AOF
tests:
run replication tests for diskless slave too
make replication test a bit more aggressive
Add test for diskless load swapdb
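A sketch of the "swapdb" load path (helper names are illustrative;
error handling is simplified):

    void readSyncBulkPayloadSwapdb(rio *rdb) {
        /* Keep the current dataset aside before parsing the RDB
         * straight off the socket. */
        redisDb *backup = disklessLoadMakeBackups();
        emptyDb(-1, EMPTYDB_NO_FLAGS, NULL);
        if (rdbLoadRio(rdb, NULL, 0) != C_OK) {
            /* Short read / network error mid-load: roll back rather
             * than being left with an empty dataset. */
            disklessLoadRestoreBackups(backup);
            return;
        }
        disklessLoadDiscardBackups(backup);    /* success: free old data */
    }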
jemalloc 5 doesn't immediately release memory back to the OS; instead there's a decaying
mechanism, which doesn't work when there's no traffic (no allocations).
This is most evident when there's no traffic after FLUSHDB: the RSS will remain high.
1) enable jemalloc background purging
2) explicitly purge in flushdb
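Both knobs go through jemalloc's real mallctl interface (the je_
prefix depends on how jemalloc is built; wrapper names are in the
spirit of zmalloc.c):

    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* 1) Enable jemalloc's background thread so decay-based purging
     *    happens even when the server is idle (no allocations). */
    int set_jemalloc_bg_thread(int enable) {
        bool b = enable;
        return je_mallctl("background_thread", NULL, NULL, &b, sizeof(b));
    }

    /* 2) Force an immediate purge of all arenas, e.g. after FLUSHDB.
     *    Arena index == narenas is jemalloc's "all arenas" selector. */
    int jemalloc_purge(void) {
        char buf[32];
        unsigned narenas = 0;
        size_t sz = sizeof(unsigned);
        if (je_mallctl("arenas.narenas", &narenas, &sz, NULL, 0) == 0) {
            snprintf(buf, sizeof(buf), "arena.%u.purge", narenas);
            return je_mallctl(buf, NULL, NULL, NULL, 0);
        }
        return -1;
    }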
Now threads are stopped even when the connections drop immediately to
zero, not allowing the networking code to detect the condition and stop
the threads. serverCron() will handle that.
This is just an experiment for now, there are a couple of race
conditions, mostly harmless for the performance gain experiment that
this commit represents so far.
The general idea here is to take Redis single threaded and instead
fan-out on expensive kernel calls: write(2) in this case, but the same
concept could be easily implemented for read(2) and protocol parsing.
However just threading writes, as in this commit, is enough to evaluate
whether the approach is sound.
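A sketch of the fan-out idea, with the races left in as noted above
(names and the writeToClient signature are illustrative):

    #include <pthread.h>
    #include <stdatomic.h>

    #define IO_THREADS_NUM 4
    static list *io_threads_list[IO_THREADS_NUM];  /* clients per thread */
    static _Atomic unsigned long io_threads_pending[IO_THREADS_NUM];

    /* Each I/O thread spins until the main thread hands it clients,
     * performs the write(2) calls, then signals completion. */
    static void *IOThreadMain(void *arg) {
        long id = (long)arg;
        for (;;) {
            if (atomic_load(&io_threads_pending[id]) == 0) continue;
            listIter li;
            listNode *ln;
            listRewind(io_threads_list[id], &li);
            while ((ln = listNext(&li)) != NULL)
                writeToClient(listNodeValue(ln), 0); /* the expensive syscall */
            listEmpty(io_threads_list[id]);
            atomic_store(&io_threads_pending[id], 0);
        }
        return NULL;
    }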
Fixes #6012.
As long as "INFO is broken", this should be adequate IMO. Once we rework
`INFO`, perhaps into RESP3, this implementation should be revisited.
Adding a new field, categories, at the end of the
command reply makes it easy to read and to distinguish
flags from categories, and is also compatible with the old format.
That's not REALLY needed, but... right now with LASTSAVE being the only
command tagged as "admin" but not "dangerous" what happens is that after
rewrites the rewrite engine will produce from the rules:
user default on +@all ~* -@dangerous nopass
The rewrite:
user default on nopass ~* +@all -@admin -@dangerous +lastsave
Which is correct but will have users wondering about why LASTSAVE has
something special.
Since LASTSAVE after all also leaks information about the underlying
server configuration, that may not be great for SAAS vendors, let's tag
it as dangerous as well and forget about this issue :-)
This way the behavior is very similar to the past one.
This is useful in order to remind users that they probably failed to
configure a password correctly.