The rio structure is referenced in the global 'riostate' structure so that
the logging functions are always able to access the state of the RDB
"pseudo-loading", which is needed for the check.
Courtesy of Valgrind.
They were under /deps since they originate from a different source tree,
however at this point they are heavily modified and we took ownership of
both files, making changes and fixing bugs, so there is no upgrade path
from the original code tree.
Given that, it is better to move the code under /src with proper
dependencies and a simpler editing experience.
strict_strtoll() has a bug: it reports the empty string as valid and
parses it as zero. Apparently nobody ever replaced this old call with the
faster/saner string2ll(), which is used everywhere else in the Redis core.
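A minimal standalone sketch of the expected behavior (illustrative code,
not the actual util.c implementation): a strict string-to-long-long
conversion must refuse the empty string instead of silently parsing it as
zero.

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    /* Strictly convert a string to a long long: returns 1 on success and
     * 0 on failure. Unlike the buggy behavior described above, the empty
     * string is refused instead of being parsed as zero. */
    static int strictStrToLL(const char *s, long long *value) {
        char *endptr;
        long long v;

        if (s[0] == '\0') return 0;         /* Reject the empty string. */
        errno = 0;
        v = strtoll(s, &endptr, 10);
        if (errno == ERANGE) return 0;      /* Overflow or underflow. */
        if (*endptr != '\0') return 0;      /* Trailing garbage. */
        *value = v;
        return 1;
    }

    int main(void) {
        long long v;
        printf("\"\"   -> %d\n", strictStrToLL("", &v));    /* 0: rejected. */
        printf("\"42\" -> %d\n", strictStrToLL("42", &v));  /* 1: v == 42.  */
        return 0;
    }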
This commit closes #3333.
In issues #3361 / #3365 a problem was reported and fixed with redis-cli
not correctly updating the current DB on error after SELECT.
In theory this bug was fixed in 0042fb0e, but actually that commit only
fixed the prompt updating, not the fact that the state was set in the
wrong way.
This commit removes the check in the prompt update: now that the state
itself should be correct, there is no longer any need for this check.
This commit both fixes the crash reported in issue #3364 and properly
closes the old links after the Sentinel address for the other masters
gets updated.
The two problems were:
1. The Sentinel that switched address may not monitor all the masters,
so it is possible that there is no match and the 'match' variable is
NULL. Now we check for no match and 'continue' to the next master.
2. While inspecting the code because of issue "1" I noticed a problem in
the code that disconnects the link of the Sentinel that needs the address
update. Basically link->disconnected is non-zero even if just *a single
link* (cc -- command link, or pc -- Pub/Sub link) is disconnected, so
checking if (link->disconnected) in order to close the links risks
leaving one link connected (see the sketch after this list).
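A rough standalone sketch of the second fix (types and names are
simplified, not the actual sentinel.c code): since the disconnected flag
may be set while only one of the two connections is down, each connection
has to be checked and closed individually.

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the Sentinel instance link, holding two
     * separate connections: cc (commands) and pc (Pub/Sub). */
    typedef struct instanceLink {
        void *cc;           /* Command connection, or NULL if not connected. */
        void *pc;           /* Pub/Sub connection, or NULL if not connected. */
        int disconnected;   /* Non-zero if *at least one* connection is down. */
    } instanceLink;

    static void closeConnection(void **conn) {
        if (*conn) { free(*conn); *conn = NULL; }
    }

    /* The disconnected flag alone cannot tell which of the two connections
     * is down, so relying on it to decide what to close risks leaving one
     * connection open. Correct pattern: check each connection on its own. */
    static void closeLinkConnections(instanceLink *link) {
        if (link->cc) closeConnection(&link->cc);
        if (link->pc) closeConnection(&link->pc);
        link->disconnected = 1;
    }

    int main(void) {
        instanceLink link = { .cc = malloc(1), .pc = NULL, .disconnected = 1 };
        closeLinkConnections(&link);
        printf("cc=%p pc=%p\n", link.cc, link.pc); /* Both NULL now. */
        return 0;
    }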
I was able to manually reproduce the crash at "1" and verify that the
commit resolves the issue.
Closes #3364.
So far we used an external program (later executed within Redis) with its
own parser in order to check RDB files for correctness. This forces us,
at each RDB format update, to maintain two copies of the same format
implementation that are hard to keep in sync. Moreover the former RDB
checker only checked the very high-level format of the file, without
actually trying to load things in memory. Certain corruptions can only be
detected by really loading key-value pairs.
This first commit attempts to unify the Redis RDB loading code with the
task of checking the RDB file for correctness. More work is needed but it
looks like a sound direction so far.
The quicklist keeps a cached version of the ziplist representation size
in bytes. The implementation must update this length every time the
underlying ziplist changes. However quicklistReplaceAtIndex() failed to
update the length.
During LSET calls, the size of the ziplist blob and the cached size
inside the quicklist diverged. Later, when this size is used in an
authoritative way, for example during node splitting in order to copy
the nodes, we end up with a duplicated node that may contain random
garbage.
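A minimal standalone model of the broken invariant (illustrative, not the
actual quicklist.c code): whenever the underlying blob is rewritten, the
cached byte size must be recomputed, which is exactly what the replace
path forgot to do.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for a quicklist node: a heap blob plus a cached
     * byte size that must always mirror the real blob length. */
    typedef struct node {
        char *blob;
        size_t sz;      /* Cached size of blob in bytes. */
    } node;

    /* Replace the node content. Forgetting the last line reproduces the
     * class of bug fixed here: blob and cached size diverge, and any code
     * trusting sz (e.g. copying the node during a split) reads garbage. */
    static void nodeReplace(node *n, const char *data) {
        free(n->blob);
        n->blob = strdup(data);
        n->sz = strlen(n->blob) + 1;    /* Keep the cached size in sync. */
    }

    int main(void) {
        node n = { .blob = strdup("hello"), .sz = 6 };
        nodeReplace(&n, "a much longer replacement value");
        printf("cached sz = %zu, real = %zu\n", n.sz, strlen(n.blob) + 1);
        free(n.blob);
        return 0;
    }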
This commit should fix issue #3343, however several problems were found
while reviewing the quicklist.c code in search of this bug, and they
should be addressed sooner or later.
For example:
1. Keeping a cached ziplist length is fragile, since failing to update it
leads to this kind of issue.
2. The node splitting code needs auditing. For example it works only
because of a side effect of ziplistDeleteRange() being able to cope with
a wrong count of elements to remove. The code inside quicklist.c assumes
that -1 means "delete till the end", while actually it is just a count of
how many elements to delete, and an unsigned count at that. So -1 gets
converted into the maximum integer, and just by chance the ziplist code
stops deleting elements once there are no more to delete.
3. Node splitting is extremely inefficient: it copies the node and
removes elements from both nodes even when all that is actually needed is
to move a single entry from one node to the other, or when the new
resulting node is completely empty, so there is nothing to copy and only
a new node to create.
However, at least for Redis 3.2, introducing fresh code inside
quicklist.c may be even more risky, so instead I'm writing a better fuzzy
tester to stress the internals a bit more in order to anticipate other
possible bugs.
This bug was found using a fuzzy tester written after having some clue
about where the bug could be. The tester eventually created a ~2000
command sequence able to always crash Redis. I wrote a better version of
the tester that automatically searched for the smallest sequence able to
crash Redis. Later this smaller sequence was minimized further by
removing random commands while it still crashed the server. This resulted
in a sequence of 7 commands. With this small sequence it was just a
matter of filling the code with enough printf() calls to understand
enough state to fix the bug.
Display the nodes summary once the cluster is established using
redis-trib.rb.
After the cluster meet and join was done, the summary that was shown
reported stale node info, with the confusing result that slaves were
shown as masters. The fix is to reset the nodes and reload the cluster
information before checking the cluster status after creating it.
This commit changes what was provided by PR #3315 (merged) in order to
let the user specify the log level as a string.
The defines could also be used, but when this happens they must be
decoupled from the defines in the Redis core, like in the other parts of
the Redis modules implementation, so that a switch statement (or a
function) remaps between the two, otherwise we are no longer free to
change the internal Redis defines.
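A minimal standalone sketch of the remapping idea (names are illustrative,
not the actual module.c code): a single function translates the level
string into the core define, so the internal defines remain free to
change.

    #include <stdio.h>
    #include <strings.h>

    /* Core log levels (values are internal and may change freely). */
    #define LL_DEBUG   0
    #define LL_VERBOSE 1
    #define LL_NOTICE  2
    #define LL_WARNING 3

    /* Map the level string passed by a module to the core define. Falls
     * back to LL_VERBOSE for unknown strings. */
    static int moduleLogLevelFromString(const char *level) {
        if (!strcasecmp(level, "debug"))   return LL_DEBUG;
        if (!strcasecmp(level, "verbose")) return LL_VERBOSE;
        if (!strcasecmp(level, "notice"))  return LL_NOTICE;
        if (!strcasecmp(level, "warning")) return LL_WARNING;
        return LL_VERBOSE;
    }

    int main(void) {
        printf("warning -> %d\n", moduleLogLevelFromString("warning"));
        return 0;
    }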
Most of the time checking the last element is the way to go, however
there are patterns where the contrary is the best choice. The zig-zag
scanning implemented in this commit always checks the obvious element
first (the last added -- think of a loop where the last element
allocated gets freed again and again), and then continues checking one
element from the head and one from the tail.
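A minimal standalone sketch of the scanning order (generic code, not the
actual implementation): probe the last added slot first, then alternate
one probe from the head and one from the tail.

    #include <stdio.h>

    /* Zig-zag search: probe the last element first, then alternate between
     * one element from the head and one from the tail. Returns the index
     * of 'target' or -1 if not found. */
    static int zigzagFind(void **array, int len, void *target) {
        if (len == 0) return -1;
        if (array[len-1] == target) return len-1;    /* Obvious candidate. */
        int head = 0, tail = len-2;
        while (head <= tail) {
            if (array[head] == target) return head;  /* One from the head... */
            if (array[tail] == target) return tail;  /* ...one from the tail. */
            head++;
            tail--;
        }
        return -1;
    }

    int main(void) {
        int a, b, c;
        void *slots[] = { &a, &b, &c };
        printf("found at %d\n", zigzagFind(slots, 3, &c)); /* 2: last slot.  */
        printf("found at %d\n", zigzagFind(slots, 3, &a)); /* 0: head probe. */
        return 0;
    }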
Thanks to @dvisrky who fixed the original implementation of the
function and proposed zig-zag scanning.
The most visible effect of this bug was the inability of Redis to
reconfigure old masters back to slaves once they are reachable again
after a failover. This was due to failing to reset the count of pending
commands properly, so the master appeared forever down.
The bug was introduced with the new Redis 3.2 Sentinel connection sharing
feature, which is a lot more complex than the 3.0 code, but more
scalable.
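A minimal standalone sketch of the accounting involved (simplified, not
the actual sentinel.c code): the shared link counts in-flight commands,
and the counter has to be reset when the connection is dropped, otherwise
the instance keeps looking unavailable.

    #include <stdio.h>

    /* Simplified shared Sentinel link. */
    typedef struct instanceLink {
        int connected;
        int pending_commands;   /* Commands sent and not yet acknowledged. */
    } instanceLink;

    static void linkSendCommand(instanceLink *link) {
        if (link->connected) link->pending_commands++;
    }

    /* On disconnection the replies for in-flight commands will never
     * arrive, so the counter must be reset; failing to do so makes the
     * instance look saturated/down even once it is reachable again. */
    static void linkCloseConnection(instanceLink *link) {
        link->connected = 0;
        link->pending_commands = 0;
    }

    int main(void) {
        instanceLink link = { .connected = 1, .pending_commands = 0 };
        linkSendCommand(&link);
        linkSendCommand(&link);
        linkCloseConnection(&link);
        printf("pending after close: %d\n", link.pending_commands); /* 0 */
        return 0;
    }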
Many thanks to people reporting the issue, and especially to
@sskorgal for investigating the issue in depth.
Hopefully closes #3285.
I recently introduced populating the autocomplete help array with the
output of the COMMAND command, if available. However this was performed
before parsing the arguments, defaulting to the instance on port 6379.
Once the connection is performed it remains stable.
The effect is that if there is an instance running on port 6379,
whatever port you specify is ignored and 6379 is connected to instead.
The right port will be selected only after a reconnection.
Closes #3314.
Reference issue #3218.
Checking the code I can't find a reason why the original RESTORE code
was so opinionated about restoring only the current version. The code in
`rdb.c` appears to be as capable as ever of restoring data from older
versions of Redis, and the only place where the current version is
required in order to correctly restore data is while loading the opcodes,
not the values themselves, which is what happens in the case of RESTORE.
For the above reasons, this commit enables RESTORE to accept older
versions of value payloads.
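A standalone sketch of the relaxed check (the 2-byte version + 8-byte
CRC64 footer matches the DUMP payload format, but the code is
illustrative): instead of requiring the payload version to be exactly the
current one, it is enough to refuse payloads newer than what we can load.

    #include <stdio.h>
    #include <stdint.h>

    #define RDB_VERSION 7   /* Current RDB version of this illustrative build. */

    /* Check the DUMP payload footer: a 2-byte RDB version followed by an
     * 8-byte CRC64 (CRC verification omitted here). Older payload versions
     * are fine, only payloads produced by a *newer* format are refused. */
    static int verifyDumpVersion(const unsigned char *p, size_t len) {
        if (len < 10) return 0;                     /* Too short for the footer. */
        const unsigned char *footer = p + len - 10;
        uint16_t rdbver = footer[0] | (footer[1] << 8);
        if (rdbver > RDB_VERSION) return 0;         /* Can't load newer formats. */
        return 1;
    }

    int main(void) {
        unsigned char payload[16] = {0};
        payload[6] = 6;                 /* Footer starts at offset 16-10 = 6. */
        printf("older payload ok: %d\n", verifyDumpVersion(payload, sizeof(payload)));
        payload[6] = 99;
        printf("newer payload ok: %d\n", verifyDumpVersion(payload, sizeof(payload)));
        return 0;
    }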
Comment format fixed + local vars changed from camel case to underscore
separators, as the Redis code base normally does (camel case is mostly
used for global symbols like structure names, function names, global
vars, ...).
Now that modules receive RedisModuleString objects on loading, they are
allowed to call the String API, so the context must be released
correctly.
Related to #3293.
All lists are now represented via quicklists.
Quicklists never reference robj structures, so trying to compress their
representation does not make sense. That the new way is faster was
experimentally verified with micro-benchmarks in order to prove that the
intuition was correct.
There is probably no compiler that will actually break the code or raise
a signal for an unsigned -> signed overflowing conversion, still it was
apparently possible to write it in a more correct way.
All tests passing.
Compiling Redis worked only as a side effect of the jemalloc target
specifying -ldl among the needed linker options; otherwise the flag is
not provided during linking and the dlopen() API remains an unresolved
symbol.
In modules we fill a set of function pointers defined in redismodule.h,
populating a set of APIs that are callable from the module. We use this
manual process instead of resorting to dynamic linking so that we have
exact control over how we pass the API to the module, and we can even
pass different functions for the same name, depending on the API version
declared by the module.
However if the function pointers in redismodule.h and the functions
defined in module.c have the same name, they conflict since the core
exports the symbols to the module.
There is probably some compiler flag trick to avoid this, but in order
to be safer in the future and more easily compatible with different
building systems, this commit changes the internal function prefix from
RedisModule_ to RM_, so for example:
RM_StringSet() will be exported as RedisModule_StringSet().
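A heavily simplified standalone sketch of the mechanism (not the actual
redismodule.h / module.c code): the core implements RM_-prefixed
functions and registers them under their exported RedisModule_ names,
while the module side only sees function pointers with the public names,
so no symbol clash is possible.

    #include <stdio.h>
    #include <string.h>

    /* ---- Core side: the implementation uses the RM_ prefix. ---- */
    static int RM_StringSet(const char *key, const char *val) {
        printf("core setting %s = %s\n", key, val);
        return 0;
    }

    /* Tiny registry mapping exported names to core implementations. */
    static struct { const char *name; void *funcptr; } apiRegistry[] = {
        { "RedisModule_StringSet", (void*)RM_StringSet },
    };

    static void *coreGetApi(const char *name) {
        for (size_t j = 0; j < sizeof(apiRegistry)/sizeof(apiRegistry[0]); j++)
            if (!strcmp(apiRegistry[j].name, name)) return apiRegistry[j].funcptr;
        return NULL;
    }

    /* ---- Module side: only a function pointer with the public name. ---- */
    int (*RedisModule_StringSet)(const char *key, const char *val);

    int main(void) {
        /* The equivalent of the module init step filling the pointers. */
        RedisModule_StringSet = (int (*)(const char *, const char *))
                                coreGetApi("RedisModule_StringSet");
        return RedisModule_StringSet("mykey", "myvalue");
    }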
Use the COMMAND output to fill the built-in help with partial
information. This makes redis-cli able to at least complete commands that
are exported by the Redis server it is connected to, but were not
available in the help.h file when the redis-cli binary was compiled.
Fix a possible race condition in sdown event detection when a Sentinel's connection to a master/slave/sentinel becomes disconnected just after the last PONG and before the next PING.
This fixes a bug introduced by d827dbf, and makes the code consistent
with the logic of always allowing, while the cluster is down, commands
that don't target any key.
As a side effect the code is also simpler now.
This fixes issue #3043.
Before this fix, after a complete resharding of a master's slots to
other nodes, the master remains empty and its slaves migrate away to
other masters serving a non-zero number of slots. However the old master,
now empty, is no longer considered a target for migration, because the
system has no way to tell it had slaves in the past.
This fix leaves the algorithm used in the past untouched, but adds a new
rule: when a new or old master which is empty and without slaves is
assigned its first slot, if other masters in the cluster have slaves, it
is automatically considered to be a target for replica migration.
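A standalone sketch of the added rule (simplified and only an
approximation of the actual cluster.c logic): when a master without
slaves receives its first slot and at least one other master has slaves,
the node is flagged as a valid target for replica migration.

    #include <stdio.h>

    typedef struct masterNode {
        int numslots;
        int numslaves;
        int migration_target;   /* Eligible to receive migrating replicas. */
    } masterNode;

    /* Called when 'm' is assigned a slot. If this is its first slot, it
     * has no slaves, and some other master does have slaves, mark it as a
     * migration target even though it never had slaves in the past. */
    static void onSlotAssigned(masterNode *m, masterNode *masters, int count) {
        m->numslots++;
        if (m->numslots == 1 && m->numslaves == 0) {
            for (int j = 0; j < count; j++) {
                if (&masters[j] != m && masters[j].numslaves > 0) {
                    m->migration_target = 1;
                    break;
                }
            }
        }
    }

    int main(void) {
        masterNode masters[2] = { { .numslaves = 2 }, { 0 } };
        onSlotAssigned(&masters[1], masters, 2);
        printf("migration target: %d\n", masters[1].migration_target); /* 1 */
        return 0;
    }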
I've renamed maxmemoryToString to evictPolicyToString since that is
more accurate (and easier to mentally connect with the correct data), and
updated the function to use server.maxmemory_policy rather than
server.maxmemory. Now with a default config it actually returns the
correct policy rather than volatile-lru.
This fix was suggested by Anthony LaTorre, who also provided a good
test case that was used to verify the fix.
The problem with the old implementation is that the time returned by a
timer event (that is, the time after which it wants to run again) is
added to the event *start time*. So if the event takes longer to run than
the interval after which it asks to be scheduled again, an infinite loop
is triggered.
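A minimal standalone sketch of the difference (generic code, not the
actual ae.c event loop): the next fire time must be computed from the
time after the callback returns, not from the event start time, otherwise
a slow callback keeps being re-armed in the past.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* A timer callback returning after how many seconds it wants to run
     * again. */
    static int periodicTask(void) {
        sleep(2);       /* Simulate a callback slower than its own period. */
        return 1;       /* "Run me again in 1 second." */
    }

    int main(void) {
        time_t start = time(NULL);
        int period = periodicTask();

        /* Buggy scheduling: next = start + period. Since the callback took
         * longer than 'period', 'next' is already in the past and the
         * event fires again immediately, forever. */
        time_t next_buggy = start + period;

        /* Fixed scheduling: measure the time again after the callback. */
        time_t next_fixed = time(NULL) + period;

        printf("now=%ld buggy next=%ld (in the past) fixed next=%ld\n",
               (long)time(NULL), (long)next_buggy, (long)next_fixed);
        return 0;
    }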
The new BITFIELD command is an extension to the Redis bit operations,
where not just single-bit operations are performed: the array of bits
composing a string can be addressed at random, not-aligned offsets, with
unsigned and signed integers of any width like u8, s5, u10 (up to 64-bit
signed integers and 63-bit unsigned integers).
The BITFIELD command supports subcommands that can SET, GET, or INCRBY
those arbitrary bit counters, with multiple overflow semantics.
Trivia and credits:
A similar command was imagined a few times in the past, but for some
reason looked a bit far-fetched or not well specified.
Finally the command was proposed again in a clear form by Yoav Steinberg
from Redis Labs, who proposed a set of commands on arbitrarily sized
integers stored at bit offsets.
Starting from this proposal I wrote an initial specification of a single
command with sub-commands, similar to what Yoav envisioned, using short
names for type definitions and adding control over the overflow behavior.
This commit is the resulting implementation.
Examples:
BITFIELD mykey OVERFLOW wrap INCRBY i2 10 -1 GET i2 10
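For reference: in the example above OVERFLOW wrap selects wrapping
overflow semantics, INCRBY i2 10 -1 decrements by 1 the signed 2-bit
integer stored at bit offset 10, and GET i2 10 reads it back, so the
reply is an array with one integer per subcommand.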