This started out as #2158 by sunheehnus, but I kept rewriting it
until I could understand things more easily and get a few more
correctness guarantees out of the readability flow.
The original commit created and returned a new ziplist with the contents of
both input ziplists, but I prefer to grow one of the input ziplists
and destroy the other one.
So, instead of malloc+copy as in #2158, the merge now reallocs one of
the existing ziplists and copies the other ziplist into the new space.
Also added merge test cases to ziplistTest()
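A minimal sketch of the grow-and-append idea (the flat buffer layout and names here are illustrative; the real ziplistMerge() also fixes up headers, the tail offset, and the end marker):

    #include <stdlib.h>
    #include <string.h>

    /* Grow 'first' in place and append 'second', instead of malloc'ing a
     * third buffer and copying both inputs into it. */
    unsigned char *merge_blobs(unsigned char *first, size_t first_len,
                               unsigned char *second, size_t second_len) {
        unsigned char *merged = realloc(first, first_len + second_len);
        if (merged == NULL) return NULL;   /* 'first' is left untouched */
        memcpy(merged + first_len, second, second_len);
        free(second);                      /* the source buffer is destroyed */
        return merged;
    }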
This replaces individual ziplist vs. linkedlist representations
for Redis list operations.
Big thanks for all the reviews and feedback from everybody in
https://github.com/antirez/redis/pull/2143
The previous test wasn't returning the new ziplist, so the test
was invalid. Now the test works properly.
These problems were simultaneously discovered in #2154 and that
PR also had an additional fix we included here.
zipEntry was returning a struct, but that caused some
problems with tests under 32 bit builds.
The tests run better if we operate on structs allocated in the
caller without worrying about copying on return.
Previously, many files had individual main() functions for testing,
but each required being compiled with their own testing flags.
That gets difficult when you have 8 different flags you need
to set just to run all tests (plus, some test files required
other files to be compiled against them, and it seems some didn't
build at all without including the rest of Redis).
Now all individual test main() functions are renamed to a test
function for the file itself and one global REDIS_TEST define enables
testing across the entire codebase.
Tests can now be run with:
- `./redis-server test <test>`
e.g. ./redis-server test ziplist
If REDIS_TEST is not defined, no test code is compiled in, and no
tests end up in the final redis-server binary.
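The per-file pattern looks roughly like this (a sketch; the exact test entry points vary per file):

    #ifdef REDIS_TEST
    /* Formerly this file's standalone main(); compiled into redis-server
     * only when REDIS_TEST is defined. */
    int ziplistTest(int argc, char **argv) {
        /* ... test cases for this file ... */
        return 0;
    }
    #endif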
1. The server unixtime may remain not updated while loading the AOF, so
the ETA is not updated correctly.
2. The number of processed bytes was not initialized.
3. Possible division by zero condition (likely cause of issue #1932).
1. A memory leak in t_set.c has been fixed.
2. End-of-line spaces have been removed (from all over the place).
3. for loops have been reordered to match existing Redis style (less weird).
4. Comment formatting has been fixed (added * at the beginning of every comment line).
setTypeRandomElements() now returns unsigned long, and also uses unsigned long for anything related to count of members.
spopWithCountCommand() now uses unsigned long elements_returned instead of int, for values returned from setTypeRandomElements()
If we woke up to accept a connection, but we can't
accept it, inform the user of the error going on
with their networking.
(The previous message was the same for success or error!)
spopCommand() now runs spopWithCountCommand() in case the <count> param is found.
Added intsetRandomMembers() to Intset: copies N random members from the set into the supplied 'values' array. Uses either the Knuth or Floyd sampling algorithm depending on the count/size ratio.
Added setTypeRandomElements() to the SET type: returns a number of random elements from a non-empty set. This is a version of setTypeRandomElement() modified to return multiple entries, using dictGetRandomKeys() and intsetRandomMembers().
Added tests for SPOP with <count>: unit/type/set, unit/scripting, integration/aof
--
Cleaned up code a bit to match with required Redis coding style
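An illustrative session (the popped members are random, so the exact output varies):

127.0.0.1:6379> SADD myset a b c d e
(integer) 5
127.0.0.1:6379> SPOP myset 3
1) "c"
2) "a"
3) "e"
127.0.0.1:6379> SCARD myset
(integer) 2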
Otherwise there are security risks, especially when providing Redis as a
service: the user may "sniff" for admin commands renamed to an
unguessable string via rename-command in redis.conf.
Previously the string was created empty then re-sized
to fit the offset, but sds resize causes the sds to
over-allocate by at least 1 MB (which is a lot when
you are operating at bit-level access).
This also improves the speed of initial sets by 2% to 6%
based on quick testing.
Patch logic provided by @oranagra
Fixes #1918
Improvements:
- Return empty string if asking for non-existing section (INFO foo)
- Fix potential memory leak (caused by sdsempty() then returned if >2 args)
- Clean up argument parsing
- Allow "all" as valid section (same as "default" or zero args currently)
- Move strcasecmp to end of evaluation chain in conditionals
Also, since we're C99, I moved some variable declarations closer
to where they are actually used (this saves us from needing to free an
empty info string if we detect argument errors up front).
Closes #1915 and #1966
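Illustrative behavior (output abbreviated):

127.0.0.1:6379> INFO foo
""
127.0.0.1:6379> INFO all
# Server
redis_version:...
...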
There is no standard cross-platform way of obtaining
system memory info, but I found a useful function
covering all common platforms. I removed support
for uncommon Redis platforms (windows, AIX) and left
others intact.
For more info, see:
http://nadeausoftware.com/articles/2012/09/c_c_tip_how_get_physical_memory_size_system
The system memory info is cached on startup, but some systems
may be able to change the amount of memory visible to Redis
at runtime if Redis is deployed in a VM or container.
Also see #1820
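A condensed sketch of the approach from that article (only two of the common platforms shown; the real function covers more):

    #include <stdint.h>
    #include <unistd.h>
    #if defined(__APPLE__)
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #endif

    /* Return the amount of physical RAM in bytes, or 0 if unknown. */
    size_t get_physical_memory(void) {
    #if defined(__APPLE__)
        int mib[2] = { CTL_HW, HW_MEMSIZE };
        int64_t size = 0;
        size_t len = sizeof(size);
        if (sysctl(mib, 2, &size, &len, NULL, 0) == 0) return (size_t)size;
        return 0;
    #elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
        long pages = sysconf(_SC_PHYS_PAGES);      /* Linux, Solaris, AIX */
        long page_size = sysconf(_SC_PAGESIZE);
        if (pages > 0 && page_size > 0)
            return (size_t)pages * (size_t)page_size;
        return 0;
    #else
        return 0;
    #endif
    }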
Slaves key expire is orchestrated by the master. Sometimes the master
will send the synthesized DEL to expire keys on the slave with a non
trivial delay (when the key is not accessed, only the incremental expiry
algorithm will expire it in background).
During that time, a key is logically expired, but slaves still return
the key if you GET (or whatever) it. This is a bad behavior.
However we can't simply trust the slave view of the key, since we need
the master to be able to send write commands to update the slave data
set, and DELs should only happen when the key is expired in the master
in order to ensure consistency.
However, 99.99% of the issues with this behavior arise when a client
which is not a master sends a read-only command. In this case we are
safe and can consider the key as non-existing.
This commit does a few changes in order to make this sane:
1. lookupKeyRead() is modified in order to return NULL if the above
conditions are met.
2. Calls to lookupKeyRead() in commands actually writing to the data set
are replaced with calls to lookupKeyWrite().
There are redundant checks, so for example, if in "2" something was
overlooked, we should still be safe, since anyway, when the master
writes, the behavior is to not care about what expireIfNeeded()
returns.
This commit is related to #1768, #1770, #2131.
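A sketch of the idea behind point "1" (using Redis internal types; the exact conditions in the real code differ slightly):

    /* On a slave, a logically expired key must not be visible to read-only
     * clients, even though only the master is allowed to really DEL it. */
    robj *lookupKeyRead(redisDb *db, robj *key) {
        if (expireIfNeeded(db,key) == 1 &&
            server.masterhost != NULL &&
            server.current_client != server.master)
        {
            return NULL;   /* Pretend the key does not exist. */
        }
        return lookupKey(db,key);
    }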
I guess the initial goal of the initialization was to suppress a GCC
warning, but if we have to initialize, we can do it with the base-case
value instead of NULL, which is never retained.
In the case where all the chars of the string s are found in 'cset',
the second loop (line 573) no longer repeats the work the first loop
(line 572) already did.
This is faster, especially when the string s is very long and all of
its chars are found in 'cset'.
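Conceptually the trimming loops now look like this (a fragment of the sdstrim() logic; the key point is the 'ep > sp' bound, which keeps the backward scan from re-visiting characters the forward scan already consumed):

    char *sp = s;                      /* start pointer */
    char *ep = s + sdslen(s) - 1;      /* end pointer */
    while (sp <= ep && strchr(cset, *sp)) sp++;
    while (ep >  sp && strchr(cset, *ep)) ep--;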
Track bandwidth used by clients and replication (but diskless
replication is not tracked since the actual transfer happens in the
child process).
This includes a refactoring that makes tracking new instantaneous
metrics simpler.
PFCOUNT is technically speaking a write command, since the cached value
of the HLL is exposed in the data structure (design error, mea culpa), and
can be modified by PFCOUNT.
However if we flag PFCOUNT as "w", read-only slaves can't execute the
command, which is a problem since there are environments where slaves
are used to scale PFCOUNT reads.
Nor is it possible to just prevent PFCOUNT from modifying the data
structure in slaves, since without the cache we lose too much efficiency.
So while this commit allows slaves to create a temporary inconsistency
(the strings representing the HLLs in the master and slave can be
different in certain moments) it is actually harmless.
In the long run this should be probably fixed by turning the HLL into a
more opaque representation, for example by storing the cached value in
the part of the string which is not exposed (this should be possible
with SDS strings).
The bulk_data field size was not removed from the count. It is not
possible to declare it simply as 'char bulk_data[]' since the structure
is nested inside another structure.
Because of (not so) recent Redis changes, now the LRU internally
reported unit is milliseconds, not seconds, but the DEBUG OBJECT output
was still claiming seconds while providing milliseconds.
However OBJECT IDLETIME was working as expected, which is the correct
API to use.
zmalloc(0) actually triggered a non-zero allocation, since with
standard libc malloc we have our own zmalloc header for memory tracking,
but at the same time the returned pointer is at the end of the block and
not in the middle. This triggered a false positive when testing with
Valgrind.
When the inline protocol args count is 0, we now avoid reallocating
c->argv, preventing the issue from happening.
Issue: #2157
As the SET command is parsed, it remembers which options have already
been set, and if a duplicate option is found it raises an error, because
that is essentially invalid syntax.
It still allows mutually exclusive options like EX and PX because taking
an option over another (precedence) is not essentially a syntactic
error.
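Illustrative behavior under the new parsing (the error is the generic syntax error):

127.0.0.1:6379> SET key val XX XX
(error) ERR syntax error
127.0.0.1:6379> SET key val EX 10 PX 20000
OK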
Sentinel queries the INFO from every master and from every replica of
every master.
We can cache the INFO results in Sentinel so Sentinel can be a single
place to quickly get all INFO output for an entire Sentinel monitoring
group.
This commit gives us SENTINEL INFO-CACHE in two forms:
- SENTINEL INFO-CACHE — returns all masters and all replicas
- SENTINEL INFO-CACHE master0 master1 ... masterN — vararg specify masters
Results are returned as a multibulk reply with two top-level entries
for each master. The first entry for each master is the name of the master.
The second entry is a nested multibulk reply with the contents of INFO,
first for the master, then an additional entry for each of the
replicas.
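The reply shape described above looks roughly like this (INFO payloads elided for brevity):

127.0.0.1:26379> SENTINEL INFO-CACHE mymaster
1) "mymaster"
2) 1) "# Server\r\nredis_version:...\r\n..."    (INFO from the master)
   2) "# Server\r\nredis_version:...\r\n..."    (INFO from a replica)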
RDB EOF detection was relying on the final part of the RDB transfer
being a magic 40-byte EOF marker. However, as the slave is put online
immediately, and because of socket timeouts, the replication stream is
actually contiguous with the RDB file.
This means that to detect the EOF correctly we should either:
1) Scan all the stream searching for the mark. Sucks CPU-wise.
2) Start to send the replication stream only after an acknowledge.
3) Implement a proper chunked encoding.
For now solution "2" was picked, so the master does not start sending
the stream of commands ASAP in the case of diskless replication. We wait
for the first REPLCONF ACK command from the slave, which certifies
that the slave correctly loaded the RDB file and is ready to receive
more data.
Both upstart and systemd provide a way for daemons to
be supervised, as well as a mechanism for them to
signal their readiness status.
This patch provides compatibility with this functionality while
not interfering with other methods.
With this, it will be possible to use `expect stop` with upstart
and `Type=notify` with systemd.
A more detailed explanation of the mechanism can be found here:
http://spootnik.org/entries/2014/11/09_pid-tracking-in-modern-init-systems.html
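For the systemd side, readiness boils down to writing "READY=1" to the datagram socket named by the NOTIFY_SOCKET environment variable. A minimal sketch (not the exact patch; abstract socket names starting with '@' are ignored here for brevity):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Tell systemd we are ready to serve requests (no-op if unsupervised). */
    void notify_systemd_ready(void) {
        const char *path = getenv("NOTIFY_SOCKET");
        if (path == NULL) return;
        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd == -1) return;
        struct sockaddr_un sa;
        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        const char *msg = "READY=1";
        sendto(fd, msg, strlen(msg), 0, (struct sockaddr*)&sa, sizeof(sa));
        close(fd);
    }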
If Redis works in cluster mode and redis-cli was run with command
arguments (argv mode), reconnect when needed.
Example:
./redis-cli set foo bar
Previously, if the reply was MOVED, redis-cli just did nothing.
Same as the original bind fixes (we just missed these the
first time around).
This helps Redis not automatically send
connections from the first IP on an interface if we are bound
to a specific IP address (e.g. with multiple IP aliases on one
interface, you want to send from _your_ IP, not from the first IP
on the interface).
People mostly use SORT against lists, but our prior
behavior was pretending lists were an unordered bag
requiring a forced-sort when no sort was requested.
We can just use the native list ordering to ensure
consistency across replication and scripting calls.
Closes #2079 and #545 (again)
This caused BGSAVE to be triggered a second time without any need when
we switch from socket to disk target via the command
CONFIG SET repl-diskless-sync no
and there is already a slave waiting for the BGSAVE to start.
Comments were also clarified about what is happening.
EWOULDBLOCK with the fdset rio target is returned when we try to write
but the send timeout socket option triggered an error. Better to
translate the error in something the user can actually recognize as a
timeout.
We need to avoid that a child -> slaves transfer can continue forever.
We use the same timeout used as global replication timeout, which is
documented to also affect I/O operations during bulk transfers.
Performing a socket write() for each RDB rio API write call was
extremely inefficient, so now rio has minimal buffering capabilities.
Writes are accumulated into a buffer, and only when a given limit is
reached are they actually written to the N slaves' FDs.
Trivia: rio lacked support for buffering since our targets were:
1) Memory buffers.
2) C standard I/O.
Both were buffered already.
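Conceptually the buffering works like this (a self-contained sketch with illustrative names, not the actual rio code):

    #include <string.h>

    typedef struct {
        char   *buf;           /* accumulation buffer */
        size_t  pos;           /* bytes currently buffered */
        size_t  max_buffered;  /* flush threshold */
    } riobuf;

    int flush_to_fds(riobuf *r);   /* writes r->buf to all the slave FDs */

    /* Accumulate writes; hit the sockets only when the threshold is reached. */
    int buffered_write(riobuf *r, const void *p, size_t len) {
        memcpy(r->buf + r->pos, p, len);   /* assumes 'buf' is large enough */
        r->pos += len;
        if (r->pos >= r->max_buffered) return flush_to_fds(r);
        return 1;
    }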
This is useful for normal replication in order to refresh the slave
when we are persisting on disk, but for diskless replication the
child is already receiving data while in WAIT_BGSAVE_END state.
If we turn from diskless to disk-based replication via CONFIG SET, we
need a way to start a BGSAVE if there are slaves already waiting for a
BGSAVE to start. Normally with disk-based replication we do it as soon
as the previous child exits, but when there is a configuration change
via CONFIG SET, we may have slaves in WAIT_BGSAVE_START state without
an RDB background process currently active.
The fdset target is used when we want to write an RDB file directly to
the slaves' sockets. In this setup, as long as there is a single slave
that is still receiving our payload, we want to continue sending instead
of aborting. However, rio calls should abort if no FD is ok.
We also want the errors reported so that we can signal the parent who is
ok and who is broken, so there is a new set of integers with the state of
each fd. Zero is ok, non-zero is the errno of the failure, if available,
or a generic EIO.
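The per-FD bookkeeping amounts to something like this (an illustrative layout, not the exact rio structure):

    /* Multi-FD write target: keep writing while at least one FD is still
     * healthy; mark the broken ones instead of aborting the transfer. */
    struct fdset_target {
        int *fds;      /* destination slave sockets */
        int *state;    /* 0 = ok; otherwise errno of the failure, or EIO */
        int  numfds;
    };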
A few people have written custom C commands because bit
manipulation isn't exposed through Lua. Let's give
them Mike Pall's bitop.
This adds bitop 1.0.2 (2012-05-08) from http://bitop.luajit.org/
bitop is imported as "bit" into the global namespace.
New Lua commands: bit.tobit, bit.tohex, bit.bnot, bit.band, bit.bor, bit.bxor,
bit.lshift, bit.rshift, bit.arshift, bit.rol, bit.ror, bit.bswap
Verification that it works (the asserts would abort on error, so (nil) is correct):
127.0.0.1:6379> eval "assert(bit.tobit(1) == 1); assert(bit.band(1) == 1); assert(bit.bxor(1,2) == 3); assert(bit.bor(1,2,4,8,16,32,64,128) == 255)" 0
(nil)
127.0.0.1:6379> eval 'assert(0x7fffffff == 2147483647, "broken hex literals"); assert(0xffffffff == -1 or 0xffffffff == 2^32-1, "broken hex literals"); assert(tostring(-1) == "-1", "broken tostring()"); assert(tostring(0xffffffff) == "-1" or tostring(0xffffffff) == "4294967295", "broken tostring()")' 0
(nil)
Tests also integrated into the scripting tests and can be run with:
./runtest --single unit/scripting
Tests are excerpted from `bittest.lua` included in the bitop distribution.
With the exception of nodes sending MEET packets: we have to trust them,
since they can send us MEET packets only when the cluster is initially
created or because of manual sysadmin action.
In the cluster evaluation function we are supposed to set the cluster
state to "fail" if we are among a minority; however, the code was not
detecting that we are in a minority partition when exactly half the
masters were reachable, which is not a majority.
We need to remember what is the saving strategy of the current RDB child
process, since the configuration may be modified at runtime via CONFIG
SET, and we'll still need to understand, when the child exits, what to
do and for what goal the process was initiated: to create an RDB file
on disk or to write directly to the slaves' sockets.
However we don't try to do this if the integer is already inside a range
representable with a shared integer.
The performance gain appears to be around ~15% in micro benchmarks;
however, in the long run this also helps to improve locality, so it
should have further, hard to measure, benefits.
Some language in the comment was difficult
to understand, so this commit: clarifies wording, removes
unnecessary words, and relocates some dependent clauses
closer to what they actually describe.
I also tried to break up longer chains of thought
(if X, then Y, and Q, and also F, so obviously M)
into more manageable chunks for ease of understanding.
- Remove trailing newlines from redis.conf
- Fix comment misspelling
- Clarifies zipEncodeLength usage and a C API mention (#1243, #1242)
- Fix cluster typos (inspired by @papanikge #1507)
- Fix rewite -> rewrite in a few places (inspired by #682)
Closes #1243, #1242, #1507
The old DEBUG POPULATE form for automatic creation of test keys is:
DEBUG POPULATE <count>
Now an additional form is available:
DEBUG POPULATE <count> <prefix>
When prefix is not specified, it defaults to "key", so the keys are
named incrementally from key:0 to key:<count-1>. Otherwise the specified
prefix is used instead of "key".
The command is useful in order to populate different Redis instances
with key names guaranteed not to collide. There are other debugging
uses; for example, it is possible to add N additional keys using a count
of N and a random prefix at every call.
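For example (KEYS ordering is illustrative):

127.0.0.1:6379> DEBUG POPULATE 3
OK
127.0.0.1:6379> DEBUG POPULATE 2 item
OK
127.0.0.1:6379> KEYS *
1) "key:0"
2) "key:1"
3) "key:2"
4) "item:0"
5) "item:1"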
Following the CLIENT LIST output format, we prefix the unix socket
address with a "/" so that it is distinguishable from an IPv4/6 address.
This makes parsing simpler.
Related to #2010.
This fixes a potential bug that was never observed in practice since
what happens is that the asynchronous connect returns ok (to fail later,
calling the handler) every time, so a ping is queued, and sent_ping
happens to always be populated.
However, technically connect(2) with a non-blocking socket may return an
error synchronously, so before this fix the code was not correct.
It is not clear if files open in append only mode will automatically fix
their offset after a truncate(2) operation. This commit makes sure that
we reposition the AOF file descriptor offset at the end of the file
after a truncated AOF is loaded and trimmed to the last valid command.
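The repositioning itself is essentially a one-liner (sketch):

    #include <unistd.h>

    /* After loading and trimming a truncated AOF, make sure future appends
     * go to the real end of the file, not to a stale offset. */
    int reposition_aof_fd(int aof_fd) {
        return lseek(aof_fd, 0, SEEK_END) == -1 ? -1 : 0;
    }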
Recently we introduced the ability to load truncated AOFs, but
unfortunately the support was broken since the server, after loading the
truncated AOF, continues appending to the file that is corrupted at the
end. The problem is fixed only in the next AOF rewrite.
This commit fixes the issue by truncating the AOF to the last valid
opcode, and aborting if it is not possible to truncate the file
correctly.
The code to check the number of voters was never updated to follow the new
Sentinel specification, so the number of voters was computed using only
the set of Sentinels that provided a vote.
This means that there is a changing majority on partitions, even if
usually the issue is not triggered because of the configured quorum
check (what was broken was the other implicit check that requires anyway
half of the known sentinels to agree in order to start a failover).
Because of the new ability to start with a truncated AOF, we need
to correctly release all the memory on EOF error. Otherwise there is a
small leak that is not really a problem, but it causes a false positive
in the tests that detect memory leaks.
The original implementation was modified in order to allow selectively
announcing a different IP or port, and to rewrite the two options in the
config file after a rewrite.
Previously, GETRANGE of a key containing nothing ("")
would allocate a large (size_t)-1 return value causing
crashes on 32bit builds when it tried to allocate the
4 GB return string.
32 bit builds don't have a big enough long to capture
the same range as a 64 bit build. If we use "long long"
we get proper size limits everywhere.
Also updates size of unsigned comparison to fit new size of `end`.
Fixes #1981
This allows supporting datasets with more than 2 billion keys
(possible in very large memory instances; this bug was actually
reported).
Closes issue #1814.
This just deletes old code that didn't get removed when
logic changed. We were setting offsets that never
got read anywhere.
Since clients are now just cloned, we don't need to track
per-client buffer offsets anywhere because they are all
the same from the original client.
This commit adds a size check after initial config
line parsing to make sure we have *at least* 8 arguments
per line.
Also, instead of asserting for cluster->myself, we just test
and error out normally (since the error does a hard exit anyway).
Closes #1597
For some Solaris flavours, inet_aton() lives in the resolv library, and
not linking this library will introduce a link error.
Improves compatibility with older Solaris and still
works on new Solaris.
Closes #1092
The function was also modified in order to be more standalone and
produce an output without trailing spaces, making reuse simpler.
The global variable was renamed in camel case like most other Redis
globals, except the main ones we refer to many times, like 'server'.
Found by The Mayhem Team (Alexandre Rebert, Thanassis Avgerinos,
Sang Kil Cha, David Brumley, Manuel Egele) Cylab, Carnegie Mellon
University. See http://bugs.debian.org/716259 for more.
Signed-off-by: Chris Lamb <lamby@debian.org>
Fixes #1191
Previously, "MOVE key somestring" would move the key to
DB 0 which is just unexpected and wrong.
String as DB == error.
Test added too.
Modified by @antirez in order to use the getLongLongFromObject() API
instead of strtol().
Fixes #1428
Also adds test for numsub — due to tcl being tcl,
it doesn't capture the "numberness" of the fix,
but now we at least have one test case for numsub.
Closes #1561
Modified by @antirez since the original fix to genInfoString() looked
weak. Probably the clang analyzer complained about `section` being
possibly NULL, and strcasecmp() being called with a NULL pointer. In
practice this can never happen; still, for the sake of correctness,
the right fix is not to modify only the first call, but to set `section`
to the value of "default" if it happens to be NULL.
Closes #1660
We only want to use the last STORE key, but we have to record
we actually found a STORE key so we can increment the final return
key count.
Test added to prevent further regression.
Closes #1883, #1645, #1647
Previously the end was cast to a smaller type,
which resulted in a wrong check that failed
with values larger than the unsigned type could handle.
Closes #1847, #1844
Cluster leaks memory while connecting due to missing freeaddrinfo()
(Commit modified by @antirez: the freeaddrinfo() call was misplaced, so
in case no address was bound, the memory leak was still there.)
Closes #1801
Previously redis-cli would happily show "-1" or "99999"
as valid DB choices.
Now, if the SELECT call returned an error, we don't update
the DB number in the CLI.
Inspired by @anupshendkar in #1313. Fixes #566, #1313
Previously, if you did SELECT then AUTH, redis-cli
would show your SELECT'd db even though it didn't
happen.
Note: running into this situation is (hopefully) a very limited
use case of people using multiple DBs and AUTH all at the same
time.
Fixes antirez#1639
Replica migration algorithm modified so that slaves never try to migrate
to masters that were never configured to have slaves in the past.
We want the algorithm to take care of masters that remained without
*working* slaves, but that used to have slaves according to the cluster
configuration.
Based on ideas documented in this blog post:
https://www.facebook.com/notes/facebook-engineering/three-optimization-tips-for-c/10151361643253920
The original code was modified to handle signed integers, reformatted to
fit inside the Redis code base, and stress-tested with a program
in order to validate the implementation against snprintf().
Redis was measured to be faster from the point of view of clients in
real-world operations because of this change, since number to string
conversion is used extensively (for example, every time a GET results
in an integer-encoded object being returned to the user).
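The core trick is converting two digits per division using a 200-byte lookup table, writing backwards into a small buffer. A simplified unsigned-only sketch (the Redis version also handles signed values):

    #include <stdint.h>
    #include <string.h>

    static const char digitpairs[] =
        "00010203040506070809101112131415161718192021222324"
        "25262728293031323334353637383940414243444546474849"
        "50515253545556575859606162636465666768697071727374"
        "75767778798081828384858687888990919293949596979899";

    /* Convert an unsigned 64-bit value to a string; returns the length. */
    int u2string(char *dst, uint64_t v) {
        char buf[20], *p = buf + sizeof(buf);
        while (v >= 100) {
            const char *d = digitpairs + (v % 100) * 2;
            *--p = d[1]; *--p = d[0];
            v /= 100;
        }
        if (v < 10) {
            *--p = '0' + v;
        } else {
            const char *d = digitpairs + v * 2;
            *--p = d[1]; *--p = d[0];
        }
        int len = (int)(buf + sizeof(buf) - p);
        memcpy(dst, p, len);
        dst[len] = '\0';
        return len;
    }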
This is just a quick fix; given the nature of the test, the right way to
fix it is to average the error of N runs, since otherwise it is always
possible to get a false positive with a bad run. On the other hand,
minimizing this possibility too much could mean testing with overly
large error ranges.
The user @kjmph provided excellent ideas to improve the speed of ZUNIONSTORE
(in certain cases by many orders of magnitude), together with an
implementation of the ideas.
While the ideas were sound, the implementation could be improved both
in terms of speed and clarity, so this is my attempt at reimplementing
the proposed speedup, trying to improve it by directly using just a
dictionary with an embedded score inside, and reusing the single-pass
aggregate + order-later approach.
Note that you can't apply this commit without applying the previous
commit in this branch that adds a double in the dictEntry value union.
Issue #1786.
For non-empty masters, CLUSTER RESET is denied, and the user is required
to start the reset of a node by explicitly clearing it with FLUSHALL.
However CLUSTER RESET, when executed on slaves, doesn't have this
restriction, since the data is just a replica of the master, and with
read-only slaves it is also not possible to remove the data set. However
the node was turned from slave to master after a reset, without touching
the old slave data. This is 99.99% of the time not appropriate and forces
full resets to follow this path in order to work with both slave and
master nodes:
FLUSHALL
CLUSTER RESET HARD
FLUSHALL
Since we need the first flushall for masters, and the second for slaves.
This commit changes the behavior so that CLUSTER RESET removes the data set
of a slave node during a reset, at the moment it gets turned into a master,
so the new pattern is simply:
FLUSHALL (that may fail for slaves)
CLUSTER RESET
The introduction of the --from, --to, --slots and --yes options allows
resharding from the CLI in an automated way from scripts. The code is
ugly and needs refactoring as soon as we get it into an RC / stable release.
In order to make sure every object has its own private LRU counter, when
maxmemory is enabled tryObjectEncoding() does not use the pool of shared
integers. However when the policy is not LRU-based, it does not make
sense to do so, and it is much better to save memory using shared
integers.
PING can now be called with an additional argument, behaving exactly
like the ECHO command. PING can now also be called in Pub/Sub mode (with
one or more subscriptions to channels / patterns) in order to trigger
the delivery of an asynchronous pong message with the optional payload.
This fixes issue #420.
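Illustrative usage:

127.0.0.1:6379> PING
PONG
127.0.0.1:6379> PING "hello world"
"hello world"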
The code tested many times if a client had active Pub/Sub subscriptions
by checking the length of a list and dictionary where the patterns and
channels are stored. This was substituted with a client flag called
REDIS_PUBSUB that is simpler to test for. Moreover in order to manage
this flag some code was refactored.
This commit is believed to have no effects in the behavior of the
server.
Previously, the command definition for the OBJECT command specified
a minimum of two args (and that it was variadic), which meant that
if you sent this:
OBJECT foo
When cluster was enabled, it would result in an assertion/SEGFAULT
when Redis was attempting to extract keys.
It appears that OBJECT is not variadic, and only ever takes 3 args.
https://gist.github.com/michael-grunder/25960ce1508396d0d36a