A few people have written custom C commands because bit
manipulation isn't exposed through Lua. Let's give
them Mike Pall's bitop.
This adds bitop 1.0.2 (2012-05-08) from http://bitop.luajit.org/
bitop is imported as "bit" into the global namespace.
New Lua commands: bit.tobit, bit.tohex, bit.bnot, bit.band, bit.bor, bit.bxor,
bit.lshift, bit.rshift, bit.arshift, bit.rol, bit.ror, bit.bswap
Verification that it works (the asserts would abort on error, so a (nil) reply is correct):
127.0.0.1:6379> eval "assert(bit.tobit(1) == 1); assert(bit.band(1) == 1); assert(bit.bxor(1,2) == 3); assert(bit.bor(1,2,4,8,16,32,64,128) == 255)" 0
(nil)
127.0.0.1:6379> eval 'assert(0x7fffffff == 2147483647, "broken hex literals"); assert(0xffffffff == -1 or 0xffffffff == 2^32-1, "broken hex literals"); assert(tostring(-1) == "-1", "broken tostring()"); assert(tostring(0xffffffff) == "-1" or tostring(0xffffffff) == "4294967295", "broken tostring()")' 0
(nil)
Tests also integrated into the scripting tests and can be run with:
./runtest --single unit/scripting
Tests are excerpted from `bittest.lua` included in the bitop distribution.
When the aof-load-truncated option was introduced with a default of "yes",
the server's previous behavior of aborting on a truncated AOF changed, so
we need to explicitly configure the tests to abort on a truncated AOF
by setting the option to no.
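For reference, the directive involved looks like this in redis.conf:
aof-load-truncated no
and, assuming the option is runtime-modifiable, it can also be set on a
running instance:
127.0.0.1:6379> CONFIG SET aof-load-truncated no
OK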
Previously, "MOVE key somestring" would move the key to
DB 0 which is just unexpected and wrong.
String as DB == error.
Test added too.
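An illustrative session after the fix (the exact error text may differ):
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> MOVE foo somestring
(error) ERR index out of range
127.0.0.1:6379> MOVE foo 1
(integer) 1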
Modified by @antirez in order to use the getLongLongFromObject() API
instead of strtol().
Fixes #1428
Also adds a test for PUBSUB NUMSUB; due to Tcl being Tcl,
it doesn't capture the "numberness" of the fix,
but now we at least have one test case for NUMSUB.
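For reference, this is the shape of the PUBSUB NUMSUB reply (channel
names are arbitrary); after the fix the counts are integers:
127.0.0.1:6379> PUBSUB NUMSUB news weather
1) "news"
2) (integer) 0
3) "weather"
4) (integer) 0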
Closes #1561
We only want to use the last STORE key, but we have to record that
we actually found a STORE key so we can increment the final return
key count.
Test added to prevent further regression.
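A minimal sketch of the intended behavior, assuming only the last STORE
destination is used as described above:
127.0.0.1:6379> RPUSH mylist 3 1 2
(integer) 3
127.0.0.1:6379> SORT mylist STORE dst1 STORE dst2
(integer) 3
127.0.0.1:6379> LRANGE dst2 0 -1
1) "1"
2) "2"
3) "3"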
Closes #1883, #1645, #1647
Previously the end was cast to a smaller type,
which resulted in a wrong check and failures
with values larger than the unsigned type can handle.
Closes #1847, #1844
In the test we use WAIT when the master and slave are up, and only later the
partition is created killing the master, so we are sure we don't run
into failure modes that may lose writes in this test: the goal here is to
make sure that the elected slave was replicating correctly with the
master.
In the initialization test for each instance we used to unregister the
old master and register it again to clear the config.
However there is a race condition doing this: as soon as we unregister
and re-register "mymaster", another Sentinel can update the new
configuration with the old state because of gossip "hello" messages.
So the correct procedure is instead to first unregister "mymaster" from
all the Sentinel instances, and then re-register it everywhere again.
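Sketched with the standard Sentinel commands (address and quorum are
illustrative), this means running on every Sentinel first:
127.0.0.1:26379> SENTINEL REMOVE mymaster
OK
and only then, again on every Sentinel:
127.0.0.1:26379> SENTINEL MONITOR mymaster 127.0.0.1 6379 2
OK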
Lua scripts are executed in the context of the currently selected
database (as selected by the caller of the script).
However Lua scripts are also free to use the SELECT command in order to
affect other DBs. When SELECT is called from Lua, the old behavior, before
this commit, was to automatically set the Lua caller's selected DB to the
last DB selected by Lua. See for example the following sequence of
commands:
SELECT 0
SET x 10
EVAL "redis.call('select','1')" 0
SET x 20
Before this commit, after the execution of this sequence of commands
we would have x=10 in DB 0 and x=20 in DB 1.
Because of the problem above, there was a bug affecting replication of
Lua scripts, because of the actual implementation of replication. It was
possible to fix the implementation of Lua scripts in order to fix the
issue, but looking closely, the bug is the consequence of Lua's ability
to set the caller's DB.
Under the old semantics, a script selecting a different DB had no simple
way to restore the state and select back the previously selected DB.
Moreover the script author had to remember that the restore was needed,
otherwise the new commands executed by the caller would be executed in
the context of a different DB.
So this commit fixes both the replication issue, and this hard-to-use
semantics, by removing the ability of Lua, after the script execution,
to force the caller to switch to the DB selected by the Lua script.
The new behavior of the previous sequence of commands is to just set
x=20 in DB 0. However Lua scripts are still capable of writing / reading
from different DBs if needed.
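An illustrative session under the new semantics (the caller's selected
DB is no longer affected by the script):
127.0.0.1:6379> SELECT 0
OK
127.0.0.1:6379> SET x 10
OK
127.0.0.1:6379> EVAL "redis.call('select','1')" 0
(nil)
127.0.0.1:6379> SET x 20
OK
127.0.0.1:6379> GET x
"20"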
WARNING: This is a semantic change that will break programs that are
designed to change the client's selected DB via Lua scripts.
This fixes issue #1811.
The new check-for-number behavior of Lua arguments broke
users who use large, purely numeric strings.
The Lua number check would convert the string to a number, but
that breaks user data because
Lua numbers have limited precision compared to an arbitrarily
precise number wrapped in a string.
Regression fixed and new test added.
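An illustrative check (the digits are arbitrary; the point is that the
value round-trips unchanged instead of being coerced to a Lua number):
127.0.0.1:6379> EVAL "redis.call('set',KEYS[1],ARGV[1]); return redis.call('get',KEYS[1])" 1 k 17086734113414053773
"17086734113414053773"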
Fixes #1118 again.
FLUSHALL will fail on read-only slaves, but there the command is not
needed in order to reset the instance with CLUSTER RESET so errors can
be ignored.
Previously the PID format was:
[PID] Timestamp
But it recently changed to:
PID:X Timestamp
The tcl testing framework was grabbing the PID from \[\d+\], but
that's not valid anymore.
Now we grab the pid from "PID: <PID>" in the part of Redis startup
output to the right of the ASCII logo.
The bug was triggered by running the test with Valgrind (which is a lot
slower and more sensitive to timing issues) after the recent changes
that made Redis more promptly able to reply with the -LOADING error.
Behrad Zari discovered [1] and Josiah reported [2]: if you block
and wait for a list to exist, but the list is created by
a non-push command, the blocked client never gets notified.
This commit moves notification of blocked clients into
the DB layer and away from individual commands.
Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE,
and RESTORE. Previously, blocked client notifications were
only triggered by [LR]PUSH. Your client would never get
notified if a list were created by SORT..STORE or RENAME or
a RESTORE, etc.
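For example, the RENAME case now unblocks a waiting client
(illustrative two-client session):
Client A:
127.0.0.1:6379> BLPOP target 0
(blocked)
Client B:
127.0.0.1:6379> RPUSH src hello
(integer) 1
127.0.0.1:6379> RENAME src target
OK
Client A is unblocked and receives:
1) "target"
2) "hello"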
Blocked client notification now happens in one unified place:
- dbAdd() triggers notification when adding a list to the DB
Two new tests are added that fail prior to this commit.
All tests pass.
Fixes #1668
[1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU
[2]: #1668
Better handling of connection errors, in order to update the table and
recover; also populate the startup nodes table after fetching the list of
nodes.
More work remains to be done: it is still not as reliable as the
redis-rb-cluster implementation, which is the minimal reference
implementation for Redis Cluster clients.
SPOP, tested in the new test, is among the commands rewriting the
client->argv argument vector (it gets rewritten as SREM) for command
replication purposes.
Because of recent optimizations to client->argv caching in the context
of the Lua internal Redis client, it is important to test that SPOP is
callable from Lua without bad effects on the other commands.
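A minimal sanity check along these lines (the popped member is random):
127.0.0.1:6379> SADD myset a b c
(integer) 3
127.0.0.1:6379> EVAL "return redis.call('spop', KEYS[1])" 1 myset
"b"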
Sometimes the process is still there but no longer in a state that can
be checked (after being killed). This used to happen after a call to
SHUTDOWN NOSAVE in the scripting unit, causing a false positive.
This makes tests a bit slower, but it is better to test things at a
decent scale instead of using just a few nodes, and for a few tests we
actually need that many nodes.
The test now runs in a self-contained directory.
The general abstractions needed to run the tests in an environment where
multiple instances are executed at the same time were extracted into
instances.tcl, which will be reused to test Redis Cluster.
This commit sets the failover timeout to 30 seconds instead of the 180
seconds default, and allows reconfiguring multiple slaves at the same
time.
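In Sentinel configuration terms this corresponds to directives like the
following (master name and parallel-syncs value are illustrative; the
failover timeout is in milliseconds):
sentinel failover-timeout mymaster 30000
sentinel parallel-syncs mymaster 2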
This makes tests less sensitive to timing, with the result that there are
fewer false positives due to normal behaviors that require time to
succeed or to be retried.
However the long term solution is probably some way to detect
when a test failed because of timing issues (for example a split brain
during leader election) and retry it.
It was verified in practice that this test is able to stress the
implementation much more, exposing errors that are trivial to detect
with different offsets but impossible to detect when always starting
at zero and counting bits over the full length of the string.
A unit can abort in the middle because of an error. The next unit should
not assume that the instances are in a clean state, and must restart
whatever was left killed.
Sentinel tests are designed to be dependent on the previous tests in the
same unit, so usually we can't continue with the next test in the same
unit if a previous test failed.
The test was previously performed by removing the master from the
Sentinel monitored masters. The test with the Sentinels crashed is
more similar to real-world partitions / failures.
There are a number of mandatory tests that create a stable setup for
testing that is not too sensitive to timing issues. All those tests were
moved to includes/init-tests, and marked as (init).
It is now possible to kill and restart sentinel or redis instances for
more real-world testing.
The 01 unit tests the capability of Sentinel to update the configuration
of Sentinels rejoining the cluster, however the test is pretty trivial
and more tests should be added.
Some inline test code was moved into the server_is_up procedure.
Also find_available_port was moved into util since it is going
to be used for the Sentinel test as well.
The Redis test uses a server-clients model in order to parallelize the
execution of different tests. However in recent versions of osx, not
setting the channel to a binary encoding caused issues even if AFAIK no
binary data is really sent via this channel. Now the channels
are deliberately set to a binary encoding and this solves the issue.
The exact issue was the test not terminating and giving the impression
of running forever, since test clients or servers were unable to
exchange the messages to continue.
When the test is executed using the root account, setting the permission
to 222 does not work as expected, as root can read files with 222
permission.
Now we skip the test if root is detected.
This fixes issue #1034 and the duplicate issue #1040.
Thanks to Jan-Erik Rediger (@badboy on Github) for finding a way to reproduce the issue.
When keyspace events are enabled, the overhead is not severe but
noticeable, so this commit introduces the ability to select subclasses
of events in order to avoid generating events the user is not
interested in.
The events can be selected using redis.conf or CONFIG SET / GET.
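For example, to receive only key-event notifications for expired keys
(the "Ex" flags select the keyevent class plus expired events), either in
redis.conf:
notify-keyspace-events "Ex"
or at runtime:
127.0.0.1:6379> CONFIG SET notify-keyspace-events "Ex"
OK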
UNSUBSCRIBE and PUNSUBSCRIBE commands are designed to mass-unsubscribe
the client from, respectively, all the channels and patterns if called
without arguments.
However when these commands are called without arguments, but there are
no channels or patterns we are subscribed to, the old behavior was to
not reply at all.
This behavior is broken, as every command should always reply.
Also it is possible that we are no longer subscribed to any channel but we
are subscribed to patterns, or the other way around, and the client should
be notified with the correct number of subscriptions.
Also it is not pretty that sometimes we did not receive a reply at all
from these commands in a redis-cli session, leaving redis-cli blocked
trying to read the reply.
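After the fix, a client with no subscriptions gets a regular reply, for
example (reply as rendered by redis-cli; details may vary):
127.0.0.1:6379> UNSUBSCRIBE
1) "unsubscribe"
2) (nil)
3) (integer) 0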
This fixes issue #714.
The Redis Slow Log always used to log the slow commands executed inside
a MULTI/EXEC block. However also EXEC was logged at the end, which is
perfectly useless.
Now EXEC is no longer logged and a test was added to test this behavior.
This fixes issue #759.
Due to changes in recent releases of the osx leaks utility, the osx leak
detection no longer worked. Now it is fixed in a way that should be
backward compatible.
SDIFF used an algorithm that was O(N) where N is the total number
of elements of all the sets involved in the operation.
The algorithm worked as follows:
ALGORITHM 1:
1) For the first set, add all the members to an auxiliary set.
2) For all the other sets, remove all the members of the set from the
auxiliary set.
So it is an O(N) algorithm where N is the total number of elements in
all the sets involved in the diff operation.
Cristobal Viedma suggested to modify the algorithm to the following:
ALGORITHM 2:
1) Iterate all the elements of the first set.
2) For every element, check if the element also exists in all the other
remaining sets.
3) Add the element to the auxiliary set only if it does not exist in any
of the other sets.
The complexity of this algorithm in the worst case is O(N*M) where N is
the size of the first set and M the total number of sets involved in the
operation.
However when there are elements in common, with this algorithm we stop
the computation for a given element as soon as we find it duplicated
in another set.
I (antirez) added an additional step to algorithm 2 to make it faster,
that is, to sort the sets to subtract from the biggest to the
smallest, so that it is more likely to find a duplicate in the larger
sets, which are checked before the smaller ones.
WHAT IS BETTER?
Neither, of course: for instance if the first set is much larger than the
other sets, the second algorithm does a lot more work compared to the
first algorithm.
Similarly if the first set is much smaller than the other sets, the
original algorithm will do a lot more work.
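As a rough worked example of the costs involved (numbers are purely
illustrative): with a first set of 100 elements and 4 other sets of
10000 elements each, algorithm 1 touches about 100 + 4*10000 = 40100
elements, while algorithm 2 performs at most 100 * 5 = 500 membership
checks, so algorithm 2 wins; with a first set of 1000000 elements and
4 other small sets of 100 elements each, algorithm 1 touches about
1000400 elements while algorithm 2 may perform up to 1000000 * 5 =
5000000 checks, so algorithm 1 wins.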
So this commit makes Redis able to guess the number of operations
required by each algorithm, and select the best at runtime according
to the input received.
However, since the second algorithm has better constant times and can do
less work if there are duplicated elements, an advantage is given to the
second algorithm.
EVALSHA used to crash if the SHA1 was not lowercase (Issue #783).
Fixed using a case insensitive dictionary type for the sha -> script
map used for replication of scripts.
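Illustrative session (the SHA1 is a placeholder for whatever SCRIPT LOAD
returns; after the fix the upper case digest works too):
127.0.0.1:6379> SCRIPT LOAD "return 1"
"<sha1>"
127.0.0.1:6379> EVALSHA <SHA1, UPPER CASE> 0
(integer) 1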
The previous behavior was to return -1 if:
1) Existing key but without an expire set.
2) Non existing key.
Now the second case is handled in a different way, and TTL will return -2
if the key does not exist at all.
PTTL follows the same behavior as well.
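Illustrative session:
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> TTL foo
(integer) -1
127.0.0.1:6379> TTL nosuchkey
(integer) -2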
With COPY now MIGRATE does not remove the key from the source instance.
With REPLACE it uses RESTORE REPLACE on the target host so that even if
the key already exists in the target instance it will be overwritten.
The options can be used together.
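For example (host, port and key name are illustrative):
127.0.0.1:6379> MIGRATE 192.168.1.34 6380 mykey 0 5000 COPY REPLACE
OK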
The REPLACE option deletes an existing key with the same name (if any)
and materializes the new one. The default behavior of RESTORE without
REPLACE is to return an error if a key already exists.
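A sketch of the difference (the payload stands for whatever DUMP
returned for the source key; the exact error text may differ):
127.0.0.1:6379> RESTORE mykey 0 "<dump-payload>"
(error) BUSYKEY Target key name already exists.
127.0.0.1:6379> RESTORE mykey 0 "<dump-payload>" REPLACE
OK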