Commit Graph

319 Commits

Author SHA1 Message Date
Oran Agra
9ac6cb9ce4 memoryGetKeys helper function so that ACL can limit access to keys for MEMORY command 2020-02-05 09:42:49 +02:00
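With keys now extractable from MEMORY USAGE, an ACL key pattern can restrict which keys a user may inspect. A hedged sketch (the user name, password and key pattern below are hypothetical):

    ACL SETUSER telemetry on >s3cret ~metrics:* +memory|usage
    MEMORY USAGE metrics:page-hits

Under such a rule, MEMORY USAGE against a key outside ~metrics:* is rejected by the ACL layer.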
Leo Murillo
63e46e5f8d Fix bug on KEYS command where pattern starts with * followed by \x00 (null char). 2020-01-07 13:55:26 -06:00
Oran Agra
660645c742 modules don't signalModifiedKey in setKey() since that's done (optionally) in RM_CloseKey 2019-12-23 10:15:52 +02:00
antirez
5b9d3ac6c6 Avoid changing setKey() API after #6679 fix. 2019-12-18 11:58:02 +01:00
zhaozhao.zz
3c0ed0309a lazyfree: add a new configuration lazyfree-lazy-user-del
Delete keys in an async way when executing the DEL command, if
lazyfree-lazy-user-del is set to yes.
2019-12-18 16:54:49 +08:00
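A minimal way to exercise the new option, assuming the name and semantics described above (the key name is arbitrary):

    CONFIG SET lazyfree-lazy-user-del yes
    DEL somebigkey

With the option enabled, the DEL above releases the object in the background lazyfree thread rather than synchronously, mirroring what UNLINK already does.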
zhaozhao.zz
24044f3356 add a new SET option KEEPTTL that doesn't remove expire time 2019-12-18 15:20:36 +08:00
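A quick sketch of the intended behaviour (key and values arbitrary):

    SET session:1 a EX 100
    SET session:1 b KEEPTTL
    TTL session:1

Without KEEPTTL the second SET would discard the TTL; with it, TTL still reports the time remaining from the first SET.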
Salvatore Sanfilippo
6203ed0474
Merge pull request #6593 from daidaotong/movecommandfix
fix move command: WATCH/MULTI/EXEC mechanism breaks
2019-11-25 10:02:43 +01:00
antirez
ce03d68332 Rename var to fixed_time_expire now that it is more general. 2019-11-19 11:28:04 +01:00
Daniel Dai
4ece8bd7a7 fix move command 2019-11-19 00:02:45 -05:00
antirez
2f6fe5ce3a Expire cycle: introduce the new state needed for the new algo. 2019-11-14 18:28:01 +01:00
antirez
0f026af185 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2019-11-06 17:44:42 +01:00
antirez
8b2c0f9044 Update PR #6537: use a fresh time outside call().
One problem with the solution proposed so far in #6537 is that key
lookups outside of a command execution via call() still used a cached
time. The cached time needed to be refreshed in multiple places,
especially because of module callbacks from timers, the cluster bus, and
thread safe contexts, which may use RM_Open().

In order to avoid this problem, this commit introduces the ability to
detect if we are inside call(): this way we can use the reference fixed
time only when we are in the context of a command execution or Lua
script, but for the asynchronous lookups, we can still use mstime() to
get a fresh time reference.
2019-11-06 09:57:29 +01:00
antirez
824f5f0b7a Update PR #6537 patch for generality.
After the thread in #6537 and thanks to the suggestions received, this
commit updates the original patch in order to:

1. Solve the problem of updating the time in multiple places by updating
it in call().
2. Avoid introducing a new field but use our cached time.

This required some minor refactoring to the function updating the time,
and the introduction of a new cached time in microseconds in order to
use fewer gettimeofday() calls.
2019-11-05 10:14:34 +01:00
zhaozhao.zz
e542132b07 expires: refactoring judgment about whether a key is expired
Calling lookupKey*() many times to search for a key during a single
command may return different results.

That's because lookupKey*() calls expireIfNeeded(), which deletes the
key once its expire time is reached. So we can get a robj before the
expire time, but a NULL after it.

The worst case is that this may crash Redis. For example, with
`RPOPLPUSH foo foo` we first get the list stored at `foo` and hold its
pointer, but when we look up `foo` again it has expired and been deleted.
We are now holding freed memory, and Redis crashes when
rpoplpushHandlePush() executes.

To fix it, we can refactor the judgment about whether a key is expired,
using the same base time `server.cmd_start_mstime` instead of calling
mstime() every time.
2019-11-05 09:56:39 +01:00
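The hazardous pattern described above can be sketched from the client side roughly like this (timing dependent, so it is an illustration rather than a reliable reproducer):

    RPUSH foo a b c
    PEXPIRE foo 1
    RPOPLPUSH foo foo

If foo crosses its expire time between the two lookups performed while RPOPLPUSH executes, the second lookup used to delete the object the command was still holding; with a single base time per command, both lookups agree on whether the key is expired.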
Salvatore Sanfilippo
18b62a51e2
Merge branch 'unstable' into module_lru_lfu 2019-11-04 11:04:49 +01:00
Oran Agra
87332ce524 Module API for PUBLISH, FLUSHALL, RANDOMKEY, DBSIZE 2019-11-04 07:40:52 +02:00
Oran Agra
e978bdf9ef Module API for controlling LRU and LFU, and OpenKey without TOUCH
Some commands may want to open a key without touching its LRU/LFU,
similarly to what the OBJECT or DEBUG commands do.

Other commands may want to implement logic similar to what RESTORE
does (and in the future MIGRATE) and get/set the LRU or LFU.
2019-10-29 15:03:05 +02:00
antirez
ed833c9f99 Modules hooks: implement the FLUSHDB event. 2019-10-23 18:39:53 +02:00
Oran Agra
bf759cc9c3 Merge remote-tracking branch 'antirez/unstable' into jemalloc_purge_bg 2019-10-04 13:53:40 +03:00
antirez
de1f82aa33 Function renamed hasForkChild() -> hasActiveChildProcess(). 2019-09-27 12:03:09 +02:00
Salvatore Sanfilippo
6129758558
Merge branch 'unstable' into modules_fork 2019-09-27 11:24:06 +02:00
antirez
e4b3c8bbc3 emptyDbGeneric(): call signalFlushDb() before deleting the keys.
This had been broken by a refactoring I performed recently.
2019-07-30 11:20:54 +02:00
antirez
436d02dd7b Move signalFlushedDb() into a better place.
Now that the call also invalidates client side caching slots, it is
important that after an internal flush operation we both send the
notifications to the clients and, at the same time, are able to reclaim
the memory of the tracking table. This may even fix a few edge cases
related to MULTI/EXEC + WATCH during resync, not sure, but in general
looks more correct.
2019-07-22 12:37:47 +02:00
antirez
842b44dc46 Client side caching: call the invalidation functions always.
Otherwise what happens is that the tracking table will never get garbage
collected if there are no longer clients with tracking enabled.
Now the invalidation function immediately checks if there is any table
allocated, otherwise it returns ASAP, so the overhead when the feature
is not used should be near zero.
2019-07-22 12:29:54 +02:00
Oran Agra
56258c6b7d Module API for Forking
* create module API for forking child processes.
* refactor duplicate code around creating and tracking forks by AOF and RDB.
* child processes listen for SIGUSR1 and die via exitFromChild() in order to
  eliminate a valgrind warning about an unhandled signal.
* note that BGSAVE error reply has changed.

valgrind error is:
  Process terminating with default action of signal 10 (SIGUSR1)
2019-07-17 16:40:24 +03:00
zhaozhao.zz
6191ea90a1 Client side caching: implement trackingInvalidateKeysOnFlush() 2019-07-17 20:33:52 +08:00
Salvatore Sanfilippo
6100a82a3a
Merge pull request #5738 from bmerry/fix-swapdb-int-overflow
Make dbSwapDatabases take args as long
2019-07-12 12:36:33 +02:00
Oran Agra
2de544cfcc diskless replication on slave side (don't store rdb to file), plus some other related fixes
Until now, the diskless replication implementation was diskless only on the master side.
The slave side was still storing the received rdb file to disk before loading it back in and parsing it.

This commit adds two modes to load the rdb directly from the socket:
1) when-empty
2) using "swapdb"
A third mode, flushing the db before the diskless load, is risky and currently not included.

other changes:
--------------
distinguish between aof configuration and state, so that we can re-enable aof only when sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); previously, a CONFIG GET
and INFO during rdb loading would also have lied

When loading rdb from the network, don't kill the server on short read (that can be a network error)

Fix rdb check when performed on preamble AOF

tests:
run replication tests for diskless slave too
make replication test a bit more aggressive
Add test for diskless load swapdb
2019-07-08 15:37:48 +03:00
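A hedged configuration sketch for the replica side; the option names are assumed from how this feature is exposed in redis.conf and may differ slightly from the mode names in the commit message:

    repl-diskless-sync yes
    repl-diskless-load on-empty-db

The "swapdb" mode instead keeps the current dataset aside in memory during the transfer so it can be restored if the sync fails, at the cost of temporarily holding two copies.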
Salvatore Sanfilippo
722446510f
Merge pull request #6116 from AngusP/scan-types
SCAN: New Feature `SCAN cursor [TYPE type]` modifier suggested in issue #6107
2019-07-08 12:53:34 +02:00
Angus Pearson
6eb52e200c Change typeNameCanonicalize -> getObjectTypeName, and other style changes 2019-07-08 11:04:37 +01:00
antirez
46edb55de9 Client side caching: implement trackingInvalidateKey(). 2019-07-03 19:16:20 +02:00
Angus Pearson
e2adea2188 Add char* typeNameCanonicalize(robj*) to remove duplicate code between SCAN and TYPE commands,
and to keep OBJ_* enum to string canonicalization in one place.
2019-06-10 17:41:44 +01:00
Oran Agra
09f99c2a92 make redis purge jemalloc after flush, and enable background purging thread
jemalloc 5 doesn't immediately release memory back to the OS; instead there's a decay
mechanism, which doesn't work when there's no traffic (no allocations).
This is most evident when there's no traffic after flushdb: the RSS will remain high.

1) enable jemalloc background purging
2) explicitly purge in flushdb
2019-06-02 15:33:14 +03:00
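For manual experiments, the pre-existing MEMORY PURGE command asks the allocator to release dirty pages on demand; this change additionally purges after a flush and enables jemalloc's background purging. A sketch:

    FLUSHDB
    INFO memory
    MEMORY PURGE

Comparing used_memory_rss in INFO memory before and after shows whether freed pages were actually returned to the OS.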
Angus Pearson
bf963253ec Implement SCAN cursor [TYPE type] modifier suggested in issue #6107.
Add tests to check basic functionality of this optional keyword, and also tested with
a module (redisgraph). Checked quickly with valgrind, no issues.

Copies the type name canonicalisation code from `typeCommand`; perhaps this would
be better factored out to prevent the two diverging and both needing to be edited to
add new `OBJ_*` types, but this is a little fiddly with C strings.

The [redis-doc](https://github.com/antirez/redis-doc/blob/master/commands.json) repo
will need to be updated with this new arg if accepted.

A quirk to be aware of here is that the GEO commands are backed by zsets not their own
type, so they're not distinguishable from other zsets.

Additionally, for sparse types this has the same behaviour as `MATCH` in that it may
return many empty results before giving something, even for large `COUNT`s.
2019-05-22 16:39:04 +01:00
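A usage sketch; per the caveat above, GEO keys will show up as zset:

    SCAN 0 TYPE zset COUNT 100

As with MATCH, filtering happens after elements are sampled from the keyspace, so a rare type may yield many empty batches before returning keys.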
Dvir Volk
040e52c77f Renamed event name from "miss" to "keymiss" 2019-03-21 20:33:11 +02:00
Dvir Volk
99c2fe0bcf added special flag for keyspace miss notifications 2019-03-21 11:47:14 +02:00
Dvir Volk
8620a434a0 Added keyspace miss notifications support 2019-03-19 13:11:37 +02:00
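A hedged sketch of how the notifications can be consumed, assuming the class flag for miss events is "m" and the event name is the one settled on above ("keymiss"):

    CONFIG SET notify-keyspace-events Em
    SUBSCRIBE __keyevent@0__:keymiss
    GET no-such-key

The GET of a missing key then publishes a keymiss event on the keyevent channel for DB 0.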
antirez
35ffbead5c Use dictGetFairRandomKey() for RANDOMKEY as well. 2019-02-19 17:29:51 +01:00
antirez
4dc69497f5 Refactoring: always kill AOF/RDB child via helper functions. 2019-01-21 11:28:44 +01:00
Salvatore Sanfilippo
adfaf548e3
Merge branch 'unstable' into fixChildInfoPipeFdLeak 2019-01-21 11:20:56 +01:00
Salvatore Sanfilippo
9f939610f3
Merge pull request #5797 from trevor211/fixUpdateDictResizePolicy
Fix update dict resize policy
2019-01-21 11:14:48 +01:00
WuYunlong
440385de14 Fix child info pipe fd leak when child process gets killed. 2019-01-21 17:48:45 +08:00
WuYunlong
f004a3e7ff Update dict resize policy when rdb child process gets killed. 2019-01-21 17:33:18 +08:00
antirez
317f8b9d38 RESP3: most null replies converted. 2019-01-09 17:00:29 +01:00
antirez
b507654716 RESP3: Use new deferred len API in dict.c. 2019-01-09 17:00:29 +01:00
Bruce Merry
6b818efeb6 Make dbSwapDatabases take args as long
This prevents an integer overflow bug. Closes #5737.
2018-12-31 11:51:03 +02:00
lsytj0413
6aa9606995 fix a typo: craeted -> created 2018-12-06 11:08:59 +08:00
Salvatore Sanfilippo
e8c108e938
Merge pull request #5469 from soloestoy/stat-key-miss-if-expired
if we read an expired key, misses++
2018-10-24 12:31:28 +02:00
antirez
8749fc3f67 Simplify part of the #5470 patch. 2018-10-24 12:26:27 +02:00
Salvatore Sanfilippo
3c89fb5ffd
Merge pull request #5470 from soloestoy/keys-no-trigger-expire
do not delete expired keys in KEYS command
2018-10-24 12:23:59 +02:00
hujie
63e41ee192
fix typo 2018-10-20 12:18:56 +08:00
zhaozhao.zz
5ddd507624 if we read an expired key, misses++ 2018-10-19 18:16:29 +08:00
zhaozhao.zz
7ab9cba59b do not delete expired keys in KEYS command 2018-10-19 18:00:57 +08:00
zhaozhao.zz
fddeeae724 refactor dbOverwrite to make lazyfree work 2018-07-31 12:07:57 +08:00
antirez
fd174cca23 Remove useless conditional from emptyDb().
Related to #4852.
2018-07-25 16:34:57 +02:00
antirez
1c2352f0dd Make emptyDb() change introduced in #4852 simpler to read. 2018-07-25 16:32:52 +02:00
zhaozhao.zz
dd071ff6c9 optimize flushdb, avoid useless loops 2018-07-25 18:13:34 +08:00
Jack Drogon
93238575f7 Fix typo 2018-07-03 18:19:46 +02:00
zhaozhao.zz
1fcf2737a6 fix some compile warnings 2018-06-28 17:22:59 +08:00
shenlongxing
ec55df117f fix typo 2018-06-21 22:08:09 +08:00
antirez
2076660843 Streams: fix xreadGetKeys() for correctness.
The old version could not handle the fact that "STREAMS" is a valid key
name for streams. Now we really try to parse the command like the
command implementation would do.

Related to #5028 and #4857.
2018-06-18 14:06:06 +02:00
Salvatore Sanfilippo
e670ccffea
Merge pull request #4857 from youjiali1995/fix-command-getkeys
Fix core dump when using some commands with wrong arguments.
2018-06-18 13:54:38 +02:00
antirez
a0b27dae85 Streams: fix xreadGetKeys() buffer overflow.
The loop allocated a buffer for the right number of key positions, then
overflowed it by going past the limit.

Related to #4857 and cause of the memory violation seen in #5028.
2018-06-18 13:51:19 +02:00
antirez
d353023a83 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2018-06-14 18:11:04 +02:00
antirez
d6e8fe77af Fix infinite loop in dbRandomKey().
Thanks to @kevinmcgehee for signaling the issue and reasoning about the
consequences and potential fixes.

Issue #5015.
2018-06-14 18:08:21 +02:00
zhaozhao.zz
2ffa533f85 fix exists command on slave 2018-06-14 01:30:07 +08:00
antirez
e534e9aa83 In scanDatabaseForReadyLists() now we need to handle ZSETs as well.
The introduction of ZPOP makes this necessary. Thanks to @oranagra
for reporting.
2018-06-12 17:28:40 +02:00
赵磊
10dedc2586 Fix core dump when using 'command getkeys' with wrong arguments. 2018-06-04 15:14:50 +08:00
Itamar Haber
438125b47c Implements [B]Z[REV]POP and the respective unit tests
An implementation of the
[Ze POP Redis Module](https://github.com/itamarhaber/zpop) as core
Redis commands.

Fixes #1861.
2018-04-30 02:10:42 +03:00
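For reference, the variants eventually exposed are ZPOPMIN/ZPOPMAX and the blocking BZPOPMIN/BZPOPMAX; a quick sketch (key and members arbitrary):

    ZADD jobs 1 low 5 high
    ZPOPMAX jobs
    BZPOPMIN jobs 0

The blocking forms take a timeout as the last argument, with 0 meaning block indefinitely, matching the BLPOP/BRPOP convention.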
Salvatore Sanfilippo
bad679137e
Merge pull request #4679 from youjiali1995/fix-lru
Some commands will update lfu-counter many times in one call.
2018-03-22 16:26:01 +01:00
antirez
92696e49d2 expireIfNeeded() needed a top comment documenting the behavior. 2018-02-27 16:44:43 +01:00
antirez
b00c4ffab5 expireIfNeeded() comment: claim -> pretend. 2018-02-27 16:37:37 +01:00
赵磊
aacecbc997 Remove updateLFU() in dbOverwrite(). 2018-02-11 21:02:07 +08:00
antirez
2f8476df91 Fix getKeysUsingCommandTable() in the case of negative arity.
This fixes a crash with Redis Cluster when OBJECT is mis-used, because
getKeysUsingCommandTable() will call serverPanic() detecting we are
accessing an invalid argument in the case "OBJECT foo" is called.

This bug was introduced when OBJECT HELP was introduced, because the key
argument is set to the fixed index 2 in the command table; however, OBJECT
may now be called with an insufficient number of arguments to extract
the key.

The "Right Thing" would be to have a specific function to extract keys
from the OBJECT command, however this is kind of overkill, so I
preferred to make getKeysUsingCommandTable() more robust and just return
no keys when it's not possible to honor the command table, because new
commands are often added, a number of them have a HELP
subcommand violating the normal form, and crashing for this trivial
reason or having many command-specific key extraction functions is not
great.
2018-01-12 11:26:29 +01:00
antirez
671b1f6a9d Streams: fix TYPE for stream type. 2017-12-01 10:24:24 +01:00
antirez
6468cb2e82 Streams: fix XREAD ready-key signaling.
With lists we need to signal only on key creation, but streams can
provide data to clients listening at every new item added.
To make this slightly more efficient we now track different classes of
blocked clients to avoid signaling keys when there is nobody listening.
A typical case is when the stream is used as a time series DB and
accessed only by range with XRANGE.
2017-12-01 10:24:24 +01:00
antirez
a7d898334a Streams: XREAD get-key method fixed. 2017-12-01 10:24:24 +01:00
antirez
110041825c Streams: XREAD get-keys method. 2017-12-01 10:24:24 +01:00
antirez
4a377cecd8 Streams: initial work to use blocking lists logic for streams XREAD. 2017-12-01 10:24:24 +01:00
zhaozhao.zz
583c314725 LFU: do some changes about LFU to find hotkeys
Firstly, use the access time to replace the decrease time of LFU.
The function LFUDecrAndReturn should only return the decremented
counter, not update the LFU fields; we update them explicitly.
The counter is halved according to how many periods of
server.lfu_decay_time have elapsed.
Every time a key is accessed, we should update the LFU,
including the access time, and increment the counter after
calling LFUDecrAndReturn.
If a key is overwritten, the LFU should also be updated.
Then we can use the `OBJECT freq` command to get a key's frequency,
and LFUDecrAndReturn should also be called in `OBJECT freq`,
in case the key has not been accessed for a long time,
because we update the access time only when the key is read or
overwritten.
2017-11-27 18:39:22 +01:00
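To observe the counter described above (a sketch; OBJECT FREQ is only available when an LFU maxmemory policy is selected, and the key name is arbitrary):

    CONFIG SET maxmemory-policy allkeys-lfu
    SET hotkey v
    GET hotkey
    OBJECT FREQ hotkey

Since the 8-bit counter is logarithmic and increments only probabilistically, a low value for a rarely accessed key is expected.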
zhaozhao.zz
b541ccef25 PSYNC2: make persisting replication info more solid
This commit is a reinforcement of commit c1c99e9.

1. Replication information can be stored when the RDB file is
generated by a master, using server.slaveseldb when server.repl_backlog
is not NULL, or by setting repl_stream_db to -1. That's safe, because a
NULL server.repl_backlog will trigger full synchronization,
and then the master will send a SELECT command to the replication stream.
2. Only do rdbSave* when rsiptr is not NULL;
if we do rdbSave* without rdbSaveInfo, the slave will miss repl-stream-db.
3. Save the replication information also in the case of the
SAVE command, the FLUSHALL command and DEBUG reload.
2017-09-20 11:18:10 +02:00
Qu Chen
4740424049 Implement getKeys procedure for georadius and georadiusbymember
commands.
2017-06-14 18:15:48 +02:00
antirez
5877c02c51 Fix PERSIST expired key resuscitation issue #4048. 2017-06-13 10:35:51 +02:00
antirez
7d9dd80db3 Fix getKeysUsingCommandTable() in cluster mode.
Close #3940.
2017-04-19 16:17:08 +02:00
antirez
3f9e2322ec Rax library updated. 2017-04-07 08:46:39 +02:00
antirez
1409c545da Cluster: hash slots tracking using a radix tree. 2017-03-27 16:37:22 +02:00
oranagra
7aa9e6d2ae active memory defragmentation 2016-12-30 03:37:52 +02:00
antirez
04542cff92 Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE

Redis slaves are normally read only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance, imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: caching
the intersection for some time is often a good idea.

To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring keys in
Redis replication is up to the master, which synthesizes DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients, so that if the master did
not promptly send a DEL, clients still see logically expired keys
as non existing.

Because slaves don't actively expire keys by actually evicting them, but
just mask them from the POV of read operations, if a key is created in a
writable slave and an expire is set, the key will be leaked forever:

1. No DEL will be received from the master, which does not know about
such a key at all.

2. No eviction will be performed by the slave, since eviction has to stay
disabled on slaves (it's up to the master), otherwise consistency of data
is lost.

THE FIX

In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.

My solution involved using a unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.

The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).

This solution costs only a little complexity and CPU, and the cost is
actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, except for the hook to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
2016-12-13 10:59:54 +01:00
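The use case from the BACKGROUND section, sketched as commands run directly against a writable slave (key names hypothetical; the read-only switch uses the option name of that era):

    CONFIG SET slave-read-only no
    SINTERSTORE cache:1:2 set:1 set:2
    EXPIRE cache:1:2 60

Before this fix, cache:1:2 would never be reclaimed on the slave once its TTL elapsed, since no DEL ever arrives from the master; with the fix the slave records such keys in the auxiliary dictionary and expires them itself.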
antirez
2669fb8364 PSYNC2: different improvements to Redis replication.
The gist of the changes is that now, partial resynchronizations between
slaves and masters (without the need of a full resync with RDB transfer
and so forth), work in a number of cases when it was impossible
in the past. For instance:

1. When a slave is promoted to master, the slaves of the old master can
partially resynchronize with the new master.

2. Chained slaves (slaves of slaves) can be moved to replicate to other
slaves or the master itself, without requiring a full resync.

3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master, when it joins replication
again.

In order to obtain this, the following main changes were operated:

* Slaves also take a replication backlog, not just masters.

* Same stream replication for all the slaves and sub slaves. The
replication stream is identical from the top level master to its slaves
and is also the same from the slaves to their sub-slaves and so forth.
This means that if a slave is later promoted to master, it has the
same replication backlog, and can partially resynchronize with its
slaves (that were previously slaves of the old master).

* A given replication history is no longer identified by the `runid` of
a Redis node. There is instead a `replication ID` which changes every
time the instance has a new history no longer coherent with the past
one. So, for example, slaves publish the same replication history of
their master, however when they are turned into masters, they publish
a new replication ID, but still remember the old ID, so that they are
able to partially resynchronize with slaves of the old master (up to a
given offset).

* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.

* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.

* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.

* Slave pings to sub-slaves are now sent in a special form, when the
top-level master is disconnected, in order not to interfere with the
replication stream. We just use out of band "\n" bytes as in other parts
of the Redis protocol.

An old design document is available here:

https://gist.github.com/antirez/ae068f95c0d084891305

However the implementation is not identical to the description because
during the work to implement it, different changes were needed in order
to make things work well.
2016-11-09 15:37:15 +01:00
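The two IDs described above are visible at runtime; a quick way to inspect them (field names as reported by INFO replication):

    INFO replication

After a promotion the new master reports a fresh master_replid while keeping the previous history as master_replid2, together with second_repl_offset marking up to where the old ID is valid, which is what lets former siblings partially resynchronize.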
antirez
c7a4e694ad SWAPDB command.
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:

    SWAPDB 0 1

This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.

MOTIVATION AND HISTORY
---

The command was recently requested by Pedro Melo, but had been suggested
multiple times in the past, and always refused by me.

The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.

This is an interesting pattern, but the reason why I always opposed
implementing it was that FLUSHDB was a blocking command in Redis before
Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the
old data in O(1) from the point of view of the client, to reclaim memory
incrementally in a different thread.

At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for the users to comment.
If a very compelling argument is made against this new command,
it may be removed.

BEHAVIOR WITH BLOCKING OPERATIONS
---

If a client is blocking for a list in a given DB, after the swap it will
still be blocked in the same DB ID, since this is the most logical thing
to do: if I was blocked waiting for a push to list "foo", even after the
swap I still want an LPUSH to reach the key "foo" in the same DB in order
to unblock.

However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0. Then DBs
0 and 1 are swapped with SWAPDB, and DB 1 happens to have
a list called "foo" containing elements. When this happens, this
implementation can correctly unblock the client.

It is possible that there are subtle corner cases that are not covered
in the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.

Tests and documentation are yet to be provided.
2016-10-14 15:28:04 +02:00
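The dataset-versioning pattern from MOTIVATION AND HISTORY, as a command sketch (DB indexes as in the example above, keys arbitrary):

    SELECT 1
    MSET k1 v1 k2 v2
    SWAPDB 0 1
    SELECT 1
    FLUSHDB ASYNC

Clients pinned to DB 0 see the new data immediately after SWAPDB, and FLUSHDB ASYNC reclaims the old version incrementally in another thread, which is what makes the pattern viable without latency spikes.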
antirez
09a50d34a2 dict.c: dictReplaceRaw() -> dictAddOrFind().
What do they say about "naming things" in programming?
2016-09-14 16:43:38 +02:00
antirez
a8e2d0849e LFU: Initial naive eviction cycle.
It is possible to get better results by using the pool like in the LRU
case. Also from tests during the morning I believe the current
implementation has issues in the frequency decay function that should
decrease the counter at periodic intervals.
2016-07-18 13:50:19 +02:00
antirez
5d07984c5d LFU: Redis object level implementation.
Implementation of LFU maxmemory policy for anything related to Redis
objects. Still no actual eviction implemented.
2016-07-15 12:12:58 +02:00
antirez
b46239e58b Expire and LRU related code moved into different files. 2016-07-06 15:24:06 +02:00
antirez
41d804d9dc TTL and TYPE LRU access fixed. TOUCH implemented. 2016-06-14 15:33:59 +02:00
antirez
8ec28002be Modules: support for modules native data types. 2016-06-03 18:14:04 +02:00
antirez
227d68094b Modules: command <-> core interface modified to get flags & keys. 2016-05-10 06:40:09 +02:00
Oran Agra
5e3880a492 various cleanups and minor fixes 2016-04-25 16:49:57 +03:00
antirez
4e252e4c09 MIGRATE: Fix key extraction for new form. 2015-12-11 18:09:01 +01:00
antirez
252cfa0a39 Lazyfree: cond vars to enabled/disable it based on DEL context. 2015-10-02 15:27:57 +02:00