the AOF will be loaded successfully, but the stream will be missing,
i.e. the loaded db will be inconsistent with the original one.
this happened because XADD with an ID of 0-0 replies with an error.
added a test to reproduce the problem.
Redis refusing to run MULTI or EXEC during script timeout may cause partial
transactions to run.
1) if the client sends MULTI+commands+EXEC in a pipeline without waiting for
responses, and these arrive at the server partially while a busy script is
running and partially after it eventually finishes, we'll end up running only
part of the transaction, since MULTI was ignored and EXEC would fail (see the
illustration below).
2) similar to the above, if EXEC arrives during a busy script, it'll be ignored
and the client's state remains inside a transaction.
the 3rd test I added, for the case where MULTI and EXEC are fine and only the
transaction body arrives during the busy script, was already handled correctly,
since processCommand calls flagTransaction.
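a rough illustration of case 1, with the pipelined commands on the left and
the replies the client eventually reads on the right (error text abbreviated):

MULTI      -> (error) BUSY Redis is busy running a script. ...
SET k1 v1  -> OK     (executed outside any transaction, after the script finished)
EXEC       -> (error) ERR EXEC without MULTI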
it seems that running two clients at a time is ok too; it reduces the action
time from 20 minutes to 10. we'll use this for now, and if one day it isn't
enough we'll have to run just the sensitive tests one by one, separately from
the others.
this commit also fixes a very rare issue with the defrag test.
it seems that github actions runners are slow, so we use just one client to
reduce false positives.
also adding verbose output, testing only on the latest ubuntu, and building on
an older one.
when doing that, I can reduce the test threshold back to something saner.
I saw that the new defrag test for lists was failing in CI recently, so I
relaxed its threshold from 12 to 60.
besides that, I added / improved the latency test for the other two defrag
tests (adding a sensitive latency check and digest / save checks),
and fixed bad usage of DEBUG POPULATE (it can't override existing keys);
overwriting was the original intention, since it creates higher fragmentation.
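a quick demonstration of that DEBUG POPULATE behavior (existing keys are
silently skipped, to the best of my understanding):

127.0.0.1:6379> SET key:0 small
OK
127.0.0.1:6379> DEBUG POPULATE 1
OK
127.0.0.1:6379> GET key:0
"small"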
When active defrag kicks in and finds a big list, it will create a bookmark to
a node so that it is able to resume iteration from that node later.
The quicklist manages that bookmark, and updates it in case that node is deleted.
This increases memory usage by 16 bytes, and only for lists of more than 1000
quicklist nodes (i.e. 1000 ziplists, not 1000 items; see
active-defrag-max-scan-fields).
In 32-bit builds, this change reduces the maximum effective config of
list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
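a minimal sketch of the bookmark idea; the names and simplified types below
are illustrative, not the exact quicklist.c code:

typedef struct quicklistNode quicklistNode; /* opaque here */

typedef struct quicklistBookmark {
    quicklistNode *node; /* node to resume iteration from */
    char *name;          /* bookmark identifier */
} quicklistBookmark;     /* two pointers = 16 bytes on 64-bit builds */

/* when a node is deleted, any bookmark pointing at it is moved to the
 * next node, so resuming from the bookmark never touches freed memory */
static void bookmark_on_delete(quicklistBookmark *bm,
                               quicklistNode *deleted,
                               quicklistNode *next) {
    if (bm->node == deleted) bm->node = next;
}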
This bug affected RM_StringToLongDouble and HINCRBYFLOAT.
I added tests for both cases.
Main changes:
1. Fixed string2ld to fail if the string contains a \0 in the middle
2. Use string2ld in getLongDoubleFromObject - no point in having
duplicated code here
The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble,
because the long double string was saved with length+1 (an innocent
mistake, but it's actually a bug - the length passed to
RM_SaveLongDouble should not include the terminating \0).
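a sketch of the embedded-\0 check, assuming string2ld parses via strtold into
a bounded local buffer (illustrative, not the exact code):

#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <math.h>

/* returns 1 on success, 0 on failure; an embedded '\0' makes strtold
 * stop early, so eptr lands short of buf+slen and we reject */
int string2ld_sketch(const char *s, size_t slen, long double *dp) {
    char buf[256]; /* illustrative size only */
    char *eptr;
    long double v;

    if (slen == 0 || slen >= sizeof(buf)) return 0;
    memcpy(buf, s, slen);
    buf[slen] = '\0';
    errno = 0;
    v = strtold(buf, &eptr);
    if (eptr == buf || (size_t)(eptr - buf) != slen ||
        errno == ERANGE || isnan(v)) return 0;
    if (dp) *dp = v;
    return 1;
}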
If a blocked module client times out (or disconnects, is unblocked
by the CLIENT command, etc.) we need to call moduleUnblockClient
in order to free the memory allocated by the module sub-system
and the blocked client's private data.
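for context, a minimal module-side sketch of where that private data comes
from (callback names are illustrative):

#include "redismodule.h"

static int MyReply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);
static int MyTimeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);

/* invoked by the core when the blocked client goes away for any reason
 * (reply, timeout, disconnect, CLIENT UNBLOCK); if moduleUnblockClient
 * is never called, this hook never fires and the privdata attached via
 * RedisModule_UnblockClient() leaks */
static void MyFreePrivdata(RedisModuleCtx *ctx, void *privdata) {
    REDISMODULE_NOT_USED(ctx);
    RedisModule_Free(privdata);
}

static int MyCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    RedisModule_BlockClient(ctx, MyReply, MyTimeout, MyFreePrivdata,
                            1000 /* timeout in ms */);
    return REDISMODULE_OK;
}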
Other changes:
Made the blockedonkeys.tcl tests a bit more aggressive in order
to smoke out potential memory leaks.
This commit solves the following bug:
127.0.0.1:6379> XGROUP CREATE x grp $ MKSTREAM
OK
127.0.0.1:6379> XADD x 666 f v
"666-0"
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
2) 1) 1) "666-0"
2) 1) "f"
2) "v"
127.0.0.1:6379> XADD x 667 f v
"667-0"
127.0.0.1:6379> XDEL x 667
(integer) 1
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
2) (empty array)
The root cause is that we used s->last_id in streamCompareID,
while we should use the last *valid* ID.
- make lua-replicate-commands mutable (it never was, but I don't see why it
shouldn't be)
- make tcp-backlog immutable (fixing a mistake from the recent refactoring)
- increase the max limit of a few configs to match what they were before
the recent refactoring
This commit solves several edge cases related to exhausting the streamID
limits: we should correctly calculate the succeeding streamID instead of
blindly incrementing 'seq'. This affects both XREAD and XADD.
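a sketch of the succeeding-ID calculation, assuming a streamID made of two
64-bit parts (the helper below is illustrative):

#include <stdint.h>

typedef struct { uint64_t ms, seq; } streamID;

/* compute the smallest ID strictly greater than *id; returns 0 on
 * success, -1 if the ID space (max-ms, max-seq) is exhausted */
static int stream_incr_id(streamID *id) {
    if (id->seq == UINT64_MAX) {
        if (id->ms == UINT64_MAX) return -1; /* nothing greater exists */
        id->ms++;
        id->seq = 0;
    } else {
        id->seq++;
    }
    return 0;
}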
Other (unrelated) changes:
Reply with a better error message when trying to add an entry
to a stream whose last_id is exhausted.
With the previous API, a NULL return value was ambiguous and could
represent either an old value of NULL or an error condition. The new API
returns a status code and allows the old value to be returned
by-reference.
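the shape of the change, with a hypothetical function name since the commit
text doesn't name the API:

/* before (ambiguous): a NULL return may mean "the old value was NULL"
 * or "an error occurred"
 *     void *ReplaceValue(RedisModuleKey *key, void *new_value);
 * after: explicit status code, old value handed back by reference
 * (and it may legitimately be NULL) */
int ReplaceValue(RedisModuleKey *key, void *new_value, void **old_value);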
This commit also includes test coverage based on
tests/modules/datatype.c which did not exist at the time of the original
commit.
Changes in behavior:
- Change server.stream_node_max_entries from int64_t to long long, so that it can be used by the generic infra
- standard error replies instead of ad-hoc messages like "repl-backlog-size must be 1 or greater"
- tls-port and a few TLS booleans were readable (config get) even when USE_OPENSSL was off (now they aren't)
- syslog-enabled, syslog-ident, cluster-enabled, appendfilename, and supervised didn't have a get (now they do)
- pidfile was initialized to NULL in initServerConfig, but rewriteConfig used CONFIG_DEFAULT_PID_FILE, so the real default was "" while a config rewrite would cause it to be set; fixed the rewrite.
- the TLS config in server.h was left uninitialized (when no tls config args were provided)
Adding tests for sanity and coverage
there were two issues. one is that BGREWRITEAOF failed since the initial
rewrite was still in progress; the solution is to enable appendonly from
server startup so there's no initial aofrw.
the other problem was 0 loading progress events. the theory is that on some
platforms a sleep of 1 causes a much greater delay due to the context switch,
but on other platforms it doesn't. in theory, a sleep of 100 microseconds per
key over 1k keys should take 100ms, and with an hz of 500 we should be getting
50 events (one every 2ms). in practice it doesn't work like that, so we're
trying to find a sleep that is long enough but still doesn't make the test
take too long.
Calling XADD with 0-0 or 0 would result in creating an
empty key and storing it in the database.
Even worse, because XADD replies with an error, the command is not
replicated, creating a master-replica inconsistency.
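a repro of the pre-fix behavior; the EXISTS reply shows the leftover empty
key (error text may differ between versions):

127.0.0.1:6379> XADD x 0-0 f v
(error) ERR The ID specified in XADD must be greater than 0-0
127.0.0.1:6379> EXISTS x
(integer) 1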
- Adding RM_ScanKey
- Adding tests for RM_ScanKey
- Refactoring the RM_Scan API
Changes in RM_Scan:
- cleanups in docs and coding conventions
- moving it out of the experimental API
- adding ctx to the scan callback
- don't use a cursor of -1 as an indication of completion (it can be a valid cursor)
- set errno when returning 0, for the various possible reasons
- rename Cursor to ScanCursor
- the test filters keys that are not strings, and opens the key if it was given as NULL
The implementation exposes the following new functions:
1. RedisModule_CursorCreate - allows creating a new cursor object for
key scanning
2. RedisModule_CursorRestart - resets an existing cursor so the scan can
be restarted
3. RedisModule_CursorDestroy - destroys an existing cursor
4. RedisModule_Scan - scans the keys
The RedisModule_Scan function gets a cursor object, a callback, and a void*
(used as user private data).
The callback is called for each key in the database, providing the key
name and the value as a RedisModuleKey.
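a usage sketch under the post-rename names (RedisModule_ScanCursorCreate and
friends); the callback signature here follows the module API as I recall it,
so treat the details as an assumption:

#include "redismodule.h"

/* called once per key; 'key' may be NULL, in which case the module can
 * open it by name if it needs the value */
static void ScanCB(RedisModuleCtx *ctx, RedisModuleString *keyname,
                   RedisModuleKey *key, void *privdata) {
    REDISMODULE_NOT_USED(key);
    size_t len;
    const char *name = RedisModule_StringPtrLen(keyname, &len);
    RedisModule_Log(ctx, "notice", "scanned key: %.*s", (int)len, name);
    (*(long long *)privdata)++;
}

static void ScanAllKeys(RedisModuleCtx *ctx) {
    long long count = 0;
    RedisModuleScanCursor *cursor = RedisModule_ScanCursorCreate();
    /* returns 1 while there is more to scan, 0 when the scan is done */
    while (RedisModule_Scan(ctx, cursor, ScanCB, &count)) {}
    RedisModule_ScanCursorDestroy(cursor);
    RedisModule_Log(ctx, "notice", "scanned %lld keys", count);
}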
- the API name was odd; it has been split into two APIs, one for LRU and one
for LFU
- the LRU idle time was in 1 second resolution, which might be ok for RDB
and RESTORE, but I think modules may need higher resolution
- adding tests for LFU and for handling a maxmemory policy mismatch
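a usage sketch of the split APIs; the signatures below are from the module
API as I recall it, so treat them as an assumption:

#include "redismodule.h"

/* read / adjust the key's eviction stats, depending on which maxmemory
 * policy is configured: LRU idle time (in ms) or LFU frequency */
static void TouchEvictionStats(RedisModuleKey *key) {
    mstime_t idle_ms;
    long long freq;

    if (RedisModule_GetLRU(key, &idle_ms) == REDISMODULE_OK) {
        RedisModule_SetLRU(key, 0); /* mark as just used */
    } else if (RedisModule_GetLFU(key, &freq) == REDISMODULE_OK) {
        RedisModule_SetLFU(key, freq + 1);
    } /* else: the policy doesn't match either API */
}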
Add two new functions that leverage the RedisModuleDataType mechanism
for RDB serialization/deserialization and make it possible to serialize
to / deserialize from arbitrary strings:
* RM_SaveDataTypeToString()
* RM_LoadDataTypeFromString()
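a usage sketch (assuming signatures along these lines; 'MyType' and 'val' are
hypothetical module state):

#include "redismodule.h"

/* round-trip a module data type value through a string, using the
 * type's rdb_save/rdb_load callbacks */
static void RoundTrip(RedisModuleCtx *ctx, RedisModuleType *MyType, void *val) {
    RedisModuleString *s = RedisModule_SaveDataTypeToString(ctx, val, MyType);
    if (s == NULL) return; /* serialization failed */
    void *copy = RedisModule_LoadDataTypeFromString(s, MyType);
    RedisModule_FreeString(ctx, s);
    if (copy == NULL) return; /* deserialization failed */
    /* ... use 'copy', then release it with the type's free callback ... */
}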