1. Local variable 's' hides a parameter of the same name.
```
int anetTcpAccept(char *err, int s, char *ip, size_t ip_len, int *port) {
    /* 'fd', 'sa' and 'salen' are locals declared earlier in the function. */
    if ((fd = anetGenericAccept(err,s,(struct sockaddr*)&sa,&salen)) == ANET_ERR)
        return ANET_ERR;
}
```
Change the parameter name from `s` to `serversock`,
and unify it with the declaration in the header file.
2. Comparison is always false because i <= 48.
```
for (i = 0; i < DICT_STATS_VECTLEN-1; i++) {   // i < 49
    (i == DICT_STATS_VECTLEN-1) ? ">= " : "",  // i == 49, never true here
}
```
`i == DICT_STATS_VECTLEN-1` always evaluates to false inside this loop, so it is dead code.
3. Empty block without a comment.
`else if (!strcasecmp(opt,"ch")) {}` — add a comment inside the braces to avoid the warning.
4. A bit field such as `fill` should have an explicitly unsigned integral, explicitly signed integral, or enumeration type rather than plain `int`.
Change `int fill: QL_FILL_BITS;` to `unsigned int fill: QL_FILL_BITS;`
5. The result of this call to `reconnectingRedisCommand` is not checked for NULL, although 80% of the calls to `reconnectingRedisCommand` do check for NULL.
Just a cleanup job like the others.
This caused a crash when adding elements larger than 2GB to a set (same goes for hash keys). See #8455.
Details:
* The fix makes the dict hash functions receive a `size_t` instead of an `int`. In practice the dict hash functions
call siphash, which receives a `size_t`, and the callers of the hash function already pass it a `size_t`, so the fix is trivial (see the sketch after this list).
* The issue was recreated by attempting to add a >2gb value to a set. Appropriate tests were added where I create
a set with large elements and check basic functionality on it (SADD, SCARD, SPOP, etc...).
* When I added the tests I also refactored a bit all the test code which runs under the `--large-memory` flag.
This removed code duplication around the test framework's `write_big_bulk` helper and also takes
care of not allocating the test framework's huge helper string used by these tests when not running under `--large-memory`.
* I also added the _violations.tcl_ unit tests to be part of the entire test suite and cleaned up the non-relevant list-related
tests that were in there. This was done in this PR because most of the _violations_ tests are "large memory" tests.
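A sketch of the signature change described in the first bullet (prototype shape per dict.h as I recall it; treat it as illustrative):
```
/* Before: the length parameter truncates/overflows for values above INT_MAX. */
uint64_t dictGenHashFunction(const void *key, int len);

/* After: matches siphash, which already takes a size_t length. */
uint64_t dictGenHashFunction(const void *key, size_t len);
```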
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with the desired sanitizer: `make SANITIZER=undefined`
- There were some sanitizer findings; the codebase has been cleaned up accordingly.
- Added tests with address and undefined behavior sanitizers to daily CI.
- Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
Basically, there are three types of issues:
**1- Unaligned load/store**: most probably, this issue could cause a crash on a platform that
does not support unaligned access; Redis does unaligned accesses only on platforms that support them.
**2- Signed integer overflow**: although signed overflow can be problematic at times
and change how the compiler generates code, the current findings are mostly about signed shifts or simple
addition overflow (a short sketch follows these notes). For most platforms Redis can be compiled for, this wouldn't cause any issue
as far as I can tell (I checked the generated code on godbolt.org).
**3- Minor leak** (redis-cli) and **use-after-free** (just before calling exit());
UB means nothing is guaranteed and it is risky to reason about program behavior, but I don't think any
of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
will be the real benefit.
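For illustration, a minimal example of the signed-shift class flagged in point 2 above (generic C, not code from the Redis tree):
```
#include <stdint.h>

int32_t  bad(void)  { return 1 << 31; }            /* UB: shifts into the sign bit of int. */
uint32_t good(void) { return UINT32_C(1) << 31; }  /* Well-defined unsigned shift. */
```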
* Enhance dict to support arbitrary metadata carried in dictEntry
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
* Rewrite slot-to-keys mapping to linked lists using dict entry metadata
This is a memory enhancement for Redis Cluster.
The radix tree slots_to_keys (which duplicates all key names prefixed with their
slot number) is replaced with a linked list for each slot. The dict entries of
the same cluster slot form a linked list and the pointers are stored as metadata
in each dict entry of the main DB dict.
This commit also moves the slot-to-key API from db.c to cluster.c. (A sketch of the metadata layout follows.)
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
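A hedged sketch of how the two items above fit together, assuming a dictType hook that reports the per-entry metadata size; the names below are approximations, not verified against the tree:
```
/* dictType gains a hook reporting how many metadata bytes each entry carries. */
typedef struct dictType {
    /* ... existing callbacks ... */
    size_t (*dictEntryMetadataBytes)(dict *d);
} dictType;

/* Cluster mode keeps the slot-list links in that metadata, so the dict
 * entries of one slot form a doubly linked list with no extra allocations. */
typedef struct clusterDictEntryMetadata {
    dictEntry *prev;  /* previous entry with a key in the same slot */
    dictEntry *next;  /* next entry with a key in the same slot */
} clusterDictEntryMetadata;
```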
Recently we found two issues in the fuzzer tester: #9302 #9285
After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them.
Here's a list of the fixes:
- Prevent an overflow when allocating a dict hashtable (sketched after this list)
- Prevent OOM when attempting to allocate a huge string
- Prevent a few invalid accesses in listpack
- Improve sanitization of listpack first entry
- Validate integrity of stream consumer groups PEL
- Validate integrity of stream listpack entry IDs
- Validate ziplist tail followed by extra data which starts with 0xff
Co-authored-by: sundb <sundbcn@gmail.com>
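For the first fix in the list above, a minimal sketch of the kind of guard involved (illustrative, not the exact Redis code):
```
/* Inside dictExpand: refuse sizes whose bucket-array allocation would
 * wrap around size_t. */
if (realsize < size || realsize * sizeof(dictEntry*) < realsize)
    return DICT_ERR;
```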
Reduce dict struct memory overhead
On 64-bit builds the dict struct goes down from jemalloc's 96-byte bin to its 56-byte bin.
Summary of changes (a sketch of the resulting struct follows this list):
- Remove `privdata` from callbacks and dict creation (this affects many files, see "Interface change" below).
- Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
- Eliminate the `sizemask` field; it can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (an exponent), using one byte instead of 8.
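A hedged sketch of the resulting layout, roughly as the commit leaves dict.h (field names approximate):
```
struct dict {
    dictType *type;
    dictEntry **ht_table[2];     /* the two melded hash tables */
    unsigned long ht_used[2];
    long rehashidx;              /* -1 when no rehash in progress */
    int16_t pauserehash;
    signed char ht_size_exp[2];  /* size = 1 << exp: one byte instead of 8 */
};

/* size and sizemask are computed on demand instead of being stored. */
#define DICTHT_SIZE(exp) ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
#define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : DICTHT_SIZE(exp) - 1)
```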
Interface change: pass the dict pointer to the dict type callback functions,
instead of the removed privdata field (see the sketch below). In the future, if
we'd like to have private data in the callbacks, we can extract it from
the dict type. We can extend dictType to include a custom dict struct
allocator and use it to allocate more data at the end of the dict
struct. This data can then be used to store private data later accessed
by the callbacks.
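A sketch of the callback shapes after the change (abbreviated; the real dictType has more members):
```
typedef struct dictType {
    uint64_t (*hashFunction)(const void *key);
    void *(*keyDup)(dict *d, const void *key);   /* dict pointer replaces privdata */
    void *(*valDup)(dict *d, const void *obj);
    int (*keyCompare)(dict *d, const void *key1, const void *key2);
    void (*keyDestructor)(dict *d, void *key);
    void (*valDestructor)(dict *d, void *obj);
} dictType;
```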
1. Add `redis-server test all` support to run all tests.
2. Add the redis test to daily CI.
3. Add an `--accurate` option to run slow tests for more iterations, so that
by default we run fewer cycles (shorter time, fewer prints).
4. Move the dict benchmark to REDIS_TEST.
5. Fix some leaks in tests.
6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
7. Move some prints in the quicklist test outside their loops to reduce output.
8. Remove sds.h from dict.c, since dict.c is now used by both redis-server and
redis-cli (which uses hiredis sds).
The `dict` field `iterators` is misleading and incorrect.
This variable is used for 1 purpose - to pause rehashing.
The current `iterators` field doesn't actually count "iterators".
It counts "safe iterators". But it doesn't actually count safe iterators
either: for one, it's only incremented once the safe iterator begins to
iterate, not when it's created. It's also incremented in `dictScan` to
prevent rehashing (and commented to make it clear why `iterators` is
being incremented during a scan).
This update renames the field to `pauserehash` and adds 2 helper
macros, `dictPauseRehashing(d)` and `dictResumeRehashing(d)` (sketched below).
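The helpers are trivial counter bumps along these lines (sketch):
```
/* Nested pauses stack; rehashing resumes when the counter returns to zero. */
#define dictPauseRehashing(d) ((d)->pauserehash++)
#define dictResumeRehashing(d) ((d)->pauserehash--)
```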
When a database on a 64 bit build grows past 2^31 keys, the underlying hash table expands to 2^32 buckets. After this point, the algorithms for selecting random elements only return elements from half of the available buckets because they use random() which has a range of 0 to 2^31 - 1. This causes problems for eviction policies which use dictGetSomeKeys or dictGetRandomKey. Over time they cause the hash table to become unbalanced because, while new keys are spread out evenly across all buckets, evictions come from only half of the available buckets. Eventually this half of the table starts to run out of keys and it takes longer and longer to find candidates for eviction. This continues until no more evictions can happen.
This solution addresses the problem by using a 64 bit PRNG instead of libc random() (see the sketch below).
Co-authored-by: Greg Femec <gfemec@google.com>
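A hedged sketch of the change, assuming a helper returning a full-width random value (`randomULong` is an illustrative name):
```
/* random() caps at 2^31 - 1, so it can never index the upper half of a
 * 2^32-bucket table; a 64-bit PRNG masked by sizemask reaches every bucket. */
unsigned long bucket = randomULong() & d->ht[0].sizemask;
```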
As we know, Redis may reject user requests or evict some keys if
used memory is over maxmemory. Dictionary expansion may make
things worse: some big dictionaries, such as the main db and the expires dict,
may eat huge amounts of memory at once to allocate a new big hash table, ending up
far beyond maxmemory after expanding.
There are related issues: #4213 #4583
In more detail: when a dict in Redis expands, we allocate a new
ht[1] that is generally double the size of ht[0], so the size of ht[1] will be
very big if ht[0] is already big. For the db dict, if we have more than
64 million keys, we need to spend 1GB on ht[1] when the dict expands.
If the sum of the used memory and the new hash table the dict needs exceeds
maxmemory, we shouldn't allow the dict to expand. If key eviction is enabled,
we still couldn't add many more keys after eviction and rehashing; worse,
Redis would keep fewer keys, since only a little memory would remain for
users' data after storing the new hash table. And if key eviction is
disabled, users can't write data to Redis at all.
What does this commit change?
Add a new member function, expandAllowed, to the dict type; it provides a way
for the caller to allow or disallow expansion. We expose two parameters to this
function: the extra memory needed for expanding, and the dict's current load factor;
users can implement a function that makes the decision based on them.
For the main db dict and the expires dict type, which may be very
big and cost huge amounts of memory to expand, we implement such a judgement
function: we provisionally stop the dict from expanding if used memory would
be over maxmemory after the dict expands; but to guarantee the performance
of Redis, we still allow the dict to expand if its load factor exceeds the
safe load factor (a sketch follows).
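A hedged sketch of the hook and the maxmemory-aware policy described above; `expandAllowed` and its two parameters follow the text, while `SAFE_LOAD_FACTOR` and `overMaxmemoryAfterAlloc` are illustrative names:
```
/* New dictType member: may the dict allocate moreMem extra bytes to
 * expand, given its current load factor usedRatio? */
int (*expandAllowed)(size_t moreMem, double usedRatio);

/* Policy for the main db and expires dicts (sketch). */
static int dbDictExpandAllowed(size_t moreMem, double usedRatio) {
    if (usedRatio <= SAFE_LOAD_FACTOR)            /* still performing fine */
        return !overMaxmemoryAfterAlloc(moreMem); /* deny if past maxmemory */
    return 1; /* load factor too high: expand anyway for performance */
}
```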
Add test cases to verify that we don't allow the main db to expand when the
remaining memory is not enough, so as to avoid key eviction.
Other changes:
Regarding the new hash table size on expand: before this commit, the size was
double the dict's used count, later passed through _dictNextPower. Since we
actually aim to keep the dict load factor between 0.5 and 1.0, we now replace
*2 with +1: because the first check is that used >= size, the outcome
will usually be the same as _dictNextPower(used+1). The only case where
it differs is when dict_can_resize is false during fork, so that
_dictNextPower(used*2) would make the dict jump to *4 (i.e.
_dictNextPower(1025*2) returns 4096).
Fix the rehash test cases to account for the changed algorithm for the new
hash table size on expand.
The callback approach we took is very efficient: the module can do any
filtering of keys without building any list or cloning strings, and it can
also read data from the key's value. But if the user tries to re-open
the key, or any other key, this can cause dict rehashing (dictFind does
that), and that's very bad to do from inside dictScan.
This commit protects the dict from doing any rehashing during the scan
(see the sketch below), but also warns the user not to attempt any writes
or command calls from within the callback, for fear of unexpected side
effects and crashes.
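A minimal sketch of the protection, using the rehash-pause counter (named `iterators` at the time of this commit; signature abbreviated):
```
unsigned long dictScan(dict *d, unsigned long v,
                       dictScanFunction *fn, void *privdata) {
    if (dictSize(d) == 0) return 0;
    d->iterators++;  /* rehash steps are skipped while this is non-zero */
    /* ... visit the bucket(s) at cursor v, invoking fn on each entry ... */
    d->iterators--;
    return v;
}
```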
The distribution improves dramatically: tests show it clearly. It is better to
have a slower implementation than a wrong one, because random member
extraction must be correct, or it tends to be useless for a number of
tasks.
This change attempts to switch to a hash function which mitigates
the effects of the HashDoS attack (a denial of service attack trying
to force data structures into worst-case behavior) while at the same time
providing Redis with a hash function that does not expect the input
data to be word aligned, a condition no longer true now that sds.c
strings have a variable-length header.
Note that sometimes, even when using a hash function
for which collisions cannot be generated without knowing the seed,
implementation details or indirect exposure of the seed
(for example, the ability to add elements to a Set and
check the order in which Redis returns them with SMEMBERS) may
make the attacker's life simpler in the process of trying to guess
the correct seed. However, the next step would be to switch to a
log(N) data structure when too many items in a single bucket are
detected: this seems like overkill in the case of Redis.
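For reference, a hedged sketch of the wiring: a thin wrapper over SipHash keyed with a process-wide random seed (16 bytes, per SipHash's key size). The `int` length matches this commit's era; the widening to `size_t` is described in an earlier entry of this document.
```
static uint8_t dict_hash_function_seed[16]; /* randomized at startup */

uint64_t dictGenHashFunction(const void *key, int len) {
    return siphash(key, len, dict_hash_function_seed);
}
```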
SPEED REGRESSION TESTS:
In order to verify that switching from MurmurHash to SipHash had
no impact on speed, a set of benchmarks involving fast insertion
of 5 million keys was performed.
The results show Redis with SipHash, in high-pipelining conditions,
to be about 4% slower compared to the previous hash function.
However, this could partially be related to the fact that the current
implementation does not attempt to hash whole words at a time but
reads single bytes, in order to have an output which is endian-neutral
while also working on systems where unaligned memory accesses
are a problem.
Further x86-specific optimizations should be tested; the function
may easily get to the same level as MurmurHash2 if a few optimizations
are performed.
Recently we moved the "return ASAP" condition of the Delete() function
from checking .size to checking .used, which is smarter. However, while
testing the first table alone always works to ensure the dict is totally
empty when we test the .size field, testing .used requires testing both
T0 and T1, since a rehash could be in progress.
Notes by @antirez:
This patch was picked from a larger commit by Oran and adapted to change
the API a bit. The basic idea is to avoid double lookups when there is
a need to use the value of the deleted entry.
BEFORE:
```
entry = dictFind( ... );       /* 1st lookup. */
/* Do something with the entry. */
dictDelete(...);               /* 2nd lookup. */
```
AFTER:
```
entry = dictUnlink( ... );     /* 1st lookup. */
/* Do something with the entry. */
dictFreeUnlinkedEntry(entry);  /* No lookups! */
```
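For reference, a sketch of the added declarations, per the description above:
```
/* Remove the entry from the table without freeing it, so the caller can keep
 * using its key/value, then release it later with no further lookups. */
dictEntry *dictUnlink(dict *ht, const void *key);
void dictFreeUnlinkedEntry(dict *d, dictEntry *he);
```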