The size of a skiplist node depends on the node level, which however is not
stored in the node itself: it is implicit information. So we use
zmalloc_size() in order to compute the sorted set size.
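A minimal sketch of the idea, assuming the post-lazyfree field names (ele as
an sds) and Redis helpers such as zmalloc_size() and sdsAllocSize(); the
function name zslMemUsage() is made up for illustration:

    /* Walk the skiplist at level 0 and let the allocator report each
     * node's real allocation size, so the variable-length level[] array
     * is accounted for without knowing the node's level. */
    size_t zslMemUsage(zskiplist *zsl) {
        size_t size = zmalloc_size(zsl->header);
        zskiplistNode *zn = zsl->header->level[0].forward;
        while (zn != NULL) {
            size += zmalloc_size(zn);       /* node struct + level[] array */
            size += sdsAllocSize(zn->ele);  /* the member string (an sds) */
            zn = zn->level[0].forward;
        }
        return size;
    }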
The new SAMPLES option is added, defaulting to 5, with 0 being a special
value that scans the whole set of elements.
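For example (illustrative key name):

    MEMORY USAGE mykey              -> averages 5 sampled elements (default)
    MEMORY USAGE mykey SAMPLES 0    -> scans every element of the value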
Fixes to the object size computation were made, since the original PR
assumed data structures still containing robj structures, while after the
lazyfree changes they are all SDS strings.
This code was extracted from @oranagra PR #3223 and modified in order
to provide only a subset of the information compared to the original
code. It was also moved from DEBUG to the newly introduced MEMORY
command. Thanks to Oran for the implementation and the PR.
It implements detailed memory usage stats that can be useful in both
provisioning and troubleshooting memory usage in Redis.
For most tasks, we need the memory estimation to be O(1) by default.
This commit also implements an initial MEMORY command.
Note that objectComputeSize() takes the number of samples to check as an
argument, so MEMORY should be able to take the sample size as an option
in order to make the precision vs. CPU tradeoff tunable.
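As a hedged sketch of how such sampling can bound the cost while still
extrapolating to the full value (names like node, elementSize() and
num_elements are illustrative, not the actual objectComputeSize() internals):

    /* Average the bytes used by up to sample_size elements, then scale
     * the average to the total element count. sample_size == 0 can be
     * reserved to mean "scan everything". */
    size_t samples = 0, elesize = 0;
    while (node != NULL && (sample_size == 0 || samples < sample_size)) {
        elesize += elementSize(node);   /* hypothetical per-element size */
        samples++;
        node = node->next;
    }
    if (samples) asize += (size_t)((double)elesize / samples * num_elements);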
Related to: PR #3223.
It is possible to get better results by using the pool like in the LRU
case. Also, from tests during the morning, I believe the current
implementation has issues in the frequency decay function, which should
decrease the counter at periodic intervals.
strict_strtoll() has a bug that reports the empty string as ok and
parses it as zero.
Apparently nobody ever replaced this old call with the faster/saner
string2ll(), which is otherwise used in the rest of the Redis core.
This commit closes #3333.
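For reference, a sketch of the stricter pattern (string2ll() is the real
helper in util.c; the surrounding buffer and reply code are illustrative):

    long long value;

    /* string2ll() returns 1 only when the whole buffer is a valid
     * integer, so an empty string is rejected instead of silently
     * parsing as zero. */
    if (string2ll(buf, buflen, &value) == 0) {
        addReplyError(c, "value is not an integer or out of range");
        return;
    }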
Should be much faster, and regardless, the code is more obvious now
compared to generating a string just to get the return value of the
ll2string() function.
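As an illustration of the idea (not the commit's actual code), the printed
length of a long long can be computed directly:

    /* Return the printed length of 'v' (digits plus an optional minus
     * sign) without rendering it into a buffer first. */
    static size_t lldigits(long long v) {
        unsigned long long u = (v < 0) ? -(unsigned long long)v : (unsigned long long)v;
        size_t n = (v < 0) ? 2 : 1;   /* sign + first digit, or a single digit */
        while (u >= 10) { u /= 10; n++; }
        return n;
    }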
This replaces individual ziplist vs. linkedlist representations
for Redis list operations.
Big thanks for all the reviews and feedback from everybody in
https://github.com/antirez/redis/pull/2143
In order to make sure every object has its own private LRU counter, when
maxmemory is enabled, tryObjectEncoding() does not use the pool of shared
integers. However, when the policy is not LRU-based, it does not make
sense to do so, and it is much better to save memory by using shared
integers.
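A sketch of the resulting check in tryObjectEncoding(); the constant names
are assumptions about that era of the source tree, so treat the exact
condition as an approximation:

    /* Shared integers are fine unless an LRU-based maxmemory policy is
     * active, because a shared object cannot carry a per-key LRU clock. */
    if ((server.maxmemory == 0 ||
         (server.maxmemory_policy != MAXMEMORY_ALLKEYS_LRU &&
          server.maxmemory_policy != MAXMEMORY_VOLATILE_LRU)) &&
        value >= 0 && value < OBJ_SHARED_INTEGERS)
    {
        /* ... return the shared integer object ... */
    }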
Previously, the command definition for the OBJECT command specified
a minimum of two args (and that it was variadic), which meant that
if you sent this:
OBJECT foo
when cluster was enabled, it would result in an assertion/SEGFAULT
when Redis was attempting to extract keys.
It appears that OBJECT is not variadic, and only ever takes 3 args.
https://gist.github.com/michael-grunder/25960ce1508396d0d36a
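The fix boils down to declaring the exact arity in the command table; a
sketch of the shape of such an entry (the flags and key-spec fields here are
illustrative, not copied from the commit):

    /* Arity 3 (e.g. OBJECT ENCODING <key>) instead of -2, with the key
     * always at argument position 2, so cluster key extraction is safe. */
    {"object",objectCommand,3,"r",0,NULL,2,2,1,0,0},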
All the Redis functions that need to modify the string value of a key in
a destructive way (APPEND, SETBIT, SETRANGE, ...) need to make the
object unshared (if refcount > 1) and encoded in raw format (if the
encoding is not already REDIS_ENCODING_RAW).
This was cut & pasted many times in multiple places of the code. This
commit puts the small logic needed into a function called
dbUnshareStringValue().
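A sketch of the shape such a helper can take (close to, but not necessarily
identical to, the committed code):

    robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
        redisAssert(o->type == REDIS_STRING);
        if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
            robj *decoded = getDecodedObject(o);
            o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
            decrRefCount(decoded);
            dbOverwrite(db, key, o);
        }
        return o;
    }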