The XCLAIM docs state that XCLAIM increments the delivery counter for
messages. This PR makes the code match the documentation, which seems
like the desired behaviour, while still allowing RETRYCOUNT to be
specified manually.
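A minimal sketch of the intended counter handling, written as a standalone
C model rather than the actual t_stream.c change (toy_nack, claim_message
and retrycount_given are illustrative names, not Redis identifiers):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-in for the per-message pending entry (streamNACK in Redis). */
    typedef struct {
        uint64_t delivery_count;
    } toy_nack;

    /* Claiming a message: if the caller passed RETRYCOUNT, use it verbatim;
     * otherwise increment the delivery counter, as the XCLAIM docs describe. */
    static void claim_message(toy_nack *nack, int retrycount_given,
                              uint64_t retrycount) {
        if (retrycount_given)
            nack->delivery_count = retrycount;
        else
            nack->delivery_count++;
    }

    int main(void) {
        toy_nack nack = { .delivery_count = 3 };
        claim_message(&nack, 0, 0);   /* plain XCLAIM: counter becomes 4 */
        printf("after claim: %llu\n", (unsigned long long)nack.delivery_count);
        claim_message(&nack, 1, 10);  /* XCLAIM ... RETRYCOUNT 10: counter forced to 10 */
        printf("with explicit RETRYCOUNT: %llu\n", (unsigned long long)nack.delivery_count);
        return 0;
    }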
My understanding of the way streamPropagateXCLAIM() works is that this
change will safely propagate to replicas since the retry count is pulled
directly from the streamNACK struct.
Fixes #5194
This way the behavior is very similar to the previous one.
This is useful in order to remind the user that they probably failed to
configure a password correctly.
A few tests had borderline thresholds that were adjusted.
The slave buffers test had two issues preventing the slave buffer from growing:
1) the slave didn't necessarily go to sleep on time, or woke up too early;
now using SIGSTOP to make sure it goes to sleep exactly when we want
(see the sketch after this list).
2) the master disconnected the slave on timeout.
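The SIGSTOP idea in miniature, as a hedged standalone C illustration rather
than the actual Tcl test: stop a child process at a known point so it cannot
read anything while the parent generates traffic, then let it continue.

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: stands in for the replica process. */
            for (;;) pause();
        }
        kill(pid, SIGSTOP);   /* freeze the child exactly when we want it asleep */
        sleep(1);             /* the parent (the test) does its work here */
        kill(pid, SIGCONT);   /* wake the child back up */
        kill(pid, SIGTERM);   /* clean up the toy child */
        waitpid(pid, NULL, 0);
        return 0;
    }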
This should be able to find new bugs and regressions in the new
sorted set update function when ZADD is used to update an already
existing element.
The test is able to find the bug fixed in 2f282aee immediately.
* allowing --single to be repeated
* adding --only so that only a specific test inside a unit can be run
* adding --skiptill, useful to resume a test run that crashed, past the problematic unit;
  useful together with --clients 1
* adding --skipfile to use a file containing a list of test names to skip
* printing the names of the tests that are skipped by skipfile or denytags
* adding --config to add config file options from the command line
it looks like on slow machines we're getting:
[err]: slave buffer are counted correctly in tests/unit/maxmemory.tcl
Expected condition '$slave_buf > 2*1024*1024' to be true (16914 > 2*1024*1024)
this is a result of the slave waking up too early and consuming the
slave buffer before the traffic finishes and the test ends.
on slower machines, the active defrag test tended to fail.
although the fragmentation ratio was below the threshold, the defragger was
still in the middle of a scan cycle.
this commit changes:
- the defragger uses the current fragmentation state, rather than the cached one
that is updated by server cron every 100ms. this actually fixes a bug of
starting one excess scan cycle (a small sketch of this follows below)
- the test lets the defragger use more CPU cycles, in the hope that the defrag
will be faster, and also gives it more time before we give up.
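A hedged sketch of the first point, as a standalone model rather than the
real defrag.c logic (the numbers and the helper are made up for
illustration): decide whether to run a defrag cycle from a freshly computed
fragmentation ratio instead of a value that a periodic cron cached up to
100ms earlier.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stand-ins for allocator stats; in Redis these come from the allocator,
     * here they are just example numbers. */
    static size_t resident  = 120 * 1024 * 1024;
    static size_t allocated = 100 * 1024 * 1024;

    /* Fragmentation as a percentage of extra resident memory over allocated. */
    static float current_frag_pct(void) {
        return ((float)resident / (float)allocated) * 100.0f - 100.0f;
    }

    int main(void) {
        float lower_threshold = 10.0f;  /* e.g. active-defrag-threshold-lower */

        /* A value cached every 100ms can lag behind reality and trigger one
         * excess scan cycle; querying right before deciding avoids that. */
        float frag = current_frag_pct();
        bool run_cycle = frag >= lower_threshold;
        printf("fragmentation %.1f%%: %s\n", frag,
               run_cycle ? "run a defrag cycle" : "skip the defrag cycle");
        return 0;
    }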