We already have these in zmalloc.c, but this is going to replace that
implementation, so that it's possible to use the same idea everywhere
inside the code base.
After the introduction of the list of clients with pending writes, in
order to process clients incrementally outside of the event loop we also
need to process the pending writes list.
The code was broken and resulted in redis-cli --pipe, most of the time,
writing everything received on the standard input to the Redis
connection socket without ever reading back the replies, until all the
content to write was written.
This means that Redis had to accumulate all the output in the output
buffers of the client, consuming a lot of memory.
Fixed thanks to the original report of anomalies in the behavior
provided by Twitter user @fsaintjacques.
Georadius works by computing the center and the neighboring squares
covering the whole area of the specified position and radius. Then a
distance filter is used to remove the elements which are actually
outside the range.
When a huge radius is used, like 5000 km or more, adjacent neighbors may
collide and be the same, leading to the reporting of the same element
multiple times. This only happens in the edge case of huge radius but is
not ideal.
A robust but slow solution would involve qsorting the range to remove
all the duplicates. However, since the collisions only happen between
adjacent boxes, because of the way they are ordered in the code, it is
much faster to just check whether the current box is the same as the
previous one processed.
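A minimal sketch of the check (illustrative types and names, not the
actual geo.c code): boxes are processed in order, so comparing each one
with the previously processed box is enough to drop the duplicates.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t bits;  /* interleaved geohash bits */
        uint8_t step;   /* geohash precision, 0 means "unused box" */
    } GeoBox;

    /* Scan the 9 boxes (center + 8 neighbors), skipping a box when it
     * is identical to the previously processed one. 'visit' scans a
     * single box and returns how many members it added to the result. */
    size_t scanUniqueBoxes(const GeoBox boxes[9],
                           size_t (*visit)(const GeoBox *box, void *ctx),
                           void *ctx)
    {
        size_t added = 0;
        const GeoBox *prev = NULL;
        for (int i = 0; i < 9; i++) {
            const GeoBox *cur = &boxes[i];
            if (cur->step == 0) continue;   /* empty box */
            if (prev && cur->bits == prev->bits && cur->step == prev->step)
                continue;                   /* same box as the previous one */
            added += visit(cur, ctx);
            prev = cur;
        }
        return added;
    }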
This commit adds a regression test for the bug.
Fixes #2767.
MOVE was not able to move the TTL: when a key was moved into a different
database number, it became persistent as if PERSIST was used.
In some incredible way (I guess almost nobody uses Redis MOVE) this bug
remained unnoticed inside Redis internals for many years.
Finally Andy Grunwald discovered it and opened an issue.
This commit fixes the bug and adds a regression test.
Close #2765.
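A simplified sketch of the idea behind the fix (not the verbatim patch;
function names approximate the Redis internals of that era): read the
TTL from the source DB and install it again in the destination DB right
after the key is added there.

    /* Inside moveCommand(), with src/dst being the source and
     * destination databases. */
    robj *o = lookupKeyWrite(src, c->argv[1]);
    if (o == NULL) {
        addReply(c, shared.czero);   /* no such key, reply 0 */
        return;
    }
    long long expire = getExpire(src, c->argv[1]);   /* -1 if no TTL */

    /* ... check that the key does not already exist in 'dst' ... */

    dbAdd(dst, c->argv[1], o);
    if (expire != -1) setExpire(dst, c->argv[1], expire); /* keep the TTL */
    incrRefCount(o);
    dbDelete(src, c->argv[1]);   /* remove the key from the source DB */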
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.
However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when fork fails
the state of the slaves remains BGSAVE_END and no cleanup is performed.
This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
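A sketch of the rollback (constant and type names approximate the Redis
internals of that era): on fork() failure every slave that was already
switched to WAIT_BGSAVE_END is put back into WAIT_BGSAVE_START, so that
the caller can find and clean it up.

    if ((childpid = fork()) == -1) {
        listIter li;
        listNode *ln;

        listRewind(server.slaves, &li);
        while ((ln = listNext(&li)) != NULL) {
            redisClient *slave = ln->value;
            if (slave->replstate == REDIS_REPL_WAIT_BGSAVE_END)
                slave->replstate = REDIS_REPL_WAIT_BGSAVE_START; /* undo */
        }
        /* ... close the pipes and return an error to the caller ... */
        return REDIS_ERR;
    }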
Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is, we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).
This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate this change, syncCommand() also
needs to put the client in the slave list ASAP (just after the initial
checks) and not at the end, so that startBgsaveForReplication() can find
the new slave already in the list.
Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.
As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):
1. rdbSaveToSlavesSockets() on a failed fork will get the slaves cleaned
up; otherwise they would remain in a wrong state forever, since we set
them up for full resync before actually trying to fork.
2. updateSlavesWaitingBgsave() with the replication target set to
"socket" was broken, since the function changed the slaves' state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
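A sketch of the new error path in startBgsaveForReplication()
(approximate names): when the BGSAVE attempt fails, every slave still
waiting for it to start is removed from the slave list, receives an
error, and is scheduled for disconnection.

    if (retval == REDIS_ERR) {
        listIter li;
        listNode *ln;

        listRewind(server.slaves, &li);
        while ((ln = listNext(&li)) != NULL) {
            redisClient *slave = ln->value;
            if (slave->replstate == REDIS_REPL_WAIT_BGSAVE_START) {
                slave->flags &= ~REDIS_SLAVE;
                listDelNode(server.slaves, ln);
                addReplyError(slave,
                    "BGSAVE failed, replication can't continue");
                slave->flags |= REDIS_CLOSE_AFTER_REPLY;
            }
        }
    }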
It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization(); after all, it is only called in the
context of syncWithMaster().
This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.
Talking with @oranagra we had to reason a little bit to understand
whether this function could ever flush the output buffers of the wrong
slaves: slaves that are in the online state but are actually not yet
ready to receive writes before the first ACK is received from them (this
happens with diskless replication).
Next time we'll just read this comment.
Add the concept of slave capabilities to Redis: the slave now presents
itself to the Redis master with a set of capabilities in the form:
REPLCONF capa SOMECAPA capa OTHERCAPA ...
This has the effect of setting slave->slave_capa with the corresponding
SLAVE_CAPA macros that the master can test later to understand whether
the slave will understand certain formats and protocols of the
replication process. This makes it much simpler to introduce new
replication capabilities in the future in a way that doesn't break old
slaves or masters.
This patch was designed and implemented together with Oran Agra
(@oranagra).
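A sketch of how the master side can parse the pairs (close in spirit to
the REPLCONF handling described above; the "eof" capability used by
diskless replication is shown as an example):

    /* Arguments arrive as "capa <name>" pairs, e.g. REPLCONF capa eof
     * (the argument count is validated to be in pairs beforehand). */
    for (int j = 1; j < c->argc; j += 2) {
        if (!strcasecmp(c->argv[j]->ptr, "capa")) {
            if (!strcasecmp(c->argv[j+1]->ptr, "eof"))
                c->slave_capa |= SLAVE_CAPA_EOF;
            /* Unknown capabilities are simply ignored, so old and new
             * masters and slaves keep interoperating. */
        }
    }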
Our function to read a line with a timeout handles newlines as requests
to refresh the timeout; however the code kept subtracting the remaining
buffer size every time a newline was received, because of a bug in the
loop logic. Fixed by this commit.
For PINGs we use the period configured by the user, but for the newlines
of slaves waiting for an RDB to be created (including slaves waiting for
the FULLRESYNC reply) we need to ping with a frequency of 1 second,
since the timeout is fixed and needs to be refreshed.
In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.
To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also
doing this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.
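A sketch of the renamed function (approximate, era-appropriate names):
besides sending the +FULLRESYNC reply, it records the offset the slave
starts from, switches the slave state, and forces the SELECT command to
be re-emitted.

    int replicationSetupSlaveForFullResync(redisClient *slave,
                                           long long offset) {
        char buf[128];
        int buflen;

        slave->psync_initial_offset = offset;
        slave->replstate = REDIS_REPL_WAIT_BGSAVE_END;
        /* We are going to accumulate the incremental changes for this
         * slave as well: make sure the next write re-emits SELECT. */
        server.slaveseldb = -1;

        /* Don't send the FULLRESYNC reply to slaves using the old SYNC. */
        if (!(slave->flags & REDIS_PRE_PSYNC)) {
            buflen = snprintf(buf, sizeof(buf), "+FULLRESYNC %s %lld\r\n",
                              server.runid, offset);
            if (write(slave->fd, buf, buflen) != buflen) {
                freeClientAsync(slave);
                return REDIS_ERR;
            }
        }
        return REDIS_OK;
    }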
This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental) found by Yuval Inbar from Redis
Labs, and that was later found to have even more far reaching effects
(the bug also exists when diskless replication is off).
The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with FULLRESYNC was always the
current master replication offset. This is not ok, because there are
many reasons that may delay the RDB file creation. And... guess what,
the master offset we communicate must be the one of the time the RDB
was created. So for example:
1) When the BGSAVE for replication is delayed since there is one
already in progress, but it is not good for replication.
2) When the BGSAVE is not needed since we attach the slave to an
already ongoing one.
3) When, because of diskless replication, the BGSAVE is delayed.
In all the above cases the PSYNC reply is wrong, and the slave may
reconnect later claiming to need a wrong offset: this may cause
data corruption later.
Using chained replication where C is slave of B which is in turn slave of
A, if B reconnects the replication link with A but discovers it is no
longer possible to PSYNC, slaves of B must be disconnected and PSYNC
not allowed, since the new B dataset may be completely different after
the synchronization with the master.
Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics was:
1. When a slave lost the connection with its master, it disconnected the
chained slaves ASAP. This is not needed, since after a successful PSYNC
with the master, the chained slaves can continue and don't need to
resync in turn.
2. However after a failed PSYNC the replication backlog was not reset, so a
slave was able to PSYNC successfully even if the instance did a full
sync with its master, containing now an entirely different data set.
Now instead chained slaves are not disconnected when the slave loses the
connection with its master, but only when it is forced to do a full SYNC
with its master. This means that if the slave having chained slaves does
a successful PSYNC, all its slaves can continue without troubles.
See issue #2694 for more details.
When empty strings are created, or when sdsMakeRoomFor() is called, we
are likely in an appending pattern. Use at least type 8 SDS strings,
since type 5 does not remember the free allocation size and requires a
call to sdsMakeRoomFor() at every new piece appended.
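A sketch of the heuristic, close to the actual sds.c logic:

    /* In sdsnewlen(): empty strings are usually created in order to
     * append, and type 5 cannot remember free space. */
    char type = sdsReqType(initlen);
    if (type == SDS_TYPE_5 && initlen == 0) type = SDS_TYPE_8;

    /* In sdsMakeRoomFor(): never pick type 5 for the grown string,
     * since the caller is clearly appending. */
    type = sdsReqType(newlen);
    if (type == SDS_TYPE_5) type = SDS_TYPE_8;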
The previous attempt to process each client at least once every ten
seconds was not a good idea, because:
1. Usually, because of the min iterations previously set to 50, you got
a much better processing period most of the time anyway.
2. However when there are many clients and a normal setting for
server.hz, the edge case is triggered, and waiting 10 seconds for a
BLPOP that asked for 1 second is not ok.
3. Moreover, because of the high min-iterations limit of 50, when HZ
was set to a high value, the actual behavior was to process a lot of
clients per second.
Also, the function checking for timeouts called gettimeofday() at each
iteration, which can be costly.
The new implementation will try to process each client once per second:
it takes the current time as an argument, and does not attempt to
process more than 5 clients per iteration if not needed.
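A sketch of how the number of clients processed per call can be
computed under these constraints (the constant name is illustrative):

    #define CLIENTS_CRON_MIN_ITERATIONS 5

    /* In clientsCron(): with server.hz calls per second, processing
     * numclients/server.hz clients per call means every client is
     * processed about once per second. */
    int numclients = listLength(server.clients);
    int iterations = numclients / server.hz;

    /* Still process a few clients even when there are very few of
     * them, but never more than needed. */
    if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
        iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?
                     numclients : CLIENTS_CRON_MIN_ITERATIONS;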
So now:
1. The CPU usage of an idle Redis process is the same or better.
2. The CPU usage of a busy Redis process is the same or better.
3. However a non trivial amount of work may be performed per iteration
when there are many many clients. In this particular case the user may
want to raise the "HZ" value if needed.
Btw with 4000 clients it was still not possible to notice any actual
latency created by processing 400 clients per second, since the work
performed for each client is pretty small.
This is an attempt to use the refcount feature of the sds.c fork
provided in Pull Request #2509. A new type, SDS_TYPE_5, is introduced,
having a one-byte header with just the string length, without
information about the available additional space at the end of the
string (this means that sdsMakeRoomFor() will be required each time
we want to append something, since the string will always report having
0 bytes available).
More work is needed in order to avoid that common SDS functions pay the
cost of this type. For example both sdscatprintf() and sdscatfmt()
should try to upgrade to SDS_TYPE_8 ASAP when appending chars.
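A sketch of what such a one-byte header looks like (the layout matches
the description above: the flags byte holds the type in its 3 least
significant bits and the string length in the remaining 5 bits, so
lengths up to 31 need no extra fields):

    struct __attribute__ ((__packed__)) sdshdr5 {
        unsigned char flags;  /* 3 lsb: type, 5 msb: string length */
        char buf[];
    };

    #define SDS_TYPE_5        0
    #define SDS_TYPE_BITS     3
    #define SDS_TYPE_5_LEN(f) ((f) >> SDS_TYPE_BITS)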
The command reports information about the hash table internal state
representing the specified database ID.
This can be used in order to investigate rehashings, memory usage issues
and for other debugging purposes.
The new return value is the number of existing keys among the ones
specified in the command line, counting the same key multiple times if
it is given multiple times (and exists). For example, if the key foo
exists, EXISTS foo nosuchkey foo returns 2.
See PR #2667.
Rationale:
1. The commands look like internals exposed without a real strong use
case.
2. Even when there is a use case, the client would implement the
commands client-side instead of paying an RTT just to use a simple to
reimplement library.
3. They add complexity to an otherwise quite straightforward API.
So for now KILLED ;-)
Instead of successive divisions in a loop, the new code uses bitwise
magic to interleave / deinterleave two 32-bit values into a 64-bit one.
All tests are still passing and it is measurably faster, so it is worth it.
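The trick is the classic "interleave bits by Binary Magic Numbers"
approach; a sketch of the interleave direction (the deinterleave applies
the same masks in reverse):

    #include <stdint.h>

    /* Spread the bits of x and y so that x occupies the even positions
     * and y the odd positions of the result. */
    static inline uint64_t interleave64(uint32_t xlo, uint32_t ylo) {
        static const uint64_t B[] = {
            0x5555555555555555ULL, 0x3333333333333333ULL,
            0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,
            0x0000FFFF0000FFFFULL
        };
        static const unsigned int S[] = {1, 2, 4, 8, 16};
        uint64_t x = xlo, y = ylo;

        x = (x | (x << S[4])) & B[4];
        x = (x | (x << S[3])) & B[3];
        x = (x | (x << S[2])) & B[2];
        x = (x | (x << S[1])) & B[1];
        x = (x | (x << S[0])) & B[0];

        y = (y | (y << S[4])) & B[4];
        y = (y | (y << S[3])) & B[3];
        y = (y | (y << S[2])) & B[2];
        y = (y | (y << S[1])) & B[1];
        y = (y | (y << S[0])) & B[0];

        return x | (y << 1);
    }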
Stack traces produced by Redis on crash are the most useful tool we
have to fix crashes that are not easily reproducible, or even easily
reproducible ones where the user just posts a bug report and does not
collaborate further.
By declaring functions "static" they no longer show up in the stack
trace.
If GEOENCODE must be our door to enter the Geocoding implementation
details and do fancy things client side, then return the scores as well,
so that we can query the sorted sets directly if we wish to do the same
search multiple times, or want to compute the boxes on the client side
to refine our search needs.
The GIS standard and all the major DBs implementing GIS related
functions take coordinates as x,y that is longitude,latitude.
It was a bad start for Redis to do things differently, so even if this
means that existing users of the Geo module will be required to change
their code, Redis now conforms to the standard.
Usually Redis is very backward compatible, but this is not an exception
to this rule, since this is the first Geo implementation entering the
official Redis source code. It is not wise to try to be backward
compatible with code forks... :-)
Close #2637.
In some cases the returned step was not adequate for the specified
search area (for example when our search position was already near the
margin of the central area, and we had to match, using the east or west
neighbor, a very far point). Example:
geoadd points 67.575457940146066 -62.001317572780565 far
geoadd points 66.685439060295664 -58.925040587282297 center
georadius points 66.685439060295664 -58.925040587282297 200 km
In the above case the code failed to find a match (this happens at
smaller latitudes too) even though far and center are less than 200 km
apart.
Another fix introduced by this commit is a progressively larger area
towards the poles, since there the meridians are much closer to each
other, so we need to compensate for this.
The current implementation works comparably to the Tcl brute-force
stress tester implemented in the fuzzy test in the geo.tcl unit for
latitudes between -70 and 70, and is pretty accurate over +/-80 too,
with sporadic false negatives.
A more mathematically clean implementation is possible by computing the
meridian distance at the specified latitude and computing the step
according to it.
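A hedged sketch of the step estimation with the pole compensation
described above (close in spirit to the geohash helper, not necessarily
its exact constants; MERCATOR_MAX is roughly half the Earth's
circumference in meters):

    #include <stdint.h>

    #define MERCATOR_MAX 20037726.37  /* meters */

    /* Estimate the geohash step (bits per coordinate) so that a single
     * box covers the given radius, widening the boxes near the poles
     * where meridians get closer to each other. */
    uint8_t estimateStepsByRadius(double range_meters, double lat) {
        uint8_t step = 1;
        if (range_meters == 0) return 26;
        while (range_meters < MERCATOR_MAX) {
            range_meters *= 2;
            step++;
        }
        step -= 2;  /* Make sure the range is covered in most base cases. */
        /* Wider range towards the poles. */
        if (lat > 66 || lat < -66) {
            step--;
            if (lat > 80 || lat < -80) step--;
        }
        if (step < 1) step = 1;
        if (step > 26) step = 26;
        return step;
    }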
1. We no longer use a fake client, but plain command rewriting.
2. We group all the inserts into a single ZADD dispatch (big speed win).
3. As a side effect of the correct implementation, replication works.
4. The return value of the command is now correct.
This commit simplifies the implementation in a few ways:
1. The zsetScore implementation was improved a bit and moved into
t_zset.c, where it is now also used to implement the ZSCORE command.
2. Range extraction from the sorted set remains an implementation
separate from the one in t_zset.c, but was hyper-specialized in order
to avoid accumulating results into a list and then removing the ones
outside the radius.
3. A new type is introduced: geoArray, which can accumulate geoPoint
structures in a vector with a power-of-two expansion policy (see the
sketch after this list). This is useful since we have to call qsort()
against it before returning the result to the user.
4. As a result of 1, 2, 3, the two files zset.c and zset.h are now
removed, including the function to merge two lists (now handled with
functions that can add elements to existing geoArray arrays) and
the machinery used in order to pass zset results.
5. geoPoint structure simplified because of the general code structure
simplification, so we no longer need to take references to objects.
6. Not counting the JSON removal, the refactoring removes 200 lines of
code for the same functionality, with a simpler to read implementation.
7. GEORADIUS is now 2.5 times faster when testing with 10k elements and
a radius resulting in 124 elements returned. However this is mostly a
side effect of the refactoring and simplification. More speed gains
can be achieved by trying to optimize the code.
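A sketch of the power-of-two expansion policy mentioned in point 3
(simplified, using plain realloc() instead of the Redis allocator):

    #include <stdlib.h>
    #include <stddef.h>

    typedef struct geoPoint {
        double longitude, latitude, dist, score;
        char *member;
    } geoPoint;

    typedef struct geoArray {
        geoPoint *array;
        size_t buckets;  /* allocated slots */
        size_t used;     /* filled slots */
    } geoArray;

    /* Return a pointer to a new slot, growing the vector by doubling
     * its size when it is full. */
    geoPoint *geoArrayAppend(geoArray *ga) {
        if (ga->used == ga->buckets) {
            size_t newbuckets = (ga->buckets == 0) ? 8 : ga->buckets * 2;
            geoPoint *newarray =
                realloc(ga->array, sizeof(geoPoint) * newbuckets);
            if (newarray == NULL) return NULL;  /* out of memory */
            ga->array = newarray;
            ga->buckets = newbuckets;
        }
        return ga->array + ga->used++;
    }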
For some reason the Geo PR included disabling the compiler optimizations
Redis is normally built with. Apparently it was just @mattsta's attempt
to speed up the modify-compile-test iteration and there is no other
reason.
This feature apparently is not going to be very useful: sending a
GEOADD+PUBLISH combo achieves exactly the same. A feature that would
make a ton of difference is the ability to subscribe to a position and
a radius, and get updates about objects entering/exiting the area.
Current todo:
- replace functions in zset.{c,h} with a new unified Redis
zset access API.
Once we get the zset interface fixed, we can squash
relevant commits in this branch and have one nice commit
to merge into unstable.
This commit adds:
- Geo commands
- Tests; runnable with: ./runtest --single unit/geo
- Geo helpers in deps/geohash-int/
- src/geo.{c,h} and src/geojson.{c,h} implementing geo commands
- Updated build configurations to get everything working
- TEMPORARY: src/zset.{c,h} implementing zset score and zset
range reading without writing to client output buffers.
- Modified linkage of one t_zset.c function for use in zset.c
Conflicts:
src/Makefile
src/redis.c
We have a check to rewrite the config properly when a failover is in
progress, in order to add the current (already failed over) master as
a slave, and to not include the promoted slave itself in the slave list.
However there was an issue: the variable with the right address was
computed but never used after the code was modified, and no tests are
available for this feature for two reasons:
1. The Sentinel unit test currently does not test Sentinel's ability to
persist its state at all.
2. It is a very hard-to-trigger state, since it lasts for a very short
time in the context of the testing framework.
However this feature should be covered in the test in some way.
The bug was found by @badboy using the clang static analyzer.
Effects of the bug on safety of Sentinel
===
This bug results in severe issues in the following case:
1. A Sentinel is elected leader.
2. During the failover, it persists a wrong config with a known-slave
entry listing the master address.
3. The Sentinel crashes and restarts, reading invalid configuration from
disk.
4. It sees that the slave now does not obey the logical configuration
(it should replicate from the current master), so it sends a SLAVEOF
command to the master (since the listed slave address is actually the
master's address), creating a replication loop (an attempt to replicate
from itself) which Redis is currently unable to detect.
5. This means that the master is no longer available because of the bug.
However the lack of availability should only be transient (at least
in my tests, but other states may be possible where the problem
is not recovered automatically) because:
6. Sentinels treat masters reporting to be slaves as failing.
7. A new failover is triggered, and a slave is promoted to master.
Bug lifetime
===
The bug has been there forever. Commit 16237d78 actually tried to fix
the bug, but in the wrong way (the computed variable was never used! My
fault). So this bug has been there basically since the start of Sentinel.
Since the bug is hard to trigger, I don't remember many reports matching
this condition, but I remember at least a few. Also in automated tests
where instances were stopped and restarted multiple times automatically,
I remember hitting this issue; however I was not able to reproduce it,
nor to determine, with the information I had at the time, what was
causing it.
We usually want to reach the master using the address of the interface
Redis is bound to (via the "bind" config option). That's useful since
the master will get (and publish) the slave address by getting the peer
name of the incoming socket connection from the slave.
However, when this is not possible, for example because the slave is
bound to the loopback interface but replicates from a master accessed
via an external interface, we still want to connect to the master even
from a different interface: in this case it is not really important
which address the master will publish for the slave, while it is vital
to be able to replicate correctly.
Related to issues #2609 and #2612.
This performs a best-effort source address binding attempt. If it is
possible to bind the local address and still have a successful
connect(), then this socket is returned. Otherwise the call is retried
without the source address binding attempt.
Related to issues #2609 and #2612.
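A self-contained sketch of the best-effort logic (not the actual anet.c
code): try to bind to the requested source address before connect(),
and if the connection cannot be established that way, retry once
without the binding attempt.

    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>
    #include <string.h>

    static int tcpConnectBestEffortBind(const char *host, const char *port,
                                        const char *source_addr)
    {
        struct addrinfo hints, *servinfo, *p;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &servinfo) != 0) return -1;

        for (p = servinfo; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd == -1) continue;
            if (source_addr) {
                /* Best effort: bind the requested local address if we can. */
                struct addrinfo *binfo, *b;
                if (getaddrinfo(source_addr, NULL, &hints, &binfo) == 0) {
                    for (b = binfo; b != NULL; b = b->ai_next)
                        if (bind(fd, b->ai_addr, b->ai_addrlen) == 0) break;
                    freeaddrinfo(binfo);
                }
            }
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) break;
            close(fd);
            fd = -1;
        }
        freeaddrinfo(servinfo);

        /* If binding prevented the connection, retry without it. */
        if (fd == -1 && source_addr)
            return tcpConnectBestEffortBind(host, port, NULL);
        return fd;
    }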
Normally ZADD only returns the number of elements added to a sorted
set; using the RETCH option it returns the sum of the elements added
and of those for which the score was updated.
The PING trigger was improved again by using two fields instead of a
single one to remember when the last ping was sent:
1. The "active" ping is the time at which we sent the last ping that
still received no reply. However we continue to ping non replying
instances even if they have an old active ping: the link may be
disconnected and reconencted in the meantime so the older pings may get
lost even if it's a TCP socket.
2. The "last" ping is the time at which we really sent the last ping
on the wire, and this is used in order to throttle the amount of pings
we send during failures (when no pong is received).
All in all the failure detector effectiveness should be identical, but
we avoid flooding instances with pings during failures or when they are
slow.
It's ok to ping as soon as the ping period has elapsed since we received
the last PONG, but it's not good that we ping again if there is a
pending ping... With this change we'll send a new ping while one is
still pending only if two times the ping period has elapsed since the
still-pending ping was sent.
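A sketch of the resulting condition (field names are illustrative and
follow the two-fields description in the previous commit message:
act_ping_time is the time of the oldest ping still waiting for a reply,
last_pong_time the time of the last PONG received):

    typedef long long mstime_t;  /* milliseconds, Redis style */

    /* Decide whether a new PING should be sent now. */
    int pingIsDue(mstime_t now, mstime_t last_pong_time,
                  mstime_t act_ping_time, mstime_t ping_period)
    {
        if (now - last_pong_time <= ping_period) return 0; /* too early */
        if (act_ping_time == 0) return 1;            /* nothing pending */
        /* A ping is still pending: re-ping only after 2x the period. */
        return (now - act_ping_time) > ping_period * 2;
    }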
This is useful for debugging and logging activities: given a
sentinelRedisInstance object, it returns a C string representing the
instance type: master, slave, sentinel.
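A sketch of such a helper (the SRI_* flags are the ones Sentinel
already uses to tag instances):

    const char *sentinelRedisInstanceTypeStr(sentinelRedisInstance *ri) {
        if (ri->flags & SRI_MASTER) return "master";
        else if (ri->flags & SRI_SLAVE) return "slave";
        else if (ri->flags & SRI_SENTINEL) return "sentinel";
        else return "unknown";
    }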
This new command triggers a config flush to save the in-memory config to
disk. This is useful for cases of a configuration management system or a
package manager wiping out your sentinel config while the process is
still running - and has not yet been restarted. It can also be useful
for scripting a backup and migrate or clone of a running sentinel.
Since with a previous commit Sentinels now persist their unique ID, we
no longer need to detect duplicated Sentinels and re-add them. We remove
and re-add a Sentinel, using different events, only in the case of an
address switch of the same Sentinel, without generating a new +sentinel
event.
Previously Sentinels always changed unique ID across restarts, relying
on the server.runid field. This is not a good idea, and forced Sentinel
to rely on detection of duplicated Sentinels and a potentially dangerous
clean-up and re-add operation of the Sentinel instance that was
rebooted.
Now the ID is generated at the first start and persisted in the
configuration file, so that a given Sentinel will have its unique
ID forever (unless the configuration is manually deleted or there is a
filesystem corruption).
Originally, only the +slave event which occurs when a slave is
reconfigured during sentinelResetMasterAndChangeAddress triggers a flush
of the config to disk. However, newly discovered slaves apparently don't
trigger this flush, but do trigger the +slave event issuance.
So if you start up a sentinel, add a master, then add a slave to the
master (as a way to reproduce it) you'll see the +slave event issued,
but the sentinel config won't be updated with the known-slave entry.
This change makes Sentinel flush the config to disk if a new slave is
detected in sentinelRefreshInstanceInfo.
Rewriting the config in the loop that adds slaves back after a master
reset, in order to handle switching to another master, is useless: it
just adds latency, since there is an fsync call in the inner loop,
without providing any additional guarantee. Quite the contrary: if the
server crashes after the first loop iteration, we end up with just a
single slave entry, losing all the other information.
It is wiser to rewrite the config at the end when the full new
state is configured.