1. server.repl_no_slaves_since can be set when a MONITOR client disconnects
2. c->repl_ack_time can be set by a newline from a MONITOR client
3. Improved comments
This bug is from the first version of Redis. Probably the problem here
is that before we used an SDS split function that created empty strings
for additional spaces, like in "SET  foo   bar" (note the extra spaces).
AFAIK later we replaced it with the current sdssplitargs() API that has
no such problem. As a result, we introduced a bug, where it is no
longer possible to do something like:
SET foo ""
Using the inline protocol. Now it is fixed.
It looks like each platform implements long double differently (different bit counts),
so we can't save them as binary, and we also want to avoid creating a new RDB
format version, so we save them as hex strings using "%La".
This commit includes a change in the arguments of ld2string to support this,
as well as tests for coverage and short reads.
coded by @guybe7
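A minimal standalone sketch of the hex round trip (not the actual ld2string/string2ld code, just plain libc calls to show the idea):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long double v = 3.14159265358979323846264338327950288L;
    char buf[64];
    /* Save as a hex float string: portable across long double layouts. */
    snprintf(buf, sizeof(buf), "%La", v);
    /* Load it back: strtold() understands the hex float format. */
    long double back = strtold(buf, NULL);
    printf("%s -> exact round trip: %d\n", buf, back == v);
    return 0;
}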
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way hooks test work, and add tests for all of the above
startLoading() now gets a flag indicating what is being loaded.
stopLoading() now gets an indication of success or failure.
Adding startSaving() and stopSaving() with similar arguments and role.
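For example, a module could subscribe to the role-change hook roughly like this (a hedged sketch: the event name, callback signature, and macros follow my reading of redismodule.h and should be double checked):

#include "redismodule.h"

static void roleChanged(RedisModuleCtx *ctx, RedisModuleEvent e,
                        uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    /* 'sub' tells whether we are now a master or a replica. */
    RedisModule_Log(ctx, "notice", "replication role changed (subevent %llu)",
                    (unsigned long long)sub);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx,"hooksdemo",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModule_SubscribeToServerEvent(ctx,
        RedisModuleEvent_ReplicationRoleChanged, roleChanged);
    return REDISMODULE_OK;
}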
misc:
- handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
- fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers. (needed to detect the rdb pipe was closed)
- add key-load-delay config for testing
- trim connShutdown which is no longer needed
- rioFdsetWrite -> rioFdWrite - simplified since there's no longer need to write to multiple FDs
- don't detect rdb child exited (don't call wait3) until we detect the pipe is closed
- Cleanup bad optimization from rio.c, add another one
* Introduce a connection abstraction layer for all socket operations and
integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
Potentially it is possible that we get interleaved writes, however
serverLog() makes sure to write into a buffer and then emit it with a
single printf() call, so even this should be OK. In general POSIX
guarantees that writing to the same file pointer object from multiple
threads is safe anyway. Moreover we currently *reopen* the file at each
call, except for standard output logging.
The logging functions also access global configuration while performing
the log (for instance in order to check the log level, the log filename
and so forth), however during the I/O threads execution we cannot alter
such shared state in any way.
Now threads are stopped even when the connections drop immediately to
zero, a case in which the networking code cannot detect the condition
and stop the threads itself: serverCron() will handle that.
This is just an experiment for now, there are a couple of race
conditions, mostly harmless for the performance gain experiment that
this commit represents so far.
The general idea here is to take Redis single threaded and instead
fan out on expensive kernel calls: write(2) in this case, but the same
concept could easily be implemented for read(2) and protocol parsing.
However just threading writes, like in this commit, is enough to evaluate
whether the approach is sound.
When Redis appends the blocked client reply list to the real client, it didn't
bother to check if that client is in fact the master client. So a slave executing such a
module command will send replies to the master, causing the master to send the
slave error responses, which will mess up the replication offset
(the slave will advance its replication offset, while the master does not).
Fake clients are used in special situations and are not linked to the
normal clients list, freeing them will always result in Redis crashing
in one way or the other.
It's not common to send replies to fake clients, but we have one usage
in the modules API. When a client is blocked, we associate with the
blocked client object (which is safe to manipulate in a thread) a fake
client that accumulates replies. So because of this bug there was
the problem described in issue #5443.
The fix was verified to work with the provided example module. Writing
a regression test is very hard, and the bug is unlikely to be triggered in the future.
Related to #4840.
Note that when we re-enter the event loop with aeProcessEvents() we
don't process timers, nor before/after sleep callbacks, so we should
never end calling freeClientsInAsyncFreeQueue() when re-entering the
loop.
If we are going to read a large object from network
try to make it likely that it will start at c->querybuf
boundary so that we can optimize object creation
avoiding a large copy of data.
But only when the data we have not parsed is less than
or equal to ll+2. If the data length is greater than
ll+2, trimming querybuf is just a waste of time, because
at this time the querybuf contains not only our bulk.
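A hedged sketch of that condition (variable names such as pos are illustrative, not the exact processMultibulkBuffer code):

/* ll is the announced bulk length, pos is how far we have parsed querybuf. */
size_t not_parsed = sdslen(c->querybuf) - pos;
if (not_parsed <= (size_t)(ll + 2)) {
    /* Only our bulk is left: move it to the start of querybuf so the
     * object can later be created from the buffer without a big copy. */
    sdsrange(c->querybuf, pos, -1);
    pos = 0;
}
/* Otherwise querybuf holds more than our bulk and the memmove done by
 * sdsrange would be wasted work. */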
It's easy to reproduce it:
Time1: call `client pause 10000` on slave.
Time2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.
Then the slave hangs after 10 seconds.
Technically speaking we don't really need to put the master client in
the clients that need to be processed, since in practice the PING
commands from the master will take care of that, however it is conceptually
more sane to do so.
Processing commands from the master while the slave is in a busy state is
not correct, however we also cannot just reply -BUSY to the
replication stream commands from the master. The correct solution is to
stop processing data from the master, and just accumulate the stream
into the buffers and resume the processing later.
Related to #5297.
To avoid copying buffers to create a large Redis Object that
exceeds PROTO_IOBUF_LEN (32KB), we just read the remaining data
we need, which may be less than PROTO_IOBUF_LEN. But the remaining
length may be zero, if bulklen+2 equals sdslen(c->querybuf),
e.g. in the CLIENT PAUSE context.
For example:
Time1:
python
>>> import os, socket
>>> server="127.0.0.1"
>>> port=6379
>>> data1="*3\r\n$3\r\nset\r\n$1\r\na\r\n$33000\r\n"
>>> data2="".join("x" for _ in range(33000)) + "\r\n"
>>> data3="\n\n"
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.settimeout(10)
>>> s.connect((server, port))
>>> s.send(data1)
28
Time2:
redis-cli client pause 10000
Time3:
>>> s.send(data2)
33002
>>> s.send(data3)
2
>>> s.send(data3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
socket.error: [Errno 104] Connection reset by peer
To fix that, we should check if remaining is greater than zero.
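A sketch of what the check could look like in readQueryFromClient (illustrative, close to but not necessarily identical to the actual patch; readlen is the amount we were about to read):

/* When a big bulk is being read, only ask for the bytes still missing. */
ssize_t remaining = (ssize_t)(c->bulklen+2) - (ssize_t)sdslen(c->querybuf);
/* 'remaining' may be zero, e.g. when the bulk was already fully buffered
 * while the client was paused, so guard against a zero-length read. */
if (remaining > 0 && remaining < readlen) readlen = remaining;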
The function setProtocolError just records protocol error
details in the server log and sets the client as CLIENT_CLOSE_AFTER_REPLY.
It doesn't care about the querybuf sdsrange, because we
will do it after protocol parsing.
This is an optimization for pipeline processing; we discussed a
problem in issue #5229: clients may be paused by the `CLIENT
PAUSE` command, and then querybuf may grow too large, so the cost of
the memmove in sdsrange after parsing each completed command becomes
horrible. The optimization is to parse all the commands in querybuf
first, and after that call sdsrange only once (see the sketch below).
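A rough sketch of the idea; parseNextCommand() below is a hypothetical helper standing in for the inline/multibulk parsers:

size_t qb_pos = 0; /* how far we have parsed into querybuf */
while (qb_pos < sdslen(c->querybuf)) {
    /* Advance qb_pos past one complete command, or stop if incomplete. */
    if (parseNextCommand(c, &qb_pos) != C_OK) break;
    processCommand(c);
}
/* Trim everything we consumed with a single memmove instead of one per command. */
if (qb_pos) sdsrange(c->querybuf, qb_pos, -1);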
The slave sends \n keepalive messages to the master while parsing the rdb,
and later sends REPLCONF ACK once a second. Rarely, the master receives both
a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n...
and it tries to trim two chars (\r\n) from the query buffer,
trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n...
Then the master tries to process a command starting with '3' and replies to
the slave with a bunch of -ERR and one +OK.
Although the slave silently ignores these (it prints a log message), this corrupts
the replication offset at the slave, since the slave increases the replication
offset and the master does not.
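A sketch of the kind of fix applied in processInlineBuffer (illustrative, details may differ from the real code): consume exactly the line plus its own terminator instead of always assuming a trailing \r\n.

char *newline = strchr(c->querybuf, '\n');
if (newline == NULL) return C_ERR;        /* wait for more data */
int linefeed_chars = 1;                   /* the '\n' itself */
if (newline != c->querybuf && *(newline-1) == '\r') {
    newline--;
    linefeed_chars++;                     /* there really was a '\r' too */
}
size_t querylen = newline - c->querybuf;  /* payload length of the line */
/* Trim only what this line actually used, so a lone '\n' keepalive never
 * eats the first byte of the next command. */
sdsrange(c->querybuf, querylen + linefeed_chars, -1);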
Other than the fix in processInlineBuffer, I made several other improvements
while hunting this very rare bug.
- when Redis replies with "unknown command" it now includes a portion of the
arguments, not just the command name, so it is easier to understand
what was received. In my case, on the slave side, it was -ERR, but
the "arguments" were the interesting part (containing info on the error).
- about a year ago I added code in addReplyErrorLength to print the error to
the log in case of a reply to the master (since this string isn't actually
transmitted to the master); now I changed that block to print a similar log
message to indicate an error being sent from the master to the slave.
Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received,
so this will not cause any harm for REPLCONF, and will only indicate problems
that are going to corrupt the replication stream anyway.
- in two places where c->reply was emptied, I wanted to reset sentlen.
This is a precaution (I did not actually see such a problem), since a
non-zero sentlen would cause corruption to be transmitted on the socket.
Reading the PR gave me the opportunity to better specify what the code
was doing in places where I was not immediately sure about what was
going on. Moreover I documented the structure in server.h so that people
reading the header file will immediately understand what the structure
is useful for.
A) slave buffers didn't count internal fragmentation and sds unused space,
which caused them to induce eviction although we didn't mean for it.
B) slave buffers were consuming about twice the memory of what they actually needed.
- this was mainly due to sdsMakeRoomFor growing to twice as much as needed each time
but networking.c not storing more than 16k (partially fixed recently in 237a38737).
- besides, it wasn't able to store half of the new string into one buffer and the
other half into the next (so the above-mentioned fix helped mainly for small items).
- lastly, the sds buffers had up to 30% internal fragmentation that was wasted,
consumed but not used.
C) inefficient performance due to starting from a small string and reallocing many times.
what I changed:
- creating dedicated buffers for the reply list, counting their size with zmalloc_size
- when creating a new reply node, preallocate it to at least 16k.
- when appending a new reply to the buffer, first fill all the unused space of the
previous node before starting a new one.
other changes:
- expose mem_not_counted_for_evict info field for the benefit of the test suite
- add a test to make sure slave buffers are counted correctly and that they don't cause eviction
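The dedicated reply node could look roughly like this (a sketch; field names are assumptions based on the description above):

typedef struct clientReplyBlock {
    size_t size;   /* usable bytes in buf[], taken from zmalloc_size() so
                    * accounting includes what was really allocated */
    size_t used;   /* bytes of buf[] already holding reply data; a later
                    * reply first fills size-used before a new node is made */
    char buf[];    /* the reply bytes themselves */
} clientReplyBlock;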
Now you can use:
addReplyError("-MYERRORCODE some message");
If the error code is omitted, the behavior is like in the past:
the generic -ERR will be used.
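On the wire (RESP), the two forms would look roughly like this (assuming the usual client argument):

addReplyError(c, "-MYERRORCODE some message");  /* client receives: -MYERRORCODE some message\r\n */
addReplyError(c, "some message");               /* client receives: -ERR some message\r\n */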