Previously, when getNodeByQuery() was called, we relied on the global
cluster state to make sure all the hash slots were linked to some node,
so an unbound hash slot was only checked with an assertion. However
this is fragile: the cluster state is often updated in the
clusterBeforeSleep() function, and not immediately on state change, so
we may end up processing clients with a cluster state that is 'ok' but
with certain hash slots still set to NULL.
With this commit the condition is also checked in getNodeByQuery() and
reported with the same -CLUSTERDOWN error code but a slightly different
error message, so that we have more debugging clues in the future.
Root cause of issue #2288.
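As an illustration only, here is a minimal standalone C sketch of the
idea (hypothetical names and types, not the actual Redis cluster code):
routing refuses to serve a key whose hash slot has no owner, even when
the cached cluster state still reads 'ok'.

    #include <stdio.h>

    #define NUM_SLOTS 16384

    typedef struct { const char *name; } node;

    static node *slots[NUM_SLOTS];   /* slot -> owning node, may be NULL */
    static int cluster_state_ok = 1; /* state flag, updated lazily elsewhere */

    /* Return the node serving 'slot', or NULL plus an error string; the
     * unbound-slot case gets its own message for easier debugging. */
    static node *node_for_slot(int slot, const char **err) {
        if (!cluster_state_ok) {
            *err = "-CLUSTERDOWN The cluster is down";
            return NULL;
        }
        if (slots[slot] == NULL) {
            *err = "-CLUSTERDOWN Hash slot not served";
            return NULL;
        }
        return slots[slot];
    }

    int main(void) {
        node a = {"node-a"};
        const char *err = NULL;
        slots[100] = &a;
        node *n = node_for_slot(100, &err);
        printf("slot 100 -> %s\n", n ? n->name : err);
        n = node_for_slot(200, &err); /* unbound slot, state still "ok" */
        printf("slot 200 -> %s\n", n ? n->name : err);
        return 0;
    }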
Not perfect, since the real solution IMHO is to have a DSL with a table
of configuration functions with type, limits, and aux functions to
handle the odd ones. However this hacky macro solution is already
better and forces us to declare range limits for numerical fields.
More field types to be refactored in the next commits hopefully.
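For illustration, here is a hypothetical standalone sketch of the macro
pattern (names and ranges invented, not the real CONFIG SET code): every
numerical field has to declare a min/max, and out-of-range values are
rejected.

    #include <stdio.h>
    #include <stdlib.h>
    #include <strings.h>

    static long long timeout, tcp_backlog;  /* example numeric config fields */

    /* The macro expands to an "else if" branch, so consecutive uses chain
     * together and force a range to be declared for every numeric field. */
    #define config_set_numerical_field(_name,_var,_min,_max) \
        else if (!strcasecmp(name,_name)) { \
            long long ll = strtoll(value,NULL,10); \
            if (ll < (_min) || ll > (_max)) return 0; \
            (_var) = ll; \
        }

    static int config_set(const char *name, const char *value) {
        if (0) { /* dummy head so each macro branch can be an "else if" */ }
        config_set_numerical_field("timeout", timeout, 0, 100000)
        config_set_numerical_field("tcp-backlog", tcp_backlog, 0, 65535)
        else return 0;  /* unknown parameter */
        return 1;
    }

    int main(void) {
        printf("%d\n", config_set("timeout", "300"));     /* 1: accepted */
        printf("%d\n", config_set("timeout", "-5"));      /* 0: out of range */
        printf("%d\n", config_set("tcp-backlog", "511")); /* 1: accepted */
        printf("timeout=%lld tcp-backlog=%lld\n", timeout, tcp_backlog);
        return 0;
    }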
Should be much faster, and regardless, the code is now more obvious
compared to generating a string just to get the return value of the
ll2string() function.
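As a toy standalone comparison (illustrative only, not the actual hash
type code): counting the digits of an integer-encoded value directly
versus formatting it into a buffer just to measure the result.

    #include <stdio.h>

    /* Count decimal digits directly, with no intermediate string. */
    static unsigned int digits10(unsigned long long v) {
        unsigned int n = 1;
        while (v >= 10) { v /= 10; n++; }
        return n;
    }

    /* The old-style approach: render to a buffer, then take the length. */
    static unsigned int digits_via_format(unsigned long long v) {
        char buf[32];
        return (unsigned int)snprintf(buf, sizeof(buf), "%llu", v);
    }

    int main(void) {
        unsigned long long v = 1234567;
        printf("direct=%u via-format=%u\n", digits10(v), digits_via_format(v));
        return 0;
    }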
1. HVSTRLEN -> HSTRLEN. It's unlikely one needs the length of the key;
it's not clear how such an API would work (by value does not make
sense), and there will be better names anyway.
2. Default is to return 0 when the field is missing.
3. Default is to return 0 when the key is missing (both defaults are
sketched below).
4. The implementation was slower than needed, and produced unnecessary COW.
Related issue #2415.
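To make points 2 and 3 above concrete, a minimal standalone sketch
(hypothetical types and helper names, not the Redis hash implementation):
the length query answers 0 both for a missing key and for a missing
field.

    #include <stdio.h>
    #include <string.h>

    typedef struct { const char *field, *value; } entry;
    typedef struct { entry *entries; int used; } hashobj;

    /* Length of the value stored at 'field'; 0 for a missing key or field. */
    static size_t field_strlen(hashobj *h, const char *field) {
        if (h == NULL) return 0;                 /* missing key   -> 0 */
        for (int i = 0; i < h->used; i++)
            if (strcmp(h->entries[i].field, field) == 0)
                return strlen(h->entries[i].value);
        return 0;                                /* missing field -> 0 */
    }

    int main(void) {
        entry e[] = {{"name", "redis"}};
        hashobj h = {e, 1};
        printf("%zu %zu %zu\n",
               field_strlen(&h, "name"),    /* 5 */
               field_strlen(&h, "other"),   /* 0: field missing  */
               field_strlen(NULL, "name")); /* 0: key missing    */
        return 0;
    }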
1. Remove useless "cs" initialization.
2. Add a "select" var to capture a condition checked multiple times.
3. Avoid duplication of the same if (!copy) conditional.
4. Don't increment dirty if copy is given (no deletion is performed),
otherwise we would propagate MIGRATE when not needed (see the sketch
below).
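Point 4 above as a standalone sketch (hypothetical names, not the actual
MIGRATE implementation): with the COPY option the source key is kept, so
the dataset does not change locally and the dirty counter must not grow,
which is what would trigger propagation.

    #include <stdio.h>

    static long long dirty = 0;        /* stands in for server.dirty */

    /* Finish a migration: without COPY the local key is deleted and the
     * operation counts as a change; with COPY nothing changed locally. */
    static void migrate_finish(int copy, int *key_present) {
        if (!copy) {
            *key_present = 0;          /* delete the source key */
            dirty++;                   /* only now should MIGRATE propagate */
        }
    }

    int main(void) {
        int key = 1;
        migrate_finish(1, &key);       /* COPY: key stays, dirty unchanged */
        printf("copy: key=%d dirty=%lld\n", key, dirty);
        migrate_finish(0, &key);       /* no COPY: key removed, dirty bumped */
        printf("move: key=%d dirty=%lld\n", key, dirty);
        return 0;
    }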
It's hard to pick a good approach here. A few arguments:
1) There are many exposed instances on the internet.
2) Changing the default when "bind" is not given is very dangerous:
after an upgrade the server changes a fundamental behavior.
3) Usually Redis, when used in a proper way, will be protected *and*
often accessed from other computers, so this new default is likely
not what most people want.
4) However if users end up with this default, they are using the
example redis.conf: likely they are reading what is inside, and
they'll see the warning.