# Helper functions to simulate search-in-radius on the Tcl side in order to
# verify the Redis implementation with a fuzzy test.
proc geo_degrad deg {expr {$deg * (atan(1) * 8 / 360)}}
proc geo_raddeg rad {expr {$rad / (atan(1) * 8 / 360)}}
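
# Quick sanity check (illustrative only, disabled with `if {0}` so the suite
# never runs it): atan(1)*8 is 2*pi, so converting 180 degrees yields pi and
# converting the result back yields 180 again.
if {0} {
    puts [geo_degrad 180]              ;# ~3.141592653589793
    puts [geo_raddeg [geo_degrad 180]] ;# ~180.0
}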
proc geo_distance {lon1d lat1d lon2d lat2d} {
    set lon1r [geo_degrad $lon1d]
    set lat1r [geo_degrad $lat1d]
    set lon2r [geo_degrad $lon2d]
    set lat2r [geo_degrad $lat2d]
    set v [expr {sin(($lon2r - $lon1r) / 2)}]
    set u [expr {sin(($lat2r - $lat1r) / 2)}]
    expr {2.0 * 6372797.560856 * \
        asin(sqrt($u * $u + cos($lat1r) * cos($lat2r) * $v * $v))}
}
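
# Usage sketch (illustrative only, disabled with `if {0}`): geo_distance
# returns meters, so Palermo (13.361389 38.115556) to Catania
# (15.087269 37.502669) comes out at roughly 166274 m, the same figure
# asserted by the GEODIST test further below.
if {0} {
    set meters [geo_distance 13.361389 38.115556 15.087269 37.502669]
    puts "Palermo-Catania: $meters m" ;# expected to be roughly 166274
}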
proc geo_random_point {lonvar latvar} {
    upvar 1 $lonvar lon
    upvar 1 $latvar lat
    # Note that the actual latitude limit should be -85 to +85, we restrict
    # the test to -70 to +70 since in this range the algorithm is more precise
    # while outside this range occasionally some element may be missing.
    set lon [expr {-180 + rand() * 360}]
    set lat [expr {-70 + rand() * 140}]
}
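
# Usage sketch (illustrative only, disabled with `if {0}`): the proc writes
# into the caller's variables via upvar, so it is invoked with variable
# names, not values.
if {0} {
    geo_random_point lon lat
    puts "random point: $lon $lat" ;# lon in [-180,180), lat in [-70,70)
}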
# Return the elements not common to both lists.
# This code is from http://wiki.tcl.tk/15489
proc compare_lists {List1 List2} {
    set DiffList {}
    foreach Item $List1 {
        if {[lsearch -exact $List2 $Item] == -1} {
            lappend DiffList $Item
        }
    }
    foreach Item $List2 {
        if {[lsearch -exact $List1 $Item] == -1} {
            if {[lsearch -exact $DiffList $Item] == -1} {
                lappend DiffList $Item
            }
        }
    }
    return $DiffList
}
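
# Usage sketch (illustrative only, disabled with `if {0}`): compare_lists
# returns the symmetric difference, i.e. the elements present in exactly one
# of the two lists.
if {0} {
    puts [compare_lists {a b c} {b c d}] ;# prints "a d"
}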
# Return true if the point is inside the circle.
# search_lon and search_lat define the center of the circle,
# and lon, lat define the point being searched.
proc pointInCircle {radius_km lon lat search_lon search_lat} {
    set radius_m [expr {$radius_km * 1000}]
    set distance [geo_distance $lon $lat $search_lon $search_lat]
    if {$distance < $radius_m} {
        return true
    }
    return false
}
# Return true if the point is inside the rectangle.
# search_lon and search_lat define the center of the rectangle,
# and lon, lat define the point being searched.
# error: a scaling factor applied to the rectangle's width and height in
# order to tolerate the precision error of the search.
proc pointInRectangle {width_km height_km lon lat search_lon search_lat error} {
    set width_m [expr {$width_km * 1000 * $error / 2}]
    set height_m [expr {$height_km * 1000 * $error / 2}]
    set lon_distance [geo_distance $lon $lat $search_lon $lat]
    set lat_distance [geo_distance $lon $lat $lon $search_lat]

    if {$lon_distance > $width_m || $lat_distance > $height_m} {
        return false
    }
    return true
}
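
# Usage sketch (illustrative only, disabled with `if {0}`): the point must lie
# within error * width/2 meters of the center longitude-wise and within
# error * height/2 meters latitude-wise. With error 1.0 the accepted box is
# exactly width_km x height_km; a larger error widens it.
if {0} {
    # A point roughly 1 km east of the center easily fits a 10x10 km box.
    puts [pointInRectangle 10 10 13.372 38.115556 13.361389 38.115556 1.0]
}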
proc verify_geo_edge_response_bylonlat {expected_response expected_store_response} {
    catch {r georadius src{t} 1 1 1 km} response
    assert_match $expected_response $response

    catch {r georadius src{t} 1 1 1 km store dest{t}} response
    assert_match $expected_store_response $response

    catch {r geosearch src{t} fromlonlat 0 0 byradius 1 km} response
    assert_match $expected_response $response

    catch {r geosearchstore dest{t} src{t} fromlonlat 0 0 byradius 1 km} response
    assert_match $expected_store_response $response
}
proc verify_geo_edge_response_bymember {expected_response expected_store_response} {
    catch {r georadiusbymember src{t} member 1 km} response
    assert_match $expected_response $response

    catch {r georadiusbymember src{t} member 1 km store dest{t}} response
    assert_match $expected_store_response $response

    catch {r geosearch src{t} frommember member bybox 1 1 km} response
    assert_match $expected_response $response

    catch {r geosearchstore dest{t} src{t} frommember member bybox 1 1 m} response
    assert_match $expected_store_response $response
}
proc verify_geo_edge_response_generic {expected_response} {
    catch {r geodist src{t} member 1 km} response
    assert_match $expected_response $response

    catch {r geohash src{t} member} response
    assert_match $expected_response $response

    catch {r geopos src{t} member} response
    assert_match $expected_response $response
}
# The following list represents sets of random seed, search position
# and radius that caused bugs in the past. It is used by the randomized
# test later as a starting point. When the regression vectors are scanned
# the code reverts to using random data.
#
# The format is: seed km lon lat
set regression_vectors {
    {1482225976969 7083 81.634948934258375 30.561509253718668}
    {1482340074151 5416 -70.863281847379767 -46.347003465679947}
    {1499014685896 6064 -89.818768962202014 -40.463868561416803}
    {1412 156 149.29737817929004 15.95807862745508}
    {441574 143 59.235461856813856 66.269555127373678}
    {160645 187 -101.88575239939883 49.061997951502917}
    {750269 154 -90.187939661642517 66.615930412251487}
    {342880 145 163.03472387745728 64.012747720821181}
    {729955 143 137.86663517256579 63.986745399416776}
    {939895 151 59.149620271823181 65.204186651485145}
    {1412 156 149.29737817929004 15.95807862745508}
    {564862 149 84.062063109158544 -65.685403922426232}
    {1546032440391 16751 -1.8175081637769495 20.665668878082954}
}
set rv_idx 0
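
# Illustrative sketch (disabled with `if {0}`, and only an assumption about
# how the vectors could be consumed -- the real fuzzy test lives further down
# in this file): each iteration would pick the next regression vector while
# any remain, seeding the PRNG and the query from it, and fall back to random
# data afterwards. randomInt stands in for the suite's random-integer helper.
if {0} {
    if {$rv_idx < [llength $regression_vectors]} {
        lassign [lindex $regression_vectors $rv_idx] seed radius_km search_lon search_lat
        incr rv_idx
        expr {srand($seed)}
    } else {
        set radius_km [expr {[randomInt 200] + 10}]
        geo_random_point search_lon search_lat
    }
}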
start_server {tags {"geo"}} {
    test {GEO with wrong type src key} {
        r set src{t} wrong_type

        verify_geo_edge_response_bylonlat "WRONGTYPE*" "WRONGTYPE*"
        verify_geo_edge_response_bymember "WRONGTYPE*" "WRONGTYPE*"
        verify_geo_edge_response_generic "WRONGTYPE*"
}
    test {GEO with non existing src key} {
        r del src{t}

        verify_geo_edge_response_bylonlat {} 0
        verify_geo_edge_response_bymember {} 0
    }
    test {GEO BYLONLAT with empty search} {
        r del src{t}
        r geoadd src{t} 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"

        verify_geo_edge_response_bylonlat {} 0
    }
    test {GEO BYMEMBER with non existing member} {
        r del src{t}
        r geoadd src{t} 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"

        verify_geo_edge_response_bymember "ERR *" "ERR *"
    }
    test {GEOADD create} {
        r geoadd nyc -73.9454966 40.747533 "lic market"
    } {1}

    test {GEOADD update} {
        r geoadd nyc -73.9454966 40.747533 "lic market"
    } {0}
    test {GEOADD update with CH option} {
        assert_equal 1 [r geoadd nyc CH 40.747533 -73.9454966 "lic market"]
        lassign [lindex [r geopos nyc "lic market"] 0] x1 y1
        assert {abs($x1) - 40.747 < 0.001}
        assert {abs($y1) - 73.945 < 0.001}
    } {}
    test {GEOADD update with NX option} {
        assert_equal 0 [r geoadd nyc NX -73.9454966 40.747533 "lic market"]
        lassign [lindex [r geopos nyc "lic market"] 0] x1 y1
        assert {abs($x1) - 40.747 < 0.001}
        assert {abs($y1) - 73.945 < 0.001}
    } {}
    test {GEOADD update with XX option} {
        assert_equal 0 [r geoadd nyc XX -83.9454966 40.747533 "lic market"]
        lassign [lindex [r geopos nyc "lic market"] 0] x1 y1
        assert {abs($x1) - 83.945 < 0.001}
        assert {abs($y1) - 40.747 < 0.001}
    } {}
    test {GEOADD update with CH NX option} {
        r geoadd nyc CH NX -73.9454966 40.747533 "lic market"
    } {0}

    test {GEOADD update with CH XX option} {
        r geoadd nyc CH XX -73.9454966 40.747533 "lic market"
    } {1}
    test {GEOADD update with XX NX option will return syntax error} {
        catch {
            r geoadd nyc xx nx -73.9454966 40.747533 "lic market"
        } err
        set err
    } {ERR *syntax*}
    test {GEOADD update with invalid option} {
        catch {
            r geoadd nyc ch xx foo -73.9454966 40.747533 "lic market"
        } err
        set err
    } {ERR *syntax*}
    test {GEOADD invalid coordinates} {
        catch {
            r geoadd nyc -73.9454966 40.747533 "lic market" \
                foo bar "luck market"
        } err
        set err
    } {*valid*}
    test {GEOADD multi add} {
        r geoadd nyc -73.9733487 40.7648057 "central park n/q/r" -73.9903085 40.7362513 "union square" -74.0131604 40.7126674 "wtc one" -73.7858139 40.6428986 "jfk" -73.9375699 40.7498929 "q4" -73.9564142 40.7480973 4545
    } {6}
    test {Check geoset values} {
        r zrange nyc 0 -1 withscores
    } {{wtc one} 1791873972053020 {union square} 1791875485187452 {central park n/q/r} 1791875761332224 4545 1791875796750882 {lic market} 1791875804419201 q4 1791875830079666 jfk 1791895905559723}
    test {GEORADIUS simple (sorted)} {
        r georadius nyc -73.9798091 40.7598464 3 km asc
    } {{central park n/q/r} 4545 {union square}}
    test {GEORADIUS_RO simple (sorted)} {
        r georadius_ro nyc -73.9798091 40.7598464 3 km asc
    } {{central park n/q/r} 4545 {union square}}
    test {GEOSEARCH simple (sorted)} {
        r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km asc
    } {{central park n/q/r} 4545 {union square} {lic market}}
    test {GEOSEARCH FROMLONLAT and FROMMEMBER cannot exist at the same time} {
        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 frommember xxx bybox 6 6 km asc} e
        set e
    } {ERR *syntax*}
    test {GEOSEARCH FROMLONLAT and FROMMEMBER one must exist} {
        catch {r geosearch nyc bybox 3 3 km asc desc withhash withdist withcoord} e
        set e
    } {ERR *exactly one of FROMMEMBER or FROMLONLAT*}
    test {GEOSEARCH BYRADIUS and BYBOX cannot exist at the same time} {
        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 byradius 3 km bybox 3 3 km asc} e
        set e
    } {ERR *syntax*}
    test {GEOSEARCH BYRADIUS and BYBOX one must exist} {
        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 asc desc withhash withdist withcoord} e
        set e
    } {ERR *exactly one of BYRADIUS and BYBOX*}
    test {GEOSEARCH with STOREDIST option} {
        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km asc storedist} e
        set e
    } {ERR *syntax*}
    test {GEORADIUS withdist (sorted)} {
        r georadius nyc -73.9798091 40.7598464 3 km withdist asc
    } {{{central park n/q/r} 0.7750} {4545 2.3651} {{union square} 2.7697}}
    test {GEOSEARCH withdist (sorted)} {
        r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km withdist asc
    } {{{central park n/q/r} 0.7750} {4545 2.3651} {{union square} 2.7697} {{lic market} 3.1991}}
    test {GEORADIUS with COUNT} {
        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3
    } {{central park n/q/r} 4545 {union square}}
    test {GEORADIUS with multiple WITH* tokens} {
        assert_match {{{central park n/q/r} 1791875761332224 {-73.97334* 40.76480*}} {4545 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHCOORD WITHHASH COUNT 2]
        assert_match {{{central park n/q/r} 1791875761332224 {-73.97334* 40.76480*}} {4545 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHHASH WITHCOORD COUNT 2]
        assert_match {{{central park n/q/r} 0.7750 1791875761332224 {-73.97334* 40.76480*}} {4545 2.3651 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHDIST WITHHASH WITHCOORD COUNT 2]
    }
    test {GEORADIUS with ANY not sorted by default} {
        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3 ANY
    } {{wtc one} {union square} {central park n/q/r}}

    test {GEORADIUS with ANY sorted by ASC} {
        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3 ANY ASC
    } {{central park n/q/r} {union square} {wtc one}}

    test {GEORADIUS with ANY but no COUNT} {
        catch {r georadius nyc -73.9798091 40.7598464 10 km ANY ASC} e
        set e
    } {ERR *ANY*requires*COUNT*}
    test {GEORADIUS with COUNT but missing integer argument} {
        catch {r georadius nyc -73.9798091 40.7598464 10 km COUNT} e
        set e
    } {ERR *syntax*}
    test {GEORADIUS with COUNT DESC} {
        r georadius nyc -73.9798091 40.7598464 10 km COUNT 2 DESC
    } {{wtc one} q4}
    test {GEORADIUS HUGE, issue #2767} {
        r geoadd users -47.271613776683807 -54.534504198047678 user_000000
        llength [r GEORADIUS users 0 0 50000 km WITHCOORD]
    } {1}
    test {GEORADIUSBYMEMBER simple (sorted)} {
        r georadiusbymember nyc "wtc one" 7 km
    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market}}
    test {GEORADIUSBYMEMBER_RO simple (sorted)} {
        r georadiusbymember_ro nyc "wtc one" 7 km
    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market}}
    test {GEORADIUSBYMEMBER search areas contain satisfied points in oblique direction} {
        r del k1

        r geoadd k1 -0.15307903289794921875 85 n1 0.3515625 85.00019260486917005437 n2
        set ret1 [r GEORADIUSBYMEMBER k1 n1 4891.94 m]
        assert_equal $ret1 {n1 n2}

        r zrem k1 n1 n2
        r geoadd k1 -4.95211958885192871094 85 n3 11.25 85.0511 n4
        set ret2 [r GEORADIUSBYMEMBER k1 n3 156544 m]
        assert_equal $ret2 {n3 n4}

        r zrem k1 n3 n4
        r geoadd k1 -45 65.50900022111811438208 n5 90 85.0511 n6
        set ret3 [r GEORADIUSBYMEMBER k1 n5 5009431 m]
        assert_equal $ret3 {n5 n6}
    }
    test {GEORADIUSBYMEMBER crossing pole search} {
        r del k1
        r geoadd k1 45 65 n1 -135 85.05 n2
        set ret [r GEORADIUSBYMEMBER k1 n1 5009431 m]
        assert_equal $ret {n1 n2}
    }
    test {GEOSEARCH FROMMEMBER simple (sorted)} {
        r geosearch nyc frommember "wtc one" bybox 14 14 km
    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market} q4}
    test {GEOSEARCH vs GEORADIUS} {
        r del Sicily
        r geoadd Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
        r geoadd Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "eage2"
        set ret1 [r georadius Sicily 15 37 200 km asc]
        assert_equal $ret1 {Catania Palermo}
        set ret2 [r geosearch Sicily fromlonlat 15 37 bybox 400 400 km asc]
        assert_equal $ret2 {Catania Palermo eage2 edge1}
    }
    test {GEOSEARCH non square, long and narrow} {
        r del Sicily
        r geoadd Sicily 12.75 36.995 "test1"
        r geoadd Sicily 12.75 36.50 "test2"
        r geoadd Sicily 13.00 36.50 "test3"

        # box height=2km width=400km
        set ret1 [r geosearch Sicily fromlonlat 15 37 bybox 400 2 km]
        assert_equal $ret1 {test1}

        # Add a western Hemisphere point
        r geoadd Sicily -1 37.00 "test3"
        set ret2 [r geosearch Sicily fromlonlat 15 37 bybox 3000 2 km asc]
        assert_equal $ret2 {test1 test3}
    }
    test {GEOSEARCH corner point test} {
        r del Sicily
        r geoadd Sicily 12.758489 38.788135 edge1 17.241510 38.788135 edge2 17.250000 35.202000 edge3 12.750000 35.202000 edge4 12.748489955781654 37 edge5 15 38.798135872540925 edge6 17.251510044218346 37 edge7 15 35.201864127459075 edge8 12.692799634687903 38.798135872540925 corner1 12.692799634687903 38.798135872540925 corner2 17.200560937451133 35.201864127459075 corner3 12.799439062548865 35.201864127459075 corner4
        set ret [lsort [r geosearch Sicily fromlonlat 15 37 bybox 400 400 km asc]]
        assert_equal $ret {edge1 edge2 edge5 edge7}
    }
    test {GEORADIUSBYMEMBER withdist (sorted)} {
        r georadiusbymember nyc "wtc one" 7 km withdist
    } {{{wtc one} 0.0000} {{union square} 3.2544} {{central park n/q/r} 6.7000} {4545 6.1975} {{lic market} 6.8969}}
    test {GEOHASH is able to return geohash strings} {
        # Example from Wikipedia.
        r del points
        r geoadd points -5.6 42.6 test
        lindex [r geohash points test] 0
    } {ezs42e44yx0}
    test {GEOHASH with only key as argument} {
        r del points
        r geoadd points 10 20 a 30 40 b
        set result [r geohash points]
        assert {$result eq {}}
    }
    test {GEOPOS simple} {
        r del points
        r geoadd points 10 20 a 30 40 b
        lassign [lindex [r geopos points a b] 0] x1 y1
        lassign [lindex [r geopos points a b] 1] x2 y2
        assert {abs($x1 - 10) < 0.001}
        assert {abs($y1 - 20) < 0.001}
        assert {abs($x2 - 30) < 0.001}
        assert {abs($y2 - 40) < 0.001}
    }

    test {GEOPOS missing element} {
        r del points
        r geoadd points 10 20 a 30 40 b
        lindex [r geopos points a x b] 1
    } {}
    test {GEOPOS with only key as argument} {
        r del points
        r geoadd points 10 20 a 30 40 b
        set result [r geopos points]
        assert {$result eq {}}
    }
    test {GEODIST simple & unit} {
        r del points
        r geoadd points 13.361389 38.115556 "Palermo" \
                        15.087269 37.502669 "Catania"
        set m [r geodist points Palermo Catania]
        assert {$m > 166274 && $m < 166275}
        set km [r geodist points Palermo Catania km]
        assert {$km > 166.2 && $km < 166.3}
        set dist [r geodist points Palermo Palermo]
        assert {$dist eq 0.0000}
    }
test {GEODIST missing elements} {
    r del points
    r geoadd points 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
    set m [r geodist points Palermo Agrigento]
    assert {$m eq {}}
    set m [r geodist points Ragusa Agrigento]
    assert {$m eq {}}
    set m [r geodist empty_key Palermo Catania]
    assert {$m eq {}}
}
test {GEORADIUS STORE option: syntax error} {
    r del points{t}
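    # The {t} hash tags force the source and destination keys into the same
    # cluster slot, so these multi-key STORE tests can also run against
    # Redis Cluster.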
    r geoadd points{t} 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
    catch {r georadius points{t} 13.361389 38.115556 50 km store} e
    set e
} {*ERR*syntax*}
test {GEOSEARCHSTORE STORE option: syntax error} {
    catch {r geosearchstore abc{t} points{t} fromlonlat 13.361389 38.115556 byradius 50 km store abc{t}} e
    set e
} {*ERR*syntax*}
test {GEORANGE STORE option: incompatible options} {
    r del points{t}
    r geoadd points{t} 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
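    # STORE produces a plain sorted set in the destination key, so it cannot be
    # combined with the WITHCOORD/WITHDIST/WITHHASH reply modifiers.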
    catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withdist} e
    assert_match {*ERR*} $e
    catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withhash} e
    assert_match {*ERR*} $e
    catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withcoords} e
    assert_match {*ERR*} $e
}
test {GEORANGE STORE option: plain usage} {
    r del points{t}
    r geoadd points{t} 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
    r georadius points{t} 13.361389 38.115556 500 km store points2{t}
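    # Plain STORE saves the matching members with their raw geohash scores
    # (as GEOADD would), so the destination sorted set should match the source.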
    assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]
}
test {GEORADIUSBYMEMBER STORE/STOREDIST option: plain usage} {
    r del points{t}
    r geoadd points{t} 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
    r georadiusbymember points{t} Palermo 500 km store points2{t}
    assert_equal {Palermo Catania} [r zrange points2{t} 0 -1]
    r georadiusbymember points{t} Catania 500 km storedist points2{t}
    assert_equal {Catania Palermo} [r zrange points2{t} 0 -1]
    set res [r zrange points2{t} 0 -1 withscores]
    assert {[lindex $res 1] < 1}
    assert {[lindex $res 3] > 166}
}
test {GEOSEARCHSTORE STORE option: plain usage} {
    r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km
    assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]
}
test {GEORANGE STOREDIST option: plain usage} {
    r del points{t}
    r geoadd points{t} 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
    r georadius points{t} 13.361389 38.115556 500 km storedist points2{t}
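    # STOREDIST stores each member's distance from the center as its sorted set
    # score, expressed in the query's unit (km here): ~0 for Palermo and
    # ~166.27 for Catania.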
    set res [r zrange points2{t} 0 -1 withscores]
    assert {[lindex $res 1] < 1}
    assert {[lindex $res 3] > 166}
    assert {[lindex $res 3] < 167}
}
test {GEOSEARCHSTORE STOREDIST option: plain usage} {
    r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km storedist
    set res [r zrange points2{t} 0 -1 withscores]
    assert {[lindex $res 1] < 1}
    assert {[lindex $res 3] > 166}
    assert {[lindex $res 3] < 167}
}
test {GEORANGE STOREDIST option: COUNT ASC and DESC} {
    r del points{t}
    r geoadd points{t} 13.361389 38.115556 "Palermo" \
        15.087269 37.502669 "Catania"
    r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} asc count 1
    assert {[r zcard points2{t}] == 1}
    set res [r zrange points2{t} 0 -1 withscores]
    assert {[lindex $res 0] eq "Palermo"}
    r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} desc count 1
    assert {[r zcard points2{t}] == 1}
    set res [r zrange points2{t} 0 -1 withscores]
    assert {[lindex $res 0] eq "Catania"}
}
test {GEOSEARCH the box spans -180° or 180°} {
    r del points
    r geoadd points 179.5 36 point1
    r geoadd points -179.5 36 point2
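    # The two points are one degree of longitude apart across the antimeridian
    # (~90 km at this latitude), so both fall inside a 400x400 km box whether
    # the center is at longitude 179 or -179; ASC returns the closer one first.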
    assert_equal {point1 point2} [r geosearch points fromlonlat 179 37 bybox 400 400 km asc]
    assert_equal {point2 point1} [r geosearch points fromlonlat -179 37 bybox 400 400 km asc]
}
test {GEOSEARCH with small distance} {
    r del points
    r geoadd points -122.407107 37.794300 1
    r geoadd points -122.227336 37.794300 2
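    # Member 1 is the search center itself: the stored coordinates are
    # quantized, so its distance is tiny but non-zero and is rendered as 0.0001
    # at the four-decimal precision used by WITHDIST (miles here, matching the
    # radius unit).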
    assert_equal {{1 0.0001} {2 9.8182}} [r GEORADIUS points -122.407107 37.794300 30 mi ASC WITHDIST]
}
foreach {type} {byradius bybox} {
    test "GEOSEARCH fuzzy test - $type" {
        if {$::accurate} {set attempt 300} else {set attempt 30}
        while {[incr attempt -1]} {
            set rv [lindex $regression_vectors $rv_idx]
            incr rv_idx
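            # Each regression vector is a {srand_seed radius lon lat} tuple
            # taken from a previously failing run; the vectors are consumed
            # first, after which rv is empty and the parameters below are
            # picked at random.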
            set radius_km 0; set width_km 0; set height_km 0
            unset -nocomplain debuginfo
            set srand_seed [clock milliseconds]
            if {$rv ne {}} {set srand_seed [lindex $rv 0]}
            lappend debuginfo "srand_seed is $srand_seed"
            expr {srand($srand_seed)} ;# If you need a reproducible run
            r del mypoints
            if {[randomInt 10] == 0} {
                # From time to time use very big radii
                if {$type == "byradius"} {
                    set radius_km [expr {[randomInt 5000] + 10}]
                } elseif {$type == "bybox"} {
                    set width_km [expr {[randomInt 5000] + 10}]
                    set height_km [expr {[randomInt 5000] + 10}]
                }
            } else {
                # Normally use small (10-210 km) radii to stress
                # test the code the most in edge cases.
                if {$type == "byradius"} {
                    set radius_km [expr {[randomInt 200] + 10}]
                } elseif {$type == "bybox"} {
                    set width_km [expr {[randomInt 200] + 10}]
                    set height_km [expr {[randomInt 200] + 10}]
                }
            }
            if {$rv ne {}} {
                set radius_km [lindex $rv 1]
                set width_km [lindex $rv 1]
                set height_km [lindex $rv 1]
            }
            geo_random_point search_lon search_lat
            if {$rv ne {}} {
                set search_lon [lindex $rv 2]
                set search_lat [lindex $rv 3]
            }
            lappend debuginfo "Search area: $search_lon,$search_lat $radius_km $width_km $height_km km"
            set tcl_result {}
            set argv {}
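            # Add 20000 random points and compute the expected reply on the Tcl
            # side with the suite's pointInCircle / pointInRectangle helpers,
            # so it can be diffed against what the server returns.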
            for {set j 0} {$j < 20000} {incr j} {
                geo_random_point lon lat
                lappend argv $lon $lat "place:$j"
                if {$type == "byradius"} {
                    if {[pointInCircle $radius_km $lon $lat $search_lon $search_lat]} {
                        lappend tcl_result "place:$j"
                    }
                } elseif {$type == "bybox"} {
                    if {[pointInRectangle $width_km $height_km $lon $lat $search_lon $search_lat 1]} {
                        lappend tcl_result "place:$j"
                    }
                }
                lappend debuginfo "place:$j $lon $lat"
            }
            r geoadd mypoints {*}$argv
            if {$type == "byradius"} {
                set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat byradius $radius_km km]]
            } elseif {$type == "bybox"} {
                set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_km $height_km km]]
            }
            set res2 [lsort $tcl_result]
            set test_result OK
            if {$res != $res2} {
                set rounding_errors 0
                set diff [compare_lists $res $res2]
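                # Recompute each mismatching place's distance from the
                # coordinates Redis actually stored; mismatches sitting on the
                # edge of the search area are attributed to rounding
                # differences between Redis and Tcl rather than to real bugs.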
                foreach place $diff {
                    lassign [lindex [r geopos mypoints $place] 0] lon lat
                    set mydist [geo_distance $lon $lat $search_lon $search_lat]
                    set mydist [expr {$mydist / 1000}]
                    if {$type == "byradius"} {
                        if {($mydist / $radius_km) > 0.999} {
                            incr rounding_errors
                            continue
                        }
                        if {$mydist < [expr {$radius_km * 1000}]} {
                            # This is a false positive for Redis since, given
                            # the same points, the higher precision calculation
                            # done by Tcl shows the point within range.
                            incr rounding_errors
                            continue
                        }
                    } elseif {$type == "bybox"} {
                        # Allow a 0.1% tolerance for floating point error.
                        if {[pointInRectangle $width_km $height_km $lon $lat $search_lon $search_lat 1.001]} {
                            incr rounding_errors
                            continue
                        }
                    }
                }
                # Make sure this is a real error and not a rounding issue.
                if {[llength $diff] == $rounding_errors} {
                    set res $res2; # Error silenced
                }
            }
            if {$res != $res2} {
                set diff [compare_lists $res $res2]
                puts "*** Possible problem in GEO radius query ***"
                puts "Redis: $res"
                puts "Tcl  : $res2"
puts " D i f f : $ d i f f "
2015-06-23 09:02:37 -04:00
puts [ join $debuginfo " \n " ]
                foreach place $diff {
                    if {[lsearch -exact $res2 $place] != -1} {
                        set where "(only in Tcl)"
                    } else {
                        set where "(only in Redis)"
                    }
                    lassign [lindex [r geopos mypoints $place] 0] lon lat
                    set mydist [geo_distance $lon $lat $search_lon $search_lat]
                    set mydist [expr {$mydist / 1000}]
                    puts "$place -> [r geopos mypoints $place] $mydist $where"
                }
                set test_result FAIL
            }
            unset -nocomplain debuginfo
            if {$test_result ne {OK}} break
        }
        set test_result
    } {OK}
}
test {GEOSEARCH box edges fuzzy test} {
    if {$::accurate} {set attempt 300} else {set attempt 30}
    while {[incr attempt -1]} {
        unset -nocomplain debuginfo
        set srand_seed [clock milliseconds]
        lappend debuginfo "srand_seed is $srand_seed"
        expr {srand($srand_seed)} ;# If you need a reproducible run
        r del mypoints
        geo_random_point search_lon search_lat
        set width_m [expr {[randomInt 10000] + 10}]
        set height_m [expr {[randomInt 10000] + 10}]
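        # Convert the box's half extents from meters to degrees: an arc of
        # length L on a sphere of radius R (6372797.560856 m here) spans L/R
        # radians, and longitude deltas are additionally divided by
        # cos(latitude) because meridians converge away from the equator.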
        set lat_delta [geo_raddeg [expr {$height_m / 2 / 6372797.560856}]]
        set long_delta_top [geo_raddeg [expr {$width_m / 2 / 6372797.560856 / cos([geo_degrad [expr {$search_lat + $lat_delta}]])}]]
        set long_delta_middle [geo_raddeg [expr {$width_m / 2 / 6372797.560856 / cos([geo_degrad $search_lat])}]]
        set long_delta_bottom [geo_raddeg [expr {$width_m / 2 / 6372797.560856 / cos([geo_degrad [expr {$search_lat - $lat_delta}]])}]]
        # A total of 8 points are generated, located at each vertex and at the
        # center of each side of the box.
        set points(north) [list $search_lon [expr {$search_lat + $lat_delta}]]
        set points(south) [list $search_lon [expr {$search_lat - $lat_delta}]]
        set points(east) [list [expr {$search_lon + $long_delta_middle}] $search_lat]
        set points(west) [list [expr {$search_lon - $long_delta_middle}] $search_lat]
        set points(north_east) [list [expr {$search_lon + $long_delta_top}] [expr {$search_lat + $lat_delta}]]
        set points(north_west) [list [expr {$search_lon - $long_delta_top}] [expr {$search_lat + $lat_delta}]]
        set points(south_east) [list [expr {$search_lon + $long_delta_bottom}] [expr {$search_lat - $lat_delta}]]
        set points(south_west) [list [expr {$search_lon - $long_delta_bottom}] [expr {$search_lat - $lat_delta}]]
lappend debuginfo " S e a r c h a r e a : g e o s e a r c h m y p o i n t s f r o m l o n l a t $ s e a r c h _ l o n $ s e a r c h _ l a t b y b o x $ w i d t h _ m $ h e i g h t _ m m "
set tcl_result { }
foreach name [ array names points] {
set x [ lindex $points ( $name ) 0 ]
set y [ lindex $points ( $name ) 1 ]
2021-02-04 12:39:07 -05:00
# If longitude crosses -180° or 180°, we need to convert it.
# latitude doesn't have this problem, because it's scope is -70~70, see geo_random_point
if { $x > 180 } {
set x [ expr { $x-360 } ]
} elseif { $x < -180 } {
set x [ expr { $x + 360 } ]
}
2021-02-04 11:08:35 -05:00
r geoadd mypoints $x $y place:$name
lappend tcl_result " p l a c e : $ n a m e "
lappend debuginfo " g e o a d d m y p o i n t s $ x $ y p l a c e : $ n a m e "
}
        set res2 [lsort $tcl_result]
        # Make the box larger by two meters in each direction to put the
        # coordinates slightly inside the box.
        set height_new [expr {$height_m + 4}]
        set width_new [expr {$width_m + 4}]
        set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]
        if {$res != $res2} {
            set diff [compare_lists $res $res2]
            lappend debuginfo "res: $res, res2: $res2, diff: $diff"
            fail "place should be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new"
        }
        # The width decreases and the height increases: only north and south
        # should be found.
        set width_new [expr {$width_m - 4}]
        set height_new [expr {$height_m + 4}]
        set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]
        if {$res != {place:north place:south}} {
            lappend debuginfo "res: $res"
            fail "place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new"
        }
        # The width increases and the height decreases: only east and west
        # should be found.
        set width_new [expr {$width_m + 4}]
        set height_new [expr {$height_m - 4}]
        set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]
        if {$res != {place:east place:west}} {
            lappend debuginfo "res: $res"
            fail "place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new"
        }
        # Make the box smaller by two meters in each direction to put the
        # coordinates slightly outside the box.
        set height_new [expr {$height_m - 4}]
        set width_new [expr {$width_m - 4}]
        set res [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]
        if {$res != ""} {
            lappend debuginfo "res: $res"
            fail "place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new"
        }
        unset -nocomplain debuginfo
    }
}
}