# Connection Monitoring and Pooling (CMAP) Unit and Integration Tests

______________________________________________________________________

## Introduction

The YAML and JSON files in this directory are platform-independent tests that drivers can use to prove their conformance
to the Connection Monitoring and Pooling (CMAP) Spec.

## Common Test Format

Each YAML file has the following top-level keys (a sketch of such a file follows this list):

- `version`: A version number indicating the expected format of the spec tests (current version = 1)
- `style`: A string indicating what style of tests this file contains. It is one of the following:
  - `"unit"`: a test that may be run without connecting to a MongoDB deployment.
  - `"integration"`: a test that MUST be run against a real MongoDB deployment.
- `description`: A text description of what the test is meant to assert
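
As a rough sketch, these common keys sit at the top of every test document alongside the style-specific fields
described in the sections below. The values here are hypothetical and do not come from an actual file in this
directory.

```python
# Hypothetical top-level keys of a test document, shown as the Python dict a
# YAML loader would produce; the description text is invented.
common_fields = {
    "version": 1,
    "style": "unit",  # or "integration"
    "description": "must be able to check out a connection from a ready pool",
}
```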

## Unit Test Format

All Unit Tests have some of the following fields:

- `poolOptions`: If present, connection pool options to use when creating a pool; both
  [standard ConnectionPoolOptions](../../connection-monitoring-and-pooling.md#connection-pool-options) and the
  following test-specific options are allowed:
  - `backgroundThreadIntervalMS`: A time interval between the end of a
    [Background Thread Run](../../connection-monitoring-and-pooling.md#background-thread) and the beginning of the
    next Run. If a Connection Pool does not implement a Background Thread, the Test Runner MUST ignore the option. If
    the option is not specified, an implementation is free to use any value it finds reasonable.

    Possible values (0 is not allowed):

    - A negative value: never begin a Run.
    - A positive value: the interval between Runs in milliseconds.
- `operations`: A list of operations to perform. All operations support the following fields:
  - `name`: A string describing which operation to issue.
  - `thread`: The name of the thread in which to run this operation. If not specified, the operation runs in the
    default thread.
- `error`: Indicates that the main thread is expected to error during this test. An error may include any of the
  following fields:
  - `type`: The type of error emitted
  - `message`: The message associated with that error
  - `address`: The address of the pool emitting the error
- `events`: An array of all connection monitoring events expected to occur while running `operations`. An event may
  contain any of the following fields:
  - `type`: The type of event emitted
  - `address`: The address of the pool emitting the event
  - `connectionId`: The id of a connection associated with the event
  - `duration`: The event duration
  - `options`: Options used to create the pool
  - `reason`: A reason giving more information on why the event was emitted
- `ignore`: An array of event names to ignore

Valid Unit Test Operations are the following (a sketch of a complete unit test document follows this list):

- `start(target)`: Starts a new thread named `target`
  - `target`: The name of the new thread to start
- `wait(ms)`: Sleep the current thread for `ms` milliseconds
  - `ms`: The number of milliseconds to sleep the current thread for
- `waitForThread(target)`: Wait for thread `target` to finish executing. Propagate any errors to the main thread.
  - `target`: The name of the thread to wait for.
- `waitForEvent(event, count, timeout)`: Block the current thread until `event` has occurred `count` times
  - `event`: The name of the event
  - `count`: The number of times the event must occur (counting from the start of the test)
  - `timeout`: If specified, time out with an error after waiting for this many milliseconds without seeing the required
    events
- `label = pool.checkOut()`: Call `checkOut` on the pool, returning the checked out connection
  - `label`: If specified, associate this label with the returned connection, so that it may be referenced in later
    operations
- `pool.checkIn(connection)`: Call `checkIn` on the pool
  - `connection`: A string label identifying which connection to check in. Should be a label that was previously set
    with `checkOut`
- `pool.clear()`: Call `clear` on the pool
  - `interruptInUseConnections`: Determines whether "in use" connections should also be interrupted
- `pool.close()`: Call `close` on the pool
- `pool.ready()`: Call `ready` on the pool
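
To tie the fields and operations above together, the following is a hedged sketch of what a complete unit test might
look like once its YAML is parsed. The description, pool options, operation sequence, expected events, and ignore list
are invented for illustration and are not copied from a file in this directory.

```python
# Hypothetical unit test document, shown as the dict a YAML loader would
# produce. Field names follow the format described above; all values are invented.
example_unit_test = {
    "version": 1,
    "style": "unit",
    "description": "clearing a pool emits a ConnectionPoolCleared event",
    "poolOptions": {"maxPoolSize": 5, "backgroundThreadIntervalMS": -1},
    "operations": [
        {"name": "ready"},
        {"name": "checkOut", "label": "conn1"},
        {"name": "checkIn", "connection": "conn1"},
        {"name": "clear"},
        {"name": "waitForEvent", "event": "ConnectionPoolCleared", "count": 1},
    ],
    "events": [
        {"type": "ConnectionCheckedOut", "connectionId": 42, "address": 42},
        {"type": "ConnectionCheckedIn", "connectionId": 42, "address": 42},
        {"type": "ConnectionPoolCleared", "address": 42},
    ],
    "ignore": [
        "ConnectionPoolCreated",
        "ConnectionPoolReady",
        "ConnectionCreated",
        "ConnectionReady",
        "ConnectionCheckOutStarted",
    ],
}
```

Note how `42` is used for fields such as `connectionId` and `address` where any value is acceptable, per the MATCH
rules described later in this document.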

## Integration Test Format

The integration test format is identical to the unit test format with the addition of the following fields to each
test (an illustrative sketch of these additions follows the list):

- `runOn` (optional): An array of server version and/or topology requirements for which the tests can be run. If the
  test environment satisfies one or more of these requirements, the tests may be executed; otherwise, this test should
  be skipped. If this field is omitted, the tests can be assumed to have no particular requirements and should be
  executed. Each element will have some or all of the following fields:
  - `minServerVersion` (optional): The minimum server version (inclusive) required to successfully run the tests. If
    this field is omitted, it should be assumed that there is no lower bound on the required server version.
  - `maxServerVersion` (optional): The maximum server version (inclusive) against which the tests can be run
    successfully. If this field is omitted, it should be assumed that there is no upper bound on the required server
    version.
- `failPoint` (optional): A document containing a `configureFailPoint` command to run against the endpoint being used
  for the test.
- `poolOptions.appName` (optional): The `appName` attribute to be set on connections, which will be affected by the
  fail point.
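
As an illustrative sketch, the integration-only additions might look like the following once parsed. The `failCommand`
fail point configuration, the server version requirement, and the appName value are invented for this example rather
than taken from an actual test.

```python
# Hypothetical integration-only additions to a test document; all values here
# are illustrative, not copied from a real test file.
integration_additions = {
    "runOn": [{"minServerVersion": "4.9.0"}],
    "failPoint": {
        "configureFailPoint": "failCommand",
        "mode": {"times": 1},
        "data": {
            "failCommands": ["hello", "isMaster"],
            "closeConnection": True,
            "appName": "poolTimeoutTestApp",
        },
    },
    "poolOptions": {"appName": "poolTimeoutTestApp", "minPoolSize": 1},
}
```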

## Spec Test Match Function

The definition of MATCH or MATCHES in the Spec Test Runner is as follows:

- MATCH takes two values, `expected` and `actual`
- Notation is "Assert `actual` MATCHES `expected`"
- Assertion passes if `expected` is a subset of `actual`, with the values `42` and `"42"` acting as placeholders for
  "any value"

Pseudocode implementation of `actual` MATCHES `expected`:

```text
If expected is "42" or 42:
  Assert that actual exists (is not null or undefined)
Else:
  Assert that actual is of the same JSON type as expected
  If expected is a JSON array:
    For every idx/value in expected:
      Assert that actual[idx] MATCHES value
  Else if expected is a JSON object:
    For every key/value in expected:
      Assert that actual[key] MATCHES value
  Else:
    Assert that expected equals actual
```
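
For drivers that want a concrete starting point, here is a minimal Python sketch of the MATCH assertion above. The
function name, the `path` argument used for failure messages, and the use of plain `assert` statements are choices of
this example, not requirements of the spec.

```python
def assert_matches(actual, expected, path="expected"):
    """Assert that `actual` MATCHES `expected` per the pseudocode above."""
    # 42 and "42" are wildcards: the actual value only has to exist.
    if expected == 42 or expected == "42":
        assert actual is not None, f"{path}: expected a value, got None"
    elif isinstance(expected, list):
        assert isinstance(actual, list), f"{path}: expected an array"
        for idx, value in enumerate(expected):
            assert idx < len(actual), f"{path}[{idx}]: missing element"
            assert_matches(actual[idx], value, f"{path}[{idx}]")
    elif isinstance(expected, dict):
        # `expected` only has to be a subset: iterate its keys, not actual's.
        assert isinstance(actual, dict), f"{path}: expected a document"
        for key, value in expected.items():
            assert key in actual, f"{path}.{key}: missing key"
            assert_matches(actual[key], value, f"{path}.{key}")
    else:
        # Same JSON type and equal value.
        assert type(actual) is type(expected) and actual == expected, (
            f"{path}: {actual!r} != {expected!r}"
        )
```

A test runner would invoke this once per expected event and once for the expected `error` document, as described in the
runner sections below.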

## Unit Test Runner

For the unit tests, the behavior of a Connection is irrelevant beyond the need to assert `connection.id`. Drivers MAY
use a mock connection class for testing the pool behavior in unit tests.

For each YAML file with `style: unit`:

- Create a Pool `pool`, subscribe to it, and capture any Connection Monitoring events emitted, in order.
  - If `poolOptions` is specified, use those options to initialize the pool
  - The returned pool must have an `address` set as a string value.
- Process each `operation` in `operations` (on the main thread)
  - If a `thread` is specified, the main thread MUST schedule the operation to execute in the corresponding thread.
    Otherwise, execute the operation directly in the main thread.
- If `error` is present:
  - Assert that an actual error `actualError` was thrown by the main thread
  - Assert that `actualError` MATCHES `error`
- Else:
  - Assert that no errors were thrown by the main thread
- Calculate `actualEvents` as every Connection Event emitted whose `type` is not in `ignore`
- If `events` is not empty, then for every `idx`/`expectedEvent` in `events`:
  - Assert that `actualEvents[idx]` exists
  - Assert that `actualEvents[idx]` MATCHES `expectedEvent`

It is important to note that the `ignore` list is used for calculating `actualEvents`, but is NOT used for the
`waitForEvent` command.
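
To illustrate how the steps above fit together, here is a condensed Python sketch of a unit test runner. The
`create_pool` factory, the listener that records events as plain dicts, and the snake_case pool methods are all
placeholders for whatever the driver under test actually exposes; `assert_matches` is the MATCH sketch from the
previous section, and the `start`/`waitForThread` operations are omitted because everything runs on the main thread
here.

```python
import time

import yaml  # assumes PyYAML is available to load the test files


def run_unit_test(path, create_pool):
    """Run a single `style: unit` test file against a driver-supplied pool factory.

    `create_pool(options, listener)` is a placeholder: it is assumed to return a
    pool exposing check_out/check_in/clear/close/ready plus a string `address`,
    and to report each Connection Monitoring event to `listener` as a dict using
    the field names from the `events` section above.
    """
    with open(path) as f:
        test = yaml.safe_load(f)

    captured = []  # all Connection Monitoring events, in the order they occurred
    pool = create_pool(test.get("poolOptions") or {}, listener=captured.append)
    connections = {}  # labels assigned by `label = pool.checkOut()` operations

    error = None
    try:
        for op in test.get("operations", []):
            # A real runner schedules operations with a `thread` field onto the
            # named worker thread; this sketch runs everything on the main thread.
            name = op["name"]
            if name == "checkOut":
                conn = pool.check_out()
                if "label" in op:
                    connections[op["label"]] = conn
            elif name == "checkIn":
                pool.check_in(connections[op["connection"]])
            elif name == "clear":
                pool.clear(op.get("interruptInUseConnections", False))
            elif name == "close":
                pool.close()
            elif name == "ready":
                pool.ready()
            elif name == "wait":
                time.sleep(op["ms"] / 1000.0)
            elif name == "waitForEvent":
                # Counts events regardless of `ignore`. The 10s fallback when no
                # `timeout` is given is a sketch choice to avoid hanging forever.
                deadline = time.monotonic() + op.get("timeout", 10_000) / 1000.0
                while sum(e["type"] == op["event"] for e in captured) < op["count"]:
                    assert time.monotonic() < deadline, f"timed out waiting for {op['event']}"
                    time.sleep(0.01)
    except Exception as exc:
        error = exc

    if "error" in test:
        assert error is not None, "expected the main thread to error"
        # Simplification: a real runner would map driver errors to the spec's
        # error types and also record the pool address alongside the error.
        assert_matches({"type": type(error).__name__, "message": str(error)}, test["error"])
    else:
        assert error is None, f"unexpected error: {error!r}"

    # `ignore` filters the captured events here, but did NOT affect waitForEvent.
    ignored = set(test.get("ignore", []))
    actual_events = [e for e in captured if e["type"] not in ignored]
    for idx, expected_event in enumerate(test.get("events", [])):
        assert idx < len(actual_events), f"missing event at index {idx}"
        assert_matches(actual_events[idx], expected_event, f"events[{idx}]")
```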

## Integration Test Runner

The steps to run the integration tests are the same as those used to run the unit tests with the following
modifications:

- The integration tests MUST be run against an actual endpoint. If the deployment being tested contains multiple
  endpoints, then the runner MUST use only one of them to run the tests against.

- For each test, if `failPoint` is specified, its value is a `configureFailPoint` command. Run the command on the admin
  database of the endpoint being tested to enable the fail point.

- At the end of each test, any enabled fail point MUST be disabled to avoid spurious failures in subsequent tests. The
  fail point may be disabled like so:

  ```javascript
  db.adminCommand({
    configureFailPoint: "<fail point name>",
    mode: "off"
  });
  ```
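
As a sketch of that lifecycle, a runner might wrap each test in a helper like the one below. PyMongo is used here
purely as an example client; any driver's command helper against the admin database works the same way, and the
function name is hypothetical.

```python
from pymongo import MongoClient


def run_with_fail_point(client: MongoClient, test: dict, run_test) -> None:
    """Enable the test's fail point (if any), run the test, then disable it."""
    fail_point = test.get("failPoint")
    if fail_point is not None:
        # The value of `failPoint` is already a complete configureFailPoint command.
        client.admin.command(fail_point)
    try:
        run_test()
    finally:
        if fail_point is not None:
            # Always turn the fail point off so it cannot leak into later tests.
            client.admin.command({
                "configureFailPoint": fail_point["configureFailPoint"],
                "mode": "off",
            })
```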