Proposal: testing clients with a mock service #2

Closed
@hug-dev

Description

Integration testing in Parsec clients

Parsec clients need to be tested as well. The integration tests of clients can check that:

  • requests are correctly serialised according to the wire protocol and body format
  • responses are correctly deserialised
  • any client logic for automatic provider/authenticator selection is correct
  • any logic based on any kind of capability discovery is correct (not available in Parsec yet)

For this testing, client developers often create their own means of mocking the service side, to check that the serialised request bytes are as expected and to reply to the client with specific response bytes, which are deserialised and checked as well. This is currently done in the Rust client and similarly proposed for the Go client in the linked PR.

Moreover, it is fair to assume that any Parsec client will have structures and methods to send requests with any opcode and any authentication method, to any provider, and to receive any response: in short, letting the client user tweak all possible parameters of the request header and set their own body and authentication data, through idiomatic, high-level structures in that language. The Rust client has the notion of the OperationClient for that, and the Go client has the operation method.

See this PR which started this idea.

Having a mock service to help

In order to stop duplicating the mocking test framework, to pool and improve the client-testing effort, and to help future client developers with testing, I propose the following:

  1. Create an independent mock service (probably in its own repo). This service will do nothing but listen on a Unix Domain Socket for a byte buffer, compare that buffer with expected data, and reply with other data.
  2. Create one set (or multiple sets) of test data.
  3. Depending on its start-up options, the mock service will use one of the sets of test data.
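The three steps above could be sketched roughly as follows. This is a minimal illustration, not a proposed implementation: the socket path handling, the in-order replay, and the test-vector format are all assumptions.

```python
import os
import socket

# Placeholder test data: each entry pairs the exact serialised request
# bytes the client must produce with the serialised response to send back.
TEST_VECTORS = [
    (b"serialised-request", b"serialised-response"),
]

def run_mock_service(socket_path, vectors):
    """Listen on a Unix Domain Socket and replay the test vectors in order."""
    if os.path.exists(socket_path):
        os.unlink(socket_path)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
        server.bind(socket_path)
        server.listen(1)
        for expected_request, response in vectors:
            conn, _ = server.accept()
            with conn:
                # Read exactly as many bytes as the expected request.
                received = b""
                while len(received) < len(expected_request):
                    chunk = conn.recv(len(expected_request) - len(received))
                    if not chunk:
                        break
                    received += chunk
                if received != expected_request:
                    raise AssertionError("request bytes do not match the test data")
                conn.sendall(response)
```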

The test data will consist of:

  1. the full specification of a whole request, in high-level terms:
  • the content of all fields of the fixed header
  • the content of the authentication body
  • a high-level description of the content of the body
  2. the same request, but serialised (shown as base64)
  3. the full specification of a possible corresponding response to the request (which can be an erroneous one), in high-level terms as well
  4. the same response, but serialised
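As a sketch, one test-data entry could bundle the four items like this; the type and field names are hypothetical, with the serialised forms carried as base64 as suggested above.

```python
import base64
from dataclasses import dataclass

@dataclass
class TestVector:
    """One entry of test data, mirroring items 1-4 above."""
    request_spec: dict   # 1. high-level description of the request
    request_b64: str     # 2. the same request, serialised (base64)
    response_spec: dict  # 3. high-level description of the response
    response_b64: str    # 4. the same response, serialised (base64)

    def request_bytes(self) -> bytes:
        """Decode the serialised request for byte-for-byte comparison."""
        return base64.b64decode(self.request_b64)

    def response_bytes(self) -> bytes:
        """Decode the serialised response to send back to the client."""
        return base64.b64decode(self.response_b64)
```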

A test will consist of the following steps:

  • the client forms the request corresponding to 1, using the high-level facilities provided by its implementation, and sends it to the mock service. I think the operation client or the basic client is the right level for this, provided all request parameters can still be set.
  • the mock service compares the serialised request with 2, and errors out if they do not match.
  • if they match, the mock service responds with 4.
  • the client deserialises 4 with the facilities at hand and compares it with the expected response 3, erroring out if they do not match.

The mock service will expect the tests to come in a specific order, and will process them one after the other. Another option would be to use one of the header fields to specify a test ID, for example the session identifier, which is currently unused. However, that would become a problem if that field is ever used in the future.

Example of test data

Here is an example of 1:

Header:
* Magic number: 0x5EC0A710
* Header size: 0x1E
* Major version number: 0x01
* Minor version number: 0x00
* Flags: 0x0000
* Provider: 0x00
* Session handle: 0x0000000000000000
* Content type: 0x00
* Accept type: 0x00
* Auth type: 0x01
* Content length: (don't know yet, to be calculated)
* Auth length: 0x0004
* Opcode: 0x00000009
* Status: 0x0000
Body: ListOpcodes operation with 0x01 as provider_id
Auth: "toto" UTF-8 encoded
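For illustration, the header above could be serialised like this. This is a sketch, assuming little-endian field encoding and the field order listed; the two trailing reserved bytes and the exact struct layout are assumptions to be checked against the wire protocol specification, and the content length is a placeholder since it is still to be calculated.

```python
import struct

def pack_fixed_header(provider, session, content_type, accept_type,
                      auth_type, content_len, auth_len, opcode, status):
    """Pack the fixed header, assuming little-endian encoding and the
    field order listed above; the trailing reserved bytes are an
    assumption to validate against the wire protocol specification."""
    return struct.pack(
        "<IHBBHBQBBBIHIHBB",
        0x5EC0A710,   # magic number
        0x1E,         # header size (bytes after this field)
        0x01, 0x00,   # major / minor version
        0x0000,       # flags
        provider,
        session,
        content_type,
        accept_type,
        auth_type,
        content_len,
        auth_len,
        opcode,
        status,
        0x00, 0x00,   # reserved (assumed)
    )

# The ListOpcodes request header from the example, with the 4-byte
# "toto" auth and a placeholder content length of 0:
header = pack_fixed_header(provider=0x00, session=0x0000000000000000,
                           content_type=0x00, accept_type=0x00,
                           auth_type=0x01, content_len=0, auth_len=0x0004,
                           opcode=0x00000009, status=0x0000)
```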

Here is an example of 3:

Header:
* Magic number: 0x5EC0A710
* Header size: 0x1E
* Major version number: 0x01
* Minor version number: 0x00
* Flags: 0x0000
* Provider: 0x00
* Session handle: 0x0000000000000000
* Content type: 0x00
* Accept type: 0x00
* Auth type: 0x00
* Content length: (don't know yet, to be calculated)
* Auth length: 0x0000
* Opcode: 0x00000009
* Status: 0x0000
Body: ListOpcodes result with [1, 2, 3] as opcodes

Here is an example of using it in the Rust client; the PARSEC_SERVICE_ENDPOINT variable would be set to point to the mock service.

```rust
use parsec_client::BasicClient;

let client = BasicClient::new_naked();
client.set_auth_data(Authentication::Direct("toto".to_string()));
let opcodes = client.list_opcodes(ProviderId::MbedCrypto).unwrap();
assert_eq!(opcodes, vec![1, 2, 3]); // simplified
```

Note that not all test data will be expressible through a high-level client like the BasicClient. I think the rule of thumb should be to use the highest-level client possible: if not all request parameters can be set, or not all response fields can be checked, a lower-level client should be used instead.
The reason is that the lower-level the client, the less logic is actually tested.

Generating the test data and hosting the mock service

The mock service and the test data would live in the same repository.
The service should be as simple as possible and ideally not reuse an existing client, otherwise we might be testing against a buggy client!
Safety is not critical here, so Python could be used for its convenience and its nice file handling. Rust could be used as well, but should not depend on any existing Parsec-related crate.
I think 1 and 3 should be written in human language, similarly to the specifications in our book.
The test data generator would contain the framework to easily create test files from the human-language specifications, producing test data files containing 1, 2, 3 and 4 that both the mock service and the clients' tests can use.
This part is open for discussion: I am not sure whether a specified format like JSON is a good idea for describing a full request, as it adds one more thing to convert, and doing that programmatically could lead to more bugs and errors than doing it manually.
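If a format like JSON were chosen despite the reservation above, the generator output and the mock service's loader could look like this sketch; the field names and file layout are hypothetical.

```python
import base64
import json

def make_test_entry(request_spec, request_bytes, response_spec, response_bytes):
    """Bundle items 1-4 into one test-data entry, serialised forms as base64."""
    return {
        "request_spec": request_spec,                               # 1.
        "request_b64": base64.b64encode(request_bytes).decode(),    # 2.
        "response_spec": response_spec,                             # 3.
        "response_b64": base64.b64encode(response_bytes).decode(),  # 4.
    }

def write_test_data(path, entries):
    """Write the entries to one file that both the mock service and the
    clients' tests can load."""
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)

def load_serialised_pairs(path):
    """Return (request_bytes, response_bytes) pairs for the mock service."""
    with open(path) as f:
        entries = json.load(f)
    return [(base64.b64decode(e["request_b64"]),
             base64.b64decode(e["response_b64"])) for e in entries]
```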
