Proposal: testing clients with a mock service #2
Comments
I was thinking of something else: the mock service has a bunch of files. We can differentiate between different tests issuing the same request and getting different responses by using the authentication field as a test ID.
That is a good idea! The check of the request is then done via the fact that the entry is not found in the hash map; if that is the case, the tester can go to the repo and compare the expected payloads with the actual ones. Maybe in the future some kind of Parsec parsing on the mock service side would still be useful, for example to check which exact fields do not match.
I guess just using the auth field as a way to produce requests that have all other fields identical but still map to another entry in the hash map would work!
That's true, it would be difficult to tell which parameter was incorrect. Maybe if we make the auth field unique for all requests sent, we can map them like that and be able to compare field by field. Alternatively, we split the expected requests into folders (i.e. one per test) and have them numbered.
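A minimal sketch of that keying idea, assuming the mock service keeps an in-memory map from full expected request bytes to canned responses (the names and error messages here are purely illustrative):

```python
# Expected request bytes -> canned response bytes. Each test makes its request
# unique by varying the authentication field, so identical operations issued by
# different tests still map to different entries.
expected: dict[bytes, bytes] = {}

def reply(incoming: bytes) -> bytes:
    try:
        return expected[incoming]
    except KeyError:
        # The request check is implicit: a mismatching request is simply not found,
        # and the tester then compares the actual bytes against the repo's expected payloads.
        raise AssertionError("request did not match any expected payload") from None
```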
Like the idea of the mock client behind the UDS. It is worth noting that this test mocking is also worth doing in a unit-test context with a mocked connection implementation (i.e. not needing the UDS): this makes for fast, simple tests with one level of complexity removed. Making the test data, and the means to generate it, common (with a good way to use it) will help a lot for client devs.
Are you saying that the test data generator would take a formalised human-readable format as per 1 and 3 and convert those to a simple JSON/YAML machine-parsable model for client test suites? If not, then we should do that, please, or write the specs in a machine-readable format and have a translator to a human-friendly one. As a client developer, I want test data files to be machine readable, so that I can use existing unmarshalling libraries and don't have to create a parser for them as well as for the protocol.
Agree that more testing is generally better anyway! I think the hard part is making sure the test data is correct, and that's hard to do if you don't have one source of generation that you trust. Having one common place for this kind of test (specifically integration testing) will help with that and make sure that, globally, all clients meet the same "minimal correctness".
In my original idea there would not be any automatic conversion between the formalised human-readable format (1, 3) and the serialised parts (2, 4). The process I thought of would look like this.
On the client test's side, Alice would read and understand
Now, I agree that it seems strange (or crazy 🤯) to not replace the "formalised human readable format" by JSON/TOML but consider the following arguments:
Going further with the above proposal. Suggestion for the repository name:
Contents:
I created the repository here: https://github.com/parallaxsecond/parsec-mock
Trying to get my head around the process of creating one of these tests, for the sake of argument TESTA. I think I'm going to write your 1 and 3 constructs into a file called TESTA.spec. I will then create a gen_TESTA function in Python (and add it to the list of generators to run). In that function, I will load TESTA.spec and get it to build the wire headers for the request and response automatically (I really don't want to duplicate that; we'd introduce too many errors in the transposition). In that function, I also create (in code, by manually reading the spec) the request body and response body, as well as (potentially) the authentication data. The code can then automatically create the full request and response messages and write them out in base64-encoded form to TESTA.req and TESTA.resp; either that, or they get put into, say, a JSON or YAML file that contains the test name and description plus the request and response.
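A rough sketch of what such a generator function could look like. The helpers `load_spec` and `build_wire_header`, the placeholder payload bytes and the field order in the assembled messages are all assumptions for illustration; only the TESTA.spec/TESTA.req/TESTA.resp naming and the base64/JSON outputs come from the description above:

```python
import base64
import json

def gen_TESTA():
    """Illustrative generator for TESTA: wire headers derived from the spec, bodies written by hand."""
    spec = load_spec("TESTA.spec")  # hypothetical helper parsing the 1/3 constructs

    # Bodies and authentication data are created manually, by reading the spec.
    request_body = b"\x0a\x03..."   # placeholder operation payload
    response_body = b"\x12\x05..."  # placeholder result payload
    auth_data = b"testa-auth"

    # Wire headers are built automatically from the spec, never transcribed by hand.
    req_header = build_wire_header(spec, body_len=len(request_body),
                                   auth_len=len(auth_data))  # hypothetical helper
    resp_header = build_wire_header(spec, body_len=len(response_body), auth_len=0)

    request = req_header + request_body + auth_data    # field order illustrative only
    response = resp_header + response_body

    # Either write raw base64 files...
    with open("TESTA.req", "w") as f:
        f.write(base64.b64encode(request).decode())
    with open("TESTA.resp", "w") as f:
        f.write(base64.b64encode(response).decode())

    # ...or bundle everything into one JSON document.
    with open("TESTA.json", "w") as f:
        json.dump({"name": "TESTA",
                   "description": "placeholder description",
                   "request": base64.b64encode(request).decode(),
                   "response": base64.b64encode(response).decode()},
                  f, indent=2)
```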
I guess the mock client could be configured to work in one of a number of ways: 1. it could load a number of request/response pairs into a hash map and just send the responses that correspond to the requests; this mode may also have a default response if nothing expected is received. I suggest that the selection of tests is supplied to the mock client by a config file; YAML would do, and could be something like this:
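A sketch of what that mode could look like, with a made-up YAML config naming the test files to load and a default response; the config schema, file names and socket path are all invented here, and the single-read framing is a simplification:

```python
import base64
import os
import socket
import yaml  # PyYAML, assumed available

EXAMPLE_CONFIG = """
# Made-up schema, for illustration only.
tests:
  - name: TESTA
    request: TESTA.req
    response: TESTA.resp
  - name: TESTB
    request: TESTB.req
    response: TESTB.resp
default_response: default.resp
socket: /tmp/parsec-mock.sock
"""

def load_pairs(config):
    """Read base64-encoded request/response files into a hash map."""
    pairs = {}
    for test in config["tests"]:
        with open(test["request"]) as req, open(test["response"]) as resp:
            pairs[base64.b64decode(req.read())] = base64.b64decode(resp.read())
    with open(config["default_response"]) as f:
        default = base64.b64decode(f.read())
    return pairs, default

def serve(config):
    pairs, default = load_pairs(config)
    if os.path.exists(config["socket"]):
        os.unlink(config["socket"])
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(config["socket"])
    server.listen(1)
    while True:
        conn, _ = server.accept()
        with conn:
            request = conn.recv(4096)  # naive single-read framing, enough for small test payloads
            conn.sendall(pairs.get(request, default))

if __name__ == "__main__":
    serve(yaml.safe_load(EXAMPLE_CONFIG))
```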
In any event, I suggest we start with option 1 and move on. I'm not a fan, BTW, of embedding control messages in the protocol messages themselves: it restricts what you can test and also makes message matching very messy; without it we can cope with a simple equality/map lookup, unless we get to the point of difficult authenticators where we can't guarantee the content. If we're going to put effort into control messages, let's put it into making a dedicated control channel! The control channel could either just select the next lot of testing files from the configuration, or could actually configure the mock service on the fly, e.g.: activate "second group"; switch to test "TESTA". A control channel could also return status if required. A simple JSON request/response interface on a unix socket would be easy to implement in any language (I'm ignoring C for the moment, but even C++ has brilliant JSON support!).
Ok, so if we separate this between the test generator and the mock service: for the test generator, agreed with what you said. Having one function per test seems necessary to combine the automatic and manual steps involved. Maybe the wire header representation on the …
For the mock service, I think having a control channel is a good idea! The information that needs to be shared is basically:
I would be happy to make things very simple for now and accept breaking changes later. As long as the mock service is versioned this is fine, I think. As it is a test thing, I think it's OK not to come up with a perfect, future-proof design just now; we can make things extra simple and improve incrementally.
Sounds sensible. We could follow a REST-type convention: define the service using a path, which could include the version, e.g. /v1/setTest... The next question is whether the results are polled for on the control channel or broadcast to subscribers; polling would be simpler to implement and likely simpler for a test client to implement.
Instead of polling, we could also use timeouts on the socket? I guess the flow would be:
As the mock service does very little logic, it should be quite fast, so the timeout can be very low.
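A sketch of that flow from a test's point of view, with the timeout set directly on the data socket; the socket path, buffer size and error messages are made up for illustration:

```python
import socket

def run_case(request: bytes, expected_response: bytes,
             sock_path: str = "/tmp/parsec-mock.sock", timeout: float = 0.5) -> None:
    """Send one serialised request and compare the canned response, failing fast on timeout."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)  # the mock service does very little work, so this can be low
        sock.connect(sock_path)
        sock.sendall(request)
        try:
            actual = sock.recv(4096)
        except socket.timeout:
            raise AssertionError("mock service did not answer: the request probably did not match")
        assert actual == expected_response
```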
Just wanted to point out something from the top comment:
I don't think the purpose of the mock service should be to test as much logic as possible in the client, but rather to test that the logic in the lower levels that handle the actual serialising/deserializing of requests and responses is correct. It's not really a complete solution, of course, because the cases where the lower levels reject a request because of inconsistencies (if they can happen) aren't considered. But if those layers do end up sending something over the wire and we can confirm that it is correctly formatted, then anything built on top should be fine from that point of view.
We could put some sort of long poll with timeout in the logic of the control channel: wait for x seconds and either return failure or the test result if it comes earlier; a timeout at the application layer rather than on the socket may be pleasanter. I'm toying with the idea of just doing REST calls over a unix socket (or even TCP, as it probably doesn't matter for a control channel?). Flask would make this pretty trivial for either socket choice, and most languages have simple APIs for doing REST calls.
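For example, a minimal Flask sketch of such a control channel, with the long poll implemented at the application layer; the paths, payload keys and the in-memory result store are all invented here:

```python
import time
from flask import Flask, jsonify, request

app = Flask(__name__)
state = {"current_test": None, "result": None}  # toy in-memory store, for illustration

@app.route("/v1/setTest", methods=["POST"])
def set_test():
    # Select the next test; the mock service picks it up and clears any old result.
    state["current_test"] = request.get_json()["name"]
    state["result"] = None
    return jsonify(status="ok")

@app.route("/v1/result", methods=["GET"])
def get_result():
    # Long poll: wait up to `wait` seconds for a result, then return failure.
    deadline = time.monotonic() + float(request.args.get("wait", 5))
    while time.monotonic() < deadline:
        if state["result"] is not None:
            return jsonify(test=state["current_test"], result=state["result"])
        time.sleep(0.05)
    return jsonify(error="timeout"), 408

if __name__ == "__main__":
    # Could equally be bound to a unix socket; plain TCP is fine for a control channel.
    app.run(port=5000)
```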
Do we still need this issue? Moved to the new repo!
Integration testing in Parsec clients
Parsec clients need to be tested as well. The integration tests of clients can check that:
For this testing, client developers would often create the means of mocking the service side, to check that the serialised request bytes are as expected and to reply to the client with specific response bytes that are deserialised and checked as well. This is currently done in the Rust client and similarly proposed in the Go client, in the linked PR.
Moreover, it is fair to assume that any Parsec client would have structures and methods to send requests with any opcode and any authentication method, to any provider, and to get any response: basically letting the client user tweak all possible parameters of the request header and set their own body/authentication data, with idiomatic, high-level structures in the specific language. The Rust client has the notion of the OperationClient for that and the Go client has the operation method. See this PR which started this idea.
Having a mock service to help
In order to stop duplicating the mocking test framework, to pool and improve the effort for client testing, and to help future client developers test their clients, I propose the following:
The test data will consist of:
A test will consist of the following steps:
The mock service will expect the tests to come in a specific order and will process them one after the other. Another way would be to use one of the header fields to specify the test ID, like the session identifier, which is currently not used. However, that would become a problem in the future if the field gets used.
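A tiny sketch of that "fixed order" behaviour, with the alternative noted in a comment; the class and data structures are illustrative only:

```python
from collections import deque

class OrderedMockService:
    """Serves expected request/response pairs strictly in the order they were registered."""

    def __init__(self, pairs):
        # pairs: list of (expected_request_bytes, response_bytes) tuples
        self.pending = deque(pairs)

    def handle(self, request: bytes) -> bytes:
        expected_request, response = self.pending.popleft()
        if request != expected_request:
            raise AssertionError("request received out of order or with unexpected content")
        return response

# Alternative sketch: instead of relying on ordering, a currently unused header
# field (e.g. the session identifier) could carry a test ID used to look the
# pair up, at the cost of breaking if that field ever gains a real meaning.
```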
Example of test data
Here is an example of 1:
Here is an example of 3:
Example of using it in the Rust client: the PARSEC_SERVICE_ENDPOINT variable would be set to point to the mock service. Note that not all test data will be able to be exercised using a high-level client like the BasicClient. I think the rule of thumb should be that the highest-level client possible should be used; if not all request parameters can be set, or not all response fields can be checked, a lower-level client should be used instead. The reason is that the lower level a client is, the less logic is actually tested with this.
Generating the test data and hosting the mock service
The mock service and test data would be in the same repository.
It should be as simple as possible and ideally not re-use an existing client, otherwise we might be testing against a buggy client!
Safety is not paramount here, so Python could be used for its convenience and its nice file handling. Rust could be used as well, but should not use any existing Parsec-related crate.
I think 1 and 3 should be written in human language, similarly to the specifications in our book.
The test data generator would contain the framework to easily create test files from the human-language specifications and produce test data files containing 1, 2, 3 and 4 that both the mock service and the clients' tests can use.
This part can be discussed; I am not too sure whether using a specified format like JSON is a good idea to describe a full request, as it will create one more thing to convert, and doing that programmatically could lead to more bugs/errors than doing it manually.