Simplify test database setup #1436
Comments
@kalbfled We should remove the optional line below and just beef up this old todo ticket for that topic: https://app.zenhub.com/workspaces/va-notify-620d21369d810a00146ed9c8/issues/gh/department-of-veterans-affairs/notification-api/900
Hey team! Please add your planning poker estimate with Zenhub @cris-oddball @EvanParish @kalbfled @ldraney @nikolai-efimov
Having one database for all test threads causes a multitude of IntegrityError exceptions. I suspect that this is due to tests creating instances with hard-coded IDs and other values.
The IntegrityError exceptions were occurring during test setup, and they are fairly straightforward to eliminate by not using hard-coded values for model IDs and other unique attributes. However, fixing the setup errors exposes teardown errors. Most tests use the fixture notify_db_session, which includes the teardown step of dropping tables. This is not acceptable for a single-database test setup. I think the solution is to restore usage of the pytest-flask-sqlalchemy add-on, which we removed during the Flask upgrade last April. The configuration and usage seem easy enough, but I will have to refactor every test that uses the notify_db or notify_db_session fixture.
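For context, a minimal sketch of the kind of change that removes the setup-time IntegrityError exceptions: generate unique values instead of hard-coding them. The model name, columns, and fixture shape below are illustrative assumptions, not the repo's actual code.

```python
# Illustrative sketch only: generate IDs and unique names per test so parallel
# workers sharing one database never collide on primary keys or unique columns.
import uuid

import pytest

from app.models import Service  # assumed import path; model shown for illustration


@pytest.fixture
def sample_service(notify_db_session):
    service = Service(
        id=uuid.uuid4(),                 # generated, never hard-coded
        name=f'service-{uuid.uuid4()}',  # unique per fixture invocation
    )
    notify_db_session.add(service)
    notify_db_session.commit()
    return service
```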
The rationale for the point estimate modification from 5 to 8, as well as for the "Off-track" label, is detailed above in Dave's comment.
Dave:
I ran into this issue. pytest-flask-sqlalchemy has not had any new commits since April 2022 and breaks when used with flask-sqlalchemy >= 3.0.
I'm looking for a workaround. Here is one possibility. Another possibility is presented in the flask-sqlalchemy test docs.
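The pattern in the flask-sqlalchemy test docs amounts to wrapping each test in an outer transaction that is rolled back on teardown instead of dropping tables. A rough sketch of that idea, assuming an `app` fixture and the usual `db` extension object (names not confirmed against this repo):

```python
# Sketch of a transaction-per-test fixture: bind a session to a dedicated
# connection and roll back the outer transaction during teardown.
import pytest
from sqlalchemy.orm import sessionmaker

from app import db  # assumed Flask-SQLAlchemy extension object


@pytest.fixture
def transactional_session(app):
    with app.app_context():
        connection = db.engine.connect()
        outer = connection.begin()
        session = sessionmaker(bind=connection)()
        try:
            yield session
        finally:
            session.close()
            outer.rollback()   # discard everything the test wrote
            connection.close()
```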
Using a single test database, I have successfully run all tests in tests/app/communication_item sequentially and in parallel using a new "sample_communication_item" fixture that contains its own teardown. I need to repeat this process for the remaining fixtures and tests.
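As an illustration of a fixture that contains its own teardown (the real sample_communication_item almost certainly differs), something along these lines deletes only the rows it created instead of dropping tables:

```python
import uuid

import pytest

from app.models import CommunicationItem  # assumed import path


@pytest.fixture
def sample_communication_item(notify_db_session):
    item = CommunicationItem(id=uuid.uuid4(), name=f'item-{uuid.uuid4()}')
    notify_db_session.add(item)
    notify_db_session.commit()

    yield item

    # Teardown: remove only what this fixture created, leaving the shared
    # database intact for other workers.
    notify_db_session.delete(item)
    notify_db_session.commit()
```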
I have refactored the heavily used sample_service, sample_user, and sample_api_key fixtures, including giving them session scope. With these and other changes, the current state of unit tests run locally with 11 threads is:
This is actually much better than I expected at this point. I'm certain that many of the errors and failures have common causes, and many of the tests are parametrized. Today, I plan to refactor the sample_template and sample_notification fixtures. I expect that to have a significant impact.
End of day results:
This is after more fixture changes. I have some tests passing individually but failing with "detached instance" errors when run together sequentially. I will push changes again when I fix that.
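For readers unfamiliar with the error: DetachedInstanceError typically means an object created in one session is used after that session was closed or expired it, so lazy attribute access fails. One common remedy (a guess at what applies here, not a confirmed fix) is to merge the object back into the active session:

```python
from sqlalchemy.orm import Session


def reattach(session: Session, instance):
    """Return a copy of `instance` attached to `session` so lazy loads work again.

    Alternatively, building sessions with expire_on_commit=False keeps attributes
    loaded after commit, which avoids many detached-instance errors.
    """
    return session.merge(instance)
```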
I have all the tests passing in tests/app/authentication. This involved modifying core fixtures for api keys, services, and users that are used in many other tests.
Without additional effort, I found that tests pass in tests/app/aws/.
I continue to experience unexpected issues during test or fixture teardown. The problem might have something to do with application context during testing. https://flask-sqlalchemy.palletsprojects.com/en/3.1.x/contexts/#tests
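The linked docs boil down to making sure an application context is active whenever fixtures or teardown code touch db.session. A minimal sketch, assuming an application factory named create_app (the actual factory and config names may differ):

```python
import pytest

from app import create_app  # assumed application factory


@pytest.fixture(scope='session')
def app():
    app = create_app('test')      # assumed test config name
    with app.app_context():       # context stays pushed for the whole session,
        yield app                 # so fixture setup/teardown can use db.session
```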
After extensive revision to the template and notification fixtures:
The tests all pass in the tests/app/ folders beginning with "a" and "b", in tests/app/communication_item, and in the tests/app/template* folders. This is progress even though the total number of tests passing dropped.
Current status: Passing list:
11 threads
11 threads
All tests pass when run sequentially! With 11 threads:
There are artifacts in these tables:
The latter is expected and only affects one suite of lambda function unit tests. The remaining problems are only apparent when running tests in parallel. For example, run:
After marking additional tests as "serial", the status with 9 threads is:
All tests continue to pass when run single-threaded. I am in the process of evaluating the remaining failures and either fixing them or marking them as "serial" as well. Kyle and I discussed a new ticket for revising "serial" tests to run in parallel.
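One plausible mechanic for the "serial" mark (an assumption about how it is wired up, not a description of the actual conftest): register a custom marker, tag the offending tests, exclude them from the xdist run, and run them afterwards in a single-threaded pass.

```python
import pytest


# conftest.py (sketch): register the custom marker so pytest does not warn.
def pytest_configure(config):
    config.addinivalue_line(
        'markers', 'serial: excluded from the parallel xdist run'
    )


# In a test module: tag tests that fight over shared database state.
@pytest.mark.serial
def test_touches_shared_state(notify_db_session):
    ...


# The two-phase invocation would then look roughly like:
#   pytest -n 9 -m "not serial" tests/
#   pytest -m serial tests/
```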
With 9 threads in parallel + 75 tests run sequentially:
I added the
With 9 threads in parallel + 80 tests run sequentially:
With 9 threads in parallel + 80 tests run sequentially:
With 9 threads in parallel:
With 107 tests running sequentially:
Sequential failures:
Parallel failures:
I just got a clean parallel run, but these tests fail when running sequentially:
All tests pass or are skipped/xfailed. Given the long-running nature of this ticket, @k-macmillan and I decided to create new tickets to clean up residual tech debt, etc. See #1631, #1634, and #1635.
User Story - Business Need
User Story(ies)
As a Notify developer,
I want the unit tests to create one test database, rather than one for each Pytest thread,
So that the code is simpler and future test improvements are easier.
Additional Info and Resources
As currently implemented, running unit tests results in the creation of one test database for each Pytest thread, and each database name has the worker ID appended, so it looks something like "test_notification_api_gw0".
The database has its own worker threads, so the more complicated multi-database setup doesn't improve performance. Given our desire to pre-seed the test database and restore proper rollback functionality after each unit test (so tests are always repeatable), we want a simpler, single-database setup.
Engineering Checklist
Search for uses of the Pytest worker ID (grep -rni worker_id .) and remove them. This includes work in the VA Profile integration lambda. This might be all that is necessary: the test fixture notify_db calls create_test_db, which should not recreate a database that already exists. (See the sketch below.)
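A minimal sketch of that single-database idea, assuming sqlalchemy-utils is available; the repo's actual create_test_db may be implemented differently, and the URI below is only a placeholder.

```python
from sqlalchemy_utils import create_database, database_exists

# No worker-ID suffix: every Pytest worker points at the same database.
TEST_DATABASE_URI = 'postgresql://postgres@localhost/test_notification_api'  # assumed URI


def create_test_db(database_uri: str = TEST_DATABASE_URI) -> None:
    # Only create the database when it does not already exist, so this can be
    # called from every worker without recreating (or clobbering) the database.
    if not database_exists(database_uri):
        create_database(database_uri)
```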
Acceptance Criteria
Out of Scope
This is the first ticket in an epic. Future tickets will: