
Refresh Python testing docs #8102

Open · wants to merge 10 commits into main
Conversation

@cwebster-99 cwebster-99 commented Feb 27, 2025

Cutting background information from Python testing docs and updating based on new testing functionality.

@cwebster-99 cwebster-99 marked this pull request as ready for review April 2, 2025 16:18
@vs-code-engineering vs-code-engineering bot added this to the April 2025 milestone Apr 2, 2025
`python.testing.autoTestDiscoverOnSaveEnabled` is set to `true` by default, meaning that test discovery is also performed automatically whenever you add, delete, or update any Python file in the workspace. To disable this feature, set the value to `false`. You can refine which files trigger automatic test discovery by specifying a glob pattern in the `python.testing.autoTestDiscoverOnSavePattern` setting. Its default value is `**/*.py`.
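As an illustrative sketch (the pattern value here is an example, not the default), the two settings could be combined in `settings.json` to limit auto-discovery to test files only:

```json
{
  "python.testing.autoTestDiscoverOnSaveEnabled": true,
  "python.testing.autoTestDiscoverOnSavePattern": "**/test_*.py"
}
```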

You can [configure the settings](/docs/configure/settings.md) in the Settings editor or directly in the `settings.json` file. You need to reload the window for the test discovery setting to take effect.
> **Tip**: `python.testing.autoTestDiscoverOnSaveEnabled` is set to `true` by default, meaning that test discovery is also performed automatically whenever you add, delete, or update any Python file in the workspace. To disable this feature, set the value to `false`, which can be done either in the Settings editor or in the `settings.json` file as described in the VS Code [Settings](/docs/editor/settings.md) documentation. You will need to reload the window for this setting to take effect.
Member


I think we should still include this section: the `python.testing.autoTestDiscoverOnSavePattern` setting. Its default value is `**/*.py`.


![Cancel test discovery button in the Test Explorer.](/images/testing/test-cancel-button.png)

If discovery fails (for example, the test framework isn't installed or you have a syntax error in your test file), you'll see an error message displayed in the Test Explorer. The Python extension uses error-tolerant discovery, meaning that if an error during discovery affects some tests but not all, you will see both a node of tests and an error node in the Test Explorer. You can check the **Python** output panel to see the entire error message (use the **View** > **Output** menu command to show the **Output** panel, then select **Python** from the dropdown on the right side).
Member


There should also be a link to the logs in the error node you see.



> **Tip**: unittest tests can be discovered by pytest, but not vice versa. If testing is configured with unittest but the tests are written for pytest, there will be an error message in the Test Explorer.
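As a minimal sketch of the asymmetry this tip describes (the test names are illustrative): pytest collects `unittest.TestCase` subclasses, but plain pytest-style test functions are invisible to the unittest framework.

```python
import unittest

class TestAddition(unittest.TestCase):
    # pytest collects unittest.TestCase subclasses like this one,
    # so this test runs under either framework.
    def test_add(self):
        self.assertEqual(1 + 2, 3)

def test_subtract():
    # A pytest-style plain function: pytest discovers it,
    # but the unittest framework does not.
    assert 5 - 2 == 3
```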
Member


There will not be an error message; it will just not find the tests. So if you have pytest tests but you select unittest, then we won't be able to find any tests in the workspace.

Member Author


(screenshot attached)
I see this in the Test Explorer... is this different or unexpected?

eleanorjboyd previously approved these changes Apr 3, 2025
@cwebster-99 cwebster-99 requested a review from ntrogh April 3, 2025 22:57
@cwebster-99
Member Author

@ntrogh would you mind giving these changes a quick pass? We are cutting quite a bit of content.

Contributor

@ntrogh ntrogh left a comment


@cwebster-99 Added some suggestions.

Some general comments about the article (didn't review all sections in detail now):

  • Review language for use of future tense and passive voice
  • Review the screenshots and size them consistently. Also, make the smaller cut-outs smaller, because they look massive compared to full-screen screenshots. Choose two or at most three widths and stick to them.
  • You can use the GH styling for alerts (tips, note)
  • Happy to do a full review for language and structure in the future - article feels text-heavy right now.

The combined results of all the tests are your test report, which tells you whether the function (the unit) is behaving as expected across all test cases. That is, when a unit passes all of its tests, you can be confident that it's functioning properly. (The practice of **test-driven development** is where you actually write the tests first, then write the code to pass increasingly more tests until all of them pass.)

Because unit tests are small, isolated pieces of code (in unit testing, you avoid external dependencies and use mock data or otherwise simulated inputs), they're quick and inexpensive to run. This characteristic means that you can run unit tests early and often. Developers typically run unit tests even before committing code to a repository; gated check-in systems can also run unit tests before merging a commit. Many continuous integration systems also run unit tests after every build. Running unit tests early and often means that you quickly catch **regressions**, which are unexpected changes in the behavior of code that previously passed all its unit tests. Because the test failure can easily be traced to a particular code change, it's easy to find and remedy the cause of the failure, which is undoubtedly better than discovering a problem much later in the process!
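As a minimal sketch of that isolation (the function and client names here are illustrative, not from the docs), a mock can stand in for an external dependency so the unit runs quickly and without network access:

```python
import unittest
from unittest.mock import Mock

def fetch_and_format(client, user_id):
    # Hypothetical unit under test: formats a name returned by a client.
    # The client is injected, so a test can replace it with a mock.
    name = client.get_name(user_id)
    return name.strip().title()

class TestFetchAndFormat(unittest.TestCase):
    def test_formats_name_from_mocked_client(self):
        # The mock stands in for the real client, keeping the test
        # fast, isolated, and free of external dependencies.
        client = Mock()
        client.get_name.return_value = "  ada lovelace "
        self.assertEqual(fetch_and_format(client, 42), "Ada Lovelace")
        client.get_name.assert_called_once_with(42)
```

Run it with `python -m unittest` or let pytest collect it; either way, no real service is contacted.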

For a general background on unit testing, read [Unit testing](https://wikipedia.org/wiki/Unit_testing) on Wikipedia. For useful unit test examples, you can review [https://github.com/gwtw/py-sorting](https://github.com/gwtw/py-sorting), a repository with tests for different sorting algorithms.
Contributor


Can we include a reference to the general testing doc for VS Code (/docs/debugtest/testing.md)?

@@ -12,144 +12,27 @@ MetaSocialImage: images/tutorial/python-social.png

The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) supports testing with Python's built-in [unittest](https://docs.python.org/3/library/unittest.html) framework and [pytest](https://docs.pytest.org/).
Contributor


Provide a high-level description of what the extension brings: it builds on the built-in testing features in VS Code and provides test discovery, test coverage, and running and debugging tests.

@@ -252,6 +105,8 @@ Once the coverage run is complete, lines will be highlighted in the editor for l

For finer-grained control of your coverage run when using pytest, you can edit the `python.testing.pytestArgs` setting to include your specifications. When the pytest argument `--cov` exists in `python.testing.pytestArgs`, the Python extension makes no additional edits to the coverage arguments, to allow your customizations to take effect. If no `--cov` argument is found, the extension adds `--cov=.` to the pytest arguments before the run to enable coverage at the workspace root.
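As an illustrative sketch (the `src` path and report format are hypothetical examples), custom coverage arguments in `settings.json` might look like this, which would suppress the extension's default `--cov=.`:

```json
{
  "python.testing.pytestArgs": [
    "--cov=src",
    "--cov-report=term-missing"
  ]
}
```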

> **Note**: Django does not currently support running tests with coverage.
Contributor


You can use stylized notes as well

> [!NOTE]
> put the note here

@cwebster-99
Member Author

@ntrogh Thanks for the tips! Incorporated the feedback and made some more cuts. The only thing I wasn't sure about was how to ensure consistent sizing across all of the images.

@ntrogh
Contributor

ntrogh commented Apr 4, 2025

@cwebster-99 I'll review on Monday and will check and fix any image issues.

3 participants