You must have completed the lab on Testing for exceptions.
Software engineers need some measure of the quality of the tests they write, but test quality is not straightforward to measure. There is, however, one generally agreed-upon measure used as a baseline: test coverage.
Test coverage is a measure of how much of the source code is executed when the tests run. There are three common measures of "how much":

- Line (statement) coverage: which source lines are executed by the tests.
- Branch coverage: which branches (e.g., the true and false outcomes of each if-statement) are executed by the tests.
- Condition coverage: which individual boolean sub-expressions (conditions) within each decision are evaluated by the tests.
Consider the following (very poorly designed and implemented) code snippet in my_module.py:

```python
def authorize(is_authenticated, user_id, caller):
    result = False
    if is_authenticated is True or user_id.startswith('admin') or caller == "privileged":
        result = True
    return result
```
Now consider the following test case:
```python
def test_authorize():
    assert my_module.authorize(True, "bob", "privileged") is True
```
This single test case executes every line of authorize, so it achieves 100% line coverage. It covers only one of the two branches, though: the one where the if-statement evaluates to True. And because Python short-circuits the or operator, only the first condition is actually evaluated (is_authenticated is True); the other expressions, user_id.startswith('admin') and caller == "privileged", are not. This illustrates the difference in precision: line coverage is the least precise measure, and condition coverage is the most precise.
Test coverage is computed over the union of all source lines, branches, and conditions executed by our test cases. So we can easily write additional test cases that, collectively, reach 100% statement, branch, and condition coverage.
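For instance, tests like the following sketch would, together with the original test, drive the hypothetical authorize() shown earlier to 100% line, branch, and condition coverage. The function is repeated here so the snippet runs on its own:

```python
# The hypothetical authorize() from the snippet above, repeated so this
# example is self-contained.
def authorize(is_authenticated, user_id, caller):
    result = False
    if is_authenticated is True or user_id.startswith('admin') or caller == "privileged":
        result = True
    return result

def test_authorize_admin_id():
    # is_authenticated is False, so user_id.startswith('admin') is evaluated (True)
    assert authorize(False, "admin_jane", "web") is True

def test_authorize_privileged_caller():
    # the first two conditions are False, so caller == "privileged" is evaluated (True)
    assert authorize(False, "bob", "privileged") is True

def test_authorize_denied():
    # all three conditions are False: this covers the False branch of the if-statement
    assert authorize(False, "bob", "web") is False
```

Between these tests and the original one, every line runs, both branches execute, and every condition is evaluated to both True and False.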
You want to target 100% condition coverage, but achieving 100% of any coverage can be challenging in a real system. Exception handling and user interface code in complex systems can be hard to test for a variety of reasons.
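As a small illustration of why exception-handling code needs deliberate tests, consider this hypothetical parse_port() helper: the except branch is only covered if some test forces the error to occur.

```python
def parse_port(text):
    try:
        return int(text)
    except ValueError:
        # this branch only executes when int() fails
        return None

def test_parse_port_valid():
    assert parse_port("8080") == 8080

def test_parse_port_invalid():
    # without this test, the except branch would never execute,
    # and coverage of parse_port() would be incomplete
    assert parse_port("not-a-port") is None
```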
In practice, most organizations aim for 100% line coverage as a target.
Using pytest-cov to compute test coverage

Most test frameworks, like pytest and JUnit (for Java), have tools for computing test coverage; computing these measures manually would be far too tedious. These tools compute line coverage, but not always branch coverage, and almost never condition coverage because of the technical challenges of automating that calculation.
We installed the pytest-cov tool when we installed pytest. If needed, refer back to the instructions for installing pytest and pytest-cov.
Open a Terminal in the directory where you were working on your unit testing examples.

Running pytest-cov
Run the following command from your Terminal in the directory with sample.py
and test_sample.py
from the previous labs.
pytest --cov .
This tells pytest to run the tests in the current directory (.) and generate a coverage report. You should see something similar to the following:
============================================================= test session starts ==============================================================
platform darwin -- Python 3.12.2, pytest-8.3.3, pluggy-1.5.0
rootdir: /Users/laymanl/git/uncw-seng201/content/en/labs/testing/coverage
plugins: cov-5.0.0
collected 4 items
test_sample.py .... [100%]
---------- coverage: platform darwin, python 3.12.2-final-0 ----------
Name Stmts Miss Cover
------------------------------------
sample.py 23 6 74%
test_sample.py 23 3 87%
------------------------------------
TOTAL 46 9 80%
============================================================== 4 passed in 0.03s ===============================================================
pytest executes your tests as well, so you will see any test failures output to the screen. Note that failing tests can lower your test coverage!
To measure coverage for a specific directory, run pytest --cov <target_directory>. To also measure branch coverage, add the --cov-branch flag: pytest --cov --cov-branch <target-directory>.
You can also generate an HTML report with pytest --cov --cov-branch --cov-report=html <target-directory>
. This will create a folder named htmlcov/
in the working directory. Open the htmlcov/index.html
file in a web browser, and you will see an interactive report that shows you which lines are and are not covered.