
Commit 9f1a2bc

ENH: Validate PET data objects' attributes at instantiation
Validate PET data objects' attributes at instantiation: ensure that the attributes are present and match the expected dimensionalities.

**PET class attributes**: Refactor the PET attributes so that only `midframe` and `total_duration` are required and accepted by the constructor; these are the only parameters required by the current PET model. Remove `uptake` from the constructor: the PET data class does not need to know the uptake values held across its frames; it is rather the estimator that needs those values so that the iterator can pick the frames following the appropriate sorting. Validate and format the attributes so as to avoid missing or inconsistent data: specifically, require the `midframe` array to have the same length as the number of frames in the data object, and disallow the last `midframe` value being larger than the total duration. Make `_compute_uptake_statistic` public so that users can call it.

**`from_nii` function**: Refactor the `from_nii` function to accept filenames instead of a mix of filenames (e.g. the PET image sequence and brainmask) and temporal attribute arrays. This honors the name of the function, increases consistency with the dMRI counterpart, and allows offering a uniform API. The only temporal parameter required by BIDS is the frame start time (`FrameTimesStart`); thus, the temporal attribute JSON (sidecar) file is required to contain that key. The values required to model a PET dataset for the purposes of NiFreeze, namely the midframe and total duration values, are computed from the frame times. It is assumed that each frame's duration entirely spans the time elapsed between two consecutive frame time values. Refactor and rename the `_compute_frame_duration` function so that it computes and returns the parameters required to instantiate a PET data object; the computation of the relevant temporal values is thus done in this one place only.
Use the `get_data` utils function in `from_nii` to handle the data type automatically when loading the PET data.

**`PET.load` class method**: Remove the `PET.load` class method and rely on the `data.__init__.load` function:

- If an HDF5 filename is provided, it is assumed to host all necessary information, and the data module's `load` function should take care of loading all data.
- If the provided arguments are NIfTI files plus other data files, the function will call the `pet.PET.from_nii` function.

Change the `kwargs` arguments so that the relevant keyword arguments now present in the `from_nii` function can be identified. Change accordingly the `PET.load(pet_file, json_file)` call in the PET notebook and the `test_pet_load` test function.

**Tests**: Refactor the PET data creation fixture in `conftest.py` to accept `frame_time` (as it is the only argument required by BIDS and the one that allows computing the rest) and to return the necessary data. Remove values that are no longer needed (i.e. `total_duration`). Refactor the tests accordingly and increase consistency with the `dmri` data module testing helper functions; this reduces cognitive load and maintenance burden. Add additional object instantiation equality checks: verify that objects instantiated by reading NIfTI files equal objects instantiated directly. Check the PET dataset attributes systematically in round-trip tests by collecting all named attributes that need to be tested. Modify the PET model and integration tests accordingly.

Take advantage of the patch set to make other opinionated choices:

- Prefer the global `setup_random_pet_data` fixture over the local `random_dataset` fixture: it allows controlling the parameters of the generated data and increases consistency with the practice adopted across the dMRI dataset tests. Remove the `random_dataset` fixture.
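The dispatch logic described above can be sketched as follows. The reader functions here are placeholder stubs standing in for `nifreeze`'s actual loaders, and the `gradients_file` keyword is an assumption about how the dMRI branch is selected; only the `temporal_file` trigger is confirmed by this commit's diff.

```python
from pathlib import Path


# Placeholder stubs for the modality-specific readers (hypothetical).
def load_hdf5(filename):
    return ("hdf5", filename)


def dmri_from_nii(filename, **kwargs):
    return ("dmri", filename)


def pet_from_nii(filename, **kwargs):
    return ("pet", filename)


def load(filename, brainmask_file=None, **kwargs):
    """Sketch of a single entry point replacing PET.load."""
    filename = Path(filename)
    if filename.suffix in (".h5", ".hdf5"):
        # An HDF5 file is assumed to host all necessary information.
        return load_hdf5(filename)
    if "gradients_file" in kwargs:
        # dMRI-specific keyword arguments route to the dMRI reader.
        return dmri_from_nii(filename, brainmask_file=brainmask_file, **kwargs)
    if "temporal_file" in kwargs:
        # A temporal sidecar (JSON with FrameTimesStart) routes to PET.
        return pet_from_nii(filename, brainmask_file=brainmask_file, **kwargs)
    raise ValueError("Insufficient information to identify the dataset type")
```

The key design point is that the caller never names the modality: the file extension and the modality-specific keyword arguments alone determine which reader runs.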
- Prefer `assert np.allclose` over `np.testing.assert_array_equal` for the sake of consistency.

**Dependencies**: Require `attrs>=24.1.0` so that `attrs.Converter` can be used. Documentation: https://www.attrs.org/en/25.4.0/api.html#converters
1 parent f892e52 commit 9f1a2bc

File tree

11 files changed: +1403 −234 lines


docs/notebooks/pet_motion_estimation.ipynb

Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@
     "from os import getenv\n",
     "from pathlib import Path\n",
     "\n",
-    "from nifreeze.data.pet import PET\n",
+    "from nifreeze.data.pet import from_nii\n",
     "\n",
     "# Install test data from gin.g-node.org:\n",
     "# $ datalad install -g https://gin.g-node.org/nipreps-data/tests-nifreeze.git\n",
@@ -29,7 +29,7 @@
     " DATA_PATH / \"pet_data\" / \"sub-02\" / \"ses-baseline\" / \"pet\" / \"sub-02_ses-baseline_pet.json\"\n",
     ")\n",
     "\n",
-    "pet_dataset = PET.load(pet_file, json_file)"
+    "pet_dataset = from_nii(pet_file, temporal_file=json_file)"
    ]
   },
   {

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ classifiers = [
 license = "Apache-2.0"
 requires-python = ">=3.10"
 dependencies = [
-    "attrs>=20.1.0",
+    "attrs>=24.1.0",
     "dipy>=1.5.0",
     "joblib",
     "nipype>=1.5.1,<2.0",

src/nifreeze/data/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ def load(
         from nifreeze.data.dmri import from_nii as dmri_from_nii

         return dmri_from_nii(filename, brainmask_file=brainmask_file, **kwargs)
-    elif {"frame_time", "frame_duration"} & set(kwargs):
+    elif {"temporal_file"} & set(kwargs):
         from nifreeze.data.pet import from_nii as pet_from_nii

         return pet_from_nii(filename, brainmask_file=brainmask_file, **kwargs)

0 commit comments

Comments
 (0)