diff -Nru nibabel-5.0.0/Changelog nibabel-5.1.0/Changelog --- nibabel-5.0.0/Changelog 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/Changelog 2023-04-03 14:48:22.000000000 +0000 @@ -25,6 +25,68 @@ References like "pr/298" refer to github pull request numbers. +5.1.0 (Monday 3 April 2023) +=========================== + +New feature release in the 5.1.x series. + +Enhancements +------------ +* Make :mod:`nibabel.imagestats` available with ``import nibabel`` (pr/1208) + (Fabian Perez, reviewed by CM) +* Use symmetric threshold for identifying unit quaternions on qform + calculations (pr/1182) (CM, reviewed by MB) +* Type annotations for :mod:`~nibabel.loadsave` (pr/1213) and + :class:`~nibabel.spatialimages.SpatialImage` APIs (pr/1179), + :mod:`~nibabel.deprecated`, :mod:`~nibabel.deprecator`, + :mod:`~nibabel.onetime` and :mod:`~nibabel.optpkg` modules (pr/1188), + :mod:`~nibabel.volumeutils` (pr/1189), :mod:`~nibabel.filename_parser` and + :mod:`~nibabel.openers` (pr/1197) (CM, reviewed by Zvi Baratz) + +Bug fixes +--------- +* Require explicit overrides to write GIFTI files that contain data arrays + with data types not permitted by the GIFTI standard (pr/1199) (CM, reviewed + by Alexis Thual) + +Maintenance +----------- +* Move compression detection logic into a private ``nibabel._compression`` + module, resolving unexpected errors from pyzstd. (pr/1212) (CM) +* Improved consistency of docstring formatting (pr/1200) (Zvi Baratz, reviewed + by CM) +* Modernized README text (pr/1195) (Zvi Baratz, reviewed by CM) +* Updated README badges to include package distributions (pr/1192) (Horea + Christian, reviewed by CM) +* Removed all dependencies on distutils and setuptools (pr/1190) (CM, + reviewed by Zvi Baratz) +* Add a ``_version.pyi`` stub to allow mypy_ to run without building nibabel + (pr/1210) (CM) + + +.. _mypy: https://mypy.readthedocs.io/ + + +5.0.1 (Sunday 12 February 2023) +=============================== + +Bug-fix release in the 5.0.x series. + +Bug fixes +--------- +* Support ragged voxel arrays in + :class:`~nibabel.cifti2.cifti2_axes.ParcelsAxis` (pr/1194) (Michiel Cottaar, + reviewed by CM) +* Return to cwd on exception in :class:`~nibabel.tmpdirs.InTemporaryDirectory` + (pr/1184) (CM) + +Maintenance +----------- +* Add ``py.typed`` to module root to enable use of types in downstream + projects (CM, reviewed by Fernando Pérez-Garcia) +* Cache git-archive separately from Python packages in GitHub Actions + (pr/1186) (CM, reviewed by Zvi Baratz) + 5.0.0 (Monday 9 January 2023) ============================= diff -Nru nibabel-5.0.0/debian/changelog nibabel-5.1.0/debian/changelog --- nibabel-5.0.0/debian/changelog 2023-01-30 15:15:52.000000000 +0000 +++ nibabel-5.1.0/debian/changelog 2023-07-20 17:24:07.000000000 +0000 @@ -1,3 +1,18 @@ +nibabel (5.1.0-1) unstable; urgency=medium + + * New upstream version + * Remove trailing whitespace in debian/changelog (routine-update) + * d/rules: fix permissions for two scripts without shebang. + Affected scripts are dicomwrappers.py and test_dicomwrappers.py. + * d/rules: find and delete stray .gitignore. + A couple of them landed in the nibabel python3 modules directory. + * d/s/lintian-overrides: hide test-leaves-python-version-untested. + The warning chokes on a check that is relevant only for interactive + use, so no meddling with multiple python versions is expected, or even + possible. + + -- Étienne Mollier Thu, 20 Jul 2023 19:24:07 +0200 + nibabel (5.0.0-2) unstable; urgency=medium * d/control: add myself to uploaders. 
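As a quick illustration of the pr/1208 entry in the 5.1.0 changelog above (a sketch, not part of this diff): with 5.1.0, ``nibabel.imagestats`` is reachable from a bare ``import nibabel``, with no explicit ``import nibabel.imagestats``. The ``count_nonzero_voxels`` and ``mask_volume`` helpers used below are the ones that module provides.

    import numpy as np
    import nibabel

    # Toy binary mask: 8 nonzero voxels, each 2.0 x 2.0 x 2.0 mm.
    data = np.zeros((4, 4, 4), dtype=np.uint8)
    data[:2, :2, :2] = 1
    img = nibabel.Nifti1Image(data, np.diag([2.0, 2.0, 2.0, 1.0]))

    # Before pr/1208 this attribute access required importing the
    # submodule explicitly.
    print(nibabel.imagestats.count_nonzero_voxels(img))  # 8
    print(nibabel.imagestats.mask_volume(img))           # 64.0 (mm^3)
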
@@ -8,7 +23,7 @@ nibabel (5.0.0-1) unstable; urgency=medium * Team upload. - + [ Andreas Tille ] * New upstream version * Standards-Version: 4.6.2 (routine-update) diff -Nru nibabel-5.0.0/debian/control nibabel-5.1.0/debian/control --- nibabel-5.0.0/debian/control 2023-01-30 15:15:52.000000000 +0000 +++ nibabel-5.1.0/debian/control 2023-07-20 17:24:07.000000000 +0000 @@ -41,8 +41,6 @@ python3-fuse Suggests: python-nibabel-doc, python3-mock -Breaks: python-nibabel -Replaces: python-nibabel Description: Python3 bindings to various neuroimaging data formats NiBabel provides read and write access to some common medical and neuroimaging file formats, including: ANALYZE (plain, SPM99, SPM2), GIFTI, @@ -52,11 +50,11 @@ Package: python-nibabel-doc Architecture: all +Multi-Arch: foreign Section: doc Depends: ${misc:Depends}, libjs-jquery, libjs-mathjax -Multi-Arch: foreign Description: documentation for NiBabel NiBabel provides read and write access to some common medical and neuroimaging file formats, including: ANALYZE (plain, SPM99, SPM2), GIFTI, diff -Nru nibabel-5.0.0/debian/rules nibabel-5.1.0/debian/rules --- nibabel-5.0.0/debian/rules 2023-01-30 15:15:52.000000000 +0000 +++ nibabel-5.1.0/debian/rules 2023-07-20 17:24:07.000000000 +0000 @@ -2,6 +2,7 @@ # -*- mode: makefile; coding: utf-8 -*- BINDIR=$(CURDIR)/debian/tmp/usr/bin/ +NIBABEL_MODULE_DIR=$(CURDIR)/debian/*/usr/lib/python*/dist-packages/nibabel # Setting MPLCONFIGDIR to please matplotlib demanding writable HOME # on older systems (e.g. 12.04) @@ -10,9 +11,11 @@ %: dh $@ --buildsystem=pybuild --builddirectory=build --with=python3 -override_dh_auto_build: - dh_auto_build - PYTHONPATH=$(CURDIR) $(MAKE) -C doc html SPHINXBUILD=/usr/share/sphinx/scripts/python3/sphinx-build PYTHON=python3 +execute_after_dh_auto_build: + PYTHONPATH=$(CURDIR) \ + $(MAKE) -C doc html \ + SPHINXBUILD=/usr/share/sphinx/scripts/python3/sphinx-build \ + PYTHON=python3 # but remove jquery copy (later on link to Debian's version) -rm build/html/_static/jquery.js # objects inventory is of no use for the package @@ -23,31 +26,44 @@ # enable when we believe that the tests should pass override_dh_auto_test: ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS))) - PYBUILD_SYSTEM=custom PYBUILD_TEST_ARGS="PATH={dir}/.pybuild/cpython3_{version}/scripts:$(PATH) PYTHONPATH={build_dir} LC_ALL=C.UTF-8 {interpreter} -m pytest --verbose" dh_auto_test + PYBUILD_SYSTEM=custom \ + PYBUILD_TEST_ARGS="PATH={dir}/.pybuild/cpython3_{version}/scripts:$(PATH) PYTHONPATH={build_dir} LC_ALL=C.UTF-8 {interpreter} -m pytest --verbose" \ + dh_auto_test endif +execute_after_dh_auto_install: + find $(NIBABEL_MODULE_DIR) -type f -name .gitignore -delete + ## immediately useable documentation ## and exemplar data (they are small excerpts anyway) +COMPRESS_EXCLUDE_EXTS := .py .html .css .jpg .txt .js .json .rtc .par .bin override_dh_compress: - dh_compress -X.py -X.html -X.css -X.jpg -X.txt -X.js -X.json -X.rtc -X.par -X.bin + dh_compress $(foreach EXT,$(COMPRESS_EXCLUDE_EXTS),-X$(EXT)) override_dh_installman: - PYTHONPATH=build/lib:$(PYTHONPATH) help2man -N \ - -n 'convert PARREC image to NIfTI' $(BINDIR)/parrec2nii > build/parrec2nii.1 - PYTHONPATH=build/lib:$(PYTHONPATH) help2man -N \ - -n 'FUSE filesystem on top of a directory with DICOMs' $(BINDIR)/nib-dicomfs > build/nib-dicomfs.1 - PYTHONPATH=build/lib:$(PYTHONPATH) help2man -N \ - -n "'ls' for neuroimaging files" $(BINDIR)/nib-ls > build/nib-ls.1 - PYTHONPATH=build/lib:$(PYTHONPATH) help2man -N \ - -n "header diagnostic tool" 
$(BINDIR)/nib-nifti-dx > build/nib-nifti-dx.1 - + PYTHONPATH=build/lib:$(PYTHONPATH) \ + help2man -N $(BINDIR)/parrec2nii \ + -n 'convert PARREC image to NIfTI' \ + > build/parrec2nii.1 + PYTHONPATH=build/lib:$(PYTHONPATH) \ + help2man -N $(BINDIR)/nib-dicomfs \ + -n 'FUSE filesystem on top of a directory with DICOMs' \ + > build/nib-dicomfs.1 + PYTHONPATH=build/lib:$(PYTHONPATH) \ + help2man -N $(BINDIR)/nib-ls \ + -n "'ls' for neuroimaging files" \ + > build/nib-ls.1 + PYTHONPATH=build/lib:$(PYTHONPATH) \ + help2man -N $(BINDIR)/nib-nifti-dx \ + -n "header diagnostic tool" \ + > build/nib-nifti-dx.1 dh_installman build/*.1 -override_dh_fixperms: - find debian/ -name 'gzipbase64.gii' | xargs chmod -x - find debian/ -name 'umass_anonymized.PAR' | xargs chmod -x - dh_fixperms +execute_before_dh_fixperms: + chmod -x $(NIBABEL_MODULE_DIR)/gifti/tests/data/gzipbase64.gii + chmod -x $(NIBABEL_MODULE_DIR)/tests/data/umass_anonymized.PAR + chmod -x $(NIBABEL_MODULE_DIR)/nicom/dicomwrappers.py + chmod -x $(NIBABEL_MODULE_DIR)/nicom/tests/test_dicomwrappers.py -override_dh_clean: +execute_before_dh_clean: $(MAKE) clean - dh_clean diff -Nru nibabel-5.0.0/debian/source/lintian-overrides nibabel-5.1.0/debian/source/lintian-overrides --- nibabel-5.0.0/debian/source/lintian-overrides 1970-01-01 00:00:00.000000000 +0000 +++ nibabel-5.1.0/debian/source/lintian-overrides 2023-07-20 17:24:07.000000000 +0000 @@ -0,0 +1,2 @@ +# Look at the remark in debian/tests/run-unit-test for the justification. +nibabel source: test-leaves-python-version-untested [debian/tests/run-unit-test] Binary files /tmp/tmpd8ekqkiu/0ZBK3ZlRfz/nibabel-5.0.0/doc/pics/logo.png and /tmp/tmpd8ekqkiu/y8uFPP2kO7/nibabel-5.1.0/doc/pics/logo.png differ diff -Nru nibabel-5.0.0/doc/source/changelog.rst nibabel-5.1.0/doc/source/changelog.rst --- nibabel-5.0.0/doc/source/changelog.rst 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/doc/source/changelog.rst 2023-04-03 14:48:22.000000000 +0000 @@ -25,6 +25,68 @@ References like "pr/298" refer to github pull request numbers. +5.1.0 (Monday 3 April 2023) +=========================== + +New feature release in the 5.1.x series. + +Enhancements +------------ +* Make :mod:`nibabel.imagestats` available with ``import nibabel`` (pr/1208) + (Fabian Perez, reviewed by CM) +* Use symmetric threshold for identifying unit quaternions on qform + calculations (pr/1182) (CM, reviewed by MB) +* Type annotations for :mod:`~nibabel.loadsave` (pr/1213) and + :class:`~nibabel.spatialimages.SpatialImage` APIs (pr/1179), + :mod:`~nibabel.deprecated`, :mod:`~nibabel.deprecator`, + :mod:`~nibabel.onetime` and :mod:`~nibabel.optpkg` modules (pr/1188), + :mod:`~nibabel.volumeutils` (pr/1189), :mod:`~nibabel.filename_parser` and + :mod:`~nibabel.openers` (pr/1197) (CM, reviewed by Zvi Baratz) + +Bug fixes +--------- +* Require explicit overrides to write GIFTI files that contain data arrays + with data types not permitted by the GIFTI standard (pr/1199) (CM, reviewed + by Alexis Thual) + +Maintenance +----------- +* Move compression detection logic into a private ``nibabel._compression`` + module, resolving unexpected errors from pyzstd. 
(pr/1212) (CM) +* Improved consistency of docstring formatting (pr/1200) (Zvi Baratz, reviewed + by CM) +* Modernized README text (pr/1195) (Zvi Baratz, reviewed by CM) +* Updated README badges to include package distributions (pr/1192) (Horea + Christian, reviewed by CM) +* Removed all dependencies on distutils and setuptools (pr/1190) (CM, + reviewed by Zvi Baratz) +* Add a ``_version.pyi`` stub to allow mypy_ to run without building nibabel + (pr/1210) (CM) + + +.. _mypy: https://mypy.readthedocs.io/ + + +5.0.1 (Sunday 12 February 2023) +=============================== + +Bug-fix release in the 5.0.x series. + +Bug fixes +--------- +* Support ragged voxel arrays in + :class:`~nibabel.cifti2.cifti2_axes.ParcelsAxis` (pr/1194) (Michiel Cottaar, + reviewed by CM) +* Return to cwd on exception in :class:`~nibabel.tmpdirs.InTemporaryDirectory` + (pr/1184) (CM) + +Maintenance +----------- +* Add ``py.typed`` to module root to enable use of types in downstream + projects (CM, reviewed by Fernando Pérez-Garcia) +* Cache git-archive separately from Python packages in GitHub Actions + (pr/1186) (CM, reviewed by Zvi Baratz) + 5.0.0 (Monday 9 January 2023) ============================= diff -Nru nibabel-5.0.0/doc/source/index.rst nibabel-5.1.0/doc/source/index.rst --- nibabel-5.0.0/doc/source/index.rst 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/doc/source/index.rst 2023-04-03 14:48:22.000000000 +0000 @@ -7,6 +7,10 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### +======= +NiBabel +======= + .. include:: _long_description.inc Documentation @@ -119,6 +123,8 @@ * Andrew Van * Jérôme Dockès * Jacob Roberts +* Horea Christian +* Fabian Perez License reprise =============== diff -Nru nibabel-5.0.0/doc/source/installation.rst nibabel-5.1.0/doc/source/installation.rst --- nibabel-5.0.0/doc/source/installation.rst 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/doc/source/installation.rst 2023-04-03 14:48:22.000000000 +0000 @@ -86,7 +86,7 @@ * Python_ 3.8 or greater * NumPy_ 1.19 or greater * Packaging_ 17.0 or greater -* Setuptools_ +* importlib-resources_ 1.3 or greater (or Python 3.9+) * SciPy_ (optional, for full SPM-ANALYZE support) * h5py_ (optional, for MINC2 support) * PyDICOM_ 1.0.0 or greater (optional, for DICOM support) diff -Nru nibabel-5.0.0/doc/source/links_names.txt nibabel-5.1.0/doc/source/links_names.txt --- nibabel-5.0.0/doc/source/links_names.txt 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/doc/source/links_names.txt 2023-04-03 14:48:22.000000000 +0000 @@ -114,6 +114,7 @@ .. _python imaging library: https://pypi.python.org/pypi/Pillow .. _h5py: https://www.h5py.org/ .. _packaging: https://packaging.pypa.io +.. _importlib-resources: https://importlib-resources.readthedocs.io/ .. Python imaging projects .. _PyMVPA: http://www.pymvpa.org diff -Nru nibabel-5.0.0/doc/source/nifti_images.rst nibabel-5.1.0/doc/source/nifti_images.rst --- nibabel-5.0.0/doc/source/nifti_images.rst 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/doc/source/nifti_images.rst 2023-04-03 14:48:22.000000000 +0000 @@ -273,8 +273,8 @@ the sform: ``get_qform()``, ``set_qform()``. >>> n1_header.get_qform(coded=True) -(array([[ -2. , 0. , 0. , 117.86], - [ -0. , 1.97, -0.36, -35.72], +(array([[ -2. , 0. , -0. , 117.86], + [ 0. , 1.97, -0.36, -35.72], [ 0. , 0.32, 2.17, -7.25], [ 0. , 0. , 0. , 1. 
]]), 1) diff -Nru nibabel-5.0.0/.git_archival.txt nibabel-5.1.0/.git_archival.txt --- nibabel-5.0.0/.git_archival.txt 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/.git_archival.txt 2023-04-03 14:48:22.000000000 +0000 @@ -1,4 +1,4 @@ -node: 323a88ae3e278d1d0f1bcbad223575a323a607ac -node-date: 2023-01-09T09:50:15-05:00 -describe-name: 5.0.0 -ref-names: tag: 5.0.0 +node: 97457374dc7d02560c6f442a892fcc56cd41a98e +node-date: 2023-04-03T10:48:22-04:00 +describe-name: 5.1.0 +ref-names: tag: 5.1.0 diff -Nru nibabel-5.0.0/.github/workflows/stable.yml nibabel-5.1.0/.github/workflows/stable.yml --- nibabel-5.0.0/.github/workflows/stable.yml 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/.github/workflows/stable.yml 2023-04-03 14:48:22.000000000 +0000 @@ -46,11 +46,17 @@ run: python -m build - run: twine check dist/* - name: Build git archive - run: git archive -v -o dist/nibabel-archive.tgz HEAD - - uses: actions/upload-artifact@v3 + run: mkdir archive && git archive -v -o archive/nibabel-archive.tgz HEAD + - name: Upload sdist and wheel artifacts + uses: actions/upload-artifact@v3 with: name: dist path: dist/ + - name: Upload git archive artifact + uses: actions/upload-artifact@v3 + with: + name: archive + path: archive/ test-package: runs-on: ubuntu-latest @@ -59,10 +65,18 @@ matrix: package: ['wheel', 'sdist', 'archive'] steps: - - uses: actions/download-artifact@v3 + - name: Download sdist and wheel artifacts + if: matrix.package != 'archive' + uses: actions/download-artifact@v3 with: name: dist path: dist/ + - name: Download git archive artifact + if: matrix.package == 'archive' + uses: actions/download-artifact@v3 + with: + name: archive + path: archive/ - uses: actions/setup-python@v4 with: python-version: 3 @@ -71,14 +85,14 @@ - name: Update pip run: pip install --upgrade pip - name: Install wheel - run: pip install dist/nibabel-*.whl if: matrix.package == 'wheel' + run: pip install dist/nibabel-*.whl - name: Install sdist - run: pip install dist/nibabel-*.tar.gz if: matrix.package == 'sdist' + run: pip install dist/nibabel-*.tar.gz - name: Install archive - run: pip install dist/nibabel-archive.tgz if: matrix.package == 'archive' + run: pip install archive/nibabel-archive.tgz - run: python -c 'import nibabel; print(nibabel.__version__)' - name: Install test extras run: pip install nibabel[test] @@ -165,17 +179,17 @@ - name: Install NiBabel run: tools/ci/install.sh - name: Run tests - run: tools/ci/check.sh if: ${{ matrix.check != 'skiptests' }} + run: tools/ci/check.sh - name: Submit coverage - run: tools/ci/submit_coverage.sh if: ${{ always() }} + run: tools/ci/submit_coverage.sh - name: Upload pytest test results + if: ${{ always() && matrix.check == 'test' }} uses: actions/upload-artifact@v3 with: name: pytest-results-${{ matrix.os }}-${{ matrix.python-version }} path: for_testing/test-results.xml - if: ${{ always() && matrix.check == 'test' }} publish: runs-on: ubuntu-latest diff -Nru nibabel-5.0.0/.mailmap nibabel-5.1.0/.mailmap --- nibabel-5.0.0/.mailmap 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/.mailmap 2023-04-03 14:48:22.000000000 +0000 @@ -30,6 +30,7 @@ Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com> Eric Larson Eric89GXL Eric Larson larsoner +Fabian Perez Fernando Pérez-García Fernando Félix C. Morency Felix C. Morency Félix C. Morency Félix C. 
Morency diff -Nru nibabel-5.0.0/min-requirements.txt nibabel-5.1.0/min-requirements.txt --- nibabel-5.0.0/min-requirements.txt 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/min-requirements.txt 2023-04-03 14:48:22.000000000 +0000 @@ -1,4 +1,4 @@ # Auto-generated by tools/update_requirements.py numpy ==1.19 packaging ==17 -setuptools +importlib_resources ==1.3; python_version < '3.9' diff -Nru nibabel-5.0.0/nibabel/affines.py nibabel-5.1.0/nibabel/affines.py --- nibabel-5.0.0/nibabel/affines.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/affines.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,7 +1,6 @@ # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*- # vi: set ft=python sts=4 ts=4 sw=4 et: -"""Utility routines for working with points and affine transforms -""" +"""Utility routines for working with points and affine transforms""" from functools import reduce import numpy as np @@ -100,7 +99,7 @@ def to_matvec(transform): - """Split a transform into its matrix and vector components. + """Split a transform into its matrix and vector components The transformation must be represented in homogeneous coordinates and is split into its rotation matrix and translation vector components. @@ -312,8 +311,7 @@ def obliquity(affine): - r""" - Estimate the *obliquity* an affine's axes represent. + r"""Estimate the *obliquity* an affine's axes represent The term *obliquity* is defined here as the rotation of those axes with respect to the cardinal axes. diff -Nru nibabel-5.0.0/nibabel/analyze.py nibabel-5.1.0/nibabel/analyze.py --- nibabel-5.0.0/nibabel/analyze.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/analyze.py 2023-04-03 14:48:22.000000000 +0000 @@ -83,8 +83,6 @@ """ from __future__ import annotations -from typing import Type - import numpy as np from .arrayproxy import ArrayProxy @@ -895,7 +893,8 @@ class AnalyzeImage(SpatialImage): """Class for basic Analyze format image""" - header_class: Type[AnalyzeHeader] = AnalyzeHeader + header_class: type[AnalyzeHeader] = AnalyzeHeader + header: AnalyzeHeader _meta_sniff_len = header_class.sizeof_hdr files_types: tuple[tuple[str, str], ...] = (('image', '.img'), ('header', '.hdr')) valid_exts: tuple[str, ...] = ('.img', '.hdr') @@ -1064,5 +1063,5 @@ hdr['scl_inter'] = inter -load = AnalyzeImage.load +load = AnalyzeImage.from_filename save = AnalyzeImage.instance_to_filename diff -Nru nibabel-5.0.0/nibabel/arrayproxy.py nibabel-5.1.0/nibabel/arrayproxy.py --- nibabel-5.0.0/nibabel/arrayproxy.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/arrayproxy.py 2023-04-03 14:48:22.000000000 +0000 @@ -59,6 +59,9 @@ if ty.TYPE_CHECKING: # pragma: no cover import numpy.typing as npt + # Taken from numpy/__init__.pyi + _DType = ty.TypeVar('_DType', bound=np.dtype[ty.Any]) + class ArrayLike(ty.Protocol): """Protocol for numpy ndarray-like objects @@ -68,9 +71,19 @@ """ shape: tuple[int, ...] - ndim: int - def __array__(self, dtype: npt.DTypeLike | None = None, /) -> npt.NDArray: + @property + def ndim(self) -> int: + ... # pragma: no cover + + # If no dtype is passed, any dtype might be returned, depending on the array-like + @ty.overload + def __array__(self, dtype: None = ..., /) -> np.ndarray[ty.Any, np.dtype[ty.Any]]: + ... # pragma: no cover + + # Any dtype might be passed, and *that* dtype must be returned + @ty.overload + def __array__(self, dtype: _DType, /) -> np.ndarray[ty.Any, _DType]: ... 
# pragma: no cover def __getitem__(self, key, /) -> npt.NDArray: diff -Nru nibabel-5.0.0/nibabel/arraywriters.py nibabel-5.1.0/nibabel/arraywriters.py --- nibabel-5.0.0/nibabel/arraywriters.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/arraywriters.py 2023-04-03 14:48:22.000000000 +0000 @@ -28,7 +28,6 @@ something else to make sense of conversions between float and int, or between larger ints and smaller. """ - import numpy as np from .casting import ( diff -Nru nibabel-5.0.0/nibabel/brikhead.py nibabel-5.1.0/nibabel/brikhead.py --- nibabel-5.0.0/nibabel/brikhead.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/brikhead.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,8 +6,7 @@ # copyright and license terms. # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -""" -Class for reading AFNI BRIK/HEAD datasets +"""Class for reading AFNI BRIK/HEAD datasets See https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.attributes.html for information on what is required to have a valid BRIK/HEAD dataset. @@ -476,6 +475,7 @@ """ header_class = AFNIHeader + header: AFNIHeader valid_exts = ('.brik', '.head') files_types = (('image', '.brik'), ('header', '.head')) _compressed_suffixes = ('.gz', '.bz2', '.Z', '.zst') @@ -564,4 +564,4 @@ return file_map -load = AFNIImage.load +load = AFNIImage.from_filename diff -Nru nibabel-5.0.0/nibabel/cifti2/cifti2_axes.py nibabel-5.1.0/nibabel/cifti2/cifti2_axes.py --- nibabel-5.0.0/nibabel/cifti2/cifti2_axes.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/cifti2/cifti2_axes.py 2023-04-03 14:48:22.000000000 +0000 @@ -775,14 +775,9 @@ maps names of surface elements to integers (not needed for volumetric CIFTI-2 files) """ self.name = np.asanyarray(name, dtype='U') - as_array = np.asanyarray(voxels) - if as_array.ndim == 1: - voxels = as_array.astype('object') - else: - voxels = np.empty(len(voxels), dtype='object') - for idx in range(len(voxels)): - voxels[idx] = as_array[idx] - self.voxels = np.asanyarray(voxels, dtype='object') + self.voxels = np.empty(len(voxels), dtype='object') + for idx, vox in enumerate(voxels): + self.voxels[idx] = vox self.vertices = np.asanyarray(vertices, dtype='object') self.affine = np.asanyarray(affine) if affine is not None else None self.volume_shape = volume_shape diff -Nru nibabel-5.0.0/nibabel/cifti2/cifti2.py nibabel-5.1.0/nibabel/cifti2/cifti2.py --- nibabel-5.0.0/nibabel/cifti2/cifti2.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/cifti2/cifti2.py 2023-04-03 14:48:22.000000000 +0000 @@ -1411,6 +1411,7 @@ """Class for single file CIFTI-2 format image""" header_class = Cifti2Header + header: Cifti2Header valid_exts = Nifti2Image.valid_exts files_types = Nifti2Image.files_types makeable = False diff -Nru nibabel-5.0.0/nibabel/cifti2/tests/test_axes.py nibabel-5.1.0/nibabel/cifti2/tests/test_axes.py --- nibabel-5.0.0/nibabel/cifti2/tests/test_axes.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/cifti2/tests/test_axes.py 2023-04-03 14:48:22.000000000 +0000 @@ -494,13 +494,34 @@ assert prc != prc_other # test direct initialisation - axes.ParcelsAxis( + test_parcel = axes.ParcelsAxis( voxels=[np.ones((3, 2), dtype=int)], vertices=[{}], name=['single_voxel'], affine=np.eye(4), volume_shape=(2, 3, 4), ) + assert len(test_parcel) == 1 + + # test direct initialisation with multiple parcels + test_parcel = axes.ParcelsAxis( + voxels=[np.ones((3, 2), dtype=int), np.zeros((3, 2), dtype=int)], + vertices=[{}, {}], + 
name=['first_parcel', 'second_parcel'], + affine=np.eye(4), + volume_shape=(2, 3, 4), + ) + assert len(test_parcel) == 2 + + # test direct initialisation with ragged voxel/vertices array + test_parcel = axes.ParcelsAxis( + voxels=[np.ones((3, 2), dtype=int), np.zeros((5, 2), dtype=int)], + vertices=[{}, {}], + name=['first_parcel', 'second_parcel'], + affine=np.eye(4), + volume_shape=(2, 3, 4), + ) + assert len(test_parcel) == 2 with pytest.raises(ValueError): axes.ParcelsAxis( diff -Nru nibabel-5.0.0/nibabel/cmdline/tests/test_conform.py nibabel-5.1.0/nibabel/cmdline/tests/test_conform.py --- nibabel-5.0.0/nibabel/cmdline/tests/test_conform.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/cmdline/tests/test_conform.py 2023-04-03 14:48:22.000000000 +0000 @@ -15,7 +15,7 @@ import nibabel as nib from nibabel.cmdline.conform import main from nibabel.optpkg import optional_package -from nibabel.testing import test_data +from nibabel.testing import get_test_data _, have_scipy, _ = optional_package('scipy.ndimage') needs_scipy = unittest.skipUnless(have_scipy, 'These tests need scipy') @@ -23,7 +23,7 @@ @needs_scipy def test_default(tmpdir): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmpdir / 'output.nii.gz' main([str(infile), str(outfile)]) assert outfile.isfile() @@ -41,7 +41,7 @@ @needs_scipy def test_nondefault(tmpdir): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmpdir / 'output.nii.gz' out_shape = (100, 100, 150) voxel_size = (1, 2, 4) diff -Nru nibabel-5.0.0/nibabel/cmdline/tests/test_convert.py nibabel-5.1.0/nibabel/cmdline/tests/test_convert.py --- nibabel-5.0.0/nibabel/cmdline/tests/test_convert.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/cmdline/tests/test_convert.py 2023-04-03 14:48:22.000000000 +0000 @@ -13,11 +13,11 @@ import nibabel as nib from nibabel.cmdline import convert -from nibabel.testing import test_data +from nibabel.testing import get_test_data def test_convert_noop(tmp_path): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmp_path / 'output.nii.gz' orig = nib.load(infile) @@ -31,7 +31,7 @@ assert converted.shape == orig.shape assert converted.get_data_dtype() == orig.get_data_dtype() - infile = test_data(fname='resampled_anat_moved.nii') + infile = get_test_data(fname='resampled_anat_moved.nii') with pytest.raises(FileExistsError): convert.main([str(infile), str(outfile)]) @@ -50,7 +50,7 @@ @pytest.mark.parametrize('data_dtype', ('u1', 'i2', 'float32', 'float', 'int64')) def test_convert_dtype(tmp_path, data_dtype): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmp_path / 'output.nii.gz' orig = nib.load(infile) @@ -78,7 +78,7 @@ ], ) def test_convert_by_extension(tmp_path, ext, img_class): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmp_path / f'output.{ext}' orig = nib.load(infile) @@ -102,7 +102,7 @@ ], ) def test_convert_imgtype(tmp_path, ext, img_class): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmp_path / f'output.{ext}' orig = nib.load(infile) @@ -118,7 +118,7 @@ def test_convert_nifti_int_fail(tmp_path): - infile = test_data(fname='anatomical.nii') + infile = get_test_data(fname='anatomical.nii') outfile = tmp_path / f'output.nii' orig = nib.load(infile) diff -Nru 
nibabel-5.0.0/nibabel/_compression.py nibabel-5.1.0/nibabel/_compression.py --- nibabel-5.0.0/nibabel/_compression.py 1970-01-01 00:00:00.000000000 +0000 +++ nibabel-5.1.0/nibabel/_compression.py 2023-04-03 14:48:22.000000000 +0000 @@ -0,0 +1,49 @@ +# emacs: -*- mode: python-mode; py-indent-offset: 4; indent-tabs-mode: nil -*- +# vi: set ft=python sts=4 ts=4 sw=4 et: +### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## +# +# See COPYING file distributed along with the NiBabel package for the +# copyright and license terms. +# +### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## +"""Constants and types for dealing transparently with compression""" +from __future__ import annotations + +import bz2 +import gzip +import io +import typing as ty + +from .optpkg import optional_package + +if ty.TYPE_CHECKING: # pragma: no cover + import indexed_gzip # type: ignore + import pyzstd + + HAVE_INDEXED_GZIP = True + HAVE_ZSTD = True +else: + indexed_gzip, HAVE_INDEXED_GZIP, _ = optional_package('indexed_gzip') + pyzstd, HAVE_ZSTD, _ = optional_package('pyzstd') + + +# Collections of types for isinstance or exception matching +COMPRESSED_FILE_LIKES: tuple[type[io.IOBase], ...] = ( + bz2.BZ2File, + gzip.GzipFile, +) +COMPRESSION_ERRORS: tuple[type[BaseException], ...] = ( + OSError, # BZ2File + gzip.BadGzipFile, +) + +if HAVE_INDEXED_GZIP: + COMPRESSED_FILE_LIKES += (indexed_gzip.IndexedGzipFile,) + COMPRESSION_ERRORS += (indexed_gzip.ZranError,) + from indexed_gzip import IndexedGzipFile # type: ignore +else: + IndexedGzipFile = gzip.GzipFile + +if HAVE_ZSTD: + COMPRESSED_FILE_LIKES += (pyzstd.ZstdFile,) + COMPRESSION_ERRORS += (pyzstd.ZstdError,) diff -Nru nibabel-5.0.0/nibabel/dataobj_images.py nibabel-5.1.0/nibabel/dataobj_images.py --- nibabel-5.0.0/nibabel/dataobj_images.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/dataobj_images.py 2023-04-03 14:48:22.000000000 +0000 @@ -15,17 +15,22 @@ from .arrayproxy import ArrayLike from .deprecated import deprecate_with_version -from .filebasedimages import FileBasedHeader, FileBasedImage, FileMap, FileSpec +from .filebasedimages import FileBasedHeader, FileBasedImage +from .fileholders import FileMap if ty.TYPE_CHECKING: # pragma: no cover import numpy.typing as npt + from .filename_parser import FileSpec + +ArrayImgT = ty.TypeVar('ArrayImgT', bound='DataobjImage') + class DataobjImage(FileBasedImage): """Template class for images that have dataobj data stores""" _data_cache: np.ndarray | None - _fdata_cache: np.ndarray | None + _fdata_cache: np.ndarray[ty.Any, np.dtype[np.floating]] | None def __init__( self, @@ -222,7 +227,7 @@ self, caching: ty.Literal['fill', 'unchanged'] = 'fill', dtype: npt.DTypeLike = np.float64, - ) -> np.ndarray: + ) -> np.ndarray[ty.Any, np.dtype[np.floating]]: """Return floating point image data with necessary scaling applied The image ``dataobj`` property can be an array proxy or an array. 
An
@@ -421,12 +426,12 @@
 
     @classmethod
     def from_file_map(
-        klass,
+        klass: type[ArrayImgT],
         file_map: FileMap,
         *,
         mmap: bool | ty.Literal['c', 'r'] = True,
         keep_file_open: bool | None = None,
-    ):
+    ) -> ArrayImgT:
         """Class method to create image from mapping in ``file_map``
 
         Parameters
@@ -460,12 +465,12 @@
 
     @classmethod
     def from_filename(
-        klass,
+        klass: type[ArrayImgT],
         filename: FileSpec,
         *,
         mmap: bool | ty.Literal['c', 'r'] = True,
         keep_file_open: bool | None = None,
-    ):
+    ) -> ArrayImgT:
         """Class method to create image from filename `filename`
 
         Parameters
diff -Nru nibabel-5.0.0/nibabel/data.py nibabel-5.1.0/nibabel/data.py
--- nibabel-5.0.0/nibabel/data.py	2023-01-09 14:50:15.000000000 +0000
+++ nibabel-5.1.0/nibabel/data.py	2023-04-03 14:48:22.000000000 +0000
@@ -1,8 +1,6 @@
 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
 # vi: set ft=python sts=4 ts=4 sw=4 et:
-"""
-Utilities to find files from NIPY data packages
-"""
+"""Utilities to find files from NIPY data packages"""
 import configparser
 import glob
 import os
diff -Nru nibabel-5.0.0/nibabel/deprecated.py nibabel-5.1.0/nibabel/deprecated.py
--- nibabel-5.0.0/nibabel/deprecated.py	2023-01-09 14:50:15.000000000 +0000
+++ nibabel-5.1.0/nibabel/deprecated.py	2023-04-03 14:48:22.000000000 +0000
@@ -1,13 +1,15 @@
-"""Module to help with deprecating objects and classes
-"""
+"""Module to help with deprecating objects and classes"""
 from __future__ import annotations
 
+import typing as ty
 import warnings
-from typing import Type
 
 from .deprecator import Deprecator
 from .pkg_info import cmp_pkg_version
 
+if ty.TYPE_CHECKING:  # pragma: no cover
+    P = ty.ParamSpec('P')
+
 
 class ModuleProxy:
     """Proxy for module that may not yet have been imported
@@ -30,14 +32,14 @@
     module.
     """
 
-    def __init__(self, module_name):
+    def __init__(self, module_name: str) -> None:
         self._module_name = module_name
 
-    def __getattr__(self, key):
+    def __getattr__(self, key: str) -> ty.Any:
         mod = __import__(self._module_name, fromlist=[''])
         return getattr(mod, key)
 
-    def __repr__(self):
+    def __repr__(self) -> str:
         return f'<module proxy for {self._module_name}>'
 
 
@@ -60,7 +62,7 @@
 
     warn_message = 'This class will be removed in future versions'
 
-    def __init__(self, *args, **kwargs):
+    def __init__(self, *args: P.args, **kwargs: P.kwargs) -> None:
         warnings.warn(self.warn_message, FutureWarning, stacklevel=2)
         super().__init__(*args, **kwargs)
 
@@ -85,12 +87,12 @@
     msg: str,
     version: str,
     *,
-    warning_class: Type[Warning] = FutureWarning,
-    error_class: Type[Exception] = RuntimeError,
+    warning_class: type[Warning] = FutureWarning,
+    error_class: type[Exception] = RuntimeError,
     warning_rec: str = '',
     error_rec: str = '',
     stacklevel: int = 2,
-):
+) -> None:
     """Warn or error with appropriate messages for changing functionality.
Parameters diff -Nru nibabel-5.0.0/nibabel/deprecator.py nibabel-5.1.0/nibabel/deprecator.py --- nibabel-5.0.0/nibabel/deprecator.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/deprecator.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,10 +1,15 @@ -"""Class for recording and reporting deprecations -""" +"""Class for recording and reporting deprecations""" +from __future__ import annotations import functools import re +import typing as ty import warnings +if ty.TYPE_CHECKING: # pragma: no cover + T = ty.TypeVar('T') + P = ty.ParamSpec('P') + _LEADING_WHITE = re.compile(r'^(\s*)') TESTSETUP = """ @@ -38,7 +43,7 @@ pass -def _ensure_cr(text): +def _ensure_cr(text: str) -> str: """Remove trailing whitespace and add carriage return Ensures that `text` always ends with a carriage return @@ -46,7 +51,12 @@ return text.rstrip() + '\n' -def _add_dep_doc(old_doc, dep_doc, setup='', cleanup=''): +def _add_dep_doc( + old_doc: str, + dep_doc: str, + setup: str = '', + cleanup: str = '', +) -> str: """Add deprecation message `dep_doc` to docstring in `old_doc` Parameters @@ -55,6 +65,10 @@ Docstring from some object. dep_doc : str Deprecation warning to add to top of docstring, after initial line. + setup : str, optional + Doctest setup text + cleanup : str, optional + Doctest teardown text Returns ------- @@ -76,7 +90,9 @@ if next_line >= len(old_lines): # nothing following first paragraph, just append message return old_doc + '\n' + dep_doc - indent = _LEADING_WHITE.match(old_lines[next_line]).group() + leading_white = _LEADING_WHITE.match(old_lines[next_line]) + assert leading_white is not None # Type narrowing, since this always matches + indent = leading_white.group() setup_lines = [indent + L for L in setup.splitlines()] dep_lines = [indent + L for L in [''] + dep_doc.splitlines() + ['']] cleanup_lines = [indent + L for L in cleanup.splitlines()] @@ -113,15 +129,15 @@ def __init__( self, - version_comparator, - warn_class=DeprecationWarning, - error_class=ExpiredDeprecationError, - ): + version_comparator: ty.Callable[[str], int], + warn_class: type[Warning] = DeprecationWarning, + error_class: type[Exception] = ExpiredDeprecationError, + ) -> None: self.version_comparator = version_comparator self.warn_class = warn_class self.error_class = error_class - def is_bad_version(self, version_str): + def is_bad_version(self, version_str: str) -> bool: """Return True if `version_str` is too high Tests `version_str` with ``self.version_comparator`` @@ -139,7 +155,14 @@ """ return self.version_comparator(version_str) == -1 - def __call__(self, message, since='', until='', warn_class=None, error_class=None): + def __call__( + self, + message: str, + since: str = '', + until: str = '', + warn_class: type[Warning] | None = None, + error_class: type[Exception] | None = None, + ) -> ty.Callable[[ty.Callable[P, T]], ty.Callable[P, T]]: """Return decorator function function for deprecation warning / error Parameters @@ -164,8 +187,8 @@ deprecator : func Function returning a decorator. 
""" - warn_class = warn_class or self.warn_class - error_class = error_class or self.error_class + exception = error_class if error_class is not None else self.error_class + warning = warn_class if warn_class is not None else self.warn_class messages = [message] if (since, until) != ('', ''): messages.append('') @@ -174,19 +197,21 @@ if until: messages.append( f"* {'Raises' if self.is_bad_version(until) else 'Will raise'} " - f'{error_class} as of version: {until}' + f'{exception} as of version: {until}' ) message = '\n'.join(messages) - def deprecator(func): + def deprecator(func: ty.Callable[P, T]) -> ty.Callable[P, T]: @functools.wraps(func) - def deprecated_func(*args, **kwargs): + def deprecated_func(*args: P.args, **kwargs: P.kwargs) -> T: if until and self.is_bad_version(until): - raise error_class(message) - warnings.warn(message, warn_class, stacklevel=2) + raise exception(message) + warnings.warn(message, warning, stacklevel=2) return func(*args, **kwargs) keep_doc = deprecated_func.__doc__ + if keep_doc is None: + keep_doc = '' setup = TESTSETUP cleanup = TESTCLEANUP # After expiration, remove all but the first paragraph. diff -Nru nibabel-5.0.0/nibabel/dft.py nibabel-5.1.0/nibabel/dft.py --- nibabel-5.0.0/nibabel/dft.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/dft.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,8 +7,7 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## # Copyright (C) 2011 Christian Haselgrove -"""DICOM filesystem tools -""" +"""DICOM filesystem tools""" import contextlib diff -Nru nibabel-5.0.0/nibabel/ecat.py nibabel-5.1.0/nibabel/ecat.py --- nibabel-5.0.0/nibabel/ecat.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/ecat.py 2023-04-03 14:48:22.000000000 +0000 @@ -42,7 +42,6 @@ GPL and some of the header files are adapted from CTI files (called CTI code below). It's not clear what the licenses are for these files. 
""" - import warnings from numbers import Integral @@ -747,12 +746,14 @@ class EcatImage(SpatialImage): """Class returns a list of Ecat images, with one image(hdr/data) per frame""" - _header = EcatHeader - header_class = _header + header_class = EcatHeader + subheader_class = EcatSubHeader valid_exts = ('.v',) - _subheader = EcatSubHeader files_types = (('image', '.v'), ('header', '.v')) + header: EcatHeader + _subheader: EcatSubHeader + ImageArrayProxy = EcatImageArrayProxy def __init__(self, dataobj, affine, header, subheader, mlist, extra=None, file_map=None): @@ -879,14 +880,14 @@ hdr_file, img_file = klass._get_fileholders(file_map) # note header and image are in same file hdr_fid = hdr_file.get_prepare_fileobj(mode='rb') - header = klass._header.from_fileobj(hdr_fid) + header = klass.header_class.from_fileobj(hdr_fid) hdr_copy = header.copy() # LOAD MLIST mlist = np.zeros((header['num_frames'], 4), dtype=np.int32) mlist_data = read_mlist(hdr_fid, hdr_copy.endianness) mlist[: len(mlist_data)] = mlist_data # LOAD SUBHEADERS - subheaders = klass._subheader(hdr_copy, mlist, hdr_fid) + subheaders = klass.subheader_class(hdr_copy, mlist, hdr_fid) # LOAD DATA # Class level ImageArrayProxy data = klass.ImageArrayProxy(subheaders) diff -Nru nibabel-5.0.0/nibabel/environment.py nibabel-5.1.0/nibabel/environment.py --- nibabel-5.0.0/nibabel/environment.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/environment.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,9 +1,6 @@ # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*- # vi: set ft=python sts=4 ts=4 sw=4 et: -""" -Settings from the system environment relevant to NIPY -""" - +"""Settings from the system environment relevant to NIPY""" import os from os.path import join as pjoin diff -Nru nibabel-5.0.0/nibabel/eulerangles.py nibabel-5.1.0/nibabel/eulerangles.py --- nibabel-5.0.0/nibabel/eulerangles.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/eulerangles.py 2023-04-03 14:48:22.000000000 +0000 @@ -82,7 +82,6 @@ ``y``, followed by rotation around ``x``, is known (confusingly) as "xyz", pitch-roll-yaw, Cardan angles, or Tait-Bryan angles. """ - import math from functools import reduce diff -Nru nibabel-5.0.0/nibabel/filebasedimages.py nibabel-5.1.0/nibabel/filebasedimages.py --- nibabel-5.0.0/nibabel/filebasedimages.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/filebasedimages.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,24 +6,29 @@ # copyright and license terms. 
# ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -"""Common interface for any image format--volume or surface, binary or xml.""" +"""Common interface for any image format--volume or surface, binary or xml""" from __future__ import annotations import io -import os import typing as ty from copy import deepcopy -from typing import Type from urllib import request -from .fileholders import FileHolder -from .filename_parser import TypesFilenamesError, splitext_addext, types_filenames +from ._compression import COMPRESSION_ERRORS +from .fileholders import FileHolder, FileMap +from .filename_parser import TypesFilenamesError, _stringify_path, splitext_addext, types_filenames from .openers import ImageOpener -FileSpec = ty.Union[str, os.PathLike] -FileMap = ty.Mapping[str, FileHolder] +if ty.TYPE_CHECKING: # pragma: no cover + from .filename_parser import ExtensionSpec, FileSpec + FileSniff = ty.Tuple[bytes, str] +ImgT = ty.TypeVar('ImgT', bound='FileBasedImage') +HdrT = ty.TypeVar('HdrT', bound='FileBasedHeader') + +StreamImgT = ty.TypeVar('StreamImgT', bound='SerializableImage') + class ImageFileError(Exception): pass @@ -33,7 +38,7 @@ """Template class to implement header protocol""" @classmethod - def from_header(klass, header=None): + def from_header(klass: type[HdrT], header: FileBasedHeader | ty.Mapping | None = None) -> HdrT: if header is None: return klass() # I can't do isinstance here because it is not necessarily true @@ -47,19 +52,19 @@ ) @classmethod - def from_fileobj(klass, fileobj: io.IOBase): - raise NotImplementedError + def from_fileobj(klass: type[HdrT], fileobj: io.IOBase) -> HdrT: + raise NotImplementedError # pragma: no cover - def write_to(self, fileobj: io.IOBase): - raise NotImplementedError + def write_to(self, fileobj: io.IOBase) -> None: + raise NotImplementedError # pragma: no cover - def __eq__(self, other): - raise NotImplementedError + def __eq__(self, other: object) -> bool: + raise NotImplementedError # pragma: no cover - def __ne__(self, other): + def __ne__(self, other: object) -> bool: return not self == other - def copy(self) -> FileBasedHeader: + def copy(self: HdrT) -> HdrT: """Copy object to independent representation The copy should not be affected by any changes to the original @@ -152,9 +157,9 @@ work. """ - header_class: Type[FileBasedHeader] = FileBasedHeader + header_class: type[FileBasedHeader] = FileBasedHeader _meta_sniff_len: int = 0 - files_types: tuple[tuple[str, str | None], ...] = (('image', None),) + files_types: tuple[ExtensionSpec, ...] = (('image', None),) valid_exts: tuple[str, ...] = () _compressed_suffixes: tuple[str, ...] 
= () @@ -186,7 +191,7 @@ self._header = self.header_class.from_header(header) if extra is None: extra = {} - self.extra = extra + self.extra = dict(extra) if file_map is None: file_map = self.__class__.make_file_map() @@ -196,7 +201,7 @@ def header(self) -> FileBasedHeader: return self._header - def __getitem__(self, key): + def __getitem__(self, key) -> None: """No slicing or dictionary interface for images""" raise TypeError('Cannot slice image objects.') @@ -221,7 +226,7 @@ characteristic_type = self.files_types[0][0] return self.file_map[characteristic_type].filename - def set_filename(self, filename: str): + def set_filename(self, filename: str) -> None: """Sets the files in the object from a given filename The different image formats may check whether the filename has @@ -239,16 +244,16 @@ self.file_map = self.__class__.filespec_to_file_map(filename) @classmethod - def from_filename(klass, filename: FileSpec): + def from_filename(klass: type[ImgT], filename: FileSpec) -> ImgT: file_map = klass.filespec_to_file_map(filename) return klass.from_file_map(file_map) @classmethod - def from_file_map(klass, file_map: FileMap): - raise NotImplementedError + def from_file_map(klass: type[ImgT], file_map: FileMap) -> ImgT: + raise NotImplementedError # pragma: no cover @classmethod - def filespec_to_file_map(klass, filespec: FileSpec): + def filespec_to_file_map(klass, filespec: FileSpec) -> FileMap: """Make `file_map` for this class from filename `filespec` Class method @@ -282,7 +287,7 @@ file_map[key] = FileHolder(filename=fname) return file_map - def to_filename(self, filename: FileSpec, **kwargs): + def to_filename(self, filename: FileSpec, **kwargs) -> None: r"""Write image to files implied by filename string Parameters @@ -301,11 +306,11 @@ self.file_map = self.filespec_to_file_map(filename) self.to_file_map(**kwargs) - def to_file_map(self, file_map: FileMap | None = None, **kwargs): - raise NotImplementedError + def to_file_map(self, file_map: FileMap | None = None, **kwargs) -> None: + raise NotImplementedError # pragma: no cover @classmethod - def make_file_map(klass, mapping: ty.Mapping[str, str | io.IOBase] | None = None): + def make_file_map(klass, mapping: ty.Mapping[str, str | io.IOBase] | None = None) -> FileMap: """Class method to make files holder for this image type Parameters @@ -338,7 +343,7 @@ load = from_filename @classmethod - def instance_to_filename(klass, img: FileBasedImage, filename: FileSpec): + def instance_to_filename(klass, img: FileBasedImage, filename: FileSpec) -> None: """Save `img` in our own format, to name implied by `filename` This is a class method @@ -354,20 +359,20 @@ img.to_filename(filename) @classmethod - def from_image(klass, img: FileBasedImage): + def from_image(klass: type[ImgT], img: FileBasedImage) -> ImgT: """Class method to create new instance of own class from `img` Parameters ---------- - img : ``spatialimage`` instance + img : ``FileBasedImage`` instance In fact, an object with the API of ``FileBasedImage``. 
Returns ------- - cimg : ``spatialimage`` instance + img : ``FileBasedImage`` instance Image, of our own class """ - raise NotImplementedError() + raise NotImplementedError # pragma: no cover @classmethod def _sniff_meta_for( @@ -375,7 +380,7 @@ filename: FileSpec, sniff_nbytes: int, sniff: FileSniff | None = None, - ): + ) -> FileSniff | None: """Sniff metadata for image represented by `filename` Parameters @@ -405,7 +410,7 @@ t_fnames = types_filenames( filename, klass.files_types, trailing_suffixes=klass._compressed_suffixes ) - meta_fname = t_fnames.get('header', filename) + meta_fname = t_fnames.get('header', _stringify_path(filename)) # Do not re-sniff if it would be from the same file if sniff is not None and sniff[1] == meta_fname: @@ -415,7 +420,7 @@ try: with ImageOpener(meta_fname, 'rb') as fobj: binaryblock = fobj.read(sniff_nbytes) - except (OSError, EOFError): + except COMPRESSION_ERRORS + (OSError, EOFError): return None return (binaryblock, meta_fname) @@ -425,7 +430,7 @@ filename: FileSpec, sniff: FileSniff | None = None, sniff_max: int = 1024, - ): + ) -> tuple[bool, FileSniff | None]: """Return True if `filename` may be image matching this class Parameters @@ -527,14 +532,14 @@ """ @classmethod - def _filemap_from_iobase(klass, io_obj: io.IOBase): + def _filemap_from_iobase(klass, io_obj: io.IOBase) -> FileMap: """For single-file image types, make a file map with the correct key""" if len(klass.files_types) > 1: raise NotImplementedError('(de)serialization is undefined for multi-file images') return klass.make_file_map({klass.files_types[0][0]: io_obj}) @classmethod - def from_stream(klass, io_obj: io.IOBase): + def from_stream(klass: type[StreamImgT], io_obj: io.IOBase) -> StreamImgT: """Load image from readable IO stream Convert to BytesIO to enable seeking, if input stream is not seekable @@ -548,7 +553,7 @@ io_obj = io.BytesIO(io_obj.read()) return klass.from_file_map(klass._filemap_from_iobase(io_obj)) - def to_stream(self, io_obj: io.IOBase, **kwargs): + def to_stream(self, io_obj: io.IOBase, **kwargs) -> None: r"""Save image to writable IO stream Parameters @@ -561,7 +566,7 @@ self.to_file_map(self._filemap_from_iobase(io_obj), **kwargs) @classmethod - def from_bytes(klass, bytestring: bytes): + def from_bytes(klass: type[StreamImgT], bytestring: bytes) -> StreamImgT: """Construct image from a byte string Class method @@ -592,7 +597,9 @@ return bio.getvalue() @classmethod - def from_url(klass, url: str | request.Request, timeout: float = 5): + def from_url( + klass: type[StreamImgT], url: str | request.Request, timeout: float = 5 + ) -> StreamImgT: """Retrieve and load an image from a URL Class method diff -Nru nibabel-5.0.0/nibabel/fileholders.py nibabel-5.1.0/nibabel/fileholders.py --- nibabel-5.0.0/nibabel/fileholders.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/fileholders.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,7 +7,10 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """Fileholder class""" +from __future__ import annotations +import io +import typing as ty from copy import copy from .openers import ImageOpener @@ -20,7 +23,12 @@ class FileHolder: """class to contain filename, fileobj and file position""" - def __init__(self, filename=None, fileobj=None, pos=0): + def __init__( + self, + filename: str | None = None, + fileobj: io.IOBase | None = None, + pos: int = 0, + ): """Initialize FileHolder instance Parameters @@ -38,7 +46,7 @@ self.fileobj = fileobj self.pos = pos - def 
get_prepare_fileobj(self, *args, **kwargs): + def get_prepare_fileobj(self, *args, **kwargs) -> ImageOpener: """Return fileobj if present, or return fileobj from filename Set position to that given in self.pos @@ -70,7 +78,7 @@ raise FileHolderError('No filename or fileobj present') return obj - def same_file_as(self, other): + def same_file_as(self, other: FileHolder) -> bool: """Test if `self` refers to same files / fileobj as `other` Parameters @@ -87,12 +95,15 @@ return (self.filename == other.filename) and (self.fileobj == other.fileobj) @property - def file_like(self): + def file_like(self) -> str | io.IOBase | None: """Return ``self.fileobj`` if not None, otherwise ``self.filename``""" return self.fileobj if self.fileobj is not None else self.filename -def copy_file_map(file_map): +FileMap = ty.Mapping[str, FileHolder] + + +def copy_file_map(file_map: FileMap) -> FileMap: r"""Copy mapping of fileholders given by `file_map` Parameters @@ -106,7 +117,4 @@ Copy of `file_map`, using shallow copy of ``FileHolder``\s """ - fm_copy = {} - for key, fh in file_map.items(): - fm_copy[key] = copy(fh) - return fm_copy + return {key: copy(fh) for key, fh in file_map.items()} diff -Nru nibabel-5.0.0/nibabel/filename_parser.py nibabel-5.1.0/nibabel/filename_parser.py --- nibabel-5.0.0/nibabel/filename_parser.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/filename_parser.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,16 +7,21 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """Create filename pairs, triplets etc, with expected extensions""" +from __future__ import annotations import os -import pathlib +import typing as ty + +if ty.TYPE_CHECKING: # pragma: no cover + FileSpec = str | os.PathLike[str] + ExtensionSpec = tuple[str, str | None] class TypesFilenamesError(Exception): pass -def _stringify_path(filepath_or_buffer): +def _stringify_path(filepath_or_buffer: FileSpec) -> str: """Attempt to convert a path-like object to a string. Parameters @@ -29,30 +34,21 @@ Notes ----- - Objects supporting the fspath protocol (python 3.6+) are coerced - according to its __fspath__ method. - For backwards compatibility with older pythons, pathlib.Path objects - are specially coerced. - Any other object is passed through unchanged, which includes bytes, - strings, buffers, or anything else that's not even path-like. - - Copied from: - https://github.com/pandas-dev/pandas/blob/325dd686de1589c17731cf93b649ed5ccb5a99b4/pandas/io/common.py#L131-L160 + Adapted from: + https://github.com/pandas-dev/pandas/blob/325dd68/pandas/io/common.py#L131-L160 """ - if hasattr(filepath_or_buffer, '__fspath__'): + if isinstance(filepath_or_buffer, os.PathLike): return filepath_or_buffer.__fspath__() - elif isinstance(filepath_or_buffer, pathlib.Path): - return str(filepath_or_buffer) return filepath_or_buffer def types_filenames( - template_fname, - types_exts, - trailing_suffixes=('.gz', '.bz2'), - enforce_extensions=True, - match_case=False, -): + template_fname: FileSpec, + types_exts: ty.Sequence[ExtensionSpec], + trailing_suffixes: ty.Sequence[str] = ('.gz', '.bz2'), + enforce_extensions: bool = True, + match_case: bool = False, +) -> dict[str, str]: """Return filenames with standard extensions from template name The typical case is returning image and header filenames for an @@ -153,12 +149,12 @@ # we've found .IMG as the extension, we want .HDR as the matching # one. Let's only do this when the extension is all upper or all # lower case. 
- proc_ext = lambda s: s + proc_ext: ty.Callable[[str], str] = lambda s: s if found_ext: if found_ext == found_ext.upper(): - proc_ext = lambda s: s.upper() + proc_ext = str.upper elif found_ext == found_ext.lower(): - proc_ext = lambda s: s.lower() + proc_ext = str.lower for name, ext in types_exts: if name == direct_set_name: tfns[name] = template_fname @@ -172,7 +168,12 @@ return tfns -def parse_filename(filename, types_exts, trailing_suffixes, match_case=False): +def parse_filename( + filename: FileSpec, + types_exts: ty.Sequence[ExtensionSpec], + trailing_suffixes: ty.Sequence[str], + match_case: bool = False, +) -> tuple[str, str, str | None, str | None]: """Split filename into fileroot, extension, trailing suffix; guess type. Parameters @@ -231,9 +232,9 @@ break guessed_name = None found_ext = None - for name, ext in types_exts: - if ext and endswith(filename, ext): - extpos = -len(ext) + for name, type_ext in types_exts: + if type_ext and endswith(filename, type_ext): + extpos = -len(type_ext) found_ext = filename[extpos:] filename = filename[:extpos] guessed_name = name @@ -243,15 +244,19 @@ return (filename, found_ext, ignored, guessed_name) -def _endswith(whole, end): +def _endswith(whole: str, end: str) -> bool: return whole.endswith(end) -def _iendswith(whole, end): +def _iendswith(whole: str, end: str) -> bool: return whole.lower().endswith(end.lower()) -def splitext_addext(filename, addexts=('.gz', '.bz2', '.zst'), match_case=False): +def splitext_addext( + filename: FileSpec, + addexts: ty.Sequence[str] = ('.gz', '.bz2', '.zst'), + match_case: bool = False, +) -> tuple[str, str, str]: """Split ``/pth/fname.ext.gz`` into ``/pth/fname, .ext, .gz`` where ``.gz`` may be any of passed `addext` trailing suffixes. diff -Nru nibabel-5.0.0/nibabel/fileslice.py nibabel-5.1.0/nibabel/fileslice.py --- nibabel-5.0.0/nibabel/fileslice.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/fileslice.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,6 +1,4 @@ -"""Utilities for getting array slices out of file-like objects -""" - +"""Utilities for getting array slices out of file-like objects""" import operator from functools import reduce from mmap import mmap diff -Nru nibabel-5.0.0/nibabel/fileutils.py nibabel-5.1.0/nibabel/fileutils.py --- nibabel-5.0.0/nibabel/fileutils.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/fileutils.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,8 +6,7 @@ # copyright and license terms. 
# # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -"""Utilities for reading and writing to binary file formats -""" +"""Utilities for reading and writing to binary file formats""" def read_zt_byte_strings(fobj, n_strings=1, bufsize=1024): diff -Nru nibabel-5.0.0/nibabel/freesurfer/mghformat.py nibabel-5.1.0/nibabel/freesurfer/mghformat.py --- nibabel-5.0.0/nibabel/freesurfer/mghformat.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/freesurfer/mghformat.py 2023-04-03 14:48:22.000000000 +0000 @@ -462,6 +462,7 @@ """Class for MGH format image""" header_class = MGHHeader + header: MGHHeader valid_exts = ('.mgh', '.mgz') # Register that .mgz extension signals gzip compression ImageOpener.compress_ext_map['.mgz'] = ImageOpener.gz_def @@ -589,5 +590,5 @@ hdr['Pxyz_c'] = c_ras -load = MGHImage.load +load = MGHImage.from_filename save = MGHImage.instance_to_filename diff -Nru nibabel-5.0.0/nibabel/gifti/gifti.py nibabel-5.1.0/nibabel/gifti/gifti.py --- nibabel-5.0.0/nibabel/gifti/gifti.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/gifti/gifti.py 2023-04-03 14:48:22.000000000 +0000 @@ -16,7 +16,8 @@ import base64 import sys import warnings -from typing import Type +from copy import copy +from typing import Type, cast import numpy as np @@ -27,6 +28,12 @@ from ..nifti1 import data_type_codes, intent_codes, xform_codes from .util import KIND2FMT, array_index_order_codes, gifti_encoding_codes, gifti_endian_codes +GIFTI_DTYPES = ( + data_type_codes['NIFTI_TYPE_UINT8'], + data_type_codes['NIFTI_TYPE_INT32'], + data_type_codes['NIFTI_TYPE_FLOAT32'], +) + class _GiftiMDList(list): """List view of GiftiMetaData object that will translate most operations""" @@ -81,7 +88,8 @@ >>> GiftiMetaData({"key": "val"}) - >>> nvpairs = GiftiNVPairs(name='key', value='val') + >>> with pytest.deprecated_call(): + ... nvpairs = GiftiNVPairs(name='key', value='val') >>> with pytest.warns(FutureWarning): ... GiftiMetaData(nvpairs) @@ -460,7 +468,17 @@ self.data = None if data is None else np.asarray(data) self.intent = intent_codes.code[intent] if datatype is None: - datatype = 'none' if self.data is None else self.data.dtype + if self.data is None: + datatype = 'none' + elif data_type_codes[self.data.dtype] in GIFTI_DTYPES: + datatype = self.data.dtype + else: + raise ValueError( + f'Data array has type {self.data.dtype}. ' + 'The GIFTI standard only supports uint8, int32 and float32 arrays.\n' + 'Explicitly cast the data array to a supported dtype or pass an ' + 'explicit "datatype" parameter to GiftiDataArray().' 
+ ) self.datatype = data_type_codes.code[datatype] self.encoding = gifti_encoding_codes.code[encoding] self.endian = gifti_endian_codes.code[endian] @@ -701,8 +719,8 @@ Consider a surface GIFTI file: >>> import nibabel as nib - >>> from nibabel.testing import test_data - >>> surf_img = nib.load(test_data('gifti', 'ascii.gii')) + >>> from nibabel.testing import get_test_data + >>> surf_img = nib.load(get_test_data('gifti', 'ascii.gii')) The coordinate data, which is indicated by the ``NIFTI_INTENT_POINTSET`` intent code, may be retrieved using any of the following equivalent @@ -754,7 +772,7 @@ The following image is a GIFTI file with ten (10) data arrays of the same size, and with intent code 2001 (``NIFTI_INTENT_TIME_SERIES``): - >>> func_img = nib.load(test_data('gifti', 'task.func.gii')) + >>> func_img = nib.load(get_test_data('gifti', 'task.func.gii')) When aggregating time series data, these arrays are concatenated into a single, vertex-by-timestep array: @@ -834,20 +852,45 @@ GIFTI.append(dar._to_xml_element()) return GIFTI - def to_xml(self, enc='utf-8') -> bytes: + def to_xml(self, enc='utf-8', *, mode='strict') -> bytes: """Return XML corresponding to image content""" + if mode == 'strict': + if any(arr.datatype not in GIFTI_DTYPES for arr in self.darrays): + raise ValueError( + 'GiftiImage contains data arrays with invalid data types; ' + 'use mode="compat" to automatically cast to conforming types' + ) + elif mode == 'compat': + darrays = [] + for arr in self.darrays: + if arr.datatype not in GIFTI_DTYPES: + arr = copy(arr) + # TODO: Better typing for recoders + dtype = cast(np.dtype, data_type_codes.dtype[arr.datatype]) + if np.issubdtype(dtype, np.floating): + arr.datatype = data_type_codes['float32'] + elif np.issubdtype(dtype, np.integer): + arr.datatype = data_type_codes['int32'] + else: + raise ValueError(f'Cannot convert {dtype} to float32/int32') + darrays.append(arr) + gii = copy(self) + gii.darrays = darrays + return gii.to_xml(enc=enc, mode='strict') + elif mode != 'force': + raise TypeError(f'Unknown mode {mode}') header = b""" """ return header + super().to_xml(enc) # Avoid the indirection of going through to_file_map - def to_bytes(self, enc='utf-8'): - return self.to_xml(enc=enc) + def to_bytes(self, enc='utf-8', *, mode='strict'): + return self.to_xml(enc=enc, mode=mode) to_bytes.__doc__ = SerializableImage.to_bytes.__doc__ - def to_file_map(self, file_map=None, enc='utf-8'): + def to_file_map(self, file_map=None, enc='utf-8', *, mode='strict'): """Save the current image to the specified file_map Parameters @@ -863,7 +906,7 @@ if file_map is None: file_map = self.file_map with file_map['image'].get_prepare_fileobj('wb') as f: - f.write(self.to_xml(enc=enc)) + f.write(self.to_xml(enc=enc, mode=mode)) @classmethod def from_file_map(klass, file_map, buffer_size=35000000, mmap=True): diff -Nru nibabel-5.0.0/nibabel/gifti/tests/test_gifti.py nibabel-5.1.0/nibabel/gifti/tests/test_gifti.py --- nibabel-5.0.0/nibabel/gifti/tests/test_gifti.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/gifti/tests/test_gifti.py 2023-04-03 14:48:22.000000000 +0000 @@ -14,7 +14,7 @@ from ... import load from ...fileholders import FileHolder from ...nifti1 import data_type_codes -from ...testing import test_data +from ...testing import get_test_data from .. 
import ( GiftiCoordSystem, GiftiDataArray, @@ -33,11 +33,13 @@ DATA_FILE6, ) +rng = np.random.default_rng() + def test_agg_data(): - surf_gii_img = load(test_data('gifti', 'ascii.gii')) - func_gii_img = load(test_data('gifti', 'task.func.gii')) - shape_gii_img = load(test_data('gifti', 'rh.shape.curv.gii')) + surf_gii_img = load(get_test_data('gifti', 'ascii.gii')) + func_gii_img = load(get_test_data('gifti', 'task.func.gii')) + shape_gii_img = load(get_test_data('gifti', 'rh.shape.curv.gii')) # add timeseries data with intent code ``none`` point_data = surf_gii_img.get_arrays_from_intent('pointset')[0].data @@ -81,7 +83,7 @@ assert gi.numDA == 0 # Test from numpy numeric array - data = np.random.random((5,)) + data = rng.random(5, dtype=np.float32) da = GiftiDataArray(data) gi.add_gifti_data_array(da) assert gi.numDA == 1 @@ -98,7 +100,7 @@ # Remove one gi = GiftiImage() - da = GiftiDataArray(np.zeros((5,)), intent=0) + da = GiftiDataArray(np.zeros((5,), np.float32), intent=0) gi.add_gifti_data_array(da) gi.remove_gifti_data_array_by_intent(3) @@ -126,6 +128,42 @@ pytest.raises(TypeError, assign_metadata, 'not-a-meta') +@pytest.mark.parametrize('label', data_type_codes.value_set('label')) +def test_image_typing(label): + dtype = data_type_codes.dtype[label] + if dtype == np.void: + return + arr = 127 * rng.random(20) + try: + cast = arr.astype(label) + except TypeError: + return + darr = GiftiDataArray(cast, datatype=label) + img = GiftiImage(darrays=[darr]) + + # Force-write always works + force_rt = img.from_bytes(img.to_bytes(mode='force')) + assert np.array_equal(cast, force_rt.darrays[0].data) + + # Compatibility mode does its best + if np.issubdtype(dtype, np.integer) or np.issubdtype(dtype, np.floating): + compat_rt = img.from_bytes(img.to_bytes(mode='compat')) + compat_darr = compat_rt.darrays[0].data + assert np.allclose(cast, compat_darr) + assert compat_darr.dtype in ('uint8', 'int32', 'float32') + else: + with pytest.raises(ValueError): + img.to_bytes(mode='compat') + + # Strict mode either works or fails + if label in ('uint8', 'int32', 'float32'): + strict_rt = img.from_bytes(img.to_bytes(mode='strict')) + assert np.array_equal(cast, strict_rt.darrays[0].data) + else: + with pytest.raises(ValueError): + img.to_bytes(mode='strict') + + def test_dataarray_empty(): # Test default initialization of DataArray null_da = GiftiDataArray() @@ -195,6 +233,38 @@ assert gda(ext_offset=12).ext_offset == 12 +@pytest.mark.parametrize('label', data_type_codes.value_set('label')) +def test_dataarray_typing(label): + dtype = data_type_codes.dtype[label] + code = data_type_codes.code[label] + arr = np.zeros((5,), dtype=dtype) + + # Default interface: accept standards-conformant arrays, reject else + if dtype in ('uint8', 'int32', 'float32'): + assert GiftiDataArray(arr).datatype == code + else: + with pytest.raises(ValueError): + GiftiDataArray(arr) + + # Explicit override - permit for now, may want to warn or eventually + # error + assert GiftiDataArray(arr, datatype=label).datatype == code + assert GiftiDataArray(arr, datatype=code).datatype == code + # Void is how we say we don't know how to do something, so it's not unique + if dtype != np.dtype('void'): + assert GiftiDataArray(arr, datatype=dtype).datatype == code + + # Side-load data array (as in parsing) + # We will probably always want this to load legacy images, but it's + # probably not ideal to make it easy to silently propagate nonconformant + # arrays + gda = GiftiDataArray() + gda.data = arr + gda.datatype = 
data_type_codes.code[label] + assert gda.data.dtype == dtype + assert gda.datatype == data_type_codes.code[label] + + def test_labeltable(): img = GiftiImage() assert len(img.labeltable.labels) == 0 @@ -303,7 +373,7 @@ def test_gifti_label_rgba(): - rgba = np.random.rand(4) + rgba = rng.random(4) kwargs = dict(zip(['red', 'green', 'blue', 'alpha'], rgba)) gl1 = GiftiLabel(**kwargs) @@ -332,13 +402,17 @@ assert np.all([elem is None for elem in gl4.rgba]) -def test_print_summary(): - for fil in [DATA_FILE1, DATA_FILE2, DATA_FILE3, DATA_FILE4, DATA_FILE5, DATA_FILE6]: - gimg = load(fil) - gimg.print_summary() +@pytest.mark.parametrize( + 'fname', [DATA_FILE1, DATA_FILE2, DATA_FILE3, DATA_FILE4, DATA_FILE5, DATA_FILE6] +) +def test_print_summary(fname, capsys): + gimg = load(fname) + gimg.print_summary() + captured = capsys.readouterr() + assert captured.out.startswith('----start----\n') -def test_gifti_coord(): +def test_gifti_coord(capsys): from ..gifti import GiftiCoordSystem gcs = GiftiCoordSystem() @@ -347,6 +421,15 @@ # Smoke test gcs.xform = None gcs.print_summary() + captured = capsys.readouterr() + assert captured.out == '\n'.join( + [ + 'Dataspace: NIFTI_XFORM_UNKNOWN', + 'XFormSpace: NIFTI_XFORM_UNKNOWN', + 'Affine Transformation Matrix: ', + ' None\n', + ] + ) gcs.to_xml() @@ -471,14 +554,14 @@ datatype=darray_dtype, ) gii = GiftiImage(darrays=[da]) - gii_copy = GiftiImage.from_bytes(gii.to_bytes()) + gii_copy = GiftiImage.from_bytes(gii.to_bytes(mode='force')) da_copy = gii_copy.darrays[0] assert np.dtype(da_copy.data.dtype) == np.dtype(darray_dtype) assert_array_equal(da_copy.data, da.data) def test_gifti_file_close(recwarn): - gii = load(test_data('gifti', 'ascii.gii')) + gii = load(get_test_data('gifti', 'ascii.gii')) with InTemporaryDirectory(): gii.to_filename('test.gii') assert not any(isinstance(r.message, ResourceWarning) for r in recwarn) diff -Nru nibabel-5.0.0/nibabel/imageclasses.py nibabel-5.1.0/nibabel/imageclasses.py --- nibabel-5.0.0/nibabel/imageclasses.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/imageclasses.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,10 +7,13 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """Define supported image classes and names""" +from __future__ import annotations from .analyze import AnalyzeImage from .brikhead import AFNIImage from .cifti2 import Cifti2Image +from .dataobj_images import DataobjImage +from .filebasedimages import FileBasedImage from .freesurfer import MGHImage from .gifti import GiftiImage from .minc1 import Minc1Image @@ -22,7 +25,7 @@ from .spm99analyze import Spm99AnalyzeImage # Ordered by the load/save priority. -all_image_classes = [ +all_image_classes: list[type[FileBasedImage]] = [ Nifti1Pair, Nifti1Image, Nifti2Pair, @@ -42,7 +45,7 @@ # Image classes known to require spatial axes to be first in index ordering. # When adding an image class, consider whether the new class should be listed # here. -KNOWN_SPATIAL_FIRST = ( +KNOWN_SPATIAL_FIRST: tuple[type[FileBasedImage], ...] 
= ( Nifti1Pair, Nifti1Image, Nifti2Pair, @@ -56,7 +59,7 @@ ) -def spatial_axes_first(img): +def spatial_axes_first(img: DataobjImage) -> bool: """True if spatial image axes for `img` always precede other axes Parameters diff -Nru nibabel-5.0.0/nibabel/imageglobals.py nibabel-5.1.0/nibabel/imageglobals.py --- nibabel-5.0.0/nibabel/imageglobals.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/imageglobals.py 2023-04-03 14:48:22.000000000 +0000 @@ -23,7 +23,6 @@ Use ``logger.level = 1`` to see all messages. """ - import logging error_level = 40 diff -Nru nibabel-5.0.0/nibabel/imagestats.py nibabel-5.1.0/nibabel/imagestats.py --- nibabel-5.0.0/nibabel/imagestats.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/imagestats.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,10 +6,7 @@ # copyright and license terms. # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -""" -Functions for computing image statistics -""" - +"""Functions for computing image statistics""" import numpy as np from nibabel.imageclasses import spatial_axes_first diff -Nru nibabel-5.0.0/nibabel/info.py nibabel-5.1.0/nibabel/info.py --- nibabel-5.0.0/nibabel/info.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/info.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,7 +1,7 @@ -"""Define distribution parameters for nibabel, including package version +"""Define static nibabel metadata for nibabel -The long description parameter is used to fill settings in setup.py, the -nibabel top-level docstring, and in building the docs. +The long description parameter is used in the nibabel top-level docstring, +and in building the docs. We exec this file in several places, so it cannot import nibabel or use relative imports. """ @@ -12,86 +12,79 @@ # We also include this text in the docs by ``..include::`` in # ``docs/source/index.rst``. long_description = """ -======= -NiBabel -======= - -Read / write access to some common neuroimaging file formats - -This package provides read +/- write access to some common medical and -neuroimaging file formats, including: ANALYZE_ (plain, SPM99, SPM2 and later), -GIFTI_, NIfTI1_, NIfTI2_, `CIFTI-2`_, MINC1_, MINC2_, `AFNI BRIK/HEAD`_, MGH_ and -ECAT_ as well as Philips PAR/REC. We can read and write FreeSurfer_ geometry, -annotation and morphometry files. There is some very limited support for -DICOM_. NiBabel is the successor of PyNIfTI_. +Read and write access to common neuroimaging file formats, including: +ANALYZE_ (plain, SPM99, SPM2 and later), GIFTI_, NIfTI1_, NIfTI2_, `CIFTI-2`_, +MINC1_, MINC2_, `AFNI BRIK/HEAD`_, ECAT_ and Philips PAR/REC. +In addition, NiBabel also supports FreeSurfer_'s MGH_, geometry, annotation and +morphometry files, and provides some limited support for DICOM_. + +NiBabel's API gives full or selective access to header information (metadata), +and image data is made available via NumPy arrays. For more information, see +NiBabel's `documentation site`_ and `API reference`_. -.. _ANALYZE: http://www.grahamwideman.com/gw/brain/analyze/formatdoc.htm +.. _API reference: https://nipy.org/nibabel/api.html .. _AFNI BRIK/HEAD: https://afni.nimh.nih.gov/pub/dist/src/README.attributes -.. _NIfTI1: http://nifti.nimh.nih.gov/nifti-1/ -.. _NIfTI2: http://nifti.nimh.nih.gov/nifti-2/ +.. _ANALYZE: http://www.grahamwideman.com/gw/brain/analyze/formatdoc.htm .. _CIFTI-2: https://www.nitrc.org/projects/cifti/ +.. _DICOM: http://medical.nema.org/ +.. _documentation site: http://nipy.org/nibabel +.. 
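
With ``KNOWN_SPATIAL_FIRST`` now typed as a tuple of ``FileBasedImage`` subclasses, ``spatial_axes_first`` keeps its runtime behaviour; a short sketch:

    import numpy as np
    import nibabel as nib
    from nibabel.imageclasses import spatial_axes_first

    img = nib.Nifti1Image(np.zeros((2, 3, 4), dtype=np.float32), np.eye(4))
    assert spatial_axes_first(img)   # NIfTI images index spatial axes first
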
_ECAT: http://xmedcon.sourceforge.net/Docs/Ecat +.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu +.. _GIFTI: https://www.nitrc.org/projects/gifti +.. _MGH: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/MghFormat .. _MINC1: https://en.wikibooks.org/wiki/MINC/Reference/MINC1_File_Format_Reference .. _MINC2: https://en.wikibooks.org/wiki/MINC/Reference/MINC2.0_File_Format_Reference -.. _PyNIfTI: http://niftilib.sourceforge.net/pynifti/ -.. _GIFTI: https://www.nitrc.org/projects/gifti -.. _MGH: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/MghFormat -.. _ECAT: http://xmedcon.sourceforge.net/Docs/Ecat -.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu -.. _DICOM: http://medical.nema.org/ +.. _NIfTI1: http://nifti.nimh.nih.gov/nifti-1/ +.. _NIfTI2: http://nifti.nimh.nih.gov/nifti-2/ -The various image format classes give full or selective access to header -(meta) information and access to the image data is made available via NumPy -arrays. +Installation +============ -Website -======= +To install NiBabel's `current release`_ with ``pip``, run:: -Current documentation on nibabel can always be found at the `NIPY nibabel -website `_. + pip install nibabel -Mailing Lists -============= +To install the latest development version, run:: -Please send any questions or suggestions to the `neuroimaging mailing list -`_. + pip install git+https://github.com/nipy/nibabel + +When working on NiBabel itself, it may be useful to install in "editable" mode:: -Code -==== + git clone https://github.com/nipy/nibabel.git + pip install -e ./nibabel -Install nibabel with:: +For more information on previous releases, see the `release archive`_ or +`development changelog`_. - pip install nibabel +.. _current release: https://pypi.python.org/pypi/NiBabel +.. _release archive: https://github.com/nipy/NiBabel/releases +.. _development changelog: https://nipy.org/nibabel/changelog.html -You may also be interested in: +Mailing List +============ -* the `nibabel code repository`_ on Github; -* documentation_ for all releases and current development tree; -* download the `current release`_ from pypi; -* download `current development version`_ as a zip file; -* downloads of all `available releases`_. - -.. _nibabel code repository: https://github.com/nipy/nibabel -.. _Documentation: http://nipy.org/nibabel -.. _current release: https://pypi.python.org/pypi/nibabel -.. _current development version: https://github.com/nipy/nibabel/archive/master.zip -.. _available releases: https://github.com/nipy/nibabel/releases +Please send any questions or suggestions to the `neuroimaging mailing list +`_. License ======= -Nibabel is licensed under the terms of the MIT license. Some code included -with nibabel is licensed under the BSD license. Please see the COPYING file -in the nibabel distribution. - -Citing nibabel -============== - -Please see the `available releases`_ for the release of nibabel that you are -using. Recent releases have a Zenodo_ `Digital Object Identifier`_ badge at -the top of the release notes. Click on the badge for more information. +NiBabel is licensed under the terms of the `MIT license +`__. +Some code included with NiBabel is licensed under the `BSD license`_. +For more information, please see the COPYING_ file. + +.. _BSD license: https://opensource.org/licenses/BSD-3-Clause +.. _COPYING: https://github.com/nipy/nibabel/blob/master/COPYING + +Citation +======== + +NiBabel releases have a Zenodo_ `Digital Object Identifier`_ (DOI) badge at +the top of the release notes. 
Click on the badge for more information. -.. _zenodo: https://zenodo.org .. _Digital Object Identifier: https://en.wikipedia.org/wiki/Digital_object_identifier -""" +.. _zenodo: https://zenodo.org +""" # noqa: E501 diff -Nru nibabel-5.0.0/nibabel/__init__.py nibabel-5.1.0/nibabel/__init__.py --- nibabel-5.0.0/nibabel/__init__.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/__init__.py 2023-04-03 14:48:22.000000000 +0000 @@ -39,7 +39,7 @@ # module imports from . import analyze as ana -from . import ecat, mriutils +from . import ecat, imagestats, mriutils from . import nifti1 as ni1 from . import spm2analyze as spm2 from . import spm99analyze as spm99 @@ -171,11 +171,16 @@ code : ExitCode Returns the result of running the tests as a ``pytest.ExitCode`` enum """ - from pkg_resources import resource_filename + try: + from importlib.resources import as_file, files + except ImportError: + from importlib_resources import as_file, files - config = resource_filename('nibabel', 'benchmarks/pytest.benchmark.ini') args = [] if extra_argv is not None: args.extend(extra_argv) - args.extend(['-c', config]) - return test(label, verbose, extra_argv=args) + + config_path = files('nibabel') / 'benchmarks/pytest.benchmark.ini' + with as_file(config_path) as config: + args.extend(['-c', str(config)]) + return test(label, verbose, extra_argv=args) diff -Nru nibabel-5.0.0/nibabel/loadsave.py nibabel-5.1.0/nibabel/loadsave.py --- nibabel-5.0.0/nibabel/loadsave.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/loadsave.py 2023-04-03 14:48:22.000000000 +0000 @@ -8,8 +8,10 @@ ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## # module imports """Utilities to load and save image objects""" +from __future__ import annotations import os +import typing as ty import numpy as np @@ -23,7 +25,18 @@ _compressed_suffixes = ('.gz', '.bz2', '.zst') -def _signature_matches_extension(filename): +if ty.TYPE_CHECKING: # pragma: no cover + from .filebasedimages import FileBasedImage + from .filename_parser import FileSpec + + P = ty.ParamSpec('P') + + class Signature(ty.TypedDict): + signature: bytes + format_name: str + + +def _signature_matches_extension(filename: FileSpec) -> tuple[bool, str]: """Check if signature aka magic number matches filename extension. Parameters @@ -43,7 +56,7 @@ the empty string otherwise. 
""" - signatures = { + signatures: dict[str, Signature] = { '.gz': {'signature': b'\x1f\x8b', 'format_name': 'gzip'}, '.bz2': {'signature': b'BZh', 'format_name': 'bzip2'}, '.zst': {'signature': b'\x28\xb5\x2f\xfd', 'format_name': 'ztsd'}, @@ -65,7 +78,7 @@ return False, f'File {filename} is not a {format_name} file' -def load(filename, **kwargs): +def load(filename: FileSpec, **kwargs) -> FileBasedImage: r"""Load file given filename, guessing at file type Parameters @@ -127,7 +140,7 @@ raise ImageFileError(f'Cannot work out file type of "{filename}"') -def save(img, filename, **kwargs): +def save(img: FileBasedImage, filename: FileSpec, **kwargs) -> None: r"""Save an image to file adapting format to `filename` Parameters @@ -162,19 +175,17 @@ from .nifti1 import Nifti1Image, Nifti1Pair from .nifti2 import Nifti2Image, Nifti2Pair - klass = None - converted = None - + converted: FileBasedImage if type(img) == Nifti1Image and lext in ('.img', '.hdr'): - klass = Nifti1Pair + converted = Nifti1Pair.from_image(img) elif type(img) == Nifti2Image and lext in ('.img', '.hdr'): - klass = Nifti2Pair + converted = Nifti2Pair.from_image(img) elif type(img) == Nifti1Pair and lext == '.nii': - klass = Nifti1Image + converted = Nifti1Image.from_image(img) elif type(img) == Nifti2Pair and lext == '.nii': - klass = Nifti2Image + converted = Nifti2Image.from_image(img) else: # arbitrary conversion - valid_klasses = [klass for klass in all_image_classes if ext in klass.valid_exts] + valid_klasses = [klass for klass in all_image_classes if lext in klass.valid_exts] if not valid_klasses: # if list is empty raise ImageFileError(f'Cannot work out file type of "{filename}"') @@ -187,13 +198,9 @@ break except Exception as e: err = e - # ... and if none of them work, raise an error. - if converted is None: + else: raise err - # Here, we either have a klass or a converted image. - if converted is None: - converted = klass.from_image(img) converted.to_filename(filename, **kwargs) diff -Nru nibabel-5.0.0/nibabel/minc1.py nibabel-5.1.0/nibabel/minc1.py --- nibabel-5.0.0/nibabel/minc1.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/minc1.py 2023-04-03 14:48:22.000000000 +0000 @@ -10,7 +10,6 @@ from __future__ import annotations from numbers import Integral -from typing import Type import numpy as np @@ -307,7 +306,8 @@ load. """ - header_class: Type[MincHeader] = Minc1Header + header_class: type[MincHeader] = Minc1Header + header: MincHeader _meta_sniff_len: int = 4 valid_exts: tuple[str, ...] = ('.mnc',) files_types: tuple[tuple[str, str], ...] 
= (('image', '.mnc'),) @@ -334,4 +334,4 @@ return klass(data, affine, header, extra=None, file_map=file_map) -load = Minc1Image.load +load = Minc1Image.from_filename diff -Nru nibabel-5.0.0/nibabel/minc2.py nibabel-5.1.0/nibabel/minc2.py --- nibabel-5.0.0/nibabel/minc2.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/minc2.py 2023-04-03 14:48:22.000000000 +0000 @@ -150,6 +150,7 @@ # MINC2 does not do compressed whole files _compressed_suffixes = () header_class = Minc2Header + header: Minc2Header @classmethod def from_file_map(klass, file_map, *, mmap=True, keep_file_open=None): @@ -172,4 +173,4 @@ return klass(data, affine, header, extra=None, file_map=file_map) -load = Minc2Image.load +load = Minc2Image.from_filename diff -Nru nibabel-5.0.0/nibabel/mriutils.py nibabel-5.1.0/nibabel/mriutils.py --- nibabel-5.0.0/nibabel/mriutils.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/mriutils.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,9 +6,7 @@ # copyright and license terms. # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -""" -Utilities for calculations related to MRI -""" +"""Utilities for calculations related to MRI""" __all__ = ['calculate_dwell_time'] diff -Nru nibabel-5.0.0/nibabel/nifti1.py nibabel-5.1.0/nibabel/nifti1.py --- nibabel-5.0.0/nibabel/nifti1.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/nifti1.py 2023-04-03 14:48:22.000000000 +0000 @@ -14,7 +14,6 @@ import warnings from io import BytesIO -from typing import Type import numpy as np import numpy.linalg as npl @@ -25,10 +24,10 @@ from .batteryrunners import Report from .casting import have_binary128 from .deprecated import alert_future_error -from .filebasedimages import SerializableImage +from .filebasedimages import ImageFileError, SerializableImage from .optpkg import optional_package from .quaternions import fillpositive, mat2quat, quat2mat -from .spatialimages import HeaderDataError, ImageFileError +from .spatialimages import HeaderDataError from .spm99analyze import SpmAnalyzeHeader from .volumeutils import Recoder, endian_codes, make_dt_codes @@ -90,8 +89,8 @@ # datatypes not in analyze format, with codes if have_binary128(): # Only enable 128 bit floats if we really have IEEE binary 128 longdoubles - _float128t: Type[np.generic] = np.longdouble - _complex256t: Type[np.generic] = np.longcomplex + _float128t: type[np.generic] = np.longdouble + _complex256t: type[np.generic] = np.longcomplex else: _float128t = np.void _complex256t = np.void @@ -688,7 +687,7 @@ single_magic = b'n+1' # Quaternion threshold near 0, based on float32 precision - quaternion_threshold = -np.finfo(np.float32).eps * 3 + quaternion_threshold = np.finfo(np.float32).eps * 3 def __init__(self, binaryblock=None, endianness=None, check=True, extensions=()): """Initialize header from binary data block and extensions""" @@ -1817,7 +1816,8 @@ class Nifti1Pair(analyze.AnalyzeImage): """Class for NIfTI1 format image, header pair""" - header_class: Type[Nifti1Header] = Nifti1PairHeader + header_class: type[Nifti1Header] = Nifti1PairHeader + header: Nifti1Header _meta_sniff_len = header_class.sizeof_hdr rw = True diff -Nru nibabel-5.0.0/nibabel/nifti2.py nibabel-5.1.0/nibabel/nifti2.py --- nibabel-5.0.0/nibabel/nifti2.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/nifti2.py 2023-04-03 14:48:22.000000000 +0000 @@ -12,13 +12,13 @@ https://www.nitrc.org/forum/message.php?msg_id=3738 """ - import numpy as np from .analyze import AnalyzeHeader from .batteryrunners 
import Report +from .filebasedimages import ImageFileError from .nifti1 import Nifti1Header, Nifti1Image, Nifti1Pair -from .spatialimages import HeaderDataError, ImageFileError +from .spatialimages import HeaderDataError r""" Header struct from : https://www.nitrc.org/forum/message.php?msg_id=3738 diff -Nru nibabel-5.0.0/nibabel/onetime.py nibabel-5.1.0/nibabel/onetime.py --- nibabel-5.0.0/nibabel/onetime.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/onetime.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,5 +1,4 @@ -""" -Descriptor support for NIPY. +"""Descriptor support for NIPY Utilities to support special Python descriptors [1,2], in particular the use of a useful pattern for properties we call 'one time properties'. These are @@ -19,6 +18,12 @@ [2] Python data model, https://docs.python.org/reference/datamodel.html """ +from __future__ import annotations + +import typing as ty + +InstanceT = ty.TypeVar('InstanceT') +T = ty.TypeVar('T') from nibabel.deprecated import deprecate_with_version @@ -96,26 +101,24 @@ 10.0 """ - def reset(self): + def reset(self) -> None: """Reset all OneTimeProperty attributes that may have fired already.""" - instdict = self.__dict__ - classdict = self.__class__.__dict__ # To reset them, we simply remove them from the instance dict. At that # point, it's as if they had never been computed. On the next access, # the accessor function from the parent class will be called, simply # because that's how the python descriptor protocol works. - for mname, mval in classdict.items(): - if mname in instdict and isinstance(mval, OneTimeProperty): + for mname, mval in self.__class__.__dict__.items(): + if mname in self.__dict__ and isinstance(mval, OneTimeProperty): delattr(self, mname) -class OneTimeProperty: +class OneTimeProperty(ty.Generic[T]): """A descriptor to make special properties that become normal attributes. This is meant to be used mostly by the auto_attr decorator in this module. """ - def __init__(self, func): + def __init__(self, func: ty.Callable[[InstanceT], T]) -> None: """Create a OneTimeProperty instance. Parameters @@ -128,24 +131,35 @@ """ self.getter = func self.name = func.__name__ + self.__doc__ = func.__doc__ - def __get__(self, obj, type=None): + @ty.overload + def __get__( + self, obj: None, objtype: type[InstanceT] | None = None + ) -> ty.Callable[[InstanceT], T]: + ... # pragma: no cover + + @ty.overload + def __get__(self, obj: InstanceT, objtype: type[InstanceT] | None = None) -> T: + ... # pragma: no cover + + def __get__( + self, obj: InstanceT | None, objtype: type[InstanceT] | None = None + ) -> T | ty.Callable[[InstanceT], T]: """This will be called on attribute access on the class or instance.""" if obj is None: # Being called on the class, return the original function. This # way, introspection works on the class. - # return func return self.getter - # Errors in the following line are errors in setting a - # OneTimeProperty + # Errors in the following line are errors in setting a OneTimeProperty val = self.getter(obj) - setattr(obj, self.name, val) + obj.__dict__[self.name] = val return val -def auto_attr(func): +def auto_attr(func: ty.Callable[[InstanceT], T]) -> OneTimeProperty[T]: """Decorator to create OneTimeProperty attributes. Parameters diff -Nru nibabel-5.0.0/nibabel/openers.py nibabel-5.1.0/nibabel/openers.py --- nibabel-5.0.0/nibabel/openers.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/openers.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,42 +6,40 @@ # copyright and license terms. 
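
The generic ``OneTimeProperty`` above preserves the compute-once contract: the first access runs the getter and stores the result in the instance ``__dict__``, shadowing the descriptor. A minimal sketch of the pattern via ``auto_attr``, with a hypothetical class:

    from nibabel.onetime import auto_attr

    class Scan:
        @auto_attr
        def stats(self):
            print('computing once')
            return {'mean': 0.0}

    s = Scan()
    s.stats   # prints 'computing once' and caches the dict on the instance
    s.stats   # plain attribute access now; the getter does not run again
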
# ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -"""Context manager openers for various fileobject types -""" +"""Context manager openers for various fileobject types""" +from __future__ import annotations import gzip -import warnings +import io +import typing as ty from bz2 import BZ2File from os.path import splitext -from packaging.version import Version +from ._compression import HAVE_INDEXED_GZIP, IndexedGzipFile, pyzstd -from nibabel.optpkg import optional_package +if ty.TYPE_CHECKING: # pragma: no cover + from types import TracebackType -# is indexed_gzip present and modern? -try: - import indexed_gzip as igzip # type: ignore + from _typeshed import WriteableBuffer - version = igzip.__version__ + ModeRT = ty.Literal['r', 'rt'] + ModeRB = ty.Literal['rb'] + ModeWT = ty.Literal['w', 'wt'] + ModeWB = ty.Literal['wb'] + ModeR = ty.Union[ModeRT, ModeRB] + ModeW = ty.Union[ModeWT, ModeWB] + Mode = ty.Union[ModeR, ModeW] - HAVE_INDEXED_GZIP = True + OpenerDef = tuple[ty.Callable[..., io.IOBase], tuple[str, ...]] - # < 0.7 - no good - if Version(version) < Version('0.7.0'): - warnings.warn(f'indexed_gzip is present, but too old (>= 0.7.0 required): {version})') - HAVE_INDEXED_GZIP = False - # >= 0.8 SafeIndexedGzipFile renamed to IndexedGzipFile - elif Version(version) < Version('0.8.0'): - IndexedGzipFile = igzip.SafeIndexedGzipFile - else: - IndexedGzipFile = igzip.IndexedGzipFile - del igzip, version -except ImportError: - # nibabel.openers.IndexedGzipFile is imported by nibabel.volumeutils - # to detect compressed file types, so we give a fallback value here. - IndexedGzipFile = gzip.GzipFile - HAVE_INDEXED_GZIP = False +@ty.runtime_checkable +class Fileish(ty.Protocol): + def read(self, size: int = -1, /) -> bytes: + ... # pragma: no cover + + def write(self, b: bytes, /) -> int | None: + ... # pragma: no cover class DeterministicGzipFile(gzip.GzipFile): @@ -51,35 +49,63 @@ to a modification time (``mtime``) of 0 seconds. """ - def __init__(self, filename=None, mode=None, compresslevel=9, fileobj=None, mtime=0): - # These two guards are copied from + def __init__( + self, + filename: str | None = None, + mode: Mode | None = None, + compresslevel: int = 9, + fileobj: io.FileIO | None = None, + mtime: int = 0, + ): + if mode is None: + mode = 'rb' + modestr: str = mode + + # These two guards are adapted from # https://github.com/python/cpython/blob/6ab65c6/Lib/gzip.py#L171-L174 - if mode and 'b' not in mode: - mode += 'b' + if 'b' not in modestr: + modestr = f'{mode}b' if fileobj is None: - fileobj = self.myfileobj = open(filename, mode or 'rb') + if filename is None: + raise TypeError('Must define either fileobj or filename') + # Cast because GzipFile.myfileobj has type io.FileIO while open returns ty.IO + fileobj = self.myfileobj = ty.cast(io.FileIO, open(filename, modestr)) return super().__init__( - filename='', mode=mode, compresslevel=compresslevel, fileobj=fileobj, mtime=mtime + filename='', + mode=modestr, + compresslevel=compresslevel, + fileobj=fileobj, + mtime=mtime, ) -def _gzip_open(filename, mode='rb', compresslevel=9, mtime=0, keep_open=False): +def _gzip_open( + filename: str, + mode: Mode = 'rb', + compresslevel: int = 9, + mtime: int = 0, + keep_open: bool = False, +) -> gzip.GzipFile: + + if not HAVE_INDEXED_GZIP or mode != 'rb': + gzip_file = DeterministicGzipFile(filename, mode, compresslevel, mtime=mtime) # use indexed_gzip if possible for faster read access. 
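
``DeterministicGzipFile``, newly typed above, keeps its purpose: reproducible gzip output by pinning ``mtime`` to 0. A sketch with a hypothetical output path:

    from nibabel.openers import DeterministicGzipFile

    with DeterministicGzipFile('out.nii.gz', mode='wb') as f:
        f.write(b'payload')   # byte-identical archive on every run
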
If keep_open == # True, we tell IndexedGzipFile to keep the file handle open. Otherwise # the IndexedGzipFile will close/open the file on each read. - if HAVE_INDEXED_GZIP and mode == 'rb': - gzip_file = IndexedGzipFile(filename, drop_handles=not keep_open) - - # Fall-back to built-in GzipFile else: - gzip_file = DeterministicGzipFile(filename, mode, compresslevel, mtime=mtime) + gzip_file = IndexedGzipFile(filename, drop_handles=not keep_open) return gzip_file -def _zstd_open(filename, mode='r', *, level_or_option=None, zstd_dict=None): - pyzstd = optional_package('pyzstd')[0] +def _zstd_open( + filename: str, + mode: Mode = 'r', + *, + level_or_option: int | dict | None = None, + zstd_dict: pyzstd.ZstdDict | None = None, +) -> pyzstd.ZstdFile: return pyzstd.ZstdFile(filename, mode, level_or_option=level_or_option, zstd_dict=zstd_dict) @@ -106,7 +132,7 @@ gz_def = (_gzip_open, ('mode', 'compresslevel', 'mtime', 'keep_open')) bz2_def = (BZ2File, ('mode', 'buffering', 'compresslevel')) zstd_def = (_zstd_open, ('mode', 'level_or_option', 'zstd_dict')) - compress_ext_map = { + compress_ext_map: dict[str | None, OpenerDef] = { '.gz': gz_def, '.bz2': bz2_def, '.zst': zstd_def, @@ -123,19 +149,19 @@ 'w': default_zst_compresslevel, } #: whether to ignore case looking for compression extensions - compress_ext_icase = True + compress_ext_icase: bool = True + + fobj: io.IOBase - def __init__(self, fileish, *args, **kwargs): - if self._is_fileobj(fileish): + def __init__(self, fileish: str | io.IOBase, *args, **kwargs): + if isinstance(fileish, (io.IOBase, Fileish)): self.fobj = fileish self.me_opened = False - self._name = None + self._name = getattr(fileish, 'name', None) return opener, arg_names = self._get_opener_argnames(fileish) # Get full arguments to check for mode and compresslevel - full_kwargs = kwargs.copy() - n_args = len(args) - full_kwargs.update(dict(zip(arg_names[:n_args], args))) + full_kwargs = {**kwargs, **dict(zip(arg_names, args))} # Set default mode if 'mode' not in full_kwargs: mode = 'rb' @@ -157,7 +183,7 @@ self._name = fileish self.me_opened = True - def _get_opener_argnames(self, fileish): + def _get_opener_argnames(self, fileish: str) -> OpenerDef: _, ext = splitext(fileish) if self.compress_ext_icase: ext = ext.lower() @@ -170,16 +196,12 @@ return self.compress_ext_map[ext] return self.compress_ext_map[None] - def _is_fileobj(self, obj): - """Is `obj` a file-like object?""" - return hasattr(obj, 'read') and hasattr(obj, 'write') - @property - def closed(self): + def closed(self) -> bool: return self.fobj.closed @property - def name(self): + def name(self) -> str | None: """Return ``self.fobj.name`` or self._name if not present self._name will be None if object was created with a fileobj, otherwise @@ -188,42 +210,53 @@ return self._name @property - def mode(self): - return self.fobj.mode + def mode(self) -> str: + # Check and raise our own error for type narrowing purposes + if hasattr(self.fobj, 'mode'): + return self.fobj.mode + raise AttributeError(f'{self.fobj.__class__.__name__} has no attribute "mode"') - def fileno(self): + def fileno(self) -> int: return self.fobj.fileno() - def read(self, *args, **kwargs): - return self.fobj.read(*args, **kwargs) + def read(self, size: int = -1, /) -> bytes: + return self.fobj.read(size) - def readinto(self, *args, **kwargs): - return self.fobj.readinto(*args, **kwargs) + def readinto(self, buffer: WriteableBuffer, /) -> int | None: + # Check and raise our own error for type narrowing purposes + if hasattr(self.fobj, 
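
The ``Opener`` methods now carry explicit signatures; runtime dispatch on the filename extension is unchanged. A sketch, assuming the hypothetical file exists:

    from nibabel.openers import Opener

    with Opener('anat.nii.gz') as fobj:   # '.gz' selects the gzip opener
        magic = fobj.read(4)
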
'readinto'): + return self.fobj.readinto(buffer) + raise AttributeError(f'{self.fobj.__class__.__name__} has no attribute "readinto"') - def write(self, *args, **kwargs): - return self.fobj.write(*args, **kwargs) + def write(self, b: bytes, /) -> int | None: + return self.fobj.write(b) - def seek(self, *args, **kwargs): - return self.fobj.seek(*args, **kwargs) + def seek(self, pos: int, whence: int = 0, /) -> int: + return self.fobj.seek(pos, whence) - def tell(self, *args, **kwargs): - return self.fobj.tell(*args, **kwargs) + def tell(self, /) -> int: + return self.fobj.tell() - def close(self, *args, **kwargs): - return self.fobj.close(*args, **kwargs) + def close(self, /) -> None: + return self.fobj.close() - def __iter__(self): + def __iter__(self) -> ty.Iterator[bytes]: return iter(self.fobj) - def close_if_mine(self): + def close_if_mine(self) -> None: """Close ``self.fobj`` iff we opened it in the constructor""" if self.me_opened: self.close() - def __enter__(self): + def __enter__(self) -> Opener: return self - def __exit__(self, exc_type, exc_val, exc_tb): + def __exit__( + self, + exc_type: type[BaseException] | None, + exc_val: BaseException | None, + exc_tb: TracebackType | None, + ) -> None: self.close_if_mine() diff -Nru nibabel-5.0.0/nibabel/optpkg.py nibabel-5.1.0/nibabel/optpkg.py --- nibabel-5.0.0/nibabel/optpkg.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/optpkg.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,20 +1,31 @@ """Routines to support optional packages""" +from __future__ import annotations + +import typing as ty +from types import ModuleType + from packaging.version import Version from .tripwire import TripWire -def _check_pkg_version(pkg, min_version): - # Default version checking function - if isinstance(min_version, str): - min_version = Version(min_version) - try: - return min_version <= Version(pkg.__version__) - except AttributeError: +def _check_pkg_version(min_version: str | Version) -> ty.Callable[[ModuleType], bool]: + min_ver = Version(min_version) if isinstance(min_version, str) else min_version + + def check(pkg: ModuleType) -> bool: + pkg_ver = getattr(pkg, '__version__', None) + if isinstance(pkg_ver, str): + return min_ver <= Version(pkg_ver) return False + return check + -def optional_package(name, trip_msg=None, min_version=None): +def optional_package( + name: str, + trip_msg: str | None = None, + min_version: str | Version | ty.Callable[[ModuleType], bool] | None = None, +) -> tuple[ModuleType | TripWire, bool, ty.Callable[[], None]]: """Return package-like thing and module setup for package `name` Parameters @@ -81,7 +92,7 @@ elif min_version is None: check_version = lambda pkg: True else: - check_version = lambda pkg: _check_pkg_version(pkg, min_version) + check_version = _check_pkg_version(min_version) # fromlist=[''] results in submodule being returned, rather than the top # level module. See help(__import__) fromlist = [''] if '.' 
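
With ``imagestats`` added to the top-level imports above, ``nib.imagestats`` is usable directly after ``import nibabel``; a short sketch:

    import numpy as np
    import nibabel as nib

    img = nib.Nifti1Image(np.ones((2, 2, 2), dtype=np.uint8), np.eye(4))
    assert nib.imagestats.count_nonzero_voxels(img) == 8
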
in name else [] @@ -107,11 +118,11 @@ trip_msg = ( f'We need package {name} for these functions, but ``import {name}`` raised {exc}' ) - pkg = TripWire(trip_msg) + trip = TripWire(trip_msg) - def setup_module(): + def setup_module() -> None: import unittest raise unittest.SkipTest(f'No {name} for these tests') - return pkg, False, setup_module + return trip, False, setup_module diff -Nru nibabel-5.0.0/nibabel/orientations.py nibabel-5.1.0/nibabel/orientations.py --- nibabel-5.0.0/nibabel/orientations.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/orientations.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,8 +7,6 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """Utilities for calculating and applying affine orientations""" - - import numpy as np import numpy.linalg as npl diff -Nru nibabel-5.0.0/nibabel/parrec.py nibabel-5.1.0/nibabel/parrec.py --- nibabel-5.0.0/nibabel/parrec.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/parrec.py 2023-04-03 14:48:22.000000000 +0000 @@ -8,7 +8,7 @@ ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## # Disable line length checking for PAR fragments in module docstring # flake8: noqa E501 -"""Read images in PAR/REC format. +"""Read images in PAR/REC format This is yet another MRI image format generated by Philips scanners. It is an ASCII header (PAR) plus a binary blob (REC). @@ -121,7 +121,6 @@ utility via the option "--strict-sort". The dimension info can be exported to a CSV file by adding the option "--volume-info". """ - import re import warnings from collections import OrderedDict @@ -1254,6 +1253,7 @@ """PAR/REC image""" header_class = PARRECHeader + header: PARRECHeader valid_exts = ('.rec', '.par') files_types = (('image', '.rec'), ('header', '.par')) diff -Nru nibabel-5.0.0/nibabel/pkg_info.py nibabel-5.1.0/nibabel/pkg_info.py --- nibabel-5.0.0/nibabel/pkg_info.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/pkg_info.py 2023-04-03 14:48:22.000000000 +0000 @@ -11,10 +11,10 @@ __version__ = '0+unknown' -COMMIT_HASH = '323a88ae3' +COMMIT_HASH = '97457374dc' -def _cmp(a, b) -> int: +def _cmp(a: Version, b: Version) -> int: """Implementation of ``cmp`` for Python 3""" return (a > b) - (a < b) @@ -101,7 +101,7 @@ return 'archive substitution', COMMIT_HASH ver = Version(__version__) if ver.local is not None and ver.local.startswith('g'): - return ver.local[1:8], 'installation' + return 'installation', ver.local[1:8] # maybe we are in a repository proc = run( ('git', 'rev-parse', '--short', 'HEAD'), @@ -113,7 +113,7 @@ return '(none found)', '' -def get_pkg_info(pkg_path: str) -> dict: +def get_pkg_info(pkg_path: str) -> dict[str, str]: """Return dict describing the context of this package Parameters diff -Nru nibabel-5.0.0/nibabel/processing.py nibabel-5.1.0/nibabel/processing.py --- nibabel-5.0.0/nibabel/processing.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/processing.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,21 +6,22 @@ # copyright and license terms. # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -"""Image processing functions for: +"""Image processing functions -* smoothing -* resampling -* converting sd to and from FWHM +Image processing functions for: -Smoothing and resampling routines need scipy -""" + * smoothing + * resampling + * converting SD to and from FWHM +Smoothing and resampling routines need scipy. 
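
``optional_package`` keeps its three-element return; the refactor above only moves the version test into a closure. A usage sketch:

    from nibabel.optpkg import optional_package

    pkg, have_pkg, setup_module = optional_package('scipy.ndimage')
    if have_pkg:
        print(pkg.__name__)   # the submodule itself, thanks to fromlist=['']
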
+""" import numpy as np import numpy.linalg as npl from .optpkg import optional_package -spnd, _, _ = optional_package('scipy.ndimage') +spnd = optional_package('scipy.ndimage')[0] from .affines import AffineError, append_diag, from_matvec, rescale_affine, to_matvec from .imageclasses import spatial_axes_first diff -Nru nibabel-5.0.0/nibabel/quaternions.py nibabel-5.1.0/nibabel/quaternions.py --- nibabel-5.0.0/nibabel/quaternions.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/quaternions.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,7 +7,7 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """ -Functions to operate on, or return, quaternions. +Functions to operate on, or return, quaternions The module also includes functions for the closely related angle, axis pair as a specification for rotation. @@ -25,7 +25,6 @@ >>> vec = np.array([1, 2, 3]).reshape((3,1)) # column vector >>> tvec = np.dot(M, vec) """ - import math import numpy as np @@ -42,10 +41,10 @@ xyz : iterable iterable containing 3 values, corresponding to quaternion x, y, z w2_thresh : None or float, optional - threshold to determine if w squared is really negative. + threshold to determine if w squared is non-zero. If None (default) then w2_thresh set equal to - ``-np.finfo(xyz.dtype).eps``, if possible, otherwise - ``-np.finfo(np.float64).eps`` + 3 * ``np.finfo(xyz.dtype).eps``, if possible, otherwise + 3 * ``np.finfo(np.float64).eps`` Returns ------- @@ -89,17 +88,17 @@ # If necessary, guess precision of input if w2_thresh is None: try: # trap errors for non-array, integer array - w2_thresh = -np.finfo(xyz.dtype).eps * 3 + w2_thresh = np.finfo(xyz.dtype).eps * 3 except (AttributeError, ValueError): - w2_thresh = -FLOAT_EPS * 3 + w2_thresh = FLOAT_EPS * 3 # Use maximum precision xyz = np.asarray(xyz, dtype=MAX_FLOAT) # Calculate w - w2 = 1.0 - np.dot(xyz, xyz) - if w2 < 0: - if w2 < w2_thresh: - raise ValueError(f'w2 should be positive, but is {w2:e}') + w2 = 1.0 - xyz @ xyz + if np.abs(w2) < np.abs(w2_thresh): w = 0 + elif w2 < 0: + raise ValueError(f'w2 should be positive, but is {w2:e}') else: w = np.sqrt(w2) return np.r_[w, xyz] diff -Nru nibabel-5.0.0/nibabel/rstutils.py nibabel-5.1.0/nibabel/rstutils.py --- nibabel-5.0.0/nibabel/rstutils.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/rstutils.py 2023-04-03 14:48:22.000000000 +0000 @@ -2,7 +2,6 @@ * Make ReST table given array of values """ - import numpy as np diff -Nru nibabel-5.0.0/nibabel/spaces.py nibabel-5.1.0/nibabel/spaces.py --- nibabel-5.0.0/nibabel/spaces.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/spaces.py 2023-04-03 14:48:22.000000000 +0000 @@ -19,7 +19,6 @@ mapping), or * a length 2 sequence with the same information (shape, affine). 
""" - from itertools import product import numpy as np diff -Nru nibabel-5.0.0/nibabel/spatialimages.py nibabel-5.1.0/nibabel/spatialimages.py --- nibabel-5.0.0/nibabel/spatialimages.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/spatialimages.py 2023-04-03 14:48:22.000000000 +0000 @@ -131,18 +131,53 @@ """ from __future__ import annotations -from typing import Type +import io +import typing as ty +from collections.abc import Sequence +from typing import Literal import numpy as np +from .arrayproxy import ArrayLike from .dataobj_images import DataobjImage -from .filebasedimages import ImageFileError # noqa -from .filebasedimages import FileBasedHeader +from .filebasedimages import FileBasedHeader, FileBasedImage +from .fileholders import FileMap from .fileslice import canonical_slicers from .orientations import apply_orientation, inv_ornt_aff from .viewers import OrthoSlicer3D from .volumeutils import shape_zoom_affine +try: + from functools import cache +except ImportError: # PY38 + from functools import lru_cache as cache + +if ty.TYPE_CHECKING: # pragma: no cover + import numpy.typing as npt + +SpatialImgT = ty.TypeVar('SpatialImgT', bound='SpatialImage') +SpatialHdrT = ty.TypeVar('SpatialHdrT', bound='SpatialHeader') + + +class HasDtype(ty.Protocol): + def get_data_dtype(self) -> np.dtype: + ... # pragma: no cover + + def set_data_dtype(self, dtype: npt.DTypeLike) -> None: + ... # pragma: no cover + + +@ty.runtime_checkable +class SpatialProtocol(ty.Protocol): + def get_data_dtype(self) -> np.dtype: + ... # pragma: no cover + + def get_data_shape(self) -> ty.Tuple[int, ...]: + ... # pragma: no cover + + def get_zooms(self) -> ty.Tuple[float, ...]: + ... # pragma: no cover + class HeaderDataError(Exception): """Class to indicate error in getting or setting header data""" @@ -152,13 +187,22 @@ """Class to indicate error in parameters into header functions""" -class SpatialHeader(FileBasedHeader): +class SpatialHeader(FileBasedHeader, SpatialProtocol): """Template class to implement header protocol""" - default_x_flip = True - data_layout = 'F' + default_x_flip: bool = True + data_layout: Literal['F', 'C'] = 'F' - def __init__(self, data_dtype=np.float32, shape=(0,), zooms=None): + _dtype: np.dtype + _shape: tuple[int, ...] + _zooms: tuple[float, ...] 
+ + def __init__( + self, + data_dtype: npt.DTypeLike = np.float32, + shape: Sequence[int] = (0,), + zooms: Sequence[float] | None = None, + ): self.set_data_dtype(data_dtype) self._zooms = () self.set_data_shape(shape) @@ -166,7 +210,10 @@ self.set_zooms(zooms) @classmethod - def from_header(klass, header=None): + def from_header( + klass: type[SpatialHdrT], + header: SpatialProtocol | FileBasedHeader | ty.Mapping | None = None, + ) -> SpatialHdrT: if header is None: return klass() # I can't do isinstance here because it is not necessarily true @@ -175,26 +222,20 @@ # different field names if type(header) == klass: return header.copy() - return klass(header.get_data_dtype(), header.get_data_shape(), header.get_zooms()) - - @classmethod - def from_fileobj(klass, fileobj): - raise NotImplementedError - - def write_to(self, fileobj): - raise NotImplementedError - - def __eq__(self, other): - return (self.get_data_dtype(), self.get_data_shape(), self.get_zooms()) == ( - other.get_data_dtype(), - other.get_data_shape(), - other.get_zooms(), - ) - - def __ne__(self, other): - return not self == other + if isinstance(header, SpatialProtocol): + return klass(header.get_data_dtype(), header.get_data_shape(), header.get_zooms()) + return super().from_header(header) + + def __eq__(self, other: object) -> bool: + if isinstance(other, SpatialHeader): + return (self.get_data_dtype(), self.get_data_shape(), self.get_zooms()) == ( + other.get_data_dtype(), + other.get_data_shape(), + other.get_zooms(), + ) + return NotImplemented - def copy(self): + def copy(self: SpatialHdrT) -> SpatialHdrT: """Copy object to independent representation The copy should not be affected by any changes to the original @@ -202,47 +243,47 @@ """ return self.__class__(self._dtype, self._shape, self._zooms) - def get_data_dtype(self): + def get_data_dtype(self) -> np.dtype: return self._dtype - def set_data_dtype(self, dtype): + def set_data_dtype(self, dtype: npt.DTypeLike) -> None: self._dtype = np.dtype(dtype) - def get_data_shape(self): + def get_data_shape(self) -> tuple[int, ...]: return self._shape - def set_data_shape(self, shape): + def set_data_shape(self, shape: Sequence[int]) -> None: ndim = len(shape) if ndim == 0: self._shape = (0,) self._zooms = (1.0,) return - self._shape = tuple([int(s) for s in shape]) + self._shape = tuple(int(s) for s in shape) # set any unset zooms to 1.0 nzs = min(len(self._zooms), ndim) self._zooms = self._zooms[:nzs] + (1.0,) * (ndim - nzs) - def get_zooms(self): + def get_zooms(self) -> tuple[float, ...]: return self._zooms - def set_zooms(self, zooms): - zooms = tuple([float(z) for z in zooms]) + def set_zooms(self, zooms: Sequence[float]) -> None: + zooms = tuple(float(z) for z in zooms) shape = self.get_data_shape() ndim = len(shape) if len(zooms) != ndim: raise HeaderDataError('Expecting %d zoom values for ndim %d' % (ndim, ndim)) - if len([z for z in zooms if z < 0]): + if any(z < 0 for z in zooms): raise HeaderDataError('zooms must be positive') self._zooms = zooms - def get_base_affine(self): + def get_base_affine(self) -> np.ndarray: shape = self.get_data_shape() zooms = self.get_zooms() return shape_zoom_affine(shape, zooms, self.default_x_flip) get_best_affine = get_base_affine - def data_to_fileobj(self, data, fileobj, rescale=True): + def data_to_fileobj(self, data: npt.ArrayLike, fileobj: io.IOBase, rescale: bool = True): """Write array data `data` as binary to `fileobj` Parameters @@ -259,7 +300,7 @@ dtype = self.get_data_dtype() 
fileobj.write(data.astype(dtype).tobytes(order=self.data_layout)) - def data_from_fileobj(self, fileobj): + def data_from_fileobj(self, fileobj: io.IOBase) -> np.ndarray: """Read binary image data from `fileobj`""" dtype = self.get_data_dtype() shape = self.get_data_shape() @@ -268,7 +309,42 @@ return np.ndarray(shape, dtype, data_bytes, order=self.data_layout) -def supported_np_types(obj): +@cache +def _supported_np_types(klass: type[HasDtype]) -> set[type[np.generic]]: + """Numpy data types that instances of ``klass`` support + + Parameters + ---------- + klass : class + Class implementing `get_data_dtype` and `set_data_dtype` methods. The object + should raise ``HeaderDataError`` for setting unsupported dtypes. The + object will likely be a header or a :class:`SpatialImage` + + Returns + ------- + np_types : set + set of numpy types that ``klass`` instances support + """ + try: + obj = klass() + except TypeError as e: + if hasattr(klass, 'header_class'): + obj = klass.header_class() + else: + raise e + supported = set() + for np_type in set(np.sctypeDict.values()): + try: + obj.set_data_dtype(np_type) + except HeaderDataError: + continue + # Did set work? + if np.dtype(obj.get_data_dtype()) == np.dtype(np_type): + supported.add(np_type) + return supported + + +def supported_np_types(obj: HasDtype) -> set[type[np.generic]]: """Numpy data types that instance `obj` supports Parameters @@ -283,33 +359,22 @@ np_types : set set of numpy types that `obj` supports """ - dt = obj.get_data_dtype() - supported = [] - for name, np_types in np.sctypes.items(): - for np_type in np_types: - try: - obj.set_data_dtype(np_type) - except HeaderDataError: - continue - # Did set work? - if np.dtype(obj.get_data_dtype()) == np.dtype(np_type): - supported.append(np_type) - # Reset original header dtype - obj.set_data_dtype(dt) - return set(supported) + return _supported_np_types(obj.__class__) class ImageDataError(Exception): pass -class SpatialFirstSlicer: +class SpatialFirstSlicer(ty.Generic[SpatialImgT]): """Slicing interface that returns a new image with an updated affine Checks that an image's first three axes are spatial """ - def __init__(self, img): + img: SpatialImgT + + def __init__(self, img: SpatialImgT): # Local import to avoid circular import on module load from .imageclasses import spatial_axes_first @@ -319,7 +384,7 @@ ) self.img = img - def __getitem__(self, slicer): + def __getitem__(self, slicer: object) -> SpatialImgT: try: slicer = self.check_slicing(slicer) except ValueError as err: @@ -332,7 +397,11 @@ affine = self.slice_affine(slicer) return self.img.__class__(dataobj.copy(), affine, self.img.header) - def check_slicing(self, slicer, return_spatial=False): + def check_slicing( + self, + slicer: object, + return_spatial: bool = False, + ) -> tuple[slice | int | None, ...]: """Canonicalize slicers and check for scalar indices in spatial dims Parameters @@ -349,11 +418,11 @@ Validated slicer object that will slice image's `dataobj` without collapsing spatial dimensions """ - slicer = canonical_slicers(slicer, self.img.shape) + canonical = canonical_slicers(slicer, self.img.shape) # We can get away with this because we've checked the image's # first three axes are spatial. # More general slicers will need to be smarter, here. 
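
The cached ``_supported_np_types`` above computes dtype support once per header class; the public ``supported_np_types`` wrapper is unchanged for callers. A sketch:

    import numpy as np
    from nibabel import Nifti1Header
    from nibabel.spatialimages import supported_np_types

    assert np.float32 in supported_np_types(Nifti1Header())
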
- spatial_slices = slicer[:3] + spatial_slices = canonical[:3] for subslicer in spatial_slices: if subslicer is None: raise IndexError('New axis not permitted in spatial dimensions') @@ -361,9 +430,9 @@ raise IndexError( 'Scalar indices disallowed in spatial dimensions; Use `[x]` or `x:x+1`.' ) - return spatial_slices if return_spatial else slicer + return spatial_slices if return_spatial else canonical - def slice_affine(self, slicer): + def slice_affine(self, slicer: object) -> np.ndarray: """Retrieve affine for current image, if sliced by a given index Applies scaling if down-sampling is applied, and adjusts the intercept @@ -403,10 +472,20 @@ class SpatialImage(DataobjImage): """Template class for volumetric (3D/4D) images""" - header_class: Type[SpatialHeader] = SpatialHeader - ImageSlicer = SpatialFirstSlicer + header_class: type[SpatialHeader] = SpatialHeader + ImageSlicer: type[SpatialFirstSlicer] = SpatialFirstSlicer + + _header: SpatialHeader + header: SpatialHeader - def __init__(self, dataobj, affine, header=None, extra=None, file_map=None): + def __init__( + self, + dataobj: ArrayLike, + affine: np.ndarray, + header: FileBasedHeader | ty.Mapping | None = None, + extra: ty.Mapping | None = None, + file_map: FileMap | None = None, + ): """Initialize image The image is a combination of (array-like, affine matrix, header), with @@ -456,7 +535,7 @@ def affine(self): return self._affine - def update_header(self): + def update_header(self) -> None: """Harmonize header with image data and affine >>> data = np.zeros((2,3,4)) @@ -485,7 +564,7 @@ return self._affine2header() - def _affine2header(self): + def _affine2header(self) -> None: """Unconditionally set affine into the header""" RZS = self._affine[:3, :3] vox = np.sqrt(np.sum(RZS * RZS, axis=0)) @@ -495,7 +574,7 @@ zooms[:n_to_set] = vox[:n_to_set] hdr.set_zooms(zooms) - def __str__(self): + def __str__(self) -> str: shape = self.shape affine = self.affine return f""" @@ -507,14 +586,14 @@ {self._header} """ - def get_data_dtype(self): + def get_data_dtype(self) -> np.dtype: return self._header.get_data_dtype() - def set_data_dtype(self, dtype): + def set_data_dtype(self, dtype: npt.DTypeLike) -> None: self._header.set_data_dtype(dtype) @classmethod - def from_image(klass, img): + def from_image(klass: type[SpatialImgT], img: SpatialImage | FileBasedImage) -> SpatialImgT: """Class method to create new instance of own class from `img` Parameters @@ -528,15 +607,17 @@ cimg : ``spatialimage`` instance Image, of our own class """ - return klass( - img.dataobj, - img.affine, - klass.header_class.from_header(img.header), - extra=img.extra.copy(), - ) + if isinstance(img, SpatialImage): + return klass( + img.dataobj, + img.affine, + klass.header_class.from_header(img.header), + extra=img.extra.copy(), + ) + return super().from_image(img) @property - def slicer(self): + def slicer(self: SpatialImgT) -> SpatialFirstSlicer[SpatialImgT]: """Slicer object that returns cropped and subsampled images The image is resliced in the current orientation; no rotation or @@ -555,7 +636,7 @@ """ return self.ImageSlicer(self) - def __getitem__(self, idx): + def __getitem__(self, idx: object) -> None: """No slicing or dictionary interface for images Use the slicer attribute to perform cropping and subsampling at your @@ -568,7 +649,7 @@ '`img.get_fdata()[slice]`' ) - def orthoview(self): + def orthoview(self) -> OrthoSlicer3D: """Plot the image using OrthoSlicer3D Returns @@ -584,7 +665,7 @@ """ return OrthoSlicer3D(self.dataobj, self.affine, 
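
The typing of ``slicer`` and ``__getitem__`` above reflects the long-standing interface: cropping and subsampling go through ``img.slicer``, never direct indexing. A sketch:

    import numpy as np
    import nibabel as nib

    img = nib.Nifti1Image(np.zeros((10, 10, 10), dtype=np.float32), np.eye(4))
    small = img.slicer[2:8, 2:8, 2:8]   # new image with a matching affine
    # img[2:8, 2:8, 2:8] raises TypeError by design
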
title=self.get_filename()) - def as_reoriented(self, ornt): + def as_reoriented(self: SpatialImgT, ornt: Sequence[Sequence[int]]) -> SpatialImgT: """Apply an orientation change and return a new image If ornt is identity transform, return the original image, unchanged diff -Nru nibabel-5.0.0/nibabel/spm2analyze.py nibabel-5.1.0/nibabel/spm2analyze.py --- nibabel-5.0.0/nibabel/spm2analyze.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/spm2analyze.py 2023-04-03 14:48:22.000000000 +0000 @@ -128,7 +128,8 @@ """Class for SPM2 variant of basic Analyze image""" header_class = Spm2AnalyzeHeader + header: Spm2AnalyzeHeader -load = Spm2AnalyzeImage.load +load = Spm2AnalyzeImage.from_filename save = Spm2AnalyzeImage.instance_to_filename diff -Nru nibabel-5.0.0/nibabel/spm99analyze.py nibabel-5.1.0/nibabel/spm99analyze.py --- nibabel-5.0.0/nibabel/spm99analyze.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/spm99analyze.py 2023-04-03 14:48:22.000000000 +0000 @@ -227,6 +227,7 @@ """Class for SPM99 variant of basic Analyze image""" header_class = Spm99AnalyzeHeader + header: Spm99AnalyzeHeader files_types = (('image', '.img'), ('header', '.hdr'), ('mat', '.mat')) has_affine = True makeable = True @@ -331,5 +332,5 @@ sio.savemat(mfobj, {'M': M, 'mat': mat}, format='4') -load = Spm99AnalyzeImage.load +load = Spm99AnalyzeImage.from_filename save = Spm99AnalyzeImage.instance_to_filename diff -Nru nibabel-5.0.0/nibabel/testing/helpers.py nibabel-5.1.0/nibabel/testing/helpers.py --- nibabel-5.0.0/nibabel/testing/helpers.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/testing/helpers.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,7 +6,7 @@ from ..optpkg import optional_package -_, have_scipy, _ = optional_package('scipy.io') +have_scipy = optional_package('scipy.io')[1] from numpy.testing import assert_array_equal diff -Nru nibabel-5.0.0/nibabel/testing/__init__.py nibabel-5.1.0/nibabel/testing/__init__.py --- nibabel-5.0.0/nibabel/testing/__init__.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/testing/__init__.py 2023-04-03 14:48:22.000000000 +0000 @@ -7,10 +7,12 @@ # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """Utilities for testing""" +from __future__ import annotations import os import re import sys +import typing as ty import unittest import warnings from contextlib import nullcontext @@ -19,28 +21,38 @@ import numpy as np import pytest from numpy.testing import assert_array_equal -from pkg_resources import resource_filename from .helpers import assert_data_similar, bytesio_filemap, bytesio_round_trip from .np_features import memmap_after_ufunc +try: + from importlib.abc import Traversable + from importlib.resources import as_file, files +except ImportError: # PY38 + from importlib_resources import as_file, files + from importlib_resources.abc import Traversable -def test_data(subdir=None, fname=None): + +def get_test_data( + subdir: ty.Literal['gifti', 'nicom', 'externals'] | None = None, + fname: str | None = None, +) -> Traversable: + parts: tuple[str, ...] 
if subdir is None: - resource = os.path.join('tests', 'data') + parts = ('tests', 'data') elif subdir in ('gifti', 'nicom', 'externals'): - resource = os.path.join(subdir, 'tests', 'data') + parts = (subdir, 'tests', 'data') else: raise ValueError(f'Unknown test data directory: {subdir}') if fname is not None: - resource = os.path.join(resource, fname) + parts += (fname,) - return resource_filename('nibabel', resource) + return files('nibabel').joinpath(*parts) # set path to example data -data_path = test_data() +data_path = get_test_data() def assert_dt_equal(a, b): @@ -210,19 +222,6 @@ assert_array_equal(value1, value2) -class BaseTestCase(unittest.TestCase): - """TestCase that does not attempt to run if prefixed with a ``_`` - - This restores the nose-like behavior of skipping so-named test cases - in test runners like pytest. - """ - - def setUp(self): - if self.__class__.__name__.startswith('_'): - raise unittest.SkipTest('Base test case - subclass to run') - super().setUp() - - def expires(version): """Decorator to mark a test as xfail with ExpiredDeprecationError after version""" from packaging.version import Version diff -Nru nibabel-5.0.0/nibabel/tests/conftest.py nibabel-5.1.0/nibabel/tests/conftest.py --- nibabel-5.0.0/nibabel/tests/conftest.py 1970-01-01 00:00:00.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/conftest.py 2023-04-03 14:48:22.000000000 +0000 @@ -0,0 +1,18 @@ +import pytest + +from ..spatialimages import supported_np_types + + +# Generate dynamic fixtures +def pytest_generate_tests(metafunc): + if 'supported_dtype' in metafunc.fixturenames: + if metafunc.cls is None or not getattr(metafunc.cls, 'image_class', None): + raise pytest.UsageError( + 'Attempting to use supported_dtype fixture outside an image test case' + ) + # xdist needs a consistent ordering, so sort by class name + supported_dtypes = sorted( + supported_np_types(metafunc.cls.image_class.header_class()), + key=lambda x: x.__name__, + ) + metafunc.parametrize('supported_dtype', supported_dtypes) diff -Nru nibabel-5.0.0/nibabel/tests/test_analyze.py nibabel-5.1.0/nibabel/tests/test_analyze.py --- nibabel-5.0.0/nibabel/tests/test_analyze.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_analyze.py 2023-04-03 14:48:22.000000000 +0000 @@ -49,12 +49,12 @@ PIXDIM0_MSG = 'pixdim[1,2,3] should be non-zero; setting 0 dims to 1' -def add_intp(supported_np_types): - # Add intp, uintp to supported types as necessary - supported_dtypes = [np.dtype(t) for t in supported_np_types] - for np_type in (np.intp, np.uintp): - if np.dtype(np_type) in supported_dtypes: - supported_np_types.add(np_type) +def add_duplicate_types(supported_np_types): + # Update supported numpy types with named scalar types that map to the same set of dtypes + dtypes = {np.dtype(t) for t in supported_np_types} + supported_np_types.update( + scalar for scalar in set(np.sctypeDict.values()) if np.dtype(scalar) in dtypes + ) class TestAnalyzeHeader(tws._TestLabeledWrapStruct): @@ -62,7 +62,7 @@ example_file = header_file sizeof_hdr = AnalyzeHeader.sizeof_hdr supported_np_types = {np.uint8, np.int16, np.int32, np.float32, np.float64, np.complex64} - add_intp(supported_np_types) + add_duplicate_types(supported_np_types) def test_supported_types(self): hdr = self.header_class() diff -Nru nibabel-5.0.0/nibabel/tests/test_casting.py nibabel-5.1.0/nibabel/tests/test_casting.py --- nibabel-5.0.0/nibabel/tests/test_casting.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_casting.py 2023-04-03
14:48:22.000000000 +0000 @@ -233,10 +233,15 @@ def test_longdouble_precision_improved(): - # Just check that this can only be True on windows, msvc - from numpy.distutils.ccompiler import get_default_compiler + # Just check that this can only be True on Windows - if not (os.name == 'nt' and get_default_compiler() == 'msvc'): + # This previously used distutils.ccompiler.get_default_compiler to check for msvc + # In https://github.com/python/cpython/blob/3467991/Lib/distutils/ccompiler.py#L919-L956 + # we see that this was implied by os.name == 'nt', so we can remove this deprecated + # call. + # However, there may be detectable conditions in Windows where we would expect this + # to be False as well. + if os.name != 'nt': assert not longdouble_precision_improved() diff -Nru nibabel-5.0.0/nibabel/tests/test_init.py nibabel-5.1.0/nibabel/tests/test_init.py --- nibabel-5.0.0/nibabel/tests/test_init.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_init.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,7 +1,13 @@ +import pathlib +import unittest from unittest import mock import pytest -from pkg_resources import resource_filename + +try: + from importlib.resources import as_file, files +except ImportError: + from importlib_resources import as_file, files import nibabel as nib @@ -38,12 +44,11 @@ def test_nibabel_bench(): - expected_args = ['-c', '--pyargs', 'nibabel'] + config_path = files('nibabel') / 'benchmarks/pytest.benchmark.ini' + if not isinstance(config_path, pathlib.Path): + raise unittest.SkipTest('Package is not unpacked; could get temp path') - try: - expected_args.insert(1, resource_filename('nibabel', 'benchmarks/pytest.benchmark.ini')) - except: - raise unittest.SkipTest('Not installed') + expected_args = ['-c', str(config_path), '--pyargs', 'nibabel'] with mock.patch('pytest.main') as pytest_main: nib.bench(verbose=0) diff -Nru nibabel-5.0.0/nibabel/tests/test_nifti1.py nibabel-5.1.0/nibabel/tests/test_nifti1.py --- nibabel-5.0.0/nibabel/tests/test_nifti1.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_nifti1.py 2023-04-03 14:48:22.000000000 +0000 @@ -80,7 +80,7 @@ ) if have_binary128(): supported_np_types = supported_np_types.union((np.longdouble, np.longcomplex)) - tana.add_intp(supported_np_types) + tana.add_duplicate_types(supported_np_types) def test_empty(self): tana.TestAnalyzeHeader.test_empty(self) diff -Nru nibabel-5.0.0/nibabel/tests/test_openers.py nibabel-5.1.0/nibabel/tests/test_openers.py --- nibabel-5.0.0/nibabel/tests/test_openers.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_openers.py 2023-04-03 14:48:22.000000000 +0000 @@ -38,7 +38,7 @@ def write(self): pass - def read(self): + def read(self, size=-1, /): return self.message diff -Nru nibabel-5.0.0/nibabel/tests/test_quaternions.py nibabel-5.1.0/nibabel/tests/test_quaternions.py --- nibabel-5.0.0/nibabel/tests/test_quaternions.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_quaternions.py 2023-04-03 14:48:22.000000000 +0000 @@ -16,35 +16,40 @@ from .. import eulerangles as nea from ..
import quaternions as nq + +def norm(vec): + # Return unit vector with same orientation as input vector + return vec / np.sqrt(vec @ vec) + + +def gen_vec(dtype): + # Generate random 3-vector in [-1, 1]^3 + rand = np.random.default_rng() + return rand.uniform(low=-1.0, high=1.0, size=(3,)).astype(dtype) + + # Example rotations -eg_rots = [] -params = (-pi, pi, pi / 2) -zs = np.arange(*params) -ys = np.arange(*params) -xs = np.arange(*params) -for z in zs: - for y in ys: - for x in xs: - eg_rots.append(nea.euler2mat(z, y, x)) +eg_rots = [ + nea.euler2mat(z, y, x) + for z in np.arange(-pi, pi, pi / 2) + for y in np.arange(-pi, pi, pi / 2) + for x in np.arange(-pi, pi, pi / 2) +] + # Example quaternions (from rotations) -eg_quats = [] -for M in eg_rots: - eg_quats.append(nq.mat2quat(M)) +eg_quats = [nq.mat2quat(M) for M in eg_rots] # M, quaternion pairs eg_pairs = list(zip(eg_rots, eg_quats)) # Set of arbitrary unit quaternions -unit_quats = set() -params = range(-2, 3) -for w in params: - for x in params: - for y in params: - for z in params: - q = (w, x, y, z) - Nq = np.sqrt(np.dot(q, q)) - if not Nq == 0: - q = tuple([e / Nq for e in q]) - unit_quats.add(q) +unit_quats = set( + tuple(norm(np.r_[w, x, y, z])) + for w in range(-2, 3) + for x in range(-2, 3) + for y in range(-2, 3) + for z in range(-2, 3) + if (w, x, y, z) != (0, 0, 0, 0) +) def test_fillpos(): @@ -69,6 +74,51 @@ assert wxyz[0] == 0.0 + +@pytest.mark.parametrize('dtype', ('f4', 'f8')) +def test_fillpositive_plus_minus_epsilon(dtype): + # Deterministic test for fillpositive threshold + # We are trying to fill (x, y, z) with a w such that |(w, x, y, z)| == 1 + # If |(x, y, z)| is slightly off one, w should still be 0 + nptype = np.dtype(dtype).type + + # Obviously, |(x, y, z)| == 1 + baseline = np.array([0, 0, 1], dtype=dtype) + + # Obviously, |(x, y, z)| ~ 1 + plus = baseline * nptype(1 + np.finfo(dtype).eps) + minus = baseline * nptype(1 - np.finfo(dtype).eps) + + assert nq.fillpositive(plus)[0] == 0.0 + assert nq.fillpositive(minus)[0] == 0.0 + + # |(x, y, z)| > 1, no real solutions + plus = baseline * nptype(1 + 2 * np.finfo(dtype).eps) + with pytest.raises(ValueError): + nq.fillpositive(plus) + + # |(x, y, z)| < 1, two real solutions, we choose positive + minus = baseline * nptype(1 - 2 * np.finfo(dtype).eps) + assert nq.fillpositive(minus)[0] > 0.0 + + +@pytest.mark.parametrize('dtype', ('f4', 'f8')) +def test_fillpositive_simulated_error(dtype): + # Nondeterministic test for fillpositive threshold + # Create random vectors, normalize to unit length, and count on floating point + # error to result in magnitudes larger/smaller than one + # This is to simulate cases where a unit quaternion with w == 0 would be encoded + # as xyz with small error, and we want to recover the w of 0 + + # Permit 1 epsilon per value (default, but make explicit here) + w2_thresh = 3 * np.finfo(dtype).eps + + pos_error = neg_error = False + for _ in range(50): + xyz = norm(gen_vec(dtype)) + + assert nq.fillpositive(xyz, w2_thresh)[0] == 0.0 + + def test_conjugate(): # Takes sequence cq = nq.conjugate((1, 0, 0, 0)) @@ -125,7 +175,7 @@ def test_mult(M1, q1, M2, q2): # Test that quaternion * same as matrix * q21 = nq.mult(q2, q1) - assert_array_almost_equal, np.dot(M2, M1), nq.quat2mat(q21) + assert_array_almost_equal(M2 @ M1, nq.quat2mat(q21)) @pytest.mark.parametrize('M, q', eg_pairs) @@ -146,7 +196,7 @@ @pytest.mark.parametrize('M, q', eg_pairs) def test_qrotate(vec, M, q): vdash = nq.rotate_vector(vec, q) - vM = np.dot(M, vec) + vM = M @ vec
assert_array_almost_equal(vdash, vM) @@ -179,6 +229,6 @@ nq.nearly_equivalent(q, q2) aa_mat = nq.angle_axis2mat(theta, vec) assert_array_almost_equal(aa_mat, M) - unit_vec = vec / np.sqrt(vec.dot(vec)) + unit_vec = norm(vec) aa_mat2 = nq.angle_axis2mat(theta, unit_vec, is_normalized=True) assert_array_almost_equal(aa_mat2, M) diff -Nru nibabel-5.0.0/nibabel/tests/test_spatialimages.py nibabel-5.1.0/nibabel/tests/test_spatialimages.py --- nibabel-5.0.0/nibabel/tests/test_spatialimages.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_spatialimages.py 2023-04-03 14:48:22.000000000 +0000 @@ -11,7 +11,6 @@ import warnings from io import BytesIO -from unittest import TestCase import numpy as np import pytest @@ -205,7 +204,7 @@ return np.arange(3, dtype=dtype) -class TestSpatialImage(TestCase): +class TestSpatialImage: # class for testing images image_class = SpatialImage can_save = False diff -Nru nibabel-5.0.0/nibabel/tests/test_spm99analyze.py nibabel-5.1.0/nibabel/tests/test_spm99analyze.py --- nibabel-5.0.0/nibabel/tests/test_spm99analyze.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_spm99analyze.py 2023-04-03 14:48:22.000000000 +0000 @@ -35,10 +35,20 @@ from ..volumeutils import _dt_min_max, apply_read_scaling from . import test_analyze -FLOAT_TYPES = np.sctypes['float'] -COMPLEX_TYPES = np.sctypes['complex'] -INT_TYPES = np.sctypes['int'] -UINT_TYPES = np.sctypes['uint'] +# np.sctypes values are lists of types with unique sizes +# For testing, we want all concrete classes of a type +# Key on kind, rather than abstract base classes, since timedelta64 is a signedinteger +sctypes = {} +for sctype in set(np.sctypeDict.values()): + sctypes.setdefault(np.dtype(sctype).kind, []).append(sctype) + +# Sort types to ensure that xdist doesn't complain about test order when we parametrize +FLOAT_TYPES = sorted(sctypes['f'], key=lambda x: x.__name__) +COMPLEX_TYPES = sorted(sctypes['c'], key=lambda x: x.__name__) +INT_TYPES = sorted(sctypes['i'], key=lambda x: x.__name__) +UINT_TYPES = sorted(sctypes['u'], key=lambda x: x.__name__) + +# Create combined type lists CFLOAT_TYPES = FLOAT_TYPES + COMPLEX_TYPES IUINT_TYPES = INT_TYPES + UINT_TYPES NUMERIC_TYPES = CFLOAT_TYPES + IUINT_TYPES @@ -306,57 +316,57 @@ img_rt = bytesio_round_trip(img) assert_array_equal(img_rt.get_fdata(), np.clip(arr, 0, 255)) - def test_no_scaling(self): + # NOTE: Need to check complex scaling + @pytest.mark.parametrize('in_dtype', FLOAT_TYPES + IUINT_TYPES) + def test_no_scaling(self, in_dtype, supported_dtype): # Test writing image converting types when not calculating scaling img_class = self.image_class hdr_class = img_class.header_class hdr = hdr_class() - supported_types = supported_np_types(hdr) # Any old non-default slope and intercept slope = 2 inter = 10 if hdr.has_data_intercept else 0 - for in_dtype, out_dtype in itertools.product(FLOAT_TYPES + IUINT_TYPES, supported_types): - # Need to check complex scaling - mn_in, mx_in = _dt_min_max(in_dtype) - arr = np.array([mn_in, -1, 0, 1, 10, mx_in], dtype=in_dtype) - img = img_class(arr, np.eye(4), hdr) - img.set_data_dtype(out_dtype) - # Setting the scaling means we don't calculate it later - img.header.set_slope_inter(slope, inter) - with np.errstate(invalid='ignore'): - rt_img = bytesio_round_trip(img) - with suppress_warnings(): # invalid mult - back_arr = np.asanyarray(rt_img.dataobj) - exp_back = arr.copy() - # If converting to floating point type, casting is direct. 
- # Otherwise we will need to do float-(u)int casting at some point - if out_dtype in IUINT_TYPES: - if in_dtype in FLOAT_TYPES: - # Working precision is (at least) float - exp_back = exp_back.astype(float) - # Float to iu conversion will always round, clip - with np.errstate(invalid='ignore'): - exp_back = np.round(exp_back) - if in_dtype in FLOAT_TYPES: - # Clip to shared range of working precision - exp_back = np.clip(exp_back, *shared_range(float, out_dtype)) - else: # iu input and output type - # No scaling, never gets converted to float. - # Does get clipped to range of output type - mn_out, mx_out = _dt_min_max(out_dtype) - if (mn_in, mx_in) != (mn_out, mx_out): - # Use smaller of input, output range to avoid np.clip - # upcasting the array because of large clip limits. - exp_back = np.clip(exp_back, max(mn_in, mn_out), min(mx_in, mx_out)) - if out_dtype in COMPLEX_TYPES: - # always cast to real from complex - exp_back = exp_back.astype(out_dtype) - else: - # Cast to working precision + + mn_in, mx_in = _dt_min_max(in_dtype) + arr = np.array([mn_in, -1, 0, 1, 10, mx_in], dtype=in_dtype) + img = img_class(arr, np.eye(4), hdr) + img.set_data_dtype(supported_dtype) + # Setting the scaling means we don't calculate it later + img.header.set_slope_inter(slope, inter) + with np.errstate(invalid='ignore'): + rt_img = bytesio_round_trip(img) + with suppress_warnings(): # invalid mult + back_arr = np.asanyarray(rt_img.dataobj) + exp_back = arr.copy() + # If converting to floating point type, casting is direct. + # Otherwise we will need to do float-(u)int casting at some point + if supported_dtype in IUINT_TYPES: + if in_dtype in FLOAT_TYPES: + # Working precision is (at least) float exp_back = exp_back.astype(float) - # Allow for small differences in large numbers - with suppress_warnings(): # invalid value - assert_allclose_safely(back_arr, exp_back * slope + inter) + # Float to iu conversion will always round, clip + with np.errstate(invalid='ignore'): + exp_back = np.round(exp_back) + if in_dtype in FLOAT_TYPES: + # Clip to shared range of working precision + exp_back = np.clip(exp_back, *shared_range(float, supported_dtype)) + else: # iu input and output type + # No scaling, never gets converted to float. + # Does get clipped to range of output type + mn_out, mx_out = _dt_min_max(supported_dtype) + if (mn_in, mx_in) != (mn_out, mx_out): + # Use smaller of input, output range to avoid np.clip + # upcasting the array because of large clip limits. 
+ exp_back = np.clip(exp_back, max(mn_in, mn_out), min(mx_in, mx_out)) + if supported_dtype in COMPLEX_TYPES: + # always cast to real from complex + exp_back = exp_back.astype(supported_dtype) + else: + # Cast to working precision + exp_back = exp_back.astype(float) + # Allow for small differences in large numbers + with suppress_warnings(): # invalid value + assert_allclose_safely(back_arr, exp_back * slope + inter) def test_write_scaling(self): # Check writes with scaling set diff -Nru nibabel-5.0.0/nibabel/tests/test_testing.py nibabel-5.1.0/nibabel/tests/test_testing.py --- nibabel-5.0.0/nibabel/tests/test_testing.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_testing.py 2023-04-03 14:48:22.000000000 +0000 @@ -15,8 +15,8 @@ data_path, error_warnings, get_fresh_mod, + get_test_data, suppress_warnings, - test_data, ) @@ -171,20 +171,22 @@ def test_test_data(): - assert test_data() == data_path - assert test_data() == os.path.abspath( + assert str(get_test_data()) == str(data_path) # Always get the same result + # Works the same as using __file__ and os.path utilities + assert str(get_test_data()) == os.path.abspath( os.path.join(os.path.dirname(__file__), '..', 'tests', 'data') ) + # Check action of subdir and that existence checks work for subdir in ('nicom', 'gifti', 'externals'): - assert test_data(subdir) == os.path.join(data_path[:-10], subdir, 'tests', 'data') - assert os.path.exists(test_data(subdir)) - assert not os.path.exists(test_data(subdir, 'doesnotexist')) + assert get_test_data(subdir) == data_path.parent.parent / subdir / 'tests' / 'data' + assert os.path.exists(get_test_data(subdir)) + assert not os.path.exists(get_test_data(subdir, 'doesnotexist')) for subdir in ('freesurfer', 'doesnotexist'): with pytest.raises(ValueError): - test_data(subdir) + get_test_data(subdir) - assert not os.path.exists(test_data(None, 'doesnotexist')) + assert not os.path.exists(get_test_data(None, 'doesnotexist')) for subdir, fname in [ ('gifti', 'ascii.gii'), @@ -192,4 +194,4 @@ ('externals', 'example_1.nc'), (None, 'empty.tck'), ]: - assert os.path.exists(test_data(subdir, fname)) + assert os.path.exists(get_test_data(subdir, fname)) diff -Nru nibabel-5.0.0/nibabel/tests/test_wrapstruct.py nibabel-5.1.0/nibabel/tests/test_wrapstruct.py --- nibabel-5.0.0/nibabel/tests/test_wrapstruct.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tests/test_wrapstruct.py 2023-04-03 14:48:22.000000000 +0000 @@ -33,7 +33,6 @@ from .. import imageglobals from ..batteryrunners import Report from ..spatialimages import HeaderDataError -from ..testing import BaseTestCase from ..volumeutils import Recoder, native_code, swapped_code from ..wrapstruct import LabeledWrapStruct, WrapStruct, WrapStructError @@ -101,7 +100,7 @@ return hdrc, message, raiser -class _TestWrapStructBase(BaseTestCase): +class _TestWrapStructBase: """Class implements base tests for binary headers It serves as a base class for other binary header tests diff -Nru nibabel-5.0.0/nibabel/tmpdirs.py nibabel-5.1.0/nibabel/tmpdirs.py --- nibabel-5.0.0/nibabel/tmpdirs.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tmpdirs.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,8 +6,7 @@ # copyright and license terms. 
# ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -"""Contexts for *with* statement providing temporary directories -""" +"""Contexts for *with* statement providing temporary directories""" import os import tempfile from contextlib import contextmanager @@ -20,8 +19,10 @@ def _chdir(path): cwd = os.getcwd() os.chdir(path) - yield - os.chdir(cwd) + try: + yield + finally: + os.chdir(cwd) from .deprecated import deprecate_with_version diff -Nru nibabel-5.0.0/nibabel/tripwire.py nibabel-5.1.0/nibabel/tripwire.py --- nibabel-5.0.0/nibabel/tripwire.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/tripwire.py 2023-04-03 14:48:22.000000000 +0000 @@ -1,5 +1,5 @@ -"""Class to raise error for missing modules or other misfortunes -""" +"""Class to raise error for missing modules or other misfortunes""" +from typing import Any class TripWireError(AttributeError): @@ -11,7 +11,7 @@ # is not present. -def is_tripwire(obj): +def is_tripwire(obj: Any) -> bool: """Returns True if `obj` appears to be a TripWire object Examples @@ -44,9 +44,9 @@ TripWireError: We do not have a_module """ - def __init__(self, msg): + def __init__(self, msg: str) -> None: self._msg = msg - def __getattr__(self, attr_name): + def __getattr__(self, attr_name: str) -> Any: """Raise informative error accessing attributes""" raise TripWireError(self._msg) diff -Nru nibabel-5.0.0/nibabel/_version.pyi nibabel-5.1.0/nibabel/_version.pyi --- nibabel-5.0.0/nibabel/_version.pyi 1970-01-01 00:00:00.000000000 +0000 +++ nibabel-5.1.0/nibabel/_version.pyi 2023-04-03 14:48:22.000000000 +0000 @@ -0,0 +1,4 @@ +__version__: str +__version_tuple__: tuple[str, ...] +version: str +version_tuple: tuple[str, ...] diff -Nru nibabel-5.0.0/nibabel/viewers.py nibabel-5.1.0/nibabel/viewers.py --- nibabel-5.0.0/nibabel/viewers.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/viewers.py 2023-04-03 14:48:22.000000000 +0000 @@ -3,7 +3,6 @@ Includes version of OrthoSlicer3D code originally written by our own Paul Ivanov. """ - import weakref import numpy as np @@ -14,7 +13,7 @@ class OrthoSlicer3D: - """Orthogonal-plane slice viewer. + """Orthogonal-plane slice viewer OrthoSlicer3d expects 3- or 4-dimensional array data. 
It treats 4D data as a sequence of 3D spatial volumes, where a slice over the final diff -Nru nibabel-5.0.0/nibabel/volumeutils.py nibabel-5.1.0/nibabel/volumeutils.py --- nibabel-5.0.0/nibabel/volumeutils.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/volumeutils.py 2023-04-03 14:48:22.000000000 +0000 @@ -9,22 +9,28 @@ """Utility functions for analyze-like formats""" from __future__ import annotations -import gzip +import io import sys +import typing as ty import warnings -from collections import OrderedDict from functools import reduce -from operator import mul +from operator import getitem, mul from os.path import exists, splitext import numpy as np +from ._compression import COMPRESSED_FILE_LIKES from .casting import OK_FLOATS, shared_range from .externals.oset import OrderedSet -from .openers import BZ2File, IndexedGzipFile -from .optpkg import optional_package -pyzstd, HAVE_ZSTD, _ = optional_package('pyzstd') +if ty.TYPE_CHECKING: # pragma: no cover + import numpy.typing as npt + + Scalar = np.number | float + + K = ty.TypeVar('K') + V = ty.TypeVar('V') + DT = ty.TypeVar('DT', bound=np.generic) sys_is_le = sys.byteorder == 'little' native_code = sys_is_le and '<' or '>' @@ -41,13 +47,6 @@ #: default compression level when writing gz and bz2 files default_compresslevel = 1 -#: file-like classes known to hold compressed data -COMPRESSED_FILE_LIKES: tuple[type, ...] = (gzip.GzipFile, BZ2File, IndexedGzipFile) - -# Enable .zst support if pyzstd installed. -if HAVE_ZSTD: - COMPRESSED_FILE_LIKES = (*COMPRESSED_FILE_LIKES, pyzstd.ZstdFile) - class Recoder: """class to return canonical code(s) from code or aliases @@ -83,7 +82,14 @@ 2 """ - def __init__(self, codes, fields=('code',), map_maker=OrderedDict): + fields: tuple[str, ...] + + def __init__( + self, + codes: ty.Sequence[ty.Sequence[ty.Hashable]], + fields: ty.Sequence[str] = ('code',), + map_maker: type[ty.Mapping[ty.Hashable, ty.Hashable]] = dict, + ): """Create recoder object ``codes`` give a sequence of code, alias sequences @@ -121,7 +127,14 @@ self.field1 = self.__dict__[fields[0]] self.add_codes(codes) - def add_codes(self, code_syn_seqs): + def __getattr__(self, key: str) -> ty.Mapping[ty.Hashable, ty.Hashable]: + # By setting this, we let static analyzers know that dynamic attributes will + # be dict-like (Mapping). + # However, __getattr__ is called if looking up the field in __dict__ fails, + # so we only get here if the attribute is really missing. 
+ raise AttributeError(f'{self.__class__.__name__!r} object has no attribute {key!r}') + + def add_codes(self, code_syn_seqs: ty.Sequence[ty.Sequence[ty.Hashable]]) -> None: """Add codes to object Parameters @@ -155,7 +168,7 @@ for field_ind, field_name in enumerate(self.fields): self.__dict__[field_name][alias] = code_syns[field_ind] - def __getitem__(self, key): + def __getitem__(self, key: ty.Hashable) -> ty.Hashable: """Return value from field1 dictionary (first column of values) Returns same value as ``obj.field1[key]`` and, with the @@ -168,13 +181,9 @@ """ return self.field1[key] - def __contains__(self, key): + def __contains__(self, key: ty.Hashable) -> bool: """True if field1 in recoder contains `key`""" - try: - self.field1[key] - except KeyError: - return False - return True + return key in self.field1 def keys(self): """Return all available code and alias values @@ -190,7 +199,7 @@ """ return self.field1.keys() - def value_set(self, name=None): + def value_set(self, name: str | None = None) -> OrderedSet: """Return OrderedSet of possible returned values for column By default, the column is the first column. @@ -224,7 +233,7 @@ endian_codes = Recoder(_endian_codes) -class DtypeMapper: +class DtypeMapper(ty.Dict[ty.Hashable, ty.Hashable]): """Specialized mapper for numpy dtypes We pass this mapper into the Recoder class to deal with numpy dtype @@ -242,26 +251,20 @@ and return any matching values for the matching key. """ - def __init__(self): - self._dict = {} - self._dtype_keys = [] - - def keys(self): - return self._dict.keys() + def __init__(self) -> None: + super().__init__() + self._dtype_keys: list[np.dtype] = [] - def values(self): - return self._dict.values() - - def __setitem__(self, key, value): + def __setitem__(self, key: ty.Hashable, value: ty.Hashable) -> None: """Set item into mapping, checking for dtype keys Cache dtype keys for comparison test in __getitem__ """ - self._dict[key] = value - if hasattr(key, 'subdtype'): + super().__setitem__(key, value) + if isinstance(key, np.dtype): self._dtype_keys.append(key) - def __getitem__(self, key): + def __getitem__(self, key: ty.Hashable) -> ty.Hashable: """Get item from mapping, checking for dtype keys First do simple hash lookup, then check for a dtype key that has failed @@ -269,17 +272,20 @@ to `key`. """ try: - return self._dict[key] + return super().__getitem__(key) except KeyError: pass - if hasattr(key, 'subdtype'): + if isinstance(key, np.dtype): for dt in self._dtype_keys: if key == dt: - return self._dict[dt] + return super().__getitem__(dt) raise KeyError(key) -def pretty_mapping(mapping, getterfunc=None): +def pretty_mapping( + mapping: ty.Mapping[K, V], + getterfunc: ty.Callable[[ty.Mapping[K, V], K], V] | None = None, +) -> str: """Make pretty string from mapping Adjusts text column to print values on basis of longest key. 
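As a usage sketch for the retyped ``pretty_mapping`` above (not part of the patch; the example mapping is invented), keys are padded to the width of the longest key so the values line up in one column:

    from nibabel.volumeutils import pretty_mapping

    # 'datatype' is the longest key (8 chars), so both keys pad to width 8
    print(pretty_mapping({'shape': (2, 3, 4), 'datatype': 'int16'}))
    # shape    : (2, 3, 4)
    # datatype : int16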
@@ -328,9 +334,8 @@ longer_field : method string """ if getterfunc is None: - getterfunc = lambda obj, key: obj[key] - lens = [len(str(name)) for name in mapping] - mxlen = np.max(lens) + getterfunc = getitem + mxlen = max(len(str(name)) for name in mapping) fmt = '%%-%ds : %%s' % mxlen out = [] for name in mapping: @@ -339,7 +344,7 @@ return '\n'.join(out) -def make_dt_codes(codes_seqs): +def make_dt_codes(codes_seqs: ty.Sequence[ty.Sequence]) -> Recoder: """Create full dt codes Recoder instance from datatype codes Include created numpy dtype (from numpy type) and opposite endian @@ -379,12 +384,19 @@ return Recoder(dt_codes, fields + ['dtype', 'sw_dtype'], DtypeMapper) -def _is_compressed_fobj(fobj): +def _is_compressed_fobj(fobj: io.IOBase) -> bool: """Return True if fobj represents a compressed data file-like object""" return isinstance(fobj, COMPRESSED_FILE_LIKES) -def array_from_file(shape, in_dtype, infile, offset=0, order='F', mmap=True): +def array_from_file( + shape: tuple[int, ...], + in_dtype: np.dtype[DT], + infile: io.IOBase, + offset: int = 0, + order: ty.Literal['C', 'F'] = 'F', + mmap: bool | ty.Literal['c', 'r', 'r+'] = True, +) -> npt.NDArray[DT]: """Get array from file with specified shape, dtype and file offset Parameters @@ -429,24 +441,23 @@ """ if mmap not in (True, False, 'c', 'r', 'r+'): raise ValueError("mmap value should be one of True, False, 'c', " "'r', 'r+'") - if mmap is True: - mmap = 'c' in_dtype = np.dtype(in_dtype) # Get file-like object from Opener instance infile = getattr(infile, 'fobj', infile) if mmap and not _is_compressed_fobj(infile): + mode = 'c' if mmap is True else mmap try: # Try memmapping file on disk - return np.memmap(infile, in_dtype, mode=mmap, shape=shape, order=order, offset=offset) + return np.memmap(infile, in_dtype, mode=mode, shape=shape, order=order, offset=offset) # The error raised by memmap, for different file types, has # changed in different incarnations of the numpy routine except (AttributeError, TypeError, ValueError): pass if len(shape) == 0: - return np.array([]) + return np.array([], in_dtype) # Use reduce and mul to work around numpy integer overflow n_bytes = reduce(mul, shape) * in_dtype.itemsize if n_bytes == 0: - return np.array([]) + return np.array([], in_dtype) # Read data from file infile.seek(offset) if hasattr(infile, 'readinto'): @@ -462,7 +473,7 @@ f'Expected {n_bytes} bytes, got {n_read} bytes from ' f"{getattr(infile, 'name', 'object')}\n - could the file be damaged?" 
) - arr = np.ndarray(shape, in_dtype, buffer=data_bytes, order=order) + arr: np.ndarray = np.ndarray(shape, in_dtype, buffer=data_bytes, order=order) if needs_copy: return arr.copy() arr.flags.writeable = True @@ -470,17 +481,17 @@ def array_to_file( - data, - fileobj, - out_dtype=None, - offset=0, - intercept=0.0, - divslope=1.0, - mn=None, - mx=None, - order='F', - nan2zero=True, -): + data: npt.ArrayLike, + fileobj: io.IOBase, + out_dtype: np.dtype | None = None, + offset: int = 0, + intercept: Scalar = 0.0, + divslope: Scalar | None = 1.0, + mn: Scalar | None = None, + mx: Scalar | None = None, + order: ty.Literal['C', 'F'] = 'F', + nan2zero: bool = True, +) -> None: """Helper function for writing arrays to file objects Writes arrays as scaled by `intercept` and `divslope`, and clipped @@ -562,8 +573,7 @@ True """ # Shield special case - div_none = divslope is None - if not np.all(np.isfinite((intercept, 1.0 if div_none else divslope))): + if not np.isfinite(np.array((intercept, 1.0 if divslope is None else divslope))).all(): raise ValueError('divslope and intercept must be finite') if divslope == 0: raise ValueError('divslope cannot be zero') @@ -575,7 +585,7 @@ out_dtype = np.dtype(out_dtype) if offset is not None: seek_tell(fileobj, offset) - if div_none or (mn, mx) == (0, 0) or ((mn is not None and mx is not None) and mx < mn): + if divslope is None or (mn, mx) == (0, 0) or ((mn is not None and mx is not None) and mx < mn): write_zeros(fileobj, data.size * out_dtype.itemsize) return if order not in 'FC': @@ -707,17 +717,17 @@ def _write_data( - data, - fileobj, - out_dtype, - order, - in_cast=None, - pre_clips=None, - inter=0.0, - slope=1.0, - post_clips=None, - nan_fill=None, -): + data: np.ndarray, + fileobj: io.IOBase, + out_dtype: np.dtype, + order: ty.Literal['C', 'F'], + in_cast: np.dtype | None = None, + pre_clips: tuple[Scalar | None, Scalar | None] | None = None, + inter: Scalar | np.ndarray = 0.0, + slope: Scalar | np.ndarray = 1.0, + post_clips: tuple[Scalar | None, Scalar | None] | None = None, + nan_fill: Scalar | None = None, +) -> None: """Write array `data` to `fileobj` as `out_dtype` type, layout `order` Does not modify `data` in-place. 
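To make the scaling contract behind these retyped signatures concrete: ``array_to_file`` writes ``(data - intercept) / divslope`` cast to ``out_dtype``, and ``apply_read_scaling`` (annotated further below in this file) recovers ``arr * slope + inter`` on read. A minimal round-trip sketch with invented values, not part of the patch:

    import io

    import numpy as np

    from nibabel.volumeutils import apply_read_scaling, array_from_file, array_to_file

    data = np.array([0.0, 1.0, 2.5, 10.0])
    fobj = io.BytesIO()
    # On disk: (data - 1.0) / 0.5 == [-2, 0, 3, 18], stored as int16
    array_to_file(data, fobj, out_dtype=np.dtype(np.int16), intercept=1.0, divslope=0.5)
    raw = array_from_file(data.shape, np.dtype(np.int16), fobj)
    # Reading inverts the scaling, recovering the original values
    assert np.allclose(apply_read_scaling(raw, slope=0.5, inter=1.0), data)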
@@ -774,7 +784,9 @@ fileobj.write(dslice.tobytes()) -def _dt_min_max(dtype_like, mn=None, mx=None): +def _dt_min_max( + dtype_like: npt.DTypeLike, mn: Scalar | None = None, mx: Scalar | None = None +) -> tuple[Scalar, Scalar]: dt = np.dtype(dtype_like) if dt.kind in 'fc': dt_mn, dt_mx = (-np.inf, np.inf) @@ -786,20 +798,25 @@ return dt_mn if mn is None else mn, dt_mx if mx is None else mx -_CSIZE2FLOAT = {8: np.float32, 16: np.float64, 24: np.longdouble, 32: np.longdouble} +_CSIZE2FLOAT: dict[int, type[np.floating]] = { + 8: np.float32, + 16: np.float64, + 24: np.longdouble, + 32: np.longdouble, +} -def _matching_float(np_type): +def _matching_float(np_type: npt.DTypeLike) -> type[np.floating]: """Return floating point type matching `np_type`""" dtype = np.dtype(np_type) if dtype.kind not in 'cf': raise ValueError('Expecting float or complex type as input') - if dtype.kind in 'f': + if issubclass(dtype.type, np.floating): return dtype.type return _CSIZE2FLOAT[dtype.itemsize] -def write_zeros(fileobj, count, block_size=8194): +def write_zeros(fileobj: io.IOBase, count: int, block_size: int = 8194) -> None: """Write `count` zero bytes to `fileobj` Parameters @@ -819,7 +836,7 @@ fileobj.write(b'\x00' * rem) -def seek_tell(fileobj, offset, write0=False): +def seek_tell(fileobj: io.IOBase, offset: int, write0: bool = False) -> None: """Seek in `fileobj` or check we're in the right place already Parameters @@ -849,7 +866,11 @@ assert fileobj.tell() == offset -def apply_read_scaling(arr, slope=None, inter=None): +def apply_read_scaling( + arr: np.ndarray, + slope: Scalar | None = None, + inter: Scalar | None = None, +) -> np.ndarray: """Apply scaling in `slope` and `inter` to array `arr` This is for loading the array from a file (as opposed to the reverse @@ -888,23 +909,28 @@ return arr shape = arr.shape # Force float / float upcasting by promoting to arrays - arr, slope, inter = (np.atleast_1d(v) for v in (arr, slope, inter)) + slope1d, inter1d = (np.atleast_1d(v) for v in (slope, inter)) + arr = np.atleast_1d(arr) if arr.dtype.kind in 'iu': # int to float; get enough precision to avoid infs # Find floating point type for which scaling does not overflow, # starting at given type - default = slope.dtype.type if slope.dtype.kind == 'f' else np.float64 - ftype = int_scinter_ftype(arr.dtype, slope, inter, default) - slope = slope.astype(ftype) - inter = inter.astype(ftype) - if slope != 1.0: - arr = arr * slope - if inter != 0.0: - arr = arr + inter + default = slope1d.dtype.type if slope1d.dtype.kind == 'f' else np.float64 + ftype = int_scinter_ftype(arr.dtype, slope1d, inter1d, default) + slope1d = slope1d.astype(ftype) + inter1d = inter1d.astype(ftype) + if slope1d != 1.0: + arr = arr * slope1d + if inter1d != 0.0: + arr = arr + inter1d return arr.reshape(shape) -def working_type(in_type, slope=1.0, inter=0.0): +def working_type( + in_type: npt.DTypeLike, + slope: npt.ArrayLike = 1.0, + inter: npt.ArrayLike = 0.0, +) -> type[np.number]: """Return array type from applying `slope`, `inter` to array of `in_type` Numpy type that results from an array of type `in_type` being combined with @@ -935,19 +961,22 @@ `in_type`. """ val = np.array([1], dtype=in_type) - slope = np.array(slope) - inter = np.array(inter) # Don't use real values to avoid overflows. Promote to 1D to avoid scalar # casting rules. Don't use ones_like, zeros_like because of a bug in numpy # <= 1.5.1 in converting complex192 / complex256 scalars. 
if inter != 0: - val = val + np.array([0], dtype=inter.dtype) + val = val + np.array([0], dtype=np.array(inter).dtype) if slope != 1: - val = val / np.array([1], dtype=slope.dtype) + val = val / np.array([1], dtype=np.array(slope).dtype) return val.dtype.type -def int_scinter_ftype(ifmt, slope=1.0, inter=0.0, default=np.float32): +def int_scinter_ftype( + ifmt: type[np.integer], + slope: npt.ArrayLike = 1.0, + inter: npt.ArrayLike = 0.0, + default: type[np.floating] = np.float32, +) -> type[np.floating]: """float type containing int type `ifmt` * `slope` + `inter` Return float type that can represent the max and the min of the `ifmt` type @@ -999,7 +1028,12 @@ raise ValueError('Overflow using highest floating point type') -def best_write_scale_ftype(arr, slope=1.0, inter=0.0, default=np.float32): +def best_write_scale_ftype( + arr: np.ndarray, + slope: npt.ArrayLike = 1.0, + inter: npt.ArrayLike = 0.0, + default: type[np.number] = np.float32, +) -> type[np.floating]: """Smallest float type to contain range of ``arr`` after scaling Scaling that will be applied to ``arr`` is ``(arr - inter) / slope``. @@ -1063,7 +1097,11 @@ return OK_FLOATS[-1] -def better_float_of(first, second, default=np.float32): +def better_float_of( + first: npt.DTypeLike, + second: npt.DTypeLike, + default: type[np.floating] = np.float32, +) -> type[np.floating]: """Return more capable float type of `first` and `second` Return `default` if neither of `first` or `second` is a float @@ -1097,19 +1135,22 @@ first = np.dtype(first) second = np.dtype(second) default = np.dtype(default).type - kinds = (first.kind, second.kind) - if 'f' not in kinds: - return default - if kinds == ('f', 'f'): - if first.itemsize >= second.itemsize: - return first.type - return second.type - if first.kind == 'f': + if issubclass(first.type, np.floating): + if issubclass(second.type, np.floating) and first.itemsize < second.itemsize: + return second.type return first.type - return second.type + if issubclass(second.type, np.floating): + return second.type + return default -def _ftype4scaled_finite(tst_arr, slope, inter, direction='read', default=np.float32): +def _ftype4scaled_finite( + tst_arr: np.ndarray, + slope: npt.ArrayLike, + inter: npt.ArrayLike, + direction: ty.Literal['read', 'write'] = 'read', + default: type[np.floating] = np.float32, +) -> type[np.floating]: """Smallest float type for scaling of `tst_arr` that does not overflow""" assert direction in ('read', 'write') if default not in OK_FLOATS and default is np.longdouble: @@ -1120,7 +1161,6 @@ tst_arr = np.atleast_1d(tst_arr) slope = np.atleast_1d(slope) inter = np.atleast_1d(inter) - overflow_filter = ('error', '.*overflow.*', RuntimeWarning) for ftype in OK_FLOATS[def_ind:]: tst_trans = tst_arr.copy() slope = slope.astype(ftype) @@ -1128,7 +1168,7 @@ try: with warnings.catch_warnings(): # Error on overflows to short circuit the logic - warnings.filterwarnings(*overflow_filter) + warnings.filterwarnings('error', '.*overflow.*', RuntimeWarning) if direction == 'read': # as in reading of image from disk if slope != 1.0: tst_trans = tst_trans * slope @@ -1147,7 +1187,22 @@ raise ValueError('Overflow using highest floating point type') -def finite_range(arr, check_nan=False): +@ty.overload +def finite_range( + arr: npt.ArrayLike, check_nan: ty.Literal[False] = False +) -> tuple[Scalar, Scalar]: + ... # pragma: no cover + + +@ty.overload +def finite_range(arr: npt.ArrayLike, check_nan: ty.Literal[True]) -> tuple[Scalar, Scalar, bool]: + ... 
# pragma: no cover + + +def finite_range( + arr: npt.ArrayLike, + check_nan: bool = False, +) -> tuple[Scalar, Scalar, bool] | tuple[Scalar, Scalar]: """Get range (min, max) or range and flag (min, max, has_nan) from `arr` Parameters @@ -1195,7 +1250,9 @@ """ arr = np.asarray(arr) if arr.size == 0: - return (np.inf, -np.inf) + (False,) * check_nan + if check_nan: + return (np.inf, -np.inf, False) + return (np.inf, -np.inf) # Resort array to slowest->fastest memory change indices stride_order = np.argsort(arr.strides)[::-1] sarr = arr.transpose(stride_order) @@ -1243,7 +1300,11 @@ return np.nanmin(mins), np.nanmax(maxes) -def shape_zoom_affine(shape, zooms, x_flip=True): +def shape_zoom_affine( + shape: ty.Sequence[int] | np.ndarray, + zooms: ty.Sequence[float] | np.ndarray, + x_flip: bool = True, +) -> np.ndarray: """Get affine implied by given shape and zooms We get the translations from the center of the image (implied by @@ -1305,7 +1366,7 @@ return aff -def rec2dict(rec): +def rec2dict(rec: np.ndarray) -> dict[str, np.generic | np.ndarray]: """Convert recarray to dictionary Also converts scalar values to scalars @@ -1338,7 +1399,7 @@ return dct -def fname_ext_ul_case(fname): +def fname_ext_ul_case(fname: str) -> str: """`fname` with ext changed to upper / lower case if file exists Check for existence of `fname`. If it does exist, return unmodified. If diff -Nru nibabel-5.0.0/nibabel/xmlutils.py nibabel-5.1.0/nibabel/xmlutils.py --- nibabel-5.0.0/nibabel/xmlutils.py 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/nibabel/xmlutils.py 2023-04-03 14:48:22.000000000 +0000 @@ -6,10 +6,7 @@ # copyright and license terms. # ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## -""" -Thin layer around xml.etree.ElementTree, to abstract nibabel xml support. 
-""" - +"""Thin layer around xml.etree.ElementTree, to abstract nibabel xml support""" from io import BytesIO from xml.etree.ElementTree import Element, SubElement, tostring # noqa from xml.parsers.expat import ParserCreate diff -Nru nibabel-5.0.0/.pre-commit-config.yaml nibabel-5.1.0/.pre-commit-config.yaml --- nibabel-5.0.0/.pre-commit-config.yaml 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/.pre-commit-config.yaml 2023-04-03 14:48:22.000000000 +0000 @@ -17,7 +17,7 @@ hooks: - id: blue - repo: https://github.com/pycqa/isort - rev: 5.11.2 + rev: 5.12.0 hooks: - id: isort - repo: https://github.com/pycqa/flake8 @@ -35,5 +35,8 @@ - types-setuptools - types-Pillow - pydicom - # Sync with tool.mypy['exclude'] - exclude: "^(doc|nisext|tools)/|.*/tests/" + - numpy + - pyzstd + - importlib_resources + args: ["nibabel"] + pass_filenames: false diff -Nru nibabel-5.0.0/pyproject.toml nibabel-5.1.0/pyproject.toml --- nibabel-5.0.0/pyproject.toml 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/pyproject.toml 2023-04-03 14:48:22.000000000 +0000 @@ -10,7 +10,11 @@ readme = "README.rst" license = { text = "MIT License" } requires-python = ">=3.8" -dependencies = ["numpy >=1.19", "packaging >=17", "setuptools"] +dependencies = [ + "numpy >=1.19", + "packaging >=17", + "importlib_resources >=1.3; python_version < '3.9'", +] classifiers = [ "Development Status :: 5 - Production/Stable", "Environment :: Console", @@ -68,11 +72,23 @@ "pytest-httpserver", "pytest-xdist", ] -typing = ["mypy", "pytest", "types-setuptools", "types-Pillow", "pydicom"] +typing = [ + "mypy", + "importlib_resources", + "pydicom", + "pytest", + "pyzstd", + "types-setuptools", + "types-Pillow", +] zstd = ["pyzstd >= 0.14.3"] [tool.hatch.build.targets.sdist] -exclude = [".git_archival.txt"] +exclude = [ + ".git_archival.txt", + # Submodules with large files; if we don't want them in the repo... + "nibabel-data/", +] [tool.hatch.build.targets.wheel] packages = ["nibabel", "nisext"] diff -Nru nibabel-5.0.0/README.rst nibabel-5.1.0/README.rst --- nibabel-5.0.0/README.rst 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/README.rst 2023-04-03 14:48:22.000000000 +0000 @@ -1,94 +1,149 @@ .. -*- rest -*- .. vim:syntax=rst -.. image:: https://codecov.io/gh/nipy/nibabel/branch/master/graph/badge.svg - :target: https://codecov.io/gh/nipy/nibabel +.. image:: doc/pics/logo.png + :target: https://nipy.org/nibabel + :alt: NiBabel logo + +.. list-table:: + :widths: 20 80 + :header-rows: 0 + + * - Code + - + .. image:: https://img.shields.io/pypi/pyversions/nibabel.svg + :target: https://pypi.python.org/pypi/nibabel/ + :alt: PyPI - Python Version + .. image:: https://img.shields.io/badge/code%20style-blue-blue.svg + :target: https://blue.readthedocs.io/en/latest/ + :alt: code style: blue + .. image:: https://img.shields.io/badge/imports-isort-1674b1 + :target: https://pycqa.github.io/isort/ + :alt: imports: isort + .. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white + :target: https://github.com/pre-commit/pre-commit + :alt: pre-commit + + * - Tests + - + .. image:: https://github.com/nipy/NiBabel/actions/workflows/stable.yml/badge.svg + :target: https://github.com/nipy/NiBabel/actions/workflows/stable.yml + :alt: stable tests + .. image:: https://codecov.io/gh/nipy/NiBabel/branch/master/graph/badge.svg + :target: https://codecov.io/gh/nipy/NiBabel + :alt: codecov badge + + * - PyPI + - + .. 
image:: https://img.shields.io/pypi/v/nibabel.svg + :target: https://pypi.python.org/pypi/nibabel/ + :alt: PyPI version + .. image:: https://img.shields.io/pypi/dm/nibabel.svg + :target: https://pypistats.org/packages/nibabel + :alt: PyPI - Downloads + + * - Packages + - + .. image:: https://img.shields.io/conda/vn/conda-forge/nibabel + :target: https://anaconda.org/conda-forge/nibabel + :alt: Conda package + .. image:: https://repology.org/badge/version-for-repo/debian_unstable/nibabel.svg?header=Debian%20Unstable + :target: https://repology.org/project/nibabel/versions + :alt: Debian Unstable package + .. image:: https://repology.org/badge/version-for-repo/aur/python:nibabel.svg?header=Arch%20%28%41%55%52%29 + :target: https://repology.org/project/python:nibabel/versions + :alt: Arch (AUR) + .. image:: https://repology.org/badge/version-for-repo/gentoo_ovl_science/nibabel.svg?header=Gentoo%20%28%3A%3Ascience%29 + :target: https://repology.org/project/nibabel/versions + :alt: Gentoo (::science) + .. image:: https://repology.org/badge/version-for-repo/nix_unstable/python:nibabel.svg?header=nixpkgs%20unstable + :target: https://repology.org/project/python:nibabel/versions + :alt: nixpkgs unstable + + * - License & DOI + - + .. image:: https://img.shields.io/pypi/l/nibabel.svg + :target: https://github.com/nipy/nibabel/blob/master/COPYING + :alt: License + .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.591597.svg + :target: https://doi.org/10.5281/zenodo.591597 + :alt: Zenodo DOI + +.. Following contents should be copied from LONG_DESCRIPTION in nibabel/info.py + + +Read and write access to common neuroimaging file formats, including: +ANALYZE_ (plain, SPM99, SPM2 and later), GIFTI_, NIfTI1_, NIfTI2_, `CIFTI-2`_, +MINC1_, MINC2_, `AFNI BRIK/HEAD`_, ECAT_ and Philips PAR/REC. +In addition, NiBabel also supports FreeSurfer_'s MGH_, geometry, annotation and +morphometry files, and provides some limited support for DICOM_. + +NiBabel's API gives full or selective access to header information (metadata), +and image data is made available via NumPy arrays. For more information, see +NiBabel's `documentation site`_ and `API reference`_. -.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.591597.svg - :target: https://doi.org/10.5281/zenodo.591597 - -.. Following contents should be from LONG_DESCRIPTION in nibabel/info.py - - -======= -NiBabel -======= - -Read / write access to some common neuroimaging file formats - -This package provides read +/- write access to some common medical and -neuroimaging file formats, including: ANALYZE_ (plain, SPM99, SPM2 and later), -GIFTI_, NIfTI1_, NIfTI2_, `CIFTI-2`_, MINC1_, MINC2_, `AFNI BRIK/HEAD`_, MGH_ and -ECAT_ as well as Philips PAR/REC. We can read and write FreeSurfer_ geometry, -annotation and morphometry files. There is some very limited support for -DICOM_. NiBabel is the successor of PyNIfTI_. - -.. _ANALYZE: http://www.grahamwideman.com/gw/brain/analyze/formatdoc.htm +.. _API reference: https://nipy.org/nibabel/api.html .. _AFNI BRIK/HEAD: https://afni.nimh.nih.gov/pub/dist/src/README.attributes -.. _NIfTI1: http://nifti.nimh.nih.gov/nifti-1/ -.. _NIfTI2: http://nifti.nimh.nih.gov/nifti-2/ +.. _ANALYZE: http://www.grahamwideman.com/gw/brain/analyze/formatdoc.htm .. _CIFTI-2: https://www.nitrc.org/projects/cifti/ +.. _DICOM: http://medical.nema.org/ +.. _documentation site: http://nipy.org/nibabel +.. _ECAT: http://xmedcon.sourceforge.net/Docs/Ecat +.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu +.. 
_GIFTI: https://www.nitrc.org/projects/gifti +.. _MGH: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/MghFormat .. _MINC1: https://en.wikibooks.org/wiki/MINC/Reference/MINC1_File_Format_Reference .. _MINC2: https://en.wikibooks.org/wiki/MINC/Reference/MINC2.0_File_Format_Reference -.. _PyNIfTI: http://niftilib.sourceforge.net/pynifti/ -.. _GIFTI: https://www.nitrc.org/projects/gifti -.. _MGH: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/MghFormat -.. _ECAT: http://xmedcon.sourceforge.net/Docs/Ecat -.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu -.. _DICOM: http://medical.nema.org/ +.. _NIfTI1: http://nifti.nimh.nih.gov/nifti-1/ +.. _NIfTI2: http://nifti.nimh.nih.gov/nifti-2/ -The various image format classes give full or selective access to header -(meta) information and access to the image data is made available via NumPy -arrays. +Installation +============ -Website -======= +To install NiBabel's `current release`_ with ``pip``, run:: -Current documentation on nibabel can always be found at the `NIPY nibabel -website `_. + pip install nibabel -Mailing Lists -============= +To install the latest development version, run:: -Please send any questions or suggestions to the `neuroimaging mailing list -`_. + pip install git+https://github.com/nipy/nibabel -Code -==== +When working on NiBabel itself, it may be useful to install in "editable" mode:: -Install nibabel with:: + git clone https://github.com/nipy/nibabel.git + pip install -e ./nibabel - pip install nibabel +For more information on previous releases, see the `release archive`_ or +`development changelog`_. -You may also be interested in: +.. _current release: https://pypi.python.org/pypi/NiBabel +.. _release archive: https://github.com/nipy/NiBabel/releases +.. _development changelog: https://nipy.org/nibabel/changelog.html -* the `nibabel code repository`_ on Github; -* documentation_ for all releases and current development tree; -* download the `current release`_ from pypi; -* download `current development version`_ as a zip file; -* downloads of all `available releases`_. - -.. _nibabel code repository: https://github.com/nipy/nibabel -.. _Documentation: http://nipy.org/nibabel -.. _current release: https://pypi.python.org/pypi/nibabel -.. _current development version: https://github.com/nipy/nibabel/archive/master.zip -.. _available releases: https://github.com/nipy/nibabel/releases +Mailing List +============ + +Please send any questions or suggestions to the `neuroimaging mailing list +`_. License ======= -Nibabel is licensed under the terms of the MIT license. Some code included -with nibabel is licensed under the BSD license. Please see the COPYING file -in the nibabel distribution. - -Citing nibabel -============== - -Please see the `available releases`_ for the release of nibabel that you are -using. Recent releases have a Zenodo_ `Digital Object Identifier`_ badge at -the top of the release notes. Click on the badge for more information. +NiBabel is licensed under the terms of the `MIT license +`__. +Some code included with NiBabel is licensed under the `BSD license`_. +For more information, please see the COPYING_ file. + +.. _BSD license: https://opensource.org/licenses/BSD-3-Clause +.. _COPYING: https://github.com/nipy/nibabel/blob/master/COPYING + +Citation +======== + +NiBabel releases have a Zenodo_ `Digital Object Identifier`_ (DOI) badge at +the top of the release notes. Click on the badge for more information. -.. _zenodo: https://zenodo.org .. 
_Digital Object Identifier: https://en.wikipedia.org/wiki/Digital_object_identifier +.. _zenodo: https://zenodo.org diff -Nru nibabel-5.0.0/requirements.txt nibabel-5.1.0/requirements.txt --- nibabel-5.0.0/requirements.txt 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/requirements.txt 2023-04-03 14:48:22.000000000 +0000 @@ -1,4 +1,4 @@ # Auto-generated by tools/update_requirements.py numpy >=1.19 packaging >=17 -setuptools +importlib_resources >=1.3; python_version < '3.9' diff -Nru nibabel-5.0.0/.zenodo.json nibabel-5.1.0/.zenodo.json --- nibabel-5.0.0/.zenodo.json 2023-01-09 14:50:15.000000000 +0000 +++ nibabel-5.1.0/.zenodo.json 2023-04-03 14:48:22.000000000 +0000 @@ -74,6 +74,10 @@ "orcid": "0000-0001-8895-2740" }, { + "name": "Baratz, Zvi", + "orcid": "0000-0001-7159-1387" + }, + { "name": "Wang, Hao-Ting", "orcid": "0000-0003-4078-2038" }, @@ -126,10 +130,6 @@ "orcid": "0000-0002-7252-7771" }, { - "name": "Baratz, Zvi", - "orcid": "0000-0001-7159-1387" - }, - { "affiliation": "Montreal Neurological Institute and Hospital", "name": "Markello, Ross", "orcid": "0000-0003-1057-1336" @@ -230,6 +230,9 @@ "name": "Amirbekian, Bago" }, { + "name": "Christian, Horea" + }, + { "name": "Nimmo-Smith, Ian" }, { @@ -275,6 +278,9 @@ "name": "Fauber, Bennet" }, { + "name": "Perez, Fabian" + }, + { "name": "Roberts, Jacob" }, {