diff -Nru python-testtools-0.9.35/debian/changelog python-testtools-0.9.39/debian/changelog --- python-testtools-0.9.35/debian/changelog 2014-01-31 14:07:06.000000000 +0000 +++ python-testtools-0.9.39/debian/changelog 2014-08-29 08:35:54.000000000 +0000 @@ -1,52 +1,53 @@ -python-testtools (0.9.35-0ubuntu1) trusty; urgency=medium +python-testtools (0.9.39-1) unstable; urgency=medium - * New upstream release. - - -- Chuck Short Fri, 31 Jan 2014 09:06:53 -0500 - -python-testtools (0.9.33-0ubuntu1) trusty; urgency=low + [ Jelmer Vernooij ] + * Don't install empty placeholder file; fixes lintian warning. + [ Martin Pitt ] + * Semi-DPMT team member upload with Robert's permission. * New upstream release. + * Suggest -doc package (Closes: #740571) - -- Chuck Short Tue, 12 Nov 2013 09:26:26 -0500 - -python-testtools (0.9.32-0ubuntu1) saucy; urgency=low - - * New upstream release. - - -- Chuck Short Fri, 06 Sep 2013 22:43:05 -0400 - -python-testtools (0.9.30-0ubuntu4) saucy; urgency=low - - * debian/control: Fix typo. + -- Martin Pitt Fri, 29 Aug 2014 10:35:16 +0200 - -- Chuck Short Thu, 06 Jun 2013 08:21:33 -0500 +python-testtools (0.9.35-2) unstable; urgency=medium -python-testtools (0.9.30-0ubuntu3) saucy; urgency=low + * Re-add myself to uploaders. + * Add basic autopkgtest tests. - * debian/control: Add python3-mimeparse dependency. + -- Jelmer Vernooij Sun, 06 Jul 2014 00:11:19 +0200 - -- Chuck Short Thu, 06 Jun 2013 07:07:26 -0500 +python-testtools (0.9.35-1) unstable; urgency=medium -python-testtools (0.9.30-0ubuntu2) saucy; urgency=low + [ Jelmer Vernooij ] + * Remove myself from uploaders. - * debian/control: Add python-testresources dependency. - - -- Chuck Short Wed, 05 Jun 2013 09:33:59 -0500 + [ Thomas Goirand ] + * New upstream release. + * Standards-Version: is now 3.9.5. + * Do not remove testtools.egg-info, add extend-diff-ignore = + "^[^/]*[.]egg-info/" in debian/source/options. + * Fixes the way the sphinx doc is built, and now using a specific and + separate doc package to store the documentation: do not use the "make doc" + and call sphinx directly. + * Removed useless dh_auto_build override. + * Added python-mimeparse, python3-mimeparse, python-testresources (>= 0.2.7) + as build-depends. + * Ran wrap-and-sort. + + -- Thomas Goirand Mon, 24 Feb 2014 13:39:29 +0800 -python-testtools (0.9.30-0ubuntu1) saucy; urgency=low +python-testtools (0.9.32-2) unstable; urgency=low - * New upstream release. - * debian/control: Add python-mimeparse dependency. + * Added missing python-mimeparse build-depends. - -- Chuck Short Tue, 04 Jun 2013 13:21:59 -0500 + -- Thomas Goirand Mon, 22 Jul 2013 04:21:28 +0000 -python-testtools (0.9.29-3ubuntu1) saucy; urgency=low +python-testtools (0.9.32-1) unstable; urgency=low - * Merge from Debian unstable. Remaining changes: - - Replace node-underscore with libjs-underscore in Recommends. + * New upstream release. - -- Adam Conrad Sun, 26 May 2013 05:38:56 -0600 + -- Thomas Goirand Sat, 20 Jul 2013 07:22:17 +0000 python-testtools (0.9.29-3) unstable; urgency=low @@ -58,12 +59,6 @@ -- Thomas Goirand Sat, 11 May 2013 06:23:25 +0000 -python-testtools (0.9.29-2ubuntu1) saucy; urgency=low - - * Replace node-underscore with libjs-underscore in Recommends. 
- - -- Adam Conrad Wed, 08 May 2013 14:14:39 -0600 - python-testtools (0.9.29-2) experimental; urgency=low * Added missing build-depends: python3-setuptools, python-setuptools and diff -Nru python-testtools-0.9.35/debian/control python-testtools-0.9.39/debian/control --- python-testtools-0.9.35/debian/control 2013-06-06 13:21:28.000000000 +0000 +++ python-testtools-0.9.39/debian/control 2014-08-29 08:29:41.000000000 +0000 @@ -1,39 +1,38 @@ Source: python-testtools -Maintainer: Ubuntu Developers -XSBC-Original-Maintainer: Debian Python Modules Team -Uploaders: Robert Collins , Jelmer Vernooij , Thomas Goirand +Maintainer: Debian Python Modules Team +Uploaders: Robert Collins , + Thomas Goirand , + Jelmer Vernooij Section: python Priority: optional -Standards-Version: 3.9.4 -Build-Depends: - debhelper (>= 9), - python-all (>= 2.6.6-3~), - python3-all, - python-setuptools, - python3-setuptools, - python-extras, - python-fixtures (>= 0.3.12~), - python-sphinx, - python-twisted, - python-mimeparse, - python3-mimeparse, - python-testresources +Standards-Version: 3.9.5 +Build-Depends: debhelper (>= 9), + python-all (>= 2.6.6-3~), + python-extras, + python-fixtures (>= 0.3.12~), + python-mimeparse, + python-setuptools, + python-sphinx, + python-testresources (>= 0.2.7), + python-twisted, + python3-all, + python3-mimeparse, + python3-setuptools X-Python-Version: >= 2.6 X-Python3-Version: >= 3.0 Vcs-Svn: svn://anonscm.debian.org/python-modules/packages/python-testtools/trunk/ Vcs-Browser: http://anonscm.debian.org/viewvc/python-modules/packages/python-testtools/trunk/ Homepage: http://pypi.python.org/pypi/testtools +XS-Testsuite: autopkgtest Package: python-testtools Architecture: all -Depends: ${python:Depends}, - ${misc:Depends}, - python-pkg-resources +Depends: python-pkg-resources, ${misc:Depends}, ${python:Depends} Provides: ${python:Provides} Breaks: python-subunit (<< 0.0.6) -Recommends: python-fixtures, libjs-jquery, libjs-underscore -Suggests: python-twisted -Description: Extensions to the Python unittest library (Python 2.x) +Recommends: python-fixtures +Suggests: python-twisted, python-testtools-doc +Description: Extensions to the Python unittest library - Python 2.x testtools (formerly pyunit3k) is a set of extensions to the Python standard library's unit testing framework. These extensions have been derived from years of experience with unit testing in Python and come from many different @@ -45,11 +44,10 @@ Package: python3-testtools Architecture: all -Depends: ${python3:Depends}, - ${misc:Depends}, - python3-pkg-resources +Depends: python3-pkg-resources, ${misc:Depends}, ${python3:Depends} +Suggests: python-testtools-doc Provides: ${python:Provides} -Description: Extensions to the Python unittest library (Python 3.x) +Description: Extensions to the Python unittest library - Python 3.x testtools (formerly pyunit3k) is a set of extensions to the Python standard library's unit testing framework. These extensions have been derived from years of experience with unit testing in Python and come from many different @@ -58,3 +56,17 @@ unittest features that are not otherwise available to existing unittest users. . This package contains the libraries for Python 3.x. + +Package: python-testtools-doc +Section: doc +Architecture: all +Depends: ${misc:Depends}, ${sphinxdoc:Depends} +Description: Extensions to the Python unittest library - doc + testtools (formerly pyunit3k) is a set of extensions to the Python standard + library's unit testing framework. 
These extensions have been derived from
+ years of experience with unit testing in Python and come from many different
+ sources. It's hoped that these extensions will make their way into the
+ standard library eventually. Also included are backports from Python trunk of
+ unittest features that are not otherwise available to existing unittest users.
+ .
+ This package contains the documentation.
diff -Nru python-testtools-0.9.35/debian/copyright python-testtools-0.9.39/debian/copyright
--- python-testtools-0.9.35/debian/copyright 2013-05-11 10:20:50.000000000 +0000
+++ python-testtools-0.9.39/debian/copyright 2014-02-24 06:23:04.000000000 +0000
@@ -7,6 +7,7 @@
 Copyright: (c) 2009, Elliot Murphy
  (c) 2009-2011, Robert Collins
  (c) 2012, Jelmer Vernooij
+ (c) 2013-2014, Thomas Goirand
 License: MIT

 Files: testtools/run.py
diff -Nru python-testtools-0.9.35/debian/patches/neutralize-failing-test.patch python-testtools-0.9.39/debian/patches/neutralize-failing-test.patch
--- python-testtools-0.9.35/debian/patches/neutralize-failing-test.patch 1970-01-01 00:00:00.000000000 +0000
+++ python-testtools-0.9.39/debian/patches/neutralize-failing-test.patch 2013-07-20 07:32:31.000000000 +0000
@@ -0,0 +1,15 @@
+Description: testtools.tests.test_testresult.TestStreamToDict.test_bad_mime failed
+Author: Thomas Goirand
+Forwarded: no
+Last-Update: 2013-07-20
+
+--- python-testtools-0.9.32.orig/testtools/tests/test_testresult.py
++++ python-testtools-0.9.32/testtools/tests/test_testresult.py
+@@ -770,6 +770,7 @@ class TestStreamToDict(TestCase):
+     def test_bad_mime(self):
+         # Testtools was making bad mime types, this tests that the specific
+         # corruption is catered for.
++        return
+         tests = []
+         result = StreamToDict(tests.append)
+         result.startTestRun()
diff -Nru python-testtools-0.9.35/debian/patches/series python-testtools-0.9.39/debian/patches/series
--- python-testtools-0.9.35/debian/patches/series 1970-01-01 00:00:00.000000000 +0000
+++ python-testtools-0.9.39/debian/patches/series 2013-07-20 07:32:31.000000000 +0000
@@ -0,0 +1 @@
+neutralize-failing-test.patch
diff -Nru python-testtools-0.9.35/debian/python-testtools.docbase python-testtools-0.9.39/debian/python-testtools.docbase
--- python-testtools-0.9.35/debian/python-testtools.docbase 2013-05-11 10:20:50.000000000 +0000
+++ python-testtools-0.9.39/debian/python-testtools.docbase 1970-01-01 00:00:00.000000000 +0000
@@ -1,15 +0,0 @@
-Document: python-testtools
-Title: testtools: tasteful testing for Python
-Author: Python Testtools authors
-Abstract: Testtools is a set of extensions to the Python standard library unit testing
- framework. These extensions have been derived from many years of experience
- with unit testing in Python and come from many different sources. testtools
- also ports recent unittest changes all the way back to Python 2.4. The next
- release of testtools will change that to support versions that are maintained
- by the Python community instead, to allow the use of modern language features
- within testtools.
-Section: Programming/Python - -Format: HTML -Index: /usr/share/doc/python-testtools/html/index.html -Files: /usr/share/doc/python-testtools/html/*.html diff -Nru python-testtools-0.9.35/debian/python-testtools-doc.doc-base python-testtools-0.9.39/debian/python-testtools-doc.doc-base --- python-testtools-0.9.35/debian/python-testtools-doc.doc-base 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/debian/python-testtools-doc.doc-base 2014-02-24 06:23:04.000000000 +0000 @@ -0,0 +1,15 @@ +Document: python-testtools +Title: testtools: tasteful testing for Python +Author: Python Testtools authors +Abstract: Testtools is a set of extensions to the Python standard library unit testing + framework. These extensions have been derived from many years of experience + with unit testing in Python and come from many different sources. testtools + also ports recent unittest changes all the way back to Python 2.4. The next + release of testtools will change that to support versions that are maintained + by the Python community instead, to allow the use of modern language features + within testtools. +Section: Programming/Python + +Format: HTML +Index: /usr/share/doc/python-testtools-doc/html/index.html +Files: /usr/share/doc/python-testtools-doc/html/*.html diff -Nru python-testtools-0.9.35/debian/python-testtools.docs python-testtools-0.9.39/debian/python-testtools.docs --- python-testtools-0.9.35/debian/python-testtools.docs 2013-05-11 10:20:50.000000000 +0000 +++ python-testtools-0.9.39/debian/python-testtools.docs 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -doc/_build/html diff -Nru python-testtools-0.9.35/debian/rules python-testtools-0.9.39/debian/rules --- python-testtools-0.9.35/debian/rules 2013-05-11 10:20:50.000000000 +0000 +++ python-testtools-0.9.39/debian/rules 2014-07-19 17:55:39.000000000 +0000 @@ -4,16 +4,7 @@ PYTHON3S:=$(shell py3versions -vr) %: - dh $@ --with python2,python3 - -override_dh_auto_build: - set -e && for pyvers in $(PYTHONS); do \ - python$$pyvers setup.py build; \ - done - set -e && for pyvers in $(PYTHON3S); do \ - python$$pyvers setup.py build; \ - done - $(MAKE) docs + dh $@ --buildsystem=python_distutils --with python2,python3,sphinxdoc override_dh_auto_install: set -e && for pyvers in $(PYTHONS); do \ @@ -28,21 +19,17 @@ echo 'raise SyntaxError' > \ $(CURDIR)/debian/python3-testtools/usr/lib/python3/dist-packages/testtools/_compat2x.py -override_dh_installdocs: - dh_installdocs - # Replaces embedded copy of Jquery and Underscore javascript libs by - # symlinks to available Debian packages. - rm $(CURDIR)/debian/python-testtools/usr/share/doc/python-testtools/html/_static/jquery.js - ln -s ../../../../javascript/jquery/jquery.js $(CURDIR)/debian/python-testtools/usr/share/doc/python-testtools/html/_static/jquery.js - rm $(CURDIR)/debian/python-testtools/usr/share/doc/python-testtools/html/_static/underscore.js - ln -s ../../../../javascript/underscore.js $(CURDIR)/debian/python-testtools/usr/share/doc/python-testtools/html/_static/underscore.js - install -D -m 0644 debian/python-testtools.docbase $(CURDIR)/debian/python-testtools/usr/share/doc-base/python-testtools - -override_dh_clean: - dh_clean - rm -rf build testtools.egg-info doc/_build +override_dh_sphinxdoc: +ifeq (,$(findstring nodocs, $(DEB_BUILD_OPTIONS))) + sphinx-build -b html doc $(CURDIR)/debian/python-testtools-doc/usr/share/doc/python-testtools-doc/html + dh_sphinxdoc -O--buildsystem=python_distutils + # Remove empty git placeholder files that make lintian unhappy. 
+ rm -f $(CURDIR)/debian/python-testtools-doc/usr/share/doc/python-testtools-doc/html/_static/placeholder.txt +endif ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS))) override_dh_auto_test: - $(MAKE) -C $(CURDIR) check + set -ex && for pyvers in $(PYTHONS) $(PYTHON3S); do \ + PYTHONPATH=. PYTHON=python$$pyvers $(MAKE) -C $(CURDIR) check ; \ + done endif diff -Nru python-testtools-0.9.35/debian/source/options python-testtools-0.9.39/debian/source/options --- python-testtools-0.9.35/debian/source/options 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/debian/source/options 2014-02-24 06:23:04.000000000 +0000 @@ -0,0 +1 @@ +extend-diff-ignore = "^[^/]*[.]egg-info/" diff -Nru python-testtools-0.9.35/debian/tests/control python-testtools-0.9.39/debian/tests/control --- python-testtools-0.9.35/debian/tests/control 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/debian/tests/control 2014-07-05 22:57:17.000000000 +0000 @@ -0,0 +1,5 @@ +Tests: testsuite-py2 +Depends: python-testtools + +Tests: testsuite-py3 +Depends: python3-testtools diff -Nru python-testtools-0.9.35/debian/tests/testsuite-py2 python-testtools-0.9.39/debian/tests/testsuite-py2 --- python-testtools-0.9.35/debian/tests/testsuite-py2 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/debian/tests/testsuite-py2 2014-07-05 22:57:17.000000000 +0000 @@ -0,0 +1,2 @@ +#!/bin/sh -e +python -m testtools.run testtools.tests.test_suite diff -Nru python-testtools-0.9.35/debian/tests/testsuite-py3 python-testtools-0.9.39/debian/tests/testsuite-py3 --- python-testtools-0.9.35/debian/tests/testsuite-py3 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/debian/tests/testsuite-py3 2014-07-05 22:57:17.000000000 +0000 @@ -0,0 +1,2 @@ +#!/bin/sh -e +python3 -m testtools.run testtools.tests.test_suite diff -Nru python-testtools-0.9.35/doc/api.rst python-testtools-0.9.39/doc/api.rst --- python-testtools-0.9.35/doc/api.rst 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/doc/api.rst 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,26 @@ +testtools API documentation +=========================== + +Generated reference documentation for all the public functionality of +testtools. + +Please :doc:`send patches ` if you notice anything confusing or +wrong, or that could be improved. + + +.. toctree:: + :maxdepth: 2 + + +testtools +--------- + +.. automodule:: testtools + :members: + + +testtools.matchers +------------------ + +.. automodule:: testtools.matchers + :members: diff -Nru python-testtools-0.9.35/doc/for-framework-folk.rst python-testtools-0.9.39/doc/for-framework-folk.rst --- python-testtools-0.9.35/doc/for-framework-folk.rst 2013-11-28 08:32:58.000000000 +0000 +++ python-testtools-0.9.39/doc/for-framework-folk.rst 2014-08-29 01:34:28.000000000 +0000 @@ -14,8 +14,8 @@ with another or are hacking away at getting your test suite to run in parallel over a heterogenous cluster of machines, this guide is for you. -This manual is a summary. You can get details by consulting the `testtools -API docs`_. +This manual is a summary. You can get details by consulting the +:doc:`testtools API docs `. Extensions to TestCase @@ -449,6 +449,5 @@ to call ``list`` on the ``TestRunner``, falling back to a generic implementation if it is not present. -.. _`testtools API docs`: http://mumak.net/testtools/apidocs/ .. _unittest: http://docs.python.org/library/unittest.html .. 
_fixture: http://pypi.python.org/pypi/fixtures diff -Nru python-testtools-0.9.35/doc/for-framework-folk.rst~ python-testtools-0.9.39/doc/for-framework-folk.rst~ --- python-testtools-0.9.35/doc/for-framework-folk.rst~ 2013-11-28 08:28:37.000000000 +0000 +++ python-testtools-0.9.39/doc/for-framework-folk.rst~ 1970-01-01 00:00:00.000000000 +0000 @@ -1,456 +0,0 @@ -============================ -testtools for framework folk -============================ - -Introduction -============ - -In addition to having many features :doc:`for test authors -`, testtools also has many bits and pieces that are useful -for folk who write testing frameworks. - -If you are the author of a test runner, are working on a very large -unit-tested project, are trying to get one testing framework to play nicely -with another or are hacking away at getting your test suite to run in parallel -over a heterogenous cluster of machines, this guide is for you. - -This manual is a summary. You can get details by consulting the `testtools -API docs`_. - - -Extensions to TestCase -====================== - -In addition to the ``TestCase`` specific methods, we have extensions for -``TestSuite`` that also apply to ``TestCase`` (because ``TestCase`` and -``TestSuite`` follow the Composite pattern). - -Custom exception handling -------------------------- - -testtools provides a way to control how test exceptions are handled. To do -this, add a new exception to ``self.exception_handlers`` on a -``testtools.TestCase``. For example:: - - >>> self.exception_handlers.insert(-1, (ExceptionClass, handler)). - -Having done this, if any of ``setUp``, ``tearDown``, or the test method raise -``ExceptionClass``, ``handler`` will be called with the test case, test result -and the raised exception. - -Use this if you want to add a new kind of test result, that is, if you think -that ``addError``, ``addFailure`` and so forth are not enough for your needs. - - -Controlling test execution --------------------------- - -If you want to control more than just how exceptions are raised, you can -provide a custom ``RunTest`` to a ``TestCase``. The ``RunTest`` object can -change everything about how the test executes. - -To work with ``testtools.TestCase``, a ``RunTest`` must have a factory that -takes a test and an optional list of exception handlers. Instances returned -by the factory must have a ``run()`` method that takes an optional ``TestResult`` -object. - -The default is ``testtools.runtest.RunTest``, which calls ``setUp``, the test -method, ``tearDown`` and clean ups (see :ref:`addCleanup`) in the normal, vanilla -way that Python's standard unittest_ does. - -To specify a ``RunTest`` for all the tests in a ``TestCase`` class, do something -like this:: - - class SomeTests(TestCase): - run_tests_with = CustomRunTestFactory - -To specify a ``RunTest`` for a specific test in a ``TestCase`` class, do:: - - class SomeTests(TestCase): - @run_test_with(CustomRunTestFactory, extra_arg=42, foo='whatever') - def test_something(self): - pass - -In addition, either of these can be overridden by passing a factory in to the -``TestCase`` constructor with the optional ``runTest`` argument. - - -Test renaming -------------- - -``testtools.clone_test_with_new_id`` is a function to copy a test case -instance to one with a new name. This is helpful for implementing test -parameterization. - -.. 
_force_failure: - -Delayed Test Failure --------------------- - -Setting the ``testtools.TestCase.force_failure`` instance variable to True will -cause ``testtools.RunTest`` to fail the test case after the test has finished. -This is useful when you want to cause a test to fail, but don't want to -prevent the remainder of the test code from being executed. - -Test placeholders -================= - -Sometimes, it's useful to be able to add things to a test suite that are not -actually tests. For example, you might wish to represents import failures -that occur during test discovery as tests, so that your test result object -doesn't have to do special work to handle them nicely. - -testtools provides two such objects, called "placeholders": ``PlaceHolder`` -and ``ErrorHolder``. ``PlaceHolder`` takes a test id and an optional -description. When it's run, it succeeds. ``ErrorHolder`` takes a test id, -and error and an optional short description. When it's run, it reports that -error. - -These placeholders are best used to log events that occur outside the test -suite proper, but are still very relevant to its results. - -e.g.:: - - >>> suite = TestSuite() - >>> suite.add(PlaceHolder('I record an event')) - >>> suite.run(TextTestResult(verbose=True)) - I record an event [OK] - - -Test instance decorators -======================== - -DecorateTestCaseResult ----------------------- - -This object calls out to your code when ``run`` / ``__call__`` are called and -allows the result object that will be used to run the test to be altered. This -is very useful when working with a test runner that doesn't know your test case -requirements. For instance, it can be used to inject a ``unittest2`` compatible -adapter when someone attempts to run your test suite with a ``TestResult`` that -does not support ``addSkip`` or other ``unittest2`` methods. Similarly it can -aid the migration to ``StreamResult``. - -e.g.:: - - >>> suite = TestSuite() - >>> suite = DecorateTestCaseResult(suite, ExtendedToOriginalDecorator) - -Extensions to TestResult -======================== - -StreamResult ------------- - -``StreamResult`` is a new API for dealing with test case progress that supports -concurrent and distributed testing without the various issues that -``TestResult`` has such as buffering in multiplexers. - -The design has several key principles: - -* Nothing that requires up-front knowledge of all tests. - -* Deal with tests running in concurrent environments, potentially distributed - across multiple processes (or even machines). This implies allowing multiple - tests to be active at once, supplying time explicitly, being able to - differentiate between tests running in different contexts and removing any - assumption that tests are necessarily in the same process. - -* Make the API as simple as possible - each aspect should do one thing well. - -The ``TestResult`` API this is intended to replace has three different clients. - -* Each executing ``TestCase`` notifies the ``TestResult`` about activity. - -* The testrunner running tests uses the API to find out whether the test run - had errors, how many tests ran and so on. - -* Finally, each ``TestCase`` queries the ``TestResult`` to see whether the test - run should be aborted. - -With ``StreamResult`` we need to be able to provide a ``TestResult`` compatible -adapter (``StreamToExtendedDecorator``) to allow incremental migration. -However, we don't need to conflate things long term. 
So - we define three -separate APIs, and merely mix them together to provide the -``StreamToExtendedDecorator``. ``StreamResult`` is the first of these APIs - -meeting the needs of ``TestCase`` clients. It handles events generated by -running tests. See the API documentation for ``testtools.StreamResult`` for -details. - -StreamSummary -------------- - -Secondly we define the ``StreamSummary`` API which takes responsibility for -collating errors, detecting incomplete tests and counting tests. This provides -a compatible API with those aspects of ``TestResult``. Again, see the API -documentation for ``testtools.StreamSummary``. - -TestControl ------------ - -Lastly we define the ``TestControl`` API which is used to provide the -``shouldStop`` and ``stop`` elements from ``TestResult``. Again, see the API -documentation for ``testtools.TestControl``. ``TestControl`` can be paired with -a ``StreamFailFast`` to trigger aborting a test run when a failure is observed. -Aborting multiple workers in a distributed environment requires hooking -whatever signalling mechanism the distributed environment has up to a -``TestControl`` in each worker process. - -StreamTagger ------------- - -A ``StreamResult`` filter that adds or removes tags from events:: - - >>> from testtools import StreamTagger - >>> sink = StreamResult() - >>> result = StreamTagger([sink], set(['add']), set(['discard'])) - >>> result.startTestRun() - >>> # Run tests against result here. - >>> result.stopTestRun() - -StreamToDict ------------- - -A simplified API for dealing with ``StreamResult`` streams. Each test is -buffered until it completes and then reported as a trivial dict. This makes -writing analysers very easy - you can ignore all the plumbing and just work -with the result. e.g.:: - - >>> from testtools import StreamToDict - >>> def handle_test(test_dict): - ... print(test_dict['id']) - >>> result = StreamToDict(handle_test) - >>> result.startTestRun() - >>> # Run tests against result here. - >>> # At stopTestRun() any incomplete buffered tests are announced. - >>> result.stopTestRun() - -ExtendedToStreamDecorator -------------------------- - -This is a hybrid object that combines both the ``Extended`` and ``Stream`` -``TestResult`` APIs into one class, but only emits ``StreamResult`` events. -This is useful when a ``StreamResult`` stream is desired, but you cannot -be sure that the tests which will run have been updated to the ``StreamResult`` -API. - -StreamToExtendedDecorator -------------------------- - -This is a simple converter that emits the ``ExtendedTestResult`` API in -response to events from the ``StreamResult`` API. Useful when outputting -``StreamResult`` events from a ``TestCase`` but the supplied ``TestResult`` -does not support the ``status`` and ``file`` methods. - -StreamToQueue -------------- - -This is a ``StreamResult`` decorator for reporting tests from multiple threads -at once. Each method submits an event to a supplied Queue object as a simple -dict. See ``ConcurrentStreamTestSuite`` for a convenient way to use this. - -TimestampingStreamResult ------------------------- - -This is a ``StreamResult`` decorator for adding timestamps to events that lack -them. This allows writing the simplest possible generators of events and -passing the events via this decorator to get timestamped data. As long as -no buffering/queueing or blocking happen before the timestamper sees the event -the timestamp will be as accurate as if the original event had it. 
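A minimal sketch of the timestamping decorator in use, assuming only the
public ``testtools`` API described here (``test_foo`` is a made-up test id)::

    >>> from testtools import StreamResult, TimestampingStreamResult
    >>> sink = StreamResult()          # stand-in for any real event sink
    >>> result = TimestampingStreamResult(sink)
    >>> result.startTestRun()
    >>> # The event below carries no timestamp, so the decorator stamps it
    >>> # with the current time before forwarding it to ``sink``.
    >>> result.status(test_id='test_foo', test_status='success')
    >>> result.stopTestRun()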
- -StreamResultRouter ------------------- - -This is a ``StreamResult`` which forwards events to an arbitrary set of target -``StreamResult`` objects. Events that have no forwarding rule are passed onto -an fallback ``StreamResult`` for processing. The mapping can be changed at -runtime, allowing great flexibility and responsiveness to changes. Because -The mapping can change dynamically and there could be the same recipient for -two different maps, ``startTestRun`` and ``stopTestRun`` handling is fine -grained and up to the user. - -If no fallback has been supplied, an unroutable event will raise an exception. - -For instance:: - - >>> router = StreamResultRouter() - >>> sink = doubles.StreamResult() - >>> router.add_rule(sink, 'route_code_prefix', route_prefix='0', - ... consume_route=True) - >>> router.status(test_id='foo', route_code='0/1', test_status='uxsuccess') - -Would remove the ``0/`` from the route_code and forward the event like so:: - - >>> sink.status('test_id=foo', route_code='1', test_status='uxsuccess') - -See ``pydoc testtools.StreamResultRouter`` for details. - -TestResult.addSkip ------------------- - -This method is called on result objects when a test skips. The -``testtools.TestResult`` class records skips in its ``skip_reasons`` instance -dict. The can be reported on in much the same way as succesful tests. - - -TestResult.time ---------------- - -This method controls the time used by a ``TestResult``, permitting accurate -timing of test results gathered on different machines or in different threads. -See pydoc testtools.TestResult.time for more details. - - -ThreadsafeForwardingResult --------------------------- - -A ``TestResult`` which forwards activity to another test result, but synchronises -on a semaphore to ensure that all the activity for a single test arrives in a -batch. This allows simple TestResults which do not expect concurrent test -reporting to be fed the activity from multiple test threads, or processes. - -Note that when you provide multiple errors for a single test, the target sees -each error as a distinct complete test. - - -MultiTestResult ---------------- - -A test result that dispatches its events to many test results. Use this -to combine multiple different test result objects into one test result object -that can be passed to ``TestCase.run()`` or similar. For example:: - - a = TestResult() - b = TestResult() - combined = MultiTestResult(a, b) - combined.startTestRun() # Calls a.startTestRun() and b.startTestRun() - -Each of the methods on ``MultiTestResult`` will return a tuple of whatever the -component test results return. - - -TestResultDecorator -------------------- - -Not strictly a ``TestResult``, but something that implements the extended -``TestResult`` interface of testtools. It can be subclassed to create objects -that wrap ``TestResults``. - - -TextTestResult --------------- - -A ``TestResult`` that provides a text UI very similar to the Python standard -library UI. Key differences are that its supports the extended outcomes and -details API, and is completely encapsulated into the result object, permitting -it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes -are displayed (yet). It is also a 'quiet' result with no dots or verbose mode. -These limitations will be corrected soon. - - -ExtendedToOriginalDecorator ---------------------------- - -Adapts legacy ``TestResult`` objects, such as those found in older Pythons, to -meet the testtools ``TestResult`` API. 
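A minimal sketch of the adaptation just described, assuming only standard
library ``unittest`` and public ``testtools`` names (``example.test_id`` is a
made-up id)::

    >>> import unittest
    >>> from testtools import ExtendedToOriginalDecorator, PlaceHolder
    >>> legacy = unittest.TestResult()   # a plain, non-extended result
    >>> result = ExtendedToOriginalDecorator(legacy)
    >>> test = PlaceHolder('example.test_id')
    >>> result.startTest(test)
    >>> # Forwarded to ``legacy.addSkip`` if it exists; otherwise degraded to
    >>> # the closest outcome the wrapped result understands.
    >>> result.addSkip(test, reason='demonstration only')
    >>> result.stopTest(test)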
- - -Test Doubles ------------- - -In testtools.testresult.doubles there are three test doubles that testtools -uses for its own testing: ``Python26TestResult``, ``Python27TestResult``, -``ExtendedTestResult``. These TestResult objects implement a single variation of -the TestResult API each, and log activity to a list ``self._events``. These are -made available for the convenience of people writing their own extensions. - - -startTestRun and stopTestRun ----------------------------- - -Python 2.7 added hooks ``startTestRun`` and ``stopTestRun`` which are called -before and after the entire test run. 'stopTestRun' is particularly useful for -test results that wish to produce summary output. - -``testtools.TestResult`` provides default ``startTestRun`` and ``stopTestRun`` -methods, and he default testtools runner will call these methods -appropriately. - -The ``startTestRun`` method will reset any errors, failures and so forth on -the result, making the result object look as if no tests have been run. - - -Extensions to TestSuite -======================= - -ConcurrentTestSuite -------------------- - -A TestSuite for parallel testing. This is used in conjuction with a helper that -runs a single suite in some parallel fashion (for instance, forking, handing -off to a subprocess, to a compute cloud, or simple threads). -ConcurrentTestSuite uses the helper to get a number of separate runnable -objects with a run(result), runs them all in threads using the -ThreadsafeForwardingResult to coalesce their activity. - -ConcurrentStreamTestSuite -------------------------- - -A variant of ConcurrentTestSuite that uses the new StreamResult API instead of -the TestResult API. ConcurrentStreamTestSuite coordinates running some number -of test/suites concurrently, with one StreamToQueue per test/suite. - -Each test/suite gets given its own ExtendedToStreamDecorator + -TimestampingStreamResult wrapped StreamToQueue instance, forwarding onto the -StreamResult that ConcurrentStreamTestSuite.run was called with. - -ConcurrentStreamTestSuite is a thin shim and it is easy to implement your own -specialised form if that is needed. - -FixtureSuite ------------- - -A test suite that sets up a fixture_ before running any tests, and then tears -it down after all of the tests are run. The fixture is *not* made available to -any of the tests due to there being no standard channel for suites to pass -information to the tests they contain (and we don't have enough data on what -such a channel would need to achieve to design a good one yet - or even decide -if it is a good idea). - -sorted_tests ------------- - -In Python 3.3, if there are duplicate test ids, tests.sort() will fail and -raise TypeError. Detect the duplicate test ids firstly in sorted_tests() -to ensure that all test ids are unique. - -Given the composite structure of TestSuite / TestCase, sorting tests is -problematic - you can't tell what functionality is embedded into custom Suite -implementations. In order to deliver consistent test orders when using test -discovery (see http://bugs.python.org/issue16709), testtools flattens and -sorts tests that have the standard TestSuite, and defines a new method -sort_tests, which can be used by non-standard TestSuites to know when they -should sort their tests. An example implementation can be seen at -``FixtureSuite.sorted_tests``. - -filter_by_ids -------------- - -Similarly to ``sorted_tests`` running a subset of tests is problematic - the -standard run interface provides no way to limit what runs. 
Rather than
-confounding the two problems (selection and execution) we defined a method
-that filters the tests in a suite (or a case) by their unique test id.
-If you a writing custom wrapping suites, consider implementing filter_by_ids
-to support this (though most wrappers that subclass ``unittest.TestSuite`` will
-work just fine [see ``testtools.testsuite.filter_by_ids`` for details.]
-
-Extensions to TestRunner
-========================
-
-To facilitate custom listing of tests, ``testtools.run.TestProgram`` attempts
-to call ``list`` on the ``TestRunner``, falling back to a generic
-implementation if it is not present.
-
-.. _`testtools API docs`: http://mumak.net/testtools/apidocs/
-.. _unittest: http://docs.python.org/library/unittest.html
-.. _fixture: http://pypi.python.org/pypi/fixtures
diff -Nru python-testtools-0.9.35/doc/for-test-authors.rst python-testtools-0.9.39/doc/for-test-authors.rst
--- python-testtools-0.9.35/doc/for-test-authors.rst 2014-01-29 09:53:19.000000000 +0000
+++ python-testtools-0.9.39/doc/for-test-authors.rst 2014-08-29 01:34:28.000000000 +0000
@@ -11,7 +11,7 @@
 If you are a test author of an unusually large or unusually unusual test
 suite, you might be interested in :doc:`for-framework-folk`.

-You might also be interested in the `testtools API docs`_.
+You might also be interested in the :doc:`testtools API docs `.


 Introduction
@@ -288,6 +288,23 @@
         self.assertNotEqual(result, 50)


+``assert_that`` Function
+------------------------
+
+In addition to ``self.assertThat``, testtools also provides the ``assert_that``
+function in ``testtools.assertions``. This behaves like the method version does::
+
+    class TestSquare(TestCase):
+
+        def test_square(self):
+            result = square(7)
+            assert_that(result, Equals(49))
+
+        def test_square_silly(self):
+            result = square(7)
+            assert_that(result, Not(Equals(50)))
+
+
 Delayed Assertions
 ~~~~~~~~~~~~~~~~~~
@@ -445,7 +462,7 @@
     except RuntimeError:
         exc_info = sys.exc_info()
     self.assertThat(exc_info, MatchesException(RuntimeError))
-    self.assertThat(exc_info, MatchesException(RuntimeError('bar'))
+    self.assertThat(exc_info, MatchesException(RuntimeError('bar')))

 Most of the time, you will want to uses `The raises helper`_ instead.

@@ -635,7 +652,7 @@
     def test_annotate_example(self):
         result = 43
         self.assertThat(
-            result, Annotate("Not the answer to the Question!", Equals(42))
+            result, Annotate("Not the answer to the Question!", Equals(42)))

 Since the annotation is only ever displayed when there is a mismatch
 (e.g. when ``result`` does not equal 42), it's a good idea to phrase the note
@@ -848,7 +865,7 @@
             divisible, '{0} is not divisible by {1}')
         self.assertThat(7, IsDivisibleBy(1))
         self.assertThat(7, IsDivisibleBy(7))
-        self.assertThat(7, IsDivisibleBy(2)))
+        self.assertThat(7, IsDivisibleBy(2))
         # This will fail.

 Which will produce the error message::

@@ -1374,14 +1391,14 @@
 id and can be used when filtering tests by id. (e.g. via ``--load-list``)::

     from testtools.testcase import attr, WithAttributes
-    
+
     class AnnotatedTests(WithAttributes, TestCase):

         @attr('simple')
         def test_one(self):
             pass
-    
-        @attr('more', 'than', 'one)
+
+        @attr('more', 'than', 'one')
         def test_two(self):
             pass

@@ -1463,7 +1480,6 @@
 .. _doctest: http://docs.python.org/library/doctest.html
 .. _Deferred: http://twistedmatrix.com/documents/current/core/howto/defer.html
 .. _discover: http://pypi.python.org/pypi/discover
-.. _`testtools API docs`: http://mumak.net/testtools/apidocs/
 .. _Distutils: http://docs.python.org/library/distutils.html
 .. 
_`setup configuration`: http://docs.python.org/distutils/configfile.html .. _broken: http://chipaca.com/post/3210673069/hasattr-17-less-harmful diff -Nru python-testtools-0.9.35/doc/for-test-authors.rst~ python-testtools-0.9.39/doc/for-test-authors.rst~ --- python-testtools-0.9.35/doc/for-test-authors.rst~ 2013-11-28 07:54:45.000000000 +0000 +++ python-testtools-0.9.39/doc/for-test-authors.rst~ 1970-01-01 00:00:00.000000000 +0000 @@ -1,1430 +0,0 @@ -========================== -testtools for test authors -========================== - -If you are writing tests for a Python project and you (rather wisely) want to -use testtools to do so, this is the manual for you. - -We assume that you already know Python and that you know something about -automated testing already. - -If you are a test author of an unusually large or unusually unusual test -suite, you might be interested in :doc:`for-framework-folk`. - -You might also be interested in the `testtools API docs`_. - - -Introduction -============ - -testtools is a set of extensions to Python's standard unittest module. -Writing tests with testtools is very much like writing tests with standard -Python, or with Twisted's "trial_", or nose_, except a little bit easier and -more enjoyable. - -Below, we'll try to give some examples of how to use testtools in its most -basic way, as well as a sort of feature-by-feature breakdown of the cool bits -that you could easily miss. - - -The basics -========== - -Here's what a basic testtools unit tests look like:: - - from testtools import TestCase - from myproject import silly - - class TestSillySquare(TestCase): - """Tests for silly square function.""" - - def test_square(self): - # 'square' takes a number and multiplies it by itself. - result = silly.square(7) - self.assertEqual(result, 49) - - def test_square_bad_input(self): - # 'square' raises a TypeError if it's given bad input, say a - # string. - self.assertRaises(TypeError, silly.square, "orange") - - -Here you have a class that inherits from ``testtools.TestCase`` and bundles -together a bunch of related tests. The tests themselves are methods on that -class that begin with ``test_``. - -Running your tests ------------------- - -You can run these tests in many ways. testtools provides a very basic -mechanism for doing so:: - - $ python -m testtools.run exampletest - Tests running... - Ran 2 tests in 0.000s - - OK - -where 'exampletest' is a module that contains unit tests. By default, -``testtools.run`` will *not* recursively search the module or package for unit -tests. To do this, you will need to either have the discover_ module -installed or have Python 2.7 or later, and then run:: - - $ python -m testtools.run discover packagecontainingtests - -For more information see the Python 2.7 unittest documentation, or:: - - python -m testtools.run --help - -As your testing needs grow and evolve, you will probably want to use a more -sophisticated test runner. There are many of these for Python, and almost all -of them will happily run testtools tests. In particular: - -* testrepository_ -* Trial_ -* nose_ -* unittest2_ -* `zope.testrunner`_ (aka zope.testing) - -From now on, we'll assume that you know how to run your tests. 
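Because testtools result objects are self-contained, a run can also be driven
from plain Python without any runner. A minimal sketch, assuming a
``packagecontainingtests`` package that standard unittest discovery can
import::

    import sys
    import unittest

    from testtools import TextTestResult

    # Load tests with stock unittest discovery, report through testtools.
    suite = unittest.TestLoader().discover('packagecontainingtests')
    result = TextTestResult(sys.stdout)
    result.startTestRun()
    try:
        suite.run(result)
    finally:
        result.stopTestRun()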
- -Running test with Distutils -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you are using Distutils_ to build your Python project, you can use the testtools -Distutils_ command to integrate testtools into your Distutils_ workflow:: - - from distutils.core import setup - from testtools import TestCommand - setup(name='foo', - version='1.0', - py_modules=['foo'], - cmdclass={'test': TestCommand} - ) - -You can then run:: - - $ python setup.py test -m exampletest - Tests running... - Ran 2 tests in 0.000s - - OK - -For more information about the capabilities of the `TestCommand` command see:: - - $ python setup.py test --help - -You can use the `setup configuration`_ to specify the default behavior of the -`TestCommand` command. - -Assertions -========== - -The core of automated testing is making assertions about the way things are, -and getting a nice, helpful, informative error message when things are not as -they ought to be. - -All of the assertions that you can find in Python standard unittest_ can be -found in testtools (remember, testtools extends unittest). testtools changes -the behaviour of some of those assertions slightly and adds some new -assertions that you will almost certainly find useful. - - -Improved assertRaises ---------------------- - -``TestCase.assertRaises`` returns the caught exception. This is useful for -asserting more things about the exception than just the type:: - - def test_square_bad_input(self): - # 'square' raises a TypeError if it's given bad input, say a - # string. - e = self.assertRaises(TypeError, silly.square, "orange") - self.assertEqual("orange", e.bad_value) - self.assertEqual("Cannot square 'orange', not a number.", str(e)) - -Note that this is incompatible with the ``assertRaises`` in unittest2 and -Python2.7. - - -ExpectedException ------------------ - -If you are using a version of Python that supports the ``with`` context -manager syntax, you might prefer to use that syntax to ensure that code raises -particular errors. ``ExpectedException`` does just that. For example:: - - def test_square_root_bad_input_2(self): - # 'square' raises a TypeError if it's given bad input. - with ExpectedException(TypeError, "Cannot square.*"): - silly.square('orange') - -The first argument to ``ExpectedException`` is the type of exception you -expect to see raised. The second argument is optional, and can be either a -regular expression or a matcher. If it is a regular expression, the ``str()`` -of the raised exception must match the regular expression. If it is a matcher, -then the raised exception object must match it. The optional third argument -``msg`` will cause the raised error to be annotated with that message. - - -assertIn, assertNotIn ---------------------- - -These two assertions check whether a value is in a sequence and whether a -value is not in a sequence. They are "assert" versions of the ``in`` and -``not in`` operators. For example:: - - def test_assert_in_example(self): - self.assertIn('a', 'cat') - self.assertNotIn('o', 'cat') - self.assertIn(5, list_of_primes_under_ten) - self.assertNotIn(12, list_of_primes_under_ten) - - -assertIs, assertIsNot ---------------------- - -These two assertions check whether values are identical to one another. This -is sometimes useful when you want to test something more strict than mere -equality. 
For example:: - - def test_assert_is_example(self): - foo = [None] - foo_alias = foo - bar = [None] - self.assertIs(foo, foo_alias) - self.assertIsNot(foo, bar) - self.assertEqual(foo, bar) # They are equal, but not identical - - -assertIsInstance ----------------- - -As much as we love duck-typing and polymorphism, sometimes you need to check -whether or not a value is of a given type. This method does that. For -example:: - - def test_assert_is_instance_example(self): - now = datetime.now() - self.assertIsInstance(now, datetime) - -Note that there is no ``assertIsNotInstance`` in testtools currently. - - -expectFailure -------------- - -Sometimes it's useful to write tests that fail. For example, you might want -to turn a bug report into a unit test, but you don't know how to fix the bug -yet. Or perhaps you want to document a known, temporary deficiency in a -dependency. - -testtools gives you the ``TestCase.expectFailure`` to help with this. You use -it to say that you expect this assertion to fail. When the test runs and the -assertion fails, testtools will report it as an "expected failure". - -Here's an example:: - - def test_expect_failure_example(self): - self.expectFailure( - "cats should be dogs", self.assertEqual, 'cats', 'dogs') - -As long as 'cats' is not equal to 'dogs', the test will be reported as an -expected failure. - -If ever by some miracle 'cats' becomes 'dogs', then testtools will report an -"unexpected success". Unlike standard unittest, testtools treats this as -something that fails the test suite, like an error or a failure. - - -Matchers -======== - -The built-in assertion methods are very useful, they are the bread and butter -of writing tests. However, soon enough you will probably want to write your -own assertions. Perhaps there are domain specific things that you want to -check (e.g. assert that two widgets are aligned parallel to the flux grid), or -perhaps you want to check something that could almost but not quite be found -in some other standard library (e.g. assert that two paths point to the same -file). - -When you are in such situations, you could either make a base class for your -project that inherits from ``testtools.TestCase`` and make sure that all of -your tests derive from that, *or* you could use the testtools ``Matcher`` -system. - - -Using Matchers --------------- - -Here's a really basic example using stock matchers found in testtools:: - - import testtools - from testtools.matchers import Equals - - class TestSquare(TestCase): - def test_square(self): - result = square(7) - self.assertThat(result, Equals(49)) - -The line ``self.assertThat(result, Equals(49))`` is equivalent to -``self.assertEqual(result, 49)`` and means "assert that ``result`` equals 49". -The difference is that ``assertThat`` is a more general method that takes some -kind of observed value (in this case, ``result``) and any matcher object -(here, ``Equals(49)``). - -The matcher object could be absolutely anything that implements the Matcher -protocol. This means that you can make more complex matchers by combining -existing ones:: - - def test_square_silly(self): - result = square(7) - self.assertThat(result, Not(Equals(50))) - -Which is roughly equivalent to:: - - def test_square_silly(self): - result = square(7) - self.assertNotEqual(result, 50) - - -Stock matchers --------------- - -testtools comes with many matchers built in. They can all be found in and -imported from the ``testtools.matchers`` module. - -Equals -~~~~~~ - -Matches if two items are equal. 
For example:: - - def test_equals_example(self): - self.assertThat([42], Equals([42])) - - -Is -~~~ - -Matches if two items are identical. For example:: - - def test_is_example(self): - foo = object() - self.assertThat(foo, Is(foo)) - - -IsInstance -~~~~~~~~~~ - -Adapts isinstance() to use as a matcher. For example:: - - def test_isinstance_example(self): - class MyClass:pass - self.assertThat(MyClass(), IsInstance(MyClass)) - self.assertThat(MyClass(), IsInstance(MyClass, str)) - - -The raises helper -~~~~~~~~~~~~~~~~~ - -Matches if a callable raises a particular type of exception. For example:: - - def test_raises_example(self): - self.assertThat(lambda: 1/0, raises(ZeroDivisionError)) - -This is actually a convenience function that combines two other matchers: -Raises_ and MatchesException_. - - -DocTestMatches -~~~~~~~~~~~~~~ - -Matches a string as if it were the output of a doctest_ example. Very useful -for making assertions about large chunks of text. For example:: - - import doctest - - def test_doctest_example(self): - output = "Colorless green ideas" - self.assertThat( - output, - DocTestMatches("Colorless ... ideas", doctest.ELLIPSIS)) - -We highly recommend using the following flags:: - - doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.REPORT_NDIFF - - -GreaterThan -~~~~~~~~~~~ - -Matches if the given thing is greater than the thing in the matcher. For -example:: - - def test_greater_than_example(self): - self.assertThat(3, GreaterThan(2)) - - -LessThan -~~~~~~~~ - -Matches if the given thing is less than the thing in the matcher. For -example:: - - def test_less_than_example(self): - self.assertThat(2, LessThan(3)) - - -StartsWith, EndsWith -~~~~~~~~~~~~~~~~~~~~ - -These matchers check to see if a string starts with or ends with a particular -substring. For example:: - - def test_starts_and_ends_with_example(self): - self.assertThat('underground', StartsWith('und')) - self.assertThat('underground', EndsWith('und')) - - -Contains -~~~~~~~~ - -This matcher checks to see if the given thing contains the thing in the -matcher. For example:: - - def test_contains_example(self): - self.assertThat('abc', Contains('b')) - - -MatchesException -~~~~~~~~~~~~~~~~ - -Matches an exc_info tuple if the exception is of the correct type. For -example:: - - def test_matches_exception_example(self): - try: - raise RuntimeError('foo') - except RuntimeError: - exc_info = sys.exc_info() - self.assertThat(exc_info, MatchesException(RuntimeError)) - self.assertThat(exc_info, MatchesException(RuntimeError('bar')) - -Most of the time, you will want to uses `The raises helper`_ instead. - - -NotEquals -~~~~~~~~~ - -Matches if something is not equal to something else. Note that this is subtly -different to ``Not(Equals(x))``. ``NotEquals(x)`` will match if ``y != x``, -``Not(Equals(x))`` will match if ``not y == x``. - -You only need to worry about this distinction if you are testing code that -relies on badly written overloaded equality operators. - - -KeysEqual -~~~~~~~~~ - -Matches if the keys of one dict are equal to the keys of another dict. For -example:: - - def test_keys_equal(self): - x = {'a': 1, 'b': 2} - y = {'a': 2, 'b': 3} - self.assertThat(x, KeysEqual(y)) - - -MatchesRegex -~~~~~~~~~~~~ - -Matches a string against a regular expression, which is a wonderful thing to -be able to do, if you think about it:: - - def test_matches_regex_example(self): - self.assertThat('foo', MatchesRegex('fo+')) - - -HasLength -~~~~~~~~~ - -Check the length of a collection. 
The following assertion will fail:: - - self.assertThat([1, 2, 3], HasLength(2)) - -But this one won't:: - - self.assertThat([1, 2, 3], HasLength(3)) - - -File- and path-related matchers -------------------------------- - -testtools also has a number of matchers to help with asserting things about -the state of the filesystem. - -PathExists -~~~~~~~~~~ - -Matches if a path exists:: - - self.assertThat('/', PathExists()) - - -DirExists -~~~~~~~~~ - -Matches if a path exists and it refers to a directory:: - - # This will pass on most Linux systems. - self.assertThat('/home/', DirExists()) - # This will not - self.assertThat('/home/jml/some-file.txt', DirExists()) - - -FileExists -~~~~~~~~~~ - -Matches if a path exists and it refers to a file (as opposed to a directory):: - - # This will pass on most Linux systems. - self.assertThat('/bin/true', FileExists()) - # This will not. - self.assertThat('/home/', FileExists()) - - -DirContains -~~~~~~~~~~~ - -Matches if the given directory contains the specified files and directories. -Say we have a directory ``foo`` that has the files ``a``, ``b`` and ``c``, -then:: - - self.assertThat('foo', DirContains(['a', 'b', 'c'])) - -will match, but:: - - self.assertThat('foo', DirContains(['a', 'b'])) - -will not. - -The matcher sorts both the input and the list of names we get back from the -filesystem. - -You can use this in a more advanced way, and match the sorted directory -listing against an arbitrary matcher:: - - self.assertThat('foo', DirContains(matcher=Contains('a'))) - - -FileContains -~~~~~~~~~~~~ - -Matches if the given file has the specified contents. Say there's a file -called ``greetings.txt`` with the contents, ``Hello World!``:: - - self.assertThat('greetings.txt', FileContains("Hello World!")) - -will match. - -You can also use this in a more advanced way, and match the contents of the -file against an arbitrary matcher:: - - self.assertThat('greetings.txt', FileContains(matcher=Contains('!'))) - - -HasPermissions -~~~~~~~~~~~~~~ - -Used for asserting that a file or directory has certain permissions. Uses -octal-mode permissions for both input and matching. For example:: - - self.assertThat('/tmp', HasPermissions('1777')) - self.assertThat('id_rsa', HasPermissions('0600')) - -This is probably more useful on UNIX systems than on Windows systems. - - -SamePath -~~~~~~~~ - -Matches if two paths actually refer to the same thing. The paths don't have -to exist, but if they do exist, ``SamePath`` will resolve any symlinks.:: - - self.assertThat('somefile', SamePath('childdir/../somefile')) - - -TarballContains -~~~~~~~~~~~~~~~ - -Matches the contents of a tarball. In many ways, much like ``DirContains``, -but instead of matching on ``os.listdir`` matches on ``TarFile.getnames``. - - -Combining matchers ------------------- - -One great thing about matchers is that you can readily combine existing -matchers to get variations on their behaviour or to quickly build more complex -assertions. - -Below are a few of the combining matchers that come with testtools. - - -Not -~~~ - -Negates another matcher. For example:: - - def test_not_example(self): - self.assertThat([42], Not(Equals("potato"))) - self.assertThat([42], Not(Is([42]))) - -If you find yourself using ``Not`` frequently, you may wish to create a custom -matcher for it. For example:: - - IsNot = lambda x: Not(Is(x)) - - def test_not_example_2(self): - self.assertThat([42], IsNot([42])) - - -Annotate -~~~~~~~~ - -Used to add custom notes to a matcher. 
For example:: - - def test_annotate_example(self): - result = 43 - self.assertThat( - result, Annotate("Not the answer to the Question!", Equals(42)) - -Since the annotation is only ever displayed when there is a mismatch -(e.g. when ``result`` does not equal 42), it's a good idea to phrase the note -negatively, so that it describes what a mismatch actually means. - -As with Not_, you may wish to create a custom matcher that describes a -common operation. For example:: - - PoliticallyEquals = lambda x: Annotate("Death to the aristos!", Equals(x)) - - def test_annotate_example_2(self): - self.assertThat("orange", PoliticallyEquals("yellow")) - -You can have assertThat perform the annotation for you as a convenience:: - - def test_annotate_example_3(self): - self.assertThat("orange", Equals("yellow"), "Death to the aristos!") - - -AfterPreprocessing -~~~~~~~~~~~~~~~~~~ - -Used to make a matcher that applies a function to the matched object before -matching. This can be used to aid in creating trivial matchers as functions, for -example:: - - def test_after_preprocessing_example(self): - def PathHasFileContent(content): - def _read(path): - return open(path).read() - return AfterPreprocessing(_read, Equals(content)) - self.assertThat('/tmp/foo.txt', PathHasFileContent("Hello world!")) - - -MatchesAll -~~~~~~~~~~ - -Combines many matchers to make a new matcher. The new matcher will only match -things that match every single one of the component matchers. - -It's much easier to understand in Python than in English:: - - def test_matches_all_example(self): - has_und_at_both_ends = MatchesAll(StartsWith("und"), EndsWith("und")) - # This will succeed. - self.assertThat("underground", has_und_at_both_ends) - # This will fail. - self.assertThat("found", has_und_at_both_ends) - # So will this. - self.assertThat("undead", has_und_at_both_ends) - -At this point some people ask themselves, "why bother doing this at all? why -not just have two separate assertions?". It's a good question. - -The first reason is that when a ``MatchesAll`` gets a mismatch, the error will -include information about all of the bits that mismatched. When you have two -separate assertions, as below:: - - def test_two_separate_assertions(self): - self.assertThat("foo", StartsWith("und")) - self.assertThat("foo", EndsWith("und")) - -Then you get absolutely no information from the second assertion if the first -assertion fails. Tests are largely there to help you debug code, so having -more information in error messages is a big help. - -The second reason is that it is sometimes useful to give a name to a set of -matchers. ``has_und_at_both_ends`` is a bit contrived, of course, but it is -clear. The ``FileExists`` and ``DirExists`` matchers included in testtools -are perhaps better real examples. - -If you want only the first mismatch to be reported, pass ``first_only=True`` -as a keyword parameter to ``MatchesAll``. - - -MatchesAny -~~~~~~~~~~ - -Like MatchesAll_, ``MatchesAny`` combines many matchers to make a new -matcher. The difference is that the new matchers will match a thing if it -matches *any* of the component matchers. - -For example:: - - def test_matches_any_example(self): - self.assertThat(42, MatchesAny(Equals(5), Not(Equals(6)))) - - -AllMatch -~~~~~~~~ - -Matches many values against a single matcher. 
Can be used to make sure that -many things all meet the same condition:: - - def test_all_match_example(self): - self.assertThat([2, 3, 5, 7], AllMatch(LessThan(10))) - -If the match fails, then all of the values that fail to match will be included -in the error message. - -In some ways, this is the converse of MatchesAll_. - - -MatchesListwise -~~~~~~~~~~~~~~~ - -Where ``MatchesAny`` and ``MatchesAll`` combine many matchers to match a -single value, ``MatchesListwise`` combines many matchers to match many values. - -For example:: - - def test_matches_listwise_example(self): - self.assertThat( - [1, 2, 3], MatchesListwise(map(Equals, [1, 2, 3]))) - -This is useful for writing custom, domain-specific matchers. - -If you want only the first mismatch to be reported, pass ``first_only=True`` -to ``MatchesListwise``. - - -MatchesSetwise -~~~~~~~~~~~~~~ - -Combines many matchers to match many values, without regard to their order. - -Here's an example:: - - def test_matches_setwise_example(self): - self.assertThat( - [1, 2, 3], MatchesSetwise(Equals(2), Equals(3), Equals(1))) - -Much like ``MatchesListwise``, best used for writing custom, domain-specific -matchers. - - -MatchesStructure -~~~~~~~~~~~~~~~~ - -Creates a matcher that matches certain attributes of an object against a -pre-defined set of matchers. - -It's much easier to understand in Python than in English:: - - def test_matches_structure_example(self): - foo = Foo() - foo.a = 1 - foo.b = 2 - matcher = MatchesStructure(a=Equals(1), b=Equals(2)) - self.assertThat(foo, matcher) - -Since all of the matchers used were ``Equals``, we could also write this using -the ``byEquality`` helper:: - - def test_matches_structure_example(self): - foo = Foo() - foo.a = 1 - foo.b = 2 - matcher = MatchesStructure.byEquality(a=1, b=2) - self.assertThat(foo, matcher) - -``MatchesStructure.fromExample`` takes an object and a list of attributes and -creates a ``MatchesStructure`` matcher where each attribute of the matched -object must equal each attribute of the example object. For example:: - - matcher = MatchesStructure.fromExample(foo, 'a', 'b') - -is exactly equivalent to ``matcher`` in the previous example. - - -MatchesPredicate -~~~~~~~~~~~~~~~~ - -Sometimes, all you want to do is create a matcher that matches if a given -function returns True, and mismatches if it returns False. - -For example, you might have an ``is_prime`` function and want to make a -matcher based on it:: - - def test_prime_numbers(self): - IsPrime = MatchesPredicate(is_prime, '%s is not prime.') - self.assertThat(7, IsPrime) - self.assertThat(1987, IsPrime) - # This will fail. - self.assertThat(42, IsPrime) - -Which will produce the error message:: - - Traceback (most recent call last): - File "...", line ..., in test_prime_numbers - self.assertThat(42, IsPrime) - MismatchError: 42 is not prime. - - -MatchesPredicateWithParams -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Sometimes you can't use a trivial predicate and instead need to pass in some -parameters each time. In that case, MatchesPredicateWithParams is your go-to -tool for creating ad hoc matchers. MatchesPredicateWithParams takes a predicate -function and a message and returns a factory to produce matchers from that. The -predicate needs to return a boolean (or any truthy object), and accept the -object to match + whatever was passed into the factory.
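The predicate itself is just an ordinary function. As a point of reference for the example below, a minimal ``divisible`` predicate (sketched here for illustration; testtools does not ship one) could look like::

    def divisible(observed, divisor):
        # True when ``observed`` divides evenly by ``divisor``.
        return observed % divisor == 0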
- -For example, you might have a ``divisible`` function and want to make a -matcher based on it:: - - def test_divisible_numbers(self): - IsDivisibleBy = MatchesPredicateWithParams( - divisible, '{0} is not divisible by {1}') - self.assertThat(7, IsDivisibleBy(1)) - self.assertThat(7, IsDivisibleBy(7)) - # This will fail. - self.assertThat(7, IsDivisibleBy(2)) - -Which will produce the error message:: - - Traceback (most recent call last): - File "...", line ..., in test_divisible_numbers - self.assertThat(7, IsDivisibleBy(2)) - MismatchError: 7 is not divisible by 2. - - -Raises -~~~~~~ - -Takes whatever the callable raises as an exc_info tuple and matches it against -whatever matcher it was given. For example, if you want to assert that a -callable raises an exception of a given type:: - - def test_raises_example(self): - self.assertThat( - lambda: 1/0, Raises(MatchesException(ZeroDivisionError))) - -Although note that this could also be written as:: - - def test_raises_example_convenient(self): - self.assertThat(lambda: 1/0, raises(ZeroDivisionError)) - -See also MatchesException_ and `the raises helper`_. - - -Writing your own matchers -------------------------- - -Combining matchers is fun and can get you a very long way indeed, but -sometimes you will have to write your own. Here's how. - -You need to make two closely-linked objects: a ``Matcher`` and a -``Mismatch``. The ``Matcher`` knows how to actually make the comparison, and -the ``Mismatch`` knows how to describe a failure to match. - -Here's an example matcher:: - - class IsDivisibleBy(object): - """Match if a number is divisible by another number.""" - def __init__(self, divider): - self.divider = divider - def __str__(self): - return 'IsDivisibleBy(%s)' % (self.divider,) - def match(self, actual): - remainder = actual % self.divider - if remainder != 0: - return IsDivisibleByMismatch(actual, self.divider, remainder) - else: - return None - -The matcher has a constructor that takes parameters that describe what you -actually *expect*, in this case a number that other numbers ought to be -divisible by. It has a ``__str__`` method, the result of which is displayed -on failure by ``assertThat``, and a ``match`` method that does the actual -matching. - -``match`` takes something to match against, here ``actual``, and decides -whether or not it matches. If it does match, then ``match`` must return -``None``. If it does *not* match, then ``match`` must return a ``Mismatch`` -object. ``assertThat`` will call ``match`` and then fail the test if it -returns a non-None value. For example:: - - def test_is_divisible_by_example(self): - # This succeeds, since IsDivisibleBy(5).match(10) returns None. - self.assertThat(10, IsDivisibleBy(5)) - # This fails, since IsDivisibleBy(7).match(10) returns a mismatch. - self.assertThat(10, IsDivisibleBy(7)) - -The mismatch is responsible for what sort of error message the failing test -generates. Here's an example mismatch:: - - class IsDivisibleByMismatch(object): - def __init__(self, number, divider, remainder): - self.number = number - self.divider = divider - self.remainder = remainder - - def describe(self): - return "%r is not divisible by %r, %r remains" % ( - self.number, self.divider, self.remainder) - - def get_details(self): - return {} - -The mismatch takes information about the mismatch, and provides a ``describe`` -method that assembles all of that into a nice error message for end users. -You can use the ``get_details`` method to provide extra, arbitrary data with -the mismatch (e.g.
the contents of a log file). Most of the time it's fine to -just return an empty dict. You can read more about Details_ elsewhere in this -document. - -Sometimes you don't need to create a custom mismatch class. In particular, if -you don't care *when* the description is calculated, then you can just do that -in the Matcher itself like this:: - - def match(self, actual): - remainder = actual % self.divider - if remainder != 0: - return Mismatch( - "%r is not divisible by %r, %r remains" % ( - actual, self.divider, remainder)) - else: - return None - -When writing a ``describe`` method or constructing a ``Mismatch`` object the -code should ensure it only emits printable unicode. As this output must be -combined with other text and forwarded for presentation, letting through -non-ascii bytes of ambiguous encoding or control characters could throw an -exception or mangle the display. In most cases simply avoiding the ``%s`` -format specifier and using ``%r`` instead will be enough. For examples of -more complex formatting see the ``testtools.matchers`` implementations. - - -Details -======= - -As we may have mentioned once or twice already, one of the great benefits of -automated tests is that they help find, isolate and debug errors in your -system. - -Frequently, however, the information provided by a mere assertion failure is -not enough. It's often useful to have other information: the contents of log -files; what queries were run; benchmark timing information; what state certain -subsystem components are in and so forth. - -testtools calls all of these things "details" and provides a single, powerful -mechanism for including this information in your test run. - -Here's an example of how to add them:: - - from testtools import TestCase - from testtools.content import text_content - - class TestSomething(TestCase): - - def test_thingy(self): - self.addDetail('arbitrary-color-name', text_content("blue")) - 1 / 0 # Gratuitous error! - -A detail is an arbitrary piece of content given a name that's unique within the -test. Here the name is ``arbitrary-color-name`` and the content is -``text_content("blue")``. The name can be any text string, and the content -can be any ``testtools.content.Content`` object. - -When the test runs, testtools will show you something like this:: - - ====================================================================== - ERROR: exampletest.TestSomething.test_thingy - ---------------------------------------------------------------------- - arbitrary-color-name: {{{blue}}} - - Traceback (most recent call last): - File "exampletest.py", line 8, in test_thingy - 1 / 0 # Gratuitous error! - ZeroDivisionError: integer division or modulo by zero - ------------ - Ran 1 test in 0.030s - -As you can see, the detail is included as an attachment, here saying -that our arbitrary-color-name is "blue". - - -Content -------- - -For the actual content of details, testtools uses its own MIME-based Content -object. This allows you to attach any information that you could possibly -conceive of to a test, and allows testtools to use or serialize that -information. - -The basic ``testtools.content.Content`` object is constructed from a -``testtools.content.ContentType`` and a nullary callable that must return an -iterator of chunks of bytes that the content is made from.
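Because the callable returns an iterator of byte chunks, content can be produced lazily rather than held in memory all at once. Here is a minimal sketch of a chunked file reader built on that idea (the ``chunked_file_content`` helper is illustrative, not part of the testtools API)::

    from testtools.content import Content
    from testtools.content_type import ContentType

    def chunked_file_content(path, chunk_size=4096):
        def iter_chunks():
            # A nullary callable: opens the file only when called and
            # yields its bytes one chunk at a time.
            with open(path, 'rb') as f:
                chunk = f.read(chunk_size)
                while chunk:
                    yield chunk
                    chunk = f.read(chunk_size)
        return Content(ContentType('application', 'octet-stream'), iter_chunks)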
- -So, to make a Content object that is just a simple string of text, you can -do:: - - from testtools.content import Content - from testtools.content_type import ContentType - - text = Content(ContentType('text', 'plain'), lambda: ["some text"]) - -Because adding small bits of text content is very common, there's also a -convenience method:: - - text = text_content("some text") - -To make content out of an image stored on disk, you could do something like:: - - image = Content(ContentType('image', 'png'), lambda: [open('foo.png', 'rb').read()]) - -Or you could use the convenience function:: - - image = content_from_file('foo.png', ContentType('image', 'png')) - -The ``lambda`` helps make sure that the file is opened and the actual bytes -read only when they are needed – by default, when the test is finished. This -means that tests can construct and add Content objects freely without worrying -too much about how they affect run time. - - -A realistic example -------------------- - -A very common use of details is to add a log file to failing tests. Say your -project has a server represented by a class ``SomeServer`` that you can start -up and shut down in tests, but runs in another process. You want to test -interaction with that server, and whenever the interaction fails, you want to -see the client-side error *and* the logs from the server-side. Here's how you -might do it:: - - from testtools import TestCase - from testtools.content import attach_file - - from myproject import SomeServer - - class SomeTestCase(TestCase): - - def setUp(self): - super(SomeTestCase, self).setUp() - self.server = SomeServer() - self.server.start_up() - self.addCleanup(self.server.shut_down) - self.addCleanup(attach_file, self.server.logfile, self) - - def test_a_thing(self): - self.assertEqual("cool", self.server.temperature) - -This test will attach the log file of ``SomeServer`` to each test that is -run. testtools will only display the log file for failing tests, so it's not -such a big deal. - -If the act of adding a detail is expensive, you might want to use -addOnException_ so that you only do it when a test actually raises an -exception. - - -Controlling test execution -========================== - -.. _addCleanup: - -addCleanup ----------- - -``TestCase.addCleanup`` is a robust way to arrange for a clean up function to -be called before ``tearDown``. This is a powerful and simple alternative to -putting clean up logic in a try/finally block or ``tearDown`` method. For -example:: - - def test_foo(self): - foo.lock() - self.addCleanup(foo.unlock) - ... - -This is particularly useful if you have some sort of factory in your test:: - - def make_locked_foo(self): - foo = Foo() - foo.lock() - self.addCleanup(foo.unlock) - return foo - - def test_frotz_a_foo(self): - foo = self.make_locked_foo() - foo.frotz() - self.assertEqual(foo.frotz_count, 1) - -Any extra arguments or keyword arguments passed to ``addCleanup`` are passed -to the callable at cleanup time. - -Cleanups can also report multiple errors, if appropriate, by wrapping them in -a ``testtools.MultipleExceptions`` object:: - - raise MultipleExceptions(exc_info1, exc_info2) - - -Fixtures --------- - -Tests often depend on a system being set up in a certain way, or having -certain resources available to them.
Perhaps a test needs a connection to the -database or access to a running external server. - -One common way of doing this is to do:: - - class SomeTest(TestCase): - def setUp(self): - super(SomeTest, self).setUp() - self.server = Server() - self.server.setUp() - self.addCleanup(self.server.tearDown) - -testtools provides a more convenient, declarative way to do the same thing:: - - class SomeTest(TestCase): - def setUp(self): - super(SomeTest, self).setUp() - self.server = self.useFixture(Server()) - -``useFixture(fixture)`` calls ``setUp`` on the fixture, schedules a clean up -to clean it up, and schedules a clean up to attach all details_ held by the -fixture to the test case. The fixture object must meet the -``fixtures.Fixture`` protocol (version 0.3.4 or newer, see fixtures_). - -If you have anything beyond the simplest test set up, we recommend that -you put this set up into a ``Fixture`` class. Once there, the fixture can be -easily re-used by other tests and can be combined with other fixtures to make -more complex resources. - - -Skipping tests --------------- - -Many reasons exist to skip a test: a dependency might be missing; a test might -be too expensive and thus should not be run while on battery power; or perhaps -the test is testing an incomplete feature. - -``TestCase.skipTest`` is a simple way to have a test stop running and be -reported as a skipped test, rather than a success, error or failure. For -example:: - - def test_make_symlink(self): - symlink = getattr(os, 'symlink', None) - if symlink is None: - self.skipTest("No symlink support") - symlink(whatever, something_else) - -Using ``skipTest`` means that you can make decisions about what tests to run -as late as possible, and close to the actual tests. Without it, you might be -forced to use convoluted logic during test loading, which is a bit of a mess. - - -Legacy skip support -~~~~~~~~~~~~~~~~~~~ - -If you are using this feature when running your test suite with a legacy -``TestResult`` object that is missing the ``addSkip`` method, then the -``addError`` method will be invoked instead. If you are using a test result -from testtools, you do not have to worry about this. - -In older versions of testtools, ``skipTest`` was known as ``skip``. Since -Python 2.7 added ``skipTest`` support, the ``skip`` name is now deprecated. -No warning is emitted yet – some time in the future we may do so. - - -addOnException --------------- - -Sometimes, you might wish to do something only when a test fails. Perhaps you -need to run expensive diagnostic routines or some such. -``TestCase.addOnException`` allows you to easily do just this. For example:: - - class SomeTest(TestCase): - def setUp(self): - super(SomeTest, self).setUp() - self.server = self.useFixture(SomeServer()) - self.addOnException(self.attach_server_diagnostics) - - def attach_server_diagnostics(self, exc_info): - self.server.prep_for_diagnostics() # Expensive! - self.addDetail('server-diagnostics', self.server.get_diagnostics()) - - def test_a_thing(self): - self.assertEqual('cheese', 'chalk') - -In this example, ``attach_server_diagnostics`` will only be called when a test -fails. It is given the exc_info tuple of the error raised by the test, just -in case it is needed. - - -Twisted support ---------------- - -testtools provides *highly experimental* support for running Twisted tests – -tests that return a Deferred_ and rely on the Twisted reactor. You should not -use this feature right now.
We reserve the right to change the API and -behaviour without telling you first. - -However, if you are going to, here's how you do it:: - - from testtools import TestCase - from testtools.deferredruntest import AsynchronousDeferredRunTest - - class MyTwistedTests(TestCase): - - run_tests_with = AsynchronousDeferredRunTest - - def test_foo(self): - # ... - return d - -In particular, note that you do *not* have to use a special base ``TestCase`` -in order to run Twisted tests. - -You can also run individual tests within a test case class using the Twisted -test runner:: - - class MyTestsSomeOfWhichAreTwisted(TestCase): - - def test_normal(self): - pass - - @run_test_with(AsynchronousDeferredRunTest) - def test_twisted(self): - # ... - return d - -Here are some tips for converting your Trial tests into testtools tests. - -* Use the ``AsynchronousDeferredRunTest`` runner -* Make sure to upcall to ``setUp`` and ``tearDown`` -* Don't use ``setUpClass`` or ``tearDownClass`` -* Don't expect setting .todo, .timeout or .skip attributes to do anything -* ``flushLoggedErrors`` is ``testtools.deferredruntest.flush_logged_errors`` -* ``assertFailure`` is ``testtools.deferredruntest.assert_fails_with`` -* Trial spins the reactor a couple of times before cleaning it up, - ``AsynchronousDeferredRunTest`` does not. If you rely on this behavior, use - ``AsynchronousDeferredRunTestForBrokenTwisted``. - -force_failure -------------- - -Setting the ``testtools.TestCase.force_failure`` instance variable to ``True`` will cause the test to be marked as a failure, but won't stop the test code from running (see :ref:`force_failure`). - - -Test helpers -============ - -testtools comes with a few little things that make it a little bit easier to -write tests. - - -TestCase.patch --------------- - -``patch`` is a convenient way to monkey-patch a Python object for the duration -of your test. It's especially useful for testing legacy code, e.g.:: - - def test_foo(self): - my_stream = StringIO() - self.patch(sys, 'stderr', my_stream) - run_some_code_that_prints_to_stderr() - self.assertEqual('', my_stream.getvalue()) - -The call to ``patch`` above masks ``sys.stderr`` with ``my_stream`` so that -anything printed to stderr will be captured in a StringIO variable that can -actually be tested. Once the test is done, the real ``sys.stderr`` is restored to -its rightful place. - - -Creation methods ----------------- - -Often when writing unit tests, you want to create an object that is a -completely normal instance of its type. You don't want there to be anything -special about its properties, because you are testing generic behaviour rather -than specific conditions. - -A lot of the time, test authors do this by making up silly strings and numbers -and passing them to constructors (e.g. 42, 'foo', "bar", etc.), and that's -fine. However, sometimes it's useful to be able to create arbitrary objects -at will, without having to make up silly sample data. - -To help with this, ``testtools.TestCase`` implements creation methods called -``getUniqueString`` and ``getUniqueInteger``. They return strings and -integers that are unique within the context of the test, and that can be used to -assemble more complex objects.
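``getUniqueInteger`` is used in exactly the same way as ``getUniqueString``; a quick sketch (the ``Server`` class and its ``port`` attribute are hypothetical)::

    def test_server_remembers_port(self):
        port = self.getUniqueInteger()
        server = Server(port=port)
        self.assertEqual(port, server.port)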
Here's a basic example where -``getUniqueString`` is used instead of saying "foo" or "bar" or whatever:: - - class SomeTest(TestCase): - - def test_full_name(self): - first_name = self.getUniqueString() - last_name = self.getUniqueString() - p = Person(first_name, last_name) - self.assertEqual(p.full_name, "%s %s" % (first_name, last_name)) - - -And here's how it could be used to make a complicated test:: - - class TestCoupleLogic(TestCase): - - def make_arbitrary_person(self): - return Person(self.getUniqueString(), self.getUniqueString()) - - def test_get_invitation(self): - a = self.make_arbitrary_person() - b = self.make_arbitrary_person() - couple = Couple(a, b) - event_name = self.getUniqueString() - invitation = couple.get_invitation(event_name) - self.assertEqual( - invitation, - "We invite %s and %s to %s" % ( - a.full_name, b.full_name, event_name)) - -Essentially, creation methods like these are a way of reducing the number of -assumptions in your tests and communicating to test readers that the exact -details of certain variables don't actually matter. - -See pages 419-423 of `xUnit Test Patterns`_ by Gerard Meszaros for a detailed -discussion of creation methods. - -Test attributes ---------------- - -Inspired by the ``nosetests`` ``attr`` plugin, testtools provides support for -marking up test methods with attributes, which are then exposed in the test -id and can be used when filtering tests by id (e.g. via ``--load-list``):: - - from testtools.testcase import attr, WithAttributes - - class AnnotatedTests(WithAttributes, TestCase): - - @attr('simple') - def test_one(self): - pass - - @attr('more', 'than', 'one') - def test_two(self): - pass - - @attr('or') - @attr('stacked') - def test_three(self): - pass - -General helpers -=============== - -Conditional imports -------------------- - -Lots of the time we would like to conditionally import modules. testtools -uses the small ``extras`` library to do this. This used to be part of testtools. - -Instead of:: - - try: - from twisted.internet import defer - except ImportError: - defer = None - -You can do:: - - defer = try_import('twisted.internet.defer') - - -Instead of:: - - try: - from StringIO import StringIO - except ImportError: - from io import StringIO - -You can do:: - - StringIO = try_imports(['StringIO.StringIO', 'io.StringIO']) - - -Safe attribute testing ----------------------- - -``hasattr`` is broken_ on many versions of Python. The helper ``safe_hasattr`` -can be used to safely test whether an object has a particular attribute. Like -``try_import``, this used to be in testtools but is now in extras. - - -Nullary callables ------------------ - -Sometimes you want to be able to pass around a function with the arguments -already specified. The normal way of doing this in Python is:: - - nullary = lambda: f(*args, **kwargs) - nullary() - -Which is mostly good enough, but loses a bit of debugging information. If you -take the ``repr()`` of ``nullary``, you're only told that it's a lambda, and -you get none of the juicy meaning that you'd get from the ``repr()`` of ``f``. - -The solution is to use ``Nullary`` instead:: - - nullary = Nullary(f, *args, **kwargs) - nullary() - -Here, ``repr(nullary)`` will be the same as ``repr(f)``. - - -.. _testrepository: https://launchpad.net/testrepository -.. _Trial: http://twistedmatrix.com/documents/current/core/howto/testing.html -.. _nose: http://somethingaboutorange.com/mrl/projects/nose/ -.. _unittest2: http://pypi.python.org/pypi/unittest2 -..
_zope.testrunner: http://pypi.python.org/pypi/zope.testrunner/ -.. _xUnit test patterns: http://xunitpatterns.com/ -.. _fixtures: http://pypi.python.org/pypi/fixtures -.. _unittest: http://docs.python.org/library/unittest.html -.. _doctest: http://docs.python.org/library/doctest.html -.. _Deferred: http://twistedmatrix.com/documents/current/core/howto/defer.html -.. _discover: http://pypi.python.org/pypi/discover -.. _`testtools API docs`: http://mumak.net/testtools/apidocs/ -.. _Distutils: http://docs.python.org/library/distutils.html -.. _`setup configuration`: http://docs.python.org/distutils/configfile.html -.. _broken: http://chipaca.com/post/3210673069/hasattr-17-less-harmful diff -Nru python-testtools-0.9.35/doc/hacking.rst python-testtools-0.9.39/doc/hacking.rst --- python-testtools-0.9.35/doc/hacking.rst 2013-11-28 08:28:37.000000000 +0000 +++ python-testtools-0.9.39/doc/hacking.rst 2014-08-29 01:34:28.000000000 +0000 @@ -2,6 +2,13 @@ Contributing to testtools ========================= +Bugs and patches +---------------- + +`File bugs ` on Launchpad, and +`send patches ` on Github. + + Coding style ------------ @@ -93,10 +100,10 @@ Committing to trunk ------------------- -Testtools is maintained using git, with its master repo at https://github.com -/testing-cabal/testtools. This gives every contributor the ability to commit -their work to their own branches. However permission must be granted to allow -contributors to commit to the trunk branch. +Testtools is maintained using git, with its master repo at +https://github.com/testing-cabal/testtools. This gives every contributor the +ability to commit their work to their own branches. However permission must be +granted to allow contributors to commit to the trunk branch. Commit access to trunk is obtained by joining the `testing-cabal`_, either as an Owner or a Committer. 
Commit access is contingent on obeying the testtools Binary files /tmp/xiLaE6Rb03/python-testtools-0.9.35/doc/.hacking.rst.swp and /tmp/LHfuLxvepJ/python-testtools-0.9.39/doc/.hacking.rst.swp differ diff -Nru python-testtools-0.9.35/doc/index.rst python-testtools-0.9.39/doc/index.rst --- python-testtools-0.9.35/doc/index.rst 2013-01-26 17:18:27.000000000 +0000 +++ python-testtools-0.9.39/doc/index.rst 2014-08-29 01:34:28.000000000 +0000 @@ -25,7 +25,7 @@ for-framework-folk hacking Changes to testtools - API reference documentation + API reference documentation Indices and tables ================== diff -Nru python-testtools-0.9.35/doc/overview.rst python-testtools-0.9.39/doc/overview.rst --- python-testtools-0.9.35/doc/overview.rst 2013-01-26 17:18:27.000000000 +0000 +++ python-testtools-0.9.39/doc/overview.rst 2014-08-29 01:34:28.000000000 +0000 @@ -26,7 +26,7 @@ def attach_log_file(self): self.addDetail( 'log-file', - Content(UTF8_TEXT + Content(UTF8_TEXT, lambda: open(self.server.logfile, 'r').readlines())) def test_server_is_cool(self): diff -Nru python-testtools-0.9.35/.gitignore python-testtools-0.9.39/.gitignore --- python-testtools-0.9.35/.gitignore 2014-01-29 09:53:34.000000000 +0000 +++ python-testtools-0.9.39/.gitignore 2014-08-29 01:34:28.000000000 +0000 @@ -14,3 +14,5 @@ *.swp *~ testtools.egg-info +/build/ +/.env/ diff -Nru python-testtools-0.9.35/.gitreview python-testtools-0.9.39/.gitreview --- python-testtools-0.9.35/.gitreview 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/.gitreview 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,4 @@ +[gerrit] +host=review.testing-cabal.org +port=29418 +project=testing-cabal/testtools.git diff -Nru python-testtools-0.9.35/NEWS python-testtools-0.9.39/NEWS --- python-testtools-0.9.35/NEWS 2014-01-29 09:59:38.000000000 +0000 +++ python-testtools-0.9.39/NEWS 2014-08-29 01:34:28.000000000 +0000 @@ -7,6 +7,71 @@ NEXT ~~~~ +0.9.39 +~~~~~~ + +Brown paper bag release - 0.9.38 was broken for some users, +_jython_aware_splitext was not defined entirely compatibly. +(Robert Collins, #https://github.com/testing-cabal/testtools/issues/100) + +0.9.38 +~~~~~~ + +Bug fixes for test importing. + +Improvements +------------ + +* Discovery import error detection wasn't implemented for python 2.6 (the + 'discover' module). (Robert Collins) + +* Discovery now executes load_tests (if present) in __init__ in all packages. + (Robert Collins, http://bugs.python.org/issue16662) + +0.9.37 +~~~~~~ + +Minor improvements to correctness. + +Changes +------- + +* ``stdout`` is now correctly honoured on ``run.TestProgram`` - before the + runner objects would be created with no stdout parameter. If construction + fails, the previous parameter list is attempted, permitting compatibility + with Runner classes that don't accept stdout as a parameter. + (Robert Collins) + +* The ``ExtendedToStreamDecorator`` now handles content objects with one less + packet - the last packet of the source content is sent with EOF set rather + than an empty packet with EOF set being sent after the last packet of the + source content. (Robert Collins) + +0.9.36 +~~~~~~ + +Welcome to our long overdue 0.9.36 release, which improves compatibility with +Python3.4, adds assert_that, a function for using matchers without TestCase +objects, and finally will error if you try to use setUp or tearDown twice - +since that invariably leads to bad things of one sort or another happening. + +Changes +------- + +* Error if ``setUp`` or ``tearDown`` are called twice. 
+ (Robert Collins, #882884) + +* Make testtools compatible with the ``unittest.expectedFailure`` decorator in + Python 3.4. (Thomi Richards) + + +Improvements +------------ + +* Introduce the assert_that function, which allows matchers to be used + independent of testtools.TestCase. (Daniel Watkins, #1243834) + + 0.9.35 ~~~~~~ diff -Nru python-testtools-0.9.35/PKG-INFO python-testtools-0.9.39/PKG-INFO --- python-testtools-0.9.35/PKG-INFO 2014-01-29 10:01:53.000000000 +0000 +++ python-testtools-0.9.39/PKG-INFO 1970-01-01 00:00:00.000000000 +0000 @@ -1,113 +0,0 @@ -Metadata-Version: 1.1 -Name: testtools -Version: 0.9.35 -Summary: Extensions to the Python standard library unit testing framework -Home-page: https://github.com/testing-cabal/testtools -Author: Jonathan M. Lange -Author-email: jml+testtools@mumak.net -License: UNKNOWN -Description: ====================================== - testtools: tasteful testing for Python - ====================================== - - testtools is a set of extensions to the Python standard library's unit testing - framework. These extensions have been derived from many years of experience - with unit testing in Python and come from many different sources. testtools - supports Python versions all the way back to Python 2.6. - - What better way to start than with a contrived code snippet?:: - - from testtools import TestCase - from testtools.content import Content - from testtools.content_type import UTF8_TEXT - from testtools.matchers import Equals - - from myproject import SillySquareServer - - class TestSillySquareServer(TestCase): - - def setUp(self): - super(TestSillySquare, self).setUp() - self.server = self.useFixture(SillySquareServer()) - self.addCleanup(self.attach_log_file) - - def attach_log_file(self): - self.addDetail( - 'log-file', - Content(UTF8_TEXT - lambda: open(self.server.logfile, 'r').readlines())) - - def test_server_is_cool(self): - self.assertThat(self.server.temperature, Equals("cool")) - - def test_square(self): - self.assertThat(self.server.silly_square_of(7), Equals(49)) - - - Why use testtools? - ================== - - Better assertion methods - ------------------------ - - The standard assertion methods that come with unittest aren't as helpful as - they could be, and there aren't quite enough of them. testtools adds - ``assertIn``, ``assertIs``, ``assertIsInstance`` and their negatives. - - - Matchers: better than assertion methods - --------------------------------------- - - Of course, in any serious project you want to be able to have assertions that - are specific to that project and the particular problem that it is addressing. - Rather than forcing you to define your own assertion methods and maintain your - own inheritance hierarchy of ``TestCase`` classes, testtools lets you write - your own "matchers", custom predicates that can be plugged into a unit test:: - - def test_response_has_bold(self): - # The response has bold text. - response = self.server.getResponse() - self.assertThat(response, HTMLContains(Tag('bold', 'b'))) - - - More debugging info, when you need it - -------------------------------------- - - testtools makes it easy to add arbitrary data to your test result. If you - want to know what's in a log file when a test fails, or what the load was on - the computer when a test started, or what files were open, you can add that - information with ``TestCase.addDetail``, and it will appear in the test - results if that test fails. 
- - - Extend unittest, but stay compatible and re-usable - -------------------------------------------------- - - testtools goes to great lengths to allow serious test authors and test - *framework* authors to do whatever they like with their tests and their - extensions while staying compatible with the standard library's unittest. - - testtools has completely parametrized how exceptions raised in tests are - mapped to ``TestResult`` methods and how tests are actually executed (ever - wanted ``tearDown`` to be called regardless of whether ``setUp`` succeeds?) - - It also provides many simple but handy utilities, like the ability to clone a - test, a ``MultiTestResult`` object that lets many result objects get the - results from one test suite, adapters to bring legacy ``TestResult`` objects - into our new golden age. - - - Cross-Python compatibility - -------------------------- - - testtools gives you the very latest in unit testing technology in a way that - will work with Python 2.6, 2.7, 3.1 and 3.2. - - If you wish to use testtools with Python 2.4 or 2.5, then please use testtools - 0.9.15. Up to then we supported Python 2.4 and 2.5, but we found the - constraints involved in not using the newer language features onerous as we - added more support for versions post Python 3. - -Platform: UNKNOWN -Classifier: License :: OSI Approved :: MIT License -Classifier: Programming Language :: Python :: 3 diff -Nru python-testtools-0.9.35/scripts/all-pythons python-testtools-0.9.39/scripts/all-pythons --- python-testtools-0.9.35/scripts/all-pythons 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/scripts/all-pythons 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,93 @@ +#!/usr/bin/python + +"""Run the testtools test suite for all supported Pythons. + +Prints output as a subunit test suite. If anything goes to stderr, that is +treated as a test error. If a Python is not available, then it is skipped. +""" + +from datetime import datetime +import os +import subprocess +import sys + +import subunit +from subunit import ( + iso8601, + _make_stream_binary, + TestProtocolClient, + TestProtocolServer, + ) +from testtools import ( + PlaceHolder, + TestCase, + ) +from testtools.compat import BytesIO +from testtools.content import text_content + + +ROOT = os.path.dirname(os.path.dirname(__file__)) + + +def run_for_python(version, result, tests): + if not tests: + tests = ['testtools.tests.test_suite'] + # XXX: This could probably be broken up and put into subunit. + python = 'python%s' % (version,) + # XXX: Correct API, but subunit doesn't support it. 
:( + # result.tags(set(python), set()) + result.time(now()) + test = PlaceHolder(''.join(c for c in python if c != '.')) + process = subprocess.Popen( + '%s -c pass' % (python,), shell=True, + stdout=subprocess.PIPE, stderr=subprocess.PIPE) + process.communicate() + + if process.returncode: + result.startTest(test) + result.addSkip(test, reason='%s not available' % (python,)) + result.stopTest(test) + return + + env = os.environ.copy() + if env.get('PYTHONPATH', None): + env['PYTHONPATH'] = os.pathsep.join([ROOT, env['PYTHONPATH']]) + else: + env['PYTHONPATH'] = ROOT + result.time(now()) + protocol = TestProtocolServer(result) + subunit_path = os.path.join(os.path.dirname(subunit.__file__), 'run.py') + cmd = [ + python, + '-W', 'ignore:Module testtools was already imported', + subunit_path] + cmd.extend(tests) + process = subprocess.Popen( + cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env) + _make_stream_binary(process.stdout) + _make_stream_binary(process.stderr) + # XXX: This buffers everything. Bad for memory, bad for getting progress + # on jenkins. + output, error = process.communicate() + protocol.readFrom(BytesIO(output)) + if error: + result.startTest(test) + result.addError(test, details={ + 'stderr': text_content(error), + }) + result.stopTest(test) + result.time(now()) + # XXX: Correct API, but subunit doesn't support it. :( + #result.tags(set(), set(python)) + + +def now(): + return datetime.utcnow().replace(tzinfo=iso8601.Utc()) + + + +if __name__ == '__main__': + sys.path.append(ROOT) + result = TestProtocolClient(sys.stdout) + for version in '2.6 2.7 3.0 3.1 3.2'.split(): + run_for_python(version, result, sys.argv[1:]) diff -Nru python-testtools-0.9.35/scripts/_lp_release.py python-testtools-0.9.39/scripts/_lp_release.py --- python-testtools-0.9.35/scripts/_lp_release.py 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/scripts/_lp_release.py 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,232 @@ +#!/usr/bin/python + +"""Release testtools on Launchpad. + +Steps: + 1. Make sure all "Fix committed" bugs are assigned to 'next' + 2. Rename 'next' to the new version + 3. Release the milestone + 4. Upload the tarball + 5. Create a new 'next' milestone + 6. Mark all "Fix committed" bugs in the milestone as "Fix released" + +Assumes that NEWS is in the parent directory, that the release sections are +underlined with '~' and the subsections are underlined with '-'. + +Assumes that this file is in the 'scripts' directory a testtools tree that has +already had a tarball built and uploaded with 'python setup.py sdist upload +--sign'. +""" + +from datetime import datetime, timedelta, tzinfo +import logging +import os +import sys + +from launchpadlib.launchpad import Launchpad +from launchpadlib import uris + + +APP_NAME = 'testtools-lp-release' +CACHE_DIR = os.path.expanduser('~/.launchpadlib/cache') +SERVICE_ROOT = uris.LPNET_SERVICE_ROOT + +FIX_COMMITTED = u"Fix Committed" +FIX_RELEASED = u"Fix Released" + +# Launchpad file type for a tarball upload. 
+CODE_RELEASE_TARBALL = 'Code Release Tarball' + +PROJECT_NAME = 'testtools' +NEXT_MILESTONE_NAME = 'next' + + +class _UTC(tzinfo): + """UTC""" + + def utcoffset(self, dt): + return timedelta(0) + + def tzname(self, dt): + return "UTC" + + def dst(self, dt): + return timedelta(0) + +UTC = _UTC() + + +def configure_logging(): + level = logging.INFO + log = logging.getLogger(APP_NAME) + log.setLevel(level) + handler = logging.StreamHandler() + handler.setLevel(level) + formatter = logging.Formatter("%(levelname)s: %(message)s") + handler.setFormatter(formatter) + log.addHandler(handler) + return log +LOG = configure_logging() + + +def get_path(relpath): + """Get the absolute path for something relative to this file.""" + return os.path.abspath( + os.path.join( + os.path.dirname(os.path.dirname(__file__)), relpath)) + + +def assign_fix_committed_to_next(testtools, next_milestone): + """Find all 'Fix Committed' and make sure they are in 'next'.""" + fixed_bugs = list(testtools.searchTasks(status=FIX_COMMITTED)) + for task in fixed_bugs: + LOG.debug("%s" % (task.title,)) + if task.milestone != next_milestone: + task.milestone = next_milestone + LOG.info("Re-assigning %s" % (task.title,)) + task.lp_save() + + +def rename_milestone(next_milestone, new_name): + """Rename 'next_milestone' to 'new_name'.""" + LOG.info("Renaming %s to %s" % (next_milestone.name, new_name)) + next_milestone.name = new_name + next_milestone.lp_save() + + +def get_release_notes_and_changelog(news_path): + release_notes = [] + changelog = [] + state = None + last_line = None + + def is_heading_marker(line, marker_char): + return line and line == marker_char * len(line) + + LOG.debug("Loading NEWS from %s" % (news_path,)) + with open(news_path, 'r') as news: + for line in news: + line = line.strip() + if state is None: + if (is_heading_marker(line, '~') and + not last_line.startswith('NEXT')): + milestone_name = last_line + state = 'release-notes' + else: + last_line = line + elif state == 'title': + # The line after the title is a heading marker line, so we + # ignore it and change state. That which follows are the + # release notes. + state = 'release-notes' + elif state == 'release-notes': + if is_heading_marker(line, '-'): + state = 'changelog' + # Last line in the release notes is actually the first + # line of the changelog. + changelog = [release_notes.pop(), line] + else: + release_notes.append(line) + elif state == 'changelog': + if is_heading_marker(line, '~'): + # Last line in changelog is actually the first line of the + # next section. 
+ changelog.pop() + break + else: + changelog.append(line) + else: + raise ValueError("Couldn't parse NEWS") + + release_notes = '\n'.join(release_notes).strip() + '\n' + changelog = '\n'.join(changelog).strip() + '\n' + return milestone_name, release_notes, changelog + + +def release_milestone(milestone, release_notes, changelog): + date_released = datetime.now(tz=UTC) + LOG.info( + "Releasing milestone: %s, date %s" % (milestone.name, date_released)) + release = milestone.createProductRelease( + date_released=date_released, + changelog=changelog, + release_notes=release_notes, + ) + milestone.is_active = False + milestone.lp_save() + return release + + +def create_milestone(series, name): + """Create a new milestone in the same series as 'release_milestone'.""" + LOG.info("Creating milestone %s in series %s" % (name, series.name)) + return series.newMilestone(name=name) + + +def close_fixed_bugs(milestone): + tasks = list(milestone.searchTasks()) + for task in tasks: + LOG.debug("Found %s" % (task.title,)) + if task.status == FIX_COMMITTED: + LOG.info("Closing %s" % (task.title,)) + task.status = FIX_RELEASED + else: + LOG.warning( + "Bug not fixed, removing from milestone: %s" % (task.title,)) + task.milestone = None + task.lp_save() + + +def upload_tarball(release, tarball_path): + with open(tarball_path) as tarball: + tarball_content = tarball.read() + sig_path = tarball_path + '.asc' + with open(sig_path) as sig: + sig_content = sig.read() + tarball_name = os.path.basename(tarball_path) + LOG.info("Uploading tarball: %s" % (tarball_path,)) + release.add_file( + file_type=CODE_RELEASE_TARBALL, + file_content=tarball_content, filename=tarball_name, + signature_content=sig_content, + signature_filename=sig_path, + content_type="application/x-gzip; charset=binary") + + +def release_project(launchpad, project_name, next_milestone_name): + testtools = launchpad.projects[project_name] + next_milestone = testtools.getMilestone(name=next_milestone_name) + release_name, release_notes, changelog = get_release_notes_and_changelog( + get_path('NEWS')) + LOG.info("Releasing %s %s" % (project_name, release_name)) + # Since reversing these operations is hard, and inspecting errors from + # Launchpad is also difficult, do some looking before leaping. 
+ errors = [] + tarball_path = get_path('dist/%s-%s.tar.gz' % (project_name, release_name,)) + if not os.path.isfile(tarball_path): + errors.append("%s does not exist" % (tarball_path,)) + if not os.path.isfile(tarball_path + '.asc'): + errors.append("%s does not exist" % (tarball_path + '.asc',)) + if testtools.getMilestone(name=release_name): + errors.append("Milestone %s exists on %s" % (release_name, project_name)) + if errors: + for error in errors: + LOG.error(error) + return 1 + assign_fix_committed_to_next(testtools, next_milestone) + rename_milestone(next_milestone, release_name) + release = release_milestone(next_milestone, release_notes, changelog) + upload_tarball(release, tarball_path) + create_milestone(next_milestone.series_target, next_milestone_name) + close_fixed_bugs(next_milestone) + return 0 + + +def main(args): + launchpad = Launchpad.login_with( + APP_NAME, SERVICE_ROOT, CACHE_DIR, credentials_file='.lp_creds') + return release_project(launchpad, PROJECT_NAME, NEXT_MILESTONE_NAME) + + +if __name__ == '__main__': + sys.exit(main(sys.argv)) diff -Nru python-testtools-0.9.35/scripts/README python-testtools-0.9.39/scripts/README --- python-testtools-0.9.35/scripts/README 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/scripts/README 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,3 @@ +These are scripts to help with building, maintaining and releasing testtools. + +There is little here for anyone except a testtools contributor. diff -Nru python-testtools-0.9.35/scripts/update-rtfd python-testtools-0.9.39/scripts/update-rtfd --- python-testtools-0.9.35/scripts/update-rtfd 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/scripts/update-rtfd 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,11 @@ +#!/usr/bin/python + +from StringIO import StringIO +from urllib2 import urlopen + + +WEB_HOOK = 'http://readthedocs.org/build/588' + + +if __name__ == '__main__': + urlopen(WEB_HOOK, data=' ') diff -Nru python-testtools-0.9.35/setup.cfg python-testtools-0.9.39/setup.cfg --- python-testtools-0.9.35/setup.cfg 2014-01-29 10:01:53.000000000 +0000 +++ python-testtools-0.9.39/setup.cfg 2014-08-29 01:34:28.000000000 +0000 @@ -1,10 +1,4 @@ [test] test_module = testtools.tests -buffer = 1 -catch = 1 - -[egg_info] -tag_build = -tag_date = 0 -tag_svn_revision = 0 - +buffer=1 +catch=1 diff -Nru python-testtools-0.9.35/.testr.conf python-testtools-0.9.39/.testr.conf --- python-testtools-0.9.35/.testr.conf 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/.testr.conf 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,4 @@ +[DEFAULT] +test_command=${PYTHON:-python} -m subunit.run $LISTOPT $IDOPTION testtools.tests.test_suite +test_id_option=--load-list $IDFILE +test_list_option=--list diff -Nru python-testtools-0.9.35/testtools/assertions.py python-testtools-0.9.39/testtools/assertions.py --- python-testtools-0.9.35/testtools/assertions.py 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/testtools/assertions.py 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,22 @@ +from testtools.matchers import ( + Annotate, + MismatchError, + ) + + +def assert_that(matchee, matcher, message='', verbose=False): + """Assert that matchee is matched by matcher. + + This should only be used when you need to use a function based + matcher, assertThat in Testtools.Testcase is prefered and has more + features + + :param matchee: An object to match with matcher. + :param matcher: An object meeting the testtools.Matcher protocol. 
+ :raises MismatchError: When matcher does not match thing. + """ + matcher = Annotate.if_message(message, matcher) + mismatch = matcher.match(matchee) + if not mismatch: + return + raise MismatchError(matchee, matcher, mismatch, verbose) diff -Nru python-testtools-0.9.35/testtools/content.py python-testtools-0.9.39/testtools/content.py --- python-testtools-0.9.35/testtools/content.py 2014-01-29 09:53:19.000000000 +0000 +++ python-testtools-0.9.39/testtools/content.py 2014-08-29 01:34:28.000000000 +0000 @@ -25,6 +25,7 @@ _b, _format_exception_only, _format_stack_list, + _isbytes, _TB_HEADER, _u, str_is_unicode, @@ -263,6 +264,8 @@ This is useful for adding details which are short strings. """ + if _isbytes(text): + raise TypeError('text_content must be given a string, not bytes.') return Content(UTF8_TEXT, lambda: [text.encode('utf8')]) diff -Nru python-testtools-0.9.35/testtools/distutilscmd.py python-testtools-0.9.39/testtools/distutilscmd.py --- python-testtools-0.9.35/testtools/distutilscmd.py 2013-01-26 17:18:27.000000000 +0000 +++ python-testtools-0.9.39/testtools/distutilscmd.py 2014-08-29 01:34:28.000000000 +0000 @@ -26,7 +26,7 @@ def __init__(self, dist): Command.__init__(self, dist) - self.runner = TestToolsTestRunner(sys.stdout) + self.runner = TestToolsTestRunner(stdout=sys.stdout) def initialize_options(self): diff -Nru python-testtools-0.9.35/testtools/__init__.py python-testtools-0.9.39/testtools/__init__.py --- python-testtools-0.9.35/testtools/__init__.py 2014-01-29 09:59:30.000000000 +0000 +++ python-testtools-0.9.39/testtools/__init__.py 2014-08-29 01:34:28.000000000 +0000 @@ -122,4 +122,4 @@ # If the releaselevel is 'final', then the tarball will be major.minor.micro. # Otherwise it is major.minor.micro~$(revno). -__version__ = (0, 9, 35, 'final', 0) +__version__ = (0, 9, 39, 'final', 0) diff -Nru python-testtools-0.9.35/testtools/run.py python-testtools-0.9.39/testtools/run.py --- python-testtools-0.9.35/testtools/run.py 2013-11-28 08:28:37.000000000 +0000 +++ python-testtools-0.9.39/testtools/run.py 2014-08-29 01:34:28.000000000 +0000 @@ -9,13 +9,13 @@ """ from functools import partial -import os +import os.path import unittest import sys from extras import safe_hasattr -from testtools import TextTestResult +from testtools import TextTestResult, testcase from testtools.compat import classtypes, istext, unicode_output_stream from testtools.testsuite import filter_by_ids, iterate_tests, sorted_tests @@ -29,10 +29,13 @@ defaultTestLoader = discover.DiscoveringTestLoader() defaultTestLoaderCls = discover.DiscoveringTestLoader have_discover = True + discover_impl = discover except ImportError: have_discover = False else: have_discover = True + discover_impl = unittest.loader +discover_fixed = False def list_test(test): @@ -48,13 +51,17 @@ :return: A tuple of test ids that would run and error strings describing things that failed to import. """ - unittest_import_str = 'unittest.loader.ModuleImportFailure.' + unittest_import_strs = set([ + 'unittest.loader.ModuleImportFailure.', 'discover.ModuleImportFailure.' + ]) test_ids = [] errors = [] for test in iterate_tests(test): - # to this ugly. - if test.id().startswith(unittest_import_str): - errors.append(test.id()[len(unittest_import_str):]) + # Much ugly. + for prefix in unittest_import_strs: + if test.id().startswith(prefix): + errors.append(test.id()[len(prefix):]) + break else: test_ids.append(test.id()) return test_ids, errors @@ -73,6 +80,8 @@ :param stdout: Stream to use for stdout. 
""" self.failfast = failfast + if stdout is None: + stdout = sys.stdout self.stdout = stdout def list(self, test): @@ -89,7 +98,7 @@ def run(self, test): "Run the given test case or test suite." result = TextTestResult( - unicode_output_stream(sys.stdout), failfast=self.failfast) + unicode_output_stream(self.stdout), failfast=self.failfast) result.startTestRun() try: return test.run(result) @@ -119,6 +128,8 @@ # - The limitation of using getopt is declared to the user. # - http://bugs.python.org/issue16709 is worked around, by sorting tests when # discover is used. +# - We monkey-patch the discover and unittest loaders to address +# http://bugs.python.org/issue16662 with the proposed upstream patch. FAILFAST = " -f, --failfast Stop on first failure\n" CATCHBREAK = " -c, --catch Catch control-C and display results\n" @@ -185,6 +196,7 @@ argv = sys.argv if stdout is None: stdout = sys.stdout + self.stdout = stdout self.exit = exit self.failfast = failfast @@ -224,7 +236,7 @@ runner.list(self.test) else: for test in iterate_tests(self.test): - stdout.write('%s\n' % test.id()) + self.stdout.write('%s\n' % test.id()) def usageExit(self, msg=None): if msg: @@ -296,6 +308,7 @@ if not have_discover: raise AssertionError("Unable to use discovery, must use python 2.7 " "or greater, or install the discover package.") + _fix_discovery() self.progName = '%s discover' % self.progName import optparse parser = optparse.OptionParser() @@ -378,17 +391,140 @@ try: testRunner = self.testRunner(verbosity=self.verbosity, failfast=self.failfast, - buffer=self.buffer) + buffer=self.buffer, + stdout=self.stdout) except TypeError: - # didn't accept the verbosity, buffer or failfast arguments + # didn't accept the verbosity, buffer, failfast or stdout arguments + # Try with the prior contract try: - testRunner = self.testRunner() + testRunner = self.testRunner(verbosity=self.verbosity, + failfast=self.failfast, + buffer=self.buffer) except TypeError: - # it is assumed to be a TestRunner instance - testRunner = self.testRunner + # Now try calling it with defaults + try: + testRunner = self.testRunner() + except TypeError: + # it is assumed to be a TestRunner instance + testRunner = self.testRunner return testRunner +def _fix_discovery(): + # Monkey patch in the bugfix from http://bugs.python.org/issue16662 + # - the code here is a straight copy from the Python core tree + # with the patch applied. + global discover_fixed + if discover_fixed: + return + # Do we have a fixed Python? + # (not committed upstream yet - so we can't uncomment this code, + # but when it gets committed, the next version to be released won't + # need monkey patching. + # if sys.version_info[:2] > (3, 4): + # discover_fixed = True + # return + if not have_discover: + return + if safe_hasattr(discover_impl, '_jython_aware_splitext'): + _jython_aware_splitext = discover_impl._jython_aware_splitext + else: + def _jython_aware_splitext(path): + if path.lower().endswith('$py.class'): + return path[:-9] + return os.path.splitext(path)[0] + def loadTestsFromModule(self, module, use_load_tests=True, pattern=None): + """Return a suite of all tests cases contained in the given module""" + # use_load_tests is preserved for compatability though it was never + # an official API. + # pattern is not an official API either; it is used in discovery to + # chain the requested pattern down. 
+ tests = [] + for name in dir(module): + obj = getattr(module, name) + if isinstance(obj, type) and issubclass(obj, unittest.TestCase): + tests.append(self.loadTestsFromTestCase(obj)) + + load_tests = getattr(module, 'load_tests', None) + tests = self.suiteClass(tests) + if use_load_tests and load_tests is not None: + try: + return load_tests(self, tests, pattern) + except Exception as e: + return discover_impl._make_failed_load_tests( + module.__name__, e, self.suiteClass) + return tests + def _find_tests(self, start_dir, pattern, namespace=False): + """Used by discovery. Yields test suites it loads.""" + paths = sorted(os.listdir(start_dir)) + + for path in paths: + full_path = os.path.join(start_dir, path) + if os.path.isfile(full_path): + if not discover_impl.VALID_MODULE_NAME.match(path): + # valid Python identifiers only + continue + if not self._match_path(path, full_path, pattern): + continue + # if the test file matches, load it + name = self._get_name_from_path(full_path) + try: + module = self._get_module_from_name(name) + except testcase.TestSkipped as e: + yield discover_impl._make_skipped_test( + name, e, self.suiteClass) + except: + yield discover_impl._make_failed_import_test( + name, self.suiteClass) + else: + mod_file = os.path.abspath(getattr(module, '__file__', full_path)) + realpath = _jython_aware_splitext( + os.path.realpath(mod_file)) + fullpath_noext = _jython_aware_splitext( + os.path.realpath(full_path)) + if realpath.lower() != fullpath_noext.lower(): + module_dir = os.path.dirname(realpath) + mod_name = _jython_aware_splitext( + os.path.basename(full_path)) + expected_dir = os.path.dirname(full_path) + msg = ("%r module incorrectly imported from %r. Expected %r. " + "Is this module globally installed?") + raise ImportError(msg % (mod_name, module_dir, expected_dir)) + yield self.loadTestsFromModule(module, pattern=pattern) + elif os.path.isdir(full_path): + if (not namespace and + not os.path.isfile(os.path.join(full_path, '__init__.py'))): + continue + + load_tests = None + tests = None + name = self._get_name_from_path(full_path) + try: + package = self._get_module_from_name(name) + except testcase.TestSkipped as e: + yield discover_impl._make_skipped_test( + name, e, self.suiteClass) + except: + yield discover_impl._make_failed_import_test( + name, self.suiteClass) + else: + load_tests = getattr(package, 'load_tests', None) + tests = self.loadTestsFromModule(package, pattern=pattern) + if tests is not None: + # tests loaded from package file + yield tests + + if load_tests is not None: + # loadTestsFromModule(package) has load_tests for us. + continue + # recurse into the package + pkg_tests = self._find_tests( + full_path, pattern, namespace=namespace) + for test in pkg_tests: + yield test + defaultTestLoaderCls.loadTestsFromModule = loadTestsFromModule + defaultTestLoaderCls._find_tests = _find_tests + ################ def main(argv, stdout): diff -Nru python-testtools-0.9.35/testtools/testcase.py python-testtools-0.9.39/testtools/testcase.py --- python-testtools-0.9.35/testtools/testcase.py 2014-01-29 09:53:19.000000000 +0000 +++ python-testtools-0.9.39/testtools/testcase.py 2014-08-29 01:34:28.000000000 +0000 @@ -16,6 +16,7 @@ ] import copy +import functools import itertools import sys import types @@ -83,6 +84,20 @@ 'unittest.case._ExpectedFailure', _ExpectedFailure) +# Copied from unittest before python 3.4 release. Used to maintain +# compatibility with unittest sub-test feature. Users should not use this +# directly. 
+def _expectedFailure(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + try: + func(*args, **kwargs) + except Exception: + raise _ExpectedFailure(sys.exc_info()) + raise _UnexpectedSuccess + return wrapper + + def run_test_with(test_runner, **kwargs): """Decorate a test as using a specific ``RunTest``. @@ -192,6 +207,8 @@ runTest = getattr( test_method, '_run_test_with', self.run_tests_with) self.__RunTest = runTest + if getattr(test_method, '__unittest_expecting_failure__', False): + setattr(self, self._testMethodName, _expectedFailure(test_method)) self.__exception_handlers = [] self.exception_handlers = [ (self.skipException, self._report_skip), @@ -322,9 +339,9 @@ failUnlessEqual = assertEquals = assertEqual - def assertIn(self, needle, haystack): + def assertIn(self, needle, haystack, message=''): """Assert that needle is in haystack.""" - self.assertThat(haystack, Contains(needle)) + self.assertThat(haystack, Contains(needle), message) def assertIsNone(self, observed, message=''): """Assert that 'observed' is equal to None. @@ -359,10 +376,10 @@ matcher = Not(Is(expected)) self.assertThat(observed, matcher, message) - def assertNotIn(self, needle, haystack): + def assertNotIn(self, needle, haystack, message=''): """Assert that needle is not in haystack.""" matcher = Not(Contains(needle)) - self.assertThat(haystack, matcher) + self.assertThat(haystack, matcher, message) def assertIsInstance(self, obj, klass, msg=None): if isinstance(klass, tuple): @@ -637,11 +654,24 @@ def setUp(self): super(TestCase, self).setUp() + if self.__setup_called: + raise ValueError( + "In File: %s\n" + "TestCase.setUp was already called. Do not explicitly call " + "setUp from your tests. In your own setUp, use super to call " + "the base setUp." + % (sys.modules[self.__class__.__module__].__file__,)) self.__setup_called = True def tearDown(self): super(TestCase, self).tearDown() - unittest.TestCase.tearDown(self) + if self.__teardown_called: + raise ValueError( + "In File: %s\n" + "TestCase.tearDown was already called. Do not explicitly call " + "tearDown from your tests. In your own tearDown, use super to " + "call the base tearDown." + % (sys.modules[self.__class__.__module__].__file__,)) self.__teardown_called = True diff -Nru python-testtools-0.9.35/testtools/testresult/real.py python-testtools-0.9.39/testtools/testresult/real.py --- python-testtools-0.9.35/testtools/testresult/real.py 2014-01-29 09:53:19.000000000 +0000 +++ python-testtools-0.9.39/testtools/testresult/real.py 2014-08-29 01:34:28.000000000 +0000 @@ -266,7 +266,6 @@ """A test result for reporting the activity of a test run. Typical use - ----------- >>> result = StreamResult() >>> result.startTestRun() @@ -282,7 +281,6 @@ >>> result.status(self.id(), 'success') General concepts - ---------------- StreamResult is built to process events that are emitted by tests during a test run or test enumeration. 
The test run may be running concurrently, and @@ -1336,10 +1334,13 @@ if details is not None: for name, content in details.items(): mime_type = repr(content.content_type) - for file_bytes in content.iter_bytes(): - self.status(file_name=name, file_bytes=file_bytes, - mime_type=mime_type, test_id=test_id, timestamp=now) - self.status(file_name=name, file_bytes=_b(""), eof=True, + file_bytes = None + for next_bytes in content.iter_bytes(): + if file_bytes is not None: + self.status(file_name=name, file_bytes=file_bytes, + mime_type=mime_type, test_id=test_id, timestamp=now) + file_bytes = next_bytes + self.status(file_name=name, file_bytes=file_bytes, eof=True, mime_type=mime_type, test_id=test_id, timestamp=now) if reason is not None: self.status(file_name='reason', file_bytes=reason.encode('utf8'), diff -Nru python-testtools-0.9.35/testtools/tests/__init__.py python-testtools-0.9.39/testtools/tests/__init__.py --- python-testtools-0.9.35/testtools/tests/__init__.py 2013-01-26 17:18:32.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/__init__.py 2014-08-29 01:34:28.000000000 +0000 @@ -9,6 +9,7 @@ def test_suite(): from testtools.tests import ( matchers, + test_assert_that, test_compat, test_content, test_content_type, @@ -27,6 +28,7 @@ ) modules = [ matchers, + test_assert_that, test_compat, test_content, test_content_type, diff -Nru python-testtools-0.9.35/testtools/tests/test_assert_that.py python-testtools-0.9.39/testtools/tests/test_assert_that.py --- python-testtools-0.9.35/testtools/tests/test_assert_that.py 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_assert_that.py 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,152 @@ +from doctest import ELLIPSIS + +from testtools import ( + TestCase, + ) +from testtools.assertions import ( + assert_that, + ) +from testtools.compat import ( + _u, + ) +from testtools.content import ( + TracebackContent, + ) +from testtools.matchers import ( + Annotate, + DocTestMatches, + Equals, + ) + + +class AssertThatTests(object): + """A mixin containing shared tests for assertThat and assert_that.""" + + def assert_that_callable(self, *args, **kwargs): + raise NotImplementedError + + def assertFails(self, message, function, *args, **kwargs): + """Assert that function raises a failure with the given message.""" + failure = self.assertRaises( + self.failureException, function, *args, **kwargs) + self.assert_that_callable(failure, DocTestMatches(message, ELLIPSIS)) + + def test_assertThat_matches_clean(self): + class Matcher(object): + def match(self, foo): + return None + self.assert_that_callable("foo", Matcher()) + + def test_assertThat_mismatch_raises_description(self): + calls = [] + class Mismatch(object): + def __init__(self, thing): + self.thing = thing + def describe(self): + calls.append(('describe_diff', self.thing)) + return "object is not a thing" + def get_details(self): + return {} + class Matcher(object): + def match(self, thing): + calls.append(('match', thing)) + return Mismatch(thing) + def __str__(self): + calls.append(('__str__',)) + return "a description" + class Test(type(self)): + def test(self): + self.assert_that_callable("foo", Matcher()) + result = Test("test").run() + self.assertEqual([ + ('match', "foo"), + ('describe_diff', "foo"), + ], calls) + self.assertFalse(result.wasSuccessful()) + + def test_assertThat_output(self): + matchee = 'foo' + matcher = Equals('bar') + expected = matcher.match(matchee).describe() + self.assertFails(expected, self.assert_that_callable, matchee, 
matcher) + + def test_assertThat_message_is_annotated(self): + matchee = 'foo' + matcher = Equals('bar') + expected = Annotate('woo', matcher).match(matchee).describe() + self.assertFails(expected, + self.assert_that_callable, matchee, matcher, 'woo') + + def test_assertThat_verbose_output(self): + matchee = 'foo' + matcher = Equals('bar') + expected = ( + 'Match failed. Matchee: %r\n' + 'Matcher: %s\n' + 'Difference: %s\n' % ( + matchee, + matcher, + matcher.match(matchee).describe(), + )) + self.assertFails( + expected, + self.assert_that_callable, matchee, matcher, verbose=True) + + def get_error_string(self, e): + """Get the string showing how 'e' would be formatted in test output. + + This is a little bit hacky, since it's designed to give consistent + output regardless of Python version. + + In testtools, TestResult._exc_info_to_unicode is the point of dispatch + between various different implementations of methods that format + exceptions, so that's what we have to call. However, that method cares + about stack traces and formats the exception class. We don't care + about either of these, so we take its output and parse it a little. + """ + error = TracebackContent((e.__class__, e, None), self).as_text() + # We aren't at all interested in the traceback. + if error.startswith('Traceback (most recent call last):\n'): + lines = error.splitlines(True)[1:] + for i, line in enumerate(lines): + if not line.startswith(' '): + break + error = ''.join(lines[i:]) + # We aren't interested in how the exception type is formatted. + exc_class, error = error.split(': ', 1) + return error + + def test_assertThat_verbose_unicode(self): + # When assertThat is given matchees or matchers that contain non-ASCII + # unicode strings, we can still provide a meaningful error. + matchee = _u('\xa7') + matcher = Equals(_u('a')) + expected = ( + 'Match failed. 
Matchee: %s\n' + 'Matcher: %s\n' + 'Difference: %s\n\n' % ( + repr(matchee).replace("\\xa7", matchee), + matcher, + matcher.match(matchee).describe(), + )) + e = self.assertRaises( + self.failureException, self.assert_that_callable, matchee, matcher, + verbose=True) + self.assertEqual(expected, self.get_error_string(e)) + + +class TestAssertThatFunction(AssertThatTests, TestCase): + + def assert_that_callable(self, *args, **kwargs): + return assert_that(*args, **kwargs) + + +class TestAssertThatMethod(AssertThatTests, TestCase): + + def assert_that_callable(self, *args, **kwargs): + return self.assertThat(*args, **kwargs) + + +def test_suite(): + from unittest import TestLoader + return TestLoader().loadTestsFromName(__name__) diff -Nru python-testtools-0.9.35/testtools/tests/test_content.py python-testtools-0.9.39/testtools/tests/test_content.py --- python-testtools-0.9.35/testtools/tests/test_content.py 2013-11-28 08:28:37.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_content.py 2014-08-29 01:34:28.000000000 +0000 @@ -5,12 +5,13 @@ import tempfile import unittest -from testtools import TestCase +from testtools import TestCase, skipUnless from testtools.compat import ( _b, _u, BytesIO, StringIO, + str_is_unicode, ) from testtools.content import ( attach_file, @@ -190,6 +191,11 @@ expected = Content(UTF8_TEXT, lambda: [data.encode('utf8')]) self.assertEqual(expected, text_content(data)) + @skipUnless(str_is_unicode, "Test only applies in python 3.") + def test_text_content_raises_TypeError_when_passed_bytes(self): + data = _b("Some Bytes") + self.assertRaises(TypeError, text_content, data) + def test_json_content(self): data = {'foo': 'bar'} expected = Content(JSON, lambda: [_b('{"foo": "bar"}')]) diff -Nru python-testtools-0.9.35/testtools/tests/test_distutilscmd.py python-testtools-0.9.39/testtools/tests/test_distutilscmd.py --- python-testtools-0.9.35/testtools/tests/test_distutilscmd.py 2013-01-26 17:18:27.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_distutilscmd.py 2014-08-29 01:34:28.000000000 +0000 @@ -61,8 +61,8 @@ dist.cmdclass = {'test': TestCommand} dist.command_options = { 'test': {'test_module': ('command line', 'testtools.runexample')}} - cmd = dist.reinitialize_command('test') with fixtures.MonkeyPatch('sys.stdout', stdout.stream): + cmd = dist.reinitialize_command('test') dist.run_command('test') self.assertThat( stdout.getDetails()['stdout'].as_text(), @@ -83,8 +83,8 @@ 'test': { 'test_suite': ( 'command line', 'testtools.runexample.test_suite')}} - cmd = dist.reinitialize_command('test') with fixtures.MonkeyPatch('sys.stdout', stdout.stream): + cmd = dist.reinitialize_command('test') dist.run_command('test') self.assertThat( stdout.getDetails()['stdout'].as_text(), diff -Nru python-testtools-0.9.35/testtools/tests/test_run.py python-testtools-0.9.39/testtools/tests/test_run.py --- python-testtools-0.9.35/testtools/tests/test_run.py 2013-11-28 08:28:37.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_run.py 2014-08-29 01:34:28.000000000 +0000 @@ -4,18 +4,23 @@ from unittest import TestSuite import sys +from textwrap import dedent from extras import try_import fixtures = try_import('fixtures') testresources = try_import('testresources') import testtools -from testtools import TestCase, run +from testtools import TestCase, run, skipUnless from testtools.compat import ( _b, + _u, StringIO, ) -from testtools.matchers import Contains +from testtools.matchers import ( + Contains, + MatchesRegex, + ) if fixtures: @@ -100,6 
+105,31 @@ testtools.__path__.append(self.package.base) +if fixtures and run.have_discover: + class SampleLoadTestsPackage(fixtures.Fixture): + """Creates a test suite package using load_tests.""" + + def __init__(self): + super(SampleLoadTestsPackage, self).__init__() + self.package = fixtures.PythonPackage( + 'discoverexample', [('__init__.py', _b(""" +from testtools import TestCase, clone_test_with_new_id + +class TestExample(TestCase): + def test_foo(self): + pass + +def load_tests(loader, tests, pattern): + tests.addTest(clone_test_with_new_id(tests._tests[1]._tests[0], "fred")) + return tests +"""))]) + + def setUp(self): + super(SampleLoadTestsPackage, self).setUp() + self.useFixture(self.package) + self.addCleanup(sys.path.remove, self.package.base) + + class TestRun(TestCase): def setUp(self): @@ -147,7 +177,7 @@ run.main, ['prog', 'discover', '-l', broken.package.base, '*.py'], out) self.assertEqual(2, exc.args[0]) self.assertEqual("""Failed to import -runexample.__init__ +runexample """, out.getvalue()) def test_run_orders_tests(self): @@ -235,12 +265,43 @@ self.fail('a') def test_b(self): self.fail('b') - runner = run.TestToolsTestRunner(failfast=True) with fixtures.MonkeyPatch('sys.stdout', stdout.stream): + runner = run.TestToolsTestRunner(failfast=True) runner.run(TestSuite([Failing('test_a'), Failing('test_b')])) self.assertThat( stdout.getDetails()['stdout'].as_text(), Contains('Ran 1 test')) + def test_stdout_honoured(self): + self.useFixture(SampleTestFixture()) + tests = [] + out = StringIO() + exc = self.assertRaises(SystemExit, run.main, + argv=['prog', 'testtools.runexample.test_suite'], + stdout=out) + self.assertEqual((0,), exc.args) + self.assertThat( + out.getvalue(), + MatchesRegex(_u("""Tests running... + +Ran 2 tests in \\d.\\d\\d\\ds +OK +"""))) + + @skipUnless(run.have_discover, "discovery not present") + @skipUnless(fixtures, "fixtures not present") + def test_issue_16662(self): + # unittest's discover implementation didn't handle load_tests on + # packages. That is fixed pending commit, but we want to offer it + # to all testtools users regardless of Python version. + # See http://bugs.python.org/issue16662 + pkg = self.useFixture(SampleLoadTestsPackage()) + out = StringIO() + self.assertEqual(None, run.main( + ['prog', 'discover', '-l', pkg.package.base], out)) + self.assertEqual(dedent("""\ + discoverexample.TestExample.test_foo + fred + """), out.getvalue()) def test_suite(): diff -Nru python-testtools-0.9.35/testtools/tests/test_testcase.py python-testtools-0.9.39/testtools/tests/test_testcase.py --- python-testtools-0.9.35/testtools/tests/test_testcase.py 2014-01-29 09:53:19.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_testcase.py 2014-08-29 01:34:28.000000000 +0000 @@ -400,6 +400,16 @@ '%r not in %r' % ('qux', 'foo bar baz'), self.assertIn, 'qux', 'foo bar baz') + def test_assertIn_failure_with_message(self): + # assertIn(needle, haystack) fails the test when 'needle' is not in + # 'haystack'. + self.assertFails('3 not in [0, 1, 2]: foo bar', self.assertIn, 3, + [0, 1, 2], 'foo bar') + self.assertFails( + '%r not in %r: foo bar' % ('qux', 'foo bar baz'), + self.assertIn, 'qux', 'foo bar baz', 'foo bar') + + def test_assertNotIn_success(self): # assertNotIn(needle, haystack) asserts that 'needle' is not in # 'haystack'. 
@@ -415,6 +425,18 @@ "'foo bar baz' matches Contains('foo')", self.assertNotIn, 'foo', 'foo bar baz') + + def test_assertNotIn_failure_with_message(self): + # assertNotIn(needle, haystack) fails the test when 'needle' is in + # 'haystack'. + self.assertFails('[1, 2, 3] matches Contains(3): foo bar', self.assertNotIn, + 3, [1, 2, 3], 'foo bar') + self.assertFails( + "'foo bar baz' matches Contains('foo'): foo bar", + self.assertNotIn, 'foo', 'foo bar baz', "foo bar") + + + def test_assertIsInstance(self): # assertIsInstance asserts that an object is an instance of a class. @@ -984,6 +1006,28 @@ self.assertDetailsProvided(case, "addUnexpectedSuccess", ["foo", "reason"]) + @skipIf(not hasattr(unittest, 'expectedFailure'), 'Need py27+') + def test_unittest_expectedFailure_decorator_works_with_failure(self): + class ReferenceTest(TestCase): + @unittest.expectedFailure + def test_fails_expectedly(self): + self.assertEquals(1, 0) + + test = ReferenceTest('test_fails_expectedly') + result = test.run() + self.assertEqual(True, result.wasSuccessful()) + + @skipIf(not hasattr(unittest, 'expectedFailure'), 'Need py27+') + def test_unittest_expectedFailure_decorator_works_with_success(self): + class ReferenceTest(TestCase): + @unittest.expectedFailure + def test_passes_unexpectedly(self): + self.assertEquals(1, 1) + + test = ReferenceTest('test_passes_unexpectedly') + result = test.run() + self.assertEqual(False, result.wasSuccessful()) + class TestUniqueFactories(TestCase): """Tests for getUniqueString and getUniqueInteger.""" @@ -1163,6 +1207,18 @@ run_test_with = FullStackRunTest + def test_setUpCalledTwice(self): + class CallsTooMuch(TestCase): + def test_method(self): + self.setUp() + result = unittest.TestResult() + CallsTooMuch('test_method').run(result) + self.assertThat(result.errors, HasLength(1)) + self.assertThat(result.errors[0][1], + DocTestMatches( + "...ValueError...File...testtools/tests/test_testcase.py...", + ELLIPSIS)) + def test_setUpNotCalled(self): class DoesnotcallsetUp(TestCase): def setUp(self): @@ -1174,6 +1230,18 @@ self.assertThat(result.errors, HasLength(1)) self.assertThat(result.errors[0][1], DocTestMatches( + "...ValueError...File...testtools/tests/test_testcase.py...", + ELLIPSIS)) + + def test_tearDownCalledTwice(self): + class CallsTooMuch(TestCase): + def test_method(self): + self.tearDown() + result = unittest.TestResult() + CallsTooMuch('test_method').run(result) + self.assertThat(result.errors, HasLength(1)) + self.assertThat(result.errors[0][1], + DocTestMatches( "...ValueError...File...testtools/tests/test_testcase.py...", ELLIPSIS)) diff -Nru python-testtools-0.9.35/testtools/tests/test_testsuite.py python-testtools-0.9.39/testtools/tests/test_testsuite.py --- python-testtools-0.9.35/testtools/tests/test_testsuite.py 2013-11-28 08:32:58.000000000 +0000 +++ python-testtools-0.9.39/testtools/tests/test_testsuite.py 2014-08-29 01:34:28.000000000 +0000 @@ -189,12 +189,10 @@ self.assertEqual([ ('status', "broken-runner-'0'", 'inprogress', None, True, None, None, False, None, _u('0'), None), ('status', "broken-runner-'0'", None, None, True, 'traceback', None, - False, + True, 'text/x-traceback; charset="utf8"; language="python"', '0', None), - ('status', "broken-runner-'0'", None, None, True, 'traceback', b'', True, - 'text/x-traceback; charset="utf8"; language="python"', '0', None), ('status', "broken-runner-'0'", 'fail', set(), True, None, None, False, None, _u('0'), None) ], events) diff -Nru python-testtools-0.9.35/testtools.egg-info/dependency_links.txt 
python-testtools-0.9.39/testtools.egg-info/dependency_links.txt
--- python-testtools-0.9.35/testtools.egg-info/dependency_links.txt	2014-01-29 10:01:52.000000000 +0000
+++ python-testtools-0.9.39/testtools.egg-info/dependency_links.txt	1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-
diff -Nru python-testtools-0.9.35/testtools.egg-info/not-zip-safe python-testtools-0.9.39/testtools.egg-info/not-zip-safe
--- python-testtools-0.9.35/testtools.egg-info/not-zip-safe	2013-02-19 20:57:17.000000000 +0000
+++ python-testtools-0.9.39/testtools.egg-info/not-zip-safe	1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-
diff -Nru python-testtools-0.9.35/testtools.egg-info/PKG-INFO python-testtools-0.9.39/testtools.egg-info/PKG-INFO
--- python-testtools-0.9.35/testtools.egg-info/PKG-INFO	2014-01-29 10:01:52.000000000 +0000
+++ python-testtools-0.9.39/testtools.egg-info/PKG-INFO	1970-01-01 00:00:00.000000000 +0000
@@ -1,113 +0,0 @@
-Metadata-Version: 1.1
-Name: testtools
-Version: 0.9.35
-Summary: Extensions to the Python standard library unit testing framework
-Home-page: https://github.com/testing-cabal/testtools
-Author: Jonathan M. Lange
-Author-email: jml+testtools@mumak.net
-License: UNKNOWN
-Description: ======================================
-        testtools: tasteful testing for Python
-        ======================================
-
-        testtools is a set of extensions to the Python standard library's unit testing
-        framework. These extensions have been derived from many years of experience
-        with unit testing in Python and come from many different sources. testtools
-        supports Python versions all the way back to Python 2.6.
-
-        What better way to start than with a contrived code snippet?::
-
-            from testtools import TestCase
-            from testtools.content import Content
-            from testtools.content_type import UTF8_TEXT
-            from testtools.matchers import Equals
-
-            from myproject import SillySquareServer
-
-            class TestSillySquareServer(TestCase):
-
-                def setUp(self):
-                    super(TestSillySquareServer, self).setUp()
-                    self.server = self.useFixture(SillySquareServer())
-                    self.addCleanup(self.attach_log_file)
-
-                def attach_log_file(self):
-                    self.addDetail(
-                        'log-file',
-                        Content(UTF8_TEXT,
-                                lambda: open(self.server.logfile, 'r').readlines()))
-
-                def test_server_is_cool(self):
-                    self.assertThat(self.server.temperature, Equals("cool"))
-
-                def test_square(self):
-                    self.assertThat(self.server.silly_square_of(7), Equals(49))
-
-
-        Why use testtools?
-        ==================
-
-        Better assertion methods
-        ------------------------
-
-        The standard assertion methods that come with unittest aren't as helpful as
-        they could be, and there aren't quite enough of them. testtools adds
-        ``assertIn``, ``assertIs``, ``assertIsInstance`` and their negatives.
-
-
-        Matchers: better than assertion methods
-        ---------------------------------------
-
-        Of course, in any serious project you want to be able to have assertions that
-        are specific to that project and the particular problem that it is addressing.
-        Rather than forcing you to define your own assertion methods and maintain your
-        own inheritance hierarchy of ``TestCase`` classes, testtools lets you write
-        your own "matchers", custom predicates that can be plugged into a unit test::
-
-            def test_response_has_bold(self):
-                # The response has bold text.
- response = self.server.getResponse() - self.assertThat(response, HTMLContains(Tag('bold', 'b'))) - - - More debugging info, when you need it - -------------------------------------- - - testtools makes it easy to add arbitrary data to your test result. If you - want to know what's in a log file when a test fails, or what the load was on - the computer when a test started, or what files were open, you can add that - information with ``TestCase.addDetail``, and it will appear in the test - results if that test fails. - - - Extend unittest, but stay compatible and re-usable - -------------------------------------------------- - - testtools goes to great lengths to allow serious test authors and test - *framework* authors to do whatever they like with their tests and their - extensions while staying compatible with the standard library's unittest. - - testtools has completely parametrized how exceptions raised in tests are - mapped to ``TestResult`` methods and how tests are actually executed (ever - wanted ``tearDown`` to be called regardless of whether ``setUp`` succeeds?) - - It also provides many simple but handy utilities, like the ability to clone a - test, a ``MultiTestResult`` object that lets many result objects get the - results from one test suite, adapters to bring legacy ``TestResult`` objects - into our new golden age. - - - Cross-Python compatibility - -------------------------- - - testtools gives you the very latest in unit testing technology in a way that - will work with Python 2.6, 2.7, 3.1 and 3.2. - - If you wish to use testtools with Python 2.4 or 2.5, then please use testtools - 0.9.15. Up to then we supported Python 2.4 and 2.5, but we found the - constraints involved in not using the newer language features onerous as we - added more support for versions post Python 3. 
- -Platform: UNKNOWN -Classifier: License :: OSI Approved :: MIT License -Classifier: Programming Language :: Python :: 3 diff -Nru python-testtools-0.9.35/testtools.egg-info/requires.txt python-testtools-0.9.39/testtools.egg-info/requires.txt --- python-testtools-0.9.35/testtools.egg-info/requires.txt 2014-01-29 10:01:52.000000000 +0000 +++ python-testtools-0.9.39/testtools.egg-info/requires.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1,2 +0,0 @@ -extras -python-mimeparse \ No newline at end of file diff -Nru python-testtools-0.9.35/testtools.egg-info/SOURCES.txt python-testtools-0.9.39/testtools.egg-info/SOURCES.txt --- python-testtools-0.9.35/testtools.egg-info/SOURCES.txt 2014-01-29 10:01:53.000000000 +0000 +++ python-testtools-0.9.39/testtools.egg-info/SOURCES.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1,84 +0,0 @@ -.gitignore -LICENSE -MANIFEST.in -Makefile -NEWS -README.rst -setup.cfg -setup.py -doc/.hacking.rst.swp -doc/Makefile -doc/conf.py -doc/for-framework-folk.rst -doc/for-framework-folk.rst~ -doc/for-test-authors.rst -doc/for-test-authors.rst~ -doc/hacking.rst -doc/index.rst -doc/make.bat -doc/overview.rst -doc/_static/placeholder.txt -doc/_templates/placeholder.txt -testtools/__init__.py -testtools/_compat2x.py -testtools/_compat3x.py -testtools/_spinner.py -testtools/compat.py -testtools/content.py -testtools/content_type.py -testtools/deferredruntest.py -testtools/distutilscmd.py -testtools/helpers.py -testtools/monkey.py -testtools/run.py -testtools/runtest.py -testtools/tags.py -testtools/testcase.py -testtools/testsuite.py -testtools/utils.py -testtools.egg-info/PKG-INFO -testtools.egg-info/SOURCES.txt -testtools.egg-info/dependency_links.txt -testtools.egg-info/not-zip-safe -testtools.egg-info/requires.txt -testtools.egg-info/top_level.txt -testtools/matchers/__init__.py -testtools/matchers/_basic.py -testtools/matchers/_datastructures.py -testtools/matchers/_dict.py -testtools/matchers/_doctest.py -testtools/matchers/_exception.py -testtools/matchers/_filesystem.py -testtools/matchers/_higherorder.py -testtools/matchers/_impl.py -testtools/testresult/__init__.py -testtools/testresult/doubles.py -testtools/testresult/real.py -testtools/tests/__init__.py -testtools/tests/helpers.py -testtools/tests/test_compat.py -testtools/tests/test_content.py -testtools/tests/test_content_type.py -testtools/tests/test_deferredruntest.py -testtools/tests/test_distutilscmd.py -testtools/tests/test_fixturesupport.py -testtools/tests/test_helpers.py -testtools/tests/test_monkey.py -testtools/tests/test_run.py -testtools/tests/test_runtest.py -testtools/tests/test_spinner.py -testtools/tests/test_tags.py -testtools/tests/test_testcase.py -testtools/tests/test_testresult.py -testtools/tests/test_testsuite.py -testtools/tests/test_with_with.py -testtools/tests/matchers/__init__.py -testtools/tests/matchers/helpers.py -testtools/tests/matchers/test_basic.py -testtools/tests/matchers/test_datastructures.py -testtools/tests/matchers/test_dict.py -testtools/tests/matchers/test_doctest.py -testtools/tests/matchers/test_exception.py -testtools/tests/matchers/test_filesystem.py -testtools/tests/matchers/test_higherorder.py -testtools/tests/matchers/test_impl.py \ No newline at end of file diff -Nru python-testtools-0.9.35/testtools.egg-info/top_level.txt python-testtools-0.9.39/testtools.egg-info/top_level.txt --- python-testtools-0.9.35/testtools.egg-info/top_level.txt 2014-01-29 10:01:52.000000000 +0000 +++ python-testtools-0.9.39/testtools.egg-info/top_level.txt 1970-01-01 00:00:00.000000000 
+0000 @@ -1 +0,0 @@ -testtools diff -Nru python-testtools-0.9.35/.travis.yml python-testtools-0.9.39/.travis.yml --- python-testtools-0.9.35/.travis.yml 1970-01-01 00:00:00.000000000 +0000 +++ python-testtools-0.9.39/.travis.yml 2014-08-29 01:34:28.000000000 +0000 @@ -0,0 +1,25 @@ +language: python + +python: + - "2.6" + - "2.7" + - "3.3" + - "pypy" + +# We have to pin Jinja2 < 2.7 for Python 3.2 because 2.7 drops/breaks support: +# http://jinja.pocoo.org/docs/changelog/#version-2-7 +# +# See also: +# http://stackoverflow.com/questions/18252804/syntax-error-in-jinja-2-library +matrix: + include: + - python: "3.2" + env: JINJA_REQ="jinja2<2.7" + +install: + - pip install -q --use-mirrors fixtures extras python-mimeparse $JINJA_REQ sphinx + - python setup.py -q install + +script: + - python -m testtools.run testtools.tests.test_suite + - make clean-sphinx docs
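
The run.py and testcase.py hunks above make two user-visible changes: TestToolsTestRunner now honours an explicit ``stdout`` stream instead of binding ``sys.stdout`` when it runs, and ``assertIn``/``assertNotIn`` accept an optional failure message. Below is a minimal sketch of both behaviours against 0.9.39; the ``SampleTest`` class and the ``StringIO`` capture are illustrative assumptions, not part of this diff (on Python 2, ``testtools.compat.StringIO`` would stand in for ``io.StringIO``)::

    from io import StringIO
    from unittest import TestSuite

    from testtools import TestCase
    from testtools.run import TestToolsTestRunner


    class SampleTest(TestCase):
        # Hypothetical test, used only to exercise the new APIs.

        def test_membership(self):
            # 0.9.39 appends the optional message to the mismatch
            # description when the assertion fails.
            self.assertIn(3, [1, 2, 3], 'expected 3 to be present')
            self.assertNotIn(4, [1, 2, 3], 'expected 4 to be absent')


    out = StringIO()
    # 0.9.39 routes runner output to the stream passed in here; earlier
    # releases wrote to sys.stdout regardless.
    runner = TestToolsTestRunner(stdout=out)
    runner.run(TestSuite([SampleTest('test_membership')]))
    print(out.getvalue())

Passing a distinct stream is what the new test_stdout_honoured test above relies on, and it is also what allows run.main(argv, stdout) to redirect all output, including test listings from ``discover -l``.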