--- python-testtools-0.1~r16.orig/HACKING
+++ python-testtools-0.1~r16/HACKING
@@ -0,0 +1,51 @@
+===================================
+Notes for contributing to testtools
+===================================
+
+Coding style
+------------
+
+In general, follow PEP 8.
+
+For consistency with the standard library's ``unittest`` module, method names
+are generally ``camelCase``.
+
+testtools supports Python 2.4 and later, so avoid any 2.5-only features like
+the ``with`` statement.
+
+
+Copyright assignment
+--------------------
+
+Currently all code in testtools is copyright Jonathan M. Lange. For the sake
+of licensing simplicity, copyright of contributed code needs to be assigned
+to Jonathan M. Lange; otherwise it cannot be accepted. Please include an
+appropriate statement of copyright assignment with contributions.
+
+
+Testing
+-------
+
+Please write tests for every feature. This project ought to be a model
+example of well-tested Python code!
+
+Take particular care to make sure the *intent* of each test is clear.
+
+You can run tests with ``make check``, or by running ``./run-tests`` directly.
+
+
+Source layout
+-------------
+
+The top-level directory contains the ``testtools/`` package directory, and
+miscellaneous files like README and setup.py.
+
+The ``testtools/`` directory is the Python package itself. It is separated
+into submodules for internal clarity, but all public APIs should be “promoted”
+into the top-level package by importing them in ``testtools/__init__.py``.
+Users of testtools should never import a submodule; submodules are just
+implementation details.
+
+Tests belong in ``testtools/tests/``.
+
--- python-testtools-0.1~r16.orig/Makefile
+++ python-testtools-0.1~r16/Makefile
@@ -0,0 +1,16 @@
+# See README for copyright and licensing details.
+
+check:
+	./run-tests
+
+TAGS:
+	ctags -e -R testtools/
+
+tags:
+	ctags -R testtools/
+
+clean:
+	rm -f TAGS tags testtools/*.pyc testtools/tests/*.pyc
+
+
+.PHONY: tags TAGS check clean
--- python-testtools-0.1~r16.orig/MANIFEST.in
+++ python-testtools-0.1~r16/MANIFEST.in
@@ -0,0 +1 @@
+include LICENSE
--- python-testtools-0.1~r16.orig/run-tests
+++ python-testtools-0.1~r16/run-tests
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+# See README for copyright and licensing details.
+
+import unittest
+
+from testtools.tests import test_suite
+
+
+unittest.TextTestRunner().run(test_suite())
--- python-testtools-0.1~r16.orig/MANUAL
+++ python-testtools-0.1~r16/MANUAL
@@ -0,0 +1,121 @@
+======
+Manual
+======
+
+Introduction
+------------
+
+This document provides an overview of the features provided by testtools.
+Refer to the API docs (i.e. docstrings) for full details on a particular
+feature.
+
+Extensions to TestCase
+----------------------
+
+TestCase.addCleanup
+~~~~~~~~~~~~~~~~~~~
+
+addCleanup is a robust way to arrange for a cleanup function to be called
+before tearDown. This is a powerful and simple alternative to putting cleanup
+logic in a try/finally block or tearDown method. e.g.::
+
+    def test_foo(self):
+        foo.lock()
+        self.addCleanup(foo.unlock)
+        ...
+
+
+TestCase.skip
+~~~~~~~~~~~~~
+
+``skip`` is a simple way to have a test stop running and be reported as a
+skipped test, rather than a success/error/failure. This is an alternative to
+convoluted logic during test loading, permitting later and more localized
+decisions about the appropriateness of running a test. Many reasons exist to
+skip a test: a dependency may be missing, the test may be too expensive to
+run while on laptop battery power, or the test may exercise an incomplete
+feature (this is sometimes called a TODO). Using this feature when running
+your test suite with a TestResult object that is missing the ``addSkip``
+method will result in the ``addError`` method being invoked instead.
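+For example (``has_network`` is an illustrative helper assumed to be defined
+by your own test suite, not something testtools provides)::
+
+    def test_download(self):
+        if not has_network():
+            self.skip("Network not available")
+        ...
+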
+
+New assertion methods
+~~~~~~~~~~~~~~~~~~~~~
+
+testtools adds several assertion methods:
+
+ * assertIn
+ * assertNotIn
+ * assertIs
+ * assertIsNot
+ * assertIsInstance
+
+
+Improved assertRaises
+~~~~~~~~~~~~~~~~~~~~~
+
+TestCase.assertRaises returns the caught exception. This is useful for
+asserting more things about the exception than just the type::
+
+    error = self.assertRaises(UnauthorisedError, thing.frobnicate)
+    self.assertEqual('bob', error.username)
+    self.assertEqual('User bob cannot frobnicate', str(error))
+
+
+Creation methods
+~~~~~~~~~~~~~~~~
+
+testtools.TestCase implements creation methods called ``getUniqueString`` and
+``getUniqueInteger``. See pages 419-423 of *xUnit Test Patterns* by Meszaros
+for a detailed discussion of creation methods.
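+
+For example (``make_user`` is a hypothetical factory, shown here only to
+illustrate where unique values come in handy)::
+
+    def test_user_can_log_in(self):
+        name = self.getUniqueString()    # 'test_user_can_log_in-1'
+        uid = self.getUniqueInteger()    # 1
+        user = make_user(name, uid)
+        ...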
+
+
+Test renaming
+~~~~~~~~~~~~~
+
+``testtools.clone_test_with_new_id`` is a function to copy a test case
+instance to one with a new name. This is helpful for implementing test
+parameterization.
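+
+For example, to run the same test instance under two different ids
+(``FooTestCase`` and the '(fast)'/'(slow)' suffixes are illustrative)::
+
+    test = FooTestCase('test_foo')
+    fast = clone_test_with_new_id(test, test.id() + '(fast)')
+    slow = clone_test_with_new_id(test, test.id() + '(slow)')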
+
+
+Extensions to TestResult
+------------------------
+
+TestResult.addSkip
+~~~~~~~~~~~~~~~~~~
+
+This method is called on result objects when a test skips. The
+``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
+dict. These can be reported on in much the same way as successful tests.
+
+TestResult.done
+~~~~~~~~~~~~~~~
+
+This method is called on result objects by testtools tests when the attribute
+exists. It is used as a hook point to allow clean disconnection of network
+resources and similar concerns.
+
+ThreadsafeForwardingResult
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A TestResult which forwards activity to another test result, but synchronises
+on a semaphore to ensure that all the activity for a single test arrives in a
+batch. This allows simple TestResults which do not expect concurrent test
+reporting to be fed the activity from multiple test threads or processes.
+
+Note that when you provide multiple errors for a single test, the target sees
+each error as a distinct complete test.
+
+Extensions to TestSuite
+-----------------------
+
+ConcurrentTestSuite
+~~~~~~~~~~~~~~~~~~~
+
+A TestSuite for parallel testing. This is used in conjunction with a helper
+that runs a single suite in some parallel fashion (for instance: forking,
+handing off to a subprocess or to a compute cloud, or simply using threads).
+ConcurrentTestSuite uses the helper to get a number of separate runnable
+objects, each with a ``run(result)`` method; it runs them all in threads,
+using a ThreadsafeForwardingResult to coalesce their activity.
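+
+For example, to run a suite in two threads, supply a helper that splits the
+suite in half (a sketch based on ``testtools/tests/test_testsuite.py``;
+``original_suite`` and ``result`` are assumed to exist already)::
+
+    import unittest
+    from testtools import ConcurrentTestSuite, iterate_tests
+
+    def split_in_two(suite):
+        tests = list(iterate_tests(suite))
+        midpoint = len(tests) // 2
+        return (unittest.TestSuite(tests[:midpoint]),
+                unittest.TestSuite(tests[midpoint:]))
+
+    concurrent_suite = ConcurrentTestSuite(original_suite, split_in_two)
+    concurrent_suite.run(result)
+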
--- python-testtools-0.1~r16.orig/testtools/tests/test_testsuite.py
+++ python-testtools-0.1~r16/testtools/tests/test_testsuite.py
@@ -0,0 +1,44 @@
+# Copyright (c) 2009 Jonathan M. Lange. See LICENSE for details.
+
+"""Test ConcurrentTestSuite and related things."""
+
+__metaclass__ = type
+
+import unittest
+
+from testtools import (
+    ConcurrentTestSuite,
+    iterate_tests,
+    TestCase,
+    )
+from testtools.tests.helpers import LoggingResult
+
+
+class TestConcurrentTestSuiteRun(TestCase):
+
+    def test_trivial(self):
+        log = []
+        result = LoggingResult(log)
+        class Sample(TestCase):
+            def test_method(self):
+                pass
+        test1 = Sample('test_method')
+        test2 = Sample('test_method')
+        original_suite = unittest.TestSuite([test1, test2])
+        suite = ConcurrentTestSuite(original_suite, self.split_suite)
+        suite.run(result)
+        # The tests run concurrently, so either ordering of the two event
+        # batches is acceptable.
+        try:
+            self.assertEqual(
+                [('startTest', test1), ('addSuccess', test1),
+                 ('stopTest', test1), ('startTest', test2),
+                 ('addSuccess', test2), ('stopTest', test2)], log)
+        except AssertionError:
+            self.assertEqual(
+                [('startTest', test2), ('addSuccess', test2),
+                 ('stopTest', test2), ('startTest', test1),
+                 ('addSuccess', test1), ('stopTest', test1)], log)
+
+    def split_suite(self, suite):
+        tests = list(iterate_tests(suite))
+        return tests[0], tests[1]
+
+
+def test_suite():
+    from unittest import TestLoader
+    return TestLoader().loadTestsFromName(__name__)
--- python-testtools-0.1~r16.orig/testtools/tests/test_testresult.py
+++ python-testtools-0.1~r16/testtools/tests/test_testresult.py
@@ -0,0 +1,197 @@
+# Copyright (c) 2008 Jonathan M. Lange. See LICENSE for details.
+
+"""Test TestResults and related things."""
+
+__metaclass__ = type
+
+import sys
+import threading
+
+from testtools import (
+    MultiTestResult,
+    TestCase,
+    TestResult,
+    ThreadsafeForwardingResult,
+    )
+from testtools.tests.helpers import LoggingResult
+
+
+class TestTestResultContract(TestCase):
+    """Tests for the contract of TestResults."""
+
+    def test_addSkipped(self):
+        # Calling addSkip(test, reason) completes ok.
+        result = self.makeResult()
+        result.addSkip(self, u"Skipped for some reason")
+
+
+class TestTestResultContract(TestTestResultContract):
+    # This deliberately reuses the name of the abstract class above,
+    # shadowing it at module level so that the test loader only collects
+    # concrete contract classes (ones that define makeResult).
+
+    def makeResult(self):
+        return TestResult()
+
+
+class TestMultiTestResultContract(TestTestResultContract):
+
+    def makeResult(self):
+        return MultiTestResult(TestResult(), TestResult())
+
+
+class TestThreadSafeForwardingResultContract(TestTestResultContract):
+
+    def makeResult(self):
+        result_semaphore = threading.Semaphore(1)
+        target = TestResult()
+        return ThreadsafeForwardingResult(target, result_semaphore)
+
+
+class TestTestResult(TestCase):
+    """Tests for `TestResult`."""
+
+    def makeResult(self):
+        """Make an arbitrary result for testing."""
+        return TestResult()
+
+    def test_addSkipped(self):
+        # Calling addSkip on a TestResult records the test that was skipped
+        # in its skip_reasons dict.
+        result = self.makeResult()
+        result.addSkip(self, u"Skipped for some reason")
+        self.assertEqual({u"Skipped for some reason": [self]},
+            result.skip_reasons)
+        result.addSkip(self, u"Skipped for some reason")
+        self.assertEqual({u"Skipped for some reason": [self, self]},
+            result.skip_reasons)
+        result.addSkip(self, u"Skipped for another reason")
+        self.assertEqual({u"Skipped for some reason": [self, self],
+            u"Skipped for another reason": [self]},
+            result.skip_reasons)
+
+    def test_done(self):
+        # `TestResult` has a `done` method that, by default, does nothing.
+        self.makeResult().done()
+
+
+class TestWithFakeExceptions(TestCase):
+
+    def makeExceptionInfo(self, exceptionFactory, *args, **kwargs):
+        try:
+            raise exceptionFactory(*args, **kwargs)
+        except:
+            return sys.exc_info()
+
+
+class TestMultiTestResult(TestWithFakeExceptions):
+    """Tests for `MultiTestResult`."""
+
+    def setUp(self):
+        self.result1 = LoggingResult([])
+        self.result2 = LoggingResult([])
+        self.multiResult = MultiTestResult(self.result1, self.result2)
+
+    def assertResultLogsEqual(self, expectedEvents):
+        """Assert that our test results have received the expected events."""
+        self.assertEqual(expectedEvents, self.result1._events)
+        self.assertEqual(expectedEvents, self.result2._events)
+
+    def test_empty(self):
+        # Initializing a `MultiTestResult` doesn't do anything to its
+        # `TestResult`s.
+        self.assertResultLogsEqual([])
+
+    def test_startTest(self):
+        # Calling `startTest` on a `MultiTestResult` calls `startTest` on all
+        # its `TestResult`s.
+        self.multiResult.startTest(self)
+        self.assertResultLogsEqual([('startTest', self)])
+
+    def test_stopTest(self):
+        # Calling `stopTest` on a `MultiTestResult` calls `stopTest` on all
+        # its `TestResult`s.
+        self.multiResult.stopTest(self)
+        self.assertResultLogsEqual([('stopTest', self)])
+
+    def test_addSkipped(self):
+        # Calling `addSkip` on a `MultiTestResult` calls addSkip on its
+        # results.
+        reason = u"Skipped for some reason"
+        self.multiResult.addSkip(self, reason)
+        self.assertResultLogsEqual([('addSkip', self, reason)])
+
+    def test_addSuccess(self):
+        # Calling `addSuccess` on a `MultiTestResult` calls `addSuccess` on
+        # all its `TestResult`s.
+        self.multiResult.addSuccess(self)
+        self.assertResultLogsEqual([('addSuccess', self)])
+
+    def test_done(self):
+        # Calling `done` on a `MultiTestResult` calls `done` on all its
+        # `TestResult`s.
+        self.multiResult.done()
+        self.assertResultLogsEqual(['done'])
+
+    def test_addFailure(self):
+        # Calling `addFailure` on a `MultiTestResult` calls `addFailure` on
+        # all its `TestResult`s.
+        exc_info = self.makeExceptionInfo(AssertionError, 'failure')
+        self.multiResult.addFailure(self, exc_info)
+        self.assertResultLogsEqual([('addFailure', self, exc_info)])
+
+    def test_addError(self):
+        # Calling `addError` on a `MultiTestResult` calls `addError` on all
+        # its `TestResult`s.
+        exc_info = self.makeExceptionInfo(RuntimeError, 'error')
+        self.multiResult.addError(self, exc_info)
+        self.assertResultLogsEqual([('addError', self, exc_info)])
+
+
+class TestThreadSafeForwardingResult(TestWithFakeExceptions):
+    """Tests for `ThreadsafeForwardingResult`."""
+
+    def setUp(self):
+        self.result_semaphore = threading.Semaphore(1)
+        self.target = LoggingResult([])
+        self.result1 = ThreadsafeForwardingResult(self.target,
+            self.result_semaphore)
+
+    def test_nonforwarding_methods(self):
+        # startTest and stopTest are not forwarded because they need to be
+        # batched.
+        self.result1.startTest(self)
+        self.result1.stopTest(self)
+        self.assertEqual([], self.target._events)
+
+    def test_done(self):
+        self.result1.done()
+        self.result2 = ThreadsafeForwardingResult(self.target,
+            self.result_semaphore)
+        self.result2.done()
+        self.assertEqual(["done", "done"], self.target._events)
+
+    def test_forwarding_methods(self):
+        # error, failure, skip and success are forwarded.
+        exc_info1 = self.makeExceptionInfo(RuntimeError, 'error')
+        self.result1.addError(self, exc_info1)
+        exc_info2 = self.makeExceptionInfo(AssertionError, 'failure')
+        self.result1.addFailure(self, exc_info2)
+        reason = u"Skipped for some reason"
+        self.result1.addSkip(self, reason)
+        self.result1.addSuccess(self)
+        self.assertEqual([('startTest', self),
+            ('addError', self, exc_info1),
+            ('stopTest', self),
+            ('startTest', self),
+            ('addFailure', self, exc_info2),
+            ('stopTest', self),
+            ('startTest', self),
+            ('addSkip', self, reason),
+            ('stopTest', self),
+            ('startTest', self),
+            ('addSuccess', self),
+            ('stopTest', self),
+            ], self.target._events)
+
+
+def test_suite():
+    from unittest import TestLoader
+    return TestLoader().loadTestsFromName(__name__)
--- python-testtools-0.1~r16.orig/testtools/tests/test_testtools.py
+++ python-testtools-0.1~r16/testtools/tests/test_testtools.py
@@ -0,0 +1,472 @@
+# Copyright (c) 2008 Jonathan M. Lange. See LICENSE for details.
+
+"""Tests for extensions to the base test library."""
+
+import unittest
+from testtools import (
+    TestCase,
+    clone_test_with_new_id,
+    skip,
+    skipIf,
+    skipUnless,
+    )
+from testtools.tests.helpers import LoggingResult
+
+
+class TestEquality(TestCase):
+    """Test `TestCase`'s equality implementation."""
+
+    def test_identicalIsEqual(self):
+        # TestCases are equal if they are identical.
+        self.assertEqual(self, self)
+
+    def test_nonIdenticalIsUnequal(self):
+        # TestCases are not equal if they are not identical.
+        self.assertNotEqual(TestCase(), TestCase())
+
+
+class TestAssertions(TestCase):
+    """Test assertions in TestCase."""
+
+    def raiseError(self, exceptionFactory, *args, **kwargs):
+        raise exceptionFactory(*args, **kwargs)
+
+    def test_formatTypes_single(self):
+        # Given a single class, _formatTypes returns the name.
+        class Foo:
+            pass
+        self.assertEqual('Foo', self._formatTypes(Foo))
+
+    def test_formatTypes_multiple(self):
+        # Given multiple types, _formatTypes returns the names joined by
+        # commas.
+        class Foo:
+            pass
+        class Bar:
+            pass
+        self.assertEqual('Foo, Bar', self._formatTypes([Foo, Bar]))
+
+    def test_assertRaises(self):
+        # assertRaises asserts that a callable raises a particular exception.
+        self.assertRaises(RuntimeError, self.raiseError, RuntimeError)
+
+    def test_assertRaises_fails_when_no_error_raised(self):
+        # assertRaises raises self.failureException when it's passed a
+        # callable that raises no error.
+        ret = ('orange', 42)
+        try:
+            self.assertRaises(RuntimeError, lambda: ret)
+        except self.failureException, e:
+            # We expected assertRaises to raise this exception.
+            self.assertEqual(
+                '%s not raised, %r returned instead.'
+                % (self._formatTypes(RuntimeError), ret), str(e))
+        else:
+            self.fail('Expected assertRaises to fail, but it did not.')
+
+    def test_assertRaises_fails_when_different_error_raised(self):
+        # assertRaises re-raises an exception that it didn't expect.
+        self.assertRaises(
+            ZeroDivisionError,
+            self.assertRaises,
+            RuntimeError, self.raiseError, ZeroDivisionError)
+
+    def test_assertRaises_returns_the_raised_exception(self):
+        # assertRaises returns the exception object that was raised. This is
+        # useful for testing that exceptions have the right message.
+
+        # This contraption stores the raised exception, so we can compare it
+        # to the return value of assertRaises.
+        raisedExceptions = []
+        def raiseError():
+            try:
+                raise RuntimeError('Deliberate error')
+            except RuntimeError, e:
+                raisedExceptions.append(e)
+                raise
+
+        exception = self.assertRaises(RuntimeError, raiseError)
+        self.assertEqual(1, len(raisedExceptions))
+        self.assertTrue(
+            exception is raisedExceptions[0],
+            "%r is not %r" % (exception, raisedExceptions[0]))
+
+    def test_assertRaises_with_multiple_exceptions(self):
+        # assertRaises((ExceptionOne, ExceptionTwo), function) asserts that
+        # function raises one of ExceptionOne or ExceptionTwo.
+        expectedExceptions = (RuntimeError, ZeroDivisionError)
+        self.assertRaises(
+            expectedExceptions, self.raiseError, expectedExceptions[0])
+        self.assertRaises(
+            expectedExceptions, self.raiseError, expectedExceptions[1])
+
+    def test_assertRaises_with_multiple_exceptions_failure_mode(self):
+        # If assertRaises is called expecting one of a group of exceptions
+        # and a callable that doesn't raise an exception, then fail with an
+        # appropriate error message.
+        expectedExceptions = (RuntimeError, ZeroDivisionError)
+        failure = self.assertRaises(
+            self.failureException,
+            self.assertRaises, expectedExceptions, lambda: None)
+        self.assertEqual(
+            '%s not raised, None returned instead.'
+            % self._formatTypes(expectedExceptions), str(failure))
+
+    def assertFails(self, message, function, *args, **kwargs):
+        """Assert that function raises a failure with the given message."""
+        failure = self.assertRaises(
+            self.failureException, function, *args, **kwargs)
+        self.assertEqual(message, str(failure))
+
+    def test_assertIn_success(self):
+        # assertIn(needle, haystack) asserts that 'needle' is in 'haystack'.
+        self.assertIn(3, range(10))
+        self.assertIn('foo', 'foo bar baz')
+        self.assertIn('foo', 'foo bar baz'.split())
+
+    def test_assertIn_failure(self):
+        # assertIn(needle, haystack) fails the test when 'needle' is not in
+        # 'haystack'.
+        self.assertFails('3 not in [0, 1, 2]', self.assertIn, 3, [0, 1, 2])
+        self.assertFails(
+            '%r not in %r' % ('qux', 'foo bar baz'),
+            self.assertIn, 'qux', 'foo bar baz')
+
+    def test_assertNotIn_success(self):
+        # assertNotIn(needle, haystack) asserts that 'needle' is not in
+        # 'haystack'.
+        self.assertNotIn(3, [0, 1, 2])
+        self.assertNotIn('qux', 'foo bar baz')
+
+    def test_assertNotIn_failure(self):
+        # assertNotIn(needle, haystack) fails the test when 'needle' is in
+        # 'haystack'.
+        self.assertFails('3 in [1, 2, 3]', self.assertNotIn, 3, [1, 2, 3])
+        self.assertFails(
+            '%r in %r' % ('foo', 'foo bar baz'),
+            self.assertNotIn, 'foo', 'foo bar baz')
+
+    def test_assertIsInstance(self):
+        # assertIsInstance asserts that an object is an instance of a class.
+
+        class Foo:
+            """Simple class for testing assertIsInstance."""
+
+        foo = Foo()
+        self.assertIsInstance(foo, Foo)
+
+    def test_assertIsInstance_multiple_classes(self):
+        # assertIsInstance asserts that an object is an instance of one of a
+        # group of classes.
+
+        class Foo:
+            """Simple class for testing assertIsInstance."""
+
+        class Bar:
+            """Another simple class for testing assertIsInstance."""
+
+        foo = Foo()
+        self.assertIsInstance(foo, (Foo, Bar))
+        self.assertIsInstance(Bar(), (Foo, Bar))
+
+    def test_assertIsInstance_failure(self):
+        # assertIsInstance(obj, klass) fails the test when obj is not an
+        # instance of klass.
+
+        class Foo:
+            """Simple class for testing assertIsInstance."""
+
+        self.assertFails(
+            '42 is not an instance of %s' % self._formatTypes(Foo),
+            self.assertIsInstance, 42, Foo)
+
+    def test_assertIsInstance_failure_multiple_classes(self):
+        # assertIsInstance(obj, (klass1, klass2)) fails the test when obj is
+        # not an instance of klass1 or klass2.
+
+        class Foo:
+            """Simple class for testing assertIsInstance."""
+
+        class Bar:
+            """Another simple class for testing assertIsInstance."""
+
+        self.assertFails(
+            '42 is not an instance of %s' % self._formatTypes([Foo, Bar]),
+            self.assertIsInstance, 42, (Foo, Bar))
+
+    def test_assertIs(self):
+        # assertIs asserts that an object is identical to another object.
+        self.assertIs(None, None)
+        some_list = [42]
+        self.assertIs(some_list, some_list)
+        some_object = object()
+        self.assertIs(some_object, some_object)
+
+    def test_assertIs_fails(self):
+        # assertIs raises assertion errors if one object is not identical to
+        # another.
+        self.assertFails('None is not 42', self.assertIs, None, 42)
+        self.assertFails('[42] is not [42]', self.assertIs, [42], [42])
+
+    def test_assertIsNot(self):
+        # assertIsNot asserts that an object is not identical to another
+        # object.
+        self.assertIsNot(None, 42)
+        self.assertIsNot([42], [42])
+        self.assertIsNot(object(), object())
+
+    def test_assertIsNot_fails(self):
+        # assertIsNot raises assertion errors if one object is identical to
+        # another.
+        self.assertFails('None is None', self.assertIsNot, None, None)
+        some_list = [42]
+        self.assertFails(
+            '[42] is [42]', self.assertIsNot, some_list, some_list)
+
+
+class TestAddCleanup(TestCase):
+    """Tests for TestCase.addCleanup."""
+
+    class LoggingTest(TestCase):
+        """A test that logs calls to setUp, runTest and tearDown."""
+
+        def setUp(self):
+            self._calls = ['setUp']
+
+        def brokenSetUp(self):
+            # A setUp that deliberately fails.
+            self._calls = ['brokenSetUp']
+            raise RuntimeError('Deliberate Failure')
+
+        def runTest(self):
+            self._calls.append('runTest')
+
+        def tearDown(self):
+            self._calls.append('tearDown')
+
+    def setUp(self):
+        self._result_calls = []
+        self.test = TestAddCleanup.LoggingTest('runTest')
+        self.logging_result = LoggingResult(self._result_calls)
+
+    def assertErrorLogEqual(self, messages):
+        self.assertEqual(messages, [call[0] for call in self._result_calls])
+
+    def assertTestLogEqual(self, messages):
+        """Assert that the call log equals `messages`."""
+        self.assertEqual(messages, self.test._calls)
+
+    def logAppender(self, message):
+        """Append `message` to the test's call log.
+
+        Cleanups are callables that are added to a test by addCleanup. To
+        verify that our cleanups run in the right order, we register this
+        method as a cleanup with different string arguments; each call adds
+        its message to the log that acts as a record of the run.
+        """
+        self.test._calls.append(message)
+
+    def test_fixture(self):
+        # A normal run of self.test logs 'setUp', 'runTest' and 'tearDown'.
+        # This test doesn't test addCleanup itself, it just sanity checks the
+        # fixture.
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(['setUp', 'runTest', 'tearDown'])
+
+    def test_cleanup_run_before_tearDown(self):
+        # Cleanup functions added with 'addCleanup' are called before
+        # tearDown runs.
+        self.test.addCleanup(self.logAppender, 'cleanup')
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(['setUp', 'runTest', 'cleanup', 'tearDown'])
+
+    def test_add_cleanup_called_if_setUp_fails(self):
+        # Cleanup functions added with 'addCleanup' are called even if setUp
+        # fails. Note that tearDown has a different behavior: it is only
+        # called when setUp succeeds.
+        self.test.setUp = self.test.brokenSetUp
+        self.test.addCleanup(self.logAppender, 'cleanup')
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(['brokenSetUp', 'cleanup'])
+
+    def test_addCleanup_called_in_reverse_order(self):
+        # Cleanup functions added with 'addCleanup' are called in reverse
+        # order.
+        #
+        # One of the main uses of addCleanup is to dynamically create
+        # resources that need some sort of explicit tearDown. Often one
+        # resource will be created in terms of another, e.g.,
+        #     self.first = self.makeFirst()
+        #     self.second = self.makeSecond(self.first)
+        #
+        # When this happens, we generally want to clean up the second
+        # resource before the first one, since the second depends on the
+        # first.
+        self.test.addCleanup(self.logAppender, 'first')
+        self.test.addCleanup(self.logAppender, 'second')
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(
+            ['setUp', 'runTest', 'second', 'first', 'tearDown'])
+
+    def test_tearDown_runs_after_cleanup_failure(self):
+        # tearDown runs even if a cleanup function fails.
+        self.test.addCleanup(lambda: 1/0)
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(['setUp', 'runTest', 'tearDown'])
+
+    def test_cleanups_continue_running_after_error(self):
+        # All cleanups are always run, even if one or two of them fail.
+        self.test.addCleanup(self.logAppender, 'first')
+        self.test.addCleanup(lambda: 1/0)
+        self.test.addCleanup(self.logAppender, 'second')
+        self.test.run(self.logging_result)
+        self.assertTestLogEqual(
+            ['setUp', 'runTest', 'second', 'first', 'tearDown'])
+
+    def test_errors_in_cleanups_are_captured(self):
+        # If a cleanup raises an error, we want to record it and fail the
+        # test, even though we go on to run other cleanups.
+        self.test.addCleanup(lambda: 1/0)
+        self.test.run(self.logging_result)
+        self.assertErrorLogEqual(['startTest', 'addError', 'stopTest'])
+
+    def test_keyboard_interrupt_not_caught(self):
+        # If a cleanup raises KeyboardInterrupt, it gets reraised.
+        def raiseKeyboardInterrupt():
+            raise KeyboardInterrupt()
+        self.test.addCleanup(raiseKeyboardInterrupt)
+        self.assertRaises(
+            KeyboardInterrupt, self.test.run, self.logging_result)
+
+    def test_multipleErrorsReported(self):
+        # Errors from all failing cleanups are reported.
+        self.test.addCleanup(lambda: 1/0)
+        self.test.addCleanup(lambda: 1/0)
+        self.test.run(self.logging_result)
+        self.assertErrorLogEqual(
+            ['startTest', 'addError', 'addError', 'stopTest'])
+
+
+class TestUniqueFactories(TestCase):
+    """Tests for getUniqueString and getUniqueInteger."""
+
+    def test_getUniqueInteger(self):
+        # getUniqueInteger returns an integer that increments each time you
+        # call it.
+        one = self.getUniqueInteger()
+        self.assertEqual(1, one)
+        two = self.getUniqueInteger()
+        self.assertEqual(2, two)
+
+    def test_getUniqueString(self):
+        # getUniqueString returns the current test name followed by a unique
+        # integer.
+        name_one = self.getUniqueString()
+        self.assertEqual('%s-%d' % (self._testMethodName, 1), name_one)
+        name_two = self.getUniqueString()
+        self.assertEqual('%s-%d' % (self._testMethodName, 2), name_two)
+
+
+class TestCloneTestWithNewId(TestCase):
+    """Tests for clone_test_with_new_id."""
+
+    def test_clone_test_with_new_id(self):
+        class FooTestCase(TestCase):
+            def test_foo(self):
+                pass
+        test = FooTestCase('test_foo')
+        oldName = test.id()
+        newName = self.getUniqueString()
+        newTest = clone_test_with_new_id(test, newName)
+        self.assertEqual(newName, newTest.id())
+        self.assertEqual(oldName, test.id(),
+            "the original test instance should be unchanged.")
+
+
+class TestSkipping(TestCase):
+    """Tests for the test skipping functionality."""
+
+    def test_skip_causes_skipException(self):
+        self.assertRaises(self.skipException, self.skip, "Skip this test")
+
+    def test_skipException_in_setup_calls_result_addSkip(self):
+        class TestThatRaisesInSetUp(TestCase):
+            def setUp(self):
+                self.skip("skipping this test")
+            def test_that_passes(self):
+                pass
+        calls = []
+        result = LoggingResult(calls)
+        test = TestThatRaisesInSetUp("test_that_passes")
+        test.run(result)
+        self.assertEqual([('startTest', test),
+            ('addSkip', test, "skipping this test"), ('stopTest', test)],
+            calls)
+
+    def test_skipException_in_test_method_calls_result_addSkip(self):
+        class SkippingTest(TestCase):
+            def test_that_raises_skipException(self):
+                self.skip("skipping this test")
+        calls = []
+        result = LoggingResult(calls)
+        test = SkippingTest("test_that_raises_skipException")
+        test.run(result)
+        self.assertEqual([('startTest', test),
+            ('addSkip', test, "skipping this test"), ('stopTest', test)],
+            calls)
+
+    def test_skip_in_setup_with_old_result_object_calls_addError(self):
+        class SkippingTest(TestCase):
+            def setUp(self):
+                raise self.skipException("skipping this test")
+            def test_that_passes(self):
+                pass
+        result = unittest.TestResult()
+        test = SkippingTest("test_that_passes")
+        test.run(result)
+        self.assertEqual(1, len(result.errors))
+
+    def test_skip_with_old_result_object_calls_addError(self):
+        class SkippingTest(TestCase):
+            def test_that_raises_skipException(self):
+                raise self.skipException("skipping this test")
+        result = unittest.TestResult()
+        test = SkippingTest("test_that_raises_skipException")
+        test.run(result)
+        self.assertEqual(1, len(result.errors))
+
+    def test_skip_decorator(self):
+        class SkippingTest(TestCase):
+            @skip("skipping this test")
+            def test_that_is_decorated_with_skip(self):
+                self.fail()
+        result = unittest.TestResult()
+        test = SkippingTest("test_that_is_decorated_with_skip")
+        test.run(result)
+        self.assertEqual(1, len(result.errors))
+
+    def test_skipIf_decorator(self):
+        class SkippingTest(TestCase):
+            @skipIf(True, "skipping this test")
+            def test_that_is_decorated_with_skipIf(self):
+                self.fail()
+        result = unittest.TestResult()
+        test = SkippingTest("test_that_is_decorated_with_skipIf")
+        test.run(result)
+        self.assertEqual(1, len(result.errors))
+
+    def test_skipUnless_decorator(self):
+        class SkippingTest(TestCase):
+            @skipUnless(False, "skipping this test")
+            def test_that_is_decorated_with_skipUnless(self):
+                self.fail()
+        result = unittest.TestResult()
+        test = SkippingTest("test_that_is_decorated_with_skipUnless")
+        test.run(result)
+        self.assertEqual(1, len(result.errors))
+
+
+def test_suite():
+    from unittest import TestLoader
+    return TestLoader().loadTestsFromName(__name__)
--- python-testtools-0.1~r16.orig/testtools/tests/helpers.py
+++ python-testtools-0.1~r16/testtools/tests/helpers.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2008 Jonathan M. Lange. See LICENSE for details.
+
+"""Helpers for tests."""
+
+__metaclass__ = type
+__all__ = [
+    'LoggingResult',
+    ]
+
+from testtools import TestResult
+
+
+class LoggingResult(TestResult):
+    """TestResult that logs its events to a list."""
+
+    def __init__(self, log):
+        self._events = log
+        super(LoggingResult, self).__init__()
+
+    def startTest(self, test):
+        self._events.append(('startTest', test))
+        super(LoggingResult, self).startTest(test)
+
+    def stopTest(self, test):
+        self._events.append(('stopTest', test))
+        super(LoggingResult, self).stopTest(test)
+
+    def addFailure(self, test, error):
+        self._events.append(('addFailure', test, error))
+        super(LoggingResult, self).addFailure(test, error)
+
+    def addError(self, test, error):
+        self._events.append(('addError', test, error))
+        super(LoggingResult, self).addError(test, error)
+
+    def addSkip(self, test, reason):
+        self._events.append(('addSkip', test, reason))
+        super(LoggingResult, self).addSkip(test, reason)
+
+    def addSuccess(self, test):
+        self._events.append(('addSuccess', test))
+        super(LoggingResult, self).addSuccess(test)
+
+    def done(self):
+        self._events.append('done')
+        super(LoggingResult, self).done()
--- python-testtools-0.1~r16.orig/testtools/tests/__init__.py
+++ python-testtools-0.1~r16/testtools/tests/__init__.py
@@ -0,0 +1,10 @@
+# See README for copyright and licensing details.
+
+import unittest
+from testtools.tests import test_testtools, test_testresult, test_testsuite
+
+
+def test_suite():
+    return unittest.TestSuite(
+        [test_testtools.test_suite(), test_testresult.test_suite(),
+        test_testsuite.test_suite()])
--- python-testtools-0.1~r16.orig/debian/changelog
+++ python-testtools-0.1~r16/debian/changelog
@@ -0,0 +1,13 @@
+python-testtools (0.1~r16-1) unstable; urgency=low
+
+  * Add to Debian. Closes: #547479
+  * Change Maintainer to me.
+
+ -- Robert Collins  Sun, 20 Sep 2009 16:45:45 +1000
+
+python-testtools (0.1~r16-0ubuntu1) karmic; urgency=low
+
+  * Initial release (LP: #359308)
+  * This is r16 from lp:~statik/testtools/add-manifest
+
+ -- Elliot Murphy (personal)  Wed, 08 Apr 2009 15:07:49 -0400
--- python-testtools-0.1~r16.orig/debian/copyright
+++ python-testtools-0.1~r16/debian/copyright
@@ -0,0 +1,24 @@
+This is python-testtools, packaged for Ubuntu by Elliot Murphy.
+Now packaged for Debian by Robert Collins.
+
+Homepage is https://launchpad.net/testtools
+
+Copyright (c) 2008 Jonathan M. Lange
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
--- python-testtools-0.1~r16.orig/debian/compat
+++ python-testtools-0.1~r16/debian/compat
@@ -0,0 +1 @@
+6
--- python-testtools-0.1~r16.orig/debian/rules
+++ python-testtools-0.1~r16/debian/rules
@@ -0,0 +1,5 @@
+#!/usr/bin/make -f
+
+include /usr/share/cdbs/1/rules/debhelper.mk
+DEB_PYTHON_SYSTEM = pycentral
+include /usr/share/cdbs/1/class/python-distutils.mk
--- python-testtools-0.1~r16.orig/debian/control
+++ python-testtools-0.1~r16/debian/control
@@ -0,0 +1,27 @@
+Source: python-testtools
+Maintainer: Robert Collins
+Section: python
+Priority: optional
+Standards-Version: 3.8.1
+Build-Depends-Indep:
+ python-central (>= 0.6.7)
+Build-Depends:
+ cdbs (>= 0.4.51),
+ debhelper (>= 6.0.4),
+ python (>= 2.5)
+XS-Python-Version: all
+Homepage: https://launchpad.net/testtools
+
+Package: python-testtools
+Architecture: all
+XB-Python-Version: ${python:Versions}
+Depends: ${python:Depends},
+ ${misc:Depends},
+ python-pkg-resources
+Provides: ${python:Provides}
+Description: Extensions to the Python unittest library
+ testtools (formerly pyunit3k) is a set of extensions to the Python standard
+ library's unit testing framework. These extensions have been derived from
+ years of experience with unit testing in Python and come from many different
+ sources. It's hoped that these extensions will make their way into the
+ standard library eventually.
--- python-testtools-0.1~r16.orig/debian/pycompat
+++ python-testtools-0.1~r16/debian/pycompat
@@ -0,0 +1 @@
+2