diff -Nru python-werkzeug-0.9.6+dfsg/AUTHORS python-werkzeug-0.10.4+dfsg1/AUTHORS --- python-werkzeug-0.9.6+dfsg/AUTHORS 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/AUTHORS 2015-03-26 14:11:46.000000000 +0000 @@ -26,6 +26,8 @@ - Ludvig Ericson - Kenneth Reitz - Daniel Neuhäuser +- Markus Unterwaditzer +- Joe Esposito Contributors of code for werkzeug/examples are: diff -Nru python-werkzeug-0.9.6+dfsg/CHANGES python-werkzeug-0.10.4+dfsg1/CHANGES --- python-werkzeug-0.9.6+dfsg/CHANGES 2014-06-07 10:34:16.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/CHANGES 2015-03-26 15:48:04.000000000 +0000 @@ -1,6 +1,142 @@ Werkzeug Changelog ================== +Version 0.10.4 +-------------- + +(bugfix release, released on March 26th 2015) + +- Re-release of 0.10.3 with packaging artifacts manually removed. + +Version 0.10.3 +-------------- + +(bugfix release, released on March 26th 2015) + +- Re-release of 0.10.2 without packaging artifacts. + +Version 0.10.2 +-------------- + +(bugfix release, released on March 26th 2015) + +- Fixed issue where ``empty`` could break third-party libraries that relied on + keyword arguments (pull request ``#675``). +- Improved ``Rule.empty`` by providing a ``get_empty_kwargs`` to allow setting + custom kwargs without having to override the entire ``empty`` method (pull + request ``#675``). +- Fixed the ``extra_files`` parameter for the reloader so it no longer causes + startup to crash when included in server params. +- Using ``MultiDict`` when building URLs is once again not supported. The behavior + introduced several regressions. +- Fix performance problems with stat-reloader (pull request ``#715``). + +Version 0.10.1 +-------------- + +(bugfix release, released on February 3rd 2015) + +- Fixed regression with multiple query values for URLs (pull request ``#667``). +- Fix issues with eventlet's monkeypatching and the builtin server (pull + request ``#663``). + +Version 0.10 +------------ + +Released on January 30th 2015, codename Bagger.
+ +- Changed the error handling of and improved testsuite for the caches in + ``contrib.cache``. +- Fixed a bug on Python 3 when creating adhoc ssl contexts, due to `sys.maxint` + not being defined. +- Fixed a bug on Python 3, that caused + :func:`~werkzeug.serving.make_ssl_devcert` to fail with an exception. +- Added exceptions for 504 and 505. +- Added support for ChromeOS detection. +- Added UUID converter to the routing system. +- Added message that explains how to quit the server. +- Fixed a bug on Python 2, that caused ``len`` for + :class:`werkzeug.datastructures.CombinedMultiDict` to crash. +- Added support for stdlib pbkdf2 hmac if a compatible digest + is found. +- Ported testsuite to use ``py.test``. +- Minor optimizations to various middlewares (pull requests ``#496`` and + ``#571``). +- Use stdlib ``ssl`` module instead of ``OpenSSL`` for the builtin server + (issue ``#434``). This means that OpenSSL contexts are not supported anymore, + but instead ``ssl.SSLContext`` from the stdlib. +- Allow protocol-relative URLs when building external URLs. +- Fixed Atom syndication to print time zone offset for tz-aware datetime + objects (pull request ``#254``). +- Improved reloader to track added files and to recover from broken + sys.modules setups with syntax errors in packages. +- ``cache.RedisCache`` now supports arbitrary ``**kwargs`` for the redis + object. +- ``werkzeug.test.Client`` now uses the original request method when resolving + 307 redirects (pull request ``#556``). +- ``werkzeug.datastructures.MIMEAccept`` now properly deals with mimetype + parameters (pull request ``#205``). +- ``werkzeug.datastructures.Accept`` now handles a quality of ``0`` as + intolerable, as per RFC 2616 (pull request ``#536``). +- ``werkzeug.urls.url_fix`` now properly encodes hostnames with ``idna`` + encoding (issue ``#559``). It also doesn't crash on malformed URLs anymore + (issue ``#582``). 
+- ``werkzeug.routing.MapAdapter.match`` now recognizes the difference between + the path ``/`` and an empty one (issue ``#360``). +- The interactive debugger now tries to decode non-ascii filenames (issue + ``#469``). +- Increased default key size of generated SSL certificates to 1024 bits (issue + ``#611``). +- Added support for specifying a ``Response`` subclass to use when calling + :func:`~werkzeug.utils.redirect`\ . +- ``werkzeug.test.EnvironBuilder`` now doesn't use the request method anymore + to guess the content type, and purely relies on the ``form``, ``files`` and + ``input_stream`` properties (issue ``#620``). +- Added Symbian to the user agent platform list. +- Fixed ``make_conditional`` to respect ``automatically_set_content_length``. +- Unset ``Content-Length`` when writing to ``response.stream`` (issue ``#451``). +- ``wrappers.Request.method`` is now always uppercase, eliminating + inconsistencies of the WSGI environment (issue ``#647``). +- ``routing.Rule.empty`` now works correctly with subclasses of ``Rule`` (pull + request ``#645``). +- Made map updating safe in light of concurrent updates. +- Allow multiple values for the same field for url building (issue ``#658``). + +Version 0.9.7 +------------- + +(bugfix release, release date to be decided) + +- Fix unicode problems in ``werkzeug.debug.tbtools``. +- Fix Python 3-compatibility problems in ``werkzeug.posixemulation``. +- Backport fix of fatal typo for ``ImmutableList`` (issue ``#492``). +- Make creation of the cache dir for ``FileSystemCache`` atomic (issue + ``#468``). +- Use native strings for memcached keys to work with Python 3 client (issue + ``#539``). +- Fix charset detection for ``werkzeug.debug.tbtools.Frame`` objects (issues + ``#547`` and ``#532``). +- Fix ``AttributeError`` masking in ``werkzeug.utils.import_string`` (issue + ``#182``). +- Explicitly shut down server (issue ``#519``).
+- Fix timeouts greater than 2592000 being misinterpreted as UNIX timestamps in + ``werkzeug.contrib.cache.MemcachedCache`` (issue ``#533``). +- Fix bug where ``werkzeug.exceptions.abort`` would raise an arbitrary subclass + of the expected class (issue ``#422``). +- Fix broken ``jsrouting`` (due to removal of ``werkzeug.templates``) +- ``werkzeug.urls.url_fix`` now doesn't crash on malformed URLs anymore, but + returns them unmodified. This is a cheap workaround for ``#582``, the proper + fix is included in version 0.10. +- The repr of ``werkzeug.wrappers.Request`` doesn't crash on non-ASCII-values + anymore (pull request ``#466``). +- Fix bug in ``cache.RedisCache`` when combined with ``redis.StrictRedis`` + object (pull request ``#583``). +- The ``qop`` parameter for ``WWW-Authenticate`` headers is now always quoted, + as required by RFC 2617 (issue ``#633``). +- Fix bug in ``werkzeug.contrib.cache.SimpleCache`` with Python 3 where add/set + may throw an exception when pruning old entries from the cache (pull request + ``#651``). + Version 0.9.6 ------------- @@ -43,7 +179,7 @@ (bugfix release, released on July 25th 2013) -- Restored beahvior of the ``data`` descriptor of the request class to pre 0.9 +- Restored behavior of the ``data`` descriptor of the request class to pre 0.9 behavior. This now also means that ``.data`` and ``.get_data()`` have different behavior. New code should use ``.get_data()`` always. @@ -72,7 +208,6 @@ - Fixed an `AttributeError` that sometimes occurred when accessing the :attr:`werkzeug.wrappers.BaseResponse.is_streamed` attribute. 
- Version 0.9.1 ------------- diff -Nru python-werkzeug-0.9.6+dfsg/debian/changelog python-werkzeug-0.10.4+dfsg1/debian/changelog --- python-werkzeug-0.9.6+dfsg/debian/changelog 2014-08-04 15:52:40.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/changelog 2015-09-07 08:54:33.000000000 +0000 @@ -1,3 +1,29 @@ +python-werkzeug (0.10.4+dfsg1-1ubuntu1) wily; urgency=medium + + * Merge with Debian; remaining changes: + - Run the tests with LC_ALL=C.UTF-8. + - Drop the build dependencies on the memcache servers, not in main. + The tests run these tests conditionally, and for python2 only. + + -- Matthias Klose Mon, 07 Sep 2015 10:52:02 +0200 + +python-werkzeug (0.10.4+dfsg1-1) unstable; urgency=medium + + * New upstream release (closes: 792802) + - drop no longer needed 13218de4.patch and 0bad0c25.patch + - add missing docs/_themes to the upstream tarball + * Remove ubuntu.ttf, jquery.js and Werkzeug.egg-info via Files-Excluded in + debian/copyright (egg-info one closes: 671254) + * debian/watch: use pypi.debian.net redirector + * debian/rules: remove no longer needed get-orig-source and use + debian/python{3,}-werkzeug.links files instead of override_dh_python + * Replace Noah with DPMT in Maintainer (thanks Noah for all your work) + * Standards-Version bumped to 3.9.6 (no changes needed) + * Add python{3,}-requests and python{3,}-pytest to Build-Depends + (upstream switched test suite to py.tests) + + -- Piotr Ożarowski Mon, 20 Jul 2015 21:09:06 +0200 + python-werkzeug (0.9.6+dfsg-1ubuntu1) utopic; urgency=medium * Merge with Debian; remaining changes: diff -Nru python-werkzeug-0.9.6+dfsg/debian/control python-werkzeug-0.10.4+dfsg1/debian/control --- python-werkzeug-0.9.6+dfsg/debian/control 2014-08-04 15:52:40.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/control 2015-09-07 08:54:33.000000000 +0000 @@ -1,14 +1,15 @@ Source: python-werkzeug Section: python Priority: optional -Maintainer: Noah Slater -Uploaders: Piotr Ożarowski , - Python Modules 
Packaging Team -Standards-Version: 3.9.5 +Maintainer: Python Modules Packaging Team +Uploaders: Piotr Ożarowski +Standards-Version: 3.9.6 Build-Depends: debhelper (>= 9), dh-python, python-sphinx (>= 1.0.7+dfsg-1~), python-all, python3-all, python-setuptools (>= 0.6b3), python3-setuptools (>= 0.6b3), # for tests: + python-pytest, python3-pytest, + python-requests, python3-requests, python-simplejson, python3-simplejson, python-nose, python3-nose, python-lxml, python3-lxml diff -Nru python-werkzeug-0.9.6+dfsg/debian/copyright python-werkzeug-0.10.4+dfsg1/debian/copyright --- python-werkzeug-0.9.6+dfsg/debian/copyright 2011-02-15 21:22:02.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/copyright 2015-07-20 19:37:02.000000000 +0000 @@ -1,11 +1,12 @@ -Format-Specification: http://wiki.debian.org/Proposals/CopyrightFormat?action=recall&rev=180 +Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: Werkzeug -Upstream-Maintainer: Armin Ronacher +Upstream-Contact: Armin Ronacher Upstream-Source: http://werkzeug.pocoo.org/download +Files-Excluded: ubuntu.ttf FONT_LICENSE werkzeug/debug/shared/jquery.js Werkzeug.egg-info Files: * -Copyright: Copyright 2009-2010, the Werkzeug Team -License: BSD +Copyright: Copyright 2009-2015, the Werkzeug Team +License: BSD-3-clause Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: @@ -40,33 +41,3 @@ Copying and distribution of this package, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. - -Files: werkzeug/debug/shared/jquery.js -Copyright: Copyright 2008, John Resig -License: MIT | GPL-2 - -License: MIT - Copyright (c) 2008 John Resig, http://jquery.com/ - . 
- Permission is hereby granted, free of charge, to any person obtaining - a copy of this software and associated documentation files (the - "Software"), to deal in the Software without restriction, including - without limitation the rights to use, copy, modify, merge, publish, - distribute, sublicense, and/or sell copies of the Software, and to - permit persons to whom the Software is furnished to do so, subject to - the following conditions: - . - The above copyright notice and this permission notice shall be - included in all copies or substantial portions of the Software. - . - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE - LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION - OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -License: GPL-2 - On Debian systems the full text of the GNU General Public License (Version 2) - can be found in the `/usr/share/common-licenses/GPL-2' file. diff -Nru python-werkzeug-0.9.6+dfsg/debian/patches/0bad0c25.patch python-werkzeug-0.10.4+dfsg1/debian/patches/0bad0c25.patch --- python-werkzeug-0.9.6+dfsg/debian/patches/0bad0c25.patch 2014-06-10 20:10:29.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/patches/0bad0c25.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,24 +0,0 @@ -# taken from commit 0bad0c25f7d04da98328907d1c94a9b72fe57c57 -# Author: Daniel Neuhäuser -# Date: Tue Sep 3 20:42:42 2013 +0200 -# -# Use maxsize in generate_adhoc_ssl_pair -# -# maxint is not available on Python 3.x because integers have no maximum -# size on 3.x. -# -# The test I added fails because of a bug in pyopenssl. 
- -diff --git a/werkzeug/serving.py b/werkzeug/serving.py -index 2fb8660..43c4c71 100644 ---- a/werkzeug/serving.py -+++ b/werkzeug/serving.py -@@ -277,7 +277,7 @@ def generate_adhoc_ssl_pair(cn=None): - cn = '*' - - cert = crypto.X509() -- cert.set_serial_number(int(random() * sys.maxint)) -+ cert.set_serial_number(int(random() * sys.maxsize)) - cert.gmtime_adj_notBefore(0) - cert.gmtime_adj_notAfter(60 * 60 * 24 * 365) - diff -Nru python-werkzeug-0.9.6+dfsg/debian/patches/13218de4.patch python-werkzeug-0.10.4+dfsg1/debian/patches/13218de4.patch --- python-werkzeug-0.9.6+dfsg/debian/patches/13218de4.patch 2014-06-10 19:36:10.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/patches/13218de4.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,47 +0,0 @@ -commit 13218deae25b6ee77d8dbab8af98ff1d38712ae4 -Author: Daniel Neuhäuser -Date: Fri Mar 21 21:45:41 2014 +0100 - - Fix #502 - - The code so far made far too many assumptions about the format used by - pickle, the changes introduced by PEP 3154 finally caused this to become - a real issue. This changes the code to make no assumptions about the - representation whatsoever, making it work with protocol 4, all future - protocols. 
- -diff --git a/werkzeug/testsuite/datastructures.py b/werkzeug/testsuite/datastructures.py -index 28441ea..fdbda45 100644 ---- a/werkzeug/testsuite/datastructures.py -+++ b/werkzeug/testsuite/datastructures.py -@@ -64,16 +64,26 @@ class MutableMultiDictBaseTestCase(WerkzeugTestCase): - def test_pickle(self): - cls = self.storage_class - -- for protocol in range(pickle.HIGHEST_PROTOCOL + 1): -- d = cls() -+ def create_instance(module=None): -+ if module is None: -+ d = cls() -+ else: -+ old = cls.__module__ -+ cls.__module__ = module -+ d = cls() -+ cls.__module__ = old - d.setlist(b'foo', [1, 2, 3, 4]) - d.setlist(b'bar', b'foo bar baz'.split()) -+ return d -+ -+ for protocol in range(pickle.HIGHEST_PROTOCOL + 1): -+ d = create_instance() - s = pickle.dumps(d, protocol) - ud = pickle.loads(s) - self.assert_equal(type(ud), type(d)) - self.assert_equal(ud, d) -- self.assert_equal(pickle.loads( -- s.replace(b'werkzeug.datastructures', b'werkzeug')), d) -+ alternative = pickle.dumps(create_instance('werkzeug'), protocol) -+ self.assert_equal(pickle.loads(alternative), d) - ud[b'newkey'] = b'bla' - self.assert_not_equal(ud, d) - diff -Nru python-werkzeug-0.9.6+dfsg/debian/patches/series python-werkzeug-0.10.4+dfsg1/debian/patches/series --- python-werkzeug-0.9.6+dfsg/debian/patches/series 2014-06-10 20:10:38.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/patches/series 2015-07-20 19:10:25.000000000 +0000 @@ -1,3 +1 @@ drop_ubuntu_font.patch -13218de4.patch -0bad0c25.patch diff -Nru python-werkzeug-0.9.6+dfsg/debian/python3-werkzeug.links python-werkzeug-0.10.4+dfsg1/debian/python3-werkzeug.links --- python-werkzeug-0.9.6+dfsg/debian/python3-werkzeug.links 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/python3-werkzeug.links 2015-07-20 19:27:40.000000000 +0000 @@ -0,0 +1 @@ +/usr/share/javascript/jquery/jquery.js /usr/lib/python3/dist-packages/werkzeug/debug/shared/jquery.js diff -Nru 
python-werkzeug-0.9.6+dfsg/debian/python-werkzeug.links python-werkzeug-0.10.4+dfsg1/debian/python-werkzeug.links --- python-werkzeug-0.9.6+dfsg/debian/python-werkzeug.links 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/python-werkzeug.links 2015-07-20 19:30:26.000000000 +0000 @@ -0,0 +1 @@ +/usr/share/javascript/jquery/jquery.js /usr/lib/python2.7/dist-packages/werkzeug/debug/shared/jquery.js diff -Nru python-werkzeug-0.9.6+dfsg/debian/rules python-werkzeug-0.10.4+dfsg1/debian/rules --- python-werkzeug-0.9.6+dfsg/debian/rules 2014-08-04 15:52:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/rules 2015-09-07 08:54:33.000000000 +0000 @@ -6,8 +6,10 @@ # permitted in any medium without royalty provided the copyright notice and this # notice are preserved. -export PYBUILD_DESTDIR_python2=debian/python-werkzeug/ -export PYBUILD_DESTDIR_python3=debian/python3-werkzeug/ +export PYBUILD_NAME=werkzeug +export PYBUILD_TEST_PYTEST=1 +# disable some tests as pytest.xprocess is not yet packaged +export PYBUILD_TEST_ARGS={dir}/tests -k-tests/contrib/test_cache.py %: dh $@ --with python2,python3,sphinxdoc --buildsystem pybuild @@ -29,25 +31,6 @@ dh_auto_install make documentation -override_dh_python2: - dh_python2 - dh_link -p python-werkzeug /usr/share/javascript/jquery/jquery.js \ - /usr/share/pyshared/werkzeug/debug/shared/jquery.js - -override_dh_python3: - dh_python3 - dh_link -p python3-werkzeug /usr/share/javascript/jquery/jquery.js \ - /usr/lib/python3/dist-packages/werkzeug/debug/shared/jquery.js - override_dh_fixperms: find debian/ -name '*\.png' -exec chmod -x '{}' \; dh_fixperms - -get-orig-source: - trap 'rm -rf $(CURDIR)/.werkzeug-tarball' EXIT;\ - VER=$(shell dpkg-parsechangelog | sed -rne 's,^Version: ([^-+]+).*,\1,p') &&\ - mkdir -p .werkzeug-tarball &&\ - uscan --force-download --rename --destdir=.werkzeug-tarball --upstream-version=$$VER &&\ - cd .werkzeug-tarball &&\ - tar xf python-werkzeug*tar.gz --exclude ubuntu.ttf 
--exclude FONT_LICENSE &&\ - tar Jcf ../python-werkzeug_$$VER+dfsg.orig.tar.xz Werkzeug-$$VER diff -Nru python-werkzeug-0.9.6+dfsg/debian/watch python-werkzeug-0.10.4+dfsg1/debian/watch --- python-werkzeug-0.9.6+dfsg/debian/watch 2011-10-16 13:41:24.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/debian/watch 2015-07-20 19:01:21.000000000 +0000 @@ -1,3 +1,3 @@ version=3 -opts=dversionmangle=s/\+dfsg// \ -http://pypi.python.org/packages/source/W/Werkzeug/Werkzeug-(.*)\.tar\.gz +opts=uversionmangle=s/(rc|a|b|c)/~$1/,dversionmangle=s/\+dfsg\d*$//,repacksuffix=+dfsg1 \ +http://pypi.debian.net/Werkzeug/Werkzeug-(.+)\.(?:zip|tgz|tbz|txz|(?:tar\.(?:gz|bz2|xz))) diff -Nru python-werkzeug-0.9.6+dfsg/docs/contents.rst.inc python-werkzeug-0.10.4+dfsg1/docs/contents.rst.inc --- python-werkzeug-0.9.6+dfsg/docs/contents.rst.inc 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/contents.rst.inc 2015-03-26 14:11:46.000000000 +0000 @@ -39,6 +39,7 @@ http datastructures utils + urls local middlewares exceptions diff -Nru python-werkzeug-0.9.6+dfsg/docs/installation.rst python-werkzeug-0.10.4+dfsg1/docs/installation.rst --- python-werkzeug-0.9.6+dfsg/docs/installation.rst 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/installation.rst 2015-03-26 14:11:46.000000000 +0000 @@ -127,7 +127,7 @@ A few seconds later you are good to go. -.. _download page: http://werkzeug.pocoo.org/download +.. _download page: https://pypi.python.org/pypi/Werkzeug .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools .. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall .. 
_Git: http://git-scm.org/ diff -Nru python-werkzeug-0.9.6+dfsg/docs/quickstart.rst python-werkzeug-0.10.4+dfsg1/docs/quickstart.rst --- python-werkzeug-0.9.6+dfsg/docs/quickstart.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/quickstart.rst 2015-03-26 14:11:46.000000000 +0000 @@ -255,7 +255,7 @@ '400 BAD REQUEST' As you can see attributes work in both directions. So you can set both -:attr:`~BaseResponse.status` and `~BaseResponse.status_code` and the +:attr:`~BaseResponse.status` and :attr:`~BaseResponse.status_code` and the change will be reflected to the other. Also common headers are exposed as attributes or with methods to set / diff -Nru python-werkzeug-0.9.6+dfsg/docs/routing.rst python-werkzeug-0.10.4+dfsg1/docs/routing.rst --- python-werkzeug-0.9.6+dfsg/docs/routing.rst 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/routing.rst 2015-03-26 14:11:46.000000000 +0000 @@ -103,6 +103,8 @@ .. autoclass:: FloatConverter +.. autoclass:: UUIDConverter + Maps, Rules and Adapters ======================== diff -Nru python-werkzeug-0.9.6+dfsg/docs/serving.rst python-werkzeug-0.10.4+dfsg1/docs/serving.rst --- python-werkzeug-0.9.6+dfsg/docs/serving.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/serving.rst 2015-03-26 14:11:46.000000000 +0000 @@ -26,6 +26,8 @@ .. autofunction:: run_simple +.. autofunction:: is_running_from_reloader + .. autofunction:: make_ssl_devcert .. admonition:: Information @@ -35,6 +37,40 @@ under high load. For deployment setups have a look at the :ref:`deployment` pages. +.. _reloader: + +Reloader +-------- + +.. versionchanged:: 0.10 + +The Werkzeug reloader constantly monitors modules and paths of your web +application, and restarts the server if any of the observed files change. + +Since version 0.10, there are two backends the reloader supports: ``stat`` and +``watchdog``. 
+ +- The default ``stat`` backend simply checks the ``mtime`` of all files in a + regular interval. This is sufficient for most cases, however, it is known to + drain a laptop's battery. + +- The ``watchdog`` backend uses filesystem events, and is much faster than + ``stat``. It requires the `watchdog `_ + module to be installed. + +If ``watchdog`` is installed and available it will automatically be used +instead of the builtin ``stat`` reloader. + +To switch between the backends you can use the `reloader_type` parameter of the +:func:`run_simple` function. ``'stat'`` sets it to the default stat based +polling and ``'watchdog'`` forces it to the watchdog backend. + +.. note:: + + Some edge cases, like modules that failed to import correctly, are not + handled by the stat reloader for performance reasons. The watchdog reloader + monitors such files too. + Virtual Hosts ------------- @@ -115,10 +151,9 @@ .. versionadded:: 0.6 -The builtin server supports SSL for testing purposes. If an SSL context -is provided it will be used. That means a server can either run in HTTP -or HTTPS mode, but not both. This feature requires the Python OpenSSL -library. +The builtin server supports SSL for testing purposes. If an SSL context is +provided it will be used. That means a server can either run in HTTP or HTTPS +mode, but not both. Quickstart `````````` @@ -135,26 +170,29 @@ ('/path/to/the/key.crt', '/path/to/the/key.key') 2. Now this tuple can be passed as ``ssl_context`` to the - :func:`run_simple` method: + :func:`run_simple` method:: - run_simple('localhost', 4000, application, - ssl_context=('/path/to/the/key.crt', - '/path/to/the/key.key')) + run_simple('localhost', 4000, application, + ssl_context=('/path/to/the/key.crt', + '/path/to/the/key.key')) You will have to acknowledge the certificate in your browser once then. Loading Contexts by Hand ```````````````````````` -Instead of using a tuple as ``ssl_context`` you can also create the -context programmatically. 
This way you have better control over it:: - - from OpenSSL import SSL - ctx = SSL.Context(SSL.SSLv23_METHOD) - ctx.use_privatekey_file('ssl.key') - ctx.use_certificate_file('ssl.cert') +In Python 2.7.9 and 3+ you also have the option to use an ``ssl.SSLContext`` +object instead of a simple tuple. This way you have better control over the SSL +behavior of Werkzeug's builtin server:: + + import ssl + ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + ctx.load_cert_chain('ssl.cert', 'ssl.key') run_simple('localhost', 4000, application, ssl_context=ctx) + +.. versionchanged:: 0.10 +   ``OpenSSL`` contexts are not supported anymore. + Generating Certificates ``````````````````````` @@ -178,3 +216,5 @@ certificate each time the server is reloaded. Adhoc certificates are discouraged because modern browsers do a bad job at supporting them for security reasons. + +This feature requires the pyOpenSSL library to be installed. diff -Nru python-werkzeug-0.9.6+dfsg/docs/terms.rst python-werkzeug-0.10.4+dfsg1/docs/terms.rst --- python-werkzeug-0.9.6+dfsg/docs/terms.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/terms.rst 2015-03-26 14:11:46.000000000 +0000 @@ -13,7 +13,7 @@ WSGI a specification for Python web applications Werkzeug follows. It was specified in the :pep:`333` and is widely supported. Unlike previous solutions -it gurantees that web applications, servers and utilties can work together. +it guarantees that web applications, servers and utilities can work together. Response Object --------------- diff -Nru python-werkzeug-0.9.6+dfsg/docs/test.rst python-werkzeug-0.10.4+dfsg1/docs/test.rst --- python-werkzeug-0.9.6+dfsg/docs/test.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/test.rst 2015-03-26 14:11:46.000000000 +0000 @@ -146,17 +146,26 @@ .. autoclass:: Client - .. automethod:: open(options) + .. automethod:: open - .. automethod:: get(options) + Shortcut methods are available for many HTTP methods: - ..
automethod:: post(options) + .. automethod:: get - .. automethod:: put(options) + .. automethod:: patch - .. automethod:: delete(options) + .. automethod:: post + + .. automethod:: head + + .. automethod:: put + + .. automethod:: delete + + .. automethod:: options + + .. automethod:: trace - .. automethod:: head(options) .. autofunction:: create_environ([options]) Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/docs/_themes/werkzeug_theme_support.pyc and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/docs/_themes/werkzeug_theme_support.pyc differ diff -Nru python-werkzeug-0.9.6+dfsg/docs/transition.rst python-werkzeug-0.10.4+dfsg1/docs/transition.rst --- python-werkzeug-0.9.6+dfsg/docs/transition.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/transition.rst 2015-03-26 14:11:46.000000000 +0000 @@ -3,7 +3,7 @@ Werkzeug originally had a magical import system hook that enabled everything to be imported from one module and still loading the actual -implementations lazily as necessary. Unfortunately this turned out be +implementations lazily as necessary. Unfortunately this turned out to be slow and also unreliable on alternative Python implementations and Google's App Engine. diff -Nru python-werkzeug-0.9.6+dfsg/docs/tutorial.rst python-werkzeug-0.10.4+dfsg1/docs/tutorial.rst --- python-werkzeug-0.9.6+dfsg/docs/tutorial.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/tutorial.rst 2015-03-26 14:11:46.000000000 +0000 @@ -20,7 +20,7 @@ If you are on Ubuntu or Debian, you can use apt-get:: - sudo apt-get install redis + sudo apt-get install redis-server Redis was developed for UNIX systems and was never really designed to work on Windows. 
For development purposes, the unofficial ports however diff -Nru python-werkzeug-0.9.6+dfsg/docs/urls.rst python-werkzeug-0.10.4+dfsg1/docs/urls.rst --- python-werkzeug-0.9.6+dfsg/docs/urls.rst 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/urls.rst 2015-03-26 14:11:46.000000000 +0000 @@ -0,0 +1,6 @@ +=========== +URL Helpers +=========== + +.. automodule:: werkzeug.urls + :members: diff -Nru python-werkzeug-0.9.6+dfsg/docs/utils.rst python-werkzeug-0.10.4+dfsg1/docs/utils.rst --- python-werkzeug-0.9.6+dfsg/docs/utils.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/docs/utils.rst 2015-03-26 14:11:46.000000000 +0000 @@ -49,32 +49,7 @@ URL Helpers =========== -.. module:: werkzeug.urls - -.. autoclass:: Href - -.. autofunction:: url_decode - -.. autofunction:: url_decode_stream - -.. autofunction:: url_encode - -.. autofunction:: url_encode_stream - -.. autofunction:: url_quote - -.. autofunction:: url_quote_plus - -.. autofunction:: url_unquote - -.. autofunction:: url_unquote_plus - -.. autofunction:: url_fix - -.. autofunction:: uri_to_iri - -.. autofunction:: iri_to_uri - +Please refer to :doc:`urls`. UserAgent Parsing ================= diff -Nru python-werkzeug-0.9.6+dfsg/LICENSE python-werkzeug-0.10.4+dfsg1/LICENSE --- python-werkzeug-0.9.6+dfsg/LICENSE 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/LICENSE 2015-03-18 23:08:06.000000000 +0000 @@ -1,4 +1,4 @@ -Copyright (c) 2011 by the Werkzeug Team, see AUTHORS for more details. +Copyright (c) 2014 by the Werkzeug Team, see AUTHORS for more details. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are diff -Nru python-werkzeug-0.9.6+dfsg/Makefile python-werkzeug-0.10.4+dfsg1/Makefile --- python-werkzeug-0.9.6+dfsg/Makefile 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Makefile 2015-03-26 14:11:46.000000000 +0000 @@ -15,13 +15,13 @@ python scripts/make-release.py test: - python run-tests.py + py.test --tb=native tox-test: tox coverage: - @(nosetests $(TEST_OPTIONS) --with-coverage --cover-package=werkzeug --cover-html --cover-html-dir=coverage_out $(TESTS)) + @(coverage run --source=werkzeug --module py.test $(TEST_OPTIONS) $(TESTS)) doctest: @(cd docs; sphinx-build -b doctest . _build/doctest) @@ -30,6 +30,6 @@ $(MAKE) -C docs html dirhtml latex $(MAKE) -C docs/_build/latex all-pdf cd docs/_build/; mv html werkzeug-docs; zip -r werkzeug-docs.zip werkzeug-docs; mv werkzeug-docs html - rsync -a docs/_build/dirhtml/ pocoo.org:/var/www/werkzeug.pocoo.org/docs/ - rsync -a docs/_build/latex/Werkzeug.pdf pocoo.org:/var/www/werkzeug.pocoo.org/docs/werkzeug-docs.pdf - rsync -a docs/_build/werkzeug-docs.zip pocoo.org:/var/www/werkzeug.pocoo.org/docs/werkzeug-docs.zip + rsync -a docs/_build/dirhtml/ flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/ + rsync -a docs/_build/latex/Werkzeug.pdf flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/ + rsync -a docs/_build/werkzeug-docs.zip flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/werkzeug-docs.zip diff -Nru python-werkzeug-0.9.6+dfsg/MANIFEST.in python-werkzeug-0.10.4+dfsg1/MANIFEST.in --- python-werkzeug-0.9.6+dfsg/MANIFEST.in 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/MANIFEST.in 2015-03-26 14:38:27.000000000 +0000 @@ -1,16 +1,10 @@ include Makefile CHANGES LICENSE AUTHORS recursive-include werkzeug/debug/shared * -recursive-include werkzeug/debug/templates * -recursive-include werkzeug/testsuite/res * 
-recursive-include werkzeug/testsuite/multipart * recursive-include tests * recursive-include docs * -recursive-include examples * -recursive-exclude docs *.pyc -recursive-exclude docs *.pyo -recursive-exclude tests *.pyc -recursive-exclude tests *.pyo -recursive-exclude examples *.pyc -recursive-exclude examples *.pyo recursive-include artwork * +recursive-include examples * + prune docs/_build +prune docs/_themes +global-exclude *.py[cdo] __pycache__ *.so *.pyd diff -Nru python-werkzeug-0.9.6+dfsg/PKG-INFO python-werkzeug-0.10.4+dfsg1/PKG-INFO --- python-werkzeug-0.9.6+dfsg/PKG-INFO 2014-06-07 10:34:34.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/PKG-INFO 2015-03-26 15:49:54.000000000 +0000 @@ -1,6 +1,6 @@ -Metadata-Version: 1.0 +Metadata-Version: 1.1 Name: Werkzeug -Version: 0.9.6 +Version: 0.10.4 Summary: The Swiss Army knife of Python web development Home-page: http://werkzeug.pocoo.org/ Author: Armin Ronacher diff -Nru python-werkzeug-0.9.6+dfsg/README.rst python-werkzeug-0.10.4+dfsg1/README.rst --- python-werkzeug-0.9.6+dfsg/README.rst 2013-06-13 10:25:35.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/README.rst 2015-03-26 15:33:58.000000000 +0000 @@ -17,3 +17,15 @@ Details and example applications are available on the `Werkzeug website `_. + + +Branches +-------- + ++---------------------+--------------------------------------------------------------------------------+ +| ``master`` | .. image:: https://travis-ci.org/mitsuhiko/werkzeug.svg?branch=master | +| | :target: https://travis-ci.org/mitsuhiko/werkzeug | ++---------------------+--------------------------------------------------------------------------------+ +| ``0.9-maintenance`` | .. 
image:: https://travis-ci.org/mitsuhiko/werkzeug.svg?branch=0.9-maintenance | +| | :target: https://travis-ci.org/mitsuhiko/werkzeug | ++---------------------+--------------------------------------------------------------------------------+ diff -Nru python-werkzeug-0.9.6+dfsg/setup.cfg python-werkzeug-0.10.4+dfsg1/setup.cfg --- python-werkzeug-0.9.6+dfsg/setup.cfg 2014-06-07 10:34:34.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/setup.cfg 2015-03-26 15:49:54.000000000 +0000 @@ -6,3 +6,9 @@ [aliases] release = egg_info -RDb '' +[pytest] +norecursedirs = .* _build *.eggs + +[bdist_wheel] +universal = 1 + diff -Nru python-werkzeug-0.9.6+dfsg/setup.py python-werkzeug-0.10.4+dfsg1/setup.py --- python-werkzeug-0.9.6+dfsg/setup.py 2014-06-07 10:34:33.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/setup.py 2015-03-26 15:49:54.000000000 +0000 @@ -54,14 +54,27 @@ .. _github: http://github.com/mitsuhiko/werkzeug """ try: - from setuptools import setup + from setuptools import setup, Command except ImportError: - from distutils.core import setup + from distutils.core import setup, Command + + +class TestCommand(Command): + user_options = [] + + def initialize_options(self): + pass + def finalize_options(self): + pass + + def run(self): + import pytest + pytest.cmdline.main(args=[]) setup( name='Werkzeug', - version='0.9.6', + version='0.10.4', url='http://werkzeug.pocoo.org/', license='BSD', author='Armin Ronacher', @@ -79,10 +92,9 @@ 'Topic :: Internet :: WWW/HTTP :: Dynamic Content', 'Topic :: Software Development :: Libraries :: Python Modules' ], - packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib', - 'werkzeug.testsuite', 'werkzeug.testsuite.contrib'], + packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'], + cmdclass=dict(test=TestCommand), include_package_data=True, - test_suite='werkzeug.testsuite.suite', zip_safe=False, platforms='any' ) diff -Nru python-werkzeug-0.9.6+dfsg/tests/conftest.py python-werkzeug-0.10.4+dfsg1/tests/conftest.py --- 
python-werkzeug-0.9.6+dfsg/tests/conftest.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/conftest.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,165 @@ +# -*- coding: utf-8 -*- +""" + tests.conftest + ~~~~~~~~~~~~~~ + + :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. + :license: BSD, see LICENSE for more details. +""" + +from __future__ import with_statement + +import os +import signal +import sys +import textwrap +import time + +import requests +import pytest + +from werkzeug import serving +from werkzeug.utils import cached_property +from werkzeug._compat import to_bytes + + +try: + __import__('pytest_xprocess') +except ImportError: + @pytest.fixture + def subprocess(): + pytest.skip('pytest-xprocess not installed.') +else: + @pytest.fixture + def subprocess(xprocess): + return xprocess + + +def _patch_reloader_loop(): + def f(x): + print('reloader loop finished') + return time.sleep(x) + + import werkzeug._reloader + werkzeug._reloader.ReloaderLoop._sleep = staticmethod(f) + + +def _get_pid_middleware(f): + def inner(environ, start_response): + if environ['PATH_INFO'] == '/_getpid': + start_response('200 OK', [('Content-Type', 'text/plain')]) + return [to_bytes(str(os.getpid()))] + return f(environ, start_response) + return inner + + +def _dev_server(): + _patch_reloader_loop() + sys.path.insert(0, sys.argv[1]) + import testsuite_app + app = _get_pid_middleware(testsuite_app.app) + serving.run_simple(hostname='localhost', application=app, + **testsuite_app.kwargs) + +if __name__ == '__main__': + _dev_server() + + +class _ServerInfo(object): + xprocess = None + addr = None + url = None + port = None + last_pid = None + + def __init__(self, xprocess, addr, url, port): + self.xprocess = xprocess + self.addr = addr + self.url = url + self.port = port + + @cached_property + def logfile(self): + return self.xprocess.getinfo('dev_server').logpath.open() + + def request_pid(self): + for i in range(20): + 
time.sleep(0.1 * i) + try: + self.last_pid = int(requests.get(self.url + '/_getpid', + verify=False).text) + return self.last_pid + except Exception as e: # urllib also raises socketerrors + print(self.url) + print(e) + return False + + def wait_for_reloader(self): + old_pid = self.last_pid + for i in range(20): + time.sleep(0.1 * i) + new_pid = self.request_pid() + if not new_pid: + raise RuntimeError('Server is down.') + if self.request_pid() != old_pid: + return + raise RuntimeError('Server did not reload.') + + def wait_for_reloader_loop(self): + for i in range(20): + time.sleep(0.1 * i) + line = self.logfile.readline() + if 'reloader loop finished' in line: + return + + +@pytest.fixture +def dev_server(tmpdir, subprocess, request, monkeypatch): + '''Run werkzeug.serving.run_simple in its own process. + + :param application: String for the module that will be created. The module + must have a global ``app`` object, a ``kwargs`` dict is also available + whose values will be passed to ``run_simple``. + ''' + def run_dev_server(application): + app_pkg = tmpdir.mkdir('testsuite_app') + appfile = app_pkg.join('__init__.py') + appfile.write('\n\n'.join(( + 'kwargs = dict(port=5001)', + textwrap.dedent(application) + ))) + + monkeypatch.delitem(sys.modules, 'testsuite_app', raising=False) + monkeypatch.syspath_prepend(str(tmpdir)) + import testsuite_app + port = testsuite_app.kwargs['port'] + + if testsuite_app.kwargs.get('ssl_context', None): + url_base = 'https://localhost:{0}'.format(port) + else: + url_base = 'http://localhost:{0}'.format(port) + + info = _ServerInfo( + subprocess, + 'localhost:{0}'.format(port), + url_base, + port + ) + + def preparefunc(cwd): + args = [sys.executable, __file__, str(tmpdir)] + return info.request_pid, args + + subprocess.ensure('dev_server', preparefunc, restart=True) + + def teardown(): + # Killing the process group that runs the server, not just the + # parent process attached. 
xprocess is confused about Werkzeug's + # reloader and won't help here. + pid = info.last_pid + os.killpg(os.getpgid(pid), signal.SIGTERM) + request.addfinalizer(teardown) + + return info + + return run_dev_server diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/__init__.py python-werkzeug-0.10.4+dfsg1/tests/contrib/__init__.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/__init__.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/__init__.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,10 @@ +# -*- coding: utf-8 -*- +""" + tests.contrib + ~~~~~~~~~~~~~~~~~~~~~~~~~~ + + Tests the contrib modules. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_atom.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_atom.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_atom.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_atom.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +""" + tests.atom + ~~~~~~~~~~ + + Tests the cache system + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import datetime + +from werkzeug.contrib.atom import format_iso8601 + + +def test_format_iso8601(): + # naive datetime should be treated as utc + dt = datetime.datetime(2014, 8, 31, 2, 5, 6) + assert format_iso8601(dt) == '2014-08-31T02:05:06Z' + + # tz-aware datetime + dt = datetime.datetime(2014, 8, 31, 11, 5, 6, tzinfo=KST()) + assert format_iso8601(dt) == '2014-08-31T11:05:06+09:00' + + +class KST(datetime.tzinfo): + """KST implementation for test_format_iso8601().""" + + def utcoffset(self, dt): + return datetime.timedelta(hours=9) + + def tzname(self, dt): + return 'KST' + + def dst(self, dt): + return datetime.timedelta(0) diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_cache.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_cache.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_cache.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_cache.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,219 @@ +# -*- coding: utf-8 -*- +""" + tests.cache + ~~~~~~~~~~~ + + Tests the cache system + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import pytest +import os + +from werkzeug.contrib import cache + +try: + import redis +except ImportError: + redis = None + +try: + import pylibmc as memcache +except ImportError: + try: + from google.appengine.api import memcache + except ImportError: + try: + import memcache + except ImportError: + memcache = None + + +class CacheTests(object): + _can_use_fast_sleep = True + + @pytest.fixture + def make_cache(self): + '''Return a cache class or factory.''' + raise NotImplementedError() + + @pytest.fixture + def fast_sleep(self, monkeypatch): + if self._can_use_fast_sleep: + def sleep(delta): + orig_time = cache.time + monkeypatch.setattr(cache, 'time', lambda: orig_time() + delta) + + return sleep + else: + import time + return time.sleep + + @pytest.fixture + def c(self, make_cache): + '''Return a cache instance.''' + return make_cache() + + def test_generic_get_dict(self, c): + assert c.set('a', 'a') + assert c.set('b', 'b') + d = c.get_dict('a', 'b') + assert 'a' in d + assert 'a' == d['a'] + assert 'b' in d + assert 'b' == d['b'] + + def test_generic_set_get(self, c): + for i in range(3): + assert c.set(str(i), i * i) + for i in range(3): + result = c.get(str(i)) + assert result == i * i, result + + def test_generic_get_set(self, c): + assert c.set('foo', ['bar']) + assert c.get('foo') == ['bar'] + + def test_generic_get_many(self, c): + assert c.set('foo', ['bar']) + assert c.set('spam', 'eggs') + assert list(c.get_many('foo', 'spam')) == [['bar'], 'eggs'] + + def test_generic_set_many(self, c): + assert c.set_many({'foo': 'bar', 'spam': ['eggs']}) + assert c.get('foo') == 'bar' + assert c.get('spam') == ['eggs'] + + def test_generic_expire(self, c, fast_sleep): + assert c.set('foo', 'bar', 1) + fast_sleep(2) + assert c.get('foo') is None + + def test_generic_add(self, c): + # sanity check that add() works like set() + assert c.add('foo', 'bar') + assert c.get('foo') == 'bar' + assert not c.add('foo', 'qux') + assert c.get('foo') == 'bar' + + def 
test_generic_delete(self, c): + assert c.add('foo', 'bar') + assert c.get('foo') == 'bar' + assert c.delete('foo') + assert c.get('foo') is None + + def test_generic_delete_many(self, c): + assert c.add('foo', 'bar') + assert c.add('spam', 'eggs') + assert c.delete_many('foo', 'spam') + assert c.get('foo') is None + assert c.get('spam') is None + + def test_generic_inc_dec(self, c): + assert c.set('foo', 1) + assert c.inc('foo') == c.get('foo') == 2 + assert c.dec('foo') == c.get('foo') == 1 + assert c.delete('foo') + + def test_generic_true_false(self, c): + assert c.set('foo', True) + assert c.get('foo') == True + assert c.set('bar', False) + assert c.get('bar') == False + + def test_purge(self): + c = cache.SimpleCache(threshold=2) + c.set('a', 'a') + c.set('b', 'b') + c.set('c', 'c') + c.set('d', 'd') + # Cache purges old items *before* it sets new ones. + assert len(c._cache) == 3 + + +class TestSimpleCache(CacheTests): + @pytest.fixture + def make_cache(self): + return cache.SimpleCache + + +class TestFileSystemCache(CacheTests): + @pytest.fixture + def make_cache(self, tmpdir): + return lambda **kw: cache.FileSystemCache(cache_dir=str(tmpdir), **kw) + + def test_filesystemcache_prune(self, make_cache): + THRESHOLD = 13 + c = make_cache(threshold=THRESHOLD) + for i in range(2 * THRESHOLD): + assert c.set(str(i), i) + cache_files = os.listdir(c._path) + assert len(cache_files) <= THRESHOLD + + def test_filesystemcache_clear(self, c): + assert c.set('foo', 'bar') + cache_files = os.listdir(c._path) + assert len(cache_files) == 1 + assert c.clear() + cache_files = os.listdir(c._path) + assert len(cache_files) == 0 + + +# Don't use pytest marker +# https://bitbucket.org/hpk42/pytest/issue/568 +if redis is not None: + class TestRedisCache(CacheTests): + _can_use_fast_sleep = False + + @pytest.fixture(params=[ + ([], dict()), + ([redis.Redis()], dict()), + ([redis.StrictRedis()], dict()) + ]) + def make_cache(self, xprocess, request): + def preparefunc(cwd): + 
return 'server is now ready', ['redis-server'] + + xprocess.ensure('redis_server', preparefunc) + args, kwargs = request.param + c = cache.RedisCache(*args, key_prefix='werkzeug-test-case:', + **kwargs) + request.addfinalizer(c.clear) + return lambda: c + + def test_compat(self, c): + assert c._client.set(c.key_prefix + 'foo', 'Awesome') + assert c.get('foo') == b'Awesome' + assert c._client.set(c.key_prefix + 'foo', '42') + assert c.get('foo') == 42 + + +# Don't use pytest marker +# https://bitbucket.org/hpk42/pytest/issue/568 +if memcache is not None: + class TestMemcachedCache(CacheTests): + _can_use_fast_sleep = False + + @pytest.fixture + def make_cache(self, xprocess, request): + def preparefunc(cwd): + return '', ['memcached'] + + xprocess.ensure('memcached', preparefunc) + c = cache.MemcachedCache(key_prefix='werkzeug-test-case:') + request.addfinalizer(c.clear) + return lambda: c + + def test_compat(self, c): + assert c._client.set(c.key_prefix + 'foo', 'bar') + assert c.get('foo') == 'bar' + + def test_huge_timeouts(self, c): + # Timeouts greater than epoch are interpreted as POSIX timestamps + # (i.e. not relative to now, but relative to epoch) + import random + epoch = 2592000 + timeout = epoch + random.random() * 100 + c.set('foo', 'bar', timeout) + assert c.get('foo') == 'bar' diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_fixers.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_fixers.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_fixers.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_fixers.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,183 @@ +# -*- coding: utf-8 -*- +""" + tests.fixers + ~~~~~~~~~~~~ + + Server / Browser fixers. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +from tests import strict_eq +from werkzeug.datastructures import ResponseCacheControl +from werkzeug.http import parse_cache_control_header + +from werkzeug.test import create_environ, Client +from werkzeug.wrappers import Request, Response +from werkzeug.contrib import fixers +from werkzeug.utils import redirect + + +@Request.application +def path_check_app(request): + return Response('PATH_INFO: %s\nSCRIPT_NAME: %s' % ( + request.environ.get('PATH_INFO', ''), + request.environ.get('SCRIPT_NAME', '') + )) + + +class TestServerFixer(object): + + def test_cgi_root_fix(self): + app = fixers.CGIRootFix(path_check_app) + response = Response.from_app(app, dict(create_environ(), + SCRIPT_NAME='/foo', + PATH_INFO='/bar', + SERVER_SOFTWARE='lighttpd/1.4.27' + )) + assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: ' + + def test_cgi_root_fix_custom_app_root(self): + app = fixers.CGIRootFix(path_check_app, app_root='/baz/poop/') + response = Response.from_app(app, dict(create_environ(), + SCRIPT_NAME='/foo', + PATH_INFO='/bar' + )) + assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: baz/poop' + + def test_path_info_from_request_uri_fix(self): + app = fixers.PathInfoFromRequestUriFix(path_check_app) + for key in 'REQUEST_URI', 'REQUEST_URL', 'UNENCODED_URL': + env = dict(create_environ(), SCRIPT_NAME='/test', PATH_INFO='/?????') + env[key] = '/test/foo%25bar?drop=this' + response = Response.from_app(app, env) + assert response.get_data() == b'PATH_INFO: /foo%bar\nSCRIPT_NAME: /test' + + def test_proxy_fix(self): + @Request.application + def app(request): + return Response('%s|%s' % ( + request.remote_addr, + # do not use request.host as this fixes too :) + request.environ['HTTP_HOST'] + )) + app = fixers.ProxyFix(app, num_proxies=2) + environ = dict(create_environ(), + HTTP_X_FORWARDED_PROTO="https", + HTTP_X_FORWARDED_HOST='example.com', + HTTP_X_FORWARDED_FOR='1.2.3.4, 5.6.7.8', + REMOTE_ADDR='127.0.0.1', + HTTP_HOST='fake' + ) + + 
response = Response.from_app(app, environ) + + assert response.get_data() == b'1.2.3.4|example.com' + + # And we must check that if it is a redirection it is + # correctly done: + + redirect_app = redirect('/foo/bar.hml') + response = Response.from_app(redirect_app, environ) + + wsgi_headers = response.get_wsgi_headers(environ) + assert wsgi_headers['Location'] == 'https://example.com/foo/bar.hml' + + def test_proxy_fix_weird_enum(self): + @fixers.ProxyFix + @Request.application + def app(request): + return Response(request.remote_addr) + environ = dict(create_environ(), + HTTP_X_FORWARDED_FOR=',', + REMOTE_ADDR='127.0.0.1', + ) + + response = Response.from_app(app, environ) + strict_eq(response.get_data(), b'127.0.0.1') + + def test_header_rewriter_fix(self): + @Request.application + def application(request): + return Response("", headers=[ + ('X-Foo', 'bar') + ]) + application = fixers.HeaderRewriterFix(application, ('X-Foo',), (('X-Bar', '42'),)) + response = Response.from_app(application, create_environ()) + assert response.headers['Content-Type'] == 'text/plain; charset=utf-8' + assert 'X-Foo' not in response.headers + assert response.headers['X-Bar'] == '42' + + +class TestBrowserFixer(object): + + def test_ie_fixes(self): + @fixers.InternetExplorerFix + @Request.application + def application(request): + response = Response('binary data here', mimetype='application/vnd.ms-excel') + response.headers['Vary'] = 'Cookie' + response.headers['Content-Disposition'] = 'attachment; filename=foo.xls' + return response + + c = Client(application, Response) + response = c.get('/', headers=[ + ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') + ]) + + # IE gets no vary + assert response.get_data() == b'binary data here' + assert 'vary' not in response.headers + assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' + assert response.headers['content-type'] == 'application/vnd.ms-excel' + + # other browsers do + c = 
Client(application, Response) + response = c.get('/') + assert response.get_data() == b'binary data here' + assert 'vary' in response.headers + + cc = ResponseCacheControl() + cc.no_cache = True + + @fixers.InternetExplorerFix + @Request.application + def application(request): + response = Response('binary data here', mimetype='application/vnd.ms-excel') + response.headers['Pragma'] = ', '.join(pragma) + response.headers['Cache-Control'] = cc.to_header() + response.headers['Content-Disposition'] = 'attachment; filename=foo.xls' + return response + + + # IE has no pragma or cache control + pragma = ('no-cache',) + c = Client(application, Response) + response = c.get('/', headers=[ + ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') + ]) + assert response.get_data() == b'binary data here' + assert 'pragma' not in response.headers + assert 'cache-control' not in response.headers + assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' + + # IE has simplified pragma + pragma = ('no-cache', 'x-foo') + cc.proxy_revalidate = True + response = c.get('/', headers=[ + ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') + ]) + assert response.get_data() == b'binary data here' + assert response.headers['pragma'] == 'x-foo' + assert response.headers['cache-control'] == 'proxy-revalidate' + assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' + + # regular browsers get everything + response = c.get('/') + assert response.get_data() == b'binary data here' + assert response.headers['pragma'] == 'no-cache, x-foo' + cc = parse_cache_control_header(response.headers['cache-control'], + cls=ResponseCacheControl) + assert cc.no_cache + assert cc.proxy_revalidate + assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_iterio.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_iterio.py --- 
python-werkzeug-0.9.6+dfsg/tests/contrib/test_iterio.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_iterio.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,183 @@ +# -*- coding: utf-8 -*- +""" + tests.iterio + ~~~~~~~~~~~~ + + Tests the iterio object. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest +from functools import partial + +from tests import strict_eq +from werkzeug.contrib.iterio import IterIO, greenlet + + +class TestIterO(object): + + def test_basic_native(self): + io = IterIO(["Hello", "World", "1", "2", "3"]) + assert io.tell() == 0 + assert io.read(2) == "He" + assert io.tell() == 2 + assert io.read(3) == "llo" + assert io.tell() == 5 + io.seek(0) + assert io.read(5) == "Hello" + assert io.tell() == 5 + assert io._buf == "Hello" + assert io.read() == "World123" + assert io.tell() == 13 + io.close() + assert io.closed + + io = IterIO(["Hello\n", "World!"]) + assert io.readline() == 'Hello\n' + assert io._buf == 'Hello\n' + assert io.read() == 'World!' + assert io._buf == 'Hello\nWorld!' 
+ assert io.tell() == 12 + io.seek(0) + assert io.readlines() == ['Hello\n', 'World!'] + + io = IterIO(['Line one\nLine ', 'two\nLine three']) + assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] + io = IterIO(iter('Line one\nLine two\nLine three')) + assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] + io = IterIO(['Line one\nL', 'ine', ' two', '\nLine three']) + assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] + + io = IterIO(["foo\n", "bar"]) + io.seek(-4, 2) + assert io.read(4) == '\nbar' + + pytest.raises(IOError, io.seek, 2, 100) + io.close() + pytest.raises(ValueError, io.read) + + def test_basic_bytes(self): + io = IterIO([b"Hello", b"World", b"1", b"2", b"3"]) + assert io.tell() == 0 + assert io.read(2) == b"He" + assert io.tell() == 2 + assert io.read(3) == b"llo" + assert io.tell() == 5 + io.seek(0) + assert io.read(5) == b"Hello" + assert io.tell() == 5 + assert io._buf == b"Hello" + assert io.read() == b"World123" + assert io.tell() == 13 + io.close() + assert io.closed + + io = IterIO([b"Hello\n", b"World!"]) + assert io.readline() == b'Hello\n' + assert io._buf == b'Hello\n' + assert io.read() == b'World!' + assert io._buf == b'Hello\nWorld!' 
+ assert io.tell() == 12 + io.seek(0) + assert io.readlines() == [b'Hello\n', b'World!'] + + io = IterIO([b"foo\n", b"bar"]) + io.seek(-4, 2) + assert io.read(4) == b'\nbar' + + pytest.raises(IOError, io.seek, 2, 100) + io.close() + pytest.raises(ValueError, io.read) + + def test_basic_unicode(self): + io = IterIO([u"Hello", u"World", u"1", u"2", u"3"]) + assert io.tell() == 0 + assert io.read(2) == u"He" + assert io.tell() == 2 + assert io.read(3) == u"llo" + assert io.tell() == 5 + io.seek(0) + assert io.read(5) == u"Hello" + assert io.tell() == 5 + assert io._buf == u"Hello" + assert io.read() == u"World123" + assert io.tell() == 13 + io.close() + assert io.closed + + io = IterIO([u"Hello\n", u"World!"]) + assert io.readline() == u'Hello\n' + assert io._buf == u'Hello\n' + assert io.read() == u'World!' + assert io._buf == u'Hello\nWorld!' + assert io.tell() == 12 + io.seek(0) + assert io.readlines() == [u'Hello\n', u'World!'] + + io = IterIO([u"foo\n", u"bar"]) + io.seek(-4, 2) + assert io.read(4) == u'\nbar' + + pytest.raises(IOError, io.seek, 2, 100) + io.close() + pytest.raises(ValueError, io.read) + + def test_sentinel_cases(self): + io = IterIO([]) + strict_eq(io.read(), '') + io = IterIO([], b'') + strict_eq(io.read(), b'') + io = IterIO([], u'') + strict_eq(io.read(), u'') + + io = IterIO([]) + strict_eq(io.read(), '') + io = IterIO([b'']) + strict_eq(io.read(), b'') + io = IterIO([u'']) + strict_eq(io.read(), u'') + + io = IterIO([]) + strict_eq(io.readline(), '') + io = IterIO([], b'') + strict_eq(io.readline(), b'') + io = IterIO([], u'') + strict_eq(io.readline(), u'') + + io = IterIO([]) + strict_eq(io.readline(), '') + io = IterIO([b'']) + strict_eq(io.readline(), b'') + io = IterIO([u'']) + strict_eq(io.readline(), u'') + + +@pytest.mark.skipif(greenlet is None, reason='Greenlet is not installed.') +class TestIterI(object): + def test_basic(self): + def producer(out): + out.write('1\n') + out.write('2\n') + out.flush() + out.write('3\n') + iterable 
= IterIO(producer) + assert next(iterable) == '1\n2\n' + assert next(iterable) == '3\n' + pytest.raises(StopIteration, next, iterable) + + def test_sentinel_cases(self): + def producer_dummy_flush(out): + out.flush() + iterable = IterIO(producer_dummy_flush) + strict_eq(next(iterable), '') + + def producer_empty(out): + pass + iterable = IterIO(producer_empty) + pytest.raises(StopIteration, next, iterable) + + iterable = IterIO(producer_dummy_flush, b'') + strict_eq(next(iterable), b'') + iterable = IterIO(producer_dummy_flush, u'') + strict_eq(next(iterable), u'') diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_securecookie.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_securecookie.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_securecookie.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_securecookie.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +""" + tests.securecookie + ~~~~~~~~~~~~~~~~~~ + + Tests the secure cookie. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" + +from werkzeug.utils import parse_cookie +from werkzeug.wrappers import Request, Response +from werkzeug.contrib.securecookie import SecureCookie + + +def test_basic_support(): + c = SecureCookie(secret_key=b'foo') + assert c.new + assert not c.modified + assert not c.should_save + c['x'] = 42 + assert c.modified + assert c.should_save + s = c.serialize() + + c2 = SecureCookie.unserialize(s, b'foo') + assert c is not c2 + assert not c2.new + assert not c2.modified + assert not c2.should_save + assert c2 == c + + c3 = SecureCookie.unserialize(s, b'wrong foo') + assert not c3.modified + assert not c3.new + assert c3 == {} + +def test_wrapper_support(): + req = Request.from_values() + resp = Response() + c = SecureCookie.load_cookie(req, secret_key=b'foo') + assert c.new + c['foo'] = 42 + assert c.secret_key == b'foo' + c.save_cookie(resp) + + req = Request.from_values(headers={ + 'Cookie': 'session="%s"' % parse_cookie(resp.headers['set-cookie'])['session'] + }) + c2 = SecureCookie.load_cookie(req, secret_key=b'foo') + assert not c2.new + assert c2 == c diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_sessions.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_sessions.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_sessions.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_sessions.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,75 @@ +# -*- coding: utf-8 -*- +""" + tests.sessions + ~~~~~~~~~~~~~~ + + Added tests for the sessions. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import pytest +import os +import shutil +from tempfile import gettempdir + +from werkzeug.contrib.sessions import FilesystemSessionStore + + +def test_default_tempdir(): + store = FilesystemSessionStore() + assert store.path == gettempdir() + +def test_basic_fs_sessions(tmpdir): + store = FilesystemSessionStore(str(tmpdir)) + x = store.new() + assert x.new + assert not x.modified + x['foo'] = [1, 2, 3] + assert x.modified + store.save(x) + + x2 = store.get(x.sid) + assert not x2.new + assert not x2.modified + assert x2 is not x + assert x2 == x + x2['test'] = 3 + assert x2.modified + assert not x2.new + store.save(x2) + + x = store.get(x.sid) + store.delete(x) + x2 = store.get(x.sid) + # the session is not new when it was used previously. + assert not x2.new + +def test_non_urandom(tmpdir): + urandom = os.urandom + del os.urandom + try: + store = FilesystemSessionStore(str(tmpdir)) + store.new() + finally: + os.urandom = urandom + + +def test_renewing_fs_session(tmpdir): + store = FilesystemSessionStore(str(tmpdir), renew_missing=True) + x = store.new() + store.save(x) + store.delete(x) + x2 = store.get(x.sid) + assert x2.new + +def test_fs_session_lising(tmpdir): + store = FilesystemSessionStore(str(tmpdir), renew_missing=True) + sessions = set() + for x in range(10): + sess = store.new() + store.save(sess) + sessions.add(sess.sid) + + listed_sessions = set(store.list()) + assert sessions == listed_sessions diff -Nru python-werkzeug-0.9.6+dfsg/tests/contrib/test_wrappers.py python-werkzeug-0.10.4+dfsg1/tests/contrib/test_wrappers.py --- python-werkzeug-0.9.6+dfsg/tests/contrib/test_wrappers.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/contrib/test_wrappers.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,85 @@ +# -*- coding: utf-8 -*- +""" + tests.contrib.wrappers + ~~~~~~~~~~~~~~~~~~~~~~ + + Added tests for the sessions. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" + +from __future__ import with_statement + +from werkzeug.contrib import wrappers +from werkzeug import routing +from werkzeug.wrappers import Request, Response + + +def test_reverse_slash_behavior(): + class MyRequest(wrappers.ReverseSlashBehaviorRequestMixin, Request): + pass + req = MyRequest.from_values('/foo/bar', 'http://example.com/test') + assert req.url == 'http://example.com/test/foo/bar' + assert req.path == 'foo/bar' + assert req.script_root == '/test/' + + # make sure the routing system works with the slashes in + # reverse order as well. + map = routing.Map([routing.Rule('/foo/bar', endpoint='foo')]) + adapter = map.bind_to_environ(req.environ) + assert adapter.match() == ('foo', {}) + adapter = map.bind(req.host, req.script_root) + assert adapter.match(req.path) == ('foo', {}) + +def test_dynamic_charset_request_mixin(): + class MyRequest(wrappers.DynamicCharsetRequestMixin, Request): + pass + env = {'CONTENT_TYPE': 'text/html'} + req = MyRequest(env) + assert req.charset == 'latin1' + + env = {'CONTENT_TYPE': 'text/html; charset=utf-8'} + req = MyRequest(env) + assert req.charset == 'utf-8' + + env = {'CONTENT_TYPE': 'application/octet-stream'} + req = MyRequest(env) + assert req.charset == 'latin1' + assert req.url_charset == 'latin1' + + MyRequest.url_charset = 'utf-8' + env = {'CONTENT_TYPE': 'application/octet-stream'} + req = MyRequest(env) + assert req.charset == 'latin1' + assert req.url_charset == 'utf-8' + + def return_ascii(x): + return "ascii" + env = {'CONTENT_TYPE': 'text/plain; charset=x-weird-charset'} + req = MyRequest(env) + req.unknown_charset = return_ascii + assert req.charset == 'ascii' + assert req.url_charset == 'utf-8' + +def test_dynamic_charset_response_mixin(): + class MyResponse(wrappers.DynamicCharsetResponseMixin, Response): + default_charset = 'utf-7' + resp = MyResponse(mimetype='text/html') + assert resp.charset == 'utf-7' + resp.charset = 'utf-8' + assert resp.charset == 'utf-8' + assert resp.mimetype == 
'text/html' + assert resp.mimetype_params == {'charset': 'utf-8'} + resp.mimetype_params['charset'] = 'iso-8859-15' + assert resp.charset == 'iso-8859-15' + resp.set_data(u'Hällo Wörld') + assert b''.join(resp.iter_encoded()) == \ + u'Hällo Wörld'.encode('iso-8859-15') + del resp.headers['content-type'] + try: + resp.charset = 'utf-8' + except TypeError as e: + pass + else: + assert False, 'expected type error on charset setting without ct' diff -Nru python-werkzeug-0.9.6+dfsg/tests/__init__.py python-werkzeug-0.10.4+dfsg1/tests/__init__.py --- python-werkzeug-0.9.6+dfsg/tests/__init__.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/__init__.py 2015-03-26 14:11:46.000000000 +0000 @@ -0,0 +1,27 @@ +# -*- coding: utf-8 -*- +""" + tests + ~~~~~ + + Contains all Werkzeug tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +from __future__ import with_statement + +from werkzeug._compat import text_type + + +def strict_eq(x, y): + '''Equality test bypassing the implicit string conversion in Python 2''' + __tracebackhide__ = True + assert x == y + assert issubclass(type(x), type(y)) or issubclass(type(y), type(x)) + if isinstance(x, dict) and isinstance(y, dict): + x = sorted(x.items()) + y = sorted(y.items()) + elif isinstance(x, set) and isinstance(y, set): + x = sorted(x) + y = sorted(y) + assert repr(x) == repr(y) Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2png1txt/request.txt and
/tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2png1txt/text.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1 @@ +example text \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2pnglongtext/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2pnglongtext/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2pnglongtext/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2pnglongtext/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2pnglongtext/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2pnglongtext/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2pnglongtext/text.txt python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2pnglongtext/text.txt --- python-werkzeug-0.9.6+dfsg/tests/multipart/firefox3-2pnglongtext/text.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/firefox3-2pnglongtext/text.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,3 @@ +--long text +--with boundary +--lookalikes-- \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/ie6-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/ie6-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/ie6-2png1txt/file2.png and 
/tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/ie6-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/ie6-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/ie6-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/ie6-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/tests/multipart/ie6-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/tests/multipart/ie6-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/ie6-2png1txt/text.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1 @@ +ie6 sucks :-/ \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/ie7_full_path_request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/ie7_full_path_request.txt differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/opera8-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/opera8-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/opera8-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/opera8-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/opera8-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/opera8-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/opera8-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/tests/multipart/opera8-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/tests/multipart/opera8-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/opera8-2png1txt/text.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1 @@ +blafasel öäü \ No newline at end of file diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/test_collect.py 
python-werkzeug-0.10.4+dfsg1/tests/multipart/test_collect.py --- python-werkzeug-0.9.6+dfsg/tests/multipart/test_collect.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/test_collect.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,57 @@ +#!/usr/bin/env python +""" +Hacky helper application to collect form data. +""" +from werkzeug.serving import run_simple +from werkzeug.wrappers import Request, Response + + +def copy_stream(request): + from os import mkdir + from time import time + folder = 'request-%d' % time() + mkdir(folder) + environ = request.environ + f = open(folder + '/request.txt', 'wb+') + f.write(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH']))) + f.flush() + f.seek(0) + environ['wsgi.input'] = f + request.stat_folder = folder + + +def stats(request): + copy_stream(request) + f1 = request.files['file1'] + f2 = request.files['file2'] + text = request.form['text'] + f1.save(request.stat_folder + '/file1.bin') + f2.save(request.stat_folder + '/file2.bin') + with open(request.stat_folder + '/text.txt', 'w') as f: + f.write(text.encode('utf-8')) + return Response('Done.') + + +def upload_file(request): + return Response(''' +

+    <h1>Upload File</h1>
+    <form action="" method="post" enctype="multipart/form-data">
+        <input type="file" name="file1"><br>
+        <input type="file" name="file2"><br>
+        <input type="text" name="text"><br>
+        <input type="submit" value="Send">
+    </form>
+    ''', mimetype='text/html')
+
+
+def application(environ, start_response):
+    request = Request(environ)
+    if request.method == 'POST':
+        response = stats(request)
+    else:
+        response = upload_file(request)
+    return response(environ, start_response)
+
+
+if __name__ == '__main__':
+    run_simple('localhost', 5000, application, use_debugger=True)
Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/webkit3-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/webkit3-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/webkit3-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/webkit3-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/tests/multipart/webkit3-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/tests/multipart/webkit3-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/tests/multipart/webkit3-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/tests/multipart/webkit3-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/tests/multipart/webkit3-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/multipart/webkit3-2png1txt/text.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1 @@ +this is another text with ümläüts \ No newline at end of file diff -Nru python-werkzeug-0.9.6+dfsg/tests/res/test.txt python-werkzeug-0.10.4+dfsg1/tests/res/test.txt --- python-werkzeug-0.9.6+dfsg/tests/res/test.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/res/test.txt 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1 @@ +FOUND diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_compat.py python-werkzeug-0.10.4+dfsg1/tests/test_compat.py --- python-werkzeug-0.9.6+dfsg/tests/test_compat.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_compat.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,30 @@
+# -*- coding: utf-8 -*- +""" + tests.compat + ~~~~~~~~~~~~ + + Ensure that old stuff does not break on update. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import warnings + +from werkzeug.wrappers import Response +from werkzeug.test import create_environ + + +def test_old_imports(): + from werkzeug.utils import Headers, MultiDict, CombinedMultiDict, \ + Headers, EnvironHeaders + from werkzeug.http import Accept, MIMEAccept, CharsetAccept, \ + LanguageAccept, ETags, HeaderSet, WWWAuthenticate, \ + Authorization + +def test_exposed_werkzeug_mod(): + import werkzeug + for key in werkzeug.__all__: + # deprecated, skip it + if key in ('templates', 'Template'): + continue + getattr(werkzeug, key) diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_datastructures.py python-werkzeug-0.10.4+dfsg1/tests/test_datastructures.py --- python-werkzeug-0.9.6+dfsg/tests/test_datastructures.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_datastructures.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,891 @@ +# -*- coding: utf-8 -*- +""" + tests.datastructures + ~~~~~~~~~~~~~~~~~~~~ + + Tests the functionality of the provided Werkzeug + datastructures. + + Classes prefixed with an underscore are mixins and are not discovered by + the test runner. + + TODO: + + - FileMultiDict + - Immutable types undertested + - Split up dict tests + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" + +from __future__ import with_statement + +import pytest +from tests import strict_eq + + +import pickle +from contextlib import contextmanager +from copy import copy, deepcopy + +from werkzeug import datastructures +from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \ + iterlistvalues, text_type, PY2 +from werkzeug.exceptions import BadRequestKeyError + + +class TestNativeItermethods(object): + def test_basic(self): + @datastructures.native_itermethods(['keys', 'values', 'items']) + class StupidDict(object): + def keys(self, multi=1): + return iter(['a', 'b', 'c'] * multi) + + def values(self, multi=1): + return iter([1, 2, 3] * multi) + + def items(self, multi=1): + return iter(zip(iterkeys(self, multi=multi), + itervalues(self, multi=multi))) + + d = StupidDict() + expected_keys = ['a', 'b', 'c'] + expected_values = [1, 2, 3] + expected_items = list(zip(expected_keys, expected_values)) + + assert list(iterkeys(d)) == expected_keys + assert list(itervalues(d)) == expected_values + assert list(iteritems(d)) == expected_items + + assert list(iterkeys(d, 2)) == expected_keys * 2 + assert list(itervalues(d, 2)) == expected_values * 2 + assert list(iteritems(d, 2)) == expected_items * 2 + + +class _MutableMultiDictTests(object): + storage_class = None + + def test_pickle(self): + cls = self.storage_class + + def create_instance(module=None): + if module is None: + d = cls() + else: + old = cls.__module__ + cls.__module__ = module + d = cls() + cls.__module__ = old + d.setlist(b'foo', [1, 2, 3, 4]) + d.setlist(b'bar', b'foo bar baz'.split()) + return d + + for protocol in range(pickle.HIGHEST_PROTOCOL + 1): + d = create_instance() + s = pickle.dumps(d, protocol) + ud = pickle.loads(s) + assert type(ud) == type(d) + assert ud == d + alternative = pickle.dumps(create_instance('werkzeug'), protocol) + assert pickle.loads(alternative) == d + ud[b'newkey'] = b'bla' + assert ud != d + + def test_basic_interface(self): + md = self.storage_class() + 
assert isinstance(md, dict) + + mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3), + ('a', 1), ('a', 3), ('d', 4), ('c', 3)] + md = self.storage_class(mapping) + + # simple getitem gives the first value + assert md['a'] == 1 + assert md['c'] == 3 + with pytest.raises(KeyError): + md['e'] + assert md.get('a') == 1 + + # list getitem + assert md.getlist('a') == [1, 2, 1, 3] + assert md.getlist('d') == [3, 4] + # do not raise if key not found + assert md.getlist('x') == [] + + # simple setitem overwrites all values + md['a'] = 42 + assert md.getlist('a') == [42] + + # list setitem + md.setlist('a', [1, 2, 3]) + assert md['a'] == 1 + assert md.getlist('a') == [1, 2, 3] + + # verify that it does not change original lists + l1 = [1, 2, 3] + md.setlist('a', l1) + del l1[:] + assert md['a'] == 1 + + # setdefault, setlistdefault + assert md.setdefault('u', 23) == 23 + assert md.getlist('u') == [23] + del md['u'] + + md.setlist('u', [-1, -2]) + + # delitem + del md['u'] + with pytest.raises(KeyError): + md['u'] + del md['d'] + assert md.getlist('d') == [] + + # keys, values, items, lists + assert list(sorted(md.keys())) == ['a', 'b', 'c'] + assert list(sorted(iterkeys(md))) == ['a', 'b', 'c'] + + assert list(sorted(itervalues(md))) == [1, 2, 3] + assert list(sorted(itervalues(md))) == [1, 2, 3] + + assert list(sorted(md.items())) == [('a', 1), ('b', 2), ('c', 3)] + assert list(sorted(md.items(multi=True))) == \ + [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)] + assert list(sorted(iteritems(md))) == [('a', 1), ('b', 2), ('c', 3)] + assert list(sorted(iteritems(md, multi=True))) == \ + [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)] + + assert list(sorted(md.lists())) == \ + [('a', [1, 2, 3]), ('b', [2]), ('c', [3])] + assert list(sorted(iterlists(md))) == \ + [('a', [1, 2, 3]), ('b', [2]), ('c', [3])] + + # copy method + c = md.copy() + assert c['a'] == 1 + assert c.getlist('a') == [1, 2, 3] + + # copy method 2 + c = copy(md) + assert c['a'] == 1 + assert 
c.getlist('a') == [1, 2, 3] + + # deepcopy method + c = md.deepcopy() + assert c['a'] == 1 + assert c.getlist('a') == [1, 2, 3] + + # deepcopy method 2 + c = deepcopy(md) + assert c['a'] == 1 + assert c.getlist('a') == [1, 2, 3] + + # update with a multidict + od = self.storage_class([('a', 4), ('a', 5), ('y', 0)]) + md.update(od) + assert md.getlist('a') == [1, 2, 3, 4, 5] + assert md.getlist('y') == [0] + + # update with a regular dict + md = c + od = {'a': 4, 'y': 0} + md.update(od) + assert md.getlist('a') == [1, 2, 3, 4] + assert md.getlist('y') == [0] + + # pop, poplist, popitem, popitemlist + assert md.pop('y') == 0 + assert 'y' not in md + assert md.poplist('a') == [1, 2, 3, 4] + assert 'a' not in md + assert md.poplist('missing') == [] + + # remaining: b=2, c=3 + popped = md.popitem() + assert popped in [('b', 2), ('c', 3)] + popped = md.popitemlist() + assert popped in [('b', [2]), ('c', [3])] + + # type conversion + md = self.storage_class({'a': '4', 'b': ['2', '3']}) + assert md.get('a', type=int) == 4 + assert md.getlist('b', type=int) == [2, 3] + + # repr + md = self.storage_class([('a', 1), ('a', 2), ('b', 3)]) + assert "('a', 1)" in repr(md) + assert "('a', 2)" in repr(md) + assert "('b', 3)" in repr(md) + + # add and getlist + md.add('c', '42') + md.add('c', '23') + assert md.getlist('c') == ['42', '23'] + md.add('c', 'blah') + assert md.getlist('c', type=int) == [42, 23] + + # setdefault + md = self.storage_class() + md.setdefault('x', []).append(42) + md.setdefault('x', []).append(23) + assert md['x'] == [42, 23] + + # to dict + md = self.storage_class() + md['foo'] = 42 + md.add('bar', 1) + md.add('bar', 2) + assert md.to_dict() == {'foo': 42, 'bar': 1} + assert md.to_dict(flat=False) == {'foo': [42], 'bar': [1, 2]} + + # popitem from empty dict + with pytest.raises(KeyError): + self.storage_class().popitem() + + with pytest.raises(KeyError): + self.storage_class().popitemlist() + + # key errors are of a special type + with 
pytest.raises(BadRequestKeyError): + self.storage_class()[42] + + # setlist works + md = self.storage_class() + md['foo'] = 42 + md.setlist('foo', [1, 2]) + assert md.getlist('foo') == [1, 2] + + +class _ImmutableDictTests(object): + storage_class = None + + def test_follows_dict_interface(self): + cls = self.storage_class + + data = {'foo': 1, 'bar': 2, 'baz': 3} + d = cls(data) + + assert d['foo'] == 1 + assert d['bar'] == 2 + assert d['baz'] == 3 + assert sorted(d.keys()) == ['bar', 'baz', 'foo'] + assert 'foo' in d + assert 'foox' not in d + assert len(d) == 3 + + def test_copies_are_mutable(self): + cls = self.storage_class + immutable = cls({'a': 1}) + with pytest.raises(TypeError): + immutable.pop('a') + + mutable = immutable.copy() + mutable.pop('a') + assert 'a' in immutable + assert mutable is not immutable + assert copy(immutable) is immutable + + def test_dict_is_hashable(self): + cls = self.storage_class + immutable = cls({'a': 1, 'b': 2}) + immutable2 = cls({'a': 2, 'b': 2}) + x = set([immutable]) + assert immutable in x + assert immutable2 not in x + x.discard(immutable) + assert immutable not in x + assert immutable2 not in x + x.add(immutable2) + assert immutable not in x + assert immutable2 in x + x.add(immutable) + assert immutable in x + assert immutable2 in x + + +class TestImmutableTypeConversionDict(_ImmutableDictTests): + storage_class = datastructures.ImmutableTypeConversionDict + + +class TestImmutableMultiDict(_ImmutableDictTests): + storage_class = datastructures.ImmutableMultiDict + + def test_multidict_is_hashable(self): + cls = self.storage_class + immutable = cls({'a': [1, 2], 'b': 2}) + immutable2 = cls({'a': [1], 'b': 2}) + x = set([immutable]) + assert immutable in x + assert immutable2 not in x + x.discard(immutable) + assert immutable not in x + assert immutable2 not in x + x.add(immutable2) + assert immutable not in x + assert immutable2 in x + x.add(immutable) + assert immutable in x + assert immutable2 in x + + +class 
TestImmutableDict(_ImmutableDictTests): + storage_class = datastructures.ImmutableDict + + +class TestImmutableOrderedMultiDict(_ImmutableDictTests): + storage_class = datastructures.ImmutableOrderedMultiDict + + def test_ordered_multidict_is_hashable(self): + a = self.storage_class([('a', 1), ('b', 1), ('a', 2)]) + b = self.storage_class([('a', 1), ('a', 2), ('b', 1)]) + assert hash(a) != hash(b) + + +class TestMultiDict(_MutableMultiDictTests): + storage_class = datastructures.MultiDict + + def test_multidict_pop(self): + make_d = lambda: self.storage_class({'foo': [1, 2, 3, 4]}) + d = make_d() + assert d.pop('foo') == 1 + assert not d + d = make_d() + assert d.pop('foo', 32) == 1 + assert not d + d = make_d() + assert d.pop('foos', 32) == 32 + assert d + + with pytest.raises(KeyError): + d.pop('foos') + + def test_setlistdefault(self): + md = self.storage_class() + assert md.setlistdefault('u', [-1, -2]) == [-1, -2] + assert md.getlist('u') == [-1, -2] + assert md['u'] == -1 + + def test_iter_interfaces(self): + mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3), + ('a', 1), ('a', 3), ('d', 4), ('c', 3)] + md = self.storage_class(mapping) + assert list(zip(md.keys(), md.listvalues())) == list(md.lists()) + assert list(zip(md, iterlistvalues(md))) == list(iterlists(md)) + assert list(zip(iterkeys(md), iterlistvalues(md))) == \ + list(iterlists(md)) + + +class TestOrderedMultiDict(_MutableMultiDictTests): + storage_class = datastructures.OrderedMultiDict + + def test_ordered_interface(self): + cls = self.storage_class + + d = cls() + assert not d + d.add('foo', 'bar') + assert len(d) == 1 + d.add('foo', 'baz') + assert len(d) == 1 + assert list(iteritems(d)) == [('foo', 'bar')] + assert list(d) == ['foo'] + assert list(iteritems(d, multi=True)) == \ + [('foo', 'bar'), ('foo', 'baz')] + del d['foo'] + assert not d + assert len(d) == 0 + assert list(d) == [] + + d.update([('foo', 1), ('foo', 2), ('bar', 42)]) + d.add('foo', 3) + assert d.getlist('foo') == [1, 2, 3] + 
assert d.getlist('bar') == [42] + assert list(iteritems(d)) == [('foo', 1), ('bar', 42)] + + expected = ['foo', 'bar'] + + assert list(d.keys()) == expected + assert list(d) == expected + assert list(iterkeys(d)) == expected + + assert list(iteritems(d, multi=True)) == \ + [('foo', 1), ('foo', 2), ('bar', 42), ('foo', 3)] + assert len(d) == 2 + + assert d.pop('foo') == 1 + assert d.pop('blafasel', None) is None + assert d.pop('blafasel', 42) == 42 + assert len(d) == 1 + assert d.poplist('bar') == [42] + assert not d + + assert d.get('missingkey') is None + + d.add('foo', 42) + d.add('foo', 23) + d.add('bar', 2) + d.add('foo', 42) + assert d == datastructures.MultiDict(d) + id = self.storage_class(d) + assert d == id + d.add('foo', 2) + assert d != id + + d.update({'blah': [1, 2, 3]}) + assert d['blah'] == 1 + assert d.getlist('blah') == [1, 2, 3] + + # setlist works + d = self.storage_class() + d['foo'] = 42 + d.setlist('foo', [1, 2]) + assert d.getlist('foo') == [1, 2] + + with pytest.raises(BadRequestKeyError): + d.pop('missing') + with pytest.raises(BadRequestKeyError): + d['missing'] + + # popping + d = self.storage_class() + d.add('foo', 23) + d.add('foo', 42) + d.add('foo', 1) + assert d.popitem() == ('foo', 23) + with pytest.raises(BadRequestKeyError): + d.popitem() + assert not d + + d.add('foo', 23) + d.add('foo', 42) + d.add('foo', 1) + assert d.popitemlist() == ('foo', [23, 42, 1]) + + with pytest.raises(BadRequestKeyError): + d.popitemlist() + + def test_iterables(self): + a = datastructures.MultiDict((("key_a", "value_a"),)) + b = datastructures.MultiDict((("key_b", "value_b"),)) + ab = datastructures.CombinedMultiDict((a,b)) + + assert sorted(ab.lists()) == [('key_a', ['value_a']), ('key_b', ['value_b'])] + assert sorted(ab.listvalues()) == [['value_a'], ['value_b']] + assert sorted(ab.keys()) == ["key_a", "key_b"] + + assert sorted(iterlists(ab)) == [('key_a', ['value_a']), ('key_b', ['value_b'])] + assert sorted(iterlistvalues(ab)) == [['value_a'],
['value_b']] + assert sorted(iterkeys(ab)) == ["key_a", "key_b"] + + +class TestCombinedMultiDict(object): + storage_class = datastructures.CombinedMultiDict + + def test_basic_interface(self): + d1 = datastructures.MultiDict([('foo', '1')]) + d2 = datastructures.MultiDict([('bar', '2'), ('bar', '3')]) + d = self.storage_class([d1, d2]) + + # lookup + assert d['foo'] == '1' + assert d['bar'] == '2' + assert d.getlist('bar') == ['2', '3'] + + assert sorted(d.items()) == [('bar', '2'), ('foo', '1')] + assert sorted(d.items(multi=True)) == \ + [('bar', '2'), ('bar', '3'), ('foo', '1')] + assert 'missingkey' not in d + assert 'foo' in d + + # type lookup + assert d.get('foo', type=int) == 1 + assert d.getlist('bar', type=int) == [2, 3] + + # get key errors for missing stuff + with pytest.raises(KeyError): + d['missing'] + + # make sure that they are immutable + with pytest.raises(TypeError): + d['foo'] = 'blub' + + # copies are immutable + d = d.copy() + with pytest.raises(TypeError): + d['foo'] = 'blub' + + # make sure lists merges + md1 = datastructures.MultiDict((("foo", "bar"),)) + md2 = datastructures.MultiDict((("foo", "blafasel"),)) + x = self.storage_class((md1, md2)) + assert list(iterlists(x)) == [('foo', ['bar', 'blafasel'])] + + def test_length(self): + d1 = datastructures.MultiDict([('foo', '1')]) + d2 = datastructures.MultiDict([('bar', '2')]) + assert len(d1) == len(d2) == 1 + d = self.storage_class([d1, d2]) + assert len(d) == 2 + d1.clear() + assert len(d1) == 0 + assert len(d) == 1 + + +class TestHeaders(object): + storage_class = datastructures.Headers + + def test_basic_interface(self): + headers = self.storage_class() + headers.add('Content-Type', 'text/plain') + headers.add('X-Foo', 'bar') + assert 'x-Foo' in headers + assert 'Content-type' in headers + + headers['Content-Type'] = 'foo/bar' + assert headers['Content-Type'] == 'foo/bar' + assert len(headers.getlist('Content-Type')) == 1 + + # list conversion + assert headers.to_wsgi_list() == [ + 
('Content-Type', 'foo/bar'), + ('X-Foo', 'bar') + ] + assert str(headers) == ( + "Content-Type: foo/bar\r\n" + "X-Foo: bar\r\n" + "\r\n" + ) + assert str(self.storage_class()) == "\r\n" + + # extended add + headers.add('Content-Disposition', 'attachment', filename='foo') + assert headers['Content-Disposition'] == 'attachment; filename=foo' + + headers.add('x', 'y', z='"') + assert headers['x'] == r'y; z="\""' + + def test_defaults_and_conversion(self): + # defaults + headers = self.storage_class([ + ('Content-Type', 'text/plain'), + ('X-Foo', 'bar'), + ('X-Bar', '1'), + ('X-Bar', '2') + ]) + assert headers.getlist('x-bar') == ['1', '2'] + assert headers.get('x-Bar') == '1' + assert headers.get('Content-Type') == 'text/plain' + + assert headers.setdefault('X-Foo', 'nope') == 'bar' + assert headers.setdefault('X-Bar', 'nope') == '1' + assert headers.setdefault('X-Baz', 'quux') == 'quux' + assert headers.setdefault('X-Baz', 'nope') == 'quux' + headers.pop('X-Baz') + + # type conversion + assert headers.get('x-bar', type=int) == 1 + assert headers.getlist('x-bar', type=int) == [1, 2] + + # list like operations + assert headers[0] == ('Content-Type', 'text/plain') + assert headers[:1] == self.storage_class([('Content-Type', 'text/plain')]) + del headers[:2] + del headers[-1] + assert headers == self.storage_class([('X-Bar', '1')]) + + def test_copying(self): + a = self.storage_class([('foo', 'bar')]) + b = a.copy() + a.add('foo', 'baz') + assert a.getlist('foo') == ['bar', 'baz'] + assert b.getlist('foo') == ['bar'] + + def test_popping(self): + headers = self.storage_class([('a', 1)]) + assert headers.pop('a') == 1 + assert headers.pop('b', 2) == 2 + + with pytest.raises(KeyError): + headers.pop('c') + + def test_set_arguments(self): + a = self.storage_class() + a.set('Content-Disposition', 'useless') + a.set('Content-Disposition', 'attachment', filename='foo') + assert a['Content-Disposition'] == 'attachment; filename=foo' + + def test_reject_newlines(self): + h = 
self.storage_class() + + for variation in 'foo\nbar', 'foo\r\nbar', 'foo\rbar': + with pytest.raises(ValueError): + h['foo'] = variation + with pytest.raises(ValueError): + h.add('foo', variation) + with pytest.raises(ValueError): + h.add('foo', 'test', option=variation) + with pytest.raises(ValueError): + h.set('foo', variation) + with pytest.raises(ValueError): + h.set('foo', 'test', option=variation) + + def test_slicing(self): + # there's nothing wrong with these being native strings + # Headers doesn't care about the data types + h = self.storage_class() + h.set('X-Foo-Poo', 'bleh') + h.set('Content-Type', 'application/whocares') + h.set('X-Forwarded-For', '192.168.0.123') + h[:] = [(k, v) for k, v in h if k.startswith(u'X-')] + assert list(h) == [ + ('X-Foo-Poo', 'bleh'), + ('X-Forwarded-For', '192.168.0.123') + ] + + def test_bytes_operations(self): + h = self.storage_class() + h.set('X-Foo-Poo', 'bleh') + h.set('X-Whoops', b'\xff') + + assert h.get('x-foo-poo', as_bytes=True) == b'bleh' + assert h.get('x-whoops', as_bytes=True) == b'\xff' + + def test_to_wsgi_list(self): + h = self.storage_class() + h.set(u'Key', u'Value') + for key, value in h.to_wsgi_list(): + if PY2: + strict_eq(key, b'Key') + strict_eq(value, b'Value') + else: + strict_eq(key, u'Key') + strict_eq(value, u'Value') + + +class TestEnvironHeaders(object): + storage_class = datastructures.EnvironHeaders + + def test_basic_interface(self): + # this happens in multiple WSGI servers because they + # use a very naive way to convert the headers; + broken_env = { + 'HTTP_CONTENT_TYPE': 'text/html', + 'CONTENT_TYPE': 'text/html', + 'HTTP_CONTENT_LENGTH': '0', + 'CONTENT_LENGTH': '0', + 'HTTP_ACCEPT': '*', + 'wsgi.version': (1, 0) + } + headers = self.storage_class(broken_env) + assert headers + assert len(headers) == 3 + assert sorted(headers) == [ + ('Accept', '*'), + ('Content-Length', '0'), + ('Content-Type', 'text/html') + ] + assert not self.storage_class({'wsgi.version': (1, 0)}) + assert
len(self.storage_class({'wsgi.version': (1, 0)})) == 0 + + def test_return_type_is_unicode(self): + # environ contains native strings; we return unicode + headers = self.storage_class({ + 'HTTP_FOO': '\xe2\x9c\x93', + 'CONTENT_TYPE': 'text/plain', + }) + assert headers['Foo'] == u"\xe2\x9c\x93" + assert isinstance(headers['Foo'], text_type) + assert isinstance(headers['Content-Type'], text_type) + iter_output = dict(iter(headers)) + assert iter_output['Foo'] == u"\xe2\x9c\x93" + assert isinstance(iter_output['Foo'], text_type) + assert isinstance(iter_output['Content-Type'], text_type) + + def test_bytes_operations(self): + foo_val = '\xff' + h = self.storage_class({ + 'HTTP_X_FOO': foo_val + }) + + assert h.get('x-foo', as_bytes=True) == b'\xff' + assert h.get('x-foo') == u'\xff' + + +class TestHeaderSet(object): + storage_class = datastructures.HeaderSet + + def test_basic_interface(self): + hs = self.storage_class() + hs.add('foo') + hs.add('bar') + assert 'Bar' in hs + assert hs.find('foo') == 0 + assert hs.find('BAR') == 1 + assert hs.find('baz') < 0 + hs.discard('missing') + hs.discard('foo') + assert hs.find('foo') < 0 + assert hs.find('bar') == 0 + + with pytest.raises(IndexError): + hs.index('missing') + + assert hs.index('bar') == 0 + assert hs + hs.clear() + assert not hs + + +class TestImmutableList(object): + storage_class = datastructures.ImmutableList + + def test_list_hashable(self): + t = (1, 2, 3, 4) + l = self.storage_class(t) + assert hash(t) == hash(l) + assert t != l + + +def make_call_asserter(func=None): + """Utility to assert a certain number of function calls. + + :param func: Additional callback for each function call. 
+ + >>> assert_calls, func = make_call_asserter() + >>> with assert_calls(2): + func() + func() + """ + + calls = [0] + + @contextmanager + def asserter(count, msg=None): + calls[0] = 0 + yield + assert calls[0] == count + + def wrapped(*args, **kwargs): + calls[0] += 1 + if func is not None: + return func(*args, **kwargs) + + return asserter, wrapped + + +class TestCallbackDict(object): + storage_class = datastructures.CallbackDict + + def test_callback_dict_reads(self): + assert_calls, func = make_call_asserter() + initial = {'a': 'foo', 'b': 'bar'} + dct = self.storage_class(initial=initial, on_update=func) + with assert_calls(0, 'callback triggered by read-only method'): + # read-only methods + dct['a'] + dct.get('a') + pytest.raises(KeyError, lambda: dct['x']) + 'a' in dct + list(iter(dct)) + dct.copy() + with assert_calls(0, 'callback triggered without modification'): + # methods that may write but don't + dct.pop('z', None) + dct.setdefault('a') + + def test_callback_dict_writes(self): + assert_calls, func = make_call_asserter() + initial = {'a': 'foo', 'b': 'bar'} + dct = self.storage_class(initial=initial, on_update=func) + with assert_calls(8, 'callback not triggered by write method'): + # always-write methods + dct['z'] = 123 + dct['z'] = 123 # must trigger again + del dct['z'] + dct.pop('b', None) + dct.setdefault('x') + dct.popitem() + dct.update([]) + dct.clear() + with assert_calls(0, 'callback triggered by failed del'): + pytest.raises(KeyError, lambda: dct.__delitem__('x')) + with assert_calls(0, 'callback triggered by failed pop'): + pytest.raises(KeyError, lambda: dct.pop('x')) + + +class TestCacheControl(object): + def test_repr(self): + cc = datastructures.RequestCacheControl( + [("max-age", "0"), ("private", "True")], + ) + assert repr(cc) == "" + + +class TestAccept(object): + storage_class = datastructures.Accept + + def test_accept_basic(self): + accept = self.storage_class([('tinker', 0), ('tailor', 0.333), + ('soldier', 0.667), ('sailor', 
1)]) + # check __getitem__ on indices + assert accept[3] == ('tinker', 0) + assert accept[2] == ('tailor', 0.333) + assert accept[1] == ('soldier', 0.667) + assert accept[0] == ('sailor', 1) + # check __getitem__ on string + assert accept['tinker'] == 0 + assert accept['tailor'] == 0.333 + assert accept['soldier'] == 0.667 + assert accept['sailor'] == 1 + assert accept['spy'] == 0 + # check quality method + assert accept.quality('tinker') == 0 + assert accept.quality('tailor') == 0.333 + assert accept.quality('soldier') == 0.667 + assert accept.quality('sailor') == 1 + assert accept.quality('spy') == 0 + # check __contains__ + assert 'sailor' in accept + assert 'spy' not in accept + # check index method + assert accept.index('tinker') == 3 + assert accept.index('tailor') == 2 + assert accept.index('soldier') == 1 + assert accept.index('sailor') == 0 + with pytest.raises(ValueError): + accept.index('spy') + # check find method + assert accept.find('tinker') == 3 + assert accept.find('tailor') == 2 + assert accept.find('soldier') == 1 + assert accept.find('sailor') == 0 + assert accept.find('spy') == -1 + # check to_header method + assert accept.to_header() == \ + 'sailor,soldier;q=0.667,tailor;q=0.333,tinker;q=0' + # check best_match method + assert accept.best_match(['tinker', 'tailor', 'soldier', 'sailor'], + default=None) == 'sailor' + assert accept.best_match(['tinker', 'tailor', 'soldier'], + default=None) == 'soldier' + assert accept.best_match(['tinker', 'tailor'], default=None) == \ + 'tailor' + assert accept.best_match(['tinker'], default=None) is None + assert accept.best_match(['tinker'], default='x') == 'x' + + def test_accept_wildcard(self): + accept = self.storage_class([('*', 0), ('asterisk', 1)]) + assert '*' in accept + assert accept.best_match(['asterisk', 'star'], default=None) == \ + 'asterisk' + assert accept.best_match(['star'], default=None) is None + + @pytest.mark.skipif(True, reason='Werkzeug doesn\'t respect specificity.') + def
test_accept_wildcard_specificity(self): + accept = self.storage_class([('asterisk', 0), ('star', 0.5), ('*', 1)]) + assert accept.best_match(['star', 'asterisk'], default=None) == 'star' + assert accept.best_match(['asterisk', 'star'], default=None) == 'star' + assert accept.best_match(['asterisk', 'times'], default=None) == \ + 'times' + assert accept.best_match(['asterisk'], default=None) is None diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_debug.py python-werkzeug-0.10.4+dfsg1/tests/test_debug.py --- python-werkzeug-0.9.6+dfsg/tests/test_debug.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_debug.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,228 @@ +# -*- coding: utf-8 -*- +""" + tests.debug + ~~~~~~~~~~~ + + Tests some debug utilities. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import sys +import os +import re +import io + +import pytest + +from werkzeug.debug.repr import debug_repr, DebugReprGenerator, \ + dump, helper +from werkzeug.debug.console import HTMLStringO +from werkzeug.debug.tbtools import Traceback +from werkzeug._compat import PY2 + + +class TestDebugRepr(object): + + def test_basic_repr(self): + assert debug_repr([]) == u'[]' + assert debug_repr([1, 2]) == \ + u'[1, 2]' + assert debug_repr([1, 'test']) == \ + u'[1, \'test\']' + assert debug_repr([None]) == \ + u'[None]' + + def test_sequence_repr(self): + assert debug_repr(list(range(20))) == ( + u'[0, 1, ' + u'2, 3, ' + u'4, 5, ' + u'6, 7, ' + u'8, ' + u'9, 10, ' + u'11, 12, ' + u'13, 14, ' + u'15, 16, ' + u'17, 18, ' + u'19]' + ) + + + def test_mapping_repr(self): + assert debug_repr({}) == u'{}' + assert debug_repr({'foo': 42}) == ( + u'{\'foo\'' + u': 42' + u'}' + ) + assert debug_repr(dict(zip(range(10), [None] * 10))) == ( + u'{0: None, 1: None, 2: None, 3: None, 4: None, 5: None, 6: None, 7: None, 8: None, 9: None}' + ) + assert debug_repr((1, 'zwei', u'drei')) == ( + u'(1, \'' + u'zwei\', 
%s\'drei\')'
+        ) % ('u' if PY2 else '')
+
+    def test_custom_repr(self):
+        class Foo(object):
+            def __repr__(self):
+                return '<Foo 42>'
+        assert debug_repr(Foo()) == \
+            '<Foo 42>'
+
+    def test_list_subclass_repr(self):
+        class MyList(list):
+            pass
+        assert debug_repr(MyList([1, 2])) == (
+            u'tests.test_debug.MyList(['
+            u'1, 2])'
+        )
+
+    def test_regex_repr(self):
+        assert debug_repr(re.compile(r'foo\d')) == \
+            u're.compile(r\'foo\\d\')'
+        # No ur'' in Py3
+        # http://bugs.python.org/issue15096
+        assert debug_repr(re.compile(u'foo\\d')) == (
+            u're.compile(%sr\'foo\\d\')' %
+            ('u' if PY2 else '')
+        )
+
+    def test_set_repr(self):
+        assert debug_repr(frozenset('x')) == \
+            u'frozenset([\'x\'])'
+        assert debug_repr(set('x')) == \
+            u'set([\'x\'])'
+
+    def test_recursive_repr(self):
+        a = [1]
+        a.append(a)
+        assert debug_repr(a) == u'[1, [...]]'
+
+    def test_broken_repr(self):
+        class Foo(object):
+            def __repr__(self):
+                raise Exception('broken!')
+
+        assert debug_repr(Foo()) == (
+            u'<broken repr (Exception: '
+            u'broken!)>'
+        )
+
+
+class Foo(object):
+    x = 42
+    y = 23
+
+
+    def __init__(self):
+        self.z = 15
+
+
+class TestDebugHelpers(object):
+
+    def test_object_dumping(self):
+        drg = DebugReprGenerator()
+        out = drg.dump_object(Foo())
+        assert re.search('Details for tests.test_debug.Foo object at', out)
+        assert re.search('x.*42(?s)', out)
+        assert re.search('y.*23(?s)', out)
+        assert re.search('z.*15(?s)', out)
+
+        out = drg.dump_object({'x': 42, 'y': 23})
+        assert re.search('Contents of', out)
+        assert re.search('x.*42(?s)', out)
+        assert re.search('y.*23(?s)', out)
+
+        out = drg.dump_object({'x': 42, 'y': 23, 23: 11})
+        assert not re.search('Contents of', out)
+
+        out = drg.dump_locals({'x': 42, 'y': 23})
+        assert re.search('Local variables in frame', out)
+        assert re.search('x.*42(?s)', out)
+        assert re.search('y.*23(?s)', out)
+
+    def test_debug_dump(self):
+        old = sys.stdout
+        sys.stdout = HTMLStringO()
+        try:
+            dump([1, 2, 3])
+            x = sys.stdout.reset()
+            dump()
+            y 
= sys.stdout.reset() + finally: + sys.stdout = old + + assert 'Details for list object at' in x + assert '1' in x + assert 'Local variables in frame' in y + assert 'x' in y + assert 'old' in y + + def test_debug_help(self): + old = sys.stdout + sys.stdout = HTMLStringO() + try: + helper([1, 2, 3]) + x = sys.stdout.reset() + finally: + sys.stdout = old + + assert 'Help on list object' in x + assert '__delitem__' in x + + +class TestTraceback(object): + + def test_log(self): + try: + 1/0 + except ZeroDivisionError: + traceback = Traceback(*sys.exc_info()) + + buffer_ = io.BytesIO() if PY2 else io.StringIO() + traceback.log(buffer_) + assert buffer_.getvalue().strip() == traceback.plaintext.strip() + + def test_sourcelines_encoding(self): + source = (u'# -*- coding: latin1 -*-\n\n' + u'def foo():\n' + u' """höhö"""\n' + u' 1 / 0\n' + u'foo()').encode('latin1') + code = compile(source, filename='lol.py', mode='exec') + try: + eval(code) + except ZeroDivisionError: + tb = sys.exc_info()[2] + traceback = Traceback(*sys.exc_info()) + + frames = traceback.frames + assert len(frames) == 3 + assert frames[1].filename == 'lol.py' + assert frames[2].filename == 'lol.py' + + class Loader(object): + def get_source(self, module): + return source + + frames[1].loader = frames[2].loader = Loader() + assert frames[1].sourcelines == frames[2].sourcelines + assert [line.code for line in frames[1].get_annotated_lines()] == \ + [line.code for line in frames[2].get_annotated_lines()] + assert u'höhö' in frames[1].sourcelines[3] + + def test_filename_encoding(self, tmpdir, monkeypatch): + moduledir = tmpdir.mkdir('föö') + moduledir.join('bar.py').write('def foo():\n 1/0\n') + monkeypatch.syspath_prepend(str(moduledir)) + + import bar + + try: + bar.foo() + except ZeroDivisionError: + traceback = Traceback(*sys.exc_info()) + + assert u'föö' in u'\n'.join(frame.render() for frame in traceback.frames) diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_exceptions.py 
python-werkzeug-0.10.4+dfsg1/tests/test_exceptions.py --- python-werkzeug-0.9.6+dfsg/tests/test_exceptions.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_exceptions.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,83 @@ +# -*- coding: utf-8 -*- +""" + tests.exceptions + ~~~~~~~~~~~~~~~~ + + The tests for the exception classes. + + TODO: + + - This is undertested. HTML is never checked + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +from werkzeug import exceptions +from werkzeug.wrappers import Response +from werkzeug._compat import text_type + + +def test_proxy_exception(): + orig_resp = Response('Hello World') + with pytest.raises(exceptions.HTTPException) as excinfo: + exceptions.abort(orig_resp) + resp = excinfo.value.get_response({}) + assert resp is orig_resp + assert resp.get_data() == b'Hello World' + + +@pytest.mark.parametrize('test', [ + (exceptions.BadRequest, 400), + (exceptions.Unauthorized, 401), + (exceptions.Forbidden, 403), + (exceptions.NotFound, 404), + (exceptions.MethodNotAllowed, 405, ['GET', 'HEAD']), + (exceptions.NotAcceptable, 406), + (exceptions.RequestTimeout, 408), + (exceptions.Gone, 410), + (exceptions.LengthRequired, 411), + (exceptions.PreconditionFailed, 412), + (exceptions.RequestEntityTooLarge, 413), + (exceptions.RequestURITooLarge, 414), + (exceptions.UnsupportedMediaType, 415), + (exceptions.UnprocessableEntity, 422), + (exceptions.InternalServerError, 500), + (exceptions.NotImplemented, 501), + (exceptions.BadGateway, 502), + (exceptions.ServiceUnavailable, 503) +]) +def test_aborter_general(test): + exc_type = test[0] + args = test[1:] + + with pytest.raises(exc_type) as exc_info: + exceptions.abort(*args) + assert type(exc_info.value) is exc_type + + +def test_aborter_custom(): + myabort = exceptions.Aborter({1: exceptions.NotFound}) + pytest.raises(LookupError, myabort, 404) + pytest.raises(exceptions.NotFound, myabort, 
1)
+
+    myabort = exceptions.Aborter(extra={1: exceptions.NotFound})
+    pytest.raises(exceptions.NotFound, myabort, 404)
+    pytest.raises(exceptions.NotFound, myabort, 1)
+
+
+def test_exception_repr():
+    exc = exceptions.NotFound()
+    assert text_type(exc) == '404: Not Found'
+    assert repr(exc) == "<NotFound '404: Not Found'>"
+
+    exc = exceptions.NotFound('Not There')
+    assert text_type(exc) == '404: Not Found'
+    assert repr(exc) == "<NotFound '404: Not Found'>"
+
+def test_special_exceptions():
+    exc = exceptions.MethodNotAllowed(['GET', 'HEAD', 'POST'])
+    h = dict(exc.get_headers({}))
+    assert h['Allow'] == 'GET, HEAD, POST'
+    assert 'The method is not allowed' in exc.get_description()
diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_formparser.py python-werkzeug-0.10.4+dfsg1/tests/test_formparser.py
--- python-werkzeug-0.9.6+dfsg/tests/test_formparser.py 1970-01-01 00:00:00.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/tests/test_formparser.py 2015-03-26 14:11:47.000000000 +0000
@@ -0,0 +1,403 @@
+# -*- coding: utf-8 -*-
+"""
+    tests.formparser
+    ~~~~~~~~~~~~~~~~
+
+    Tests the form parsing facilities.
+
+    :copyright: (c) 2014 by Armin Ronacher.
+    :license: BSD, see LICENSE for more details.
+""" +from __future__ import with_statement + +import pytest + +from os.path import join, dirname + +from tests import strict_eq + +from werkzeug import formparser +from werkzeug.test import create_environ, Client +from werkzeug.wrappers import Request, Response +from werkzeug.exceptions import RequestEntityTooLarge +from werkzeug.datastructures import MultiDict +from werkzeug.formparser import parse_form_data +from werkzeug._compat import BytesIO + + +@Request.application +def form_data_consumer(request): + result_object = request.args['object'] + if result_object == 'text': + return Response(repr(request.form['text'])) + f = request.files[result_object] + return Response(b'\n'.join(( + repr(f.filename).encode('ascii'), + repr(f.name).encode('ascii'), + repr(f.content_type).encode('ascii'), + f.stream.read() + ))) + + +def get_contents(filename): + with open(filename, 'rb') as f: + return f.read() + + +class TestFormParser(object): + + def test_limiting(self): + data = b'foo=Hello+World&bar=baz' + req = Request.from_values(input_stream=BytesIO(data), + content_length=len(data), + content_type='application/x-www-form-urlencoded', + method='POST') + req.max_content_length = 400 + strict_eq(req.form['foo'], u'Hello World') + + req = Request.from_values(input_stream=BytesIO(data), + content_length=len(data), + content_type='application/x-www-form-urlencoded', + method='POST') + req.max_form_memory_size = 7 + pytest.raises(RequestEntityTooLarge, lambda: req.form['foo']) + + req = Request.from_values(input_stream=BytesIO(data), + content_length=len(data), + content_type='application/x-www-form-urlencoded', + method='POST') + req.max_form_memory_size = 400 + strict_eq(req.form['foo'], u'Hello World') + + data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n' + b'Hello World\r\n' + b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n' + b'bar=baz\r\n--foo--') + req = Request.from_values(input_stream=BytesIO(data), + content_length=len(data), + 
content_type='multipart/form-data; boundary=foo',
+                                  method='POST')
+        req.max_content_length = 4
+        pytest.raises(RequestEntityTooLarge, lambda: req.form['foo'])
+
+        req = Request.from_values(input_stream=BytesIO(data),
+                                  content_length=len(data),
+                                  content_type='multipart/form-data; boundary=foo',
+                                  method='POST')
+        req.max_content_length = 400
+        strict_eq(req.form['foo'], u'Hello World')
+
+        req = Request.from_values(input_stream=BytesIO(data),
+                                  content_length=len(data),
+                                  content_type='multipart/form-data; boundary=foo',
+                                  method='POST')
+        req.max_form_memory_size = 7
+        pytest.raises(RequestEntityTooLarge, lambda: req.form['foo'])
+
+        req = Request.from_values(input_stream=BytesIO(data),
+                                  content_length=len(data),
+                                  content_type='multipart/form-data; boundary=foo',
+                                  method='POST')
+        req.max_form_memory_size = 400
+        strict_eq(req.form['foo'], u'Hello World')
+
+    def test_missing_multipart_boundary(self):
+        data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n'
+                b'Hello World\r\n'
+                b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n'
+                b'bar=baz\r\n--foo--')
+        req = Request.from_values(input_stream=BytesIO(data),
+                                  content_length=len(data),
+                                  content_type='multipart/form-data',
+                                  method='POST')
+        assert req.form == {}
+
+    def test_parse_form_data_put_without_content(self):
+        # A PUT without a Content-Type header returns empty data
+
+        # Both rfc1945 and rfc2616 (1.0 and 1.1) say "Any HTTP/[1.0/1.1] message
+        # containing an entity-body SHOULD include a Content-Type header field
+        # defining the media type of that body."  In the case where either
+        # header is omitted, parse_form_data should still work.
+ env = create_environ('/foo', 'http://example.org/', method='PUT') + del env['CONTENT_TYPE'] + del env['CONTENT_LENGTH'] + + stream, form, files = formparser.parse_form_data(env) + strict_eq(stream.read(), b'') + strict_eq(len(form), 0) + strict_eq(len(files), 0) + + def test_parse_form_data_get_without_content(self): + env = create_environ('/foo', 'http://example.org/', method='GET') + del env['CONTENT_TYPE'] + del env['CONTENT_LENGTH'] + + stream, form, files = formparser.parse_form_data(env) + strict_eq(stream.read(), b'') + strict_eq(len(form), 0) + strict_eq(len(files), 0) + + def test_large_file(self): + data = b'x' * (1024 * 600) + req = Request.from_values(data={'foo': (BytesIO(data), 'test.txt')}, + method='POST') + # make sure we have a real file here, because we expect to be + # on the disk. > 1024 * 500 + assert hasattr(req.files['foo'].stream, u'fileno') + # close file to prevent fds from leaking + req.files['foo'].close() + + def test_streaming_parse(self): + data = b'x' * (1024 * 600) + class StreamMPP(formparser.MultiPartParser): + def parse(self, file, boundary, content_length): + i = iter(self.parse_lines(file, boundary, content_length)) + one = next(i) + two = next(i) + return self.cls(()), {'one': one, 'two': two} + class StreamFDP(formparser.FormDataParser): + def _sf_parse_multipart(self, stream, mimetype, + content_length, options): + form, files = StreamMPP( + self.stream_factory, self.charset, self.errors, + max_form_memory_size=self.max_form_memory_size, + cls=self.cls).parse(stream, options.get('boundary').encode('ascii'), + content_length) + return stream, form, files + parse_functions = {} + parse_functions.update(formparser.FormDataParser.parse_functions) + parse_functions['multipart/form-data'] = _sf_parse_multipart + class StreamReq(Request): + form_data_parser_class = StreamFDP + req = StreamReq.from_values(data={'foo': (BytesIO(data), 'test.txt')}, + method='POST') + strict_eq('begin_file', req.files['one'][0]) + strict_eq(('foo', 
'test.txt'), req.files['one'][1][1:]) + strict_eq('cont', req.files['two'][0]) + strict_eq(data, req.files['two'][1]) + + +class TestMultiPart(object): + + def test_basic(self): + resources = join(dirname(__file__), 'multipart') + client = Client(form_data_consumer, Response) + + repository = [ + ('firefox3-2png1txt', '---------------------------186454651713519341951581030105', [ + (u'anchor.png', 'file1', 'image/png', 'file1.png'), + (u'application_edit.png', 'file2', 'image/png', 'file2.png') + ], u'example text'), + ('firefox3-2pnglongtext', '---------------------------14904044739787191031754711748', [ + (u'accept.png', 'file1', 'image/png', 'file1.png'), + (u'add.png', 'file2', 'image/png', 'file2.png') + ], u'--long text\r\n--with boundary\r\n--lookalikes--'), + ('opera8-2png1txt', '----------zEO9jQKmLc2Cq88c23Dx19', [ + (u'arrow_branch.png', 'file1', 'image/png', 'file1.png'), + (u'award_star_bronze_1.png', 'file2', 'image/png', 'file2.png') + ], u'blafasel öäü'), + ('webkit3-2png1txt', '----WebKitFormBoundaryjdSFhcARk8fyGNy6', [ + (u'gtk-apply.png', 'file1', 'image/png', 'file1.png'), + (u'gtk-no.png', 'file2', 'image/png', 'file2.png') + ], u'this is another text with ümläüts'), + ('ie6-2png1txt', '---------------------------7d91b03a20128', [ + (u'file1.png', 'file1', 'image/x-png', 'file1.png'), + (u'file2.png', 'file2', 'image/x-png', 'file2.png') + ], u'ie6 sucks :-/') + ] + + for name, boundary, files, text in repository: + folder = join(resources, name) + data = get_contents(join(folder, 'request.txt')) + for filename, field, content_type, fsname in files: + response = client.post('/?object=' + field, data=data, content_type= + 'multipart/form-data; boundary="%s"' % boundary, + content_length=len(data)) + lines = response.get_data().split(b'\n', 3) + strict_eq(lines[0], repr(filename).encode('ascii')) + strict_eq(lines[1], repr(field).encode('ascii')) + strict_eq(lines[2], repr(content_type).encode('ascii')) + strict_eq(lines[3], 
get_contents(join(folder, fsname)))
+            response = client.post('/?object=text', data=data, content_type=
+                                   'multipart/form-data; boundary="%s"' % boundary,
+                                   content_length=len(data))
+            strict_eq(response.get_data(), repr(text).encode('utf-8'))
+
+    def test_ie7_unc_path(self):
+        client = Client(form_data_consumer, Response)
+        data_file = join(dirname(__file__), 'multipart', 'ie7_full_path_request.txt')
+        data = get_contents(data_file)
+        boundary = '---------------------------7da36d1b4a0164'
+        response = client.post('/?object=cb_file_upload_multiple', data=data, content_type=
+                               'multipart/form-data; boundary="%s"' % boundary, content_length=len(data))
+        lines = response.get_data().split(b'\n', 3)
+        strict_eq(lines[0],
+                  repr(u'Sellersburg Town Council Meeting 02-22-2010doc.doc').encode('ascii'))
+
+    def test_end_of_file(self):
+        # This test looks innocent but it was actually timing out in
+        # the Werkzeug 0.5 release version (#394)
+        data = (
+            b'--foo\r\n'
+            b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n'
+            b'Content-Type: text/plain\r\n\r\n'
+            b'file contents and no end'
+        )
+        data = Request.from_values(input_stream=BytesIO(data),
+                                   content_length=len(data),
+                                   content_type='multipart/form-data; boundary=foo',
+                                   method='POST')
+        assert not data.files
+        assert not data.form
+
+    def test_broken(self):
+        data = (
+            '--foo\r\n'
+            'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n'
+            'Content-Transfer-Encoding: base64\r\n'
+            'Content-Type: text/plain\r\n\r\n'
+            'broken base 64'
+            '--foo--'
+        )
+        _, form, files = formparser.parse_form_data(create_environ(data=data,
+            method='POST', content_type='multipart/form-data; boundary=foo'))
+        assert not files
+        assert not form
+
+        pytest.raises(ValueError, formparser.parse_form_data,
+                      create_environ(data=data, method='POST',
+                                     content_type='multipart/form-data; boundary=foo'),
+                      silent=False)
+
+    def test_file_no_content_type(self):
+        data = (
+            b'--foo\r\n'
+            b'Content-Disposition: 
form-data; name="test"; filename="test.txt"\r\n\r\n'
+            b'file contents\r\n--foo--'
+        )
+        data = Request.from_values(input_stream=BytesIO(data),
+                                   content_length=len(data),
+                                   content_type='multipart/form-data; boundary=foo',
+                                   method='POST')
+        assert data.files['test'].filename == 'test.txt'
+        strict_eq(data.files['test'].read(), b'file contents')
+
+    def test_extra_newline(self):
+        # This test looks innocent but it was actually timing out in
+        # the Werkzeug 0.5 release version (#394)
+        data = (
+            b'\r\n\r\n--foo\r\n'
+            b'Content-Disposition: form-data; name="foo"\r\n\r\n'
+            b'a string\r\n'
+            b'--foo--'
+        )
+        data = Request.from_values(input_stream=BytesIO(data),
+                                   content_length=len(data),
+                                   content_type='multipart/form-data; boundary=foo',
+                                   method='POST')
+        assert not data.files
+        strict_eq(data.form['foo'], u'a string')
+
+    def test_headers(self):
+        data = (b'--foo\r\n'
+                b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
+                b'X-Custom-Header: blah\r\n'
+                b'Content-Type: text/plain; charset=utf-8\r\n\r\n'
+                b'file contents, just the contents\r\n'
+                b'--foo--')
+        req = Request.from_values(input_stream=BytesIO(data),
+                                  content_length=len(data),
+                                  content_type='multipart/form-data; boundary=foo',
+                                  method='POST')
+        foo = req.files['foo']
+        strict_eq(foo.mimetype, 'text/plain')
+        strict_eq(foo.mimetype_params, {'charset': 'utf-8'})
+        strict_eq(foo.headers['content-type'], foo.content_type)
+        strict_eq(foo.content_type, 'text/plain; charset=utf-8')
+        strict_eq(foo.headers['x-custom-header'], 'blah')
+
+    def test_nonstandard_line_endings(self):
+        for nl in b'\n', b'\r', b'\r\n':
+            data = nl.join((
+                b'--foo',
+                b'Content-Disposition: form-data; name=foo',
+                b'',
+                b'this is just bar',
+                b'--foo',
+                b'Content-Disposition: form-data; name=bar',
+                b'',
+                b'blafasel',
+                b'--foo--'
+            ))
+            req = Request.from_values(input_stream=BytesIO(data),
+                                      content_length=len(data),
+                                      content_type='multipart/form-data; '
+                                      'boundary=foo', method='POST')
+            
strict_eq(req.form['foo'], u'this is just bar') + strict_eq(req.form['bar'], u'blafasel') + + def test_failures(self): + def parse_multipart(stream, boundary, content_length): + parser = formparser.MultiPartParser(content_length) + return parser.parse(stream, boundary, content_length) + pytest.raises(ValueError, parse_multipart, BytesIO(), b'broken ', 0) + + data = b'--foo\r\n\r\nHello World\r\n--foo--' + pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) + + data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n' \ + b'Content-Transfer-Encoding: base64\r\n\r\nHello World\r\n--foo--' + pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) + + data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\nHello World\r\n' + pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) + + x = formparser.parse_multipart_headers(['foo: bar\r\n', ' x test\r\n']) + strict_eq(x['foo'], 'bar\n x test') + pytest.raises(ValueError, formparser.parse_multipart_headers, + ['foo: bar\r\n', ' x test']) + + def test_bad_newline_bad_newline_assumption(self): + class ISORequest(Request): + charset = 'latin1' + contents = b'U2vlbmUgbORu' + data = b'--foo\r\nContent-Disposition: form-data; name="test"\r\n' \ + b'Content-Transfer-Encoding: base64\r\n\r\n' + \ + contents + b'\r\n--foo--' + req = ISORequest.from_values(input_stream=BytesIO(data), + content_length=len(data), + content_type='multipart/form-data; boundary=foo', + method='POST') + strict_eq(req.form['test'], u'Sk\xe5ne l\xe4n') + + def test_empty_multipart(self): + environ = {} + data = b'--boundary--' + environ['REQUEST_METHOD'] = 'POST' + environ['CONTENT_TYPE'] = 'multipart/form-data; boundary=boundary' + environ['CONTENT_LENGTH'] = str(len(data)) + environ['wsgi.input'] = BytesIO(data) + stream, form, files = parse_form_data(environ, silent=False) + rv = stream.read() + assert rv == b'' + assert form == MultiDict() + assert files == MultiDict() 
+ + +class TestInternalFunctions(object): + + def test_line_parser(self): + assert formparser._line_parse('foo') == ('foo', False) + assert formparser._line_parse('foo\r\n') == ('foo', True) + assert formparser._line_parse('foo\r') == ('foo', True) + assert formparser._line_parse('foo\n') == ('foo', True) + + def test_find_terminator(self): + lineiter = iter(b'\n\n\nfoo\nbar\nbaz'.splitlines(True)) + find_terminator = formparser.MultiPartParser()._find_terminator + line = find_terminator(lineiter) + assert line == b'foo' + assert list(lineiter) == [b'bar\n', b'baz'] + assert find_terminator([]) == b'' + assert find_terminator([b'']) == b'' diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_http.py python-werkzeug-0.10.4+dfsg1/tests/test_http.py --- python-werkzeug-0.9.6+dfsg/tests/test_http.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_http.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,445 @@ +# -*- coding: utf-8 -*- +""" + tests.http + ~~~~~~~~~~ + + HTTP parsing utilities. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import pytest + +from datetime import datetime + +from tests import strict_eq +from werkzeug._compat import itervalues, wsgi_encoding_dance + +from werkzeug import http, datastructures +from werkzeug.test import create_environ + + +class TestHTTPUtility(object): + + def test_accept(self): + a = http.parse_accept_header('en-us,ru;q=0.5') + assert list(itervalues(a)) == ['en-us', 'ru'] + assert a.best == 'en-us' + assert a.find('ru') == 1 + pytest.raises(ValueError, a.index, 'de') + assert a.to_header() == 'en-us,ru;q=0.5' + + def test_mime_accept(self): + a = http.parse_accept_header('text/xml,application/xml,' + 'application/xhtml+xml,' + 'application/foo;quiet=no; bar=baz;q=0.6,' + 'text/html;q=0.9,text/plain;q=0.8,' + 'image/png,*/*;q=0.5', + datastructures.MIMEAccept) + pytest.raises(ValueError, lambda: a['missing']) + assert a['image/png'] == 1 + assert a['text/plain'] == 0.8 + assert a['foo/bar'] == 0.5 + assert a['application/foo;quiet=no; bar=baz'] == 0.6 + assert a[a.find('foo/bar')] == ('*/*', 0.5) + + def test_accept_matches(self): + a = http.parse_accept_header('text/xml,application/xml,application/xhtml+xml,' + 'text/html;q=0.9,text/plain;q=0.8,' + 'image/png', datastructures.MIMEAccept) + assert a.best_match(['text/html', 'application/xhtml+xml']) == \ + 'application/xhtml+xml' + assert a.best_match(['text/html']) == 'text/html' + assert a.best_match(['foo/bar']) is None + assert a.best_match(['foo/bar', 'bar/foo'], default='foo/bar') == 'foo/bar' + assert a.best_match(['application/xml', 'text/xml']) == 'application/xml' + + def test_charset_accept(self): + a = http.parse_accept_header('ISO-8859-1,utf-8;q=0.7,*;q=0.7', + datastructures.CharsetAccept) + assert a['iso-8859-1'] == a['iso8859-1'] + assert a['iso-8859-1'] == 1 + assert a['UTF8'] == 0.7 + assert a['ebcdic'] == 0.7 + + def test_language_accept(self): + a = http.parse_accept_header('de-AT,de;q=0.8,en;q=0.5', + datastructures.LanguageAccept) + assert a.best == 'de-AT' + assert 'de_AT' in 
a + assert 'en' in a + assert a['de-at'] == 1 + assert a['en'] == 0.5 + + def test_set_header(self): + hs = http.parse_set_header('foo, Bar, "Blah baz", Hehe') + assert 'blah baz' in hs + assert 'foobar' not in hs + assert 'foo' in hs + assert list(hs) == ['foo', 'Bar', 'Blah baz', 'Hehe'] + hs.add('Foo') + assert hs.to_header() == 'foo, Bar, "Blah baz", Hehe' + + def test_list_header(self): + hl = http.parse_list_header('foo baz, blah') + assert hl == ['foo baz', 'blah'] + + def test_dict_header(self): + d = http.parse_dict_header('foo="bar baz", blah=42') + assert d == {'foo': 'bar baz', 'blah': '42'} + + def test_cache_control_header(self): + cc = http.parse_cache_control_header('max-age=0, no-cache') + assert cc.max_age == 0 + assert cc.no_cache + cc = http.parse_cache_control_header('private, community="UCI"', None, + datastructures.ResponseCacheControl) + assert cc.private + assert cc['community'] == 'UCI' + + c = datastructures.ResponseCacheControl() + assert c.no_cache is None + assert c.private is None + c.no_cache = True + assert c.no_cache == '*' + c.private = True + assert c.private == '*' + del c.private + assert c.private is None + assert c.to_header() == 'no-cache' + + def test_authorization_header(self): + a = http.parse_authorization_header('Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==') + assert a.type == 'basic' + assert a.username == 'Aladdin' + assert a.password == 'open sesame' + + a = http.parse_authorization_header('''Digest username="Mufasa", + realm="testrealm@host.invalid", + nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", + uri="/dir/index.html", + qop=auth, + nc=00000001, + cnonce="0a4f113b", + response="6629fae49393a05397450978507c4ef1", + opaque="5ccc069c403ebaf9f0171e9517f40e41"''') + assert a.type == 'digest' + assert a.username == 'Mufasa' + assert a.realm == 'testrealm@host.invalid' + assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' + assert a.uri == '/dir/index.html' + assert 'auth' in a.qop + assert a.nc == '00000001' + assert a.cnonce 
== '0a4f113b' + assert a.response == '6629fae49393a05397450978507c4ef1' + assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' + + a = http.parse_authorization_header('''Digest username="Mufasa", + realm="testrealm@host.invalid", + nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", + uri="/dir/index.html", + response="e257afa1414a3340d93d30955171dd0e", + opaque="5ccc069c403ebaf9f0171e9517f40e41"''') + assert a.type == 'digest' + assert a.username == 'Mufasa' + assert a.realm == 'testrealm@host.invalid' + assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' + assert a.uri == '/dir/index.html' + assert a.response == 'e257afa1414a3340d93d30955171dd0e' + assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' + + assert http.parse_authorization_header('') is None + assert http.parse_authorization_header(None) is None + assert http.parse_authorization_header('foo') is None + + def test_www_authenticate_header(self): + wa = http.parse_www_authenticate_header('Basic realm="WallyWorld"') + assert wa.type == 'basic' + assert wa.realm == 'WallyWorld' + wa.realm = 'Foo Bar' + assert wa.to_header() == 'Basic realm="Foo Bar"' + + wa = http.parse_www_authenticate_header('''Digest + realm="testrealm@host.com", + qop="auth,auth-int", + nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", + opaque="5ccc069c403ebaf9f0171e9517f40e41"''') + assert wa.type == 'digest' + assert wa.realm == 'testrealm@host.com' + assert 'auth' in wa.qop + assert 'auth-int' in wa.qop + assert wa.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' + assert wa.opaque == '5ccc069c403ebaf9f0171e9517f40e41' + + wa = http.parse_www_authenticate_header('broken') + assert wa.type == 'broken' + + assert not http.parse_www_authenticate_header('').type + assert not http.parse_www_authenticate_header('') + + def test_etags(self): + assert http.quote_etag('foo') == '"foo"' + assert http.quote_etag('foo', True) == 'w/"foo"' + assert http.unquote_etag('"foo"') == ('foo', False) + assert http.unquote_etag('w/"foo"') == ('foo', True) + es = 
http.parse_etags('"foo", "bar", w/"baz", blar') + assert sorted(es) == ['bar', 'blar', 'foo'] + assert 'foo' in es + assert 'baz' not in es + assert es.contains_weak('baz') + assert 'blar' in es + assert es.contains_raw('w/"baz"') + assert es.contains_raw('"foo"') + assert sorted(es.to_header().split(', ')) == ['"bar"', '"blar"', '"foo"', 'w/"baz"'] + + def test_etags_nonzero(self): + etags = http.parse_etags('w/"foo"') + assert bool(etags) + assert etags.contains_raw('w/"foo"') + + def test_parse_date(self): + assert http.parse_date('Sun, 06 Nov 1994 08:49:37 GMT ') == datetime(1994, 11, 6, 8, 49, 37) + assert http.parse_date('Sunday, 06-Nov-94 08:49:37 GMT') == datetime(1994, 11, 6, 8, 49, 37) + assert http.parse_date(' Sun Nov 6 08:49:37 1994') == datetime(1994, 11, 6, 8, 49, 37) + assert http.parse_date('foo') is None + + def test_parse_date_overflows(self): + assert http.parse_date(' Sun 02 Feb 1343 08:49:37 GMT') == datetime(1343, 2, 2, 8, 49, 37) + assert http.parse_date('Thu, 01 Jan 1970 00:00:00 GMT') == datetime(1970, 1, 1, 0, 0) + assert http.parse_date('Thu, 33 Jan 1970 00:00:00 GMT') is None + + def test_remove_entity_headers(self): + now = http.http_date() + headers1 = [('Date', now), ('Content-Type', 'text/html'), ('Content-Length', '0')] + headers2 = datastructures.Headers(headers1) + + http.remove_entity_headers(headers1) + assert headers1 == [('Date', now)] + + http.remove_entity_headers(headers2) + assert headers2 == datastructures.Headers([(u'Date', now)]) + + def test_remove_hop_by_hop_headers(self): + headers1 = [('Connection', 'closed'), ('Foo', 'bar'), + ('Keep-Alive', 'wtf')] + headers2 = datastructures.Headers(headers1) + + http.remove_hop_by_hop_headers(headers1) + assert headers1 == [('Foo', 'bar')] + + http.remove_hop_by_hop_headers(headers2) + assert headers2 == datastructures.Headers([('Foo', 'bar')]) + + def test_parse_options_header(self): + assert http.parse_options_header(r'something; foo="other\"thing"') == \ + ('something', 
{'foo': 'other"thing'}) + assert http.parse_options_header(r'something; foo="other\"thing"; meh=42') == \ + ('something', {'foo': 'other"thing', 'meh': '42'}) + assert http.parse_options_header(r'something; foo="other\"thing"; meh=42; bleh') == \ + ('something', {'foo': 'other"thing', 'meh': '42', 'bleh': None}) + assert http.parse_options_header('something; foo="other;thing"; meh=42; bleh') == \ + ('something', {'foo': 'other;thing', 'meh': '42', 'bleh': None}) + assert http.parse_options_header('something; foo="otherthing"; meh=; bleh') == \ + ('something', {'foo': 'otherthing', 'meh': None, 'bleh': None}) + + + + def test_dump_options_header(self): + assert http.dump_options_header('foo', {'bar': 42}) == \ + 'foo; bar=42' + assert http.dump_options_header('foo', {'bar': 42, 'fizz': None}) in \ + ('foo; bar=42; fizz', 'foo; fizz; bar=42') + + def test_dump_header(self): + assert http.dump_header([1, 2, 3]) == '1, 2, 3' + assert http.dump_header([1, 2, 3], allow_token=False) == '"1", "2", "3"' + assert http.dump_header({'foo': 'bar'}, allow_token=False) == 'foo="bar"' + assert http.dump_header({'foo': 'bar'}) == 'foo=bar' + + def test_is_resource_modified(self): + env = create_environ() + + # ignore POST + env['REQUEST_METHOD'] = 'POST' + assert not http.is_resource_modified(env, etag='testing') + env['REQUEST_METHOD'] = 'GET' + + # etagify from data + pytest.raises(TypeError, http.is_resource_modified, env, + data='42', etag='23') + env['HTTP_IF_NONE_MATCH'] = http.generate_etag(b'awesome') + assert not http.is_resource_modified(env, data=b'awesome') + + env['HTTP_IF_MODIFIED_SINCE'] = http.http_date(datetime(2008, 1, 1, 12, 30)) + assert not http.is_resource_modified(env, + last_modified=datetime(2008, 1, 1, 12, 00)) + assert http.is_resource_modified(env, + last_modified=datetime(2008, 1, 1, 13, 00)) + + def test_date_formatting(self): + assert http.cookie_date(0) == 'Thu, 01-Jan-1970 00:00:00 GMT' + assert http.cookie_date(datetime(1970, 1, 1)) == 'Thu, 
01-Jan-1970 00:00:00 GMT' + assert http.http_date(0) == 'Thu, 01 Jan 1970 00:00:00 GMT' + assert http.http_date(datetime(1970, 1, 1)) == 'Thu, 01 Jan 1970 00:00:00 GMT' + + def test_cookies(self): + strict_eq( + dict(http.parse_cookie('dismiss-top=6; CP=null*; PHPSESSID=0a539d42abc001cd' + 'c762809248d4beed; a=42; b="\\\";"')), + { + 'CP': u'null*', + 'PHPSESSID': u'0a539d42abc001cdc762809248d4beed', + 'a': u'42', + 'dismiss-top': u'6', + 'b': u'\";' + } + ) + rv = http.dump_cookie('foo', 'bar baz blub', 360, httponly=True, + sync_expires=False) + assert type(rv) is str + assert set(rv.split('; ')) == set(['HttpOnly', 'Max-Age=360', + 'Path=/', 'foo="bar baz blub"']) + + strict_eq(dict(http.parse_cookie('fo234{=bar; blub=Blah')), + {'fo234{': u'bar', 'blub': u'Blah'}) + + def test_cookie_quoting(self): + val = http.dump_cookie("foo", "?foo") + strict_eq(val, 'foo="?foo"; Path=/') + strict_eq(dict(http.parse_cookie(val)), {'foo': u'?foo'}) + + strict_eq(dict(http.parse_cookie(r'foo="foo\054bar"')), + {'foo': u'foo,bar'}) + + def test_cookie_domain_resolving(self): + val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') + strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/') + + def test_cookie_unicode_dumping(self): + val = http.dump_cookie('foo', u'\N{SNOWMAN}') + h = datastructures.Headers() + h.add('Set-Cookie', val) + assert h['Set-Cookie'] == 'foo="\\342\\230\\203"; Path=/' + + cookies = http.parse_cookie(h['Set-Cookie']) + assert cookies['foo'] == u'\N{SNOWMAN}' + + def test_cookie_unicode_keys(self): + # Yes, this is technically against the spec but happens + val = http.dump_cookie(u'fö', u'fö') + assert val == wsgi_encoding_dance(u'fö="f\\303\\266"; Path=/', 'utf-8') + cookies = http.parse_cookie(val) + assert cookies[u'fö'] == u'fö' + + def test_cookie_unicode_parsing(self): + # This is actually a correct test. This is what is being submitted + # by firefox if you set an unicode cookie and we get the cookie sent + # in on Python 3 under PEP 3333. 
+ cookies = http.parse_cookie(u'fö=fö') + assert cookies[u'fö'] == u'fö' + + def test_cookie_domain_encoding(self): + val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') + strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/') + + val = http.dump_cookie('foo', 'bar', domain=u'.\N{SNOWMAN}.com') + strict_eq(val, 'foo=bar; Domain=.xn--n3h.com; Path=/') + + val = http.dump_cookie('foo', 'bar', domain=u'.foo.com') + strict_eq(val, 'foo=bar; Domain=.foo.com; Path=/') + + +class TestRange(object): + + def test_if_range_parsing(self): + rv = http.parse_if_range_header('"Test"') + assert rv.etag == 'Test' + assert rv.date is None + assert rv.to_header() == '"Test"' + + # weak information is dropped + rv = http.parse_if_range_header('w/"Test"') + assert rv.etag == 'Test' + assert rv.date is None + assert rv.to_header() == '"Test"' + + # broken etags are supported too + rv = http.parse_if_range_header('bullshit') + assert rv.etag == 'bullshit' + assert rv.date is None + assert rv.to_header() == '"bullshit"' + + rv = http.parse_if_range_header('Thu, 01 Jan 1970 00:00:00 GMT') + assert rv.etag is None + assert rv.date == datetime(1970, 1, 1) + assert rv.to_header() == 'Thu, 01 Jan 1970 00:00:00 GMT' + + for x in '', None: + rv = http.parse_if_range_header(x) + assert rv.etag is None + assert rv.date is None + assert rv.to_header() == '' + + def test_range_parsing(self): + rv = http.parse_range_header('bytes=52') + assert rv is None + + rv = http.parse_range_header('bytes=52-') + assert rv.units == 'bytes' + assert rv.ranges == [(52, None)] + assert rv.to_header() == 'bytes=52-' + + rv = http.parse_range_header('bytes=52-99') + assert rv.units == 'bytes' + assert rv.ranges == [(52, 100)] + assert rv.to_header() == 'bytes=52-99' + + rv = http.parse_range_header('bytes=52-99,-1000') + assert rv.units == 'bytes' + assert rv.ranges == [(52, 100), (-1000, None)] + assert rv.to_header() == 'bytes=52-99,-1000' + + rv = http.parse_range_header('bytes = 1 - 100') + assert 
rv.units == 'bytes' + assert rv.ranges == [(1, 101)] + assert rv.to_header() == 'bytes=1-100' + + rv = http.parse_range_header('AWesomes=0-999') + assert rv.units == 'awesomes' + assert rv.ranges == [(0, 1000)] + assert rv.to_header() == 'awesomes=0-999' + + def test_content_range_parsing(self): + rv = http.parse_content_range_header('bytes 0-98/*') + assert rv.units == 'bytes' + assert rv.start == 0 + assert rv.stop == 99 + assert rv.length is None + assert rv.to_header() == 'bytes 0-98/*' + + rv = http.parse_content_range_header('bytes 0-98/*asdfsa') + assert rv is None + + rv = http.parse_content_range_header('bytes 0-99/100') + assert rv.to_header() == 'bytes 0-99/100' + rv.start = None + rv.stop = None + assert rv.units == 'bytes' + assert rv.to_header() == 'bytes */100' + + rv = http.parse_content_range_header('bytes */100') + assert rv.start is None + assert rv.stop is None + assert rv.length == 100 + assert rv.units == 'bytes' + + +class TestRegression(object): + + def test_best_match_works(self): + # was a bug in 0.6 + rv = http.parse_accept_header('foo=,application/xml,application/xhtml+xml,' + 'text/html;q=0.9,text/plain;q=0.8,' + 'image/png,*/*;q=0.5', + datastructures.MIMEAccept).best_match(['foo/bar']) + assert rv == 'foo/bar' diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_internal.py python-werkzeug-0.10.4+dfsg1/tests/test_internal.py --- python-werkzeug-0.9.6+dfsg/tests/test_internal.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_internal.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,72 @@ +# -*- coding: utf-8 -*- +""" + tests.internal + ~~~~~~~~~~~~~~ + + Internal tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import pytest + +from datetime import datetime +from warnings import filterwarnings, resetwarnings + +from werkzeug.wrappers import Request, Response + +from werkzeug import _internal as internal +from werkzeug.test import create_environ + + +def test_date_to_unix(): + assert internal._date_to_unix(datetime(1970, 1, 1)) == 0 + assert internal._date_to_unix(datetime(1970, 1, 1, 1, 0, 0)) == 3600 + assert internal._date_to_unix(datetime(1970, 1, 1, 1, 1, 1)) == 3661 + x = datetime(2010, 2, 15, 16, 15, 39) + assert internal._date_to_unix(x) == 1266250539 + +def test_easteregg(): + req = Request.from_values('/?macgybarchakku') + resp = Response.force_type(internal._easteregg(None), req) + assert b'About Werkzeug' in resp.get_data() + assert b'the Swiss Army knife of Python web development' in resp.get_data() + +def test_wrapper_internals(): + req = Request.from_values(data={'foo': 'bar'}, method='POST') + req._load_form_data() + assert req.form.to_dict() == {'foo': 'bar'} + + # second call does not break + req._load_form_data() + assert req.form.to_dict() == {'foo': 'bar'} + + # check reprs + assert repr(req) == "" + resp = Response() + assert repr(resp) == '' + resp.set_data('Hello World!') + assert repr(resp) == '' + resp.response = iter(['Test']) + assert repr(resp) == '' + + # unicode data does not set content length + response = Response([u'Hällo Wörld']) + headers = response.get_wsgi_headers(create_environ()) + assert u'Content-Length' not in headers + + response = Response([u'Hällo Wörld'.encode('utf-8')]) + headers = response.get_wsgi_headers(create_environ()) + assert u'Content-Length' in headers + + # check for internal warnings + filterwarnings('error', category=Warning) + response = Response() + environ = create_environ() + response.response = 'What the...?' 
+ pytest.raises(Warning, lambda: list(response.iter_encoded())) + pytest.raises(Warning, lambda: list(response.get_app_iter(environ))) + response.direct_passthrough = True + pytest.raises(Warning, lambda: list(response.iter_encoded())) + pytest.raises(Warning, lambda: list(response.get_app_iter(environ))) + resetwarnings() diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_local.py python-werkzeug-0.10.4+dfsg1/tests/test_local.py --- python-werkzeug-0.9.6+dfsg/tests/test_local.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_local.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,150 @@ +# -*- coding: utf-8 -*- +""" + tests.local + ~~~~~~~~~~~~~~~~~~~~~~~~ + + Local and local proxy tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +import time +from threading import Thread + +from werkzeug import local + + +def test_basic_local(): + l = local.Local() + l.foo = 0 + values = [] + def value_setter(idx): + time.sleep(0.01 * idx) + l.foo = idx + time.sleep(0.02) + values.append(l.foo) + threads = [Thread(target=value_setter, args=(x,)) + for x in [1, 2, 3]] + for thread in threads: + thread.start() + time.sleep(0.2) + assert sorted(values) == [1, 2, 3] + + def delfoo(): + del l.foo + delfoo() + pytest.raises(AttributeError, lambda: l.foo) + pytest.raises(AttributeError, delfoo) + + local.release_local(l) + +def test_local_release(): + l = local.Local() + l.foo = 42 + local.release_local(l) + assert not hasattr(l, 'foo') + + ls = local.LocalStack() + ls.push(42) + local.release_local(ls) + assert ls.top is None + +def test_local_proxy(): + foo = [] + ls = local.LocalProxy(lambda: foo) + ls.append(42) + ls.append(23) + ls[1:] = [1, 2, 3] + assert foo == [42, 1, 2, 3] + assert repr(foo) == repr(ls) + assert foo[0] == 42 + foo += [1] + assert list(foo) == [42, 1, 2, 3, 1] + +def test_local_proxy_operations_math(): + foo = 2 + ls = local.LocalProxy(lambda: foo) + assert ls + 
1 == 3 + assert 1 + ls == 3 + assert ls - 1 == 1 + assert 1 - ls == -1 + assert ls * 1 == 2 + assert 1 * ls == 2 + assert ls / 1 == 2 + assert 1.0 / ls == 0.5 + assert ls // 1.0 == 2.0 + assert 1.0 // ls == 0.0 + assert ls % 2 == 0 + assert 2 % ls == 0 + +def test_local_proxy_operations_strings(): + foo = "foo" + ls = local.LocalProxy(lambda: foo) + assert ls + "bar" == "foobar" + assert "bar" + ls == "barfoo" + assert ls * 2 == "foofoo" + + foo = "foo %s" + assert ls % ("bar",) == "foo bar" + +def test_local_stack(): + ident = local.get_ident() + + ls = local.LocalStack() + assert ident not in ls._local.__storage__ + assert ls.top is None + ls.push(42) + assert ident in ls._local.__storage__ + assert ls.top == 42 + ls.push(23) + assert ls.top == 23 + ls.pop() + assert ls.top == 42 + ls.pop() + assert ls.top is None + assert ls.pop() is None + assert ls.pop() is None + + proxy = ls() + ls.push([1, 2]) + assert proxy == [1, 2] + ls.push((1, 2)) + assert proxy == (1, 2) + ls.pop() + ls.pop() + assert repr(proxy) == '<LocalProxy unbound>' + + assert ident not in ls._local.__storage__ + +def test_local_proxies_with_callables(): + foo = 42 + ls = local.LocalProxy(lambda: foo) + assert ls == 42 + foo = [23] + ls.append(42) + assert ls == [23, 42] + assert foo == [23, 42] + +def test_custom_idents(): + ident = 0 + l = local.Local() + stack = local.LocalStack() + mgr = local.LocalManager([l, stack], ident_func=lambda: ident) + + l.foo = 42 + stack.push({'foo': 42}) + ident = 1 + l.foo = 23 + stack.push({'foo': 23}) + ident = 0 + assert l.foo == 42 + assert stack.top['foo'] == 42 + stack.pop() + assert stack.top is None + ident = 1 + assert l.foo == 23 + assert stack.top['foo'] == 23 + stack.pop() + assert stack.top is None diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_routing.py python-werkzeug-0.10.4+dfsg1/tests/test_routing.py --- python-werkzeug-0.9.6+dfsg/tests/test_routing.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_routing.py 2015-03-26 
15:33:58.000000000 +0000 @@ -0,0 +1,742 @@ +# -*- coding: utf-8 -*- +""" + tests.routing + ~~~~~~~~~~~~~ + + Routing tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +import uuid + +from tests import strict_eq + +from werkzeug import routing as r +from werkzeug.wrappers import Response +from werkzeug.datastructures import ImmutableDict, MultiDict +from werkzeug.test import create_environ + + +def test_basic_routing(): + map = r.Map([ + r.Rule('/', endpoint='index'), + r.Rule('/foo', endpoint='foo'), + r.Rule('/bar/', endpoint='bar') + ]) + adapter = map.bind('example.org', '/') + assert adapter.match('/') == ('index', {}) + assert adapter.match('/foo') == ('foo', {}) + assert adapter.match('/bar/') == ('bar', {}) + pytest.raises(r.RequestRedirect, lambda: adapter.match('/bar')) + pytest.raises(r.NotFound, lambda: adapter.match('/blub')) + + adapter = map.bind('example.org', '/test') + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match('/bar') + assert excinfo.value.new_url == 'http://example.org/test/bar/' + + adapter = map.bind('example.org', '/') + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match('/bar') + assert excinfo.value.new_url == 'http://example.org/bar/' + + adapter = map.bind('example.org', '/') + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match('/bar', query_args={'aha': 'muhaha'}) + assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha' + + adapter = map.bind('example.org', '/') + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match('/bar', query_args='aha=muhaha') + assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha' + + adapter = map.bind_to_environ(create_environ('/bar?foo=bar', + 'http://example.org/')) + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match() + assert excinfo.value.new_url == 'http://example.org/bar/?foo=bar' + +def test_environ_defaults(): + environ = 
create_environ("/foo") + strict_eq(environ["PATH_INFO"], '/foo') + m = r.Map([r.Rule("/foo", endpoint="foo"), r.Rule("/bar", endpoint="bar")]) + a = m.bind_to_environ(environ) + strict_eq(a.match("/foo"), ('foo', {})) + strict_eq(a.match(), ('foo', {})) + strict_eq(a.match("/bar"), ('bar', {})) + pytest.raises(r.NotFound, a.match, "/bars") + +def test_environ_nonascii_pathinfo(): + environ = create_environ(u'/лошадь') + m = r.Map([ + r.Rule(u'/', endpoint='index'), + r.Rule(u'/лошадь', endpoint='horse') + ]) + a = m.bind_to_environ(environ) + strict_eq(a.match(u'/'), ('index', {})) + strict_eq(a.match(u'/лошадь'), ('horse', {})) + pytest.raises(r.NotFound, a.match, u'/барсук') + +def test_basic_building(): + map = r.Map([ + r.Rule('/', endpoint='index'), + r.Rule('/foo', endpoint='foo'), + r.Rule('/bar/', endpoint='bar'), + r.Rule('/bar/', endpoint='bari'), + r.Rule('/bar/', endpoint='barf'), + r.Rule('/bar/', endpoint='barp'), + r.Rule('/hehe', endpoint='blah', subdomain='blah') + ]) + adapter = map.bind('example.org', '/', subdomain='blah') + + assert adapter.build('index', {}) == 'http://example.org/' + assert adapter.build('foo', {}) == 'http://example.org/foo' + assert adapter.build('bar', {'baz': 'blub'}) == 'http://example.org/bar/blub' + assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/bar/50' + multivalues = MultiDict([('bazi', 50), ('bazi', None)]) + assert adapter.build('bari', multivalues) == 'http://example.org/bar/50' + assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/bar/0.815' + assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/bar/la/di' + assert adapter.build('blah', {}) == '/hehe' + pytest.raises(r.BuildError, lambda: adapter.build('urks')) + + adapter = map.bind('example.org', '/test', subdomain='blah') + assert adapter.build('index', {}) == 'http://example.org/test/' + assert adapter.build('foo', {}) == 'http://example.org/test/foo' + assert adapter.build('bar', {'baz': 'blub'}) == 
'http://example.org/test/bar/blub' + assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/test/bar/50' + assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/test/bar/0.815' + assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/test/bar/la/di' + assert adapter.build('blah', {}) == '/test/hehe' + + adapter = map.bind('example.org') + assert adapter.build('foo', {}) == '/foo' + assert adapter.build('foo', {}, force_external=True) == 'http://example.org/foo' + adapter = map.bind('example.org', url_scheme='') + assert adapter.build('foo', {}) == '/foo' + assert adapter.build('foo', {}, force_external=True) == '//example.org/foo' + +def test_defaults(): + map = r.Map([ + r.Rule('/foo/', defaults={'page': 1}, endpoint='foo'), + r.Rule('/foo/<int:page>', endpoint='foo') + ]) + adapter = map.bind('example.org', '/') + + assert adapter.match('/foo/') == ('foo', {'page': 1}) + pytest.raises(r.RequestRedirect, lambda: adapter.match('/foo/1')) + assert adapter.match('/foo/2') == ('foo', {'page': 2}) + assert adapter.build('foo', {}) == '/foo/' + assert adapter.build('foo', {'page': 1}) == '/foo/' + assert adapter.build('foo', {'page': 2}) == '/foo/2' + +def test_greedy(): + map = r.Map([ + r.Rule('/foo', endpoint='foo'), + r.Rule('/<path:bar>', endpoint='bar'), + r.Rule('/<path:bar>/<path:blub>', endpoint='bar') + ]) + adapter = map.bind('example.org', '/') + + assert adapter.match('/foo') == ('foo', {}) + assert adapter.match('/blub') == ('bar', {'bar': 'blub'}) + assert adapter.match('/he/he') == ('bar', {'bar': 'he', 'blub': 'he'}) + + assert adapter.build('foo', {}) == '/foo' + assert adapter.build('bar', {'bar': 'blub'}) == '/blub' + assert adapter.build('bar', {'bar': 'blub', 'blub': 'bar'}) == '/blub/bar' + +def test_path(): + map = r.Map([ + r.Rule('/', defaults={'name': 'FrontPage'}, endpoint='page'), + r.Rule('/Special', endpoint='special'), + r.Rule('/<int:year>', endpoint='year'), + r.Rule('/<path:name>', endpoint='page'), + r.Rule('/<path:name>/edit', endpoint='editpage'), + 
r.Rule('/<path:name>/silly/<path:name2>', endpoint='sillypage'), + r.Rule('/<path:name>/silly/<path:name2>/edit', endpoint='editsillypage'), + r.Rule('/Talk:<path:name>', endpoint='talk'), + r.Rule('/User:<username>', endpoint='user'), + r.Rule('/User:<username>/<path:name>', endpoint='userpage'), + r.Rule('/Files/<path:file>', endpoint='files'), + ]) + adapter = map.bind('example.org', '/') + + assert adapter.match('/') == ('page', {'name':'FrontPage'}) + pytest.raises(r.RequestRedirect, lambda: adapter.match('/FrontPage')) + assert adapter.match('/Special') == ('special', {}) + assert adapter.match('/2007') == ('year', {'year':2007}) + assert adapter.match('/Some/Page') == ('page', {'name':'Some/Page'}) + assert adapter.match('/Some/Page/edit') == ('editpage', {'name':'Some/Page'}) + assert adapter.match('/Foo/silly/bar') == ('sillypage', {'name':'Foo', 'name2':'bar'}) + assert adapter.match('/Foo/silly/bar/edit') == ('editsillypage', {'name':'Foo', 'name2':'bar'}) + assert adapter.match('/Talk:Foo/Bar') == ('talk', {'name':'Foo/Bar'}) + assert adapter.match('/User:thomas') == ('user', {'username':'thomas'}) + assert adapter.match('/User:thomas/projects/werkzeug') == \ + ('userpage', {'username':'thomas', 'name':'projects/werkzeug'}) + assert adapter.match('/Files/downloads/werkzeug/0.2.zip') == \ + ('files', {'file':'downloads/werkzeug/0.2.zip'}) + +def test_dispatch(): + env = create_environ('/') + map = r.Map([ + r.Rule('/', endpoint='root'), + r.Rule('/foo/', endpoint='foo') + ]) + adapter = map.bind_to_environ(env) + + raise_this = None + def view_func(endpoint, values): + if raise_this is not None: + raise raise_this + return Response(repr((endpoint, values))) + dispatch = lambda p, q=False: Response.force_type(adapter.dispatch(view_func, p, + catch_http_exceptions=q), env) + + assert dispatch('/').data == b"('root', {})" + assert dispatch('/foo').status_code == 301 + raise_this = r.NotFound() + pytest.raises(r.NotFound, lambda: dispatch('/bar')) + assert dispatch('/bar', True).status_code == 404 + +def test_http_host_before_server_name(): + env = { + 
'HTTP_HOST': 'wiki.example.com', + 'SERVER_NAME': 'web0.example.com', + 'SERVER_PORT': '80', + 'SCRIPT_NAME': '', + 'PATH_INFO': '', + 'REQUEST_METHOD': 'GET', + 'wsgi.url_scheme': 'http' + } + map = r.Map([r.Rule('/', endpoint='index', subdomain='wiki')]) + adapter = map.bind_to_environ(env, server_name='example.com') + assert adapter.match('/') == ('index', {}) + assert adapter.build('index', force_external=True) == 'http://wiki.example.com/' + assert adapter.build('index') == '/' + + env['HTTP_HOST'] = 'admin.example.com' + adapter = map.bind_to_environ(env, server_name='example.com') + assert adapter.build('index') == 'http://wiki.example.com/' + +def test_adapter_url_parameter_sorting(): + map = r.Map([r.Rule('/', endpoint='index')], sort_parameters=True, + sort_key=lambda x: x[1]) + adapter = map.bind('localhost', '/') + assert adapter.build('index', {'x': 20, 'y': 10, 'z': 30}, + force_external=True) == 'http://localhost/?y=10&x=20&z=30' + +def test_request_direct_charset_bug(): + map = r.Map([r.Rule(u'/öäü/')]) + adapter = map.bind('localhost', '/') + + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match(u'/öäü') + assert excinfo.value.new_url == 'http://localhost/%C3%B6%C3%A4%C3%BC/' + +def test_request_redirect_default(): + map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}), + r.Rule(u'/foo/<int:bar>')]) + adapter = map.bind('localhost', '/') + + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match(u'/foo/42') + assert excinfo.value.new_url == 'http://localhost/foo' + +def test_request_redirect_default_subdomain(): + map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}, subdomain='test'), + r.Rule(u'/foo/<int:bar>', subdomain='other')]) + adapter = map.bind('localhost', '/', subdomain='other') + + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match(u'/foo/42') + assert excinfo.value.new_url == 'http://test.localhost/foo' + +def test_adapter_match_return_rule(): + rule = r.Rule('/foo/', endpoint='foo') + map = r.Map([rule]) + adapter = 
map.bind('localhost', '/') + assert adapter.match('/foo/', return_rule=True) == (rule, {}) + +def test_server_name_interpolation(): + server_name = 'example.invalid' + map = r.Map([r.Rule('/', endpoint='index'), + r.Rule('/', endpoint='alt', subdomain='alt')]) + + env = create_environ('/', 'http://%s/' % server_name) + adapter = map.bind_to_environ(env, server_name=server_name) + assert adapter.match() == ('index', {}) + + env = create_environ('/', 'http://alt.%s/' % server_name) + adapter = map.bind_to_environ(env, server_name=server_name) + assert adapter.match() == ('alt', {}) + + env = create_environ('/', 'http://%s/' % server_name) + adapter = map.bind_to_environ(env, server_name='foo') + assert adapter.subdomain == '' + +def test_rule_emptying(): + rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'], + False, 'x', True, None) + rule2 = rule.empty() + assert rule.__dict__ == rule2.__dict__ + rule.methods.add('GET') + assert rule.__dict__ != rule2.__dict__ + rule.methods.discard('GET') + rule.defaults['meh'] = 'aha' + assert rule.__dict__ != rule2.__dict__ + +def test_rule_templates(): + testcase = r.RuleTemplate( + [ r.Submount('/test/$app', + [ r.Rule('/foo/', endpoint='handle_foo') + , r.Rule('/bar/', endpoint='handle_bar') + , r.Rule('/baz/', endpoint='handle_baz') + ]), + r.EndpointPrefix('${app}', + [ r.Rule('/${app}-blah', endpoint='bar') + , r.Rule('/${app}-meh', endpoint='baz') + ]), + r.Subdomain('$app', + [ r.Rule('/blah', endpoint='x_bar') + , r.Rule('/meh', endpoint='x_baz') + ]) + ]) + + url_map = r.Map( + [ testcase(app='test1') + , testcase(app='test2') + , testcase(app='test3') + , testcase(app='test4') + ]) + + out = sorted([(x.rule, x.subdomain, x.endpoint) + for x in url_map.iter_rules()]) + + assert out == ([ + ('/blah', 'test1', 'x_bar'), + ('/blah', 'test2', 'x_bar'), + ('/blah', 'test3', 'x_bar'), + ('/blah', 'test4', 'x_bar'), + ('/meh', 'test1', 'x_baz'), + ('/meh', 'test2', 'x_baz'), + ('/meh', 'test3', 'x_baz'), + ('/meh', 'test4', 
'x_baz'), + ('/test/test1/bar/', '', 'handle_bar'), + ('/test/test1/baz/', '', 'handle_baz'), + ('/test/test1/foo/', '', 'handle_foo'), + ('/test/test2/bar/', '', 'handle_bar'), + ('/test/test2/baz/', '', 'handle_baz'), + ('/test/test2/foo/', '', 'handle_foo'), + ('/test/test3/bar/', '', 'handle_bar'), + ('/test/test3/baz/', '', 'handle_baz'), + ('/test/test3/foo/', '', 'handle_foo'), + ('/test/test4/bar/', '', 'handle_bar'), + ('/test/test4/baz/', '', 'handle_baz'), + ('/test/test4/foo/', '', 'handle_foo'), + ('/test1-blah', '', 'test1bar'), + ('/test1-meh', '', 'test1baz'), + ('/test2-blah', '', 'test2bar'), + ('/test2-meh', '', 'test2baz'), + ('/test3-blah', '', 'test3bar'), + ('/test3-meh', '', 'test3baz'), + ('/test4-blah', '', 'test4bar'), + ('/test4-meh', '', 'test4baz') + ]) + +def test_non_string_parts(): + m = r.Map([ + r.Rule('/<foo>', endpoint='foo') + ]) + a = m.bind('example.com') + assert a.build('foo', {'foo': 42}) == '/42' + +def test_complex_routing_rules(): + m = r.Map([ + r.Rule('/', endpoint='index'), + r.Rule('/<int:blub>', endpoint='an_int'), + r.Rule('/<blub>', endpoint='a_string'), + r.Rule('/foo/', endpoint='nested'), + r.Rule('/foobar/', endpoint='nestedbar'), + r.Rule('/foo/<path:testing>/', endpoint='nested_show'), + r.Rule('/foo/<path:testing>/edit', endpoint='nested_edit'), + r.Rule('/users/', endpoint='users', defaults={'page': 1}), + r.Rule('/users/page/<int:page>', endpoint='users'), + r.Rule('/foox', endpoint='foox'), + r.Rule('/<path:bar>/<path:blub>', endpoint='barx_path_path') + ]) + a = m.bind('example.com') + + assert a.match('/') == ('index', {}) + assert a.match('/42') == ('an_int', {'blub': 42}) + assert a.match('/blub') == ('a_string', {'blub': 'blub'}) + assert a.match('/foo/') == ('nested', {}) + assert a.match('/foobar/') == ('nestedbar', {}) + assert a.match('/foo/1/2/3/') == ('nested_show', {'testing': '1/2/3'}) + assert a.match('/foo/1/2/3/edit') == ('nested_edit', {'testing': '1/2/3'}) + assert a.match('/users/') == ('users', {'page': 1}) + assert a.match('/users/page/2') == ('users', {'page': 
2}) + assert a.match('/foox') == ('foox', {}) + assert a.match('/1/2/3') == ('barx_path_path', {'bar': '1', 'blub': '2/3'}) + + assert a.build('index') == '/' + assert a.build('an_int', {'blub': 42}) == '/42' + assert a.build('a_string', {'blub': 'test'}) == '/test' + assert a.build('nested') == '/foo/' + assert a.build('nestedbar') == '/foobar/' + assert a.build('nested_show', {'testing': '1/2/3'}) == '/foo/1/2/3/' + assert a.build('nested_edit', {'testing': '1/2/3'}) == '/foo/1/2/3/edit' + assert a.build('users', {'page': 1}) == '/users/' + assert a.build('users', {'page': 2}) == '/users/page/2' + assert a.build('foox') == '/foox' + assert a.build('barx_path_path', {'bar': '1', 'blub': '2/3'}) == '/1/2/3' + +def test_default_converters(): + class MyMap(r.Map): + default_converters = r.Map.default_converters.copy() + default_converters['foo'] = r.UnicodeConverter + assert isinstance(r.Map.default_converters, ImmutableDict) + m = MyMap([ + r.Rule('/a/<foo:a>', endpoint='a'), + r.Rule('/b/<foo:b>', endpoint='b'), + r.Rule('/c/<bar:c>', endpoint='c') + ], converters={'bar': r.UnicodeConverter}) + a = m.bind('example.org', '/') + assert a.match('/a/1') == ('a', {'a': '1'}) + assert a.match('/b/2') == ('b', {'b': '2'}) + assert a.match('/c/3') == ('c', {'c': '3'}) + assert 'foo' not in r.Map.default_converters + +def test_uuid_converter(): + m = r.Map([ + r.Rule('/a/<uuid:a_uuid>', endpoint='a') + ]) + a = m.bind('example.org', '/') + rooute, kwargs = a.match('/a/a8098c1a-f86e-11da-bd1a-00112444be1e') + assert type(kwargs['a_uuid']) == uuid.UUID + +def test_converter_with_tuples(): + ''' + Regression test for https://github.com/mitsuhiko/werkzeug/issues/709 + ''' + class TwoValueConverter(r.BaseConverter): + def __init__(self, *args, **kwargs): + super(TwoValueConverter, self).__init__(*args, **kwargs) + self.regex = r'(\w\w+)/(\w\w+)' + + def to_python(self, two_values): + one, two = two_values.split('/') + return one, two + + def to_url(self, values): + return "%s/%s" % (values[0], values[1]) + + map 
= r.Map([ + r.Rule('/<two:foo>/', endpoint='handler') + ], converters={'two': TwoValueConverter}) + a = map.bind('example.org', '/') + route, kwargs = a.match('/qwert/yuiop/') + assert kwargs['foo'] == ('qwert', 'yuiop') + +def test_build_append_unknown(): + map = r.Map([ + r.Rule('/bar/<float:bazf>', endpoint='barf') + ]) + adapter = map.bind('example.org', '/', subdomain='blah') + assert adapter.build('barf', {'bazf': 0.815, 'bif' : 1.0}) == \ + 'http://example.org/bar/0.815?bif=1.0' + assert adapter.build('barf', {'bazf': 0.815, 'bif' : 1.0}, + append_unknown=False) == 'http://example.org/bar/0.815' + +def test_build_append_multiple(): + map = r.Map([ + r.Rule('/bar/<float:bazf>', endpoint='barf') + ]) + adapter = map.bind('example.org', '/', subdomain='blah') + params = {'bazf': 0.815, 'bif': [1.0, 3.0], 'pof': 2.0} + a, b = adapter.build('barf', params).split('?') + assert a == 'http://example.org/bar/0.815' + assert set(b.split('&')) == set('pof=2.0&bif=1.0&bif=3.0'.split('&')) + + +def test_method_fallback(): + map = r.Map([ + r.Rule('/', endpoint='index', methods=['GET']), + r.Rule('/<name>', endpoint='hello_name', methods=['GET']), + r.Rule('/select', endpoint='hello_select', methods=['POST']), + r.Rule('/search_get', endpoint='search', methods=['GET']), + r.Rule('/search_post', endpoint='search', methods=['POST']) + ]) + adapter = map.bind('example.com') + assert adapter.build('index') == '/' + assert adapter.build('index', method='GET') == '/' + assert adapter.build('hello_name', {'name': 'foo'}) == '/foo' + assert adapter.build('hello_select') == '/select' + assert adapter.build('hello_select', method='POST') == '/select' + assert adapter.build('search') == '/search_get' + assert adapter.build('search', method='GET') == '/search_get' + assert adapter.build('search', method='POST') == '/search_post' + +def test_implicit_head(): + url_map = r.Map([ + r.Rule('/get', methods=['GET'], endpoint='a'), + r.Rule('/post', methods=['POST'], endpoint='b') + ]) + adapter = url_map.bind('example.org') + 
assert adapter.match('/get', method='HEAD') == ('a', {}) + pytest.raises(r.MethodNotAllowed, adapter.match, + '/post', method='HEAD') + +def test_protocol_joining_bug(): + m = r.Map([r.Rule('/<foo>', endpoint='x')]) + a = m.bind('example.org') + assert a.build('x', {'foo': 'x:y'}) == '/x:y' + assert a.build('x', {'foo': 'x:y'}, force_external=True) == \ + 'http://example.org/x:y' + +def test_allowed_methods_querying(): + m = r.Map([r.Rule('/<foo>', methods=['GET', 'HEAD']), + r.Rule('/foo', methods=['POST'])]) + a = m.bind('example.org') + assert sorted(a.allowed_methods('/foo')) == ['GET', 'HEAD', 'POST'] + +def test_external_building_with_port(): + map = r.Map([ + r.Rule('/', endpoint='index'), + ]) + adapter = map.bind('example.org:5000', '/') + built_url = adapter.build('index', {}, force_external=True) + assert built_url == 'http://example.org:5000/', built_url + +def test_external_building_with_port_bind_to_environ(): + map = r.Map([ + r.Rule('/', endpoint='index'), + ]) + adapter = map.bind_to_environ( + create_environ('/', 'http://example.org:5000/'), + server_name="example.org:5000" + ) + built_url = adapter.build('index', {}, force_external=True) + assert built_url == 'http://example.org:5000/', built_url + +def test_external_building_with_port_bind_to_environ_wrong_servername(): + map = r.Map([ + r.Rule('/', endpoint='index'), + ]) + environ = create_environ('/', 'http://example.org:5000/') + adapter = map.bind_to_environ(environ, server_name="example.org") + assert adapter.subdomain == '' + +def test_converter_parser(): + args, kwargs = r.parse_converter_args(u'test, a=1, b=3.0') + + assert args == ('test',) + assert kwargs == {'a': 1, 'b': 3.0 } + + args, kwargs = r.parse_converter_args('') + assert not args and not kwargs + + args, kwargs = r.parse_converter_args('a, b, c,') + assert args == ('a', 'b', 'c') + assert not kwargs + + args, kwargs = r.parse_converter_args('True, False, None') + assert args == (True, False, None) + + args, kwargs = 
r.parse_converter_args('"foo", u"bar"') + assert args == ('foo', 'bar') + +def test_alias_redirects(): + m = r.Map([ + r.Rule('/', endpoint='index'), + r.Rule('/index.html', endpoint='index', alias=True), + r.Rule('/users/', defaults={'page': 1}, endpoint='users'), + r.Rule('/users/index.html', defaults={'page': 1}, alias=True, + endpoint='users'), + r.Rule('/users/page/<int:page>', endpoint='users'), + r.Rule('/users/page-<int:page>.html', alias=True, endpoint='users'), + ]) + a = m.bind('example.com') + + def ensure_redirect(path, new_url, args=None): + with pytest.raises(r.RequestRedirect) as excinfo: + a.match(path, query_args=args) + assert excinfo.value.new_url == 'http://example.com' + new_url + + ensure_redirect('/index.html', '/') + ensure_redirect('/users/index.html', '/users/') + ensure_redirect('/users/page-2.html', '/users/page/2') + ensure_redirect('/users/page-1.html', '/users/') + ensure_redirect('/users/page-1.html', '/users/?foo=bar', {'foo': 'bar'}) + + assert a.build('index') == '/' + assert a.build('users', {'page': 1}) == '/users/' + assert a.build('users', {'page': 2}) == '/users/page/2' + +@pytest.mark.parametrize('prefix', ('', '/aaa')) +def test_double_defaults(prefix): + m = r.Map([ + r.Rule(prefix + '/', defaults={'foo': 1, 'bar': False}, endpoint='x'), + r.Rule(prefix + '/<int:foo>', defaults={'bar': False}, endpoint='x'), + r.Rule(prefix + '/bar/', defaults={'foo': 1, 'bar': True}, endpoint='x'), + r.Rule(prefix + '/bar/<int:foo>', defaults={'bar': True}, endpoint='x') + ]) + a = m.bind('example.com') + + assert a.match(prefix + '/') == ('x', {'foo': 1, 'bar': False}) + assert a.match(prefix + '/2') == ('x', {'foo': 2, 'bar': False}) + assert a.match(prefix + '/bar/') == ('x', {'foo': 1, 'bar': True}) + assert a.match(prefix + '/bar/2') == ('x', {'foo': 2, 'bar': True}) + + assert a.build('x', {'foo': 1, 'bar': False}) == prefix + '/' + assert a.build('x', {'foo': 2, 'bar': False}) == prefix + '/2' + assert a.build('x', {'bar': False}) == prefix + '/' + assert a.build('x', 
{'foo': 1, 'bar': True}) == prefix + '/bar/' + assert a.build('x', {'foo': 2, 'bar': True}) == prefix + '/bar/2' + assert a.build('x', {'bar': True}) == prefix + '/bar/' + +def test_host_matching(): + m = r.Map([ + r.Rule('/', endpoint='index', host='www.'), + r.Rule('/', endpoint='files', host='files.'), + r.Rule('/foo/', defaults={'page': 1}, host='www.', endpoint='x'), + r.Rule('/', host='files.', endpoint='x') + ], host_matching=True) + + a = m.bind('www.example.com') + assert a.match('/') == ('index', {'domain': 'example.com'}) + assert a.match('/foo/') == ('x', {'domain': 'example.com', 'page': 1}) + + with pytest.raises(r.RequestRedirect) as excinfo: + a.match('/foo') + assert excinfo.value.new_url == 'http://www.example.com/foo/' + + a = m.bind('files.example.com') + assert a.match('/') == ('files', {'domain': 'example.com'}) + assert a.match('/2') == ('x', {'domain': 'example.com', 'page': 2}) + + with pytest.raises(r.RequestRedirect) as excinfo: + a.match('/1') + assert excinfo.value.new_url == 'http://www.example.com/foo/' + +def test_host_matching_building(): + m = r.Map([ + r.Rule('/', endpoint='index', host='www.domain.com'), + r.Rule('/', endpoint='foo', host='my.domain.com') + ], host_matching=True) + + www = m.bind('www.domain.com') + assert www.match('/') == ('index', {}) + assert www.build('index') == '/' + assert www.build('foo') == 'http://my.domain.com/' + + my = m.bind('my.domain.com') + assert my.match('/') == ('foo', {}) + assert my.build('foo') == '/' + assert my.build('index') == 'http://www.domain.com/' + +def test_server_name_casing(): + m = r.Map([ + r.Rule('/', endpoint='index', subdomain='foo') + ]) + + env = create_environ() + env['SERVER_NAME'] = env['HTTP_HOST'] = 'FOO.EXAMPLE.COM' + a = m.bind_to_environ(env, server_name='example.com') + assert a.match('/') == ('index', {}) + + env = create_environ() + env['SERVER_NAME'] = '127.0.0.1' + env['SERVER_PORT'] = '5000' + del env['HTTP_HOST'] + a = m.bind_to_environ(env, 
server_name='example.com') + with pytest.raises(r.NotFound): + a.match() + +def test_redirect_request_exception_code(): + exc = r.RequestRedirect('http://www.google.com/') + exc.code = 307 + env = create_environ() + strict_eq(exc.get_response(env).status_code, exc.code) + +def test_redirect_path_quoting(): + url_map = r.Map([ + r.Rule('/', defaults={'page': 1}, endpoint='category'), + r.Rule('//page/', endpoint='category') + ]) + adapter = url_map.bind('example.com') + + with pytest.raises(r.RequestRedirect) as excinfo: + adapter.match('/foo bar/page/1') + response = excinfo.value.get_response({}) + strict_eq(response.headers['location'], + u'http://example.com/foo%20bar') + +def test_unicode_rules(): + m = r.Map([ + r.Rule(u'/войти/', endpoint='enter'), + r.Rule(u'/foo+bar/', endpoint='foobar') + ]) + a = m.bind(u'☃.example.com') + with pytest.raises(r.RequestRedirect) as excinfo: + a.match(u'/войти') + strict_eq(excinfo.value.new_url, + 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') + + endpoint, values = a.match(u'/войти/') + strict_eq(endpoint, 'enter') + strict_eq(values, {}) + + with pytest.raises(r.RequestRedirect) as excinfo: + a.match(u'/foo+bar') + strict_eq(excinfo.value.new_url, 'http://xn--n3h.example.com/foo+bar/') + + endpoint, values = a.match(u'/foo+bar/') + strict_eq(endpoint, 'foobar') + strict_eq(values, {}) + + url = a.build('enter', {}, force_external=True) + strict_eq(url, 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') + + url = a.build('foobar', {}, force_external=True) + strict_eq(url, 'http://xn--n3h.example.com/foo+bar/') + +def test_empty_path_info(): + m = r.Map([ + r.Rule("/", endpoint="index"), + ]) + + b = m.bind("example.com", script_name="/approot") + with pytest.raises(r.RequestRedirect) as excinfo: + b.match("") + assert excinfo.value.new_url == "http://example.com/approot/" + + a = m.bind("example.com") + with pytest.raises(r.RequestRedirect) as excinfo: + a.match("") + assert excinfo.value.new_url 
== "http://example.com/" + +def test_map_repr(): + m = r.Map([ + r.Rule(u'/wat', endpoint='enter'), + r.Rule(u'/woop', endpoint='foobar') + ]) + rv = repr(m) + strict_eq(rv, + "Map([ foobar>, enter>])") + +def test_empty_subclass_rules_with_custom_kwargs(): + class CustomRule(r.Rule): + def __init__(self, string=None, custom=None, *args, **kwargs): + self.custom = custom + super(CustomRule, self).__init__(string, *args, **kwargs) + + rule1 = CustomRule(u'/foo', endpoint='bar') + try: + rule2 = rule1.empty() + assert rule1.rule == rule2.rule + except TypeError as e: # raised without fix in PR #675 + raise e diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_security.py python-werkzeug-0.10.4+dfsg1/tests/test_security.py --- python-werkzeug-0.9.6+dfsg/tests/test_security.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_security.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +""" + tests.security + ~~~~~~~~~~~~~~ + + Tests the security helpers. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import os + +from werkzeug.security import check_password_hash, generate_password_hash, \ + safe_join, pbkdf2_hex, safe_str_cmp + + +def test_safe_str_cmp(): + assert safe_str_cmp('a', 'a') is True + assert safe_str_cmp(b'a', u'a') is True + assert safe_str_cmp('a', 'b') is False + assert safe_str_cmp(b'aaa', 'aa') is False + assert safe_str_cmp(b'aaa', 'bbb') is False + assert safe_str_cmp(b'aaa', u'aaa') is True + +def test_password_hashing(): + hash0 = generate_password_hash('default') + assert check_password_hash(hash0, 'default') + assert hash0.startswith('pbkdf2:sha1:1000$') + + hash1 = generate_password_hash('default', 'sha1') + hash2 = generate_password_hash(u'default', method='sha1') + assert hash1 != hash2 + assert check_password_hash(hash1, 'default') + assert check_password_hash(hash2, 'default') + assert hash1.startswith('sha1$') + assert hash2.startswith('sha1$') + + fakehash = generate_password_hash('default', method='plain') + assert fakehash == 'plain$$default' + assert check_password_hash(fakehash, 'default') + + mhash = generate_password_hash(u'default', method='md5') + assert mhash.startswith('md5$') + assert check_password_hash(mhash, 'default') + + legacy = 'md5$$c21f969b5f03d33d43e04f8f136e7682' + assert check_password_hash(legacy, 'default') + + legacy = u'md5$$c21f969b5f03d33d43e04f8f136e7682' + assert check_password_hash(legacy, 'default') + +def test_safe_join(): + assert safe_join('foo', 'bar/baz') == os.path.join('foo', 'bar/baz') + assert safe_join('foo', '../bar/baz') is None + if os.name == 'nt': + assert safe_join('foo', 'foo\\bar') is None + +def test_pbkdf2(): + def check(data, salt, iterations, keylen, expected): + rv = pbkdf2_hex(data, salt, iterations, keylen) + assert rv == expected + + # From RFC 6070 + check('password', 'salt', 1, None, + '0c60c80f961f0e71f3a9b524af6012062fe037a6') + check('password', 'salt', 1, 20, + '0c60c80f961f0e71f3a9b524af6012062fe037a6') + check('password', 'salt', 2, 20, + 
'ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957') + check('password', 'salt', 4096, 20, + '4b007901b765489abead49d926f721d065a429c1') + check('passwordPASSWORDpassword', 'saltSALTsaltSALTsaltSALTsaltSALTsalt', + 4096, 25, '3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038') + check('pass\x00word', 'sa\x00lt', 4096, 16, + '56fa6aa75548099dcc37d7f03425e0c3') + # This one is from the RFC but it just takes for ages + ##check('password', 'salt', 16777216, 20, + ## 'eefe3d61cd4da4e4e9945b3d6ba2158c2634e984') + + # From Crypt-PBKDF2 + check('password', 'ATHENA.MIT.EDUraeburn', 1, 16, + 'cdedb5281bb2f801565a1122b2563515') + check('password', 'ATHENA.MIT.EDUraeburn', 1, 32, + 'cdedb5281bb2f801565a1122b25635150ad1f7a04bb9f3a333ecc0e2e1f70837') + check('password', 'ATHENA.MIT.EDUraeburn', 2, 16, + '01dbee7f4a9e243e988b62c73cda935d') + check('password', 'ATHENA.MIT.EDUraeburn', 2, 32, + '01dbee7f4a9e243e988b62c73cda935da05378b93244ec8f48a99e61ad799d86') + check('password', 'ATHENA.MIT.EDUraeburn', 1200, 32, + '5c08eb61fdf71e4e4ec3cf6ba1f5512ba7e52ddbc5e5142f708a31e2e62b1e13') + check('X' * 64, 'pass phrase equals block size', 1200, 32, + '139c30c0966bc32ba55fdbf212530ac9c5ec59f1a452f5cc9ad940fea0598ed1') + check('X' * 65, 'pass phrase exceeds block size', 1200, 32, + '9ccad6d468770cd51b10e6a68721be611a8b4d282601db3b36be9246915ec82a') diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_serving.py python-werkzeug-0.10.4+dfsg1/tests/test_serving.py --- python-werkzeug-0.9.6+dfsg/tests/test_serving.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_serving.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,198 @@ +# -*- coding: utf-8 -*- +""" + tests.serving + ~~~~~~~~~~~~~ + + Added serving tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. 
+""" +import os +import ssl +import subprocess +import textwrap + + +try: + import OpenSSL +except ImportError: + OpenSSL = None + +try: + import watchdog +except ImportError: + watchdog = None + +import requests +import requests.exceptions +import pytest + +from werkzeug import __version__ as version, serving + + +def test_serving(dev_server): + server = dev_server('from werkzeug.testapp import test_app as app') + rv = requests.get('http://%s/?foo=bar&baz=blah' % server.addr).content + assert b'WSGI Information' in rv + assert b'foo=bar&baz=blah' in rv + assert b'Werkzeug/' + version.encode('ascii') in rv + + +def test_broken_app(dev_server): + server = dev_server(''' + def app(environ, start_response): + 1 // 0 + ''') + + r = requests.get(server.url + '/?foo=bar&baz=blah') + assert r.status_code == 500 + assert 'Internal Server Error' in r.text + + +@pytest.mark.skipif(not hasattr(ssl, 'SSLContext'), + reason='Missing PEP 466 (Python 2.7.9+) or Python 3.') +@pytest.mark.skipif(OpenSSL is None, + reason='OpenSSL is required for cert generation.') +def test_stdlib_ssl_contexts(dev_server, tmpdir): + certificate, private_key = \ + serving.make_ssl_devcert(str(tmpdir.mkdir('certs'))) + + server = dev_server(''' + def app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return [b'hello'] + + import ssl + ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + ctx.load_cert_chain("%s", "%s") + kwargs['ssl_context'] = ctx + ''' % (certificate, private_key)) + + assert server.addr is not None + r = requests.get(server.url, verify=False) + assert r.content == b'hello' + + +@pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.') +def test_ssl_context_adhoc(dev_server): + server = dev_server(''' + def app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return [b'hello'] + + kwargs['ssl_context'] = 'adhoc' + ''') + r = requests.get(server.url, verify=False) + assert r.content == b'hello' + + 
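The serving tests above drive tiny inline WSGI callables through a `dev_server` fixture. For reference, the protocol those callables implement (PEP 3333) can be sketched without any server at all; the `call_wsgi` helper and its minimal environ below are illustrative, not part of Werkzeug's API:

```python
import sys
from io import BytesIO

def app(environ, start_response):
    # A minimal WSGI callable: receives a CGI-style environ mapping and a
    # start_response callback, and returns an iterable of bytes.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

def call_wsgi(application, path='/'):
    # Invoke a WSGI app in-process, roughly what a test fixture does
    # behind the scenes (helper name and environ keys are illustrative).
    captured = {}

    def start_response(status, headers, exc_info=None):
        captured['status'] = status
        captured['headers'] = list(headers)

    environ = {
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': path,
        'SERVER_NAME': 'localhost',
        'SERVER_PORT': '80',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.url_scheme': 'http',
        'wsgi.input': BytesIO(b''),
        'wsgi.errors': sys.stderr,
    }
    body = b''.join(application(environ, start_response))
    return captured['status'], dict(captured['headers']), body

status, headers, body = call_wsgi(app)
print(status, body)  # 200 OK b'hello'
```

The tests use a real dev-server process instead of an in-process call like this because they also exercise sockets, SSL contexts, and the reloader.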
+@pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.') +def test_make_ssl_devcert(tmpdir): + certificate, private_key = \ + serving.make_ssl_devcert(str(tmpdir)) + assert os.path.isfile(certificate) + assert os.path.isfile(private_key) + + +@pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.') +def test_reloader_broken_imports(tmpdir, dev_server): + # We explicitly assert that the server reloads on change, even though in + # this case the import could've just been retried. This is to assert + # correct behavior for apps that catch and cache import errors. + # + # Because this feature is achieved by recursively watching a large amount + # of directories, this only works for the watchdog reloader. The stat + # reloader is too inefficient to watch such a large amount of files. + + real_app = tmpdir.join('real_app.py') + real_app.write("lol syntax error") + + server = dev_server(''' + trials = [] + def app(environ, start_response): + assert not trials, 'should have reloaded' + trials.append(1) + import real_app + return real_app.real_app(environ, start_response) + + kwargs['use_reloader'] = True + kwargs['reloader_interval'] = 0.1 + kwargs['reloader_type'] = 'watchdog' + ''') + server.wait_for_reloader_loop() + + r = requests.get(server.url) + assert r.status_code == 500 + + real_app.write(textwrap.dedent(''' + def real_app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return [b'hello'] + ''')) + server.wait_for_reloader() + + r = requests.get(server.url) + assert r.status_code == 200 + assert r.content == b'hello' + + +@pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.') +def test_reloader_nested_broken_imports(tmpdir, dev_server): + real_app = tmpdir.mkdir('real_app') + real_app.join('__init__.py').write('from real_app.sub import real_app') + sub = real_app.mkdir('sub').join('__init__.py') + sub.write("lol syntax error") + + server = dev_server(''' + trials = [] + def 
app(environ, start_response): + assert not trials, 'should have reloaded' + trials.append(1) + import real_app + return real_app.real_app(environ, start_response) + + kwargs['use_reloader'] = True + kwargs['reloader_interval'] = 0.1 + kwargs['reloader_type'] = 'watchdog' + ''') + server.wait_for_reloader_loop() + + r = requests.get(server.url) + assert r.status_code == 500 + + sub.write(textwrap.dedent(''' + def real_app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return [b'hello'] + ''')) + server.wait_for_reloader() + + r = requests.get(server.url) + assert r.status_code == 200 + assert r.content == b'hello' + + +def test_monkeypached_sleep(tmpdir): + # removing the staticmethod wrapper in the definition of + # ReloaderLoop._sleep works most of the time, since `sleep` is a c + # function, and unlike python functions which are descriptors, doesn't + # become a method when attached to a class. however, if the user has called + # `eventlet.monkey_patch` before importing `_reloader`, `time.sleep` is a + # python function, and subsequently calling `ReloaderLoop._sleep` fails + # with a TypeError. This test checks that _sleep is attached correctly. + script = tmpdir.mkdir('app').join('test.py') + script.write(textwrap.dedent(''' + import time + + def sleep(secs): + pass + + # simulate eventlet.monkey_patch by replacing the builtin sleep + # with a regular function before _reloader is imported + time.sleep = sleep + + from werkzeug._reloader import ReloaderLoop + ReloaderLoop()._sleep(0) + ''')) + subprocess.check_call(['python', str(script)]) diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_test.py python-werkzeug-0.10.4+dfsg1/tests/test_test.py --- python-werkzeug-0.9.6+dfsg/tests/test_test.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_test.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,550 @@ +# -*- coding: utf-8 -*- +""" + tests.test + ~~~~~~~~~~ + + Tests the testing tools. 
+
+    :copyright: (c) 2014 by Armin Ronacher.
+    :license: BSD, see LICENSE for more details.
+"""
+
+from __future__ import with_statement
+
+import pytest
+
+import sys
+from io import BytesIO
+from werkzeug._compat import iteritems, to_bytes, implements_iterator
+from functools import partial
+
+from tests import strict_eq
+
+from werkzeug.wrappers import Request, Response, BaseResponse
+from werkzeug.test import Client, EnvironBuilder, create_environ, \
+    ClientRedirectError, stream_encode_multipart, run_wsgi_app
+from werkzeug.utils import redirect
+from werkzeug.formparser import parse_form_data
+from werkzeug.datastructures import MultiDict, FileStorage
+
+
+def cookie_app(environ, start_response):
+    """A WSGI application which sets a cookie, and returns as a response any
+    cookie which exists.
+    """
+    response = Response(environ.get('HTTP_COOKIE', 'No Cookie'),
+                        mimetype='text/plain')
+    response.set_cookie('test', 'test')
+    return response(environ, start_response)
+
+
+def redirect_loop_app(environ, start_response):
+    response = redirect('http://localhost/some/redirect/')
+    return response(environ, start_response)
+
+
+def redirect_with_get_app(environ, start_response):
+    req = Request(environ)
+    if req.url not in ('http://localhost/',
+                       'http://localhost/first/request',
+                       'http://localhost/some/redirect/'):
+        assert False, 'redirect_demo_app() did not expect URL "%s"' % req.url
+    if '/some/redirect' not in req.url:
+        response = redirect('http://localhost/some/redirect/')
+    else:
+        response = Response('current url: %s' % req.url)
+    return response(environ, start_response)
+
+
+def redirect_with_post_app(environ, start_response):
+    req = Request(environ)
+    if req.url == 'http://localhost/some/redirect/':
+        assert req.method == 'GET', 'request should be GET'
+        assert not req.form, 'request should not have data'
+        response = Response('current url: %s' % req.url)
+    else:
+        response = redirect('http://localhost/some/redirect/')
+    return response(environ,
start_response) + + +def external_redirect_demo_app(environ, start_response): + response = redirect('http://example.com/') + return response(environ, start_response) + + +def external_subdomain_redirect_demo_app(environ, start_response): + if 'test.example.com' in environ['HTTP_HOST']: + response = Response('redirected successfully to subdomain') + else: + response = redirect('http://test.example.com/login') + return response(environ, start_response) + + +def multi_value_post_app(environ, start_response): + req = Request(environ) + assert req.form['field'] == 'val1', req.form['field'] + assert req.form.getlist('field') == ['val1', 'val2'], req.form.getlist('field') + response = Response('ok') + return response(environ, start_response) + + + +def test_cookie_forging(): + c = Client(cookie_app) + c.set_cookie('localhost', 'foo', 'bar') + appiter, code, headers = c.open() + strict_eq(list(appiter), [b'foo=bar']) + +def test_set_cookie_app(): + c = Client(cookie_app) + appiter, code, headers = c.open() + assert 'Set-Cookie' in dict(headers) + +def test_cookiejar_stores_cookie(): + c = Client(cookie_app) + appiter, code, headers = c.open() + assert 'test' in c.cookie_jar._cookies['localhost.local']['/'] + +def test_no_initial_cookie(): + c = Client(cookie_app) + appiter, code, headers = c.open() + strict_eq(b''.join(appiter), b'No Cookie') + +def test_resent_cookie(): + c = Client(cookie_app) + c.open() + appiter, code, headers = c.open() + strict_eq(b''.join(appiter), b'test=test') + +def test_disable_cookies(): + c = Client(cookie_app, use_cookies=False) + c.open() + appiter, code, headers = c.open() + strict_eq(b''.join(appiter), b'No Cookie') + +def test_cookie_for_different_path(): + c = Client(cookie_app) + c.open('/path1') + appiter, code, headers = c.open('/path2') + strict_eq(b''.join(appiter), b'test=test') + +def test_environ_builder_basics(): + b = EnvironBuilder() + assert b.content_type is None + b.method = 'POST' + assert b.content_type is None + 
b.form['test'] = 'normal value' + assert b.content_type == 'application/x-www-form-urlencoded' + b.files.add_file('test', BytesIO(b'test contents'), 'test.txt') + assert b.files['test'].content_type == 'text/plain' + assert b.content_type == 'multipart/form-data' + + req = b.get_request() + b.close() + + strict_eq(req.url, u'http://localhost/') + strict_eq(req.method, 'POST') + strict_eq(req.form['test'], u'normal value') + assert req.files['test'].content_type == 'text/plain' + strict_eq(req.files['test'].filename, u'test.txt') + strict_eq(req.files['test'].read(), b'test contents') + +def test_environ_builder_headers(): + b = EnvironBuilder(environ_base={'HTTP_USER_AGENT': 'Foo/0.1'}, + environ_overrides={'wsgi.version': (1, 1)}) + b.headers['X-Beat-My-Horse'] = 'very well sir' + env = b.get_environ() + strict_eq(env['HTTP_USER_AGENT'], 'Foo/0.1') + strict_eq(env['HTTP_X_BEAT_MY_HORSE'], 'very well sir') + strict_eq(env['wsgi.version'], (1, 1)) + + b.headers['User-Agent'] = 'Bar/1.0' + env = b.get_environ() + strict_eq(env['HTTP_USER_AGENT'], 'Bar/1.0') + +def test_environ_builder_headers_content_type(): + b = EnvironBuilder(headers={'Content-Type': 'text/plain'}) + env = b.get_environ() + assert env['CONTENT_TYPE'] == 'text/plain' + b = EnvironBuilder(content_type='text/html', + headers={'Content-Type': 'text/plain'}) + env = b.get_environ() + assert env['CONTENT_TYPE'] == 'text/html' + +def test_environ_builder_paths(): + b = EnvironBuilder(path='/foo', base_url='http://example.com/') + strict_eq(b.base_url, 'http://example.com/') + strict_eq(b.path, '/foo') + strict_eq(b.script_root, '') + strict_eq(b.host, 'example.com') + + b = EnvironBuilder(path='/foo', base_url='http://example.com/bar') + strict_eq(b.base_url, 'http://example.com/bar/') + strict_eq(b.path, '/foo') + strict_eq(b.script_root, '/bar') + strict_eq(b.host, 'example.com') + + b.host = 'localhost' + strict_eq(b.base_url, 'http://localhost/bar/') + b.base_url = 'http://localhost:8080/' + 
strict_eq(b.host, 'localhost:8080') + strict_eq(b.server_name, 'localhost') + strict_eq(b.server_port, 8080) + + b.host = 'foo.invalid' + b.url_scheme = 'https' + b.script_root = '/test' + env = b.get_environ() + strict_eq(env['SERVER_NAME'], 'foo.invalid') + strict_eq(env['SERVER_PORT'], '443') + strict_eq(env['SCRIPT_NAME'], '/test') + strict_eq(env['PATH_INFO'], '/foo') + strict_eq(env['HTTP_HOST'], 'foo.invalid') + strict_eq(env['wsgi.url_scheme'], 'https') + strict_eq(b.base_url, 'https://foo.invalid/test/') + +def test_environ_builder_content_type(): + builder = EnvironBuilder() + assert builder.content_type is None + builder.method = 'POST' + assert builder.content_type is None + builder.method = 'PUT' + assert builder.content_type is None + builder.method = 'PATCH' + assert builder.content_type is None + builder.method = 'DELETE' + assert builder.content_type is None + builder.method = 'GET' + assert builder.content_type == None + builder.form['foo'] = 'bar' + assert builder.content_type == 'application/x-www-form-urlencoded' + builder.files.add_file('blafasel', BytesIO(b'foo'), 'test.txt') + assert builder.content_type == 'multipart/form-data' + req = builder.get_request() + strict_eq(req.form['foo'], u'bar') + strict_eq(req.files['blafasel'].read(), b'foo') + +def test_environ_builder_stream_switch(): + d = MultiDict(dict(foo=u'bar', blub=u'blah', hu=u'hum')) + for use_tempfile in False, True: + stream, length, boundary = stream_encode_multipart( + d, use_tempfile, threshold=150) + assert isinstance(stream, BytesIO) != use_tempfile + + form = parse_form_data({'wsgi.input': stream, 'CONTENT_LENGTH': str(length), + 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % + boundary})[1] + strict_eq(form, d) + stream.close() + +def test_environ_builder_unicode_file_mix(): + for use_tempfile in False, True: + f = FileStorage(BytesIO(u'\N{SNOWMAN}'.encode('utf-8')), + 'snowman.txt') + d = MultiDict(dict(f=f, s=u'\N{SNOWMAN}')) + stream, length, boundary = 
stream_encode_multipart( + d, use_tempfile, threshold=150) + assert isinstance(stream, BytesIO) != use_tempfile + + _, form, files = parse_form_data({ + 'wsgi.input': stream, + 'CONTENT_LENGTH': str(length), + 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % + boundary + }) + strict_eq(form['s'], u'\N{SNOWMAN}') + strict_eq(files['f'].name, 'f') + strict_eq(files['f'].filename, u'snowman.txt') + strict_eq(files['f'].read(), + u'\N{SNOWMAN}'.encode('utf-8')) + stream.close() + +def test_create_environ(): + env = create_environ('/foo?bar=baz', 'http://example.org/') + expected = { + 'wsgi.multiprocess': False, + 'wsgi.version': (1, 0), + 'wsgi.run_once': False, + 'wsgi.errors': sys.stderr, + 'wsgi.multithread': False, + 'wsgi.url_scheme': 'http', + 'SCRIPT_NAME': '', + 'CONTENT_TYPE': '', + 'CONTENT_LENGTH': '0', + 'SERVER_NAME': 'example.org', + 'REQUEST_METHOD': 'GET', + 'HTTP_HOST': 'example.org', + 'PATH_INFO': '/foo', + 'SERVER_PORT': '80', + 'SERVER_PROTOCOL': 'HTTP/1.1', + 'QUERY_STRING': 'bar=baz' + } + for key, value in iteritems(expected): + assert env[key] == value + strict_eq(env['wsgi.input'].read(0), b'') + strict_eq(create_environ('/foo', 'http://example.com/')['SCRIPT_NAME'], '') + +def test_file_closing(): + closed = [] + class SpecialInput(object): + def read(self, size): + return '' + def close(self): + closed.append(self) + + env = create_environ(data={'foo': SpecialInput()}) + strict_eq(len(closed), 1) + builder = EnvironBuilder() + builder.files.add_file('blah', SpecialInput()) + builder.close() + strict_eq(len(closed), 2) + +def test_follow_redirect(): + env = create_environ('/', base_url='http://localhost') + c = Client(redirect_with_get_app) + appiter, code, headers = c.open(environ_overrides=env, follow_redirects=True) + strict_eq(code, '200 OK') + strict_eq(b''.join(appiter), b'current url: http://localhost/some/redirect/') + + # Test that the :cls:`Client` is aware of user defined response wrappers + c = Client(redirect_with_get_app, 
response_wrapper=BaseResponse) + resp = c.get('/', follow_redirects=True) + strict_eq(resp.status_code, 200) + strict_eq(resp.data, b'current url: http://localhost/some/redirect/') + + # test with URL other than '/' to make sure redirected URL's are correct + c = Client(redirect_with_get_app, response_wrapper=BaseResponse) + resp = c.get('/first/request', follow_redirects=True) + strict_eq(resp.status_code, 200) + strict_eq(resp.data, b'current url: http://localhost/some/redirect/') + +def test_follow_redirect_with_post_307(): + def redirect_with_post_307_app(environ, start_response): + req = Request(environ) + if req.url == 'http://localhost/some/redirect/': + assert req.method == 'POST', 'request should be POST' + assert not req.form, 'request should not have data' + response = Response('current url: %s' % req.url) + else: + response = redirect('http://localhost/some/redirect/', code=307) + return response(environ, start_response) + + c = Client(redirect_with_post_307_app, response_wrapper=BaseResponse) + resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42') + assert resp.status_code == 200 + assert resp.data == b'current url: http://localhost/some/redirect/' + +def test_follow_external_redirect(): + env = create_environ('/', base_url='http://localhost') + c = Client(external_redirect_demo_app) + pytest.raises(RuntimeError, lambda: + c.get(environ_overrides=env, follow_redirects=True)) + +def test_follow_external_redirect_on_same_subdomain(): + env = create_environ('/', base_url='http://example.com') + c = Client(external_subdomain_redirect_demo_app, allow_subdomain_redirects=True) + c.get(environ_overrides=env, follow_redirects=True) + + # check that this does not work for real external domains + env = create_environ('/', base_url='http://localhost') + pytest.raises(RuntimeError, lambda: + c.get(environ_overrides=env, follow_redirects=True)) + + # check that subdomain redirects fail if no `allow_subdomain_redirects` is applied + c = 
Client(external_subdomain_redirect_demo_app) + pytest.raises(RuntimeError, lambda: + c.get(environ_overrides=env, follow_redirects=True)) + +def test_follow_redirect_loop(): + c = Client(redirect_loop_app, response_wrapper=BaseResponse) + with pytest.raises(ClientRedirectError): + resp = c.get('/', follow_redirects=True) + +def test_follow_redirect_with_post(): + c = Client(redirect_with_post_app, response_wrapper=BaseResponse) + resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42') + strict_eq(resp.status_code, 200) + strict_eq(resp.data, b'current url: http://localhost/some/redirect/') + +def test_path_info_script_name_unquoting(): + def test_app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/plain')]) + return [environ['PATH_INFO'] + '\n' + environ['SCRIPT_NAME']] + c = Client(test_app, response_wrapper=BaseResponse) + resp = c.get('/foo%40bar') + strict_eq(resp.data, b'/foo@bar\n') + c = Client(test_app, response_wrapper=BaseResponse) + resp = c.get('/foo%40bar', 'http://localhost/bar%40baz') + strict_eq(resp.data, b'/foo@bar\n/bar@baz') + +def test_multi_value_submit(): + c = Client(multi_value_post_app, response_wrapper=BaseResponse) + data = { + 'field': ['val1','val2'] + } + resp = c.post('/', data=data) + strict_eq(resp.status_code, 200) + c = Client(multi_value_post_app, response_wrapper=BaseResponse) + data = MultiDict({ + 'field': ['val1', 'val2'] + }) + resp = c.post('/', data=data) + strict_eq(resp.status_code, 200) + +def test_iri_support(): + b = EnvironBuilder(u'/föö-bar', base_url=u'http://☃.net/') + strict_eq(b.path, '/f%C3%B6%C3%B6-bar') + strict_eq(b.base_url, 'http://xn--n3h.net/') + +@pytest.mark.parametrize('buffered', (True, False)) +@pytest.mark.parametrize('iterable', (True, False)) +def test_run_wsgi_apps(buffered, iterable): + leaked_data = [] + + def simple_app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return ['Hello World!'] + + def 
yielding_app(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + yield 'Hello ' + yield 'World!' + + def late_start_response(environ, start_response): + yield 'Hello ' + yield 'World' + start_response('200 OK', [('Content-Type', 'text/html')]) + yield '!' + + def depends_on_close(environ, start_response): + leaked_data.append('harhar') + start_response('200 OK', [('Content-Type', 'text/html')]) + class Rv(object): + def __iter__(self): + yield 'Hello ' + yield 'World' + yield '!' + + def close(self): + assert leaked_data.pop() == 'harhar' + + return Rv() + + + for app in (simple_app, yielding_app, late_start_response, + depends_on_close): + if iterable: + app = iterable_middleware(app) + app_iter, status, headers = run_wsgi_app(app, {}, buffered=buffered) + strict_eq(status, '200 OK') + strict_eq(list(headers), [('Content-Type', 'text/html')]) + strict_eq(''.join(app_iter), 'Hello World!') + + if hasattr(app_iter, 'close'): + app_iter.close() + assert not leaked_data + +def test_run_wsgi_app_closing_iterator(): + got_close = [] + @implements_iterator + class CloseIter(object): + def __init__(self): + self.iterated = False + def __iter__(self): + return self + def close(self): + got_close.append(None) + def __next__(self): + if self.iterated: + raise StopIteration() + self.iterated = True + return 'bar' + + def bar(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/plain')]) + return CloseIter() + + app_iter, status, headers = run_wsgi_app(bar, {}) + assert status == '200 OK' + assert list(headers) == [('Content-Type', 'text/plain')] + assert next(app_iter) == 'bar' + pytest.raises(StopIteration, partial(next, app_iter)) + app_iter.close() + + assert run_wsgi_app(bar, {}, True)[0] == ['bar'] + + assert len(got_close) == 2 + +def iterable_middleware(app): + '''Guarantee that the app returns an iterable''' + def inner(environ, start_response): + rv = app(environ, start_response) + + class Iterable(object): 
+ def __iter__(self): + return iter(rv) + + if hasattr(rv, 'close'): + def close(self): + rv.close() + + return Iterable() + return inner + +def test_multiple_cookies(): + @Request.application + def test_app(request): + response = Response(repr(sorted(request.cookies.items()))) + response.set_cookie(u'test1', b'foo') + response.set_cookie(u'test2', b'bar') + return response + client = Client(test_app, Response) + resp = client.get('/') + strict_eq(resp.data, b'[]') + resp = client.get('/') + strict_eq(resp.data, + to_bytes(repr([('test1', u'foo'), ('test2', u'bar')]), 'ascii')) + +def test_correct_open_invocation_on_redirect(): + class MyClient(Client): + counter = 0 + def open(self, *args, **kwargs): + self.counter += 1 + env = kwargs.setdefault('environ_overrides', {}) + env['werkzeug._foo'] = self.counter + return Client.open(self, *args, **kwargs) + + @Request.application + def test_app(request): + return Response(str(request.environ['werkzeug._foo'])) + + c = MyClient(test_app, response_wrapper=Response) + strict_eq(c.get('/').data, b'1') + strict_eq(c.get('/').data, b'2') + strict_eq(c.get('/').data, b'3') + +def test_correct_encoding(): + req = Request.from_values(u'/\N{SNOWMAN}', u'http://example.com/foo') + strict_eq(req.script_root, u'/foo') + strict_eq(req.path, u'/\N{SNOWMAN}') + +def test_full_url_requests_with_args(): + base = 'http://example.com/' + + @Request.application + def test_app(request): + return Response(request.args['x']) + client = Client(test_app, Response) + resp = client.get('/?x=42', base) + strict_eq(resp.data, b'42') + resp = client.get('http://www.example.com/?x=23', base) + strict_eq(resp.data, b'23') + +def test_delete_requests_with_form(): + @Request.application + def test_app(request): + return Response(request.form.get('x', None)) + + client = Client(test_app, Response) + resp = client.delete('/', data={'x': 42}) + strict_eq(resp.data, b'42') diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_urls.py 
python-werkzeug-0.10.4+dfsg1/tests/test_urls.py --- python-werkzeug-0.9.6+dfsg/tests/test_urls.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_urls.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,405 @@ +# -*- coding: utf-8 -*- +""" + tests.urls + ~~~~~~~~~~ + + URL helper tests. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +from tests import strict_eq + +from werkzeug.datastructures import OrderedMultiDict +from werkzeug import urls +from werkzeug._compat import text_type, NativeStringIO, BytesIO + + +def test_parsing(): + url = urls.url_parse('http://anon:hunter2@[2001:db8:0:1]:80/a/b/c') + assert url.netloc == 'anon:hunter2@[2001:db8:0:1]:80' + assert url.username == 'anon' + assert url.password == 'hunter2' + assert url.port == 80 + assert url.ascii_host == '2001:db8:0:1' + + assert url.get_file_location() == (None, None) # no file scheme + + +@pytest.mark.parametrize('implicit_format', (True, False)) +@pytest.mark.parametrize('localhost', ('127.0.0.1', '::1', 'localhost')) +def test_fileurl_parsing_windows(implicit_format, localhost, monkeypatch): + if implicit_format: + pathformat = None + monkeypatch.setattr('os.name', 'nt') + else: + pathformat = 'windows' + monkeypatch.delattr('os.name') # just to make sure it won't get used + + url = urls.url_parse('file:///C:/Documents and Settings/Foobar/stuff.txt') + assert url.netloc == '' + assert url.scheme == 'file' + assert url.get_file_location(pathformat) == \ + (None, r'C:\Documents and Settings\Foobar\stuff.txt') + + url = urls.url_parse('file://///server.tld/file.txt') + assert url.get_file_location(pathformat) == ('server.tld', r'file.txt') + + url = urls.url_parse('file://///server.tld') + assert url.get_file_location(pathformat) == ('server.tld', '') + + url = urls.url_parse('file://///%s' % localhost) + assert url.get_file_location(pathformat) == (None, '') + + url = 
urls.url_parse('file://///%s/file.txt' % localhost) + assert url.get_file_location(pathformat) == (None, r'file.txt') + + +def test_replace(): + url = urls.url_parse('http://de.wikipedia.org/wiki/Troll') + strict_eq(url.replace(query='foo=bar'), + urls.url_parse('http://de.wikipedia.org/wiki/Troll?foo=bar')) + strict_eq(url.replace(scheme='https'), + urls.url_parse('https://de.wikipedia.org/wiki/Troll')) + + +def test_quoting(): + strict_eq(urls.url_quote(u'\xf6\xe4\xfc'), '%C3%B6%C3%A4%C3%BC') + strict_eq(urls.url_unquote(urls.url_quote(u'#%="\xf6')), u'#%="\xf6') + strict_eq(urls.url_quote_plus('foo bar'), 'foo+bar') + strict_eq(urls.url_unquote_plus('foo+bar'), u'foo bar') + strict_eq(urls.url_quote_plus('foo+bar'), 'foo%2Bbar') + strict_eq(urls.url_unquote_plus('foo%2Bbar'), u'foo+bar') + strict_eq(urls.url_encode({b'a': None, b'b': b'foo bar'}), 'b=foo+bar') + strict_eq(urls.url_encode({u'a': None, u'b': u'foo bar'}), 'b=foo+bar') + strict_eq(urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)'), + 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)') + strict_eq(urls.url_quote_plus(42), '42') + strict_eq(urls.url_quote(b'\xff'), '%FF') + + +def test_bytes_unquoting(): + strict_eq(urls.url_unquote(urls.url_quote( + u'#%="\xf6', charset='latin1'), charset=None), b'#%="\xf6') + + +def test_url_decoding(): + x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel') + strict_eq(x['foo'], u'42') + strict_eq(x['bar'], u'23') + strict_eq(x['uni'], u'Hänsel') + + x = urls.url_decode(b'foo=42;bar=23;uni=H%C3%A4nsel', separator=b';') + strict_eq(x['foo'], u'42') + strict_eq(x['bar'], u'23') + strict_eq(x['uni'], u'Hänsel') + + x = urls.url_decode(b'%C3%9Ch=H%C3%A4nsel', decode_keys=True) + strict_eq(x[u'Üh'], u'Hänsel') + + +def test_url_bytes_decoding(): + x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel', charset=None) + strict_eq(x[b'foo'], b'42') + strict_eq(x[b'bar'], b'23') + strict_eq(x[b'uni'], u'Hänsel'.encode('utf-8')) + + +def 
test_streamed_url_decoding(): + item1 = u'a' * 100000 + item2 = u'b' * 400 + string = ('a=%s&b=%s&c=%s' % (item1, item2, item2)).encode('ascii') + gen = urls.url_decode_stream(BytesIO(string), limit=len(string), + return_iterator=True) + strict_eq(next(gen), ('a', item1)) + strict_eq(next(gen), ('b', item2)) + strict_eq(next(gen), ('c', item2)) + pytest.raises(StopIteration, lambda: next(gen)) + + +def test_stream_decoding_string_fails(): + pytest.raises(TypeError, urls.url_decode_stream, 'testing') + + +def test_url_encoding(): + strict_eq(urls.url_encode({'foo': 'bar 45'}), 'foo=bar+45') + d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} + strict_eq(urls.url_encode(d, sort=True), 'bar=23&blah=H%C3%A4nsel&foo=1') + strict_eq(urls.url_encode(d, sort=True, separator=u';'), 'bar=23;blah=H%C3%A4nsel;foo=1') + + +def test_sorted_url_encode(): + strict_eq(urls.url_encode({u"a": 42, u"b": 23, 1: 1, 2: 2}, + sort=True, key=lambda i: text_type(i[0])), '1=1&2=2&a=42&b=23') + strict_eq(urls.url_encode({u'A': 1, u'a': 2, u'B': 3, 'b': 4}, sort=True, + key=lambda x: x[0].lower() + x[0]), 'A=1&a=2&B=3&b=4') + + +def test_streamed_url_encoding(): + out = NativeStringIO() + urls.url_encode_stream({'foo': 'bar 45'}, out) + strict_eq(out.getvalue(), 'foo=bar+45') + + d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} + out = NativeStringIO() + urls.url_encode_stream(d, out, sort=True) + strict_eq(out.getvalue(), 'bar=23&blah=H%C3%A4nsel&foo=1') + out = NativeStringIO() + urls.url_encode_stream(d, out, sort=True, separator=u';') + strict_eq(out.getvalue(), 'bar=23;blah=H%C3%A4nsel;foo=1') + + gen = urls.url_encode_stream(d, sort=True) + strict_eq(next(gen), 'bar=23') + strict_eq(next(gen), 'blah=H%C3%A4nsel') + strict_eq(next(gen), 'foo=1') + pytest.raises(StopIteration, lambda: next(gen)) + + +def test_url_fixing(): + x = urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)') + assert x == 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)' + + x = 
urls.url_fix("http://just.a.test/$-_.+!*'(),") + assert x == "http://just.a.test/$-_.+!*'()," + + x = urls.url_fix('http://höhöhö.at/höhöhö/hähähä') + assert x == r'http://xn--hhh-snabb.at/h%C3%B6h%C3%B6h%C3%B6/h%C3%A4h%C3%A4h%C3%A4' + + +def test_url_fixing_filepaths(): + x = urls.url_fix(r'file://C:\Users\Administrator\My Documents\ÑÈáÇíí') + assert x == r'file:///C%3A/Users/Administrator/My%20Documents/%C3%91%C3%88%C3%A1%C3%87%C3%AD%C3%AD' + + a = urls.url_fix(r'file:/C:/') + b = urls.url_fix(r'file://C:/') + c = urls.url_fix(r'file:///C:/') + assert a == b == c == r'file:///C%3A/' + + x = urls.url_fix(r'file://host/sub/path') + assert x == r'file://host/sub/path' + + x = urls.url_fix(r'file:///') + assert x == r'file:///' + + +def test_url_fixing_qs(): + x = urls.url_fix(b'http://example.com/?foo=%2f%2f') + assert x == 'http://example.com/?foo=%2f%2f' + + x = urls.url_fix('http://acronyms.thefreedictionary.com/Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation') + assert x == 'http://acronyms.thefreedictionary.com/Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation' + + +def test_iri_support(): + strict_eq(urls.uri_to_iri('http://xn--n3h.net/'), + u'http://\u2603.net/') + strict_eq( + urls.uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'), + u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th') + strict_eq(urls.iri_to_uri(u'http://☃.net/'), 'http://xn--n3h.net/') + strict_eq( + urls.iri_to_uri(u'http://üser:pässword@☃.net/påth'), + 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th') + + strict_eq(urls.uri_to_iri('http://test.com/%3Fmeh?foo=%26%2F'), + u'http://test.com/%3Fmeh?foo=%26%2F') + + # this should work as well, might break on 2.4 because of a broken + # idna codec + strict_eq(urls.uri_to_iri(b'/foo'), u'/foo') + strict_eq(urls.iri_to_uri(u'/foo'), '/foo') + + strict_eq(urls.iri_to_uri(u'http://föö.com:8080/bam/baz'), + 'http://xn--f-1gaa.com:8080/bam/baz') + + +def test_iri_safe_conversion(): + 
strict_eq(urls.iri_to_uri(u'magnet:?foo=bar'), + 'magnet:?foo=bar') + strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar'), + 'itms-service:?foo=bar') + strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar', + safe_conversion=True), + 'itms-service://?foo=bar') + + +def test_iri_safe_quoting(): + uri = 'http://xn--f-1gaa.com/%2F%25?q=%C3%B6&x=%3D%25#%25' + iri = u'http://föö.com/%2F%25?q=ö&x=%3D%25#%25' + strict_eq(urls.uri_to_iri(uri), iri) + strict_eq(urls.iri_to_uri(urls.uri_to_iri(uri)), uri) + + +def test_ordered_multidict_encoding(): + d = OrderedMultiDict() + d.add('foo', 1) + d.add('foo', 2) + d.add('foo', 3) + d.add('bar', 0) + d.add('foo', 4) + assert urls.url_encode(d) == 'foo=1&foo=2&foo=3&bar=0&foo=4' + + +def test_multidict_encoding(): + d = OrderedMultiDict() + d.add('2013-10-10T23:26:05.657975+0000', '2013-10-10T23:26:05.657975+0000') + assert urls.url_encode(d) == '2013-10-10T23%3A26%3A05.657975%2B0000=2013-10-10T23%3A26%3A05.657975%2B0000' + + +def test_href(): + x = urls.Href('http://www.example.com/') + strict_eq(x(u'foo'), 'http://www.example.com/foo') + strict_eq(x.foo(u'bar'), 'http://www.example.com/foo/bar') + strict_eq(x.foo(u'bar', x=42), 'http://www.example.com/foo/bar?x=42') + strict_eq(x.foo(u'bar', class_=42), 'http://www.example.com/foo/bar?class=42') + strict_eq(x.foo(u'bar', {u'class': 42}), 'http://www.example.com/foo/bar?class=42') + pytest.raises(AttributeError, lambda: x.__blah__) + + x = urls.Href('blah') + strict_eq(x.foo(u'bar'), 'blah/foo/bar') + + pytest.raises(TypeError, x.foo, {u"foo": 23}, x=42) + + x = urls.Href('') + strict_eq(x('foo'), 'foo') + + +def test_href_url_join(): + x = urls.Href(u'test') + assert x(u'foo:bar') == u'test/foo:bar' + assert x(u'http://example.com/') == u'test/http://example.com/' + assert x.a() == u'test/a' + + +def test_href_past_root(): + base_href = urls.Href('http://www.blagga.com/1/2/3') + strict_eq(base_href('../foo'), 'http://www.blagga.com/1/2/foo') + strict_eq(base_href('../../foo'), 
'http://www.blagga.com/1/foo') + strict_eq(base_href('../../../foo'), 'http://www.blagga.com/foo') + strict_eq(base_href('../../../../foo'), 'http://www.blagga.com/foo') + strict_eq(base_href('../../../../../foo'), 'http://www.blagga.com/foo') + strict_eq(base_href('../../../../../../foo'), 'http://www.blagga.com/foo') + + +def test_url_unquote_plus_unicode(): + # was broken in 0.6 + strict_eq(urls.url_unquote_plus(u'\x6d'), u'\x6d') + assert type(urls.url_unquote_plus(u'\x6d')) is text_type + + +def test_quoting_of_local_urls(): + rv = urls.iri_to_uri(u'/foo\x8f') + strict_eq(rv, '/foo%C2%8F') + assert type(rv) is str + + +def test_url_attributes(): + rv = urls.url_parse('http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') + strict_eq(rv.scheme, 'http') + strict_eq(rv.auth, 'foo%3a:bar%3a') + strict_eq(rv.username, u'foo:') + strict_eq(rv.password, u'bar:') + strict_eq(rv.raw_username, 'foo%3a') + strict_eq(rv.raw_password, 'bar%3a') + strict_eq(rv.host, '::1') + assert rv.port == 80 + strict_eq(rv.path, '/123') + strict_eq(rv.query, 'x=y') + strict_eq(rv.fragment, 'frag') + + rv = urls.url_parse(u'http://\N{SNOWMAN}.com/') + strict_eq(rv.host, u'\N{SNOWMAN}.com') + strict_eq(rv.ascii_host, 'xn--n3h.com') + + +def test_url_attributes_bytes(): + rv = urls.url_parse(b'http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') + strict_eq(rv.scheme, b'http') + strict_eq(rv.auth, b'foo%3a:bar%3a') + strict_eq(rv.username, u'foo:') + strict_eq(rv.password, u'bar:') + strict_eq(rv.raw_username, b'foo%3a') + strict_eq(rv.raw_password, b'bar%3a') + strict_eq(rv.host, b'::1') + assert rv.port == 80 + strict_eq(rv.path, b'/123') + strict_eq(rv.query, b'x=y') + strict_eq(rv.fragment, b'frag') + + +def test_url_joining(): + strict_eq(urls.url_join('/foo', '/bar'), '/bar') + strict_eq(urls.url_join('http://example.com/foo', '/bar'), + 'http://example.com/bar') + strict_eq(urls.url_join('file:///tmp/', 'test.html'), + 'file:///tmp/test.html') + strict_eq(urls.url_join('file:///tmp/x', 'test.html'), 
+ 'file:///tmp/test.html') + strict_eq(urls.url_join('file:///tmp/x', '../../../x.html'), + 'file:///x.html') + + +def test_partial_unencoded_decode(): + ref = u'foo=정상처리'.encode('euc-kr') + x = urls.url_decode(ref, charset='euc-kr') + strict_eq(x['foo'], u'정상처리') + + +def test_iri_to_uri_idempotence_ascii_only(): + uri = u'http://www.idempoten.ce' + uri = urls.iri_to_uri(uri) + assert urls.iri_to_uri(uri) == uri + + +def test_iri_to_uri_idempotence_non_ascii(): + uri = u'http://\N{SNOWMAN}/\N{SNOWMAN}' + uri = urls.iri_to_uri(uri) + assert urls.iri_to_uri(uri) == uri + + +def test_uri_to_iri_idempotence_ascii_only(): + uri = 'http://www.idempoten.ce' + uri = urls.uri_to_iri(uri) + assert urls.uri_to_iri(uri) == uri + + +def test_uri_to_iri_idempotence_non_ascii(): + uri = 'http://xn--n3h/%E2%98%83' + uri = urls.uri_to_iri(uri) + assert urls.uri_to_iri(uri) == uri + + +def test_iri_to_uri_to_iri(): + iri = u'http://föö.com/' + uri = urls.iri_to_uri(iri) + assert urls.uri_to_iri(uri) == iri + + +def test_uri_to_iri_to_uri(): + uri = 'http://xn--f-rgao.com/%C3%9E' + iri = urls.uri_to_iri(uri) + assert urls.iri_to_uri(iri) == uri + + +def test_uri_iri_normalization(): + uri = 'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93' + iri = u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713' + + tests = [ + u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713', + u'http://xn--f-rgao.com/\u2610/fred?utf8=\N{CHECK MARK}', + b'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', + u'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', + u'http://föñ.com/\u2610/fred?utf8=%E2%9C%93', + b'http://xn--f-rgao.com/\xe2\x98\x90/fred?utf8=\xe2\x9c\x93', + ] + + for test in tests: + assert urls.uri_to_iri(test) == iri + assert urls.iri_to_uri(test) == uri + assert urls.uri_to_iri(urls.iri_to_uri(test)) == iri + assert urls.iri_to_uri(urls.uri_to_iri(test)) == uri + assert urls.uri_to_iri(urls.uri_to_iri(test)) == iri + assert urls.iri_to_uri(urls.iri_to_uri(test)) == uri diff -Nru 
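The IRI-to-URI conversions asserted above reduce to two operations: IDNA-encode the hostname and percent-encode non-ASCII path bytes as UTF-8. A rough stdlib-only sketch (the name `iri_to_uri_sketch` is made up; werkzeug's real `iri_to_uri` additionally handles credentials, query strings, fragments and the `safe_conversion` mode):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def iri_to_uri_sketch(iri):
    # Hypothetical helper approximating the behaviour the tests expect:
    # IDNA-encode the host, percent-encode the path as UTF-8.
    parts = urlsplit(iri)
    netloc = parts.hostname.encode('idna').decode('ascii')
    if parts.port:
        netloc += ':%d' % parts.port
    path = quote(parts.path, safe='/')
    return urlunsplit((parts.scheme, netloc, path, parts.query, parts.fragment))

# These match the expected values asserted in test_iri_support above.
assert iri_to_uri_sketch(u'http://föö.com:8080/bam/baz') == \
    'http://xn--f-1gaa.com:8080/bam/baz'
assert iri_to_uri_sketch(u'http://\u2603.net/') == 'http://xn--n3h.net/'
```

The reverse direction (`uri_to_iri`) undoes both steps, which is why the idempotence and round-trip tests above hold.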
python-werkzeug-0.9.6+dfsg/tests/test_utils.py python-werkzeug-0.10.4+dfsg1/tests/test_utils.py --- python-werkzeug-0.9.6+dfsg/tests/test_utils.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_utils.py 2015-03-26 15:33:58.000000000 +0000 @@ -0,0 +1,253 @@ +# -*- coding: utf-8 -*- +""" + tests.utils + ~~~~~~~~~~~ + + General utilities. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +from __future__ import with_statement + +import pytest + +from datetime import datetime +from functools import partial + +from werkzeug import utils +from werkzeug.datastructures import Headers +from werkzeug.http import parse_date, http_date +from werkzeug.wrappers import BaseResponse +from werkzeug.test import Client, run_wsgi_app +from werkzeug._compat import text_type, implements_iterator + + +def test_redirect(): + resp = utils.redirect(u'/füübär') + assert b'/f%C3%BC%C3%BCb%C3%A4r' in resp.get_data() + assert resp.headers['Location'] == '/f%C3%BC%C3%BCb%C3%A4r' + assert resp.status_code == 302 + + resp = utils.redirect(u'http://☃.net/', 307) + assert b'http://xn--n3h.net/' in resp.get_data() + assert resp.headers['Location'] == 'http://xn--n3h.net/' + assert resp.status_code == 307 + + resp = utils.redirect('http://example.com/', 305) + assert resp.headers['Location'] == 'http://example.com/' + assert resp.status_code == 305 + +def test_redirect_no_unicode_header_keys(): + # Make sure all headers are native keys. This was a bug at one point + # due to an incorrect conversion. 
+ resp = utils.redirect('http://example.com/', 305) + for key, value in resp.headers.items(): + assert type(key) == str + assert type(value) == text_type + assert resp.headers['Location'] == 'http://example.com/' + assert resp.status_code == 305 + +def test_redirect_xss(): + location = 'http://example.com/?xss="><script>alert(1)</script>' + resp = utils.redirect(location) + assert b'<script>alert(1)</script>' not in resp.get_data() + + location = 'http://example.com/?xss="onmouseover="alert(1)' + resp = utils.redirect(location) + assert b'href="http://example.com/?xss="onmouseover="alert(1)"' not in resp.get_data() + + +def test_redirect_with_custom_response_class(): + class MyResponse(BaseResponse): + pass + + location = "http://example.com/redirect" + resp = utils.redirect(location, Response=MyResponse) + + assert isinstance(resp, MyResponse) + assert resp.headers['Location'] == location + + +def test_cached_property(): + foo = [] + class A(object): + def prop(self): + foo.append(42) + return 42 + prop = utils.cached_property(prop) + + a = A() + p = a.prop + q = a.prop + assert p == q == 42 + assert foo == [42] + + foo = [] + class A(object): + def _prop(self): + foo.append(42) + return 42 + prop = utils.cached_property(_prop, name='prop') + del _prop + + a = A() + p = a.prop + q = a.prop + assert p == q == 42 + assert foo == [42] + +def test_environ_property(): + class A(object): + environ = {'string': 'abc', 'number': '42'} + + string = utils.environ_property('string') + missing = utils.environ_property('missing', 'spam') + read_only = utils.environ_property('number') + number = utils.environ_property('number', load_func=int) + broken_number = utils.environ_property('broken_number', load_func=int) + date = utils.environ_property('date', None, parse_date, http_date, + read_only=False) + foo = utils.environ_property('foo') + + a = A() + assert a.string == 'abc' + assert a.missing == 'spam' + def test_assign(): + a.read_only = 'something' + pytest.raises(AttributeError, test_assign) + assert a.number == 42 + 
assert a.broken_number == None + assert a.date is None + a.date = datetime(2008, 1, 22, 10, 0, 0, 0) + assert a.environ['date'] == 'Tue, 22 Jan 2008 10:00:00 GMT' + +def test_escape(): + class Foo(str): + def __html__(self): + return text_type(self) + assert utils.escape(None) == '' + assert utils.escape(42) == '42' + assert utils.escape('<>') == '&lt;&gt;' + assert utils.escape('"foo"') == '&quot;foo&quot;' + assert utils.escape(Foo('<foo>')) == '<foo>' + +def test_unescape(): + assert utils.unescape('&lt;&auml;&gt;') == u'<ä>' + +def test_import_string(): + import cgi + from werkzeug.debug import DebuggedApplication + assert utils.import_string('cgi.escape') is cgi.escape + assert utils.import_string(u'cgi.escape') is cgi.escape + assert utils.import_string('cgi:escape') is cgi.escape + assert utils.import_string('XXXXXXXXXXXX', True) is None + assert utils.import_string('cgi.XXXXXXXXXXXX', True) is None + assert utils.import_string(u'werkzeug.debug.DebuggedApplication') is DebuggedApplication + pytest.raises(ImportError, utils.import_string, 'XXXXXXXXXXXXXXXX') + pytest.raises(ImportError, utils.import_string, 'cgi.XXXXXXXXXX') + + +def test_import_string_attribute_error(tmpdir, monkeypatch): + monkeypatch.syspath_prepend(str(tmpdir)) + tmpdir.join('foo_test.py').write('from bar_test import value') + tmpdir.join('bar_test.py').write('raise AttributeError("screw you!")') + with pytest.raises(AttributeError) as foo_exc: + utils.import_string('foo_test') + assert 'screw you!' in str(foo_exc) + + with pytest.raises(AttributeError) as bar_exc: + utils.import_string('bar_test') + assert 'screw you!' in str(bar_exc) + + +def test_find_modules(): + assert list(utils.find_modules('werkzeug.debug')) == [ + 'werkzeug.debug.console', 'werkzeug.debug.repr', + 'werkzeug.debug.tbtools' + ] + +def test_html_builder(): + html = utils.html + xhtml = utils.xhtml + assert html.p('Hello World') == '<p>Hello World</p>' + assert html.a('Test', href='#') == '<a href="#">Test</a>' + assert html.br() == '<br>' + assert xhtml.br() == '<br />' + assert html.img(src='foo') == '<img src="foo">' + assert xhtml.img(src='foo') == '<img src="foo" />' + assert html.html(html.head( + html.title('foo'), + html.script(type='text/javascript') + )) == ( + '<html><head><title>foo</title><script type="text/javascript">' + '</script></head></html>' + ) + assert html('<foo>') == '&lt;foo&gt;' + assert html.input(disabled=True) == '<input disabled>' + assert xhtml.input(disabled=True) == '<input disabled="disabled" />' + assert html.input(disabled='') == '<input>' + assert xhtml.input(disabled='') == '<input />' + assert html.input(disabled=None) == '<input>' + assert xhtml.input(disabled=None) == '<input />' + assert html.script('alert("Hello World");') == \ + '<script>alert("Hello World");</script>' + assert xhtml.script('alert("Hello World");') == \ + '<script>/*<![CDATA[*/alert("Hello World");/*]]>*/</script>' + +def test_validate_arguments(): + take_none = lambda: None + take_two = lambda a, b: None + take_two_one_default = lambda a, b=0: None + + assert utils.validate_arguments(take_two, (1, 2,), {}) == ((1, 2), {}) + assert utils.validate_arguments(take_two, (1,), {'b': 2}) == ((1, 2), {}) + assert utils.validate_arguments(take_two_one_default, (1,), {}) == ((1, 0), {}) + assert utils.validate_arguments(take_two_one_default, (1, 2), {}) == ((1, 2), {}) + + pytest.raises(utils.ArgumentValidationError, + utils.validate_arguments, take_two, (), {}) + + assert utils.validate_arguments(take_none, (1, 2,), {'c': 3}) == ((), {}) + pytest.raises(utils.ArgumentValidationError, + utils.validate_arguments, take_none, (1,), {}, drop_extra=False) + pytest.raises(utils.ArgumentValidationError, + utils.validate_arguments, take_none, (), {'a': 1}, drop_extra=False) + +def test_header_set_duplication_bug(): + headers = Headers([ + ('Content-Type', 'text/html'), + ('Foo', 'bar'), + ('Blub', 'blah') + ]) + headers['blub'] = 'hehe' + headers['blafasel'] = 'humm' + assert headers == Headers([ + ('Content-Type', 'text/html'), + ('Foo', 'bar'), + ('blub', 'hehe'), + ('blafasel', 'humm') + ]) + +def test_append_slash_redirect(): + def app(env, sr): + return utils.append_slash_redirect(env)(env, sr) + client = Client(app, BaseResponse) + response = client.get('foo', base_url='http://example.org/app') + assert response.status_code == 
301 + assert response.headers['Location'] == 'http://example.org/app/foo/' + +def test_cached_property_doc(): + @utils.cached_property + def foo(): + """testing""" + return 42 + assert foo.__doc__ == 'testing' + assert foo.__name__ == 'foo' + assert foo.__module__ == __name__ + +def test_secure_filename(): + assert utils.secure_filename('My cool movie.mov') == 'My_cool_movie.mov' + assert utils.secure_filename('../../../etc/passwd') == 'etc_passwd' + assert utils.secure_filename(u'i contain cool \xfcml\xe4uts.txt') == \ + 'i_contain_cool_umlauts.txt' diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_wrappers.py python-werkzeug-0.10.4+dfsg1/tests/test_wrappers.py --- python-werkzeug-0.9.6+dfsg/tests/test_wrappers.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_wrappers.py 2015-03-26 15:33:58.000000000 +0000 @@ -0,0 +1,900 @@ +# -*- coding: utf-8 -*- +""" + tests.wrappers + ~~~~~~~~~~~~~~ + + Tests for the response and request objects. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +import pickle +from io import BytesIO +from datetime import datetime +from werkzeug._compat import iteritems + +from tests import strict_eq + +from werkzeug import wrappers +from werkzeug.exceptions import SecurityError +from werkzeug.wsgi import LimitedStream +from werkzeug.datastructures import MultiDict, ImmutableOrderedMultiDict, \ + ImmutableList, ImmutableTypeConversionDict, CharsetAccept, \ + MIMEAccept, LanguageAccept, Accept, CombinedMultiDict +from werkzeug.test import Client, create_environ, run_wsgi_app +from werkzeug._compat import implements_iterator, text_type + + +class RequestTestResponse(wrappers.BaseResponse): + """Subclass of the normal response class we use to test response + and base classes. Has some methods to test if things in the + response match. 
+ """ + + def __init__(self, response, status, headers): + wrappers.BaseResponse.__init__(self, response, status, headers) + self.body_data = pickle.loads(self.get_data()) + + def __getitem__(self, key): + return self.body_data[key] + + +def request_demo_app(environ, start_response): + request = wrappers.BaseRequest(environ) + assert 'werkzeug.request' in environ + start_response('200 OK', [('Content-Type', 'text/plain')]) + return [pickle.dumps({ + 'args': request.args, + 'args_as_list': list(request.args.lists()), + 'form': request.form, + 'form_as_list': list(request.form.lists()), + 'environ': prepare_environ_pickle(request.environ), + 'data': request.get_data() + })] + + +def prepare_environ_pickle(environ): + result = {} + for key, value in iteritems(environ): + try: + pickle.dumps((key, value)) + except Exception: + continue + result[key] = value + return result + + + +def assert_environ(environ, method): + strict_eq(environ['REQUEST_METHOD'], method) + strict_eq(environ['PATH_INFO'], '/') + strict_eq(environ['SCRIPT_NAME'], '') + strict_eq(environ['SERVER_NAME'], 'localhost') + strict_eq(environ['wsgi.version'], (1, 0)) + strict_eq(environ['wsgi.url_scheme'], 'http') + +def test_base_request(): + client = Client(request_demo_app, RequestTestResponse) + + # get requests + response = client.get('/?foo=bar&foo=hehe') + strict_eq(response['args'], MultiDict([('foo', u'bar'), ('foo', u'hehe')])) + strict_eq(response['args_as_list'], [('foo', [u'bar', u'hehe'])]) + strict_eq(response['form'], MultiDict()) + strict_eq(response['form_as_list'], []) + strict_eq(response['data'], b'') + assert_environ(response['environ'], 'GET') + + # post requests with form data + response = client.post('/?blub=blah', data='foo=blub+hehe&blah=42', + content_type='application/x-www-form-urlencoded') + strict_eq(response['args'], MultiDict([('blub', u'blah')])) + strict_eq(response['args_as_list'], [('blub', [u'blah'])]) + strict_eq(response['form'], MultiDict([('foo', u'blub hehe'), 
('blah', u'42')])) + strict_eq(response['data'], b'') + # currently we do not guarantee that the values are ordered correctly + # for post data. + ## strict_eq(response['form_as_list'], [('foo', ['blub hehe']), ('blah', ['42'])]) + assert_environ(response['environ'], 'POST') + + # patch requests with form data + response = client.patch('/?blub=blah', data='foo=blub+hehe&blah=42', + content_type='application/x-www-form-urlencoded') + strict_eq(response['args'], MultiDict([('blub', u'blah')])) + strict_eq(response['args_as_list'], [('blub', [u'blah'])]) + strict_eq(response['form'], + MultiDict([('foo', u'blub hehe'), ('blah', u'42')])) + strict_eq(response['data'], b'') + assert_environ(response['environ'], 'PATCH') + + # post requests with json data + json = b'{"foo": "bar", "blub": "blah"}' + response = client.post('/?a=b', data=json, content_type='application/json') + strict_eq(response['data'], json) + strict_eq(response['args'], MultiDict([('a', u'b')])) + strict_eq(response['form'], MultiDict()) + +def test_query_string_is_bytes(): + req = wrappers.Request.from_values(u'/?foo=%2f') + strict_eq(req.query_string, b'foo=%2f') + +def test_request_repr(): + req = wrappers.Request.from_values('/foobar') + assert "<Request 'http://localhost/foobar' [GET]>" == repr(req) + # test with non-ascii characters + req = wrappers.Request.from_values('/привет') + assert "<Request 'http://localhost/привет' [GET]>" == repr(req) + # test with unicode type for python 2 + req = wrappers.Request.from_values(u'/привет') + assert "<Request 'http://localhost/привет' [GET]>" == repr(req) + +def test_access_route(): + req = wrappers.Request.from_values(headers={ + 'X-Forwarded-For': '192.168.1.2, 192.168.1.1' + }) + req.environ['REMOTE_ADDR'] = '192.168.1.3' + assert req.access_route == ['192.168.1.2', '192.168.1.1'] + strict_eq(req.remote_addr, '192.168.1.3') + + req = wrappers.Request.from_values() + req.environ['REMOTE_ADDR'] = '192.168.1.3' + strict_eq(list(req.access_route), ['192.168.1.3']) + +def test_url_request_descriptors(): + req = wrappers.Request.from_values('/bar?foo=baz', 
'http://example.com/test') + strict_eq(req.path, u'/bar') + strict_eq(req.full_path, u'/bar?foo=baz') + strict_eq(req.script_root, u'/test') + strict_eq(req.url, u'http://example.com/test/bar?foo=baz') + strict_eq(req.base_url, u'http://example.com/test/bar') + strict_eq(req.url_root, u'http://example.com/test/') + strict_eq(req.host_url, u'http://example.com/') + strict_eq(req.host, 'example.com') + strict_eq(req.scheme, 'http') + + req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') + strict_eq(req.scheme, 'https') + +def test_url_request_descriptors_query_quoting(): + next = 'http%3A%2F%2Fwww.example.com%2F%3Fnext%3D%2F' + req = wrappers.Request.from_values('/bar?next=' + next, 'http://example.com/') + assert req.path == u'/bar' + strict_eq(req.full_path, u'/bar?next=' + next) + strict_eq(req.url, u'http://example.com/bar?next=' + next) + +def test_url_request_descriptors_hosts(): + req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') + req.trusted_hosts = ['example.com'] + strict_eq(req.path, u'/bar') + strict_eq(req.full_path, u'/bar?foo=baz') + strict_eq(req.script_root, u'/test') + strict_eq(req.url, u'http://example.com/test/bar?foo=baz') + strict_eq(req.base_url, u'http://example.com/test/bar') + strict_eq(req.url_root, u'http://example.com/test/') + strict_eq(req.host_url, u'http://example.com/') + strict_eq(req.host, 'example.com') + strict_eq(req.scheme, 'http') + + req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') + strict_eq(req.scheme, 'https') + + req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') + req.trusted_hosts = ['example.org'] + pytest.raises(SecurityError, lambda: req.url) + pytest.raises(SecurityError, lambda: req.base_url) + pytest.raises(SecurityError, lambda: req.url_root) + pytest.raises(SecurityError, lambda: req.host_url) + pytest.raises(SecurityError, lambda: req.host) + +def test_authorization_mixin(): + request = 
wrappers.Request.from_values(headers={ + 'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==' + }) + a = request.authorization + strict_eq(a.type, 'basic') + strict_eq(a.username, 'Aladdin') + strict_eq(a.password, 'open sesame') + +def test_stream_only_mixing(): + request = wrappers.PlainRequest.from_values( + data=b'foo=blub+hehe', + content_type='application/x-www-form-urlencoded' + ) + assert list(request.files.items()) == [] + assert list(request.form.items()) == [] + pytest.raises(AttributeError, lambda: request.data) + strict_eq(request.stream.read(), b'foo=blub+hehe') + +def test_base_response(): + # unicode + response = wrappers.BaseResponse(u'öäü') + strict_eq(response.get_data(), u'öäü'.encode('utf-8')) + + # writing + response = wrappers.Response('foo') + response.stream.write('bar') + strict_eq(response.get_data(), b'foobar') + + # set cookie + response = wrappers.BaseResponse() + response.set_cookie('foo', 'bar', 60, 0, '/blub', 'example.org') + strict_eq(response.headers.to_wsgi_list(), [ + ('Content-Type', 'text/plain; charset=utf-8'), + ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' + '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub') + ]) + + # delete cookie + response = wrappers.BaseResponse() + response.delete_cookie('foo') + strict_eq(response.headers.to_wsgi_list(), [ + ('Content-Type', 'text/plain; charset=utf-8'), + ('Set-Cookie', 'foo=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/') + ]) + + # close call forwarding + closed = [] + @implements_iterator + class Iterable(object): + def __next__(self): + raise StopIteration() + def __iter__(self): + return self + def close(self): + closed.append(True) + response = wrappers.BaseResponse(Iterable()) + response.call_on_close(lambda: closed.append(True)) + app_iter, status, headers = run_wsgi_app(response, + create_environ(), + buffered=True) + strict_eq(status, '200 OK') + strict_eq(''.join(app_iter), '') + strict_eq(len(closed), 2) + + # with statement + del closed[:] + 
response = wrappers.BaseResponse(Iterable()) + with response: + pass + assert len(closed) == 1 + +def test_response_status_codes(): + response = wrappers.BaseResponse() + response.status_code = 404 + strict_eq(response.status, '404 NOT FOUND') + response.status = '200 OK' + strict_eq(response.status_code, 200) + response.status = '999 WTF' + strict_eq(response.status_code, 999) + response.status_code = 588 + strict_eq(response.status_code, 588) + strict_eq(response.status, '588 UNKNOWN') + response.status = 'wtf' + strict_eq(response.status_code, 0) + strict_eq(response.status, '0 wtf') + +def test_type_forcing(): + def wsgi_application(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html')]) + return ['Hello World!'] + base_response = wrappers.BaseResponse('Hello World!', content_type='text/html') + + class SpecialResponse(wrappers.Response): + def foo(self): + return 42 + + # good enough for this simple application, but don't ever use that in + # real world examples! 
+ fake_env = {} + + for orig_resp in wsgi_application, base_response: + response = SpecialResponse.force_type(orig_resp, fake_env) + assert response.__class__ is SpecialResponse + strict_eq(response.foo(), 42) + strict_eq(response.get_data(), b'Hello World!') + assert response.content_type == 'text/html' + + # without env, no arbitrary conversion + pytest.raises(TypeError, SpecialResponse.force_type, wsgi_application) + +def test_accept_mixin(): + request = wrappers.Request({ + 'HTTP_ACCEPT': 'text/xml,application/xml,application/xhtml+xml,' + 'text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', + 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', + 'HTTP_ACCEPT_ENCODING': 'gzip,deflate', + 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5' + }) + assert request.accept_mimetypes == MIMEAccept([ + ('text/xml', 1), ('image/png', 1), ('application/xml', 1), + ('application/xhtml+xml', 1), ('text/html', 0.9), + ('text/plain', 0.8), ('*/*', 0.5) + ]) + strict_eq(request.accept_charsets, CharsetAccept([ + ('ISO-8859-1', 1), ('utf-8', 0.7), ('*', 0.7) + ])) + strict_eq(request.accept_encodings, Accept([ + ('gzip', 1), ('deflate', 1)])) + strict_eq(request.accept_languages, LanguageAccept([ + ('en-us', 1), ('en', 0.5)])) + + request = wrappers.Request({'HTTP_ACCEPT': ''}) + strict_eq(request.accept_mimetypes, MIMEAccept()) + +def test_etag_request_mixin(): + request = wrappers.Request({ + 'HTTP_CACHE_CONTROL': 'no-store, no-cache', + 'HTTP_IF_MATCH': 'w/"foo", bar, "baz"', + 'HTTP_IF_NONE_MATCH': 'w/"foo", bar, "baz"', + 'HTTP_IF_MODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT', + 'HTTP_IF_UNMODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT' + }) + assert request.cache_control.no_store + assert request.cache_control.no_cache + + for etags in request.if_match, request.if_none_match: + assert etags('bar') + assert etags.contains_raw('w/"foo"') + assert etags.contains_weak('foo') + assert not etags.contains('foo') + + assert request.if_modified_since == datetime(2008, 1, 22, 11, 
18, 44) + assert request.if_unmodified_since == datetime(2008, 1, 22, 11, 18, 44) + +def test_user_agent_mixin(): + user_agents = [ + ('Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) ' + 'Gecko/20071127 Firefox/2.0.0.11', 'firefox', 'macos', '2.0.0.11', + 'en-US'), + ('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; de-DE) Opera 8.54', + 'opera', 'windows', '8.54', 'de-DE'), + ('Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420 ' + '(KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3', + 'safari', 'iphone', '419.3', 'en'), + ('Bot Googlebot/2.1 ( http://www.googlebot.com/bot.html)', + 'google', None, '2.1', None), + ('Mozilla/5.0 (X11; CrOS armv7l 3701.81.0) AppleWebKit/537.31 ' + '(KHTML, like Gecko) Chrome/26.0.1410.57 Safari/537.31', + 'chrome', 'chromeos', '26.0.1410.57', None), + ('Mozilla/5.0 (Windows NT 6.3; Trident/7.0; .NET4.0E; rv:11.0) like Gecko', + 'msie', 'windows', '11.0', None), + ('Mozilla/5.0 (SymbianOS/9.3; Series60/3.2 NokiaE5-00/101.003; ' + 'Profile/MIDP-2.1 Configuration/CLDC-1.1 ) AppleWebKit/533.4 (KHTML, like Gecko) ' + 'NokiaBrowser/7.3.1.35 Mobile Safari/533.4 3gpp-gba', + 'safari', 'symbian', '533.4', None) + + ] + for ua, browser, platform, version, lang in user_agents: + request = wrappers.Request({'HTTP_USER_AGENT': ua}) + strict_eq(request.user_agent.browser, browser) + strict_eq(request.user_agent.platform, platform) + strict_eq(request.user_agent.version, version) + strict_eq(request.user_agent.language, lang) + assert bool(request.user_agent) + strict_eq(request.user_agent.to_header(), ua) + strict_eq(str(request.user_agent), ua) + + request = wrappers.Request({'HTTP_USER_AGENT': 'foo'}) + assert not request.user_agent + +def test_stream_wrapping(): + class LowercasingStream(object): + def __init__(self, stream): + self._stream = stream + def read(self, size=-1): + return self._stream.read(size).lower() + def readline(self, size=-1): + return self._stream.readline(size).lower() + + data = 
b'foo=Hello+World' + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + req.stream = LowercasingStream(req.stream) + assert req.form['foo'] == 'hello world' + +def test_data_descriptor_triggers_parsing(): + data = b'foo=Hello+World' + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + + assert req.data == b'' + assert req.form['foo'] == u'Hello World' + +def test_get_data_method_parsing_caching_behavior(): + data = b'foo=Hello+World' + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + + # get_data() caches, so form stays available + assert req.get_data() == data + assert req.form['foo'] == u'Hello World' + assert req.get_data() == data + + # here we access the form data first, caching is bypassed + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + assert req.form['foo'] == u'Hello World' + assert req.get_data() == b'' + + # Another case is uncached get data which trashes everything + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + assert req.get_data(cache=False) == data + assert req.get_data(cache=False) == b'' + assert req.form == {} + + # Or we can implicitly start the form parser which is similar to + # the old .data behavior + req = wrappers.Request.from_values('/', method='POST', data=data, + content_type='application/x-www-form-urlencoded') + assert req.get_data(parse_form_data=True) == b'' + assert req.form['foo'] == u'Hello World' + +def test_etag_response_mixin(): + response = wrappers.Response('Hello World') + assert response.get_etag() == (None, None) + response.add_etag() + assert response.get_etag() == ('b10a8db164e0754105b7a99be72e3fe5', False) + assert not response.cache_control + 
response.cache_control.must_revalidate = True + response.cache_control.max_age = 60 + response.headers['Content-Length'] = len(response.get_data()) + assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60', + 'max-age=60, must-revalidate') + + assert 'date' not in response.headers + env = create_environ() + env.update({ + 'REQUEST_METHOD': 'GET', + 'HTTP_IF_NONE_MATCH': response.get_etag()[0] + }) + response.make_conditional(env) + assert 'date' in response.headers + + # after the thing is invoked by the server as wsgi application + # (we're emulating this here), there must not be any entity + # headers left and the status code would have to be 304 + resp = wrappers.Response.from_app(response, env) + assert resp.status_code == 304 + assert not 'content-length' in resp.headers + + # make sure date is not overriden + response = wrappers.Response('Hello World') + response.date = 1337 + d = response.date + response.make_conditional(env) + assert response.date == d + + # make sure content length is only set if missing + response = wrappers.Response('Hello World') + response.content_length = 999 + response.make_conditional(env) + assert response.content_length == 999 + +def test_etag_response_mixin_freezing(): + class WithFreeze(wrappers.ETagResponseMixin, wrappers.BaseResponse): + pass + class WithoutFreeze(wrappers.BaseResponse, wrappers.ETagResponseMixin): + pass + + response = WithFreeze('Hello World') + response.freeze() + strict_eq(response.get_etag(), + (text_type(wrappers.generate_etag(b'Hello World')), False)) + response = WithoutFreeze('Hello World') + response.freeze() + assert response.get_etag() == (None, None) + response = wrappers.Response('Hello World') + response.freeze() + assert response.get_etag() == (None, None) + +def test_authenticate_mixin(): + resp = wrappers.Response() + resp.www_authenticate.type = 'basic' + resp.www_authenticate.realm = 'Testing' + strict_eq(resp.headers['WWW-Authenticate'], u'Basic realm="Testing"') + 
resp.www_authenticate.realm = None + resp.www_authenticate.type = None + assert 'WWW-Authenticate' not in resp.headers + +def test_authenticate_mixin_quoted_qop(): + # Example taken from https://github.com/mitsuhiko/werkzeug/issues/633 + resp = wrappers.Response() + resp.www_authenticate.set_digest('REALM','NONCE',qop=("auth","auth-int")) + + actual = set((resp.headers['WWW-Authenticate'] + ',').split()) + expected = set('Digest nonce="NONCE", realm="REALM", qop="auth, auth-int",'.split()) + assert actual == expected + + resp.www_authenticate.set_digest('REALM','NONCE',qop=("auth",)) + + actual = set((resp.headers['WWW-Authenticate'] + ',').split()) + expected = set('Digest nonce="NONCE", realm="REALM", qop="auth",'.split()) + assert actual == expected + +def test_response_stream_mixin(): + response = wrappers.Response() + response.stream.write('Hello ') + response.stream.write('World!') + assert response.response == ['Hello ', 'World!'] + assert response.get_data() == b'Hello World!' + +def test_common_response_descriptors_mixin(): + response = wrappers.Response() + response.mimetype = 'text/html' + assert response.mimetype == 'text/html' + assert response.content_type == 'text/html; charset=utf-8' + assert response.mimetype_params == {'charset': 'utf-8'} + response.mimetype_params['x-foo'] = 'yep' + del response.mimetype_params['charset'] + assert response.content_type == 'text/html; x-foo=yep' + + now = datetime.utcnow().replace(microsecond=0) + + assert response.content_length is None + response.content_length = '42' + assert response.content_length == 42 + + for attr in 'date', 'age', 'expires': + assert getattr(response, attr) is None + setattr(response, attr, now) + assert getattr(response, attr) == now + + assert response.retry_after is None + response.retry_after = now + assert response.retry_after == now + + assert not response.vary + response.vary.add('Cookie') + response.vary.add('Content-Language') + assert 'cookie' in response.vary + assert 
response.vary.to_header() == 'Cookie, Content-Language' + response.headers['Vary'] = 'Content-Encoding' + assert response.vary.as_set() == set(['content-encoding']) + + response.allow.update(['GET', 'POST']) + assert response.headers['Allow'] == 'GET, POST' + + response.content_language.add('en-US') + response.content_language.add('fr') + assert response.headers['Content-Language'] == 'en-US, fr' + +def test_common_request_descriptors_mixin(): + request = wrappers.Request.from_values( + content_type='text/html; charset=utf-8', + content_length='23', + headers={ + 'Referer': 'http://www.example.com/', + 'Date': 'Sat, 28 Feb 2009 19:04:35 GMT', + 'Max-Forwards': '10', + 'Pragma': 'no-cache', + 'Content-Encoding': 'gzip', + 'Content-MD5': '9a3bc6dbc47a70db25b84c6e5867a072' + } + ) + + assert request.content_type == 'text/html; charset=utf-8' + assert request.mimetype == 'text/html' + assert request.mimetype_params == {'charset': 'utf-8'} + assert request.content_length == 23 + assert request.referrer == 'http://www.example.com/' + assert request.date == datetime(2009, 2, 28, 19, 4, 35) + assert request.max_forwards == 10 + assert 'no-cache' in request.pragma + assert request.content_encoding == 'gzip' + assert request.content_md5 == '9a3bc6dbc47a70db25b84c6e5867a072' + +def test_shallow_mode(): + request = wrappers.Request({'QUERY_STRING': 'foo=bar'}, shallow=True) + assert request.args['foo'] == 'bar' + pytest.raises(RuntimeError, lambda: request.form['foo']) + +def test_form_parsing_failed(): + data = ( + b'--blah\r\n' + ) + data = wrappers.Request.from_values( + input_stream=BytesIO(data), + content_length=len(data), + content_type='multipart/form-data; boundary=foo', + method='POST' + ) + assert not data.files + assert not data.form + +def test_file_closing(): + data = (b'--foo\r\n' + b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' + b'Content-Type: text/plain; charset=utf-8\r\n\r\n' + b'file contents, just the contents\r\n' + b'--foo--') + 
req = wrappers.Request.from_values( + input_stream=BytesIO(data), + content_length=len(data), + content_type='multipart/form-data; boundary=foo', + method='POST' + ) + foo = req.files['foo'] + assert foo.mimetype == 'text/plain' + assert foo.filename == 'foo.txt' + + assert foo.closed == False + req.close() + assert foo.closed == True + +def test_file_closing_with(): + data = (b'--foo\r\n' + b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' + b'Content-Type: text/plain; charset=utf-8\r\n\r\n' + b'file contents, just the contents\r\n' + b'--foo--') + req = wrappers.Request.from_values( + input_stream=BytesIO(data), + content_length=len(data), + content_type='multipart/form-data; boundary=foo', + method='POST' + ) + with req: + foo = req.files['foo'] + assert foo.mimetype == 'text/plain' + assert foo.filename == 'foo.txt' + + assert foo.closed == True + +def test_url_charset_reflection(): + req = wrappers.Request.from_values() + req.charset = 'utf-7' + assert req.url_charset == 'utf-7' + +def test_response_streamed(): + r = wrappers.Response() + assert not r.is_streamed + r = wrappers.Response("Hello World") + assert not r.is_streamed + r = wrappers.Response(["foo", "bar"]) + assert not r.is_streamed + def gen(): + if 0: + yield None + r = wrappers.Response(gen()) + assert r.is_streamed + +def test_response_iter_wrapping(): + def uppercasing(iterator): + for item in iterator: + yield item.upper() + def generator(): + yield 'foo' + yield 'bar' + req = wrappers.Request.from_values() + resp = wrappers.Response(generator()) + del resp.headers['Content-Length'] + resp.response = uppercasing(resp.iter_encoded()) + actual_resp = wrappers.Response.from_app(resp, req.environ, buffered=True) + assert actual_resp.get_data() == b'FOOBAR' + +def test_response_freeze(): + def generate(): + yield "foo" + yield "bar" + resp = wrappers.Response(generate()) + resp.freeze() + assert resp.response == [b'foo', b'bar'] + assert resp.headers['content-length'] == '6' + 
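The `test_file_closing` tests above build their multipart/form-data request bodies by hand, CRLF by CRLF. As a standalone illustration (not part of the patch), the following sketch assembles the same kind of body; the helper name `build_multipart_body` is invented here, and the boundary/field layout mirrors the test's hand-rolled bytes.

```python
# Illustrative sketch only: reproduces the shape of the hand-built
# multipart bodies used in test_file_closing above. The helper name
# is hypothetical, not a werkzeug API.

def build_multipart_body(boundary, name, filename, content_type, payload):
    """Assemble a minimal single-part multipart/form-data body as bytes.

    The part is opened by '--<boundary>' followed by CRLF-separated
    headers, a blank line, the payload, and a closing '--<boundary>--'.
    """
    return (
        b'--' + boundary + b'\r\n'
        b'Content-Disposition: form-data; name="' + name + b'"; '
        b'filename="' + filename + b'"\r\n'
        b'Content-Type: ' + content_type + b'\r\n\r\n'
        + payload + b'\r\n'
        b'--' + boundary + b'--'
    )

body = build_multipart_body(b'foo', b'foo', b'foo.txt',
                            b'text/plain; charset=utf-8',
                            b'file contents, just the contents')
```

This produces byte-for-byte the `data` literal used in `test_file_closing` and `test_file_closing_with`, which is why those tests pass `content_type='multipart/form-data; boundary=foo'` alongside it.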
+def test_other_method_payload(): + data = b'Hello World' + req = wrappers.Request.from_values(input_stream=BytesIO(data), + content_length=len(data), + content_type='text/plain', + method='WHAT_THE_FUCK') + assert req.get_data() == data + assert isinstance(req.stream, LimitedStream) + +def test_urlfication(): + resp = wrappers.Response() + resp.headers['Location'] = u'http://üser:pässword@☃.net/påth' + resp.headers['Content-Location'] = u'http://☃.net/' + headers = resp.get_wsgi_headers(create_environ()) + assert headers['location'] == \ + 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th' + assert headers['content-location'] == 'http://xn--n3h.net/' + +def test_new_response_iterator_behavior(): + req = wrappers.Request.from_values() + resp = wrappers.Response(u'Hello Wörld!') + + def get_content_length(resp): + headers = resp.get_wsgi_headers(req.environ) + return headers.get('content-length', type=int) + + def generate_items(): + yield "Hello " + yield u"Wörld!" + + # werkzeug encodes when set to `data` now, which happens + # if a string is passed to the response object. 
+ assert resp.response == [u'Hello Wörld!'.encode('utf-8')] + assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') + assert get_content_length(resp) == 13 + assert not resp.is_streamed + assert resp.is_sequence + + # try the same for manual assignment + resp.set_data(u'Wörd') + assert resp.response == [u'Wörd'.encode('utf-8')] + assert resp.get_data() == u'Wörd'.encode('utf-8') + assert get_content_length(resp) == 5 + assert not resp.is_streamed + assert resp.is_sequence + + # automatic generator sequence conversion + resp.response = generate_items() + assert resp.is_streamed + assert not resp.is_sequence + assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') + assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')] + assert not resp.is_streamed + assert resp.is_sequence + + # automatic generator sequence conversion + resp.response = generate_items() + resp.implicit_sequence_conversion = False + assert resp.is_streamed + assert not resp.is_sequence + pytest.raises(RuntimeError, lambda: resp.get_data()) + resp.make_sequence() + assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') + assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')] + assert not resp.is_streamed + assert resp.is_sequence + + # stream makes it a list no matter how the conversion is set + for val in True, False: + resp.implicit_sequence_conversion = val + resp.response = ("foo", "bar") + assert resp.is_sequence + resp.stream.write('baz') + assert resp.response == ['foo', 'bar', 'baz'] + +def test_form_data_ordering(): + class MyRequest(wrappers.Request): + parameter_storage_class = ImmutableOrderedMultiDict + + req = MyRequest.from_values('/?foo=1&bar=0&foo=3') + assert list(req.args) == ['foo', 'bar'] + assert list(req.args.items(multi=True)) == [ + ('foo', '1'), + ('bar', '0'), + ('foo', '3') + ] + assert isinstance(req.args, ImmutableOrderedMultiDict) + assert isinstance(req.values, CombinedMultiDict) + assert req.values['foo'] == '1' + assert 
req.values.getlist('foo') == ['1', '3'] + +def test_storage_classes(): + class MyRequest(wrappers.Request): + dict_storage_class = dict + list_storage_class = list + parameter_storage_class = dict + req = MyRequest.from_values('/?foo=baz', headers={ + 'Cookie': 'foo=bar' + }) + assert type(req.cookies) is dict + assert req.cookies == {'foo': 'bar'} + assert type(req.access_route) is list + + assert type(req.args) is dict + assert type(req.values) is CombinedMultiDict + assert req.values['foo'] == u'baz' + + req = wrappers.Request.from_values(headers={ + 'Cookie': 'foo=bar' + }) + assert type(req.cookies) is ImmutableTypeConversionDict + assert req.cookies == {'foo': 'bar'} + assert type(req.access_route) is ImmutableList + + MyRequest.list_storage_class = tuple + req = MyRequest.from_values() + assert type(req.access_route) is tuple + +def test_response_headers_passthrough(): + headers = wrappers.Headers() + resp = wrappers.Response(headers=headers) + assert resp.headers is headers + +def test_response_304_no_content_length(): + resp = wrappers.Response('Test', status=304) + env = create_environ() + assert 'content-length' not in resp.get_wsgi_headers(env) + +def test_ranges(): + # basic range stuff + req = wrappers.Request.from_values() + assert req.range is None + req = wrappers.Request.from_values(headers={'Range': 'bytes=0-499'}) + assert req.range.ranges == [(0, 500)] + + resp = wrappers.Response() + resp.content_range = req.range.make_content_range(1000) + assert resp.content_range.units == 'bytes' + assert resp.content_range.start == 0 + assert resp.content_range.stop == 500 + assert resp.content_range.length == 1000 + assert resp.headers['Content-Range'] == 'bytes 0-499/1000' + + resp.content_range.unset() + assert 'Content-Range' not in resp.headers + + resp.headers['Content-Range'] = 'bytes 0-499/1000' + assert resp.content_range.units == 'bytes' + assert resp.content_range.start == 0 + assert resp.content_range.stop == 500 + assert 
resp.content_range.length == 1000 + +def test_auto_content_length(): + resp = wrappers.Response('Hello World!') + assert resp.content_length == 12 + + resp = wrappers.Response(['Hello World!']) + assert resp.content_length is None + assert resp.get_wsgi_headers({})['Content-Length'] == '12' + +def test_stream_content_length(): + resp = wrappers.Response() + resp.stream.writelines(['foo', 'bar', 'baz']) + assert resp.get_wsgi_headers({})['Content-Length'] == '9' + + resp = wrappers.Response() + resp.make_conditional({'REQUEST_METHOD': 'GET'}) + resp.stream.writelines(['foo', 'bar', 'baz']) + assert resp.get_wsgi_headers({})['Content-Length'] == '9' + + resp = wrappers.Response('foo') + resp.stream.writelines(['bar', 'baz']) + assert resp.get_wsgi_headers({})['Content-Length'] == '9' + +def test_disabled_auto_content_length(): + class MyResponse(wrappers.Response): + automatically_set_content_length = False + resp = MyResponse('Hello World!') + assert resp.content_length is None + + resp = MyResponse(['Hello World!']) + assert resp.content_length is None + assert 'Content-Length' not in resp.get_wsgi_headers({}) + + resp = MyResponse() + resp.make_conditional({ + 'REQUEST_METHOD': 'GET' + }) + assert resp.content_length is None + assert 'Content-Length' not in resp.get_wsgi_headers({}) + +def test_location_header_autocorrect(): + env = create_environ() + class MyResponse(wrappers.Response): + autocorrect_location_header = False + resp = MyResponse('Hello World!') + resp.headers['Location'] = '/test' + assert resp.get_wsgi_headers(env)['Location'] == '/test' + + resp = wrappers.Response('Hello World!') + resp.headers['Location'] = '/test' + assert resp.get_wsgi_headers(env)['Location'] == 'http://localhost/test' + +def test_modified_url_encoding(): + class ModifiedRequest(wrappers.Request): + url_charset = 'euc-kr' + + req = ModifiedRequest.from_values(u'/?foo=정상처리'.encode('euc-kr')) + strict_eq(req.args['foo'], u'정상처리') + + +def 
test_request_method_case_sensitivity(): + req = wrappers.Request({'REQUEST_METHOD': 'get'}) + assert req.method == 'GET' diff -Nru python-werkzeug-0.9.6+dfsg/tests/test_wsgi.py python-werkzeug-0.10.4+dfsg1/tests/test_wsgi.py --- python-werkzeug-0.9.6+dfsg/tests/test_wsgi.py 1970-01-01 00:00:00.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/tests/test_wsgi.py 2015-03-26 14:11:47.000000000 +0000 @@ -0,0 +1,367 @@ +# -*- coding: utf-8 -*- +""" + tests.wsgi + ~~~~~~~~~~ + + Tests the WSGI utilities. + + :copyright: (c) 2014 by Armin Ronacher. + :license: BSD, see LICENSE for more details. +""" +import pytest + +from os import path +from contextlib import closing + +from tests import strict_eq + +from werkzeug.wrappers import BaseResponse +from werkzeug.exceptions import BadRequest, ClientDisconnected +from werkzeug.test import Client, create_environ, run_wsgi_app +from werkzeug import wsgi +from werkzeug._compat import StringIO, BytesIO, NativeStringIO, to_native, \ + to_bytes + + +def test_shareddatamiddleware_get_file_loader(): + app = wsgi.SharedDataMiddleware(None, {}) + assert callable(app.get_file_loader('foo')) + +def test_shared_data_middleware(tmpdir): + def null_application(environ, start_response): + start_response('404 NOT FOUND', [('Content-Type', 'text/plain')]) + yield b'NOT FOUND' + + test_dir = str(tmpdir) + with open(path.join(test_dir, to_native(u'äöü', 'utf-8')), 'w') as test_file: + test_file.write(u'FOUND') + + app = wsgi.SharedDataMiddleware(null_application, { + '/': path.join(path.dirname(__file__), 'res'), + '/sources': path.join(path.dirname(__file__), 'res'), + '/pkg': ('werkzeug.debug', 'shared'), + '/foo': test_dir + }) + + for p in '/test.txt', '/sources/test.txt', '/foo/äöü': + app_iter, status, headers = run_wsgi_app(app, create_environ(p)) + assert status == '200 OK' + with closing(app_iter) as app_iter: + data = b''.join(app_iter).strip() + assert data == b'FOUND' + + app_iter, status, headers = run_wsgi_app( + app, 
create_environ('/pkg/debugger.js')) + with closing(app_iter) as app_iter: + contents = b''.join(app_iter) + assert b'$(function() {' in contents + + app_iter, status, headers = run_wsgi_app( + app, create_environ('/missing')) + assert status == '404 NOT FOUND' + assert b''.join(app_iter).strip() == b'NOT FOUND' + +def test_dispatchermiddleware(): + def null_application(environ, start_response): + start_response('404 NOT FOUND', [('Content-Type', 'text/plain')]) + yield b'NOT FOUND' + + def dummy_application(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/plain')]) + yield to_bytes(environ['SCRIPT_NAME']) + + app = wsgi.DispatcherMiddleware(null_application, { + '/test1': dummy_application, + '/test2/very': dummy_application, + }) + tests = { + '/test1': ('/test1', '/test1/asfd', '/test1/very'), + '/test2/very': ('/test2/very', '/test2/very/long/path/after/script/name') + } + for name, urls in tests.items(): + for p in urls: + environ = create_environ(p) + app_iter, status, headers = run_wsgi_app(app, environ) + assert status == '200 OK' + assert b''.join(app_iter).strip() == to_bytes(name) + + app_iter, status, headers = run_wsgi_app( + app, create_environ('/missing')) + assert status == '404 NOT FOUND' + assert b''.join(app_iter).strip() == b'NOT FOUND' + +def test_get_host(): + env = {'HTTP_X_FORWARDED_HOST': 'example.org', + 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} + assert wsgi.get_host(env) == 'example.org' + assert wsgi.get_host(create_environ('/', 'http://example.org')) == \ + 'example.org' + +def test_get_host_multiple_forwarded(): + env = {'HTTP_X_FORWARDED_HOST': 'example.com, example.org', + 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} + assert wsgi.get_host(env) == 'example.com' + assert wsgi.get_host(create_environ('/', 'http://example.com')) == \ + 'example.com' + +def test_get_host_validation(): + env = {'HTTP_X_FORWARDED_HOST': 'example.org', + 'SERVER_NAME': 'bullshit', 'HOST_NAME': 
'ignore me dammit'} + assert wsgi.get_host(env, trusted_hosts=['.example.org']) == 'example.org' + pytest.raises(BadRequest, wsgi.get_host, env, + trusted_hosts=['example.com']) + +def test_responder(): + def foo(environ, start_response): + return BaseResponse(b'Test') + client = Client(wsgi.responder(foo), BaseResponse) + response = client.get('/') + assert response.status_code == 200 + assert response.data == b'Test' + +def test_pop_path_info(): + original_env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b///c'} + + # regular path info popping + def assert_tuple(script_name, path_info): + assert env.get('SCRIPT_NAME') == script_name + assert env.get('PATH_INFO') == path_info + env = original_env.copy() + pop = lambda: wsgi.pop_path_info(env) + + assert_tuple('/foo', '/a/b///c') + assert pop() == 'a' + assert_tuple('/foo/a', '/b///c') + assert pop() == 'b' + assert_tuple('/foo/a/b', '///c') + assert pop() == 'c' + assert_tuple('/foo/a/b///c', '') + assert pop() is None + +def test_peek_path_info(): + env = { + 'SCRIPT_NAME': '/foo', + 'PATH_INFO': '/aaa/b///c' + } + + assert wsgi.peek_path_info(env) == 'aaa' + assert wsgi.peek_path_info(env) == 'aaa' + assert wsgi.peek_path_info(env, charset=None) == b'aaa' + assert wsgi.peek_path_info(env, charset=None) == b'aaa' + +def test_path_info_and_script_name_fetching(): + env = create_environ(u'/\N{SNOWMAN}', u'http://example.com/\N{COMET}/') + assert wsgi.get_path_info(env) == u'/\N{SNOWMAN}' + assert wsgi.get_path_info(env, charset=None) == u'/\N{SNOWMAN}'.encode('utf-8') + assert wsgi.get_script_name(env) == u'/\N{COMET}' + assert wsgi.get_script_name(env, charset=None) == u'/\N{COMET}'.encode('utf-8') + +def test_query_string_fetching(): + env = create_environ(u'/?\N{SNOWMAN}=\N{COMET}') + qs = wsgi.get_query_string(env) + strict_eq(qs, '%E2%98%83=%E2%98%84') + +def test_limited_stream(): + class RaisingLimitedStream(wsgi.LimitedStream): + def on_exhausted(self): + raise BadRequest('input stream exhausted') + + io = 
BytesIO(b'123456') + stream = RaisingLimitedStream(io, 3) + strict_eq(stream.read(), b'123') + pytest.raises(BadRequest, stream.read) + + io = BytesIO(b'123456') + stream = RaisingLimitedStream(io, 3) + strict_eq(stream.tell(), 0) + strict_eq(stream.read(1), b'1') + strict_eq(stream.tell(), 1) + strict_eq(stream.read(1), b'2') + strict_eq(stream.tell(), 2) + strict_eq(stream.read(1), b'3') + strict_eq(stream.tell(), 3) + pytest.raises(BadRequest, stream.read) + + io = BytesIO(b'123456\nabcdefg') + stream = wsgi.LimitedStream(io, 9) + strict_eq(stream.readline(), b'123456\n') + strict_eq(stream.readline(), b'ab') + + io = BytesIO(b'123456\nabcdefg') + stream = wsgi.LimitedStream(io, 9) + strict_eq(stream.readlines(), [b'123456\n', b'ab']) + + io = BytesIO(b'123456\nabcdefg') + stream = wsgi.LimitedStream(io, 9) + strict_eq(stream.readlines(2), [b'12']) + strict_eq(stream.readlines(2), [b'34']) + strict_eq(stream.readlines(), [b'56\n', b'ab']) + + io = BytesIO(b'123456\nabcdefg') + stream = wsgi.LimitedStream(io, 9) + strict_eq(stream.readline(100), b'123456\n') + + io = BytesIO(b'123456\nabcdefg') + stream = wsgi.LimitedStream(io, 9) + strict_eq(stream.readlines(100), [b'123456\n', b'ab']) + + io = BytesIO(b'123456') + stream = wsgi.LimitedStream(io, 3) + strict_eq(stream.read(1), b'1') + strict_eq(stream.read(1), b'2') + strict_eq(stream.read(), b'3') + strict_eq(stream.read(), b'') + + io = BytesIO(b'123456') + stream = wsgi.LimitedStream(io, 3) + strict_eq(stream.read(-1), b'123') + + io = BytesIO(b'123456') + stream = wsgi.LimitedStream(io, 0) + strict_eq(stream.read(-1), b'') + + io = StringIO(u'123456') + stream = wsgi.LimitedStream(io, 0) + strict_eq(stream.read(-1), u'') + + io = StringIO(u'123\n456\n') + stream = wsgi.LimitedStream(io, 8) + strict_eq(list(stream), [u'123\n', u'456\n']) + +def test_limited_stream_disconnection(): + io = BytesIO(b'A bit of content') + + # disconnect detection on out of bytes + stream = wsgi.LimitedStream(io, 255) + with 
pytest.raises(ClientDisconnected):
+        stream.read()
+
+    # disconnect detection because file close
+    io = BytesIO(b'x' * 255)
+    io.close()
+    stream = wsgi.LimitedStream(io, 255)
+    with pytest.raises(ClientDisconnected):
+        stream.read()
+
+def test_path_info_extraction():
+    x = wsgi.extract_path_info('http://example.com/app', '/app/hello')
+    assert x == u'/hello'
+    x = wsgi.extract_path_info('http://example.com/app',
+                               'https://example.com/app/hello')
+    assert x == u'/hello'
+    x = wsgi.extract_path_info('http://example.com/app/',
+                               'https://example.com/app/hello')
+    assert x == u'/hello'
+    x = wsgi.extract_path_info('http://example.com/app/',
+                               'https://example.com/app')
+    assert x == u'/'
+    x = wsgi.extract_path_info(u'http://☃.net/', u'/fööbär')
+    assert x == u'/fööbär'
+    x = wsgi.extract_path_info(u'http://☃.net/x', u'http://☃.net/x/fööbär')
+    assert x == u'/fööbär'
+
+    env = create_environ(u'/fööbär', u'http://☃.net/x/')
+    x = wsgi.extract_path_info(env, u'http://☃.net/x/fööbär')
+    assert x == u'/fööbär'
+
+    x = wsgi.extract_path_info('http://example.com/app/',
+                               'https://example.com/a/hello')
+    assert x is None
+    x = wsgi.extract_path_info('http://example.com/app/',
+                               'https://example.com/app/hello',
+                               collapse_http_schemes=False)
+    assert x is None
+
+def test_get_host_fallback():
+    assert wsgi.get_host({
+        'SERVER_NAME': 'foobar.example.com',
+        'wsgi.url_scheme': 'http',
+        'SERVER_PORT': '80'
+    }) == 'foobar.example.com'
+    assert wsgi.get_host({
+        'SERVER_NAME': 'foobar.example.com',
+        'wsgi.url_scheme': 'http',
+        'SERVER_PORT': '81'
+    }) == 'foobar.example.com:81'
+
+def test_get_current_url_unicode():
+    env = create_environ()
+    env['QUERY_STRING'] = 'foo=bar&baz=blah&meh=\xcf'
+    rv = wsgi.get_current_url(env)
+    strict_eq(rv,
+              u'http://localhost/?foo=bar&baz=blah&meh=\ufffd')
+
+def test_multi_part_line_breaks():
+    data = 'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK'
+    test_stream = NativeStringIO(data)
+    lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
+                                     buffer_size=16))
+    assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n',
+                     'ABCDEFGHIJK']
+
+    data = 'abc\r\nThis line is broken by the buffer length.' \
+           '\r\nFoo bar baz'
+    test_stream = NativeStringIO(data)
+    lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
+                                     buffer_size=24))
+    assert lines == ['abc\r\n', 'This line is broken by the buffer '
+                     'length.\r\n', 'Foo bar baz']
+
+def test_multi_part_line_breaks_bytes():
+    data = b'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK'
+    test_stream = BytesIO(data)
+    lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
+                                     buffer_size=16))
+    assert lines == [b'abcdef\r\n', b'ghijkl\r\n', b'mnopqrstuvwxyz\r\n',
+                     b'ABCDEFGHIJK']
+
+    data = b'abc\r\nThis line is broken by the buffer length.' \
+           b'\r\nFoo bar baz'
+    test_stream = BytesIO(data)
+    lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
+                                     buffer_size=24))
+    assert lines == [b'abc\r\n', b'This line is broken by the buffer '
+                     b'length.\r\n', b'Foo bar baz']
+
+def test_multi_part_line_breaks_problematic():
+    data = 'abc\rdef\r\nghi'
+    for x in range(1, 10):
+        test_stream = NativeStringIO(data)
+        lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
+                                         buffer_size=4))
+        assert lines == ['abc\r', 'def\r\n', 'ghi']
+
+def test_iter_functions_support_iterators():
+    data = ['abcdef\r\nghi', 'jkl\r\nmnopqrstuvwxyz\r', '\nABCDEFGHIJK']
+    lines = list(wsgi.make_line_iter(data))
+    assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n',
+                     'ABCDEFGHIJK']
+
+def test_make_chunk_iter():
+    data = [u'abcdefXghi', u'jklXmnopqrstuvwxyzX', u'ABCDEFGHIJK']
+    rv = list(wsgi.make_chunk_iter(data, 'X'))
+    assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK']
+
+    data = u'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
+    test_stream = StringIO(data)
+    rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
+                                   buffer_size=4))
+    assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK']
+
+def test_make_chunk_iter_bytes():
+    data = [b'abcdefXghi', b'jklXmnopqrstuvwxyzX', b'ABCDEFGHIJK']
+    rv = list(wsgi.make_chunk_iter(data, 'X'))
+    assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK']
+
+    data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
+    test_stream = BytesIO(data)
+    rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
+                                   buffer_size=4))
+    assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK']
+
+def test_lines_longer_buffer_size():
+    data = '1234567890\n1234567890\n'
+    for bufsize in range(1, 15):
+        lines = list(wsgi.make_line_iter(NativeStringIO(data),
+                                         limit=len(data),
+                                         buffer_size=bufsize))
+        assert lines == ['1234567890\n', '1234567890\n']
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/_compat.py python-werkzeug-0.10.4+dfsg1/werkzeug/_compat.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/_compat.py 2013-08-17 20:48:51.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/_compat.py 2015-03-26 14:11:47.000000000 +0000
@@ -82,7 +82,7 @@
         leave it as unicode.
         """
         try:
-            return str(s)
+            return to_native(s)
         except UnicodeError:
             return s
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/contrib/atom.py python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/atom.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/contrib/atom.py 2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/atom.py 2015-03-26 14:11:47.000000000 +0000
@@ -44,7 +44,10 @@

 def format_iso8601(obj):
     """Format a datetime object for iso8601"""
-    return obj.strftime('%Y-%m-%dT%H:%M:%SZ')
+    iso8601 = obj.isoformat()
+    if obj.tzinfo:
+        return iso8601
+    return iso8601 + 'Z'


 @implements_to_string
@@ -61,6 +64,7 @@
     :param updated: the time the feed was modified the last time.  Must
                     be a :class:`datetime.datetime` object.  If not
                     present the latest entry's `updated` is used.
+                    Treated as UTC if naive datetime.
     :param feed_url: the URL to the feed.  Should be the URL that was
                      requested.
     :param author: the author of the feed.  Must be either a string (the
@@ -239,7 +243,8 @@
     :param id: a globally unique id for the entry.  Must be an URI.  If
               not present the URL is used, but one of both is required.
     :param updated: the time the entry was modified the last time.  Must
-                    be a :class:`datetime.datetime` object. Required.
+                    be a :class:`datetime.datetime` object.  Treated as
+                    UTC if naive datetime.  Required.
     :param author: the author of the entry.  Must be either a string (the
                    name) or a dict with name (required) and uri or
                    email (both optional).  Can be a list of (may be
@@ -247,7 +252,8 @@
                    multiple authors.  Required if the feed does not have an
                    author element.
     :param published: the time the entry was initially published.  Must
-                      be a :class:`datetime.datetime` object.
+                      be a :class:`datetime.datetime` object.  Treated as
+                      UTC if naive datetime.
     :param rights: copyright information for the entry.
     :param rights_type: the type attribute for the rights element.  One of
                         ``'html'``, ``'text'`` or ``'xhtml'``.  Default is
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/contrib/cache.py python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/cache.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/contrib/cache.py 2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/cache.py 2015-03-26 15:33:58.000000000 +0000
@@ -58,16 +58,17 @@
 """
 import os
 import re
+import errno
 import tempfile
 from hashlib import md5
 from time import time
 try:
     import cPickle as pickle
-except ImportError:
+except ImportError:  # pragma: no cover
     import pickle

 from werkzeug._compat import iteritems, string_types, text_type, \
-    integer_types, to_bytes
+    integer_types, to_native

 from werkzeug.posixemulation import rename
@@ -82,10 +83,8 @@
     ...     assert k*k == v

    """
-    if hasattr(mappingorseq, "iteritems"):
-        return mappingorseq.iteritems()
-    elif hasattr(mappingorseq, "items"):
-        return mappingorseq.items()
+    if hasattr(mappingorseq, 'items'):
+        return iteritems(mappingorseq)
     return mappingorseq

@@ -93,37 +92,37 @@
     """Baseclass for the cache systems.  All the cache systems implement this
     API or a superset of it.

-    :param default_timeout: the default timeout that is used if no timeout is
-                            specified on :meth:`set`.
+    :param default_timeout: the default timeout (in seconds) that is used if
+                            no timeout is specified on :meth:`set`.
     """

     def __init__(self, default_timeout=300):
         self.default_timeout = default_timeout

     def get(self, key):
-        """Looks up key in the cache and returns the value for it.
-        If the key does not exist `None` is returned instead.
+        """Look up key in the cache and return the value for it.

         :param key: the key to be looked up.
+        :returns: The value if it exists and is readable, else ``None``.
         """
         return None

     def delete(self, key):
-        """Deletes `key` from the cache.  If it does not exist in the cache
-        nothing happens.
+        """Delete `key` from the cache.

         :param key: the key to delete.
+        :returns: Whether the key existed and has been deleted.
+        :rtype: boolean
         """
-        pass
+        return True

     def get_many(self, *keys):
         """Returns a list of values for the given keys.
-        For each key a item in the list is created.  Example::
+        For each key a item in the list is created::

             foo, bar = cache.get_many("foo", "bar")

-        If a key can't be looked up `None` is returned for that key
-        instead.
+        Has the same error handling as :meth:`get`.

         :param keys: The function accepts multiple keys as positional
                      arguments.
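The hunks above change the ``BaseCache`` contract: ``get`` returns ``None`` for missing or unreadable keys, and the mutating methods now report success as booleans instead of silently returning ``None``. A minimal sketch of that contract over a plain dict backend — ``MiniCache`` and its store layout are a hypothetical illustration, not Werkzeug's implementation:

```python
import time


class MiniCache:
    """Sketch of the 0.10-style cache contract: boolean returns."""

    def __init__(self, default_timeout=300):
        self.default_timeout = default_timeout
        self._store = {}  # key -> (expiry timestamp, value)

    def get(self, key):
        # Returns the value, or None if missing or expired (never raises).
        try:
            expires, value = self._store[key]
        except KeyError:
            return None
        return value if expires > time.time() else None

    def set(self, key, value, timeout=None):
        # Returns True on success; a real backend would return False on error.
        timeout = self.default_timeout if timeout is None else timeout
        self._store[key] = (time.time() + timeout, value)
        return True

    def add(self, key, value, timeout=None):
        # Like set(), but returns False when the key already exists.
        if self.get(key) is not None:
            return False
        return self.set(key, value, timeout)

    def delete(self, key):
        # Returns whether the key existed and was removed.
        return self._store.pop(key, None) is not None


cache = MiniCache()
assert cache.set('answer', 42) is True
assert cache.add('answer', 0) is False   # already present
assert cache.delete('answer') is True
assert cache.delete('answer') is False   # nothing left to delete
```

The concrete backends reworked in the rest of this diff (``SimpleCache``, ``MemcachedCache``, ``RedisCache``, ``FileSystemCache``) are all adjusted to honor these boolean returns.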
@@ -131,7 +130,7 @@
         return map(self.get, keys)

     def get_dict(self, *keys):
-        """Works like :meth:`get_many` but returns a dict::
+        """Like :meth:`get_many` but return a dict::

             d = cache.get_dict("foo", "bar")
             foo = d["foo"]
@@ -143,15 +142,19 @@
         return dict(zip(keys, self.get_many(*keys)))

     def set(self, key, value, timeout=None):
-        """Adds a new key/value to the cache (overwrites value, if key already
+        """Add a new key/value to the cache (overwrites value, if key already
         exists in the cache).

         :param key: the key to set
         :param value: the value for the key
         :param timeout: the cache timeout for the key (if not specified,
                         it uses the default timeout).
+        :returns: ``True`` if key has been updated, ``False`` for backend
+                  errors.  Pickling errors, however, will raise a subclass of
+                  ``pickle.PickleError``.
+        :rtype: boolean
         """
-        pass
+        return True

     def add(self, key, value, timeout=None):
         """Works like :meth:`set` but does not overwrite the values of already
@@ -161,8 +164,11 @@
         :param value: the value for the key
         :param timeout: the cache timeout for the key or the default
                         timeout if not specified.
+        :returns: Same as :meth:`set`, but also ``False`` for already
+                  existing keys.
+        :rtype: boolean
         """
-        pass
+        return True

     def set_many(self, mapping, timeout=None):
         """Sets multiple keys and values from a mapping.
@@ -170,24 +176,32 @@
         :param mapping: a mapping with the keys/values to set.
         :param timeout: the cache timeout for the key (if not specified,
                         it uses the default timeout).
+        :returns: Whether all given keys have been set.
+        :rtype: boolean
         """
+        rv = True
         for key, value in _items(mapping):
-            self.set(key, value, timeout)
+            if not self.set(key, value, timeout):
+                rv = False
+        return rv

     def delete_many(self, *keys):
         """Deletes multiple keys at once.

         :param keys: The function accepts multiple keys as positional
                      arguments.
+        :returns: Whether all given keys have been deleted.
+        :rtype: boolean
         """
-        for key in keys:
-            self.delete(key)
+        return all(self.delete(key) for key in keys)

     def clear(self):
         """Clears the cache.  Keep in mind that not all caches support
         completely clearing the cache.
+
+        :returns: Whether the cache has been cleared.
+        :rtype: boolean
         """
-        pass
+        return True

     def inc(self, key, delta=1):
         """Increments the value of a key by `delta`.  If the key does
@@ -197,8 +211,10 @@

         :param key: the key to increment.
         :param delta: the delta to add.
+        :returns: The new value or ``None`` for backend errors.
         """
-        self.set(key, (self.get(key) or 0) + delta)
+        value = (self.get(key) or 0) + delta
+        return value if self.set(key, value) else None

     def dec(self, key, delta=1):
         """Decrements the value of a key by `delta`.  If the key does
@@ -208,8 +224,10 @@

         :param key: the key to increment.
         :param delta: the delta to subtract.
+        :returns: The new value or `None` for backend errors.
         """
-        self.set(key, (self.get(key) or 0) - delta)
+        value = (self.get(key) or 0) - delta
+        return value if self.set(key, value) else None


 class NullCache(BaseCache):
@@ -241,14 +259,20 @@
     def _prune(self):
         if len(self._cache) > self._threshold:
             now = time()
+            toremove = []
             for idx, (key, (expires, _)) in enumerate(self._cache.items()):
                 if expires <= now or idx % 3 == 0:
-                    self._cache.pop(key, None)
+                    toremove.append(key)
+            for key in toremove:
+                self._cache.pop(key, None)

     def get(self, key):
-        expires, value = self._cache.get(key, (0, None))
-        if expires > time():
-            return pickle.loads(value)
+        try:
+            expires, value = self._cache[key]
+            if expires > time():
+                return pickle.loads(value)
+        except (KeyError, pickle.PickleError):
+            return None

     def set(self, key, value, timeout=None):
         if timeout is None:
@@ -256,21 +280,24 @@
         self._prune()
         self._cache[key] = (time() + timeout, pickle.dumps(value,
                                                            pickle.HIGHEST_PROTOCOL))
+        return True

     def add(self, key, value, timeout=None):
         if timeout is None:
             timeout = self.default_timeout
-        if len(self._cache) > self._threshold:
-            self._prune()
+        self._prune()
         item = (time() + timeout, pickle.dumps(value,
                                                pickle.HIGHEST_PROTOCOL))
+        if key in self._cache:
+            return False
         self._cache.setdefault(key, item)
+        return True

     def delete(self, key):
-        self._cache.pop(key, None)
+        return self._cache.pop(key, None) is not None


-_test_memcached_key = re.compile(br'[^\x00-\x21\xff]{1,250}$').match
+_test_memcached_key = re.compile(r'[^\x00-\x21\xff]{1,250}$').match


 class MemcachedCache(BaseCache):
     """A cache that uses memcached as backend.
@@ -280,6 +307,13 @@
     event that a tuple/list is passed, Werkzeug tries to import the best
     available memcache library.

+    This cache looks into the following packages/modules to find bindings for
+    memcached:
+
+        - ``pylibmc``
+        - ``google.appengine.api.memcached``
+        - ``memcached``
+
     Implementation notes:  This cache backend works around some limitations in
     memcached to simplify the interface.  For example unicode keys are encoded
     to utf-8 on the fly.  Methods such as :meth:`~BaseCache.get_dict` return
@@ -311,15 +345,21 @@
             # client.
             self._client = servers

-        self.key_prefix = to_bytes(key_prefix)
+        self.key_prefix = to_native(key_prefix)

-    def get(self, key):
-        if isinstance(key, text_type):
-            key = key.encode('utf-8')
+    def _normalize_key(self, key):
+        key = to_native(key, 'utf-8')
         if self.key_prefix:
             key = self.key_prefix + key
+        return key
+
+    def _normalize_timeout(self, timeout):
+        return int(time()) + timeout
+
+    def get(self, key):
+        key = self._normalize_key(key)
         # memcached doesn't support keys longer than that.  Because often
-        # checks for so long keys can occour because it's tested from user
+        # checks for so long keys can occur because it's tested from user
         # submitted data etc we fail silently for getting.
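The new ``_normalize_key`` and ``_normalize_timeout`` helpers introduced above centralize logic that was previously duplicated in every ``MemcachedCache`` method. A standalone sketch of the same two ideas — ``normalize_key``, ``normalize_timeout``, and ``KEY_PREFIX`` are hypothetical names, Python 3 ``str`` semantics are assumed, and the absolute-timestamp conversion presumably sidesteps memcached's rule that relative expirations above 30 days are interpreted as Unix timestamps:

```python
import time

KEY_PREFIX = 'myapp-'  # hypothetical application-wide prefix


def normalize_key(key, key_prefix=KEY_PREFIX):
    # One place to coerce keys to native strings and apply the prefix,
    # mirroring the role _normalize_key plays in the diff above.
    if isinstance(key, bytes):
        key = key.decode('utf-8')
    return key_prefix + key if key_prefix else key


def normalize_timeout(timeout):
    # Mirror of _normalize_timeout: turn a relative timeout in seconds
    # into an absolute Unix timestamp before handing it to the backend.
    return int(time.time()) + timeout


print(normalize_key(b'user:1'))  # -> myapp-user:1
```

Doing the conversion once, up front, means ``get``, ``set``, ``delete`` and friends can no longer drift apart in how they encode and prefix keys.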
         if _test_memcached_key(key):
             return self._client.get(key)
@@ -328,13 +368,9 @@
         key_mapping = {}
         have_encoded_keys = False
         for key in keys:
-            if isinstance(key, unicode):
-                encoded_key = key.encode('utf-8')
+            encoded_key = self._normalize_key(key)
+            if not isinstance(key, str):
                 have_encoded_keys = True
-            else:
-                encoded_key = key
-            if self.key_prefix:
-                encoded_key = self.key_prefix + encoded_key
             if _test_memcached_key(key):
                 key_mapping[encoded_key] = key
         d = rv = self._client.get_multi(key_mapping.keys())
@@ -351,20 +387,16 @@
     def add(self, key, value, timeout=None):
         if timeout is None:
             timeout = self.default_timeout
-        if isinstance(key, text_type):
-            key = key.encode('utf-8')
-        if self.key_prefix:
-            key = self.key_prefix + key
-        self._client.add(key, value, timeout)
+        key = self._normalize_key(key)
+        timeout = self._normalize_timeout(timeout)
+        return self._client.add(key, value, timeout)

     def set(self, key, value, timeout=None):
         if timeout is None:
             timeout = self.default_timeout
-        if isinstance(key, text_type):
-            key = key.encode('utf-8')
-        if self.key_prefix:
-            key = self.key_prefix + key
-        self._client.set(key, value, timeout)
+        key = self._normalize_key(key)
+        timeout = self._normalize_timeout(timeout)
+        return self._client.set(key, value, timeout)

     def get_many(self, *keys):
         d = self.get_dict(*keys)
@@ -375,48 +407,36 @@
             timeout = self.default_timeout
         new_mapping = {}
         for key, value in _items(mapping):
-            if isinstance(key, text_type):
-                key = key.encode('utf-8')
-            if self.key_prefix:
-                key = self.key_prefix + key
+            key = self._normalize_key(key)
             new_mapping[key] = value
-        self._client.set_multi(new_mapping, timeout)
+
+        timeout = self._normalize_timeout(timeout)
+        failed_keys = self._client.set_multi(new_mapping, timeout)
+        return not failed_keys

     def delete(self, key):
-        if isinstance(key, unicode):
-            key = key.encode('utf-8')
-        if self.key_prefix:
-            key = self.key_prefix + key
+        key = self._normalize_key(key)
         if _test_memcached_key(key):
-            self._client.delete(key)
+            return self._client.delete(key)

     def delete_many(self, *keys):
         new_keys = []
         for key in keys:
-            if isinstance(key, unicode):
-                key = key.encode('utf-8')
-            if self.key_prefix:
-                key = self.key_prefix + key
+            key = self._normalize_key(key)
             if _test_memcached_key(key):
                 new_keys.append(key)
-        self._client.delete_multi(new_keys)
+        return self._client.delete_multi(new_keys)

     def clear(self):
-        self._client.flush_all()
+        return self._client.flush_all()

     def inc(self, key, delta=1):
-        if isinstance(key, unicode):
-            key = key.encode('utf-8')
-        if self.key_prefix:
-            key = self.key_prefix + key
-        self._client.incr(key, delta)
+        key = self._normalize_key(key)
+        return self._client.incr(key, delta)

     def dec(self, key, delta=1):
-        if isinstance(key, unicode):
-            key = key.encode('utf-8')
-        if self.key_prefix:
-            key = self.key_prefix + key
-        self._client.decr(key, delta)
+        key = self._normalize_key(key)
+        return self._client.decr(key, delta)

     def import_preferred_memcache_lib(self, servers):
         """Returns an initialized memcache client.  Used by the constructor."""
@@ -466,6 +486,9 @@
     .. versionchanged:: 0.8.3
        This cache backend now supports password authentication.

+    .. versionchanged:: 0.10
+       ``**kwargs`` is now passed to the redis object.
+
     :param host: address of the Redis server or an object which API is
                  compatible with the official Python Redis client (redis-py).
     :param port: port number on which Redis server listens for connections.
@@ -474,17 +497,23 @@
     :param default_timeout: the default timeout that is used if no timeout is
                             specified on :meth:`~BaseCache.set`.
     :param key_prefix: A prefix that should be added to all keys.
+
+    Any additional keyword arguments will be passed to ``redis.Redis``.
     """

     def __init__(self, host='localhost', port=6379, password=None,
-                 db=0, default_timeout=300, key_prefix=None):
+                 db=0, default_timeout=300, key_prefix=None, **kwargs):
         BaseCache.__init__(self, default_timeout)
         if isinstance(host, string_types):
             try:
                 import redis
             except ImportError:
                 raise RuntimeError('no redis module found')
-            self._client = redis.Redis(host=host, port=port, password=password, db=db)
+            if kwargs.get('decode_responses', None):
+                raise ValueError('decode_responses is not supported by '
+                                 'RedisCache.')
+            self._client = redis.Redis(host=host, port=port, password=password,
+                                       db=db, **kwargs)
         else:
             self._client = host
         self.key_prefix = key_prefix or ''
@@ -505,7 +534,10 @@
         if value is None:
             return None
         if value.startswith(b'!'):
-            return pickle.loads(value[1:])
+            try:
+                return pickle.loads(value[1:])
+            except pickle.PickleError:
+                return None
         try:
             return int(value)
         except ValueError:
@@ -524,15 +556,17 @@
         if timeout is None:
             timeout = self.default_timeout
         dump = self.dump_object(value)
-        self._client.setex(self.key_prefix + key, dump, timeout)
+        return self._client.setex(name=self.key_prefix + key,
+                                  value=dump, time=timeout)

     def add(self, key, value, timeout=None):
         if timeout is None:
             timeout = self.default_timeout
         dump = self.dump_object(value)
-        added = self._client.setnx(self.key_prefix + key, dump)
-        if added:
-            self._client.expire(self.key_prefix + key, timeout)
+        return (
+            self._client.setnx(name=self.key_prefix + key, value=dump) and
+            self._client.expire(name=self.key_prefix + key, time=timeout)
+        )

     def set_many(self, mapping, timeout=None):
         if timeout is None:
@@ -540,32 +574,34 @@
         pipe = self._client.pipeline()
         for key, value in _items(mapping):
             dump = self.dump_object(value)
-            pipe.setex(self.key_prefix + key, dump, timeout)
-        pipe.execute()
+            pipe.setex(name=self.key_prefix + key, value=dump, time=timeout)
+        return pipe.execute()

     def delete(self, key):
-        self._client.delete(self.key_prefix + key)
+        return self._client.delete(self.key_prefix + key)

     def delete_many(self, *keys):
         if not keys:
             return
         if self.key_prefix:
             keys = [self.key_prefix + key for key in keys]
-        self._client.delete(*keys)
+        return self._client.delete(*keys)

     def clear(self):
+        status = False
         if self.key_prefix:
             keys = self._client.keys(self.key_prefix + '*')
             if keys:
-                self._client.delete(*keys)
+                status = self._client.delete(*keys)
         else:
-            self._client.flushdb()
+            status = self._client.flushdb()
+        return status

     def inc(self, key, delta=1):
-        return self._client.incr(self.key_prefix + key, delta)
+        return self._client.incr(name=self.key_prefix + key, amount=delta)

     def dec(self, key, delta=1):
-        return self._client.decr(self.key_prefix + key, delta)
+        return self._client.decr(name=self.key_prefix + key, amount=delta)


 class FileSystemCache(BaseCache):
@@ -590,8 +626,12 @@
         self._path = cache_dir
         self._threshold = threshold
         self._mode = mode
-        if not os.path.exists(self._path):
+
+        try:
             os.makedirs(self._path)
+        except OSError as ex:
+            if ex.errno != errno.EEXIST:
+                raise

     def _list_dir(self):
         """return a list of (fully qualified) cache filenames
@@ -603,31 +643,25 @@
         entries = self._list_dir()
         if len(entries) > self._threshold:
             now = time()
-            for idx, fname in enumerate(entries):
-                remove = False
-                f = None
-                try:
-                    try:
-                        f = open(fname, 'rb')
+            try:
+                for idx, fname in enumerate(entries):
+                    remove = False
+                    with open(fname, 'rb') as f:
                         expires = pickle.load(f)
-                        remove = expires <= now or idx % 3 == 0
-                    finally:
-                        if f is not None:
-                            f.close()
-                except Exception:
-                    pass
-                if remove:
-                    try:
+                    remove = expires <= now or idx % 3 == 0
+
+                    if remove:
                         os.remove(fname)
-                    except (IOError, OSError):
-                        pass
+            except (IOError, OSError):
+                pass

     def clear(self):
         for fname in self._list_dir():
             try:
                 os.remove(fname)
             except (IOError, OSError):
-                pass
+                return False
+        return True

     def _get_filename(self, key):
         if isinstance(key, text_type):
@@ -638,20 +672,20 @@
     def get(self, key):
         filename = self._get_filename(key)
         try:
-            f = open(filename, 'rb')
-            try:
+            with open(filename, 'rb') as f:
                 if pickle.load(f) >= time():
                     return pickle.load(f)
-            finally:
-                f.close()
-            os.remove(filename)
-        except Exception:
+                else:
+                    os.remove(filename)
+                    return None
+        except (IOError, OSError, pickle.PickleError):
             return None

     def add(self, key, value, timeout=None):
         filename = self._get_filename(key)
         if not os.path.exists(filename):
-            self.set(key, value, timeout)
+            return self.set(key, value, timeout)
+        return False

     def set(self, key, value, timeout=None):
         if timeout is None:
@@ -661,19 +695,20 @@
         try:
             fd, tmp = tempfile.mkstemp(suffix=self._fs_transaction_suffix,
                                        dir=self._path)
-            f = os.fdopen(fd, 'wb')
-            try:
+            with os.fdopen(fd, 'wb') as f:
                 pickle.dump(int(time() + timeout), f, 1)
                 pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
-            finally:
-                f.close()
             rename(tmp, filename)
             os.chmod(filename, self._mode)
         except (IOError, OSError):
-            pass
+            return False
+        else:
+            return True

     def delete(self, key):
         try:
             os.remove(self._get_filename(key))
         except (IOError, OSError):
-            pass
+            return False
+        else:
+            return True
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/contrib/fixers.py python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/fixers.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/contrib/fixers.py 2013-06-13 10:25:35.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/fixers.py 2015-03-26 14:11:47.000000000 +0000
@@ -94,7 +94,11 @@
 class ProxyFix(object):
     """This middleware can be applied to add HTTP proxy support to an
     application that was not designed with HTTP proxies in mind.  It
-    sets `REMOTE_ADDR`, `HTTP_HOST` from `X-Forwarded` headers.
+    sets `REMOTE_ADDR`, `HTTP_HOST` from `X-Forwarded` headers.  While
+    Werkzeug-based applications already can use
+    :py:func:`werkzeug.wsgi.get_host` to retrieve the current host even if
+    behind proxy setups, this middleware can be used for applications which
+    access the WSGI environment directly.
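The expanded ``ProxyFix`` docstring describes rewriting ``REMOTE_ADDR`` and ``HTTP_HOST`` from ``X-Forwarded-*`` headers. A simplified sketch of that idea as a plain WSGI wrapper — ``proxy_fix`` here is a hypothetical stand-in, not Werkzeug's implementation, and it trusts the headers unconditionally, which is only safe when the app sits behind a proxy you control:

```python
def proxy_fix(app, num_proxies=1):
    """Wrap a WSGI app so proxy headers overwrite the environ.

    Sketch only: trusts X-Forwarded-For / X-Forwarded-Host blindly.
    """
    def wrapped(environ, start_response):
        forwarded_for = environ.get('HTTP_X_FORWARDED_FOR', '')
        addrs = [a.strip() for a in forwarded_for.split(',') if a.strip()]
        if len(addrs) >= num_proxies:
            # Each trusted proxy appends one address; the real client is
            # num_proxies entries from the end of the list.
            environ['REMOTE_ADDR'] = addrs[-num_proxies]
        host = environ.get('HTTP_X_FORWARDED_HOST')
        if host:
            environ['HTTP_HOST'] = host
        return app(environ, start_response)
    return wrapped


def app(environ, start_response):
    # Trivial app that just echoes the address it believes it is serving.
    return [environ['REMOTE_ADDR'].encode('utf-8')]


env = {'REMOTE_ADDR': '10.0.0.1',            # address of the proxy
       'HTTP_X_FORWARDED_FOR': '203.0.113.5',
       'HTTP_X_FORWARDED_HOST': 'example.com'}
body = proxy_fix(app, num_proxies=1)(env, lambda *a: None)
assert body == [b'203.0.113.5']
```

This is also why the docstring stresses setting ``num_proxies`` correctly: trusting more entries of ``X-Forwarded-For`` than you have proxies lets a client spoof its own address.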
    If you have more than one proxy server in front of your app, set
    `num_proxies` accordingly.
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/contrib/iterio.py python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/iterio.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/contrib/iterio.py 2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/iterio.py 2015-03-26 14:11:47.000000000 +0000
@@ -305,7 +305,10 @@
         nl_pos = self._buf.find(_newline(self._buf), self.pos)
         buf = []
         try:
-            pos = self.pos
+            if self._buf is None:
+                pos = self.pos
+            else:
+                pos = len(self._buf)
             while nl_pos < 0:
                 item = next(self._gen)
                 local_pos = item.find(_newline(item))
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/contrib/jsrouting.py python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/jsrouting.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/contrib/jsrouting.py 2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/contrib/jsrouting.py 2015-03-18 23:08:06.000000000 +0000
@@ -26,13 +26,13 @@
 def render_template(name_parts, rules, converters):
     result = u''
     if name_parts:
-        for idx in xrange(0, len(name_parts) - 1):
+        for idx in range(0, len(name_parts) - 1):
             name = u'.'.join(name_parts[:idx + 1])
             result += u"if (typeof %s === 'undefined') %s = {}\n" % (name, name)
         result += '%s = ' % '.'.join(name_parts)
     result += """(function (server_name, script_name, subdomain, url_scheme) {
-    var converters = %(converters)s;
-    var rules = $rules;
+    var converters = [%(converters)s];
+    var rules = %(rules)s;
     function in_array(array, value) {
         if (array.indexOf != undefined) {
             return array.indexOf(value) != -1;
@@ -163,7 +163,9 @@
                + '/' + lstrip(rv.path, '/');
         }
     };
-})""" % {'converters': u', '.join(converters)}
+})""" % {'converters': u', '.join(converters),
+         'rules': rules}
+
     return result
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/datastructures.py python-werkzeug-0.10.4+dfsg1/werkzeug/datastructures.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/datastructures.py 2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/datastructures.py 2015-03-26 14:11:47.000000000 +0000
@@ -131,7 +131,7 @@
     def __repr__(self):
         return '%s(%s)' % (
             self.__class__.__name__,
-            dict.__repr__(self),
+            list.__repr__(self),
         )
@@ -827,7 +827,7 @@
 def _options_header_vkw(value, kw):
     return dump_options_header(value, dict((k.replace('_', '-'), v)
-                                          for k, v in kw.items()))
+                                           for k, v in kw.items()))


 def _unicodify_header_value(value):
@@ -1354,11 +1354,20 @@
             rv.extend(d.getlist(key, type))
         return rv

-    def keys(self):
+    def _keys_impl(self):
+        """This function exists so __len__ can be implemented more efficiently,
+        saving one list creation from an iterator.
+
+        Using this for Python 2's ``dict.keys`` behavior would be useless since
+        `dict.keys` in Python 2 returns a list, while we have a set here.
+        """
         rv = set()
         for d in self.dicts:
-            rv.update(d.keys())
-        return iter(rv)
+            rv.update(iterkeys(d))
+        return rv
+
+    def keys(self):
+        return iter(self._keys_impl())

     __iter__ = keys
@@ -1406,7 +1415,7 @@
         return rv

     def __len__(self):
-        return len(self.keys())
+        return len(self._keys_impl())

     def __contains__(self, key):
         for d in self.dicts:
@@ -1650,7 +1659,8 @@
         for client_item, quality in self:
             if quality <= best_quality:
                 break
-            if self._value_matches(server_item, client_item):
+            if (self._value_matches(server_item, client_item)
+                    and quality > 0):
                 best_quality = quality
                 result = server_item
         return result
@@ -1831,9 +1841,11 @@
         return self.to_header()

     def __repr__(self):
-        return '<%s %r>' % (
+        return '<%s %s>' % (
             self.__class__.__name__,
-            self.to_header()
+            " ".join(
+                "%s=%r" % (k, v) for k, v in sorted(self.items())
+            ),
         )
@@ -2357,7 +2369,7 @@
     """Provides simple access to `WWW-Authenticate` headers."""

     #: list of keys that require quoting in the generated header
-    _require_quoting = frozenset(['domain', 'nonce', 'opaque', 'realm'])
+    _require_quoting = frozenset(['domain', 'nonce', 'opaque', 'realm', 'qop'])

     def __init__(self, auth_type=None, values=None, on_update=None):
         dict.__init__(self, values or ())
@@ -2599,6 +2611,7 @@

     def __nonzero__(self):
         return bool(self.filename)
+    __bool__ = __nonzero__

     def __getattr__(self, name):
         return getattr(self.stream, name)
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/debug/shared/jquery.js python-werkzeug-0.10.4+dfsg1/werkzeug/debug/shared/jquery.js
--- python-werkzeug-0.9.6+dfsg/werkzeug/debug/shared/jquery.js 2013-06-13 10:25:35.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/debug/shared/jquery.js 1970-01-01 00:00:00.000000000 +0000
@@ -1,167 +0,0 @@
-/*!
- * jQuery JavaScript Library v1.4.4
- * http://jquery.com/
- *
- * Copyright 2010, John Resig
- * Dual licensed under the MIT or GPL Version 2 licenses.
- * http://jquery.org/license
- *
- * Includes Sizzle.js
- * http://sizzlejs.com/
- * Copyright 2010, The Dojo Foundation
- * Released under the MIT, BSD, and GPL Licenses.
- *
- * Date: Thu Nov 11 19:04:53 2010 -0500
- */
[remaining deleted lines of minified jQuery 1.4.4 source elided]
";c.support.shrinkWrapBlocks=r.offsetWidth!==2}r.innerHTML="
t
";var A=r.getElementsByTagName("td");c.support.reliableHiddenOffsets=A[0].offsetHeight===0;A[0].style.display="";A[1].style.display="none";c.support.reliableHiddenOffsets=c.support.reliableHiddenOffsets&&A[0].offsetHeight===0;r.innerHTML="";t.body.removeChild(r).style.display= -"none"});a=function(r){var A=t.createElement("div");r="on"+r;var C=r in A;if(!C){A.setAttribute(r,"return;");C=typeof A[r]==="function"}return C};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=f=h=null}})();var ra={},Ja=/^(?:\{.*\}|\[.*\])$/;c.extend({cache:{},uuid:0,expando:"jQuery"+c.now(),noData:{embed:true,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:true},data:function(a,b,d){if(c.acceptData(a)){a=a==E?ra:a;var e=a.nodeType,f=e?a[c.expando]:null,h= -c.cache;if(!(e&&!f&&typeof b==="string"&&d===B)){if(e)f||(a[c.expando]=f=++c.uuid);else h=a;if(typeof b==="object")if(e)h[f]=c.extend(h[f],b);else c.extend(h,b);else if(e&&!h[f])h[f]={};a=e?h[f]:h;if(d!==B)a[b]=d;return typeof b==="string"?a[b]:a}}},removeData:function(a,b){if(c.acceptData(a)){a=a==E?ra:a;var d=a.nodeType,e=d?a[c.expando]:a,f=c.cache,h=d?f[e]:e;if(b){if(h){delete h[b];d&&c.isEmptyObject(h)&&c.removeData(a)}}else if(d&&c.support.deleteExpando)delete a[c.expando];else if(a.removeAttribute)a.removeAttribute(c.expando); -else if(d)delete f[e];else for(var l in a)delete a[l]}},acceptData:function(a){if(a.nodeName){var b=c.noData[a.nodeName.toLowerCase()];if(b)return!(b===true||a.getAttribute("classid")!==b)}return true}});c.fn.extend({data:function(a,b){var d=null;if(typeof a==="undefined"){if(this.length){var e=this[0].attributes,f;d=c.data(this[0]);for(var h=0,l=e.length;h-1)return true;return false},val:function(a){if(!arguments.length){var b=this[0];if(b){if(c.nodeName(b,"option")){var d=b.attributes.value;return!d||d.specified?b.value:b.text}if(c.nodeName(b,"select")){var e=b.selectedIndex;d=[];var f=b.options;b=b.type==="select-one"; -if(e<0)return null;var 
h=b?e:0;for(e=b?e+1:f.length;h=0;else if(c.nodeName(this,"select")){var A=c.makeArray(r);c("option",this).each(function(){this.selected=c.inArray(c(this).val(),A)>=0});if(!A.length)this.selectedIndex=-1}else this.value=r}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true}, -attr:function(a,b,d,e){if(!a||a.nodeType===3||a.nodeType===8)return B;if(e&&b in c.attrFn)return c(a)[b](d);e=a.nodeType!==1||!c.isXMLDoc(a);var f=d!==B;b=e&&c.props[b]||b;var h=Ta.test(b);if((b in a||a[b]!==B)&&e&&!h){if(f){b==="type"&&Ua.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed");if(d===null)a.nodeType===1&&a.removeAttribute(b);else a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&& -b.specified?b.value:Va.test(a.nodeName)||Wa.test(a.nodeName)&&a.href?0:B;return a[b]}if(!c.support.style&&e&&b==="style"){if(f)a.style.cssText=""+d;return a.style.cssText}f&&a.setAttribute(b,""+d);if(!a.attributes[b]&&a.hasAttribute&&!a.hasAttribute(b))return B;a=!c.support.hrefNormalized&&e&&h?a.getAttribute(b,2):a.getAttribute(b);return a===null?B:a}});var X=/\.(.*)$/,ia=/^(?:textarea|input|select)$/i,La=/\./g,Ma=/ /g,Xa=/[^\w\s.|`]/g,Ya=function(a){return a.replace(Xa,"\\$&")},ua={focusin:0,focusout:0}; -c.event={add:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(c.isWindow(a)&&a!==E&&!a.frameElement)a=E;if(d===false)d=U;else if(!d)return;var f,h;if(d.handler){f=d;d=f.handler}if(!d.guid)d.guid=c.guid++;if(h=c.data(a)){var l=a.nodeType?"events":"__events__",k=h[l],o=h.handle;if(typeof k==="function"){o=k.handle;k=k.events}else if(!k){a.nodeType||(h[l]=h=function(){});h.events=k={}}if(!o)h.handle=o=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(o.elem, -arguments):B};o.elem=a;b=b.split(" ");for(var 
x=0,r;l=b[x++];){h=f?c.extend({},f):{handler:d,data:e};if(l.indexOf(".")>-1){r=l.split(".");l=r.shift();h.namespace=r.slice(0).sort().join(".")}else{r=[];h.namespace=""}h.type=l;if(!h.guid)h.guid=d.guid;var A=k[l],C=c.event.special[l]||{};if(!A){A=k[l]=[];if(!C.setup||C.setup.call(a,e,r,o)===false)if(a.addEventListener)a.addEventListener(l,o,false);else a.attachEvent&&a.attachEvent("on"+l,o)}if(C.add){C.add.call(a,h);if(!h.handler.guid)h.handler.guid= -d.guid}A.push(h);c.event.global[l]=true}a=null}}},global:{},remove:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(d===false)d=U;var f,h,l=0,k,o,x,r,A,C,J=a.nodeType?"events":"__events__",w=c.data(a),I=w&&w[J];if(w&&I){if(typeof I==="function"){w=I;I=I.events}if(b&&b.type){d=b.handler;b=b.type}if(!b||typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(f in I)c.event.remove(a,f+b)}else{for(b=b.split(" ");f=b[l++];){r=f;k=f.indexOf(".")<0;o=[];if(!k){o=f.split(".");f=o.shift();x=RegExp("(^|\\.)"+ -c.map(o.slice(0).sort(),Ya).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(A=I[f])if(d){r=c.event.special[f]||{};for(h=e||0;h=0){a.type=f=f.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[f]&&c.each(c.cache,function(){this.events&&this.events[f]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType=== -8)return B;a.result=B;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(e=d.nodeType?c.data(d,"handle"):(c.data(d,"__events__")||{}).handle)&&e.apply(d,b);e=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+f]&&d["on"+f].apply(d,b)===false){a.result=false;a.preventDefault()}}catch(h){}if(!a.isPropagationStopped()&&e)c.event.trigger(a,b,e,true);else if(!a.isDefaultPrevented()){var l;e=a.target;var k=f.replace(X,""),o=c.nodeName(e,"a")&&k=== 
-"click",x=c.event.special[k]||{};if((!x._default||x._default.call(d,a)===false)&&!o&&!(e&&e.nodeName&&c.noData[e.nodeName.toLowerCase()])){try{if(e[k]){if(l=e["on"+k])e["on"+k]=null;c.event.triggered=true;e[k]()}}catch(r){}if(l)e["on"+k]=l;c.event.triggered=false}}},handle:function(a){var b,d,e,f;d=[];var h=c.makeArray(arguments);a=h[0]=c.event.fix(a||E.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive;if(!b){e=a.type.split(".");a.type=e.shift();d=e.slice(0).sort();e=RegExp("(^|\\.)"+ -d.join("\\.(?:.*\\.)?")+"(\\.|$)")}a.namespace=a.namespace||d.join(".");f=c.data(this,this.nodeType?"events":"__events__");if(typeof f==="function")f=f.events;d=(f||{})[a.type];if(f&&d){d=d.slice(0);f=0;for(var l=d.length;f-1?c.map(a.options,function(e){return e.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},Z=function(a,b){var d=a.target,e,f;if(!(!ia.test(d.nodeName)||d.readOnly)){e=c.data(d,"_change_data");f=xa(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data",f);if(!(e===B||f===e))if(e!=null||f){a.type="change";a.liveFired= -B;return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:Z,beforedeactivate:Z,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return Z.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return Z.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a,"_change_data",xa(a))}},setup:function(){if(this.type=== -"file")return false;for(var a in V)c.event.add(this,a+".specialChange",V[a]);return ia.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return ia.test(this.nodeName)}};V=c.event.special.change.filters;V.focus=V.beforeactivate}t.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function 
d(e){e=c.event.fix(e);e.type=b;return c.event.trigger(e,null,e.target)}c.event.special[b]={setup:function(){ua[b]++===0&&t.addEventListener(a,d,true)},teardown:function(){--ua[b]=== -0&&t.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,e,f){if(typeof d==="object"){for(var h in d)this[b](h,e,d[h],f);return this}if(c.isFunction(e)||e===false){f=e;e=B}var l=b==="one"?c.proxy(f,function(o){c(this).unbind(o,l);return f.apply(this,arguments)}):f;if(d==="unload"&&b!=="one")this.one(d,e,f);else{h=0;for(var k=this.length;h0?this.bind(b,d,e):this.trigger(b)};if(c.attrFn)c.attrFn[b]=true});E.attachEvent&&!E.addEventListener&&c(E).bind("unload",function(){for(var a in c.cache)if(c.cache[a].handle)try{c.event.remove(c.cache[a].handle.elem)}catch(b){}}); -(function(){function a(g,i,n,m,p,q){p=0;for(var u=m.length;p0){F=y;break}}y=y[g]}m[p]=F}}}var d=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^\[\]]*\]|['"][^'"]*['"]|[^\[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,e=0,f=Object.prototype.toString,h=false,l=true;[0,0].sort(function(){l=false;return 0});var k=function(g,i,n,m){n=n||[];var p=i=i||t;if(i.nodeType!==1&&i.nodeType!==9)return[];if(!g||typeof g!=="string")return n;var q,u,y,F,M,N=true,O=k.isXML(i),D=[],R=g;do{d.exec("");if(q=d.exec(R)){R=q[3];D.push(q[1]);if(q[2]){F=q[3]; -break}}}while(q);if(D.length>1&&x.exec(g))if(D.length===2&&o.relative[D[0]])u=L(D[0]+D[1],i);else for(u=o.relative[D[0]]?[i]:k(D.shift(),i);D.length;){g=D.shift();if(o.relative[g])g+=D.shift();u=L(g,u)}else{if(!m&&D.length>1&&i.nodeType===9&&!O&&o.match.ID.test(D[0])&&!o.match.ID.test(D[D.length-1])){q=k.find(D.shift(),i,O);i=q.expr?k.filter(q.expr,q.set)[0]:q.set[0]}if(i){q=m?{expr:D.pop(),set:C(m)}:k.find(D.pop(),D.length===1&&(D[0]==="~"||D[0]==="+")&&i.parentNode?i.parentNode:i,O);u=q.expr?k.filter(q.expr, -q.set):q.set;if(D.length>0)y=C(u);else N=false;for(;D.length;){q=M=D.pop();if(o.relative[M])q=D.pop();else 
M="";if(q==null)q=i;o.relative[M](y,q,O)}}else y=[]}y||(y=u);y||k.error(M||g);if(f.call(y)==="[object Array]")if(N)if(i&&i.nodeType===1)for(g=0;y[g]!=null;g++){if(y[g]&&(y[g]===true||y[g].nodeType===1&&k.contains(i,y[g])))n.push(u[g])}else for(g=0;y[g]!=null;g++)y[g]&&y[g].nodeType===1&&n.push(u[g]);else n.push.apply(n,y);else C(y,n);if(F){k(F,p,n,m);k.uniqueSort(n)}return n};k.uniqueSort=function(g){if(w){h= -l;g.sort(w);if(h)for(var i=1;i0};k.find=function(g,i,n){var m;if(!g)return[];for(var p=0,q=o.order.length;p":function(g,i){var n,m=typeof i==="string",p=0,q=g.length;if(m&&!/\W/.test(i))for(i=i.toLowerCase();p=0))n||m.push(u);else if(n)i[q]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()},CHILD:function(g){if(g[1]==="nth"){var i=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=i[1]+(i[2]||1)-0;g[3]=i[3]-0}g[0]=e++;return g},ATTR:function(g,i,n, -m,p,q){i=g[1].replace(/\\/g,"");if(!q&&o.attrMap[i])g[1]=o.attrMap[i];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,i,n,m,p){if(g[1]==="not")if((d.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=k(g[3],null,null,i);else{g=k.filter(g[3],i,n,true^p);n||m.push.apply(m,g);return false}else if(o.match.POS.test(g[0])||o.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled=== -true},checked:function(g){return g.checked===true},selected:function(g){return 
g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,i,n){return!!k(n[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)},text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"=== -g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}},setFilters:{first:function(g,i){return i===0},last:function(g,i,n,m){return i===m.length-1},even:function(g,i){return i%2===0},odd:function(g,i){return i%2===1},lt:function(g,i,n){return in[3]-0},nth:function(g,i,n){return n[3]- -0===i},eq:function(g,i,n){return n[3]-0===i}},filter:{PSEUDO:function(g,i,n,m){var p=i[1],q=o.filters[p];if(q)return q(g,n,i,m);else if(p==="contains")return(g.textContent||g.innerText||k.getText([g])||"").indexOf(i[3])>=0;else if(p==="not"){i=i[3];n=0;for(m=i.length;n=0}},ID:function(g,i){return g.nodeType===1&&g.getAttribute("id")===i},TAG:function(g,i){return i==="*"&&g.nodeType===1||g.nodeName.toLowerCase()=== -i},CLASS:function(g,i){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(i)>-1},ATTR:function(g,i){var n=i[1];n=o.attrHandle[n]?o.attrHandle[n](g):g[n]!=null?g[n]:g.getAttribute(n);var m=n+"",p=i[2],q=i[4];return n==null?p==="!=":p==="="?m===q:p==="*="?m.indexOf(q)>=0:p==="~="?(" "+m+" ").indexOf(q)>=0:!q?m&&n!==false:p==="!="?m!==q:p==="^="?m.indexOf(q)===0:p==="$="?m.substr(m.length-q.length)===q:p==="|="?m===q||m.substr(0,q.length+1)===q+"-":false},POS:function(g,i,n,m){var p=o.setFilters[i[2]]; -if(p)return p(g,n,i,m)}}},x=o.match.POS,r=function(g,i){return"\\"+(i-0+1)},A;for(A in 
o.match){o.match[A]=RegExp(o.match[A].source+/(?![^\[]*\])(?![^\(]*\))/.source);o.leftMatch[A]=RegExp(/(^(?:.|\r|\n)*?)/.source+o.match[A].source.replace(/\\(\d+)/g,r))}var C=function(g,i){g=Array.prototype.slice.call(g,0);if(i){i.push.apply(i,g);return i}return g};try{Array.prototype.slice.call(t.documentElement.childNodes,0)}catch(J){C=function(g,i){var n=0,m=i||[];if(f.call(g)==="[object Array]")Array.prototype.push.apply(m, -g);else if(typeof g.length==="number")for(var p=g.length;n";n.insertBefore(g,n.firstChild);if(t.getElementById(i)){o.find.ID=function(m,p,q){if(typeof p.getElementById!=="undefined"&&!q)return(p=p.getElementById(m[1]))?p.id===m[1]||typeof p.getAttributeNode!=="undefined"&&p.getAttributeNode("id").nodeValue===m[1]?[p]:B:[]};o.filter.ID=function(m,p){var q=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&q&&q.nodeValue===p}}n.removeChild(g); -n=g=null})();(function(){var g=t.createElement("div");g.appendChild(t.createComment(""));if(g.getElementsByTagName("*").length>0)o.find.TAG=function(i,n){var m=n.getElementsByTagName(i[1]);if(i[1]==="*"){for(var p=[],q=0;m[q];q++)m[q].nodeType===1&&p.push(m[q]);m=p}return m};g.innerHTML="";if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")o.attrHandle.href=function(i){return i.getAttribute("href",2)};g=null})();t.querySelectorAll&& -function(){var g=k,i=t.createElement("div");i.innerHTML="

";if(!(i.querySelectorAll&&i.querySelectorAll(".TEST").length===0)){k=function(m,p,q,u){p=p||t;m=m.replace(/\=\s*([^'"\]]*)\s*\]/g,"='$1']");if(!u&&!k.isXML(p))if(p.nodeType===9)try{return C(p.querySelectorAll(m),q)}catch(y){}else if(p.nodeType===1&&p.nodeName.toLowerCase()!=="object"){var F=p.getAttribute("id"),M=F||"__sizzle__";F||p.setAttribute("id",M);try{return C(p.querySelectorAll("#"+M+" "+m),q)}catch(N){}finally{F|| -p.removeAttribute("id")}}return g(m,p,q,u)};for(var n in g)k[n]=g[n];i=null}}();(function(){var g=t.documentElement,i=g.matchesSelector||g.mozMatchesSelector||g.webkitMatchesSelector||g.msMatchesSelector,n=false;try{i.call(t.documentElement,"[test!='']:sizzle")}catch(m){n=true}if(i)k.matchesSelector=function(p,q){q=q.replace(/\=\s*([^'"\]]*)\s*\]/g,"='$1']");if(!k.isXML(p))try{if(n||!o.match.PSEUDO.test(q)&&!/!=/.test(q))return i.call(p,q)}catch(u){}return k(q,null,null,[p]).length>0}})();(function(){var g= -t.createElement("div");g.innerHTML="
";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){o.order.splice(1,0,"CLASS");o.find.CLASS=function(i,n,m){if(typeof n.getElementsByClassName!=="undefined"&&!m)return n.getElementsByClassName(i[1])};g=null}}})();k.contains=t.documentElement.contains?function(g,i){return g!==i&&(g.contains?g.contains(i):true)}:t.documentElement.compareDocumentPosition? -function(g,i){return!!(g.compareDocumentPosition(i)&16)}:function(){return false};k.isXML=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false};var L=function(g,i){for(var n,m=[],p="",q=i.nodeType?[i]:i;n=o.match.PSEUDO.exec(g);){p+=n[0];g=g.replace(o.match.PSEUDO,"")}g=o.relative[g]?g+"*":g;n=0;for(var u=q.length;n0)for(var h=d;h0},closest:function(a,b){var d=[],e,f,h=this[0];if(c.isArray(a)){var l,k={},o=1;if(h&&a.length){e=0;for(f=a.length;e-1:c(h).is(e))d.push({selector:l,elem:h,level:o})}h= -h.parentNode;o++}}return d}l=cb.test(a)?c(a,b||this.context):null;e=0;for(f=this.length;e-1:c.find.matchesSelector(h,a)){d.push(h);break}else{h=h.parentNode;if(!h||!h.ownerDocument||h===b)break}d=d.length>1?c.unique(d):d;return this.pushStack(d,"closest",a)},index:function(a){if(!a||typeof a==="string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){var d=typeof a==="string"?c(a,b||this.context): -c.makeArray(a),e=c.merge(this.get(),d);return this.pushStack(!d[0]||!d[0].parentNode||d[0].parentNode.nodeType===11||!e[0]||!e[0].parentNode||e[0].parentNode.nodeType===11?e:c.unique(e))},andSelf:function(){return this.add(this.prevObject)}});c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode",d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a, 
-2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a, -b){c.fn[a]=function(d,e){var f=c.map(this,b,d);Za.test(a)||(e=d);if(e&&typeof e==="string")f=c.filter(e,f);f=this.length>1?c.unique(f):f;if((this.length>1||ab.test(e))&&$a.test(a))f=f.reverse();return this.pushStack(f,a,bb.call(arguments).join(","))}});c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return b.length===1?c.find.matchesSelector(b[0],a)?[b[0]]:[]:c.find.matches(a,b)},dir:function(a,b,d){var e=[];for(a=a[b];a&&a.nodeType!==9&&(d===B||a.nodeType!==1||!c(a).is(d));){a.nodeType===1&& -e.push(a);a=a[b]}return e},nth:function(a,b,d){b=b||1;for(var e=0;a;a=a[d])if(a.nodeType===1&&++e===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var za=/ jQuery\d+="(?:\d+|null)"/g,$=/^\s+/,Aa=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,Ba=/<([\w:]+)/,db=/\s]+\/)>/g,P={option:[1, -""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]};P.optgroup=P.option;P.tbody=P.tfoot=P.colgroup=P.caption=P.thead;P.th=P.td;if(!c.support.htmlSerialize)P._default=[1,"div
","
"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d= -c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==B)return this.empty().append((this[0]&&this[0].ownerDocument||t).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this}, -wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})}, -prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b, -this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,e;(e=this[d])!=null;d++)if(!a||c.filter(a,[e]).length){if(!b&&e.nodeType===1){c.cleanData(e.getElementsByTagName("*"));c.cleanData([e])}e.parentNode&&e.parentNode.removeChild(e)}return 
this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild); -return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,e=this.ownerDocument;if(!d){d=e.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(za,"").replace(fb,'="$1">').replace($,"")],e)[0]}else return this.cloneNode(true)});if(a===true){na(this,b);na(this.find("*"),b.find("*"))}return b},html:function(a){if(a===B)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(za,""):null; -else if(typeof a==="string"&&!Ca.test(a)&&(c.support.leadingWhitespace||!$.test(a))&&!P[(Ba.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Aa,"<$1>");try{for(var b=0,d=this.length;b0||e.cacheable||this.length>1?h.cloneNode(true):h)}k.length&&c.each(k,Oa)}return this}});c.buildFragment=function(a,b,d){var e,f,h;b=b&&b[0]?b[0].ownerDocument||b[0]:t;if(a.length===1&&typeof a[0]==="string"&&a[0].length<512&&b===t&&!Ca.test(a[0])&&(c.support.checkClone||!Da.test(a[0]))){f=true;if(h=c.fragments[a[0]])if(h!==1)e=h}if(!e){e=b.createDocumentFragment();c.clean(a,b,e,d)}if(f)c.fragments[a[0]]=h?e:1;return{fragment:e,cacheable:f}};c.fragments={};c.each({appendTo:"append", -prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var e=[];d=c(d);var f=this.length===1&&this[0].parentNode;if(f&&f.nodeType===11&&f.childNodes.length===1&&d.length===1){d[b](this[0]);return this}else{f=0;for(var h=d.length;f0?this.clone(true):this).get();c(d[f])[b](l);e=e.concat(l)}return this.pushStack(e,a,d.selector)}}});c.extend({clean:function(a,b,d,e){b=b||t;if(typeof b.createElement==="undefined")b=b.ownerDocument|| -b[0]&&b[0].ownerDocument||t;for(var f=[],h=0,l;(l=a[h])!=null;h++){if(typeof l==="number")l+="";if(l){if(typeof 
l==="string"&&!eb.test(l))l=b.createTextNode(l);else if(typeof l==="string"){l=l.replace(Aa,"<$1>");var k=(Ba.exec(l)||["",""])[1].toLowerCase(),o=P[k]||P._default,x=o[0],r=b.createElement("div");for(r.innerHTML=o[1]+l+o[2];x--;)r=r.lastChild;if(!c.support.tbody){x=db.test(l);k=k==="table"&&!x?r.firstChild&&r.firstChild.childNodes:o[1]===""&&!x?r.childNodes:[];for(o=k.length- -1;o>=0;--o)c.nodeName(k[o],"tbody")&&!k[o].childNodes.length&&k[o].parentNode.removeChild(k[o])}!c.support.leadingWhitespace&&$.test(l)&&r.insertBefore(b.createTextNode($.exec(l)[0]),r.firstChild);l=r.childNodes}if(l.nodeType)f.push(l);else f=c.merge(f,l)}}if(d)for(h=0;f[h];h++)if(e&&c.nodeName(f[h],"script")&&(!f[h].type||f[h].type.toLowerCase()==="text/javascript"))e.push(f[h].parentNode?f[h].parentNode.removeChild(f[h]):f[h]);else{f[h].nodeType===1&&f.splice.apply(f,[h+1,0].concat(c.makeArray(f[h].getElementsByTagName("script")))); -d.appendChild(f[h])}return f},cleanData:function(a){for(var b,d,e=c.cache,f=c.event.special,h=c.support.deleteExpando,l=0,k;(k=a[l])!=null;l++)if(!(k.nodeName&&c.noData[k.nodeName.toLowerCase()]))if(d=k[c.expando]){if((b=e[d])&&b.events)for(var o in b.events)f[o]?c.event.remove(k,o):c.removeEvent(k,o,b.handle);if(h)delete k[c.expando];else k.removeAttribute&&k.removeAttribute(c.expando);delete e[d]}}});var Ea=/alpha\([^)]*\)/i,gb=/opacity=([^)]*)/,hb=/-([a-z])/ig,ib=/([A-Z])/g,Fa=/^-?\d+(?:px)?$/i, -jb=/^-?\d/,kb={position:"absolute",visibility:"hidden",display:"block"},Pa=["Left","Right"],Qa=["Top","Bottom"],W,Ga,aa,lb=function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){if(arguments.length===2&&b===B)return this;return c.access(this,a,b,true,function(d,e,f){return f!==B?c.style(d,e,f):c.css(d,e)})};c.extend({cssHooks:{opacity:{get:function(a,b){if(b){var d=W(a,"opacity","opacity");return d===""?"1":d}else return a.style.opacity}}},cssNumber:{zIndex:true,fontWeight:true,opacity:true, 
-zoom:true,lineHeight:true},cssProps:{"float":c.support.cssFloat?"cssFloat":"styleFloat"},style:function(a,b,d,e){if(!(!a||a.nodeType===3||a.nodeType===8||!a.style)){var f,h=c.camelCase(b),l=a.style,k=c.cssHooks[h];b=c.cssProps[h]||h;if(d!==B){if(!(typeof d==="number"&&isNaN(d)||d==null)){if(typeof d==="number"&&!c.cssNumber[h])d+="px";if(!k||!("set"in k)||(d=k.set(a,d))!==B)try{l[b]=d}catch(o){}}}else{if(k&&"get"in k&&(f=k.get(a,false,e))!==B)return f;return l[b]}}},css:function(a,b,d){var e,f=c.camelCase(b), -h=c.cssHooks[f];b=c.cssProps[f]||f;if(h&&"get"in h&&(e=h.get(a,true,d))!==B)return e;else if(W)return W(a,b,f)},swap:function(a,b,d){var e={},f;for(f in b){e[f]=a.style[f];a.style[f]=b[f]}d.call(a);for(f in b)a.style[f]=e[f]},camelCase:function(a){return a.replace(hb,lb)}});c.curCSS=c.css;c.each(["height","width"],function(a,b){c.cssHooks[b]={get:function(d,e,f){var h;if(e){if(d.offsetWidth!==0)h=oa(d,b,f);else c.swap(d,kb,function(){h=oa(d,b,f)});if(h<=0){h=W(d,b,b);if(h==="0px"&&aa)h=aa(d,b,b); -if(h!=null)return h===""||h==="auto"?"0px":h}if(h<0||h==null){h=d.style[b];return h===""||h==="auto"?"0px":h}return typeof h==="string"?h:h+"px"}},set:function(d,e){if(Fa.test(e)){e=parseFloat(e);if(e>=0)return e+"px"}else return e}}});if(!c.support.opacity)c.cssHooks.opacity={get:function(a,b){return gb.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?parseFloat(RegExp.$1)/100+"":b?"1":""},set:function(a,b){var d=a.style;d.zoom=1;var e=c.isNaN(b)?"":"alpha(opacity="+b*100+")",f= -d.filter||"";d.filter=Ea.test(f)?f.replace(Ea,e):d.filter+" "+e}};if(t.defaultView&&t.defaultView.getComputedStyle)Ga=function(a,b,d){var e;d=d.replace(ib,"-$1").toLowerCase();if(!(b=a.ownerDocument.defaultView))return B;if(b=b.getComputedStyle(a,null)){e=b.getPropertyValue(d);if(e===""&&!c.contains(a.ownerDocument.documentElement,a))e=c.style(a,d)}return e};if(t.documentElement.currentStyle)aa=function(a,b){var 
 [minified jQuery source (werkzeug's bundled debug-console jQuery); the diff content for this file was mangled beyond recovery during extraction and is omitted]
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/debug/tbtools.py python-werkzeug-0.10.4+dfsg1/werkzeug/debug/tbtools.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/debug/tbtools.py	2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/debug/tbtools.py	2015-03-26 14:11:47.000000000 +0000
@@ -20,13 +20,14 @@
 from werkzeug.utils import cached_property, escape
 from werkzeug.debug.console import Console
-from werkzeug._compat import range_type, PY2, text_type, string_types
+from werkzeug._compat import range_type, PY2, text_type, string_types, \
+    to_native, to_unicode
 
-_coding_re = re.compile(r'coding[:=]\s*([-\w.]+)')
-_line_re = re.compile(r'^(.*?)$(?m)')
+_coding_re = re.compile(br'coding[:=]\s*([-\w.]+)')
+_line_re = re.compile(br'^(.*?)$(?m)')
 _funcdef_re = re.compile(r'^(\s*def\s)|(.*(?

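The tbtools.py hunk above switches `_coding_re` and `_line_re` from text to bytes patterns (`br'...'`) so they can run against raw source bytes before the file's encoding is known. A minimal sketch of the PEP 263 coding-cookie detection this enables; `detect_coding` is a hypothetical helper, not part of werkzeug:

```python
import re

# bytes pattern, as in the patched tbtools.py
_coding_re = re.compile(br'coding[:=]\s*([-\w.]+)')

def detect_coding(source_bytes, default='utf-8'):
    """Return the declared source encoding of raw bytes, if any.

    Per PEP 263, only the first two lines may carry a coding cookie.
    """
    for line in source_bytes.splitlines()[:2]:
        match = _coding_re.search(line)
        if match:
            return match.group(1).decode('ascii')
    return default

print(detect_coding(b"#!/usr/bin/env python\n# -*- coding: latin-1 -*-\nx = 1\n"))
# prints "latin-1"
```

Working on bytes here matters: decoding the file first would require already knowing the very encoding the cookie declares.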
If you followed a link '
-        'from a foreign page, please contact the author of this page.'
+        'The requested URL is no longer available on this server and there '
+        'is no forwarding address. If you followed a link from a foreign '
+        'page, please contact the author of this page.'
     )
@@ -536,17 +536,46 @@
     )
 
 
+class GatewayTimeout(HTTPException):
+    """*504* `Gateway Timeout`
+
+    Status code you should return if a connection to an upstream server
+    times out.
+    """
+    code = 504
+    description = (
+        'The connection to an upstream server timed out.'
+    )
+
+
+class HTTPVersionNotSupported(HTTPException):
+    """*505* `HTTP Version Not Supported`
+
+    The server does not support the HTTP protocol version used in the request.
+    """
+    code = 505
+    description = (
+        'The server does not support the HTTP protocol version used in the '
+        'request.'
+    )
+
+
 default_exceptions = {}
 __all__ = ['HTTPException']
 
 def _find_exceptions():
     for name, obj in iteritems(globals()):
         try:
-            if getattr(obj, 'code', None) is not None:
-                default_exceptions[obj.code] = obj
-                __all__.append(obj.__name__)
-        except TypeError: # pragma: no cover
+            is_http_exception = issubclass(obj, HTTPException)
+        except TypeError:
+            is_http_exception = False
+        if not is_http_exception or obj.code is None:
+            continue
+        __all__.append(obj.__name__)
+        old_obj = default_exceptions.get(obj.code, None)
+        if old_obj is not None and issubclass(obj, old_obj):
             continue
+        default_exceptions[obj.code] = obj
 _find_exceptions()
 del _find_exceptions
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/http.py python-werkzeug-0.10.4+dfsg1/werkzeug/http.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/http.py	2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/http.py	2015-03-26 14:11:47.000000000 +0000
@@ -37,9 +37,24 @@
     integer_types
 
-# incorrect
 _cookie_charset = 'latin1'
-_accept_re = re.compile(r'([^\s;,]+)(?:[^,]*?;\s*q=(\d*(?:\.\d+)?))?')
+# for explanation of "media-range", etc. see Sections 5.3.{1,2} of RFC 7231
+_accept_re = re.compile(
+    r'''(                       # media-range capturing-parenthesis
+          [^\s;,]+              # type/subtype
+          (?:[ \t]*;[ \t]*      # ";"
+            (?:                 # parameter non-capturing-parenthesis
+              [^\s;,q][^\s;,]*  # token that doesn't start with "q"
+            |                   # or
+              q[^\s;,=][^\s;,]* # token that is more than just "q"
+            )
+          )*                    # zero or more parameters
+        )                       # end of media-range
+        (?:[ \t]*;[ \t]*q=      # weight is a "q" parameter
+           (\d*(?:\.\d+)?)      # qvalue capturing-parentheses
+           [^,]*                # "extension" accept params: who cares?
+        )?                      # accept params are optional
+    ''', re.VERBOSE)
 _token_chars = frozenset("!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                          '^_`abcdefghijklmnopqrstuvwxyz|~')
 _etag_re = re.compile(r'([Ww]/)?(?:"(.*?)"|(.*?))(?:\s*,\s*|$)')
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/__init__.py python-werkzeug-0.10.4+dfsg1/werkzeug/__init__.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/__init__.py	2014-06-07 10:34:33.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/__init__.py	2015-03-26 15:49:54.000000000 +0000
@@ -20,7 +20,7 @@
 from werkzeug._compat import iteritems
 
 # the version.  Usually set automatically by a script.
-__version__ = '0.9.6'
+__version__ = '0.10.4'
 
 # This import magic raises concerns quite often which is why the implementation
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/_internal.py python-werkzeug-0.10.4+dfsg1/werkzeug/_internal.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/_internal.py	2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/_internal.py	2015-03-18 23:08:06.000000000 +0000
@@ -337,8 +337,8 @@
     if b'.' in domain:
         return domain
     raise ValueError(
-        'Setting \'domain\' for a cookie on a server running localy (ex: '
-        'localhost) is not supportted by complying browsers. You should '
+        'Setting \'domain\' for a cookie on a server running locally (ex: '
+        'localhost) is not supported by complying browsers. You should '
         'have something like: \'127.0.0.1 localhost dev.localhost\' on '
         'your hosts file and then point your server to run on '
         '\'dev.localhost\' and also set \'domain\' for \'dev.localhost\''
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/local.py python-werkzeug-0.10.4+dfsg1/werkzeug/local.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/local.py	2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/local.py	2015-03-26 14:11:47.000000000 +0000
@@ -203,7 +203,7 @@
     scoped sessions) to the Werkzeug locals.
 
     .. versionchanged:: 0.7
-       Yu can pass a different ident function to the local manager that
+       You can pass a different ident function to the local manager that
        will then be propagated to all the locals passed to the
        constructor.
     """
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/posixemulation.py python-werkzeug-0.10.4+dfsg1/werkzeug/posixemulation.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/posixemulation.py	2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/posixemulation.py	2015-03-18 23:08:06.000000000 +0000
@@ -22,6 +22,7 @@
 import errno
 import time
 import random
+from ._compat import to_unicode
 
 can_rename_open_file = False
 
@@ -37,10 +38,8 @@
     _MoveFileEx = ctypes.windll.kernel32.MoveFileExW
 
     def _rename(src, dst):
-        if not isinstance(src, unicode):
-            src = unicode(src, sys.getfilesystemencoding())
-        if not isinstance(dst, unicode):
-            dst = unicode(dst, sys.getfilesystemencoding())
+        src = to_unicode(src, sys.getfilesystemencoding())
+        dst = to_unicode(dst, sys.getfilesystemencoding())
         if _rename_atomic(src, dst):
             return True
         retry = 0
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/_reloader.py python-werkzeug-0.10.4+dfsg1/werkzeug/_reloader.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/_reloader.py	1970-01-01 00:00:00.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/_reloader.py	2015-03-26 15:33:58.000000000 +0000
@@ -0,0 +1,233 @@
+import os
+import sys
+import time
+import subprocess
+import threading
+from itertools import chain
+
+from werkzeug._internal import _log
+from werkzeug._compat import PY2, iteritems, text_type
+
+
+def _iter_module_files():
+    """This iterates over all relevant Python files.  It goes through all
+    loaded files from modules, all files in folders of already loaded modules
+    as well as all files reachable through a package.
+    """
+    # The list call is necessary on Python 3 in case the module
+    # dictionary modifies during iteration.
+    for module in list(sys.modules.values()):
+        if module is None:
+            continue
+        filename = getattr(module, '__file__', None)
+        if filename:
+            old = None
+            while not os.path.isfile(filename):
+                old = filename
+                filename = os.path.dirname(filename)
+                if filename == old:
+                    break
+            else:
+                if filename[-4:] in ('.pyc', '.pyo'):
+                    filename = filename[:-1]
+                yield filename
+
+
+def _find_observable_paths(extra_files=None):
+    """Finds all paths that should be observed."""
+    rv = set(os.path.abspath(x) for x in sys.path)
+    for filename in extra_files or ():
+        rv.add(os.path.dirname(os.path.abspath(filename)))
+    for module in list(sys.modules.values()):
+        fn = getattr(module, '__file__', None)
+        if fn is None:
+            continue
+        fn = os.path.abspath(fn)
+        rv.add(os.path.dirname(fn))
+    return _find_common_roots(rv)
+
+
+def _find_common_roots(paths):
+    """Out of some paths it finds the common roots that need monitoring."""
+    paths = [x.split(os.path.sep) for x in paths]
+    root = {}
+    for chunks in sorted(paths, key=len, reverse=True):
+        node = root
+        for chunk in chunks:
+            node = node.setdefault(chunk, {})
+        node.clear()
+
+    rv = set()
+    def _walk(node, path):
+        for prefix, child in iteritems(node):
+            _walk(child, path + (prefix,))
+        if not node:
+            rv.add('/'.join(path))
+    _walk(root, ())
+    return rv
+
+
+class ReloaderLoop(object):
+    name = None
+
+    # monkeypatched by testsuite. wrapping with `staticmethod` is required in
+    # case time.sleep has been replaced by a non-c function (e.g. by
+    # `eventlet.monkey_patch`) before we get here
+    _sleep = staticmethod(time.sleep)
+
+    def __init__(self, extra_files=None, interval=1):
+        self.extra_files = set(os.path.abspath(x)
+                               for x in extra_files or ())
+        self.interval = interval
+
+    def run(self):
+        pass
+
+    def restart_with_reloader(self):
+        """Spawn a new Python interpreter with the same arguments as this one,
+        but running the reloader thread.
+        """
+        while 1:
+            _log('info', ' * Restarting with %s' % self.name)
+            args = [sys.executable] + sys.argv
+            new_environ = os.environ.copy()
+            new_environ['WERKZEUG_RUN_MAIN'] = 'true'
+
+            # a weird bug on windows. sometimes unicode strings end up in the
+            # environment and subprocess.call does not like this, encode them
+            # to latin1 and continue.
+            if os.name == 'nt' and PY2:
+                for key, value in iteritems(new_environ):
+                    if isinstance(value, text_type):
+                        new_environ[key] = value.encode('iso-8859-1')
+
+            exit_code = subprocess.call(args, env=new_environ)
+            if exit_code != 3:
+                return exit_code
+
+    def trigger_reload(self, filename):
+        filename = os.path.abspath(filename)
+        _log('info', ' * Detected change in %r, reloading' % filename)
+        sys.exit(3)
+
+
+class StatReloaderLoop(ReloaderLoop):
+    name = 'stat'
+
+    def run(self):
+        mtimes = {}
+        while 1:
+            for filename in chain(_iter_module_files(), self.extra_files):
+                try:
+                    mtime = os.stat(filename).st_mtime
+                except OSError:
+                    continue
+
+                old_time = mtimes.get(filename)
+                if old_time is None:
+                    mtimes[filename] = mtime
+                    continue
+                elif mtime > old_time:
+                    self.trigger_reload(filename)
+            self._sleep(self.interval)
+
+
+class WatchdogReloaderLoop(ReloaderLoop):
+
+    def __init__(self, *args, **kwargs):
+        ReloaderLoop.__init__(self, *args, **kwargs)
+        from watchdog.observers import Observer
+        from watchdog.events import FileSystemEventHandler
+        self.observable_paths = set()
+
+        def _check_modification(filename):
+            if filename in self.extra_files:
+                self.trigger_reload(filename)
+            dirname = os.path.dirname(filename)
+            if dirname.startswith(tuple(self.observable_paths)):
+                if filename.endswith(('.pyc', '.pyo')):
+                    self.trigger_reload(filename[:-1])
+                elif filename.endswith('.py'):
+                    self.trigger_reload(filename)
+
+        class _CustomHandler(FileSystemEventHandler):
+            def on_created(self, event):
+                _check_modification(event.src_path)
+            def on_modified(self, event):
+                _check_modification(event.src_path)
+
+        reloader_name = Observer.__name__.lower()
+        if reloader_name.endswith('observer'):
+            reloader_name = reloader_name[:-8]
+        reloader_name += ' reloader'
+
+        self.name = reloader_name
+
+        self.observer_class = Observer
+        self.event_handler = _CustomHandler()
+        self.should_reload = False
+
+    def trigger_reload(self, filename):
+        # This is called inside an event handler, which means we can't throw
+        # SystemExit here. https://github.com/gorakhargosh/watchdog/issues/294
+        self.should_reload = True
+        ReloaderLoop.trigger_reload(self, filename)
+
+    def run(self):
+        watches = {}
+        observer = self.observer_class()
+        observer.start()
+
+        while not self.should_reload:
+            to_delete = set(watches)
+            paths = _find_observable_paths(self.extra_files)
+            for path in paths:
+                if path not in watches:
+                    try:
+                        watches[path] = observer.schedule(
+                            self.event_handler, path, recursive=True)
+                    except OSError:
+                        # "Path is not a directory". We could filter out
+                        # those paths beforehand, but that would cause
+                        # additional stat calls.
+                        watches[path] = None
+                to_delete.discard(path)
+            for path in to_delete:
+                watch = watches.pop(path, None)
+                if watch is not None:
+                    observer.unschedule(watch)
+            self.observable_paths = paths
+            self._sleep(self.interval)
+
+        sys.exit(3)
+
+
+reloader_loops = {
+    'stat': StatReloaderLoop,
+    'watchdog': WatchdogReloaderLoop,
+}
+
+try:
+    __import__('watchdog.observers')
+except ImportError:
+    reloader_loops['auto'] = reloader_loops['stat']
+else:
+    reloader_loops['auto'] = reloader_loops['watchdog']
+
+
+def run_with_reloader(main_func, extra_files=None, interval=1,
+                      reloader_type='auto'):
+    """Run the given function in an independent python interpreter."""
+    import signal
+    reloader = reloader_loops[reloader_type](extra_files, interval)
+    signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
+    try:
+        if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
+            t = threading.Thread(target=main_func, args=())
+            t.setDaemon(True)
+            t.start()
+            reloader.run()
+        else:
+            sys.exit(reloader.restart_with_reloader())
+    except KeyboardInterrupt:
+        pass
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/routing.py python-werkzeug-0.10.4+dfsg1/werkzeug/routing.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/routing.py	2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/routing.py	2015-03-26 15:33:58.000000000 +0000
@@ -96,8 +96,11 @@
     :license: BSD, see LICENSE for more details.
 """
 import re
+import uuid
 import posixpath
+
 from pprint import pformat
+from threading import Lock
 
 from werkzeug.urls import url_encode, url_quote, url_join
 from werkzeug.utils import redirect, format_string
@@ -561,14 +564,35 @@
         self._trace = self._converters = self._regex = self._weights = None
 
     def empty(self):
-        """Return an unbound copy of this rule.  This can be useful if you
-        want to reuse an already bound URL for another map."""
+        """
+        Return an unbound copy of this rule.
+
+        This can be useful if you want to reuse an already bound URL for
+        another map.  See ``get_empty_kwargs`` to override what keyword
+        arguments are provided to the new copy.
+        """
+        return type(self)(self.rule, **self.get_empty_kwargs())
+
+    def get_empty_kwargs(self):
+        """
+        Provides kwargs for instantiating an empty copy with ``empty()``.
+
+        Use this method to provide custom keyword arguments to the subclass of
+        ``Rule`` when calling ``some_rule.empty()``.  Helpful when the subclass
+        has custom keyword arguments that are needed at instantiation.
+
+        Must return a ``dict`` that will be provided as kwargs to the new
+        instance of ``Rule``, following the initial ``self.rule`` value which
+        is always provided as the first, required positional argument.
+        """
         defaults = None
         if self.defaults:
             defaults = dict(self.defaults)
-        return Rule(self.rule, defaults, self.subdomain, self.methods,
-                    self.build_only, self.endpoint, self.strict_slashes,
-                    self.redirect_to, self.alias, self.host)
+        return dict(defaults=defaults, subdomain=self.subdomain,
+                    methods=self.methods, build_only=self.build_only,
+                    endpoint=self.endpoint, strict_slashes=self.strict_slashes,
+                    redirect_to=self.redirect_to, alias=self.alias,
+                    host=self.host)
 
     def get_rules(self, map):
         yield self
@@ -602,7 +626,7 @@
 
         .. versionadded:: 0.9
     """
-        if not converter_name in self.map.converters:
+        if converter_name not in self.map.converters:
             raise LookupError('the converter %r does not exist' % converter_name)
         return self.map.converters[converter_name](self.map, *args, **kwargs)
@@ -967,6 +991,25 @@
         NumberConverter.__init__(self, map, 0, min, max)
 
 
+class UUIDConverter(BaseConverter):
+    """This converter only accepts UUID strings::
+
+        Rule('/object/<uuid:identifier>')
+
+    .. versionadded:: 0.10
+
+    :param map: the :class:`Map`.
+    """
+    regex = r'[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-' \
+            r'[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}'
+
+    def to_python(self, value):
+        return uuid.UUID(value)
+
+    def to_url(self, value):
+        return str(value)
+
+
 #: the default converter mapping for the map.
 DEFAULT_CONVERTERS = {
     'default': UnicodeConverter,
@@ -974,7 +1017,8 @@
     'any': AnyConverter,
     'path': PathConverter,
     'int': IntegerConverter,
-    'float': FloatConverter
+    'float': FloatConverter,
+    'uuid': UUIDConverter,
 }
@@ -1023,6 +1067,7 @@
         self._rules = []
         self._rules_by_endpoint = {}
         self._remap = True
+        self._remap_lock = Lock()
 
         self.default_subdomain = default_subdomain
         self.charset = charset
@@ -1206,7 +1251,13 @@
         """Called before matching and building to keep the compiled rules
         in the correct order after things changed.
         """
-        if self._remap:
+        if not self._remap:
+            return
+
+        with self._remap_lock:
+            if not self._remap:
+                return
+
             self._rules.sort(key=lambda x: x.match_compare_key())
             for rules in itervalues(self._rules_by_endpoint):
                 rules.sort(key=lambda x: x.build_compare_key())
@@ -1378,8 +1429,10 @@
             query_args = self.query_args
         method = (method or self.default_method).upper()
 
-        path = u'%s|/%s' % (self.map.host_matching and self.server_name or
-                            self.subdomain, path_info.lstrip('/'))
+        path = u'%s|%s' % (
+            self.map.host_matching and self.server_name or self.subdomain,
+            path_info and '/%s' % path_info.lstrip('/')
+        )
 
         have_match_for = set()
         for rule in self.map._rules:
@@ -1414,7 +1467,7 @@
                 else:
                     redirect_url = rule.redirect_to(self, **rv)
                 raise RequestRedirect(str(url_join('%s://%s%s%s' % (
-                    self.url_scheme,
+                    self.url_scheme or 'http',
                     self.subdomain and self.subdomain + '.' or '',
                     self.server_name,
                     self.script_name
@@ -1509,7 +1562,7 @@
         if query_args:
             suffix = '?' + self.encode_query_args(query_args)
         return str('%s://%s/%s%s' % (
-            self.url_scheme,
+            self.url_scheme or 'http',
             self.get_host(domain_part),
             posixpath.join(self.script_name[:-1].lstrip('/'),
                            path_info.lstrip('/')),
@@ -1581,6 +1634,13 @@
         >>> urls.build("index", {'q': 'My Searchstring'})
         '/?q=My+Searchstring'
 
+        When processing those additional values, lists are furthermore
+        interpreted as multiple values (as per
+        :py:class:`werkzeug.datastructures.MultiDict`):
+
+        >>> urls.build("index", {'q': ['a', 'b', 'c']})
+        '/?q=a&q=b&q=c'
+
         If a rule does not exist when building a `BuildError` exception is
         raised.
@@ -1596,7 +1656,9 @@
                        appended to the URL as query parameters.
         :param method: the HTTP method for the rule if there are
                        different URLs for different methods on the same
                       endpoint.
-        :param force_external: enforce full canonical external URLs.
+        :param force_external: enforce full canonical external URLs. If the URL
+                               scheme is not provided, this will generate
+                               a protocol-relative URL.
         :param append_unknown: unknown parameters are appended to the
                                generated URL as query string argument.
                                Disable this if you want the builder to
                                ignore those.
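The routing hunks above add a ``uuid`` converter and document list expansion in ``MapAdapter.build``; a small sketch of both, assuming werkzeug 0.10 or later is installed (the endpoint names are made up for illustration):

```python
from werkzeug.routing import Map, Rule

url_map = Map([
    Rule('/', endpoint='index'),
    Rule('/object/<uuid:identifier>', endpoint='object'),
])
urls = url_map.bind('example.com')  # hostname is only used for external URLs

# The new ``uuid`` converter validates the path segment and converts it
# to a ``uuid.UUID`` instance via ``to_python``.
endpoint, args = urls.match('/object/d5b0ee4c-47d2-4dcb-8a3c-683862a7e6ce')
print(endpoint, args['identifier'])

# Lists in the values dict are expanded to repeated query arguments,
# matching the doctest added to the ``build`` docstring.
print(urls.build('index', {'q': ['a', 'b', 'c']}))  # '/?q=a&q=b&q=c'
```

A malformed UUID simply fails to match the rule, so the 404 handling of the application applies instead of a conversion error.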
@@ -1604,7 +1666,7 @@ self.map.update() if values: if isinstance(values, MultiDict): - valueiter = values.iteritems(multi=True) + valueiter = iteritems(values, multi=True) else: valueiter = iteritems(values) values = dict((k, v) for k, v in valueiter if v is not None) @@ -1623,8 +1685,8 @@ (self.map.host_matching and host == self.server_name) or (not self.map.host_matching and domain_part == self.subdomain)): return str(url_join(self.script_name, './' + path.lstrip('/'))) - return str('%s://%s%s/%s' % ( - self.url_scheme, + return str('%s//%s%s/%s' % ( + self.url_scheme + ':' if self.url_scheme else '', host, self.script_name[:-1], path.lstrip('/') diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/script.py python-werkzeug-0.10.4+dfsg1/werkzeug/script.py --- python-werkzeug-0.9.6+dfsg/werkzeug/script.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/script.py 2015-03-26 14:11:47.000000000 +0000 @@ -194,8 +194,7 @@ def print_usage(actions): """Print the usage information. 
(Help screen)""" - actions = actions.items() - actions.sort() + actions = sorted(iteritems(actions)) print('usage: %s []' % basename(sys.argv[0])) print(' %s --help' % basename(sys.argv[0])) print() @@ -310,7 +309,9 @@ """Start a new development server.""" from werkzeug.serving import run_simple app = app_factory() - run_simple(hostname, port, app, reloader, debugger, evalex, - extra_files, 1, threaded, processes, + run_simple(hostname, port, app, + use_reloader=reloader, use_debugger=debugger, + use_evalex=evalex, extra_files=extra_files, + reloader_interval=1, threaded=threaded, processes=processes, static_files=static_files, ssl_context=ssl_context) return action diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/security.py python-werkzeug-0.10.4+dfsg1/werkzeug/security.py --- python-werkzeug-0.9.6+dfsg/werkzeug/security.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/security.py 2015-03-26 14:11:47.000000000 +0000 @@ -48,28 +48,31 @@ def pbkdf2_hex(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None): - """Like :func:`pbkdf2_bin` but returns a hex encoded string. + """Like :func:`pbkdf2_bin`, but returns a hex-encoded string. .. versionadded:: 0.9 :param data: the data to derive. :param salt: the salt for the derivation. :param iterations: the number of iterations. - :param keylen: the length of the resulting key. If not provided + :param keylen: the length of the resulting key. If not provided, the digest size will be used. :param hashfunc: the hash function to use. This can either be the - string name of a known hash function or a function + string name of a known hash function, or a function from the hashlib module. Defaults to sha1. 
""" rv = pbkdf2_bin(data, salt, iterations, keylen, hashfunc) return to_native(codecs.encode(rv, 'hex_codec')) +_has_native_pbkdf2 = hasattr(hashlib, 'pbkdf2_hmac') + + def pbkdf2_bin(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None): """Returns a binary digest for the PBKDF2 hash algorithm of `data` - with the given `salt`. It iterates `iterations` time and produces a - key of `keylen` bytes. By default SHA-1 is used as hash function, + with the given `salt`. It iterates `iterations` times and produces a + key of `keylen` bytes. By default, SHA-1 is used as hash function; a different hashlib `hashfunc` can be provided. .. versionadded:: 0.9 @@ -87,8 +90,20 @@ hashfunc = _hash_funcs[hashfunc] elif not hashfunc: hashfunc = hashlib.sha1 + data = to_bytes(data) salt = to_bytes(salt) - mac = hmac.HMAC(to_bytes(data), None, hashfunc) + + # If we're on Python with pbkdf2_hmac we can try to use it for + # compatible digests. + if _has_native_pbkdf2: + _test_hash = hashfunc() + if hasattr(_test_hash, 'name') and \ + _test_hash.name in _hash_funcs: + return hashlib.pbkdf2_hmac(_test_hash.name, + data, salt, iterations, + keylen) + + mac = hmac.HMAC(data, None, hashfunc) if not keylen: keylen = mac.digest_size def _pseudorandom(x, mac=mac): @@ -109,7 +124,7 @@ """This function compares strings in somewhat constant time. This requires that the length of at least one string is known in advance. - Returns `True` if the two strings are equal or `False` if they are not. + Returns `True` if the two strings are equal, or `False` if they are not. .. 
versionadded:: 0.7 """ @@ -138,7 +153,7 @@ def gen_salt(length): """Generate a random string of SALT_CHARS with specified ``length``.""" if length <= 0: - raise ValueError('requested salt of length <= 0') + raise ValueError('Salt length must be positive') return ''.join(_sys_rng.choice(SALT_CHARS) for _ in range_type(length)) @@ -204,11 +219,11 @@ pbkdf2:sha1:2000$salt$hash pbkdf2:sha1$salt$hash - :param password: the password to hash - :param method: the hash method to use (one that hashlib supports), can - optionally be in the format ``pbpdf2:[:iterations]`` + :param password: the password to hash. + :param method: the hash method to use (one that hashlib supports). Can + optionally be in the format ``pbkdf2:[:iterations]`` to enable PBKDF2. - :param salt_length: the length of the salt in letters + :param salt_length: the length of the salt in letters. """ salt = method != 'plain' and gen_salt(salt_length) or '' h, actual_method = _hash_internal(method, salt, password) @@ -223,8 +238,8 @@ Returns `True` if the password matched, `False` otherwise. :param pwhash: a hashed string like returned by - :func:`generate_password_hash` - :param password: the plaintext password to compare against the hash + :func:`generate_password_hash`. + :param password: the plaintext password to compare against the hash. 
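The `method$salt$hash` layout documented above (with `method` optionally in the form `pbkdf2:<digest>:<iterations>`) can be illustrated with a small parser; it mirrors the `pwhash.count('$') < 2` guard that `check_password_hash` uses, but the helper name is hypothetical:

```python
# Sketch of the password-hash string layout documented above:
# "method$salt$hash", e.g. "pbkdf2:sha1:2000$salt$hash".
def split_pwhash(pwhash):
    if pwhash.count('$') < 2:
        # same rejection rule as check_password_hash
        raise ValueError('invalid hash format')
    method, salt, hashval = pwhash.split('$', 2)
    return method, salt, hashval

print(split_pwhash('pbkdf2:sha1:2000$abcd$0123abcd'))
# → ('pbkdf2:sha1:2000', 'abcd', '0123abcd')
```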
""" if pwhash.count('$') < 2: return False diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/serving.py python-werkzeug-0.10.4+dfsg1/werkzeug/serving.py --- python-werkzeug-0.9.6+dfsg/werkzeug/serving.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/serving.py 2015-03-26 15:33:58.000000000 +0000 @@ -40,14 +40,19 @@ import os import socket import sys -import time +import ssl import signal -import subprocess -try: - import thread -except ImportError: - import _thread as thread + +def _get_openssl_crypto_module(): + try: + from OpenSSL import crypto + except ImportError: + raise TypeError('Using ad-hoc certificates requires the pyOpenSSL ' + 'library.') + else: + return crypto + try: from SocketServer import ThreadingMixIn, ForkingMixIn @@ -58,10 +63,9 @@ import werkzeug from werkzeug._internal import _log -from werkzeug._compat import iteritems, PY2, reraise, text_type, \ - wsgi_encoding_dance +from werkzeug._compat import reraise, wsgi_encoding_dance from werkzeug.urls import url_parse, url_unquote -from werkzeug.exceptions import InternalServerError, BadRequest +from werkzeug.exceptions import InternalServerError class WSGIRequestHandler(BaseHTTPRequestHandler, object): @@ -88,8 +92,7 @@ 'wsgi.multithread': self.server.multithread, 'wsgi.multiprocess': self.server.multiprocess, 'wsgi.run_once': False, - 'werkzeug.server.shutdown': - shutdown_server, + 'werkzeug.server.shutdown': shutdown_server, 'SERVER_SOFTWARE': self.server_version, 'REQUEST_METHOD': self.command, 'SCRIPT_NAME': '', @@ -270,14 +273,14 @@ def generate_adhoc_ssl_pair(cn=None): from random import random - from OpenSSL import crypto + crypto = _get_openssl_crypto_module() # pretty damn sure that this is not actually accepted by anyone if cn is None: cn = '*' cert = crypto.X509() - cert.set_serial_number(int(random() * sys.maxint)) + cert.set_serial_number(int(random() * sys.maxsize)) cert.gmtime_adj_notBefore(0) cert.gmtime_adj_notAfter(60 * 60 * 24 * 365) @@ -290,7 +293,7 
@@ issuer.O = 'Self-Signed' pkey = crypto.PKey() - pkey.generate_key(crypto.TYPE_RSA, 768) + pkey.generate_key(crypto.TYPE_RSA, 1024) cert.set_pubkey(pkey) cert.sign(pkey, 'md5') @@ -323,9 +326,9 @@ cert_file = base_path + '.crt' pkey_file = base_path + '.key' - with open(cert_file, 'w') as f: + with open(cert_file, 'wb') as f: f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert)) - with open(pkey_file, 'w') as f: + with open(pkey_file, 'wb') as f: f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey)) return cert_file, pkey_file @@ -333,48 +336,75 @@ def generate_adhoc_ssl_context(): """Generates an adhoc SSL context for the development server.""" - from OpenSSL import SSL + crypto = _get_openssl_crypto_module() + import tempfile + import atexit + cert, pkey = generate_adhoc_ssl_pair() - ctx = SSL.Context(SSL.SSLv23_METHOD) - ctx.use_privatekey(pkey) - ctx.use_certificate(cert) + cert_handle, cert_file = tempfile.mkstemp() + pkey_handle, pkey_file = tempfile.mkstemp() + atexit.register(os.remove, pkey_file) + atexit.register(os.remove, cert_file) + + os.write(cert_handle, crypto.dump_certificate(crypto.FILETYPE_PEM, cert)) + os.write(pkey_handle, crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey)) + os.close(cert_handle) + os.close(pkey_handle) + ctx = load_ssl_context(cert_file, pkey_file) return ctx -def load_ssl_context(cert_file, pkey_file): - """Loads an SSL context from a certificate and private key file.""" - from OpenSSL import SSL - ctx = SSL.Context(SSL.SSLv23_METHOD) - ctx.use_certificate_file(cert_file) - ctx.use_privatekey_file(pkey_file) +def load_ssl_context(cert_file, pkey_file=None, protocol=None): + """Loads SSL context from cert/private key files and optional protocol. + Many parameters are directly taken from the API of + :py:class:`ssl.SSLContext`. + + :param cert_file: Path of the certificate to use. + :param pkey_file: Path of the private key to use. If not given, the key + will be obtained from the certificate file. 
+ :param protocol: One of the ``PROTOCOL_*`` constants in the stdlib ``ssl`` + module. Defaults to ``PROTOCOL_SSLv23``. + """ + if protocol is None: + protocol = ssl.PROTOCOL_SSLv23 + ctx = _SSLContext(protocol) + ctx.load_cert_chain(cert_file, pkey_file) return ctx +class _SSLContext(object): + '''A dummy class with a small subset of Python3's ``ssl.SSLContext``, only + intended to be used with and by Werkzeug.''' + + def __init__(self, protocol): + self._protocol = protocol + self._certfile = None + self._keyfile = None + self._password = None + + def load_cert_chain(self, certfile, keyfile=None, password=None): + self._certfile = certfile + self._keyfile = keyfile or certfile + self._password = password + + def wrap_socket(self, sock, **kwargs): + return ssl.wrap_socket(sock, keyfile=self._keyfile, + certfile=self._certfile, + ssl_version=self._protocol, **kwargs) + + def is_ssl_error(error=None): """Checks if the given error (or the current one) is an SSL error.""" + exc_types = (ssl.SSLError,) + try: + from OpenSSL.SSL import Error + exc_types += (Error,) + except ImportError: + pass + if error is None: error = sys.exc_info()[1] - from OpenSSL import SSL - return isinstance(error, SSL.Error) - - -class _SSLConnectionFix(object): - """Wrapper around SSL connection to provide a working makefile().""" - - def __init__(self, con): - self._con = con - - def makefile(self, mode, bufsize): - return socket._fileobject(self._con, mode, bufsize) - - def __getattr__(self, attrib): - return getattr(self._con, attrib) - - def shutdown(self, arg=None): - try: - self._con.shutdown() - except Exception: - pass + return isinstance(error, exc_types) def select_ip_version(host, port): @@ -383,14 +413,14 @@ # and various operating systems. Probably this code also is # not supposed to work, but I can't come up with any other # ways to implement this. 
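On Python 3, the behavior that `load_ssl_context` and the `_SSLContext` shim above provide is available natively from the stdlib. A minimal sketch, assuming `ssl.PROTOCOL_TLS_SERVER` as a modern stand-in for the deprecated `PROTOCOL_SSLv23` named in the docstring (certificate paths would be supplied by the caller):

```python
import ssl

# Sketch of what load_ssl_context does where ssl.SSLContext already
# provides load_cert_chain natively.
def load_ssl_context_sketch(cert_file, pkey_file=None,
                            protocol=ssl.PROTOCOL_TLS_SERVER):
    ctx = ssl.SSLContext(protocol)
    # if pkey_file is None, the key is read from the certificate file,
    # matching the documented behavior above
    ctx.load_cert_chain(cert_file, pkey_file)
    return ctx
```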
- ##try: - ## info = socket.getaddrinfo(host, port, socket.AF_UNSPEC, - ## socket.SOCK_STREAM, 0, - ## socket.AI_PASSIVE) - ## if info: - ## return info[0][0] - ##except socket.gaierror: - ## pass + # try: + # info = socket.getaddrinfo(host, port, socket.AF_UNSPEC, + # socket.SOCK_STREAM, 0, + # socket.AI_PASSIVE) + # if info: + # return info[0][0] + # except socket.gaierror: + # pass if ':' in host and hasattr(socket, 'AF_INET6'): return socket.AF_INET6 return socket.AF_INET @@ -413,16 +443,12 @@ self.shutdown_signal = False if ssl_context is not None: - try: - from OpenSSL import tsafe - except ImportError: - raise TypeError('SSL is not available if the OpenSSL ' - 'library is not installed.') if isinstance(ssl_context, tuple): ssl_context = load_ssl_context(*ssl_context) if ssl_context == 'adhoc': ssl_context = generate_adhoc_ssl_context() - self.socket = tsafe.Connection(ssl_context, self.socket) + self.socket = ssl_context.wrap_socket(self.socket, + server_side=True) self.ssl_context = ssl_context else: self.ssl_context = None @@ -436,6 +462,8 @@ HTTPServer.serve_forever(self) except KeyboardInterrupt: pass + finally: + self.server_close() def handle_error(self, request, client_address): if self.passthrough_errors: @@ -445,8 +473,6 @@ def get_request(self): con, info = self.socket.accept() - if self.ssl_context is not None: - con = _SSLConnectionFix(con) return con, info @@ -486,147 +512,23 @@ passthrough_errors, ssl_context) -def _iter_module_files(): - # The list call is necessary on Python 3 in case the module - # dictionary modifies during iteration. 
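The address-family heuristic that `select_ip_version` falls back to above is simple enough to show in isolation: an IPv6 literal contains `:` while a dotted-quad IPv4 address does not. The helper name below is hypothetical:

```python
import socket

# The fallback heuristic from select_ip_version above: ':' in the host
# string indicates an IPv6 literal.
def pick_family(host):
    if ':' in host and hasattr(socket, 'AF_INET6'):
        return socket.AF_INET6
    return socket.AF_INET

print(pick_family('::1') == socket.AF_INET6)       # → True
print(pick_family('127.0.0.1') == socket.AF_INET)  # → True
```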
- for module in list(sys.modules.values()): - filename = getattr(module, '__file__', None) - if filename: - old = None - while not os.path.isfile(filename): - old = filename - filename = os.path.dirname(filename) - if filename == old: - break - else: - if filename[-4:] in ('.pyc', '.pyo'): - filename = filename[:-1] - yield filename - - -def _reloader_stat_loop(extra_files=None, interval=1): - """When this function is run from the main thread, it will force other - threads to exit when any modules currently loaded change. +def is_running_from_reloader(): + """Checks if the application is running from within the Werkzeug + reloader subprocess. - Copyright notice. This function is based on the autoreload.py from - the CherryPy trac which originated from WSGIKit which is now dead. - - :param extra_files: a list of additional files it should watch. + .. versionadded:: 0.10 """ - from itertools import chain - mtimes = {} - while 1: - for filename in chain(_iter_module_files(), extra_files or ()): - try: - mtime = os.stat(filename).st_mtime - except OSError: - continue - - old_time = mtimes.get(filename) - if old_time is None: - mtimes[filename] = mtime - continue - elif mtime > old_time: - _log('info', ' * Detected change in %r, reloading' % filename) - sys.exit(3) - time.sleep(interval) - - -def _reloader_inotify(extra_files=None, interval=None): - # Mutated by inotify loop when changes occur. 
- changed = [False] - - # Setup inotify watches - from pyinotify import WatchManager, Notifier - - # this API changed at one point, support both - try: - from pyinotify import EventsCodes as ec - ec.IN_ATTRIB - except (ImportError, AttributeError): - import pyinotify as ec - - wm = WatchManager() - mask = ec.IN_DELETE_SELF | ec.IN_MOVE_SELF | ec.IN_MODIFY | ec.IN_ATTRIB - - def signal_changed(event): - if changed[0]: - return - _log('info', ' * Detected change in %r, reloading' % event.path) - changed[:] = [True] - - for fname in extra_files or (): - wm.add_watch(fname, mask, signal_changed) - - # ... And now we wait... - notif = Notifier(wm) - try: - while not changed[0]: - # always reiterate through sys.modules, adding them - for fname in _iter_module_files(): - wm.add_watch(fname, mask, signal_changed) - notif.process_events() - if notif.check_events(timeout=interval): - notif.read_events() - # TODO Set timeout to something small and check parent liveliness - finally: - notif.stop() - sys.exit(3) - - -# currently we always use the stat loop reloader for the simple reason -# that the inotify one does not respond to added files properly. Also -# it's quite buggy and the API is a mess. -reloader_loop = _reloader_stat_loop - - -def restart_with_reloader(): - """Spawn a new Python interpreter with the same arguments as this one, - but running the reloader thread. - """ - while 1: - _log('info', ' * Restarting with reloader') - args = [sys.executable] + sys.argv - new_environ = os.environ.copy() - new_environ['WERKZEUG_RUN_MAIN'] = 'true' - - # a weird bug on windows. sometimes unicode strings end up in the - # environment and subprocess.call does not like this, encode them - # to latin1 and continue. 
- if os.name == 'nt' and PY2: - for key, value in iteritems(new_environ): - if isinstance(value, text_type): - new_environ[key] = value.encode('iso-8859-1') - - exit_code = subprocess.call(args, env=new_environ) - if exit_code != 3: - return exit_code - - -def run_with_reloader(main_func, extra_files=None, interval=1): - """Run the given function in an independent python interpreter.""" - import signal - signal.signal(signal.SIGTERM, lambda *args: sys.exit(0)) - if os.environ.get('WERKZEUG_RUN_MAIN') == 'true': - thread.start_new_thread(main_func, ()) - try: - reloader_loop(extra_files, interval) - except KeyboardInterrupt: - return - try: - sys.exit(restart_with_reloader()) - except KeyboardInterrupt: - pass + return os.environ.get('WERKZEUG_RUN_MAIN') == 'true' def run_simple(hostname, port, application, use_reloader=False, use_debugger=False, use_evalex=True, - extra_files=None, reloader_interval=1, threaded=False, - processes=1, request_handler=None, static_files=None, + extra_files=None, reloader_interval=1, + reloader_type='auto', threaded=False, processes=1, + request_handler=None, static_files=None, passthrough_errors=False, ssl_context=None): - """Start an application using wsgiref and with an optional reloader. This - wraps `wsgiref` to fix the wrong default reporting of the multithreaded - WSGI variable and adds optional multithreading and fork support. + """Start a WSGI application. Optional features include a reloader, + multithreading and fork support. This function has a command-line interface too:: @@ -646,6 +548,11 @@ .. versionadded:: 0.9 Added command-line interface. + .. versionadded:: 0.10 + Improved the reloader and added support for changing the backend + through the `reloader_type` parameter. See :ref:`reloader` + for more information. + :param hostname: The host for the application. eg: ``'localhost'`` :param port: The port for the server. 
eg: ``8080`` :param application: the WSGI application to execute @@ -657,6 +564,10 @@ additionally to the modules. For example configuration files. :param reloader_interval: the interval for the reloader in seconds. + :param reloader_type: the type of reloader to use. The default is + auto detection. Valid values are ``'stat'`` and + ``'watchdog'``. See :ref:`reloader` for more + information. :param threaded: should the process handle each request in a separate thread? :param processes: if greater than 1 then handle each request in a new process @@ -673,11 +584,11 @@ :param passthrough_errors: set this to `True` to disable the error catching. This means that the server will die on errors but it can be useful to hook debuggers in (pdb etc.) - :param ssl_context: an SSL context for the connection. Either an OpenSSL - context, a tuple in the form ``(cert_file, pkey_file)``, - the string ``'adhoc'`` if the server should - automatically create one, or `None` to disable SSL - (which is the default). + :param ssl_context: an SSL context for the connection. Either an + :class:`ssl.SSLContext`, a tuple in the form + ``(cert_file, pkey_file)``, the string ``'adhoc'`` if + the server should automatically create one, or ``None`` + to disable SSL (which is the default). """ if use_debugger: from werkzeug.debug import DebuggedApplication @@ -695,8 +606,9 @@ display_hostname = hostname != '*' and hostname or 'localhost' if ':' in display_hostname: display_hostname = '[%s]' % display_hostname - _log('info', ' * Running on %s://%s:%d/', ssl_context is None - and 'http' or 'https', display_hostname, port) + quit_msg = '(Press CTRL+C to quit)' + _log('info', ' * Running on %s://%s:%d/ %s', ssl_context is None + and 'http' or 'https', display_hostname, port, quit_msg) if use_reloader: # Create and destroy a socket so that any exceptions are raised before # we spawn a separate Python interpreter and lose this ability. 
@@ -705,10 +617,21 @@ test_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) test_socket.bind((hostname, port)) test_socket.close() - run_with_reloader(inner, extra_files, reloader_interval) + + from ._reloader import run_with_reloader + run_with_reloader(inner, extra_files, reloader_interval, + reloader_type) else: inner() + +def run_with_reloader(*args, **kwargs): + # People keep using undocumented APIs. Do not use this function + # please, we do not guarantee that it continues working. + from ._reloader import run_with_reloader + return run_with_reloader(*args, **kwargs) + + def main(): '''A simple command-line interface for :py:func:`run_simple`.''' @@ -716,7 +639,8 @@ import optparse from werkzeug.utils import import_string - parser = optparse.OptionParser(usage='Usage: %prog [options] app_module:app_object') + parser = optparse.OptionParser( + usage='Usage: %prog [options] app_module:app_object') parser.add_option('-b', '--bind', dest='address', help='The hostname:port the app should listen on.') parser.add_option('-d', '--debug', dest='use_debugger', diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/test.py python-werkzeug-0.10.4+dfsg1/werkzeug/test.py --- python-werkzeug-0.9.6+dfsg/werkzeug/test.py 2014-02-08 16:53:26.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/test.py 2015-03-26 14:11:47.000000000 +0000 @@ -379,9 +379,9 @@ def _get_content_type(self): ct = self.headers.get('Content-Type') if ct is None and not self._input_stream: - if self.method in ('POST', 'PUT', 'PATCH'): - if self._files: - return 'multipart/form-data' + if self._files: + return 'multipart/form-data' + elif self._form: return 'application/x-www-form-urlencoded' return None return ct @@ -680,6 +680,12 @@ raise RuntimeError('%r does not support redirect to ' 'external targets' % self.__class__) + status_code = int(response[1].split(None, 1)[0]) + if status_code == 307: + method = environ['REQUEST_METHOD'] + else: + method = 'GET' + # For redirect handling we 
temporarily disable the response # wrapper. This is not threadsafe but not a real concern # since the test client must not be shared anyways. @@ -688,7 +694,7 @@ try: return self.open(path=script_root, base_url=base_url, query_string=qs, as_tuple=True, - buffered=buffered) + buffered=buffered, method=method) finally: self.response_wrapper = old_response_wrapper @@ -743,12 +749,18 @@ or not follow_redirects: break new_location = response[2]['location'] + + method = 'GET' + if status_code == 307: + method = environ['REQUEST_METHOD'] + new_redirect_entry = (new_location, status_code) if new_redirect_entry in redirect_chain: raise ClientRedirectError('loop detected') redirect_chain.append(new_redirect_entry) environ, response = self.resolve_redirect(response, new_location, - environ, buffered=buffered) + environ, + buffered=buffered) if self.response_wrapper is not None: response = self.response_wrapper(*response) @@ -852,29 +864,29 @@ response[:] = [status, headers] return buffer.append - app_iter = app(environ, start_response) + app_rv = app(environ, start_response) + close_func = getattr(app_rv, 'close', None) + app_iter = iter(app_rv) # when buffering we emit the close call early and convert the # application iterator into a regular list if buffered: - close_func = getattr(app_iter, 'close', None) try: app_iter = list(app_iter) finally: if close_func is not None: close_func() - # otherwise we iterate the application iter until we have - # a response, chain the already received data with the already - # collected data and wrap it in a new `ClosingIterator` if - # we have a close callable. + # otherwise we iterate the application iter until we have a response, chain + # the already received data with the already collected data and wrap it in + # a new `ClosingIterator` if we need to restore a `close` callable from the + # original return value. 
else: while not response: buffer.append(next(app_iter)) if buffer: - close_func = getattr(app_iter, 'close', None) app_iter = chain(buffer, app_iter) - if close_func is not None: - app_iter = ClosingIterator(app_iter, close_func) + if close_func is not None and app_iter is not app_rv: + app_iter = ClosingIterator(app_iter, close_func) return app_iter, response[0], Headers(response[1]) diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/compat.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/compat.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/compat.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/compat.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,40 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.compat - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Ensure that old stuff does not break on update. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" -import unittest -import warnings -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.wrappers import Response -from werkzeug.test import create_environ - - -class CompatTestCase(WerkzeugTestCase): - - def test_old_imports(self): - from werkzeug.utils import Headers, MultiDict, CombinedMultiDict, \ - Headers, EnvironHeaders - from werkzeug.http import Accept, MIMEAccept, CharsetAccept, \ - LanguageAccept, ETags, HeaderSet, WWWAuthenticate, \ - Authorization - - def test_exposed_werkzeug_mod(self): - import werkzeug - for key in werkzeug.__all__: - # deprecated, skip it - if key in ('templates', 'Template'): - continue - getattr(werkzeug, key) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(CompatTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/cache.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/cache.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/cache.py 2014-06-07 09:21:49.000000000 +0000 +++ 
python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/cache.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,257 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.cache - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Tests the cache system - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" -import os -import time -import unittest -import tempfile -import shutil - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.contrib import cache - -try: - import redis - try: - from redis.exceptions import ConnectionError as RedisConnectionError - cache.RedisCache(key_prefix='werkzeug-test-case:')._client.set('test','connection') - except RedisConnectionError: - redis = None -except ImportError: - redis = None -try: - import pylibmc as memcache -except ImportError: - try: - from google.appengine.api import memcache - except ImportError: - try: - import memcache - except ImportError: - memcache = None - - -class SimpleCacheTestCase(WerkzeugTestCase): - - def test_get_dict(self): - c = cache.SimpleCache() - c.set('a', 'a') - c.set('b', 'b') - d = c.get_dict('a', 'b') - assert 'a' in d - assert 'a' == d['a'] - assert 'b' in d - assert 'b' == d['b'] - - def test_set_many(self): - c = cache.SimpleCache() - c.set_many({0: 0, 1: 1, 2: 4}) - assert c.get(2) == 4 - c.set_many((i, i*i) for i in range(3)) - assert c.get(2) == 4 - - -class FileSystemCacheTestCase(WerkzeugTestCase): - - def test_set_get(self): - tmp_dir = tempfile.mkdtemp() - try: - c = cache.FileSystemCache(cache_dir=tmp_dir) - for i in range(3): - c.set(str(i), i * i) - for i in range(3): - result = c.get(str(i)) - assert result == i * i - finally: - shutil.rmtree(tmp_dir) - - def test_filesystemcache_prune(self): - THRESHOLD = 13 - tmp_dir = tempfile.mkdtemp() - c = cache.FileSystemCache(cache_dir=tmp_dir, threshold=THRESHOLD) - for i in range(2 * THRESHOLD): - c.set(str(i), i) - cache_files = os.listdir(tmp_dir) - shutil.rmtree(tmp_dir) - assert len(cache_files) <= 
THRESHOLD - - - def test_filesystemcache_clear(self): - tmp_dir = tempfile.mkdtemp() - c = cache.FileSystemCache(cache_dir=tmp_dir) - c.set('foo', 'bar') - cache_files = os.listdir(tmp_dir) - assert len(cache_files) == 1 - c.clear() - cache_files = os.listdir(tmp_dir) - assert len(cache_files) == 0 - shutil.rmtree(tmp_dir) - - -class RedisCacheTestCase(WerkzeugTestCase): - - def make_cache(self): - return cache.RedisCache(key_prefix='werkzeug-test-case:') - - def teardown(self): - self.make_cache().clear() - - def test_compat(self): - c = self.make_cache() - c._client.set(c.key_prefix + 'foo', b'Awesome') - self.assert_equal(c.get('foo'), b'Awesome') - c._client.set(c.key_prefix + 'foo', b'42') - self.assert_equal(c.get('foo'), 42) - - def test_get_set(self): - c = self.make_cache() - c.set('foo', ['bar']) - assert c.get('foo') == ['bar'] - - def test_get_many(self): - c = self.make_cache() - c.set('foo', ['bar']) - c.set('spam', 'eggs') - assert c.get_many('foo', 'spam') == [['bar'], 'eggs'] - - def test_set_many(self): - c = self.make_cache() - c.set_many({'foo': 'bar', 'spam': ['eggs']}) - assert c.get('foo') == 'bar' - assert c.get('spam') == ['eggs'] - - def test_expire(self): - c = self.make_cache() - c.set('foo', 'bar', 1) - time.sleep(2) - assert c.get('foo') is None - - def test_add(self): - c = self.make_cache() - # sanity check that add() works like set() - c.add('foo', 'bar') - assert c.get('foo') == 'bar' - c.add('foo', 'qux') - assert c.get('foo') == 'bar' - - def test_delete(self): - c = self.make_cache() - c.add('foo', 'bar') - assert c.get('foo') == 'bar' - c.delete('foo') - assert c.get('foo') is None - - def test_delete_many(self): - c = self.make_cache() - c.add('foo', 'bar') - c.add('spam', 'eggs') - c.delete_many('foo', 'spam') - assert c.get('foo') is None - assert c.get('spam') is None - - def test_inc_dec(self): - c = self.make_cache() - c.set('foo', 1) - self.assert_equal(c.inc('foo'), 2) - self.assert_equal(c.dec('foo'), 1) - 
c.delete('foo') - - def test_true_false(self): - c = self.make_cache() - c.set('foo', True) - assert c.get('foo') == True - c.set('bar', False) - assert c.get('bar') == False - - -class MemcachedCacheTestCase(WerkzeugTestCase): - - def make_cache(self): - return cache.MemcachedCache(key_prefix='werkzeug-test-case:') - - def teardown(self): - self.make_cache().clear() - - def test_compat(self): - c = self.make_cache() - c._client.set(c.key_prefix + b'foo', 'bar') - self.assert_equal(c.get('foo'), 'bar') - - def test_get_set(self): - c = self.make_cache() - c.set('foo', 'bar') - self.assert_equal(c.get('foo'), 'bar') - - def test_get_many(self): - c = self.make_cache() - c.set('foo', 'bar') - c.set('spam', 'eggs') - self.assert_equal(c.get_many('foo', 'spam'), ['bar', 'eggs']) - - def test_set_many(self): - c = self.make_cache() - c.set_many({'foo': 'bar', 'spam': 'eggs'}) - self.assert_equal(c.get('foo'), 'bar') - self.assert_equal(c.get('spam'), 'eggs') - - def test_expire(self): - c = self.make_cache() - c.set('foo', 'bar', 1) - time.sleep(2) - self.assert_is_none(c.get('foo')) - - def test_add(self): - c = self.make_cache() - c.add('foo', 'bar') - self.assert_equal(c.get('foo'), 'bar') - c.add('foo', 'baz') - self.assert_equal(c.get('foo'), 'bar') - - def test_delete(self): - c = self.make_cache() - c.add('foo', 'bar') - self.assert_equal(c.get('foo'), 'bar') - c.delete('foo') - self.assert_is_none(c.get('foo')) - - def test_delete_many(self): - c = self.make_cache() - c.add('foo', 'bar') - c.add('spam', 'eggs') - c.delete_many('foo', 'spam') - self.assert_is_none(c.get('foo')) - self.assert_is_none(c.get('spam')) - - def test_inc_dec(self): - c = self.make_cache() - c.set('foo', 1) - # XXX: Is this an intended difference? 
- c.inc('foo') - self.assert_equal(c.get('foo'), 2) - c.dec('foo') - self.assert_equal(c.get('foo'), 1) - - def test_true_false(self): - c = self.make_cache() - c.set('foo', True) - self.assert_equal(c.get('foo'), True) - c.set('bar', False) - self.assert_equal(c.get('bar'), False) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(SimpleCacheTestCase)) - suite.addTest(unittest.makeSuite(FileSystemCacheTestCase)) - if redis is not None: - suite.addTest(unittest.makeSuite(RedisCacheTestCase)) - if memcache is not None: - suite.addTest(unittest.makeSuite(MemcachedCacheTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/fixers.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/fixers.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/fixers.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/fixers.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,193 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.fixers - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Server / Browser fixers. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.datastructures import ResponseCacheControl -from werkzeug.http import parse_cache_control_header - -from werkzeug.test import create_environ, Client -from werkzeug.wrappers import Request, Response -from werkzeug.contrib import fixers -from werkzeug.utils import redirect - - -@Request.application -def path_check_app(request): - return Response('PATH_INFO: %s\nSCRIPT_NAME: %s' % ( - request.environ.get('PATH_INFO', ''), - request.environ.get('SCRIPT_NAME', '') - )) - - -class ServerFixerTestCase(WerkzeugTestCase): - - def test_cgi_root_fix(self): - app = fixers.CGIRootFix(path_check_app) - response = Response.from_app(app, dict(create_environ(), - SCRIPT_NAME='/foo', - PATH_INFO='/bar', - SERVER_SOFTWARE='lighttpd/1.4.27' - )) - self.assert_equal(response.get_data(), - b'PATH_INFO: /foo/bar\nSCRIPT_NAME: ') - - def test_cgi_root_fix_custom_app_root(self): - app = fixers.CGIRootFix(path_check_app, app_root='/baz/poop/') - response = Response.from_app(app, dict(create_environ(), - SCRIPT_NAME='/foo', - PATH_INFO='/bar' - )) - self.assert_equal(response.get_data(), b'PATH_INFO: /foo/bar\nSCRIPT_NAME: baz/poop') - - def test_path_info_from_request_uri_fix(self): - app = fixers.PathInfoFromRequestUriFix(path_check_app) - for key in 'REQUEST_URI', 'REQUEST_URL', 'UNENCODED_URL': - env = dict(create_environ(), SCRIPT_NAME='/test', PATH_INFO='/?????') - env[key] = '/test/foo%25bar?drop=this' - response = Response.from_app(app, env) - self.assert_equal(response.get_data(), b'PATH_INFO: /foo%bar\nSCRIPT_NAME: /test') - - def test_proxy_fix(self): - @Request.application - def app(request): - return Response('%s|%s' % ( - request.remote_addr, - # do not use request.host as this fixes too :) - request.environ['HTTP_HOST'] - )) - app = fixers.ProxyFix(app, num_proxies=2) - environ = dict(create_environ(), - HTTP_X_FORWARDED_PROTO="https", - HTTP_X_FORWARDED_HOST='example.com', - 
-            HTTP_X_FORWARDED_FOR='1.2.3.4, 5.6.7.8',
-            REMOTE_ADDR='127.0.0.1',
-            HTTP_HOST='fake'
-        )
-
-        response = Response.from_app(app, environ)
-
-        self.assert_equal(response.get_data(), b'1.2.3.4|example.com')
-
-        # And we must check that if it is a redirection it is
-        # correctly done:
-
-        redirect_app = redirect('/foo/bar.hml')
-        response = Response.from_app(redirect_app, environ)
-
-        wsgi_headers = response.get_wsgi_headers(environ)
-        assert wsgi_headers['Location'] == 'https://example.com/foo/bar.hml'
-
-    def test_proxy_fix_weird_enum(self):
-        @fixers.ProxyFix
-        @Request.application
-        def app(request):
-            return Response(request.remote_addr)
-        environ = dict(create_environ(),
-            HTTP_X_FORWARDED_FOR=',',
-            REMOTE_ADDR='127.0.0.1',
-        )
-
-        response = Response.from_app(app, environ)
-        self.assert_strict_equal(response.get_data(), b'127.0.0.1')
-
-    def test_header_rewriter_fix(self):
-        @Request.application
-        def application(request):
-            return Response("", headers=[
-                ('X-Foo', 'bar')
-            ])
-        application = fixers.HeaderRewriterFix(application, ('X-Foo',), (('X-Bar', '42'),))
-        response = Response.from_app(application, create_environ())
-        assert response.headers['Content-Type'] == 'text/plain; charset=utf-8'
-        assert 'X-Foo' not in response.headers
-        assert response.headers['X-Bar'] == '42'
-
-
-class BrowserFixerTestCase(WerkzeugTestCase):
-
-    def test_ie_fixes(self):
-        @fixers.InternetExplorerFix
-        @Request.application
-        def application(request):
-            response = Response('binary data here', mimetype='application/vnd.ms-excel')
-            response.headers['Vary'] = 'Cookie'
-            response.headers['Content-Disposition'] = 'attachment; filename=foo.xls'
-            return response
-
-        c = Client(application, Response)
-        response = c.get('/', headers=[
-            ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
-        ])
-
-        # IE gets no vary
-        self.assert_equal(response.get_data(), b'binary data here')
-        assert 'vary' not in response.headers
-        assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
-        assert response.headers['content-type'] == 'application/vnd.ms-excel'
-
-        # other browsers do
-        c = Client(application, Response)
-        response = c.get('/')
-        self.assert_equal(response.get_data(), b'binary data here')
-        assert 'vary' in response.headers
-
-        cc = ResponseCacheControl()
-        cc.no_cache = True
-
-        @fixers.InternetExplorerFix
-        @Request.application
-        def application(request):
-            response = Response('binary data here', mimetype='application/vnd.ms-excel')
-            response.headers['Pragma'] = ', '.join(pragma)
-            response.headers['Cache-Control'] = cc.to_header()
-            response.headers['Content-Disposition'] = 'attachment; filename=foo.xls'
-            return response
-
-
-        # IE has no pragma or cache control
-        pragma = ('no-cache',)
-        c = Client(application, Response)
-        response = c.get('/', headers=[
-            ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
-        ])
-        self.assert_equal(response.get_data(), b'binary data here')
-        assert 'pragma' not in response.headers
-        assert 'cache-control' not in response.headers
-        assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
-
-        # IE has simplified pragma
-        pragma = ('no-cache', 'x-foo')
-        cc.proxy_revalidate = True
-        response = c.get('/', headers=[
-            ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
-        ])
-        self.assert_equal(response.get_data(), b'binary data here')
-        assert response.headers['pragma'] == 'x-foo'
-        assert response.headers['cache-control'] == 'proxy-revalidate'
-        assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
-
-        # regular browsers get everything
-        response = c.get('/')
-        self.assert_equal(response.get_data(), b'binary data here')
-        assert response.headers['pragma'] == 'no-cache, x-foo'
-        cc = parse_cache_control_header(response.headers['cache-control'],
-                                        cls=ResponseCacheControl)
-        assert cc.no_cache
-        assert cc.proxy_revalidate
-        assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
-
-
-def suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(ServerFixerTestCase))
-    suite.addTest(unittest.makeSuite(BrowserFixerTestCase))
-    return suite
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/__init__.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/__init__.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/__init__.py 2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/__init__.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,19 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    werkzeug.testsuite.contrib
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Tests the contrib modules.
-
-    :copyright: (c) 2014 by Armin Ronacher.
-    :license: BSD, see LICENSE for more details.
-"""
-import unittest
-from werkzeug.testsuite import iter_suites
-
-
-def suite():
-    suite = unittest.TestSuite()
-    for other_suite in iter_suites(__name__):
-        suite.addTest(other_suite)
-    return suite
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/iterio.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/iterio.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/iterio.py 2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/iterio.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,184 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    werkzeug.testsuite.iterio
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Tests the iterio object.
-
-    :copyright: (c) 2014 by Armin Ronacher.
-    :license: BSD, see LICENSE for more details.
-""" -import unittest -from functools import partial - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.contrib.iterio import IterIO, greenlet - - -class IterOTestSuite(WerkzeugTestCase): - - def test_basic_native(self): - io = IterIO(["Hello", "World", "1", "2", "3"]) - self.assert_equal(io.tell(), 0) - self.assert_equal(io.read(2), "He") - self.assert_equal(io.tell(), 2) - self.assert_equal(io.read(3), "llo") - self.assert_equal(io.tell(), 5) - io.seek(0) - self.assert_equal(io.read(5), "Hello") - self.assert_equal(io.tell(), 5) - self.assert_equal(io._buf, "Hello") - self.assert_equal(io.read(), "World123") - self.assert_equal(io.tell(), 13) - io.close() - assert io.closed - - io = IterIO(["Hello\n", "World!"]) - self.assert_equal(io.readline(), 'Hello\n') - self.assert_equal(io._buf, 'Hello\n') - self.assert_equal(io.read(), 'World!') - self.assert_equal(io._buf, 'Hello\nWorld!') - self.assert_equal(io.tell(), 12) - io.seek(0) - self.assert_equal(io.readlines(), ['Hello\n', 'World!']) - - io = IterIO(["foo\n", "bar"]) - io.seek(-4, 2) - self.assert_equal(io.read(4), '\nbar') - - self.assert_raises(IOError, io.seek, 2, 100) - io.close() - self.assert_raises(ValueError, io.read) - - def test_basic_bytes(self): - io = IterIO([b"Hello", b"World", b"1", b"2", b"3"]) - self.assert_equal(io.tell(), 0) - self.assert_equal(io.read(2), b"He") - self.assert_equal(io.tell(), 2) - self.assert_equal(io.read(3), b"llo") - self.assert_equal(io.tell(), 5) - io.seek(0) - self.assert_equal(io.read(5), b"Hello") - self.assert_equal(io.tell(), 5) - self.assert_equal(io._buf, b"Hello") - self.assert_equal(io.read(), b"World123") - self.assert_equal(io.tell(), 13) - io.close() - assert io.closed - - io = IterIO([b"Hello\n", b"World!"]) - self.assert_equal(io.readline(), b'Hello\n') - self.assert_equal(io._buf, b'Hello\n') - self.assert_equal(io.read(), b'World!') - self.assert_equal(io._buf, b'Hello\nWorld!') - self.assert_equal(io.tell(), 12) - io.seek(0) - 
-        self.assert_equal(io.readlines(), [b'Hello\n', b'World!'])
-
-        io = IterIO([b"foo\n", b"bar"])
-        io.seek(-4, 2)
-        self.assert_equal(io.read(4), b'\nbar')
-
-        self.assert_raises(IOError, io.seek, 2, 100)
-        io.close()
-        self.assert_raises(ValueError, io.read)
-
-    def test_basic_unicode(self):
-        io = IterIO([u"Hello", u"World", u"1", u"2", u"3"])
-        self.assert_equal(io.tell(), 0)
-        self.assert_equal(io.read(2), u"He")
-        self.assert_equal(io.tell(), 2)
-        self.assert_equal(io.read(3), u"llo")
-        self.assert_equal(io.tell(), 5)
-        io.seek(0)
-        self.assert_equal(io.read(5), u"Hello")
-        self.assert_equal(io.tell(), 5)
-        self.assert_equal(io._buf, u"Hello")
-        self.assert_equal(io.read(), u"World123")
-        self.assert_equal(io.tell(), 13)
-        io.close()
-        assert io.closed
-
-        io = IterIO([u"Hello\n", u"World!"])
-        self.assert_equal(io.readline(), u'Hello\n')
-        self.assert_equal(io._buf, u'Hello\n')
-        self.assert_equal(io.read(), u'World!')
-        self.assert_equal(io._buf, u'Hello\nWorld!')
-        self.assert_equal(io.tell(), 12)
-        io.seek(0)
-        self.assert_equal(io.readlines(), [u'Hello\n', u'World!'])
-
-        io = IterIO([u"foo\n", u"bar"])
-        io.seek(-4, 2)
-        self.assert_equal(io.read(4), u'\nbar')
-
-        self.assert_raises(IOError, io.seek, 2, 100)
-        io.close()
-        self.assert_raises(ValueError, io.read)
-
-    def test_sentinel_cases(self):
-        io = IterIO([])
-        self.assert_strict_equal(io.read(), '')
-        io = IterIO([], b'')
-        self.assert_strict_equal(io.read(), b'')
-        io = IterIO([], u'')
-        self.assert_strict_equal(io.read(), u'')
-
-        io = IterIO([])
-        self.assert_strict_equal(io.read(), '')
-        io = IterIO([b''])
-        self.assert_strict_equal(io.read(), b'')
-        io = IterIO([u''])
-        self.assert_strict_equal(io.read(), u'')
-
-        io = IterIO([])
-        self.assert_strict_equal(io.readline(), '')
-        io = IterIO([], b'')
-        self.assert_strict_equal(io.readline(), b'')
-        io = IterIO([], u'')
-        self.assert_strict_equal(io.readline(), u'')
-
-        io = IterIO([])
-        self.assert_strict_equal(io.readline(), '')
-        io = IterIO([b''])
-        self.assert_strict_equal(io.readline(), b'')
-        io = IterIO([u''])
-        self.assert_strict_equal(io.readline(), u'')
-
-
-class IterITestSuite(WerkzeugTestCase):
-
-    def test_basic(self):
-        def producer(out):
-            out.write('1\n')
-            out.write('2\n')
-            out.flush()
-            out.write('3\n')
-        iterable = IterIO(producer)
-        self.assert_equal(next(iterable), '1\n2\n')
-        self.assert_equal(next(iterable), '3\n')
-        self.assert_raises(StopIteration, next, iterable)
-
-    def test_sentinel_cases(self):
-        def producer_dummy_flush(out):
-            out.flush()
-        iterable = IterIO(producer_dummy_flush)
-        self.assert_strict_equal(next(iterable), '')
-
-        def producer_empty(out):
-            pass
-        iterable = IterIO(producer_empty)
-        self.assert_raises(StopIteration, next, iterable)
-
-        iterable = IterIO(producer_dummy_flush, b'')
-        self.assert_strict_equal(next(iterable), b'')
-        iterable = IterIO(producer_dummy_flush, u'')
-        self.assert_strict_equal(next(iterable), u'')
-
-
-def suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(IterOTestSuite))
-    if greenlet is not None:
-        suite.addTest(unittest.makeSuite(IterITestSuite))
-    return suite
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/securecookie.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/securecookie.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/securecookie.py 2014-02-08 16:30:04.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/securecookie.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,64 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    werkzeug.testsuite.securecookie
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Tests the secure cookie.
-
-    :copyright: (c) 2014 by Armin Ronacher.
-    :license: BSD, see LICENSE for more details.
-""" -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.utils import parse_cookie -from werkzeug.wrappers import Request, Response -from werkzeug.contrib.securecookie import SecureCookie - - -class SecureCookieTestCase(WerkzeugTestCase): - - def test_basic_support(self): - c = SecureCookie(secret_key=b'foo') - assert c.new - assert not c.modified - assert not c.should_save - c['x'] = 42 - assert c.modified - assert c.should_save - s = c.serialize() - - c2 = SecureCookie.unserialize(s, b'foo') - assert c is not c2 - assert not c2.new - assert not c2.modified - assert not c2.should_save - self.assert_equal(c2, c) - - c3 = SecureCookie.unserialize(s, b'wrong foo') - assert not c3.modified - assert not c3.new - self.assert_equal(c3, {}) - - def test_wrapper_support(self): - req = Request.from_values() - resp = Response() - c = SecureCookie.load_cookie(req, secret_key=b'foo') - assert c.new - c['foo'] = 42 - self.assert_equal(c.secret_key, b'foo') - c.save_cookie(resp) - - req = Request.from_values(headers={ - 'Cookie': 'session="%s"' % parse_cookie(resp.headers['set-cookie'])['session'] - }) - c2 = SecureCookie.load_cookie(req, secret_key=b'foo') - assert not c2.new - self.assert_equal(c2, c) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(SecureCookieTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/sessions.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/sessions.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/sessions.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/sessions.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,91 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.sessions - ~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Added tests for the sessions. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import os -import unittest -import shutil -from tempfile import mkdtemp, gettempdir - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.contrib.sessions import FilesystemSessionStore - - - -class SessionTestCase(WerkzeugTestCase): - - def setup(self): - self.session_folder = mkdtemp() - - def teardown(self): - shutil.rmtree(self.session_folder) - - def test_default_tempdir(self): - store = FilesystemSessionStore() - assert store.path == gettempdir() - - def test_basic_fs_sessions(self): - store = FilesystemSessionStore(self.session_folder) - x = store.new() - assert x.new - assert not x.modified - x['foo'] = [1, 2, 3] - assert x.modified - store.save(x) - - x2 = store.get(x.sid) - assert not x2.new - assert not x2.modified - assert x2 is not x - assert x2 == x - x2['test'] = 3 - assert x2.modified - assert not x2.new - store.save(x2) - - x = store.get(x.sid) - store.delete(x) - x2 = store.get(x.sid) - # the session is not new when it was used previously. - assert not x2.new - - def test_non_urandom(self): - urandom = os.urandom - del os.urandom - try: - store = FilesystemSessionStore(self.session_folder) - store.new() - finally: - os.urandom = urandom - - - def test_renewing_fs_session(self): - store = FilesystemSessionStore(self.session_folder, renew_missing=True) - x = store.new() - store.save(x) - store.delete(x) - x2 = store.get(x.sid) - assert x2.new - - def test_fs_session_lising(self): - store = FilesystemSessionStore(self.session_folder, renew_missing=True) - sessions = set() - for x in range(10): - sess = store.new() - store.save(sess) - sessions.add(sess.sid) - - listed_sessions = set(store.list()) - assert sessions == listed_sessions - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(SessionTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/wrappers.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/wrappers.py --- 
python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/contrib/wrappers.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/contrib/wrappers.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,97 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.contrib.wrappers - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Added tests for the sessions. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" - -from __future__ import with_statement - -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.contrib import wrappers -from werkzeug import routing -from werkzeug.wrappers import Request, Response - - -class WrappersTestCase(WerkzeugTestCase): - - def test_reverse_slash_behavior(self): - class MyRequest(wrappers.ReverseSlashBehaviorRequestMixin, Request): - pass - req = MyRequest.from_values('/foo/bar', 'http://example.com/test') - assert req.url == 'http://example.com/test/foo/bar' - assert req.path == 'foo/bar' - assert req.script_root == '/test/' - - # make sure the routing system works with the slashes in - # reverse order as well. 
-        map = routing.Map([routing.Rule('/foo/bar', endpoint='foo')])
-        adapter = map.bind_to_environ(req.environ)
-        assert adapter.match() == ('foo', {})
-        adapter = map.bind(req.host, req.script_root)
-        assert adapter.match(req.path) == ('foo', {})
-
-    def test_dynamic_charset_request_mixin(self):
-        class MyRequest(wrappers.DynamicCharsetRequestMixin, Request):
-            pass
-        env = {'CONTENT_TYPE': 'text/html'}
-        req = MyRequest(env)
-        assert req.charset == 'latin1'
-
-        env = {'CONTENT_TYPE': 'text/html; charset=utf-8'}
-        req = MyRequest(env)
-        assert req.charset == 'utf-8'
-
-        env = {'CONTENT_TYPE': 'application/octet-stream'}
-        req = MyRequest(env)
-        assert req.charset == 'latin1'
-        assert req.url_charset == 'latin1'
-
-        MyRequest.url_charset = 'utf-8'
-        env = {'CONTENT_TYPE': 'application/octet-stream'}
-        req = MyRequest(env)
-        assert req.charset == 'latin1'
-        assert req.url_charset == 'utf-8'
-
-        def return_ascii(x):
-            return "ascii"
-        env = {'CONTENT_TYPE': 'text/plain; charset=x-weird-charset'}
-        req = MyRequest(env)
-        req.unknown_charset = return_ascii
-        assert req.charset == 'ascii'
-        assert req.url_charset == 'utf-8'
-
-    def test_dynamic_charset_response_mixin(self):
-        class MyResponse(wrappers.DynamicCharsetResponseMixin, Response):
-            default_charset = 'utf-7'
-        resp = MyResponse(mimetype='text/html')
-        assert resp.charset == 'utf-7'
-        resp.charset = 'utf-8'
-        assert resp.charset == 'utf-8'
-        assert resp.mimetype == 'text/html'
-        assert resp.mimetype_params == {'charset': 'utf-8'}
-        resp.mimetype_params['charset'] = 'iso-8859-15'
-        assert resp.charset == 'iso-8859-15'
-        resp.set_data(u'Hällo Wörld')
-        assert b''.join(resp.iter_encoded()) == \
-            u'Hällo Wörld'.encode('iso-8859-15')
-        del resp.headers['content-type']
-        try:
-            resp.charset = 'utf-8'
-        except TypeError as e:
-            pass
-        else:
-            assert False, 'expected type error on charset setting without ct'
-
-
-def suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(WrappersTestCase))
-    return suite
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/datastructures.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/datastructures.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/datastructures.py 2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/datastructures.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,810 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    werkzeug.testsuite.datastructures
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Tests the functionality of the provided Werkzeug
-    datastructures.
-
-    TODO:
-
-    - FileMultiDict
-    - Immutable types undertested
-    - Split up dict tests
-
-    :copyright: (c) 2014 by Armin Ronacher.
-    :license: BSD, see LICENSE for more details.
-"""
-
-from __future__ import with_statement
-
-import unittest
-import pickle
-from contextlib import contextmanager
-from copy import copy, deepcopy
-
-from werkzeug import datastructures
-from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \
-    iterlistvalues, text_type, PY2
-from werkzeug.testsuite import WerkzeugTestCase
-from werkzeug.exceptions import BadRequestKeyError
-
-
-class NativeItermethodsTestCase(WerkzeugTestCase):
-    def test_basic(self):
-        @datastructures.native_itermethods(['keys', 'values', 'items'])
-        class StupidDict(object):
-            def keys(self, multi=1):
-                return iter(['a', 'b', 'c'] * multi)
-
-            def values(self, multi=1):
-                return iter([1, 2, 3] * multi)
-
-            def items(self, multi=1):
-                return iter(zip(iterkeys(self, multi=multi),
-                                itervalues(self, multi=multi)))
-
-        d = StupidDict()
-        expected_keys = ['a', 'b', 'c']
-        expected_values = [1, 2, 3]
-        expected_items = list(zip(expected_keys, expected_values))
-
-        self.assert_equal(list(iterkeys(d)), expected_keys)
-        self.assert_equal(list(itervalues(d)), expected_values)
-        self.assert_equal(list(iteritems(d)), expected_items)
-
-        self.assert_equal(list(iterkeys(d, 2)), expected_keys * 2)
-        self.assert_equal(list(itervalues(d, 2)), expected_values * 2)
-        self.assert_equal(list(iteritems(d, 2)), expected_items * 2)
-
-
-class MutableMultiDictBaseTestCase(WerkzeugTestCase):
-    storage_class = None
-
-    def test_pickle(self):
-        cls = self.storage_class
-
-        for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
-            d = cls()
-            d.setlist(b'foo', [1, 2, 3, 4])
-            d.setlist(b'bar', b'foo bar baz'.split())
-            s = pickle.dumps(d, protocol)
-            ud = pickle.loads(s)
-            self.assert_equal(type(ud), type(d))
-            self.assert_equal(ud, d)
-            self.assert_equal(pickle.loads(
-                s.replace(b'werkzeug.datastructures', b'werkzeug')), d)
-            ud[b'newkey'] = b'bla'
-            self.assert_not_equal(ud, d)
-
-    def test_basic_interface(self):
-        md = self.storage_class()
-        assert isinstance(md, dict)
-
-        mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
-                   ('a', 1), ('a', 3), ('d', 4), ('c', 3)]
-        md = self.storage_class(mapping)
-
-        # simple getitem gives the first value
-        self.assert_equal(md['a'], 1)
-        self.assert_equal(md['c'], 3)
-        with self.assert_raises(KeyError):
-            md['e']
-        self.assert_equal(md.get('a'), 1)
-
-        # list getitem
-        self.assert_equal(md.getlist('a'), [1, 2, 1, 3])
-        self.assert_equal(md.getlist('d'), [3, 4])
-        # do not raise if key not found
-        self.assert_equal(md.getlist('x'), [])
-
-        # simple setitem overwrites all values
-        md['a'] = 42
-        self.assert_equal(md.getlist('a'), [42])
-
-        # list setitem
-        md.setlist('a', [1, 2, 3])
-        self.assert_equal(md['a'], 1)
-        self.assert_equal(md.getlist('a'), [1, 2, 3])
-
-        # verify that it does not change original lists
-        l1 = [1, 2, 3]
-        md.setlist('a', l1)
-        del l1[:]
-        self.assert_equal(md['a'], 1)
-
-        # setdefault, setlistdefault
-        self.assert_equal(md.setdefault('u', 23), 23)
-        self.assert_equal(md.getlist('u'), [23])
-        del md['u']
-
-        md.setlist('u', [-1, -2])
-
-        # delitem
-        del md['u']
-        with self.assert_raises(KeyError):
-            md['u']
-        del md['d']
-        self.assert_equal(md.getlist('d'), [])
-
-        # keys, values, items, lists
-        self.assert_equal(list(sorted(md.keys())), ['a', 'b', 'c'])
-        self.assert_equal(list(sorted(iterkeys(md))), ['a', 'b', 'c'])
-
-        self.assert_equal(list(sorted(itervalues(md))), [1, 2, 3])
-        self.assert_equal(list(sorted(itervalues(md))), [1, 2, 3])
-
-        self.assert_equal(list(sorted(md.items())),
-                          [('a', 1), ('b', 2), ('c', 3)])
-        self.assert_equal(list(sorted(md.items(multi=True))),
-                          [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)])
-        self.assert_equal(list(sorted(iteritems(md))),
-                          [('a', 1), ('b', 2), ('c', 3)])
-        self.assert_equal(list(sorted(iteritems(md, multi=True))),
-                          [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)])
-
-        self.assert_equal(list(sorted(md.lists())),
-                          [('a', [1, 2, 3]), ('b', [2]), ('c', [3])])
-        self.assert_equal(list(sorted(iterlists(md))),
-                          [('a', [1, 2, 3]), ('b', [2]), ('c', [3])])
-
-        # copy method
-        c = md.copy()
-        self.assert_equal(c['a'], 1)
-        self.assert_equal(c.getlist('a'), [1, 2, 3])
-
-        # copy method 2
-        c = copy(md)
-        self.assert_equal(c['a'], 1)
-        self.assert_equal(c.getlist('a'), [1, 2, 3])
-
-        # deepcopy method
-        c = md.deepcopy()
-        self.assert_equal(c['a'], 1)
-        self.assert_equal(c.getlist('a'), [1, 2, 3])
-
-        # deepcopy method 2
-        c = deepcopy(md)
-        self.assert_equal(c['a'], 1)
-        self.assert_equal(c.getlist('a'), [1, 2, 3])
-
-        # update with a multidict
-        od = self.storage_class([('a', 4), ('a', 5), ('y', 0)])
-        md.update(od)
-        self.assert_equal(md.getlist('a'), [1, 2, 3, 4, 5])
-        self.assert_equal(md.getlist('y'), [0])
-
-        # update with a regular dict
-        md = c
-        od = {'a': 4, 'y': 0}
-        md.update(od)
-        self.assert_equal(md.getlist('a'), [1, 2, 3, 4])
-        self.assert_equal(md.getlist('y'), [0])
-
-        # pop, poplist, popitem, popitemlist
-        self.assert_equal(md.pop('y'), 0)
-        assert 'y' not in md
-        self.assert_equal(md.poplist('a'), [1, 2, 3, 4])
-        assert 'a' not in md
-        self.assert_equal(md.poplist('missing'), [])
-
-        # remaining: b=2, c=3
-        popped = md.popitem()
-        assert popped in [('b', 2), ('c', 3)]
-        popped = md.popitemlist()
-        assert popped in [('b', [2]), ('c', [3])]
-
-        # type conversion
-        md = self.storage_class({'a': '4', 'b': ['2', '3']})
-        self.assert_equal(md.get('a', type=int), 4)
-        self.assert_equal(md.getlist('b', type=int), [2, 3])
-
-        # repr
-        md = self.storage_class([('a', 1), ('a', 2), ('b', 3)])
-        assert "('a', 1)" in repr(md)
-        assert "('a', 2)" in repr(md)
-        assert "('b', 3)" in repr(md)
-
-        # add and getlist
-        md.add('c', '42')
-        md.add('c', '23')
-        self.assert_equal(md.getlist('c'), ['42', '23'])
-        md.add('c', 'blah')
-        self.assert_equal(md.getlist('c', type=int), [42, 23])
-
-        # setdefault
-        md = self.storage_class()
-        md.setdefault('x', []).append(42)
-        md.setdefault('x', []).append(23)
-        self.assert_equal(md['x'], [42, 23])
-
-        # to dict
-        md = self.storage_class()
-        md['foo'] = 42
-        md.add('bar', 1)
-        md.add('bar', 2)
-        self.assert_equal(md.to_dict(), {'foo': 42, 'bar': 1})
-        self.assert_equal(md.to_dict(flat=False), {'foo': [42], 'bar': [1, 2]})
-
-        # popitem from empty dict
-        with self.assert_raises(KeyError):
-            self.storage_class().popitem()
-
-        with self.assert_raises(KeyError):
-            self.storage_class().popitemlist()
-
-        # key errors are of a special type
-        with self.assert_raises(BadRequestKeyError):
-            self.storage_class()[42]
-
-        # setlist works
-        md = self.storage_class()
-        md['foo'] = 42
-        md.setlist('foo', [1, 2])
-        self.assert_equal(md.getlist('foo'), [1, 2])
-
-
-class ImmutableDictBaseTestCase(WerkzeugTestCase):
-    storage_class = None
-
-    def test_follows_dict_interface(self):
-        cls = self.storage_class
-
-        data = {'foo': 1, 'bar': 2, 'baz': 3}
-        d = cls(data)
-
-        self.assert_equal(d['foo'], 1)
-        self.assert_equal(d['bar'], 2)
-        self.assert_equal(d['baz'], 3)
-        self.assert_equal(sorted(d.keys()), ['bar', 'baz', 'foo'])
-        self.assert_true('foo' in d)
-        self.assert_true('foox' not in d)
-        self.assert_equal(len(d), 3)
-
-    def test_copies_are_mutable(self):
-        cls = self.storage_class
-        immutable = cls({'a': 1})
-        with self.assert_raises(TypeError):
-            immutable.pop('a')
-
-        mutable = immutable.copy()
-        mutable.pop('a')
-        self.assert_true('a' in immutable)
-        self.assert_true(mutable is not immutable)
-        self.assert_true(copy(immutable) is immutable)
-
-    def test_dict_is_hashable(self):
-        cls = self.storage_class
-        immutable = cls({'a': 1, 'b': 2})
-        immutable2 = cls({'a': 2, 'b': 2})
-        x = set([immutable])
-        self.assert_true(immutable in x)
-        self.assert_true(immutable2 not in x)
-        x.discard(immutable)
-        self.assert_true(immutable not in x)
-        self.assert_true(immutable2 not in x)
-        x.add(immutable2)
-        self.assert_true(immutable not in x)
-        self.assert_true(immutable2 in x)
-        x.add(immutable)
-        self.assert_true(immutable in x)
-        self.assert_true(immutable2 in x)
-
-
-class ImmutableTypeConversionDictTestCase(ImmutableDictBaseTestCase):
-    storage_class = datastructures.ImmutableTypeConversionDict
-
-
-class ImmutableMultiDictTestCase(ImmutableDictBaseTestCase):
-    storage_class = datastructures.ImmutableMultiDict
-
-    def test_multidict_is_hashable(self):
-        cls = self.storage_class
-        immutable = cls({'a': [1, 2], 'b': 2})
-        immutable2 = cls({'a': [1], 'b': 2})
-        x = set([immutable])
-        self.assert_true(immutable in x)
-        self.assert_true(immutable2 not in x)
-        x.discard(immutable)
-        self.assert_true(immutable not in x)
-        self.assert_true(immutable2 not in x)
-        x.add(immutable2)
-        self.assert_true(immutable not in x)
-        self.assert_true(immutable2 in x)
-        x.add(immutable)
-        self.assert_true(immutable in x)
-        self.assert_true(immutable2 in x)
-
-
-class ImmutableDictTestCase(ImmutableDictBaseTestCase):
-    storage_class = datastructures.ImmutableDict
-
-
-class ImmutableOrderedMultiDictTestCase(ImmutableDictBaseTestCase):
-    storage_class = datastructures.ImmutableOrderedMultiDict
-
-    def test_ordered_multidict_is_hashable(self):
-        a = self.storage_class([('a', 1), ('b', 1), ('a', 2)])
-        b = self.storage_class([('a', 1), ('a', 2), ('b', 1)])
-        self.assert_not_equal(hash(a), hash(b))
-
-
-class MultiDictTestCase(MutableMultiDictBaseTestCase):
-    storage_class = datastructures.MultiDict
-
-    def test_multidict_pop(self):
-        make_d = lambda: self.storage_class({'foo': [1, 2, 3, 4]})
-        d = make_d()
-        self.assert_equal(d.pop('foo'), 1)
-        assert not d
-        d = make_d()
-        self.assert_equal(d.pop('foo', 32), 1)
-        assert not d
-        d = make_d()
-        self.assert_equal(d.pop('foos', 32), 32)
-        assert d
-
-        with self.assert_raises(KeyError):
-            d.pop('foos')
-
-    def test_setlistdefault(self):
-        md = self.storage_class()
-        self.assert_equal(md.setlistdefault('u', [-1, -2]), [-1, -2])
-        self.assert_equal(md.getlist('u'), [-1, -2])
-        self.assert_equal(md['u'], -1)
-
-    def test_iter_interfaces(self):
-        mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
-                   ('a', 1), ('a', 3), ('d', 4), ('c', 3)]
-        md = self.storage_class(mapping)
-        self.assert_equal(list(zip(md.keys(), md.listvalues())),
-                          list(md.lists()))
-        self.assert_equal(list(zip(md, iterlistvalues(md))),
-                          list(iterlists(md)))
-        self.assert_equal(list(zip(iterkeys(md), iterlistvalues(md))),
-                          list(iterlists(md)))
-
-
-class OrderedMultiDictTestCase(MutableMultiDictBaseTestCase):
-    storage_class = datastructures.OrderedMultiDict
-
-    def test_ordered_interface(self):
-        cls = self.storage_class
-
-        d = cls()
-        assert not d
-        d.add('foo', 'bar')
-        self.assert_equal(len(d), 1)
-        d.add('foo', 'baz')
-        self.assert_equal(len(d), 1)
-        self.assert_equal(list(iteritems(d)), [('foo', 'bar')])
-        self.assert_equal(list(d), ['foo'])
-        self.assert_equal(list(iteritems(d, multi=True)),
-                          [('foo', 'bar'), ('foo', 'baz')])
-        del d['foo']
-        assert not d
-        self.assert_equal(len(d), 0)
-        self.assert_equal(list(d), [])
-
-        d.update([('foo', 1), ('foo', 2), ('bar', 42)])
-        d.add('foo', 3)
-        self.assert_equal(d.getlist('foo'), [1, 2, 3])
-        self.assert_equal(d.getlist('bar'), [42])
-        self.assert_equal(list(iteritems(d)), [('foo', 1), ('bar', 42)])
-
-        expected = ['foo', 'bar']
-
-        self.assert_sequence_equal(list(d.keys()), expected)
-        self.assert_sequence_equal(list(d), expected)
-        self.assert_sequence_equal(list(iterkeys(d)), expected)
-
-        self.assert_equal(list(iteritems(d, multi=True)),
-                          [('foo', 1), ('foo', 2), ('bar', 42), ('foo', 3)])
-        self.assert_equal(len(d), 2)
-
-        self.assert_equal(d.pop('foo'), 1)
-        assert d.pop('blafasel', None) is None
-        self.assert_equal(d.pop('blafasel', 42), 42)
-        self.assert_equal(len(d), 1)
-        self.assert_equal(d.poplist('bar'), [42])
-        assert not d
-
-        d.get('missingkey') is None
-
-        d.add('foo', 42)
-        d.add('foo', 23)
-        d.add('bar', 2)
-        d.add('foo', 42)
-        self.assert_equal(d, datastructures.MultiDict(d))
-        id = self.storage_class(d)
-        self.assert_equal(d, id)
-        d.add('foo', 2)
-        assert d != id
-
-        d.update({'blah': [1, 2, 3]})
-        self.assert_equal(d['blah'], 1)
-        self.assert_equal(d.getlist('blah'), [1, 2, 3])
-
-        # setlist works
-        d = self.storage_class()
-        d['foo'] = 42
-        d.setlist('foo', [1, 2])
-        self.assert_equal(d.getlist('foo'), [1, 2])
-
-        with self.assert_raises(BadRequestKeyError):
-            d.pop('missing')
-        with self.assert_raises(BadRequestKeyError):
-            d['missing']
-
-        # popping
-        d = self.storage_class()
-        d.add('foo', 23)
-        d.add('foo', 42)
-        d.add('foo', 1)
-        self.assert_equal(d.popitem(), ('foo', 23))
-        with self.assert_raises(BadRequestKeyError):
-            d.popitem()
-        assert not d
-
-        d.add('foo', 23)
-        d.add('foo', 42)
-        d.add('foo', 1)
-        self.assert_equal(d.popitemlist(), ('foo', [23, 42, 1]))
-
-        with self.assert_raises(BadRequestKeyError):
-            d.popitemlist()
-
-    def test_iterables(self):
-        a = datastructures.MultiDict((("key_a", "value_a"),))
-        b = datastructures.MultiDict((("key_b", "value_b"),))
-        ab = datastructures.CombinedMultiDict((a,b))
-
-        self.assert_equal(sorted(ab.lists()), [('key_a', ['value_a']), ('key_b', ['value_b'])])
-        self.assert_equal(sorted(ab.listvalues()), [['value_a'], ['value_b']])
-        self.assert_equal(sorted(ab.keys()), ["key_a", "key_b"])
-
-        self.assert_equal(sorted(iterlists(ab)), [('key_a', ['value_a']), ('key_b', ['value_b'])])
-        self.assert_equal(sorted(iterlistvalues(ab)), [['value_a'], ['value_b']])
-        self.assert_equal(sorted(iterkeys(ab)), ["key_a", "key_b"])
-
-
-class CombinedMultiDictTestCase(WerkzeugTestCase):
-    storage_class = datastructures.CombinedMultiDict
-
-    def test_basic_interface(self):
-        d1 = datastructures.MultiDict([('foo', '1')])
-        d2 = datastructures.MultiDict([('bar', '2'), ('bar', '3')])
-        d = self.storage_class([d1, d2])
-
-        # lookup
-        self.assert_equal(d['foo'], '1')
-        self.assert_equal(d['bar'], '2')
-        self.assert_equal(d.getlist('bar'), ['2', '3'])
-
-        self.assert_equal(sorted(d.items()),
-                          [('bar', '2'), ('foo', '1')])
-        self.assert_equal(sorted(d.items(multi=True)),
-                          [('bar', '2'), ('bar', '3'), ('foo', '1')])
-        assert 'missingkey' not in d
-        assert 'foo' in d
-
-        # type lookup
-        self.assert_equal(d.get('foo', type=int), 1)
-        self.assert_equal(d.getlist('bar', type=int), [2, 3])
-
-        # get key errors for missing stuff
-        with self.assert_raises(KeyError):
-            d['missing']
-
-        # make sure that they are immutable
-        with self.assert_raises(TypeError):
-            d['foo'] = 'blub'
-
-        # copies are immutable
-        d = d.copy()
-        with self.assert_raises(TypeError):
-            d['foo'] = 'blub'
-
-        # make sure lists merges
-        md1 = datastructures.MultiDict((("foo", "bar"),))
-        md2 = datastructures.MultiDict((("foo", "blafasel"),))
-        x = self.storage_class((md1, md2))
-        self.assert_equal(list(iterlists(x)), [('foo', ['bar', 'blafasel'])])
-
-
-class HeadersTestCase(WerkzeugTestCase):
-    storage_class = datastructures.Headers
-
-    def test_basic_interface(self):
-        headers = self.storage_class()
-        headers.add('Content-Type', 'text/plain')
-        headers.add('X-Foo', 'bar')
-        assert 'x-Foo' in headers
-        assert 'Content-type' in headers
-
-        headers['Content-Type'] = 'foo/bar'
-        self.assert_equal(headers['Content-Type'], 'foo/bar')
-        self.assert_equal(len(headers.getlist('Content-Type')), 1)
-
-        # list conversion
-        self.assert_equal(headers.to_wsgi_list(), [
-            ('Content-Type', 'foo/bar'),
-            ('X-Foo', 'bar')
-        ])
-        self.assert_equal(str(headers), (
-            "Content-Type: foo/bar\r\n"
-            "X-Foo: bar\r\n"
-            "\r\n"))
-        self.assert_equal(str(self.storage_class()), "\r\n")
-
-        # extended add
-        headers.add('Content-Disposition', 'attachment', filename='foo')
-        self.assert_equal(headers['Content-Disposition'],
-                          'attachment; filename=foo')
-
-        headers.add('x', 'y', z='"')
-        self.assert_equal(headers['x'], r'y; z="\""')
-
-    def test_defaults_and_conversion(self):
-        # defaults
-        headers = self.storage_class([
-            ('Content-Type', 'text/plain'),
-            ('X-Foo', 'bar'),
-            ('X-Bar', '1'),
-            ('X-Bar', '2')
-        ])
-        self.assert_equal(headers.getlist('x-bar'), ['1', '2'])
-        self.assert_equal(headers.get('x-Bar'), '1')
-        self.assert_equal(headers.get('Content-Type'), 'text/plain')
-
-        self.assert_equal(headers.setdefault('X-Foo', 'nope'), 'bar')
-        self.assert_equal(headers.setdefault('X-Bar', 'nope'), '1')
-        self.assert_equal(headers.setdefault('X-Baz', 'quux'), 'quux')
-        self.assert_equal(headers.setdefault('X-Baz', 'nope'), 'quux')
-        headers.pop('X-Baz')
-
-        # type conversion
-        self.assert_equal(headers.get('x-bar', type=int), 1)
-        self.assert_equal(headers.getlist('x-bar', type=int), [1, 2])
-
-        # list like operations
-        self.assert_equal(headers[0], ('Content-Type', 'text/plain'))
-        self.assert_equal(headers[:1], self.storage_class([('Content-Type', 'text/plain')]))
-        del headers[:2]
-        del headers[-1]
-        self.assert_equal(headers, self.storage_class([('X-Bar', '1')]))
-
-    def test_copying(self):
-        a = self.storage_class([('foo', 'bar')])
-        b = a.copy()
-        a.add('foo', 'baz')
-        self.assert_equal(a.getlist('foo'), ['bar', 'baz'])
-        self.assert_equal(b.getlist('foo'), ['bar'])
-
-    def test_popping(self):
-        headers = self.storage_class([('a', 1)])
-        self.assert_equal(headers.pop('a'), 1)
-        self.assert_equal(headers.pop('b', 2), 2)
-
-        with self.assert_raises(KeyError):
-            headers.pop('c')
-
-    def test_set_arguments(self):
-        a = self.storage_class()
-        a.set('Content-Disposition', 'useless')
a.set('Content-Disposition', 'attachment', filename='foo') - self.assert_equal(a['Content-Disposition'], 'attachment; filename=foo') - - def test_reject_newlines(self): - h = self.storage_class() - - for variation in 'foo\nbar', 'foo\r\nbar', 'foo\rbar': - with self.assert_raises(ValueError): - h['foo'] = variation - with self.assert_raises(ValueError): - h.add('foo', variation) - with self.assert_raises(ValueError): - h.add('foo', 'test', option=variation) - with self.assert_raises(ValueError): - h.set('foo', variation) - with self.assert_raises(ValueError): - h.set('foo', 'test', option=variation) - - def test_slicing(self): - # there's nothing wrong with these being native strings - # Headers doesn't care about the data types - h = self.storage_class() - h.set('X-Foo-Poo', 'bleh') - h.set('Content-Type', 'application/whocares') - h.set('X-Forwarded-For', '192.168.0.123') - h[:] = [(k, v) for k, v in h if k.startswith(u'X-')] - self.assert_equal(list(h), [ - ('X-Foo-Poo', 'bleh'), - ('X-Forwarded-For', '192.168.0.123') - ]) - - def test_bytes_operations(self): - h = self.storage_class() - h.set('X-Foo-Poo', 'bleh') - h.set('X-Whoops', b'\xff') - - self.assert_equal(h.get('x-foo-poo', as_bytes=True), b'bleh') - self.assert_equal(h.get('x-whoops', as_bytes=True), b'\xff') - - def test_to_wsgi_list(self): - h = self.storage_class() - h.set(u'Key', u'Value') - for key, value in h.to_wsgi_list(): - if PY2: - self.assert_strict_equal(key, b'Key') - self.assert_strict_equal(value, b'Value') - else: - self.assert_strict_equal(key, u'Key') - self.assert_strict_equal(value, u'Value') - - - -class EnvironHeadersTestCase(WerkzeugTestCase): - storage_class = datastructures.EnvironHeaders - - def test_basic_interface(self): - # this happens in multiple WSGI servers because they - # use a vary naive way to convert the headers; - broken_env = { - 'HTTP_CONTENT_TYPE': 'text/html', - 'CONTENT_TYPE': 'text/html', - 'HTTP_CONTENT_LENGTH': '0', - 'CONTENT_LENGTH': '0', - 
'HTTP_ACCEPT': '*', - 'wsgi.version': (1, 0) - } - headers = self.storage_class(broken_env) - assert headers - self.assert_equal(len(headers), 3) - self.assert_equal(sorted(headers), [ - ('Accept', '*'), - ('Content-Length', '0'), - ('Content-Type', 'text/html') - ]) - assert not self.storage_class({'wsgi.version': (1, 0)}) - self.assert_equal(len(self.storage_class({'wsgi.version': (1, 0)})), 0) - - def test_return_type_is_unicode(self): - # environ contains native strings; we return unicode - headers = self.storage_class({ - 'HTTP_FOO': '\xe2\x9c\x93', - 'CONTENT_TYPE': 'text/plain', - }) - self.assert_equal(headers['Foo'], u"\xe2\x9c\x93") - assert isinstance(headers['Foo'], text_type) - assert isinstance(headers['Content-Type'], text_type) - iter_output = dict(iter(headers)) - self.assert_equal(iter_output['Foo'], u"\xe2\x9c\x93") - assert isinstance(iter_output['Foo'], text_type) - assert isinstance(iter_output['Content-Type'], text_type) - - def test_bytes_operations(self): - foo_val = '\xff' - h = self.storage_class({ - 'HTTP_X_FOO': foo_val - }) - - self.assert_equal(h.get('x-foo', as_bytes=True), b'\xff') - self.assert_equal(h.get('x-foo'), u'\xff') - - -class HeaderSetTestCase(WerkzeugTestCase): - storage_class = datastructures.HeaderSet - - def test_basic_interface(self): - hs = self.storage_class() - hs.add('foo') - hs.add('bar') - assert 'Bar' in hs - self.assert_equal(hs.find('foo'), 0) - self.assert_equal(hs.find('BAR'), 1) - assert hs.find('baz') < 0 - hs.discard('missing') - hs.discard('foo') - assert hs.find('foo') < 0 - self.assert_equal(hs.find('bar'), 0) - - with self.assert_raises(IndexError): - hs.index('missing') - - self.assert_equal(hs.index('bar'), 0) - assert hs - hs.clear() - assert not hs - - -class ImmutableListTestCase(WerkzeugTestCase): - storage_class = datastructures.ImmutableList - - def test_list_hashable(self): - t = (1, 2, 3, 4) - l = self.storage_class(t) - self.assert_equal(hash(t), hash(l)) - self.assert_not_equal(t, l) - - 
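The removed EnvironHeaders tests above hinge on how a WSGI environ maps back to HTTP headers: `CONTENT_TYPE` and `CONTENT_LENGTH` appear unprefixed, every other header arrives as `HTTP_<NAME>`, and the "naive" servers the `broken_env` case targets emit both forms, so the `HTTP_`-prefixed duplicates must be ignored. A stdlib-only sketch of that conversion (illustrative names, not Werkzeug's actual implementation):

```python
def environ_to_headers(environ):
    """Turn a WSGI environ into sorted (name, value) header pairs.

    CONTENT_TYPE and CONTENT_LENGTH are unprefixed in WSGI; all other
    headers arrive as HTTP_<NAME>.  Some servers naively emit both
    forms, so the HTTP_-prefixed duplicates are skipped here.
    """
    headers = []
    for key, value in environ.items():
        if key in ('CONTENT_TYPE', 'CONTENT_LENGTH') and value:
            name = key
        elif key.startswith('HTTP_') and key not in (
                'HTTP_CONTENT_TYPE', 'HTTP_CONTENT_LENGTH'):
            name = key[5:]
        else:
            continue  # wsgi.* keys and other non-header entries
        # HTTP_X_FOO -> X-Foo
        headers.append(
            ('-'.join(p.capitalize() for p in name.split('_')), value))
    return sorted(headers)

# The broken_env from the removed test: duplicated content headers
# plus a non-header wsgi.version key, reduced to three headers.
broken_env = {
    'HTTP_CONTENT_TYPE': 'text/html',
    'CONTENT_TYPE': 'text/html',
    'HTTP_CONTENT_LENGTH': '0',
    'CONTENT_LENGTH': '0',
    'HTTP_ACCEPT': '*',
    'wsgi.version': (1, 0),
}
print(environ_to_headers(broken_env))
# [('Accept', '*'), ('Content-Length', '0'), ('Content-Type', 'text/html')]
```

This reproduces exactly the three-header result the removed `test_basic_interface` asserts for `broken_env`.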
-def make_call_asserter(assert_equal_func, func=None): - """Utility to assert a certain number of function calls. - - >>> assert_calls, func = make_call_asserter(self.assert_equal) - >>> with assert_calls(2): - func() - func() - """ - - calls = [0] - - @contextmanager - def asserter(count, msg=None): - calls[0] = 0 - yield - assert_equal_func(calls[0], count, msg) - - def wrapped(*args, **kwargs): - calls[0] += 1 - if func is not None: - return func(*args, **kwargs) - - return asserter, wrapped - - -class CallbackDictTestCase(WerkzeugTestCase): - storage_class = datastructures.CallbackDict - - def test_callback_dict_reads(self): - assert_calls, func = make_call_asserter(self.assert_equal) - initial = {'a': 'foo', 'b': 'bar'} - dct = self.storage_class(initial=initial, on_update=func) - with assert_calls(0, 'callback triggered by read-only method'): - # read-only methods - dct['a'] - dct.get('a') - self.assert_raises(KeyError, lambda: dct['x']) - 'a' in dct - list(iter(dct)) - dct.copy() - with assert_calls(0, 'callback triggered without modification'): - # methods that may write but don't - dct.pop('z', None) - dct.setdefault('a') - - def test_callback_dict_writes(self): - assert_calls, func = make_call_asserter(self.assert_equal) - initial = {'a': 'foo', 'b': 'bar'} - dct = self.storage_class(initial=initial, on_update=func) - with assert_calls(8, 'callback not triggered by write method'): - # always-write methods - dct['z'] = 123 - dct['z'] = 123 # must trigger again - del dct['z'] - dct.pop('b', None) - dct.setdefault('x') - dct.popitem() - dct.update([]) - dct.clear() - with assert_calls(0, 'callback triggered by failed del'): - self.assert_raises(KeyError, lambda: dct.__delitem__('x')) - with assert_calls(0, 'callback triggered by failed pop'): - self.assert_raises(KeyError, lambda: dct.pop('x')) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(MultiDictTestCase)) - suite.addTest(unittest.makeSuite(OrderedMultiDictTestCase)) 
- suite.addTest(unittest.makeSuite(CombinedMultiDictTestCase)) - suite.addTest(unittest.makeSuite(ImmutableTypeConversionDictTestCase)) - suite.addTest(unittest.makeSuite(ImmutableMultiDictTestCase)) - suite.addTest(unittest.makeSuite(ImmutableDictTestCase)) - suite.addTest(unittest.makeSuite(ImmutableOrderedMultiDictTestCase)) - suite.addTest(unittest.makeSuite(HeadersTestCase)) - suite.addTest(unittest.makeSuite(EnvironHeadersTestCase)) - suite.addTest(unittest.makeSuite(HeaderSetTestCase)) - suite.addTest(unittest.makeSuite(NativeItermethodsTestCase)) - suite.addTest(unittest.makeSuite(CallbackDictTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/debug.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/debug.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/debug.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/debug.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,172 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.debug - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Tests some debug utilities. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest -import sys -import re - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.debug.repr import debug_repr, DebugReprGenerator, \ - dump, helper -from werkzeug.debug.console import HTMLStringO -from werkzeug._compat import PY2 - - -class DebugReprTestCase(WerkzeugTestCase): - - def test_basic_repr(self): - self.assert_equal(debug_repr([]), u'[]') - self.assert_equal(debug_repr([1, 2]), - u'[1, 2]') - self.assert_equal(debug_repr([1, 'test']), - u'[1, \'test\']') - self.assert_equal(debug_repr([None]), - u'[None]') - - def test_sequence_repr(self): - self.assert_equal(debug_repr(list(range(20))), ( - u'[0, 1, ' - u'2, 3, ' - u'4, 5, ' - u'6, 7, ' - u'8, ' - u'9, 10, ' - u'11, 12, ' - u'13, 14, ' - u'15, 16, ' - u'17, 18, ' - u'19]' - )) - - def test_mapping_repr(self): - self.assert_equal(debug_repr({}), u'{}') - self.assert_equal(debug_repr({'foo': 42}), - u'{\'foo\'' - u': 42' - u'}') - self.assert_equal(debug_repr(dict(zip(range(10), [None] * 10))), - u'{0: None, 1: None, 2: None, 3: None, 4: None, 5: None, 6: None, 7: None, 8: None, 9: None}') - self.assert_equal( - debug_repr((1, 'zwei', u'drei')), - u'(1, \'' - u'zwei\', %s\'drei\')' % ('u' if PY2 else '')) - - def test_custom_repr(self): - class Foo(object): - def __repr__(self): - return '' - self.assert_equal(debug_repr(Foo()), - '<Foo 42>') - - def test_list_subclass_repr(self): - class MyList(list): - pass - self.assert_equal( - debug_repr(MyList([1, 2])), - u'werkzeug.testsuite.debug.MyList([' - u'1, 2])') - - def test_regex_repr(self): - self.assert_equal(debug_repr(re.compile(r'foo\d')), - u're.compile(r\'foo\\d\')') - #XXX: no raw string here cause of a syntax bug in py3.3 - self.assert_equal(debug_repr(re.compile(u'foo\\d')), - u're.compile(%sr\'foo\\d\')' % - ('u' if PY2 else '')) - - def test_set_repr(self): - self.assert_equal(debug_repr(frozenset('x')), - u'frozenset([\'x\'])') - self.assert_equal(debug_repr(set('x')), - u'set([\'x\'])') - - def 
test_recursive_repr(self): - a = [1] - a.append(a) - self.assert_equal(debug_repr(a), - u'[1, [...]]') - - def test_broken_repr(self): - class Foo(object): - def __repr__(self): - raise Exception('broken!') - - self.assert_equal( - debug_repr(Foo()), - u'<broken repr (Exception: ' - u'broken!)>') - - -class Foo(object): - x = 42 - y = 23 - - def __init__(self): - self.z = 15 - - -class DebugHelpersTestCase(WerkzeugTestCase): - - def test_object_dumping(self): - drg = DebugReprGenerator() - out = drg.dump_object(Foo()) - assert re.search('Details for werkzeug.testsuite.debug.Foo object at', out) - assert re.search('
x.*42(?s)', out) - assert re.search('y.*23(?s)', out) - assert re.search('z.*15(?s)', out) - - out = drg.dump_object({'x': 42, 'y': 23}) - assert re.search('Contents of', out) - assert re.search('x.*42(?s)', out) - assert re.search('y.*23(?s)', out) - - out = drg.dump_object({'x': 42, 'y': 23, 23: 11}) - assert not re.search('Contents of', out) - - out = drg.dump_locals({'x': 42, 'y': 23}) - assert re.search('Local variables in frame', out) - assert re.search('x.*42(?s)', out) - assert re.search('y.*23(?s)', out) - - def test_debug_dump(self): - old = sys.stdout - sys.stdout = HTMLStringO() - try: - dump([1, 2, 3]) - x = sys.stdout.reset() - dump() - y = sys.stdout.reset() - finally: - sys.stdout = old - - self.assert_in('Details for list object at', x) - self.assert_in('1', x) - self.assert_in('Local variables in frame', y) - self.assert_in('x', y) - self.assert_in('old', y) - - def test_debug_help(self): - old = sys.stdout - sys.stdout = HTMLStringO() - try: - helper([1, 2, 3]) - x = sys.stdout.reset() - finally: - sys.stdout = old - - self.assert_in('Help on list object', x) - self.assert_in('__delitem__', x) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(DebugReprTestCase)) - suite.addTest(unittest.makeSuite(DebugHelpersTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/exceptions.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/exceptions.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/exceptions.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/exceptions.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,85 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.exceptions - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - The tests for the exception classes. - - TODO: - - - This is undertested. HTML is never checked - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import exceptions -from werkzeug.wrappers import Response -from werkzeug._compat import text_type - - -class ExceptionsTestCase(WerkzeugTestCase): - - def test_proxy_exception(self): - orig_resp = Response('Hello World') - try: - exceptions.abort(orig_resp) - except exceptions.HTTPException as e: - resp = e.get_response({}) - else: - self.fail('exception not raised') - self.assert_true(resp is orig_resp) - self.assert_equal(resp.get_data(), b'Hello World') - - def test_aborter(self): - abort = exceptions.abort - self.assert_raises(exceptions.BadRequest, abort, 400) - self.assert_raises(exceptions.Unauthorized, abort, 401) - self.assert_raises(exceptions.Forbidden, abort, 403) - self.assert_raises(exceptions.NotFound, abort, 404) - self.assert_raises(exceptions.MethodNotAllowed, abort, 405, ['GET', 'HEAD']) - self.assert_raises(exceptions.NotAcceptable, abort, 406) - self.assert_raises(exceptions.RequestTimeout, abort, 408) - self.assert_raises(exceptions.Gone, abort, 410) - self.assert_raises(exceptions.LengthRequired, abort, 411) - self.assert_raises(exceptions.PreconditionFailed, abort, 412) - self.assert_raises(exceptions.RequestEntityTooLarge, abort, 413) - self.assert_raises(exceptions.RequestURITooLarge, abort, 414) - self.assert_raises(exceptions.UnsupportedMediaType, abort, 415) - self.assert_raises(exceptions.UnprocessableEntity, abort, 422) - self.assert_raises(exceptions.InternalServerError, abort, 500) - self.assert_raises(exceptions.NotImplemented, abort, 501) - self.assert_raises(exceptions.BadGateway, abort, 502) - self.assert_raises(exceptions.ServiceUnavailable, abort, 503) - - myabort = exceptions.Aborter({1: exceptions.NotFound}) - self.assert_raises(LookupError, myabort, 404) - self.assert_raises(exceptions.NotFound, myabort, 1) - - myabort = exceptions.Aborter(extra={1: exceptions.NotFound}) - self.assert_raises(exceptions.NotFound, myabort, 404) - 
self.assert_raises(exceptions.NotFound, myabort, 1) - - def test_exception_repr(self): - exc = exceptions.NotFound() - self.assert_equal(text_type(exc), '404: Not Found') - self.assert_equal(repr(exc), "") - - exc = exceptions.NotFound('Not There') - self.assert_equal(text_type(exc), '404: Not Found') - self.assert_equal(repr(exc), "") - - def test_special_exceptions(self): - exc = exceptions.MethodNotAllowed(['GET', 'HEAD', 'POST']) - h = dict(exc.get_headers({})) - self.assert_equal(h['Allow'], 'GET, HEAD, POST') - self.assert_true('The method is not allowed' in exc.get_description()) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(ExceptionsTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/formparser.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/formparser.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/formparser.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/formparser.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,411 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.formparser - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Tests the form parsing facilities. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" - -from __future__ import with_statement - -import unittest -from os.path import join, dirname - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import formparser -from werkzeug.test import create_environ, Client -from werkzeug.wrappers import Request, Response -from werkzeug.exceptions import RequestEntityTooLarge -from werkzeug.datastructures import MultiDict -from werkzeug.formparser import parse_form_data -from werkzeug._compat import BytesIO - - -@Request.application -def form_data_consumer(request): - result_object = request.args['object'] - if result_object == 'text': - return Response(repr(request.form['text'])) - f = request.files[result_object] - return Response(b'\n'.join(( - repr(f.filename).encode('ascii'), - repr(f.name).encode('ascii'), - repr(f.content_type).encode('ascii'), - f.stream.read() - ))) - - -def get_contents(filename): - with open(filename, 'rb') as f: - return f.read() - - -class FormParserTestCase(WerkzeugTestCase): - - def test_limiting(self): - data = b'foo=Hello+World&bar=baz' - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='application/x-www-form-urlencoded', - method='POST') - req.max_content_length = 400 - self.assert_strict_equal(req.form['foo'], u'Hello World') - - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='application/x-www-form-urlencoded', - method='POST') - req.max_form_memory_size = 7 - self.assert_raises(RequestEntityTooLarge, lambda: req.form['foo']) - - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='application/x-www-form-urlencoded', - method='POST') - req.max_form_memory_size = 400 - self.assert_strict_equal(req.form['foo'], u'Hello World') - - data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n' - b'Hello World\r\n' - b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n' - b'bar=baz\r\n--foo--') - req = 
Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - req.max_content_length = 4 - self.assert_raises(RequestEntityTooLarge, lambda: req.form['foo']) - - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - req.max_content_length = 400 - self.assert_strict_equal(req.form['foo'], u'Hello World') - - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - req.max_form_memory_size = 7 - self.assert_raises(RequestEntityTooLarge, lambda: req.form['foo']) - - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - req.max_form_memory_size = 400 - self.assert_strict_equal(req.form['foo'], u'Hello World') - - def test_missing_multipart_boundary(self): - data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n' - b'Hello World\r\n' - b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n' - b'bar=baz\r\n--foo--') - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data', - method='POST') - self.assert_equal(req.form, {}) - - def test_parse_form_data_put_without_content(self): - # A PUT without a Content-Type header returns empty data - - # Both rfc1945 and rfc2616 (1.0 and 1.1) say "Any HTTP/[1.0/1.1] message - # containing an entity-body SHOULD include a Content-Type header field - # defining the media type of that body." In the case where either - # headers are omitted, parse_form_data should still work. 
- env = create_environ('/foo', 'http://example.org/', method='PUT') - del env['CONTENT_TYPE'] - del env['CONTENT_LENGTH'] - - stream, form, files = formparser.parse_form_data(env) - self.assert_strict_equal(stream.read(), b'') - self.assert_strict_equal(len(form), 0) - self.assert_strict_equal(len(files), 0) - - def test_parse_form_data_get_without_content(self): - env = create_environ('/foo', 'http://example.org/', method='GET') - del env['CONTENT_TYPE'] - del env['CONTENT_LENGTH'] - - stream, form, files = formparser.parse_form_data(env) - self.assert_strict_equal(stream.read(), b'') - self.assert_strict_equal(len(form), 0) - self.assert_strict_equal(len(files), 0) - - def test_large_file(self): - data = b'x' * (1024 * 600) - req = Request.from_values(data={'foo': (BytesIO(data), 'test.txt')}, - method='POST') - # make sure we have a real file here, because we expect to be - # on the disk. > 1024 * 500 - self.assert_true(hasattr(req.files['foo'].stream, u'fileno')) - # close file to prevent fds from leaking - req.files['foo'].close() - - def test_streaming_parse(self): - data = b'x' * (1024 * 600) - class StreamMPP(formparser.MultiPartParser): - def parse(self, file, boundary, content_length): - i = iter(self.parse_lines(file, boundary, content_length)) - one = next(i) - two = next(i) - return self.cls(()), {'one': one, 'two': two} - class StreamFDP(formparser.FormDataParser): - def _sf_parse_multipart(self, stream, mimetype, - content_length, options): - form, files = StreamMPP( - self.stream_factory, self.charset, self.errors, - max_form_memory_size=self.max_form_memory_size, - cls=self.cls).parse(stream, options.get('boundary').encode('ascii'), - content_length) - return stream, form, files - parse_functions = {} - parse_functions.update(formparser.FormDataParser.parse_functions) - parse_functions['multipart/form-data'] = _sf_parse_multipart - class StreamReq(Request): - form_data_parser_class = StreamFDP - req = StreamReq.from_values(data={'foo': 
(BytesIO(data), 'test.txt')}, - method='POST') - self.assert_strict_equal('begin_file', req.files['one'][0]) - self.assert_strict_equal(('foo', 'test.txt'), req.files['one'][1][1:]) - self.assert_strict_equal('cont', req.files['two'][0]) - self.assert_strict_equal(data, req.files['two'][1]) - - -class MultiPartTestCase(WerkzeugTestCase): - - def test_basic(self): - resources = join(dirname(__file__), 'multipart') - client = Client(form_data_consumer, Response) - - repository = [ - ('firefox3-2png1txt', '---------------------------186454651713519341951581030105', [ - (u'anchor.png', 'file1', 'image/png', 'file1.png'), - (u'application_edit.png', 'file2', 'image/png', 'file2.png') - ], u'example text'), - ('firefox3-2pnglongtext', '---------------------------14904044739787191031754711748', [ - (u'accept.png', 'file1', 'image/png', 'file1.png'), - (u'add.png', 'file2', 'image/png', 'file2.png') - ], u'--long text\r\n--with boundary\r\n--lookalikes--'), - ('opera8-2png1txt', '----------zEO9jQKmLc2Cq88c23Dx19', [ - (u'arrow_branch.png', 'file1', 'image/png', 'file1.png'), - (u'award_star_bronze_1.png', 'file2', 'image/png', 'file2.png') - ], u'blafasel öäü'), - ('webkit3-2png1txt', '----WebKitFormBoundaryjdSFhcARk8fyGNy6', [ - (u'gtk-apply.png', 'file1', 'image/png', 'file1.png'), - (u'gtk-no.png', 'file2', 'image/png', 'file2.png') - ], u'this is another text with ümläüts'), - ('ie6-2png1txt', '---------------------------7d91b03a20128', [ - (u'file1.png', 'file1', 'image/x-png', 'file1.png'), - (u'file2.png', 'file2', 'image/x-png', 'file2.png') - ], u'ie6 sucks :-/') - ] - - for name, boundary, files, text in repository: - folder = join(resources, name) - data = get_contents(join(folder, 'request.txt')) - for filename, field, content_type, fsname in files: - response = client.post('/?object=' + field, data=data, content_type= - 'multipart/form-data; boundary="%s"' % boundary, - content_length=len(data)) - lines = response.get_data().split(b'\n', 3) - 
self.assert_strict_equal(lines[0], repr(filename).encode('ascii')) - self.assert_strict_equal(lines[1], repr(field).encode('ascii')) - self.assert_strict_equal(lines[2], repr(content_type).encode('ascii')) - self.assert_strict_equal(lines[3], get_contents(join(folder, fsname))) - response = client.post('/?object=text', data=data, content_type= - 'multipart/form-data; boundary="%s"' % boundary, - content_length=len(data)) - self.assert_strict_equal(response.get_data(), repr(text).encode('utf-8')) - - def test_ie7_unc_path(self): - client = Client(form_data_consumer, Response) - data_file = join(dirname(__file__), 'multipart', 'ie7_full_path_request.txt') - data = get_contents(data_file) - boundary = '---------------------------7da36d1b4a0164' - response = client.post('/?object=cb_file_upload_multiple', data=data, content_type= - 'multipart/form-data; boundary="%s"' % boundary, content_length=len(data)) - lines = response.get_data().split(b'\n', 3) - self.assert_strict_equal(lines[0], - repr(u'Sellersburg Town Council Meeting 02-22-2010doc.doc').encode('ascii')) - - def test_end_of_file(self): - # This test looks innocent but it was actually timeing out in - # the Werkzeug 0.5 release version (#394) - data = ( - b'--foo\r\n' - b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n' - b'Content-Type: text/plain\r\n\r\n' - b'file contents and no end' - ) - data = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - self.assert_true(not data.files) - self.assert_true(not data.form) - - def test_broken(self): - data = ( - '--foo\r\n' - 'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n' - 'Content-Transfer-Encoding: base64\r\n' - 'Content-Type: text/plain\r\n\r\n' - 'broken base 64' - '--foo--' - ) - _, form, files = formparser.parse_form_data(create_environ(data=data, - method='POST', content_type='multipart/form-data; boundary=foo')) - 
self.assert_true(not files) - self.assert_true(not form) - - self.assert_raises(ValueError, formparser.parse_form_data, - create_environ(data=data, method='POST', - content_type='multipart/form-data; boundary=foo'), - silent=False) - - def test_file_no_content_type(self): - data = ( - b'--foo\r\n' - b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n\r\n' - b'file contents\r\n--foo--' - ) - data = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - self.assert_equal(data.files['test'].filename, 'test.txt') - self.assert_strict_equal(data.files['test'].read(), b'file contents') - - def test_extra_newline(self): - # this test looks innocent but it was actually timeing out in - # the Werkzeug 0.5 release version (#394) - data = ( - b'\r\n\r\n--foo\r\n' - b'Content-Disposition: form-data; name="foo"\r\n\r\n' - b'a string\r\n' - b'--foo--' - ) - data = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - self.assert_true(not data.files) - self.assert_strict_equal(data.form['foo'], u'a string') - - def test_headers(self): - data = (b'--foo\r\n' - b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' - b'X-Custom-Header: blah\r\n' - b'Content-Type: text/plain; charset=utf-8\r\n\r\n' - b'file contents, just the contents\r\n' - b'--foo--') - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - foo = req.files['foo'] - self.assert_strict_equal(foo.mimetype, 'text/plain') - self.assert_strict_equal(foo.mimetype_params, {'charset': 'utf-8'}) - self.assert_strict_equal(foo.headers['content-type'], foo.content_type) - self.assert_strict_equal(foo.content_type, 'text/plain; charset=utf-8') - self.assert_strict_equal(foo.headers['x-custom-header'], 'blah') - - 
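The multipart bodies hand-built in the removed tests above follow RFC 2046 framing: each part opens with `--boundary`, carries its own header block terminated by a blank line, and the final delimiter is `--boundary--`. As a hedged illustration, the stdlib `email` parser can decode the same body used in the removed `test_headers` (this is not Werkzeug's `MultiPartParser`, which parses the stream line by line):

```python
from email import message_from_bytes

# The body from the removed test_headers test, wrapped in a top-level
# Content-Type header so the stdlib parser can split it at the boundary.
body = (b'--foo\r\n'
        b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
        b'X-Custom-Header: blah\r\n'
        b'Content-Type: text/plain; charset=utf-8\r\n\r\n'
        b'file contents, just the contents\r\n'
        b'--foo--')
msg = message_from_bytes(
    b'Content-Type: multipart/form-data; boundary=foo\r\n\r\n' + body)

part = msg.get_payload()[0]
mimetype = part.get_content_type()    # mimetype without parameters
charset = part.get_param('charset')   # parameter of the part's Content-Type
field = part.get_param('name', header='content-disposition')
filename = part.get_filename()        # from the Content-Disposition header
contents = part.get_payload()         # body text, CRLF before --foo-- stripped
print(mimetype, charset, field, filename)
```

Note the same split the removed assertions check: `mimetype` is `text/plain` while the full `Content-Type` value keeps the `charset=utf-8` parameter.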
def test_nonstandard_line_endings(self): - for nl in b'\n', b'\r', b'\r\n': - data = nl.join(( - b'--foo', - b'Content-Disposition: form-data; name=foo', - b'', - b'this is just bar', - b'--foo', - b'Content-Disposition: form-data; name=bar', - b'', - b'blafasel', - b'--foo--' - )) - req = Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; ' - 'boundary=foo', method='POST') - self.assert_strict_equal(req.form['foo'], u'this is just bar') - self.assert_strict_equal(req.form['bar'], u'blafasel') - - def test_failures(self): - def parse_multipart(stream, boundary, content_length): - parser = formparser.MultiPartParser(content_length) - return parser.parse(stream, boundary, content_length) - self.assert_raises(ValueError, parse_multipart, BytesIO(), b'broken ', 0) - - data = b'--foo\r\n\r\nHello World\r\n--foo--' - self.assert_raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) - - data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n' \ - b'Content-Transfer-Encoding: base64\r\n\r\nHello World\r\n--foo--' - self.assert_raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) - - data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\nHello World\r\n' - self.assert_raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) - - x = formparser.parse_multipart_headers(['foo: bar\r\n', ' x test\r\n']) - self.assert_strict_equal(x['foo'], 'bar\n x test') - self.assert_raises(ValueError, formparser.parse_multipart_headers, - ['foo: bar\r\n', ' x test']) - - def test_bad_newline_bad_newline_assumption(self): - class ISORequest(Request): - charset = 'latin1' - contents = b'U2vlbmUgbORu' - data = b'--foo\r\nContent-Disposition: form-data; name="test"\r\n' \ - b'Content-Transfer-Encoding: base64\r\n\r\n' + \ - contents + b'\r\n--foo--' - req = ISORequest.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; 
boundary=foo', - method='POST') - self.assert_strict_equal(req.form['test'], u'Sk\xe5ne l\xe4n') - - def test_empty_multipart(self): - environ = {} - data = b'--boundary--' - environ['REQUEST_METHOD'] = 'POST' - environ['CONTENT_TYPE'] = 'multipart/form-data; boundary=boundary' - environ['CONTENT_LENGTH'] = str(len(data)) - environ['wsgi.input'] = BytesIO(data) - stream, form, files = parse_form_data(environ, silent=False) - rv = stream.read() - self.assert_equal(rv, b'') - self.assert_equal(form, MultiDict()) - self.assert_equal(files, MultiDict()) - - -class InternalFunctionsTestCase(WerkzeugTestCase): - - def test_line_parser(self): - assert formparser._line_parse('foo') == ('foo', False) - assert formparser._line_parse('foo\r\n') == ('foo', True) - assert formparser._line_parse('foo\r') == ('foo', True) - assert formparser._line_parse('foo\n') == ('foo', True) - - def test_find_terminator(self): - lineiter = iter(b'\n\n\nfoo\nbar\nbaz'.splitlines(True)) - find_terminator = formparser.MultiPartParser()._find_terminator - line = find_terminator(lineiter) - self.assert_equal(line, b'foo') - self.assert_equal(list(lineiter), [b'bar\n', b'baz']) - self.assert_equal(find_terminator([]), b'') - self.assert_equal(find_terminator([b'']), b'') - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(FormParserTestCase)) - suite.addTest(unittest.makeSuite(MultiPartTestCase)) - suite.addTest(unittest.makeSuite(InternalFunctionsTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/http.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/http.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/http.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/http.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,449 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.http - ~~~~~~~~~~~~~~~~~~~~~~~ - - HTTP parsing utilities. - - :copyright: (c) 2014 by Armin Ronacher. 
- :license: BSD, see LICENSE for more details. -""" -import unittest -from datetime import datetime - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug._compat import itervalues, wsgi_encoding_dance - -from werkzeug import http, datastructures -from werkzeug.test import create_environ - - -class HTTPUtilityTestCase(WerkzeugTestCase): - - def test_accept(self): - a = http.parse_accept_header('en-us,ru;q=0.5') - self.assert_equal(list(itervalues(a)), ['en-us', 'ru']) - self.assert_equal(a.best, 'en-us') - self.assert_equal(a.find('ru'), 1) - self.assert_raises(ValueError, a.index, 'de') - self.assert_equal(a.to_header(), 'en-us,ru;q=0.5') - - def test_mime_accept(self): - a = http.parse_accept_header('text/xml,application/xml,' - 'application/xhtml+xml,' - 'text/html;q=0.9,text/plain;q=0.8,' - 'image/png,*/*;q=0.5', - datastructures.MIMEAccept) - self.assert_raises(ValueError, lambda: a['missing']) - self.assert_equal(a['image/png'], 1) - self.assert_equal(a['text/plain'], 0.8) - self.assert_equal(a['foo/bar'], 0.5) - self.assert_equal(a[a.find('foo/bar')], ('*/*', 0.5)) - - def test_accept_matches(self): - a = http.parse_accept_header('text/xml,application/xml,application/xhtml+xml,' - 'text/html;q=0.9,text/plain;q=0.8,' - 'image/png', datastructures.MIMEAccept) - self.assert_equal(a.best_match(['text/html', 'application/xhtml+xml']), - 'application/xhtml+xml') - self.assert_equal(a.best_match(['text/html']), 'text/html') - self.assert_true(a.best_match(['foo/bar']) is None) - self.assert_equal(a.best_match(['foo/bar', 'bar/foo'], - default='foo/bar'), 'foo/bar') - self.assert_equal(a.best_match(['application/xml', 'text/xml']), 'application/xml') - - def test_charset_accept(self): - a = http.parse_accept_header('ISO-8859-1,utf-8;q=0.7,*;q=0.7', - datastructures.CharsetAccept) - self.assert_equal(a['iso-8859-1'], a['iso8859-1']) - self.assert_equal(a['iso-8859-1'], 1) - self.assert_equal(a['UTF8'], 0.7) - self.assert_equal(a['ebcdic'], 0.7) - - def 
test_language_accept(self): - a = http.parse_accept_header('de-AT,de;q=0.8,en;q=0.5', - datastructures.LanguageAccept) - self.assert_equal(a.best, 'de-AT') - self.assert_true('de_AT' in a) - self.assert_true('en' in a) - self.assert_equal(a['de-at'], 1) - self.assert_equal(a['en'], 0.5) - - def test_set_header(self): - hs = http.parse_set_header('foo, Bar, "Blah baz", Hehe') - self.assert_true('blah baz' in hs) - self.assert_true('foobar' not in hs) - self.assert_true('foo' in hs) - self.assert_equal(list(hs), ['foo', 'Bar', 'Blah baz', 'Hehe']) - hs.add('Foo') - self.assert_equal(hs.to_header(), 'foo, Bar, "Blah baz", Hehe') - - def test_list_header(self): - hl = http.parse_list_header('foo baz, blah') - self.assert_equal(hl, ['foo baz', 'blah']) - - def test_dict_header(self): - d = http.parse_dict_header('foo="bar baz", blah=42') - self.assert_equal(d, {'foo': 'bar baz', 'blah': '42'}) - - def test_cache_control_header(self): - cc = http.parse_cache_control_header('max-age=0, no-cache') - assert cc.max_age == 0 - assert cc.no_cache - cc = http.parse_cache_control_header('private, community="UCI"', None, - datastructures.ResponseCacheControl) - assert cc.private - assert cc['community'] == 'UCI' - - c = datastructures.ResponseCacheControl() - assert c.no_cache is None - assert c.private is None - c.no_cache = True - assert c.no_cache == '*' - c.private = True - assert c.private == '*' - del c.private - assert c.private is None - assert c.to_header() == 'no-cache' - - def test_authorization_header(self): - a = http.parse_authorization_header('Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==') - assert a.type == 'basic' - assert a.username == 'Aladdin' - assert a.password == 'open sesame' - - a = http.parse_authorization_header('''Digest username="Mufasa", - realm="testrealm@host.invalid", - nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", - uri="/dir/index.html", - qop=auth, - nc=00000001, - cnonce="0a4f113b", - response="6629fae49393a05397450978507c4ef1", - 
opaque="5ccc069c403ebaf9f0171e9517f40e41"''') - assert a.type == 'digest' - assert a.username == 'Mufasa' - assert a.realm == 'testrealm@host.invalid' - assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' - assert a.uri == '/dir/index.html' - assert 'auth' in a.qop - assert a.nc == '00000001' - assert a.cnonce == '0a4f113b' - assert a.response == '6629fae49393a05397450978507c4ef1' - assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' - - a = http.parse_authorization_header('''Digest username="Mufasa", - realm="testrealm@host.invalid", - nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", - uri="/dir/index.html", - response="e257afa1414a3340d93d30955171dd0e", - opaque="5ccc069c403ebaf9f0171e9517f40e41"''') - assert a.type == 'digest' - assert a.username == 'Mufasa' - assert a.realm == 'testrealm@host.invalid' - assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' - assert a.uri == '/dir/index.html' - assert a.response == 'e257afa1414a3340d93d30955171dd0e' - assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' - - assert http.parse_authorization_header('') is None - assert http.parse_authorization_header(None) is None - assert http.parse_authorization_header('foo') is None - - def test_www_authenticate_header(self): - wa = http.parse_www_authenticate_header('Basic realm="WallyWorld"') - assert wa.type == 'basic' - assert wa.realm == 'WallyWorld' - wa.realm = 'Foo Bar' - assert wa.to_header() == 'Basic realm="Foo Bar"' - - wa = http.parse_www_authenticate_header('''Digest - realm="testrealm@host.com", - qop="auth,auth-int", - nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", - opaque="5ccc069c403ebaf9f0171e9517f40e41"''') - assert wa.type == 'digest' - assert wa.realm == 'testrealm@host.com' - assert 'auth' in wa.qop - assert 'auth-int' in wa.qop - assert wa.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' - assert wa.opaque == '5ccc069c403ebaf9f0171e9517f40e41' - - wa = http.parse_www_authenticate_header('broken') - assert wa.type == 'broken' - - assert not 
http.parse_www_authenticate_header('').type - assert not http.parse_www_authenticate_header('') - - def test_etags(self): - assert http.quote_etag('foo') == '"foo"' - assert http.quote_etag('foo', True) == 'w/"foo"' - assert http.unquote_etag('"foo"') == ('foo', False) - assert http.unquote_etag('w/"foo"') == ('foo', True) - es = http.parse_etags('"foo", "bar", w/"baz", blar') - assert sorted(es) == ['bar', 'blar', 'foo'] - assert 'foo' in es - assert 'baz' not in es - assert es.contains_weak('baz') - assert 'blar' in es - assert es.contains_raw('w/"baz"') - assert es.contains_raw('"foo"') - assert sorted(es.to_header().split(', ')) == ['"bar"', '"blar"', '"foo"', 'w/"baz"'] - - def test_etags_nonzero(self): - etags = http.parse_etags('w/"foo"') - self.assert_true(bool(etags)) - self.assert_true(etags.contains_raw('w/"foo"')) - - def test_parse_date(self): - assert http.parse_date('Sun, 06 Nov 1994 08:49:37 GMT ') == datetime(1994, 11, 6, 8, 49, 37) - assert http.parse_date('Sunday, 06-Nov-94 08:49:37 GMT') == datetime(1994, 11, 6, 8, 49, 37) - assert http.parse_date(' Sun Nov 6 08:49:37 1994') == datetime(1994, 11, 6, 8, 49, 37) - assert http.parse_date('foo') is None - - def test_parse_date_overflows(self): - assert http.parse_date(' Sun 02 Feb 1343 08:49:37 GMT') == datetime(1343, 2, 2, 8, 49, 37) - assert http.parse_date('Thu, 01 Jan 1970 00:00:00 GMT') == datetime(1970, 1, 1, 0, 0) - assert http.parse_date('Thu, 33 Jan 1970 00:00:00 GMT') is None - - def test_remove_entity_headers(self): - now = http.http_date() - headers1 = [('Date', now), ('Content-Type', 'text/html'), ('Content-Length', '0')] - headers2 = datastructures.Headers(headers1) - - http.remove_entity_headers(headers1) - assert headers1 == [('Date', now)] - - http.remove_entity_headers(headers2) - self.assert_equal(headers2, datastructures.Headers([(u'Date', now)])) - - def test_remove_hop_by_hop_headers(self): - headers1 = [('Connection', 'closed'), ('Foo', 'bar'), - ('Keep-Alive', 'wtf')] - 
headers2 = datastructures.Headers(headers1) - - http.remove_hop_by_hop_headers(headers1) - assert headers1 == [('Foo', 'bar')] - - http.remove_hop_by_hop_headers(headers2) - assert headers2 == datastructures.Headers([('Foo', 'bar')]) - - def test_parse_options_header(self): - assert http.parse_options_header(r'something; foo="other\"thing"') == \ - ('something', {'foo': 'other"thing'}) - assert http.parse_options_header(r'something; foo="other\"thing"; meh=42') == \ - ('something', {'foo': 'other"thing', 'meh': '42'}) - assert http.parse_options_header(r'something; foo="other\"thing"; meh=42; bleh') == \ - ('something', {'foo': 'other"thing', 'meh': '42', 'bleh': None}) - assert http.parse_options_header('something; foo="other;thing"; meh=42; bleh') == \ - ('something', {'foo': 'other;thing', 'meh': '42', 'bleh': None}) - assert http.parse_options_header('something; foo="otherthing"; meh=; bleh') == \ - ('something', {'foo': 'otherthing', 'meh': None, 'bleh': None}) - - - - def test_dump_options_header(self): - assert http.dump_options_header('foo', {'bar': 42}) == \ - 'foo; bar=42' - assert http.dump_options_header('foo', {'bar': 42, 'fizz': None}) in \ - ('foo; bar=42; fizz', 'foo; fizz; bar=42') - - def test_dump_header(self): - assert http.dump_header([1, 2, 3]) == '1, 2, 3' - assert http.dump_header([1, 2, 3], allow_token=False) == '"1", "2", "3"' - assert http.dump_header({'foo': 'bar'}, allow_token=False) == 'foo="bar"' - assert http.dump_header({'foo': 'bar'}) == 'foo=bar' - - def test_is_resource_modified(self): - env = create_environ() - - # ignore POST - env['REQUEST_METHOD'] = 'POST' - assert not http.is_resource_modified(env, etag='testing') - env['REQUEST_METHOD'] = 'GET' - - # etagify from data - self.assert_raises(TypeError, http.is_resource_modified, env, - data='42', etag='23') - env['HTTP_IF_NONE_MATCH'] = http.generate_etag(b'awesome') - assert not http.is_resource_modified(env, data=b'awesome') - - env['HTTP_IF_MODIFIED_SINCE'] = 
http.http_date(datetime(2008, 1, 1, 12, 30)) - assert not http.is_resource_modified(env, - last_modified=datetime(2008, 1, 1, 12, 00)) - assert http.is_resource_modified(env, - last_modified=datetime(2008, 1, 1, 13, 00)) - - def test_date_formatting(self): - assert http.cookie_date(0) == 'Thu, 01-Jan-1970 00:00:00 GMT' - assert http.cookie_date(datetime(1970, 1, 1)) == 'Thu, 01-Jan-1970 00:00:00 GMT' - assert http.http_date(0) == 'Thu, 01 Jan 1970 00:00:00 GMT' - assert http.http_date(datetime(1970, 1, 1)) == 'Thu, 01 Jan 1970 00:00:00 GMT' - - def test_cookies(self): - self.assert_strict_equal( - dict(http.parse_cookie('dismiss-top=6; CP=null*; PHPSESSID=0a539d42abc001cd' - 'c762809248d4beed; a=42; b="\\\";"')), - { - 'CP': u'null*', - 'PHPSESSID': u'0a539d42abc001cdc762809248d4beed', - 'a': u'42', - 'dismiss-top': u'6', - 'b': u'\";' - } - ) - self.assert_strict_equal( - set(http.dump_cookie('foo', 'bar baz blub', 360, httponly=True, - sync_expires=False).split(u'; ')), - set([u'HttpOnly', u'Max-Age=360', u'Path=/', u'foo="bar baz blub"']) - ) - self.assert_strict_equal(dict(http.parse_cookie('fo234{=bar; blub=Blah')), - {'fo234{': u'bar', 'blub': u'Blah'}) - - def test_cookie_quoting(self): - val = http.dump_cookie("foo", "?foo") - self.assert_strict_equal(val, 'foo="?foo"; Path=/') - self.assert_strict_equal(dict(http.parse_cookie(val)), {'foo': u'?foo'}) - - self.assert_strict_equal(dict(http.parse_cookie(r'foo="foo\054bar"')), - {'foo': u'foo,bar'}) - - def test_cookie_domain_resolving(self): - val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') - self.assert_strict_equal(val, 'foo=bar; Domain=xn--n3h.com; Path=/') - - def test_cookie_unicode_dumping(self): - val = http.dump_cookie('foo', u'\N{SNOWMAN}') - h = datastructures.Headers() - h.add('Set-Cookie', val) - self.assert_equal(h['Set-Cookie'], 'foo="\\342\\230\\203"; Path=/') - - cookies = http.parse_cookie(h['Set-Cookie']) - self.assert_equal(cookies['foo'], u'\N{SNOWMAN}') - - def 
test_cookie_unicode_keys(self): - # Yes, this is technically against the spec but happens - val = http.dump_cookie(u'fö', u'fö') - self.assert_equal(val, wsgi_encoding_dance(u'fö="f\\303\\266"; Path=/', 'utf-8')) - cookies = http.parse_cookie(val) - self.assert_equal(cookies[u'fö'], u'fö') - - def test_cookie_unicode_parsing(self): - # This is actually a correct test. This is what is being submitted - # by firefox if you set an unicode cookie and we get the cookie sent - # in on Python 3 under PEP 3333. - cookies = http.parse_cookie(u'fö=fö') - self.assert_equal(cookies[u'fö'], u'fö') - - def test_cookie_domain_encoding(self): - val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') - self.assert_strict_equal(val, 'foo=bar; Domain=xn--n3h.com; Path=/') - - val = http.dump_cookie('foo', 'bar', domain=u'.\N{SNOWMAN}.com') - self.assert_strict_equal(val, 'foo=bar; Domain=.xn--n3h.com; Path=/') - - val = http.dump_cookie('foo', 'bar', domain=u'.foo.com') - self.assert_strict_equal(val, 'foo=bar; Domain=.foo.com; Path=/') - - -class RangeTestCase(WerkzeugTestCase): - - def test_if_range_parsing(self): - rv = http.parse_if_range_header('"Test"') - assert rv.etag == 'Test' - assert rv.date is None - assert rv.to_header() == '"Test"' - - # weak information is dropped - rv = http.parse_if_range_header('w/"Test"') - assert rv.etag == 'Test' - assert rv.date is None - assert rv.to_header() == '"Test"' - - # broken etags are supported too - rv = http.parse_if_range_header('bullshit') - assert rv.etag == 'bullshit' - assert rv.date is None - assert rv.to_header() == '"bullshit"' - - rv = http.parse_if_range_header('Thu, 01 Jan 1970 00:00:00 GMT') - assert rv.etag is None - assert rv.date == datetime(1970, 1, 1) - assert rv.to_header() == 'Thu, 01 Jan 1970 00:00:00 GMT' - - for x in '', None: - rv = http.parse_if_range_header(x) - assert rv.etag is None - assert rv.date is None - assert rv.to_header() == '' - - def test_range_parsing(): - rv = 
http.parse_range_header('bytes=52') - assert rv is None - - rv = http.parse_range_header('bytes=52-') - assert rv.units == 'bytes' - assert rv.ranges == [(52, None)] - assert rv.to_header() == 'bytes=52-' - - rv = http.parse_range_header('bytes=52-99') - assert rv.units == 'bytes' - assert rv.ranges == [(52, 100)] - assert rv.to_header() == 'bytes=52-99' - - rv = http.parse_range_header('bytes=52-99,-1000') - assert rv.units == 'bytes' - assert rv.ranges == [(52, 100), (-1000, None)] - assert rv.to_header() == 'bytes=52-99,-1000' - - rv = http.parse_range_header('bytes = 1 - 100') - assert rv.units == 'bytes' - assert rv.ranges == [(1, 101)] - assert rv.to_header() == 'bytes=1-100' - - rv = http.parse_range_header('AWesomes=0-999') - assert rv.units == 'awesomes' - assert rv.ranges == [(0, 1000)] - assert rv.to_header() == 'awesomes=0-999' - - def test_content_range_parsing(): - rv = http.parse_content_range_header('bytes 0-98/*') - assert rv.units == 'bytes' - assert rv.start == 0 - assert rv.stop == 99 - assert rv.length is None - assert rv.to_header() == 'bytes 0-98/*' - - rv = http.parse_content_range_header('bytes 0-98/*asdfsa') - assert rv is None - - rv = http.parse_content_range_header('bytes 0-99/100') - assert rv.to_header() == 'bytes 0-99/100' - rv.start = None - rv.stop = None - assert rv.units == 'bytes' - assert rv.to_header() == 'bytes */100' - - rv = http.parse_content_range_header('bytes */100') - assert rv.start is None - assert rv.stop is None - assert rv.length == 100 - assert rv.units == 'bytes' - - -class RegressionTestCase(WerkzeugTestCase): - - def test_best_match_works(self): - # was a bug in 0.6 - rv = http.parse_accept_header('foo=,application/xml,application/xhtml+xml,' - 'text/html;q=0.9,text/plain;q=0.8,' - 'image/png,*/*;q=0.5', - datastructures.MIMEAccept).best_match(['foo/bar']) - self.assert_equal(rv, 'foo/bar') - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(HTTPUtilityTestCase)) - 
suite.addTest(unittest.makeSuite(RegressionTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/__init__.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/__init__.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/__init__.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/__init__.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,267 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite - ~~~~~~~~~~~~~~~~~~ - - Contains all test Werkzeug tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" - -from __future__ import with_statement - -import re -import sys -import unittest -import shutil -import tempfile -import atexit - -from werkzeug.utils import find_modules -from werkzeug._compat import text_type, integer_types, reraise - - -def get_temporary_directory(): - directory = tempfile.mkdtemp() - @atexit.register - def remove_directory(): - try: - shutil.rmtree(directory) - except EnvironmentError: - pass - return directory - - -def iter_suites(package): - """Yields all testsuites.""" - for module in find_modules(package, include_packages=True): - mod = __import__(module, fromlist=['*']) - if hasattr(mod, 'suite'): - yield mod.suite() - - -def find_all_tests(suite): - """Yields all the tests and their names from a given suite.""" - suites = [suite] - while suites: - s = suites.pop() - try: - suites.extend(s) - except TypeError: - yield s, '%s.%s.%s' % ( - s.__class__.__module__, - s.__class__.__name__, - s._testMethodName - ) - - -class WerkzeugTestCase(unittest.TestCase): - """Baseclass for all the tests that Werkzeug uses. Use these - methods for testing instead of the camelcased ones in the - baseclass for consistency. 
- """ - - def setup(self): - pass - - def teardown(self): - pass - - def setUp(self): - self.setup() - - def tearDown(self): - unittest.TestCase.tearDown(self) - self.teardown() - - def assert_line_equal(self, x, y): - assert x == y, "lines not equal\n a = %r\n b = %r" % (x, y) - - def assert_equal(self, x, y, msg=None): - return self.assertEqual(x, y, msg) - - def assert_not_equal(self, x, y): - return self.assertNotEqual(x, y) - - def assert_raises(self, exc_type, callable=None, *args, **kwargs): - catcher = _ExceptionCatcher(self, exc_type) - if callable is None: - return catcher - with catcher: - callable(*args, **kwargs) - - if sys.version_info[:2] == (2, 6): - def assertIsNone(self, x): - assert x is None, "%r is not None" % (x,) - - def assertIsNotNone(self, x): - assert x is not None, "%r is None" % (x, ) - - def assertIn(self, x, y): - assert x in y, "%r not in %r" % (x, y) - - def assertNotIn(self, x, y): - assert x not in y, "%r in %r" % (x, y) - - def assertIsInstance(self, x, y): - assert isinstance(x, y), "not isinstance(%r, %r)" % (x, y) - - def assertIs(self, x, y): - assert x is y, "%r is not %r" % (x, y) - - def assertIsNot(self, x, y): - assert x is not y, "%r is %r" % (x, y) - - def assertSequenceEqual(self, x, y): - self.assertEqual(x, y) - - def assertRaisesRegex(self, exc_type, regex, *args, **kwargs): - catcher = _ExceptionCatcher(self, exc_type) - if not args: - return catcher - elif callable(args[0]): - with catcher: - args[0](*args[1:], **kwargs) - if args[0] is not None: - assert re.search(args[0], catcher.exc_value[0]) - else: - raise NotImplementedError() - - elif sys.version_info[0] == 2: - def assertRaisesRegex(self, *args, **kwargs): - return self.assertRaisesRegexp(*args, **kwargs) - - def assert_is_none(self, x): - self.assertIsNone(x) - - def assert_is_not_none(self, x): - self.assertIsNotNone(x) - - def assert_in(self, x, y): - self.assertIn(x, y) - - def assert_is_instance(self, x, y): - self.assertIsInstance(x, y) - - def 
assert_not_in(self, x, y): - self.assertNotIn(x, y) - - def assert_is(self, x, y): - self.assertIs(x, y) - - def assert_is_not(self, x, y): - self.assertIsNot(x, y) - - def assert_true(self, x): - self.assertTrue(x) - - def assert_false(self, x): - self.assertFalse(x) - - def assert_raises_regex(self, *args, **kwargs): - return self.assertRaisesRegex(*args, **kwargs) - - def assert_sequence_equal(self, x, y): - self.assertSequenceEqual(x, y) - - def assert_strict_equal(self, x, y): - '''Stricter version of assert_equal that doesn't do implicit conversion - between unicode and strings''' - self.assert_equal(x, y) - assert issubclass(type(x), type(y)) or issubclass(type(y), type(x)), \ - '%s != %s' % (type(x), type(y)) - if isinstance(x, (bytes, text_type, integer_types)) or x is None: - return - elif isinstance(x, dict) or isinstance(y, dict): - x = sorted(x.items()) - y = sorted(y.items()) - elif isinstance(x, set) or isinstance(y, set): - x = sorted(x) - y = sorted(y) - rx, ry = repr(x), repr(y) - if rx != ry: - rx = rx[:200] + (rx[200:] and '...') - ry = ry[:200] + (ry[200:] and '...') - raise AssertionError(rx, ry) - assert repr(x) == repr(y), repr((x, y))[:200] - - -class _ExceptionCatcher(object): - - def __init__(self, test_case, exc_type): - self.test_case = test_case - self.exc_type = exc_type - self.exc_value = None - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, tb): - exception_name = self.exc_type.__name__ - if exc_type is None: - self.test_case.fail('Expected exception of type %r' % - exception_name) - elif not issubclass(exc_type, self.exc_type): - reraise(exc_type, exc_value, tb) - self.exc_value = exc_value - return True - - -class BetterLoader(unittest.TestLoader): - """A nicer loader that solves two problems. First of all we are setting - up tests from different sources and we're doing this programmatically - which breaks the default loading logic so this is required anyways. 
- Secondly this loader has a nicer interpolation for test names than the - default one so you can just do ``run-tests.py ViewTestCase`` and it - will work. - """ - - def getRootSuite(self): - return suite() - - def loadTestsFromName(self, name, module=None): - root = self.getRootSuite() - if name == 'suite': - return root - - all_tests = [] - for testcase, testname in find_all_tests(root): - if testname == name or \ - testname.endswith('.' + name) or \ - ('.' + name + '.') in testname or \ - testname.startswith(name + '.'): - all_tests.append(testcase) - - if not all_tests: - raise LookupError('could not find test case for "%s"' % name) - - if len(all_tests) == 1: - return all_tests[0] - rv = unittest.TestSuite() - for test in all_tests: - rv.addTest(test) - return rv - - -def suite(): - """A testsuite that has all the Flask tests. You can use this - function to integrate the Flask tests into your own testsuite - in case you want to test that monkeypatches to Flask do not - break it. - """ - suite = unittest.TestSuite() - for other_suite in iter_suites(__name__): - suite.addTest(other_suite) - return suite - - -def main(): - """Runs the testsuite as command line application.""" - try: - unittest.main(testLoader=BetterLoader(), defaultTest='suite') - except Exception: - import sys - import traceback - traceback.print_exc() - sys.exit(1) diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/internal.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/internal.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/internal.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/internal.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,81 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.internal - ~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Internal tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest - -from datetime import datetime -from warnings import filterwarnings, resetwarnings - -from werkzeug.testsuite import WerkzeugTestCase -from werkzeug.wrappers import Request, Response - -from werkzeug import _internal as internal -from werkzeug.test import create_environ - - -class InternalTestCase(WerkzeugTestCase): - - def test_date_to_unix(self): - assert internal._date_to_unix(datetime(1970, 1, 1)) == 0 - assert internal._date_to_unix(datetime(1970, 1, 1, 1, 0, 0)) == 3600 - assert internal._date_to_unix(datetime(1970, 1, 1, 1, 1, 1)) == 3661 - x = datetime(2010, 2, 15, 16, 15, 39) - assert internal._date_to_unix(x) == 1266250539 - - def test_easteregg(self): - req = Request.from_values('/?macgybarchakku') - resp = Response.force_type(internal._easteregg(None), req) - assert b'About Werkzeug' in resp.get_data() - assert b'the Swiss Army knife of Python web development' in resp.get_data() - - def test_wrapper_internals(self): - req = Request.from_values(data={'foo': 'bar'}, method='POST') - req._load_form_data() - assert req.form.to_dict() == {'foo': 'bar'} - - # second call does not break - req._load_form_data() - assert req.form.to_dict() == {'foo': 'bar'} - - # check reprs - assert repr(req) == "" - resp = Response() - assert repr(resp) == '' - resp.set_data('Hello World!') - assert repr(resp) == '' - resp.response = iter(['Test']) - assert repr(resp) == '' - - # unicode data does not set content length - response = Response([u'Hällo Wörld']) - headers = response.get_wsgi_headers(create_environ()) - assert u'Content-Length' not in headers - - response = Response([u'Hällo Wörld'.encode('utf-8')]) - headers = response.get_wsgi_headers(create_environ()) - assert u'Content-Length' in headers - - # check for internal warnings - filterwarnings('error', category=Warning) - response = Response() - environ = create_environ() - response.response = 'What the...?' 
- self.assert_raises(Warning, lambda: list(response.iter_encoded())) - self.assert_raises(Warning, lambda: list(response.get_app_iter(environ))) - response.direct_passthrough = True - self.assert_raises(Warning, lambda: list(response.iter_encoded())) - self.assert_raises(Warning, lambda: list(response.get_app_iter(environ))) - resetwarnings() - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(InternalTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/local.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/local.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/local.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/local.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,159 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.local - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Local and local proxy tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import time -import unittest -from threading import Thread - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import local - - -class LocalTestCase(WerkzeugTestCase): - - def test_basic_local(self): - l = local.Local() - l.foo = 0 - values = [] - def value_setter(idx): - time.sleep(0.01 * idx) - l.foo = idx - time.sleep(0.02) - values.append(l.foo) - threads = [Thread(target=value_setter, args=(x,)) - for x in [1, 2, 3]] - for thread in threads: - thread.start() - time.sleep(0.2) - assert sorted(values) == [1, 2, 3] - - def delfoo(): - del l.foo - delfoo() - self.assert_raises(AttributeError, lambda: l.foo) - self.assert_raises(AttributeError, delfoo) - - local.release_local(l) - - def test_local_release(self): - loc = local.Local() - loc.foo = 42 - local.release_local(loc) - assert not hasattr(loc, 'foo') - - ls = local.LocalStack() - ls.push(42) - local.release_local(ls) - assert ls.top is None - - def test_local_proxy(self): - foo = [] - ls = local.LocalProxy(lambda: foo) - ls.append(42) - ls.append(23) - ls[1:] = [1, 2, 3] - assert foo == [42, 1, 2, 3] - assert repr(foo) == repr(ls) - assert foo[0] == 42 - foo += [1] - assert list(foo) == [42, 1, 2, 3, 1] - - def test_local_proxy_operations_math(self): - foo = 2 - ls = local.LocalProxy(lambda: foo) - assert ls + 1 == 3 - assert 1 + ls == 3 - assert ls - 1 == 1 - assert 1 - ls == -1 - assert ls * 1 == 2 - assert 1 * ls == 2 - assert ls / 1 == 2 - assert 1.0 / ls == 0.5 - assert ls // 1.0 == 2.0 - assert 1.0 // ls == 0.0 - assert ls % 2 == 0 - assert 2 % ls == 0 - - def test_local_proxy_operations_strings(self): - foo = "foo" - ls = local.LocalProxy(lambda: foo) - assert ls + "bar" == "foobar" - assert "bar" + ls == "barfoo" - assert ls * 2 == "foofoo" - - foo = "foo %s" - assert ls % ("bar",) == "foo bar" - - def test_local_stack(self): - ident = local.get_ident() - - ls = local.LocalStack() - assert ident not in ls._local.__storage__ - assert ls.top is None - ls.push(42) - assert ident in 
ls._local.__storage__ - assert ls.top == 42 - ls.push(23) - assert ls.top == 23 - ls.pop() - assert ls.top == 42 - ls.pop() - assert ls.top is None - assert ls.pop() is None - assert ls.pop() is None - - proxy = ls() - ls.push([1, 2]) - assert proxy == [1, 2] - ls.push((1, 2)) - assert proxy == (1, 2) - ls.pop() - ls.pop() - assert repr(proxy) == '<LocalProxy unbound>' - - assert ident not in ls._local.__storage__ - - def test_local_proxies_with_callables(self): - foo = 42 - ls = local.LocalProxy(lambda: foo) - assert ls == 42 - foo = [23] - ls.append(42) - assert ls == [23, 42] - assert foo == [23, 42] - - def test_custom_idents(self): - ident = 0 - loc = local.Local() - stack = local.LocalStack() - mgr = local.LocalManager([loc, stack], ident_func=lambda: ident) - - loc.foo = 42 - stack.push({'foo': 42}) - ident = 1 - loc.foo = 23 - stack.push({'foo': 23}) - ident = 0 - assert loc.foo == 42 - assert stack.top['foo'] == 42 - stack.pop() - assert stack.top is None - ident = 1 - assert loc.foo == 23 - assert stack.top['foo'] == 23 - stack.pop() - assert stack.top is None - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(LocalTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/collect.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/collect.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/collect.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/collect.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,56 +0,0 @@ -#!/usr/bin/env python -""" -Hacky helper application to collect form data. 
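[Reviewer note, not part of the diff: the LocalStack/LocalProxy behaviour exercised by the deleted local.py tests above reduces to a per-thread stack with a `None`-returning `top`/`pop` on empty. A minimal stdlib-only sketch of that pattern, for context while reviewing the removal — this is illustrative only and is not werkzeug's actual implementation:]

```python
import threading


class MiniLocalStack:
    """Sketch of the context-local stack pattern the deleted tests cover."""

    def __init__(self):
        # one independent list per thread
        self._local = threading.local()

    def push(self, obj):
        stack = getattr(self._local, 'stack', None)
        if stack is None:
            stack = self._local.stack = []
        stack.append(obj)
        return stack

    def pop(self):
        # mirrors the tested behaviour: popping an empty stack yields None
        stack = getattr(self._local, 'stack', None)
        if not stack:
            return None
        return stack.pop()

    @property
    def top(self):
        stack = getattr(self._local, 'stack', None)
        if not stack:
            return None
        return stack[-1]


ls = MiniLocalStack()
assert ls.top is None
ls.push(42)
ls.push(23)
assert ls.pop() == 23
assert ls.top == 42
```

[Werkzeug's real `Local`/`LocalStack` use a pluggable `ident_func` (see `test_custom_idents` above) instead of `threading.local`, so the same objects also work under greenlets.]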
-""" -from werkzeug.serving import run_simple -from werkzeug.wrappers import Request, Response - - -def copy_stream(request): - from os import mkdir - from time import time - folder = 'request-%d' % time() - mkdir(folder) - environ = request.environ - f = open(folder + '/request.txt', 'wb+') - f.write(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH']))) - f.flush() - f.seek(0) - environ['wsgi.input'] = f - request.stat_folder = folder - - -def stats(request): - copy_stream(request) - f1 = request.files['file1'] - f2 = request.files['file2'] - text = request.form['text'] - f1.save(request.stat_folder + '/file1.bin') - f2.save(request.stat_folder + '/file2.bin') - open(request.stat_folder + '/text.txt', 'w').write(text.encode('utf-8')) - return Response('Done.') - - -def upload_file(request): - return Response(''' -

<h1>Upload File</h1>
- <form action="" method="post" enctype="multipart/form-data">
-  <input type="file" name="file1"><br>
-  <input type="file" name="file2"><br>
-  <input type="text" name="text"><br>
-  <input type="submit" value="Send">
- </form>
- ''', mimetype='text/html') - - -def application(environ, start_responseonse): - request = Request(environ) - if request.method == 'POST': - response = stats(request) - else: - response = upload_file(request) - return response(environ, start_responseonse) - - -if __name__ == '__main__': - run_simple('localhost', 5000, application, use_debugger=True) Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2png1txt/text.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -example text \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2pnglongtext/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2pnglongtext/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2pnglongtext/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2pnglongtext/file2.png differ Binary files 
/tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2pnglongtext/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2pnglongtext/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2pnglongtext/text.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2pnglongtext/text.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/firefox3-2pnglongtext/text.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/firefox3-2pnglongtext/text.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1,3 +0,0 @@ ---long text ---with boundary ---lookalikes-- \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie6-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie6-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie6-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie6-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie6-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie6-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie6-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie6-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie6-2png1txt/text.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie6-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -ie6 sucks :-/ \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/ie7_full_path_request.txt and 
/tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/ie7_full_path_request.txt differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/opera8-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/opera8-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/opera8-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/opera8-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/opera8-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/opera8-2png1txt/request.txt differ diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/opera8-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/opera8-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/opera8-2png1txt/text.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/opera8-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -blafasel öäü \ No newline at end of file Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/webkit3-2png1txt/file1.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/webkit3-2png1txt/file1.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/webkit3-2png1txt/file2.png and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/webkit3-2png1txt/file2.png differ Binary files /tmp/FabJo1H0Sm/python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/webkit3-2png1txt/request.txt and /tmp/WRiaVF22d0/python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/webkit3-2png1txt/request.txt differ diff -Nru 
python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/webkit3-2png1txt/text.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/webkit3-2png1txt/text.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/multipart/webkit3-2png1txt/text.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/multipart/webkit3-2png1txt/text.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -this is another text with ümläüts \ No newline at end of file diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/res/test.txt python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/res/test.txt --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/res/test.txt 2013-06-13 10:25:36.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/res/test.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -FOUND diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/routing.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/routing.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/routing.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/routing.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,689 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.routing - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Routing tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import routing as r -from werkzeug.wrappers import Response -from werkzeug.datastructures import ImmutableDict -from werkzeug.test import create_environ - - -class RoutingTestCase(WerkzeugTestCase): - - def test_basic_routing(self): - map = r.Map([ - r.Rule('/', endpoint='index'), - r.Rule('/foo', endpoint='foo'), - r.Rule('/bar/', endpoint='bar') - ]) - adapter = map.bind('example.org', '/') - assert adapter.match('/') == ('index', {}) - assert adapter.match('/foo') == ('foo', {}) - assert adapter.match('/bar/') == ('bar', {}) - self.assert_raises(r.RequestRedirect, lambda: adapter.match('/bar')) - self.assert_raises(r.NotFound, lambda: adapter.match('/blub')) - - adapter = map.bind('example.org', '/test') - try: - adapter.match('/bar') - except r.RequestRedirect as e: - assert e.new_url == 'http://example.org/test/bar/' - else: - self.fail('Expected request redirect') - - adapter = map.bind('example.org', '/') - try: - adapter.match('/bar') - except r.RequestRedirect as e: - assert e.new_url == 'http://example.org/bar/' - else: - self.fail('Expected request redirect') - - adapter = map.bind('example.org', '/') - try: - adapter.match('/bar', query_args={'aha': 'muhaha'}) - except r.RequestRedirect as e: - assert e.new_url == 'http://example.org/bar/?aha=muhaha' - else: - self.fail('Expected request redirect') - - adapter = map.bind('example.org', '/') - try: - adapter.match('/bar', query_args='aha=muhaha') - except r.RequestRedirect as e: - assert e.new_url == 'http://example.org/bar/?aha=muhaha' - else: - self.fail('Expected request redirect') - - adapter = map.bind_to_environ(create_environ('/bar?foo=bar', - 'http://example.org/')) - try: - adapter.match() - except r.RequestRedirect as e: - assert e.new_url == 'http://example.org/bar/?foo=bar' - else: - self.fail('Expected request redirect') - - def test_environ_defaults(self): - environ = create_environ("/foo") - 
self.assert_strict_equal(environ["PATH_INFO"], '/foo') - m = r.Map([r.Rule("/foo", endpoint="foo"), r.Rule("/bar", endpoint="bar")]) - a = m.bind_to_environ(environ) - self.assert_strict_equal(a.match("/foo"), ('foo', {})) - self.assert_strict_equal(a.match(), ('foo', {})) - self.assert_strict_equal(a.match("/bar"), ('bar', {})) - self.assert_raises(r.NotFound, a.match, "/bars") - - def test_environ_nonascii_pathinfo(self): - environ = create_environ(u'/лошадь') - m = r.Map([ - r.Rule(u'/', endpoint='index'), - r.Rule(u'/лошадь', endpoint='horse') - ]) - a = m.bind_to_environ(environ) - self.assert_strict_equal(a.match(u'/'), ('index', {})) - self.assert_strict_equal(a.match(u'/лошадь'), ('horse', {})) - self.assert_raises(r.NotFound, a.match, u'/барсук') - - def test_basic_building(self): - map = r.Map([ - r.Rule('/', endpoint='index'), - r.Rule('/foo', endpoint='foo'), - r.Rule('/bar/<baz>', endpoint='bar'), - r.Rule('/bar/<int:bazi>', endpoint='bari'), - r.Rule('/bar/<float:bazf>', endpoint='barf'), - r.Rule('/bar/<path:bazp>', endpoint='barp'), - r.Rule('/hehe', endpoint='blah', subdomain='blah') - ]) - adapter = map.bind('example.org', '/', subdomain='blah') - - assert adapter.build('index', {}) == 'http://example.org/' - assert adapter.build('foo', {}) == 'http://example.org/foo' - assert adapter.build('bar', {'baz': 'blub'}) == 'http://example.org/bar/blub' - assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/bar/50' - assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/bar/0.815' - assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/bar/la/di' - assert adapter.build('blah', {}) == '/hehe' - self.assert_raises(r.BuildError, lambda: adapter.build('urks')) - - adapter = map.bind('example.org', '/test', subdomain='blah') - assert adapter.build('index', {}) == 'http://example.org/test/' - assert adapter.build('foo', {}) == 'http://example.org/test/foo' - assert adapter.build('bar', {'baz': 'blub'}) == 'http://example.org/test/bar/blub' - assert
adapter.build('bari', {'bazi': 50}) == 'http://example.org/test/bar/50' - assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/test/bar/0.815' - assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/test/bar/la/di' - assert adapter.build('blah', {}) == '/test/hehe' - - def test_defaults(self): - map = r.Map([ - r.Rule('/foo/', defaults={'page': 1}, endpoint='foo'), - r.Rule('/foo/', endpoint='foo') - ]) - adapter = map.bind('example.org', '/') - - assert adapter.match('/foo/') == ('foo', {'page': 1}) - self.assert_raises(r.RequestRedirect, lambda: adapter.match('/foo/1')) - assert adapter.match('/foo/2') == ('foo', {'page': 2}) - assert adapter.build('foo', {}) == '/foo/' - assert adapter.build('foo', {'page': 1}) == '/foo/' - assert adapter.build('foo', {'page': 2}) == '/foo/2' - - def test_greedy(self): - map = r.Map([ - r.Rule('/foo', endpoint='foo'), - r.Rule('/', endpoint='bar'), - r.Rule('//', endpoint='bar') - ]) - adapter = map.bind('example.org', '/') - - assert adapter.match('/foo') == ('foo', {}) - assert adapter.match('/blub') == ('bar', {'bar': 'blub'}) - assert adapter.match('/he/he') == ('bar', {'bar': 'he', 'blub': 'he'}) - - assert adapter.build('foo', {}) == '/foo' - assert adapter.build('bar', {'bar': 'blub'}) == '/blub' - assert adapter.build('bar', {'bar': 'blub', 'blub': 'bar'}) == '/blub/bar' - - def test_path(self): - map = r.Map([ - r.Rule('/', defaults={'name': 'FrontPage'}, endpoint='page'), - r.Rule('/Special', endpoint='special'), - r.Rule('/', endpoint='year'), - r.Rule('/', endpoint='page'), - r.Rule('//edit', endpoint='editpage'), - r.Rule('//silly/', endpoint='sillypage'), - r.Rule('//silly//edit', endpoint='editsillypage'), - r.Rule('/Talk:', endpoint='talk'), - r.Rule('/User:', endpoint='user'), - r.Rule('/User:/', endpoint='userpage'), - r.Rule('/Files/', endpoint='files'), - ]) - adapter = map.bind('example.org', '/') - - assert adapter.match('/') == ('page', {'name':'FrontPage'}) - 
self.assert_raises(r.RequestRedirect, lambda: adapter.match('/FrontPage')) - assert adapter.match('/Special') == ('special', {}) - assert adapter.match('/2007') == ('year', {'year':2007}) - assert adapter.match('/Some/Page') == ('page', {'name':'Some/Page'}) - assert adapter.match('/Some/Page/edit') == ('editpage', {'name':'Some/Page'}) - assert adapter.match('/Foo/silly/bar') == ('sillypage', {'name':'Foo', 'name2':'bar'}) - assert adapter.match('/Foo/silly/bar/edit') == ('editsillypage', {'name':'Foo', 'name2':'bar'}) - assert adapter.match('/Talk:Foo/Bar') == ('talk', {'name':'Foo/Bar'}) - assert adapter.match('/User:thomas') == ('user', {'username':'thomas'}) - assert adapter.match('/User:thomas/projects/werkzeug') == \ - ('userpage', {'username':'thomas', 'name':'projects/werkzeug'}) - assert adapter.match('/Files/downloads/werkzeug/0.2.zip') == \ - ('files', {'file':'downloads/werkzeug/0.2.zip'}) - - def test_dispatch(self): - env = create_environ('/') - map = r.Map([ - r.Rule('/', endpoint='root'), - r.Rule('/foo/', endpoint='foo') - ]) - adapter = map.bind_to_environ(env) - - raise_this = None - def view_func(endpoint, values): - if raise_this is not None: - raise raise_this - return Response(repr((endpoint, values))) - dispatch = lambda p, q=False: Response.force_type(adapter.dispatch(view_func, p, - catch_http_exceptions=q), env) - - assert dispatch('/').data == b"('root', {})" - assert dispatch('/foo').status_code == 301 - raise_this = r.NotFound() - self.assert_raises(r.NotFound, lambda: dispatch('/bar')) - assert dispatch('/bar', True).status_code == 404 - - def test_http_host_before_server_name(self): - env = { - 'HTTP_HOST': 'wiki.example.com', - 'SERVER_NAME': 'web0.example.com', - 'SERVER_PORT': '80', - 'SCRIPT_NAME': '', - 'PATH_INFO': '', - 'REQUEST_METHOD': 'GET', - 'wsgi.url_scheme': 'http' - } - map = r.Map([r.Rule('/', endpoint='index', subdomain='wiki')]) - adapter = map.bind_to_environ(env, server_name='example.com') - assert 
adapter.match('/') == ('index', {}) - assert adapter.build('index', force_external=True) == 'http://wiki.example.com/' - assert adapter.build('index') == '/' - - env['HTTP_HOST'] = 'admin.example.com' - adapter = map.bind_to_environ(env, server_name='example.com') - assert adapter.build('index') == 'http://wiki.example.com/' - - def test_adapter_url_parameter_sorting(self): - map = r.Map([r.Rule('/', endpoint='index')], sort_parameters=True, - sort_key=lambda x: x[1]) - adapter = map.bind('localhost', '/') - assert adapter.build('index', {'x': 20, 'y': 10, 'z': 30}, - force_external=True) == 'http://localhost/?y=10&x=20&z=30' - - def test_request_direct_charset_bug(self): - map = r.Map([r.Rule(u'/öäü/')]) - adapter = map.bind('localhost', '/') - try: - adapter.match(u'/öäü') - except r.RequestRedirect as e: - assert e.new_url == 'http://localhost/%C3%B6%C3%A4%C3%BC/' - else: - self.fail('expected request redirect exception') - - def test_request_redirect_default(self): - map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}), - r.Rule(u'/foo/')]) - adapter = map.bind('localhost', '/') - try: - adapter.match(u'/foo/42') - except r.RequestRedirect as e: - assert e.new_url == 'http://localhost/foo' - else: - self.fail('expected request redirect exception') - - def test_request_redirect_default_subdomain(self): - map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}, subdomain='test'), - r.Rule(u'/foo/', subdomain='other')]) - adapter = map.bind('localhost', '/', subdomain='other') - try: - adapter.match(u'/foo/42') - except r.RequestRedirect as e: - assert e.new_url == 'http://test.localhost/foo' - else: - self.fail('expected request redirect exception') - - def test_adapter_match_return_rule(self): - rule = r.Rule('/foo/', endpoint='foo') - map = r.Map([rule]) - adapter = map.bind('localhost', '/') - assert adapter.match('/foo/', return_rule=True) == (rule, {}) - - def test_server_name_interpolation(self): - server_name = 'example.invalid' - map = r.Map([r.Rule('/', 
endpoint='index'), - r.Rule('/', endpoint='alt', subdomain='alt')]) - - env = create_environ('/', 'http://%s/' % server_name) - adapter = map.bind_to_environ(env, server_name=server_name) - assert adapter.match() == ('index', {}) - - env = create_environ('/', 'http://alt.%s/' % server_name) - adapter = map.bind_to_environ(env, server_name=server_name) - assert adapter.match() == ('alt', {}) - - env = create_environ('/', 'http://%s/' % server_name) - adapter = map.bind_to_environ(env, server_name='foo') - assert adapter.subdomain == '' - - def test_rule_emptying(self): - rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'], - False, 'x', True, None) - rule2 = rule.empty() - assert rule.__dict__ == rule2.__dict__ - rule.methods.add('GET') - assert rule.__dict__ != rule2.__dict__ - rule.methods.discard('GET') - rule.defaults['meh'] = 'aha' - assert rule.__dict__ != rule2.__dict__ - - def test_rule_templates(self): - testcase = r.RuleTemplate( - [ r.Submount('/test/$app', - [ r.Rule('/foo/', endpoint='handle_foo') - , r.Rule('/bar/', endpoint='handle_bar') - , r.Rule('/baz/', endpoint='handle_baz') - ]), - r.EndpointPrefix('${app}', - [ r.Rule('/${app}-blah', endpoint='bar') - , r.Rule('/${app}-meh', endpoint='baz') - ]), - r.Subdomain('$app', - [ r.Rule('/blah', endpoint='x_bar') - , r.Rule('/meh', endpoint='x_baz') - ]) - ]) - - url_map = r.Map( - [ testcase(app='test1') - , testcase(app='test2') - , testcase(app='test3') - , testcase(app='test4') - ]) - - out = sorted([(x.rule, x.subdomain, x.endpoint) - for x in url_map.iter_rules()]) - - assert out == ([ - ('/blah', 'test1', 'x_bar'), - ('/blah', 'test2', 'x_bar'), - ('/blah', 'test3', 'x_bar'), - ('/blah', 'test4', 'x_bar'), - ('/meh', 'test1', 'x_baz'), - ('/meh', 'test2', 'x_baz'), - ('/meh', 'test3', 'x_baz'), - ('/meh', 'test4', 'x_baz'), - ('/test/test1/bar/', '', 'handle_bar'), - ('/test/test1/baz/', '', 'handle_baz'), - ('/test/test1/foo/', '', 'handle_foo'), - ('/test/test2/bar/', '', 'handle_bar'), - 
('/test/test2/baz/', '', 'handle_baz'), - ('/test/test2/foo/', '', 'handle_foo'), - ('/test/test3/bar/', '', 'handle_bar'), - ('/test/test3/baz/', '', 'handle_baz'), - ('/test/test3/foo/', '', 'handle_foo'), - ('/test/test4/bar/', '', 'handle_bar'), - ('/test/test4/baz/', '', 'handle_baz'), - ('/test/test4/foo/', '', 'handle_foo'), - ('/test1-blah', '', 'test1bar'), - ('/test1-meh', '', 'test1baz'), - ('/test2-blah', '', 'test2bar'), - ('/test2-meh', '', 'test2baz'), - ('/test3-blah', '', 'test3bar'), - ('/test3-meh', '', 'test3baz'), - ('/test4-blah', '', 'test4bar'), - ('/test4-meh', '', 'test4baz') - ]) - - def test_non_string_parts(self): - m = r.Map([ - r.Rule('/', endpoint='foo') - ]) - a = m.bind('example.com') - self.assert_equal(a.build('foo', {'foo': 42}), '/42') - - def test_complex_routing_rules(self): - m = r.Map([ - r.Rule('/', endpoint='index'), - r.Rule('/', endpoint='an_int'), - r.Rule('/', endpoint='a_string'), - r.Rule('/foo/', endpoint='nested'), - r.Rule('/foobar/', endpoint='nestedbar'), - r.Rule('/foo//', endpoint='nested_show'), - r.Rule('/foo//edit', endpoint='nested_edit'), - r.Rule('/users/', endpoint='users', defaults={'page': 1}), - r.Rule('/users/page/', endpoint='users'), - r.Rule('/foox', endpoint='foox'), - r.Rule('//', endpoint='barx_path_path') - ]) - a = m.bind('example.com') - - assert a.match('/') == ('index', {}) - assert a.match('/42') == ('an_int', {'blub': 42}) - assert a.match('/blub') == ('a_string', {'blub': 'blub'}) - assert a.match('/foo/') == ('nested', {}) - assert a.match('/foobar/') == ('nestedbar', {}) - assert a.match('/foo/1/2/3/') == ('nested_show', {'testing': '1/2/3'}) - assert a.match('/foo/1/2/3/edit') == ('nested_edit', {'testing': '1/2/3'}) - assert a.match('/users/') == ('users', {'page': 1}) - assert a.match('/users/page/2') == ('users', {'page': 2}) - assert a.match('/foox') == ('foox', {}) - assert a.match('/1/2/3') == ('barx_path_path', {'bar': '1', 'blub': '2/3'}) - - assert a.build('index') == '/' 
- assert a.build('an_int', {'blub': 42}) == '/42' - assert a.build('a_string', {'blub': 'test'}) == '/test' - assert a.build('nested') == '/foo/' - assert a.build('nestedbar') == '/foobar/' - assert a.build('nested_show', {'testing': '1/2/3'}) == '/foo/1/2/3/' - assert a.build('nested_edit', {'testing': '1/2/3'}) == '/foo/1/2/3/edit' - assert a.build('users', {'page': 1}) == '/users/' - assert a.build('users', {'page': 2}) == '/users/page/2' - assert a.build('foox') == '/foox' - assert a.build('barx_path_path', {'bar': '1', 'blub': '2/3'}) == '/1/2/3' - - def test_default_converters(self): - class MyMap(r.Map): - default_converters = r.Map.default_converters.copy() - default_converters['foo'] = r.UnicodeConverter - assert isinstance(r.Map.default_converters, ImmutableDict) - m = MyMap([ - r.Rule('/a/', endpoint='a'), - r.Rule('/b/', endpoint='b'), - r.Rule('/c/', endpoint='c') - ], converters={'bar': r.UnicodeConverter}) - a = m.bind('example.org', '/') - assert a.match('/a/1') == ('a', {'a': '1'}) - assert a.match('/b/2') == ('b', {'b': '2'}) - assert a.match('/c/3') == ('c', {'c': '3'}) - assert 'foo' not in r.Map.default_converters - - def test_build_append_unknown(self): - map = r.Map([ - r.Rule('/bar/', endpoint='barf') - ]) - adapter = map.bind('example.org', '/', subdomain='blah') - assert adapter.build('barf', {'bazf': 0.815, 'bif' : 1.0}) == \ - 'http://example.org/bar/0.815?bif=1.0' - assert adapter.build('barf', {'bazf': 0.815, 'bif' : 1.0}, - append_unknown=False) == 'http://example.org/bar/0.815' - - def test_method_fallback(self): - map = r.Map([ - r.Rule('/', endpoint='index', methods=['GET']), - r.Rule('/', endpoint='hello_name', methods=['GET']), - r.Rule('/select', endpoint='hello_select', methods=['POST']), - r.Rule('/search_get', endpoint='search', methods=['GET']), - r.Rule('/search_post', endpoint='search', methods=['POST']) - ]) - adapter = map.bind('example.com') - assert adapter.build('index') == '/' - assert adapter.build('index', 
method='GET') == '/' - assert adapter.build('hello_name', {'name': 'foo'}) == '/foo' - assert adapter.build('hello_select') == '/select' - assert adapter.build('hello_select', method='POST') == '/select' - assert adapter.build('search') == '/search_get' - assert adapter.build('search', method='GET') == '/search_get' - assert adapter.build('search', method='POST') == '/search_post' - - def test_implicit_head(self): - url_map = r.Map([ - r.Rule('/get', methods=['GET'], endpoint='a'), - r.Rule('/post', methods=['POST'], endpoint='b') - ]) - adapter = url_map.bind('example.org') - assert adapter.match('/get', method='HEAD') == ('a', {}) - self.assert_raises(r.MethodNotAllowed, adapter.match, - '/post', method='HEAD') - - def test_protocol_joining_bug(self): - m = r.Map([r.Rule('/', endpoint='x')]) - a = m.bind('example.org') - assert a.build('x', {'foo': 'x:y'}) == '/x:y' - assert a.build('x', {'foo': 'x:y'}, force_external=True) == \ - 'http://example.org/x:y' - - def test_allowed_methods_querying(self): - m = r.Map([r.Rule('/', methods=['GET', 'HEAD']), - r.Rule('/foo', methods=['POST'])]) - a = m.bind('example.org') - assert sorted(a.allowed_methods('/foo')) == ['GET', 'HEAD', 'POST'] - - def test_external_building_with_port(self): - map = r.Map([ - r.Rule('/', endpoint='index'), - ]) - adapter = map.bind('example.org:5000', '/') - built_url = adapter.build('index', {}, force_external=True) - assert built_url == 'http://example.org:5000/', built_url - - def test_external_building_with_port_bind_to_environ(self): - map = r.Map([ - r.Rule('/', endpoint='index'), - ]) - adapter = map.bind_to_environ( - create_environ('/', 'http://example.org:5000/'), - server_name="example.org:5000" - ) - built_url = adapter.build('index', {}, force_external=True) - assert built_url == 'http://example.org:5000/', built_url - - def test_external_building_with_port_bind_to_environ_wrong_servername(self): - map = r.Map([ - r.Rule('/', endpoint='index'), - ]) - environ = 
create_environ('/', 'http://example.org:5000/') - adapter = map.bind_to_environ(environ, server_name="example.org") - assert adapter.subdomain == '' - - def test_converter_parser(self): - args, kwargs = r.parse_converter_args(u'test, a=1, b=3.0') - - assert args == ('test',) - assert kwargs == {'a': 1, 'b': 3.0 } - - args, kwargs = r.parse_converter_args('') - assert not args and not kwargs - - args, kwargs = r.parse_converter_args('a, b, c,') - assert args == ('a', 'b', 'c') - assert not kwargs - - args, kwargs = r.parse_converter_args('True, False, None') - assert args == (True, False, None) - - args, kwargs = r.parse_converter_args('"foo", u"bar"') - assert args == ('foo', 'bar') - - def test_alias_redirects(self): - m = r.Map([ - r.Rule('/', endpoint='index'), - r.Rule('/index.html', endpoint='index', alias=True), - r.Rule('/users/', defaults={'page': 1}, endpoint='users'), - r.Rule('/users/index.html', defaults={'page': 1}, alias=True, - endpoint='users'), - r.Rule('/users/page/', endpoint='users'), - r.Rule('/users/page-.html', alias=True, endpoint='users'), - ]) - a = m.bind('example.com') - - def ensure_redirect(path, new_url, args=None): - try: - a.match(path, query_args=args) - except r.RequestRedirect as e: - assert e.new_url == 'http://example.com' + new_url - else: - assert False, 'expected redirect' - - ensure_redirect('/index.html', '/') - ensure_redirect('/users/index.html', '/users/') - ensure_redirect('/users/page-2.html', '/users/page/2') - ensure_redirect('/users/page-1.html', '/users/') - ensure_redirect('/users/page-1.html', '/users/?foo=bar', {'foo': 'bar'}) - - assert a.build('index') == '/' - assert a.build('users', {'page': 1}) == '/users/' - assert a.build('users', {'page': 2}) == '/users/page/2' - - def test_double_defaults(self): - for prefix in '', '/aaa': - m = r.Map([ - r.Rule(prefix + '/', defaults={'foo': 1, 'bar': False}, endpoint='x'), - r.Rule(prefix + '/', defaults={'bar': False}, endpoint='x'), - r.Rule(prefix + '/bar/', 
defaults={'foo': 1, 'bar': True}, endpoint='x'), - r.Rule(prefix + '/bar/', defaults={'bar': True}, endpoint='x') - ]) - a = m.bind('example.com') - - assert a.match(prefix + '/') == ('x', {'foo': 1, 'bar': False}) - assert a.match(prefix + '/2') == ('x', {'foo': 2, 'bar': False}) - assert a.match(prefix + '/bar/') == ('x', {'foo': 1, 'bar': True}) - assert a.match(prefix + '/bar/2') == ('x', {'foo': 2, 'bar': True}) - - assert a.build('x', {'foo': 1, 'bar': False}) == prefix + '/' - assert a.build('x', {'foo': 2, 'bar': False}) == prefix + '/2' - assert a.build('x', {'bar': False}) == prefix + '/' - assert a.build('x', {'foo': 1, 'bar': True}) == prefix + '/bar/' - assert a.build('x', {'foo': 2, 'bar': True}) == prefix + '/bar/2' - assert a.build('x', {'bar': True}) == prefix + '/bar/' - - def test_host_matching(self): - m = r.Map([ - r.Rule('/', endpoint='index', host='www.'), - r.Rule('/', endpoint='files', host='files.'), - r.Rule('/foo/', defaults={'page': 1}, host='www.', endpoint='x'), - r.Rule('/', host='files.', endpoint='x') - ], host_matching=True) - - a = m.bind('www.example.com') - assert a.match('/') == ('index', {'domain': 'example.com'}) - assert a.match('/foo/') == ('x', {'domain': 'example.com', 'page': 1}) - try: - a.match('/foo') - except r.RequestRedirect as e: - assert e.new_url == 'http://www.example.com/foo/' - else: - assert False, 'expected redirect' - - a = m.bind('files.example.com') - assert a.match('/') == ('files', {'domain': 'example.com'}) - assert a.match('/2') == ('x', {'domain': 'example.com', 'page': 2}) - try: - a.match('/1') - except r.RequestRedirect as e: - assert e.new_url == 'http://www.example.com/foo/' - else: - assert False, 'expected redirect' - - def test_server_name_casing(self): - m = r.Map([ - r.Rule('/', endpoint='index', subdomain='foo') - ]) - - env = create_environ() - env['SERVER_NAME'] = env['HTTP_HOST'] = 'FOO.EXAMPLE.COM' - a = m.bind_to_environ(env, server_name='example.com') - assert a.match('/') == 
('index', {}) - - env = create_environ() - env['SERVER_NAME'] = '127.0.0.1' - env['SERVER_PORT'] = '5000' - del env['HTTP_HOST'] - a = m.bind_to_environ(env, server_name='example.com') - try: - a.match() - except r.NotFound: - pass - else: - assert False, 'Expected not found exception' - - def test_redirect_request_exception_code(self): - exc = r.RequestRedirect('http://www.google.com/') - exc.code = 307 - env = create_environ() - self.assert_strict_equal(exc.get_response(env).status_code, exc.code) - - def test_redirect_path_quoting(self): - url_map = r.Map([ - r.Rule('/', defaults={'page': 1}, endpoint='category'), - r.Rule('//page/', endpoint='category') - ]) - - adapter = url_map.bind('example.com') - try: - adapter.match('/foo bar/page/1') - except r.RequestRedirect as e: - response = e.get_response({}) - self.assert_strict_equal(response.headers['location'], - u'http://example.com/foo%20bar') - else: - self.fail('Expected redirect') - - def test_unicode_rules(self): - m = r.Map([ - r.Rule(u'/войти/', endpoint='enter'), - r.Rule(u'/foo+bar/', endpoint='foobar') - ]) - a = m.bind(u'☃.example.com') - try: - a.match(u'/войти') - except r.RequestRedirect as e: - self.assert_strict_equal(e.new_url, 'http://xn--n3h.example.com/' - '%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') - endpoint, values = a.match(u'/войти/') - self.assert_strict_equal(endpoint, 'enter') - self.assert_strict_equal(values, {}) - - try: - a.match(u'/foo+bar') - except r.RequestRedirect as e: - self.assert_strict_equal(e.new_url, 'http://xn--n3h.example.com/' - 'foo+bar/') - endpoint, values = a.match(u'/foo+bar/') - self.assert_strict_equal(endpoint, 'foobar') - self.assert_strict_equal(values, {}) - - url = a.build('enter', {}, force_external=True) - self.assert_strict_equal(url, 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') - - url = a.build('foobar', {}, force_external=True) - self.assert_strict_equal(url, 'http://xn--n3h.example.com/foo+bar/') - - def test_map_repr(self): - m = r.Map([ 
- r.Rule(u'/wat', endpoint='enter'), - r.Rule(u'/woop', endpoint='foobar') - ]) - rv = repr(m) - self.assert_strict_equal(rv, - "Map([<Rule '/woop' -> foobar>, <Rule '/wat' -> enter>])") - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(RoutingTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/security.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/security.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/security.py 2014-06-06 19:19:28.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/security.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,105 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.security - ~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Tests the security helpers. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" -import os -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.security import check_password_hash, generate_password_hash, \ - safe_join, pbkdf2_hex, safe_str_cmp - - -class SecurityTestCase(WerkzeugTestCase): - - def test_safe_str_cmp(self): - assert safe_str_cmp('a', 'a') is True - assert safe_str_cmp(b'a', u'a') is True - assert safe_str_cmp('a', 'b') is False - assert safe_str_cmp(b'aaa', 'aa') is False - assert safe_str_cmp(b'aaa', 'bbb') is False - assert safe_str_cmp(b'aaa', u'aaa') is True - - def test_password_hashing(self): - hash0 = generate_password_hash('default') - assert check_password_hash(hash0, 'default') - assert hash0.startswith('pbkdf2:sha1:1000$') - - hash1 = generate_password_hash('default', 'sha1') - hash2 = generate_password_hash(u'default', method='sha1') - assert hash1 != hash2 - assert check_password_hash(hash1, 'default') - assert check_password_hash(hash2, 'default') - assert hash1.startswith('sha1$') - assert hash2.startswith('sha1$') - - fakehash = generate_password_hash('default', method='plain') - assert fakehash == 'plain$$default' - assert check_password_hash(fakehash,
'default') - - mhash = generate_password_hash(u'default', method='md5') - assert mhash.startswith('md5$') - assert check_password_hash(mhash, 'default') - - legacy = 'md5$$c21f969b5f03d33d43e04f8f136e7682' - assert check_password_hash(legacy, 'default') - - legacy = u'md5$$c21f969b5f03d33d43e04f8f136e7682' - assert check_password_hash(legacy, 'default') - - def test_safe_join(self): - assert safe_join('foo', 'bar/baz') == os.path.join('foo', 'bar/baz') - assert safe_join('foo', '../bar/baz') is None - if os.name == 'nt': - assert safe_join('foo', 'foo\\bar') is None - - def test_pbkdf2(self): - def check(data, salt, iterations, keylen, expected): - rv = pbkdf2_hex(data, salt, iterations, keylen) - self.assert_equal(rv, expected) - - # From RFC 6070 - check('password', 'salt', 1, None, - '0c60c80f961f0e71f3a9b524af6012062fe037a6') - check('password', 'salt', 1, 20, - '0c60c80f961f0e71f3a9b524af6012062fe037a6') - check('password', 'salt', 2, 20, - 'ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957') - check('password', 'salt', 4096, 20, - '4b007901b765489abead49d926f721d065a429c1') - check('passwordPASSWORDpassword', 'saltSALTsaltSALTsaltSALTsaltSALTsalt', - 4096, 25, '3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038') - check('pass\x00word', 'sa\x00lt', 4096, 16, - '56fa6aa75548099dcc37d7f03425e0c3') - # This one is from the RFC but it just takes for ages - ##check('password', 'salt', 16777216, 20, - ## 'eefe3d61cd4da4e4e9945b3d6ba2158c2634e984') - - # From Crypt-PBKDF2 - check('password', 'ATHENA.MIT.EDUraeburn', 1, 16, - 'cdedb5281bb2f801565a1122b2563515') - check('password', 'ATHENA.MIT.EDUraeburn', 1, 32, - 'cdedb5281bb2f801565a1122b25635150ad1f7a04bb9f3a333ecc0e2e1f70837') - check('password', 'ATHENA.MIT.EDUraeburn', 2, 16, - '01dbee7f4a9e243e988b62c73cda935d') - check('password', 'ATHENA.MIT.EDUraeburn', 2, 32, - '01dbee7f4a9e243e988b62c73cda935da05378b93244ec8f48a99e61ad799d86') - check('password', 'ATHENA.MIT.EDUraeburn', 1200, 32, - 
'5c08eb61fdf71e4e4ec3cf6ba1f5512ba7e52ddbc5e5142f708a31e2e62b1e13') - check('X' * 64, 'pass phrase equals block size', 1200, 32, - '139c30c0966bc32ba55fdbf212530ac9c5ec59f1a452f5cc9ad940fea0598ed1') - check('X' * 65, 'pass phrase exceeds block size', 1200, 32, - '9ccad6d468770cd51b10e6a68721be611a8b4d282601db3b36be9246915ec82a') - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(SecurityTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/serving.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/serving.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/serving.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/serving.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,117 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.serving - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Added serving tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" -import sys -import time -try: - import httplib -except ImportError: - from http import client as httplib -try: - from urllib2 import urlopen, HTTPError -except ImportError: # pragma: no cover - from urllib.request import urlopen - from urllib.error import HTTPError - -import unittest -from functools import update_wrapper - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import __version__ as version, serving -from werkzeug.testapp import test_app -from werkzeug._compat import StringIO -from threading import Thread - - - -real_make_server = serving.make_server - - -def silencestderr(f): - def new_func(*args, **kwargs): - old_stderr = sys.stderr - sys.stderr = StringIO() - try: - return f(*args, **kwargs) - finally: - sys.stderr = old_stderr - return update_wrapper(new_func, f) - - -def run_dev_server(application): - servers = [] - - def tracking_make_server(*args, **kwargs): - srv = real_make_server(*args, **kwargs) - servers.append(srv) - return srv - 
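The RFC 6070 vectors exercised by the deleted `test_pbkdf2` above can be reproduced with the standard library alone; a minimal sketch using stdlib `hashlib.pbkdf2_hmac` rather than werkzeug's `pbkdf2_hex`:

```python
import hashlib

# RFC 6070 test vectors, the same ones the removed test checked via pbkdf2_hex
dk = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 1, dklen=20)
assert dk.hex() == '0c60c80f961f0e71f3a9b524af6012062fe037a6'

dk = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 4096, dklen=20)
assert dk.hex() == '4b007901b765489abead49d926f721d065a429c1'
```

This stdlib function is what the 0.10 changelog refers to with "support for stdlib pbkdf2 hmac if a compatible digest is found".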
serving.make_server = tracking_make_server - try: - t = Thread(target=serving.run_simple, - args=('localhost', 0, application)) - t.setDaemon(True) - t.start() - time.sleep(0.25) - finally: - serving.make_server = real_make_server - if not servers: - return None, None - server, = servers - ip, port = server.socket.getsockname()[:2] - if ':' in ip: - ip = '[%s]' % ip - return server, '%s:%d' % (ip, port) - - -class ServingTestCase(WerkzeugTestCase): - - @silencestderr - def test_serving(self): - server, addr = run_dev_server(test_app) - rv = urlopen('http://%s/?foo=bar&baz=blah' % addr).read() - self.assert_in(b'WSGI Information', rv) - self.assert_in(b'foo=bar&baz=blah', rv) - self.assert_in(b'Werkzeug/' + version.encode('ascii'), rv) - - @silencestderr - def test_broken_app(self): - def broken_app(environ, start_response): - 1 // 0 - server, addr = run_dev_server(broken_app) - try: - urlopen('http://%s/?foo=bar&baz=blah' % addr).read() - except HTTPError as e: - # In Python3 a 500 response causes an exception - rv = e.read() - assert b'Internal Server Error' in rv - else: - assert False, 'expected internal server error' - - @silencestderr - def test_absolute_requests(self): - def asserting_app(environ, start_response): - assert environ['HTTP_HOST'] == 'surelynotexisting.example.com:1337' - assert environ['PATH_INFO'] == '/index.htm' - assert environ['SERVER_PORT'] == addr.split(':')[1] - start_response('200 OK', [('Content-Type', 'text/html')]) - return [b'YES'] - - server, addr = run_dev_server(asserting_app) - conn = httplib.HTTPConnection(addr) - conn.request('GET', 'http://surelynotexisting.example.com:1337/index.htm') - res = conn.getresponse() - assert res.read() == b'YES' - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(ServingTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/test.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/test.py --- 
python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/test.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/test.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,444 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.test - ~~~~~~~~~~~~~~~~~~~~~~~ - - Tests the testing tools. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" - -from __future__ import with_statement - -import sys -import unittest -from io import BytesIO -from werkzeug._compat import iteritems, to_bytes - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.wrappers import Request, Response, BaseResponse -from werkzeug.test import Client, EnvironBuilder, create_environ, \ - ClientRedirectError, stream_encode_multipart, run_wsgi_app -from werkzeug.utils import redirect -from werkzeug.formparser import parse_form_data -from werkzeug.datastructures import MultiDict, FileStorage - - -def cookie_app(environ, start_response): - """A WSGI application which sets a cookie, and returns as a response any - cookie which exists.
- """ - response = Response(environ.get('HTTP_COOKIE', 'No Cookie'), - mimetype='text/plain') - response.set_cookie('test', 'test') - return response(environ, start_response) - - -def redirect_loop_app(environ, start_response): - response = redirect('http://localhost/some/redirect/') - return response(environ, start_response) - - -def redirect_with_get_app(environ, start_response): - req = Request(environ) - if req.url not in ('http://localhost/', - 'http://localhost/first/request', - 'http://localhost/some/redirect/'): - assert False, 'redirect_demo_app() did not expect URL "%s"' % req.url - if '/some/redirect' not in req.url: - response = redirect('http://localhost/some/redirect/') - else: - response = Response('current url: %s' % req.url) - return response(environ, start_response) - - -def redirect_with_post_app(environ, start_response): - req = Request(environ) - if req.url == 'http://localhost/some/redirect/': - assert req.method == 'GET', 'request should be GET' - assert not req.form, 'request should not have data' - response = Response('current url: %s' % req.url) - else: - response = redirect('http://localhost/some/redirect/') - return response(environ, start_response) - - -def external_redirect_demo_app(environ, start_response): - response = redirect('http://example.com/') - return response(environ, start_response) - - -def external_subdomain_redirect_demo_app(environ, start_response): - if 'test.example.com' in environ['HTTP_HOST']: - response = Response('redirected successfully to subdomain') - else: - response = redirect('http://test.example.com/login') - return response(environ, start_response) - - -def multi_value_post_app(environ, start_response): - req = Request(environ) - assert req.form['field'] == 'val1', req.form['field'] - assert req.form.getlist('field') == ['val1', 'val2'], req.form.getlist('field') - response = Response('ok') - return response(environ, start_response) - - -class TestTestCase(WerkzeugTestCase): - - def 
test_cookie_forging(self): - c = Client(cookie_app) - c.set_cookie('localhost', 'foo', 'bar') - appiter, code, headers = c.open() - self.assert_strict_equal(list(appiter), [b'foo=bar']) - - def test_set_cookie_app(self): - c = Client(cookie_app) - appiter, code, headers = c.open() - self.assert_in('Set-Cookie', dict(headers)) - - def test_cookiejar_stores_cookie(self): - c = Client(cookie_app) - appiter, code, headers = c.open() - self.assert_in('test', c.cookie_jar._cookies['localhost.local']['/']) - - def test_no_initial_cookie(self): - c = Client(cookie_app) - appiter, code, headers = c.open() - self.assert_strict_equal(b''.join(appiter), b'No Cookie') - - def test_resent_cookie(self): - c = Client(cookie_app) - c.open() - appiter, code, headers = c.open() - self.assert_strict_equal(b''.join(appiter), b'test=test') - - def test_disable_cookies(self): - c = Client(cookie_app, use_cookies=False) - c.open() - appiter, code, headers = c.open() - self.assert_strict_equal(b''.join(appiter), b'No Cookie') - - def test_cookie_for_different_path(self): - c = Client(cookie_app) - c.open('/path1') - appiter, code, headers = c.open('/path2') - self.assert_strict_equal(b''.join(appiter), b'test=test') - - def test_environ_builder_basics(self): - b = EnvironBuilder() - self.assert_is_none(b.content_type) - b.method = 'POST' - self.assert_equal(b.content_type, 'application/x-www-form-urlencoded') - b.files.add_file('test', BytesIO(b'test contents'), 'test.txt') - self.assert_equal(b.files['test'].content_type, 'text/plain') - self.assert_equal(b.content_type, 'multipart/form-data') - b.form['test'] = 'normal value' - - req = b.get_request() - b.close() - - self.assert_strict_equal(req.url, u'http://localhost/') - self.assert_strict_equal(req.method, 'POST') - self.assert_strict_equal(req.form['test'], u'normal value') - self.assert_equal(req.files['test'].content_type, 'text/plain') - self.assert_strict_equal(req.files['test'].filename, u'test.txt') - 
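The cookie tests above rely on the test `Client` round-tripping `Set-Cookie` headers between requests. The parsing half can be sketched with the stdlib alone; this standalone snippet is an illustration, not werkzeug's actual cookie jar:

```python
from http.cookies import SimpleCookie

# Parse the header cookie_app emits and re-serialize it for the next request,
# roughly what the Client's cookie jar does between the two open() calls
jar = SimpleCookie()
jar.load('test=test; Path=/')
assert jar['test'].value == 'test'

request_header = '; '.join('%s=%s' % (k, m.value) for k, m in jar.items())
assert request_header == 'test=test'
```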
self.assert_strict_equal(req.files['test'].read(), b'test contents') - - def test_environ_builder_headers(self): - b = EnvironBuilder(environ_base={'HTTP_USER_AGENT': 'Foo/0.1'}, - environ_overrides={'wsgi.version': (1, 1)}) - b.headers['X-Suck-My-Dick'] = 'very well sir' - env = b.get_environ() - self.assert_strict_equal(env['HTTP_USER_AGENT'], 'Foo/0.1') - self.assert_strict_equal(env['HTTP_X_SUCK_MY_DICK'], 'very well sir') - self.assert_strict_equal(env['wsgi.version'], (1, 1)) - - b.headers['User-Agent'] = 'Bar/1.0' - env = b.get_environ() - self.assert_strict_equal(env['HTTP_USER_AGENT'], 'Bar/1.0') - - def test_environ_builder_headers_content_type(self): - b = EnvironBuilder(headers={'Content-Type': 'text/plain'}) - env = b.get_environ() - self.assert_equal(env['CONTENT_TYPE'], 'text/plain') - b = EnvironBuilder(content_type='text/html', - headers={'Content-Type': 'text/plain'}) - env = b.get_environ() - self.assert_equal(env['CONTENT_TYPE'], 'text/html') - - def test_environ_builder_paths(self): - b = EnvironBuilder(path='/foo', base_url='http://example.com/') - self.assert_strict_equal(b.base_url, 'http://example.com/') - self.assert_strict_equal(b.path, '/foo') - self.assert_strict_equal(b.script_root, '') - self.assert_strict_equal(b.host, 'example.com') - - b = EnvironBuilder(path='/foo', base_url='http://example.com/bar') - self.assert_strict_equal(b.base_url, 'http://example.com/bar/') - self.assert_strict_equal(b.path, '/foo') - self.assert_strict_equal(b.script_root, '/bar') - self.assert_strict_equal(b.host, 'example.com') - - b.host = 'localhost' - self.assert_strict_equal(b.base_url, 'http://localhost/bar/') - b.base_url = 'http://localhost:8080/' - self.assert_strict_equal(b.host, 'localhost:8080') - self.assert_strict_equal(b.server_name, 'localhost') - self.assert_strict_equal(b.server_port, 8080) - - b.host = 'foo.invalid' - b.url_scheme = 'https' - b.script_root = '/test' - env = b.get_environ() - self.assert_strict_equal(env['SERVER_NAME'], 
'foo.invalid') - self.assert_strict_equal(env['SERVER_PORT'], '443') - self.assert_strict_equal(env['SCRIPT_NAME'], '/test') - self.assert_strict_equal(env['PATH_INFO'], '/foo') - self.assert_strict_equal(env['HTTP_HOST'], 'foo.invalid') - self.assert_strict_equal(env['wsgi.url_scheme'], 'https') - self.assert_strict_equal(b.base_url, 'https://foo.invalid/test/') - - def test_environ_builder_content_type(self): - builder = EnvironBuilder() - self.assert_is_none(builder.content_type) - builder.method = 'POST' - self.assert_equal(builder.content_type, 'application/x-www-form-urlencoded') - builder.form['foo'] = 'bar' - self.assert_equal(builder.content_type, 'application/x-www-form-urlencoded') - builder.files.add_file('blafasel', BytesIO(b'foo'), 'test.txt') - self.assert_equal(builder.content_type, 'multipart/form-data') - req = builder.get_request() - self.assert_strict_equal(req.form['foo'], u'bar') - self.assert_strict_equal(req.files['blafasel'].read(), b'foo') - - def test_environ_builder_stream_switch(self): - d = MultiDict(dict(foo=u'bar', blub=u'blah', hu=u'hum')) - for use_tempfile in False, True: - stream, length, boundary = stream_encode_multipart( - d, use_tempfile, threshold=150) - self.assert_true(isinstance(stream, BytesIO) != use_tempfile) - - form = parse_form_data({'wsgi.input': stream, 'CONTENT_LENGTH': str(length), - 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % - boundary})[1] - self.assert_strict_equal(form, d) - stream.close() - - def test_environ_builder_unicode_file_mix(self): - for use_tempfile in False, True: - f = FileStorage(BytesIO(u'\N{SNOWMAN}'.encode('utf-8')), - 'snowman.txt') - d = MultiDict(dict(f=f, s=u'\N{SNOWMAN}')) - stream, length, boundary = stream_encode_multipart( - d, use_tempfile, threshold=150) - self.assert_true(isinstance(stream, BytesIO) != use_tempfile) - - _, form, files = parse_form_data({ - 'wsgi.input': stream, - 'CONTENT_LENGTH': str(length), - 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % - 
boundary - }) - self.assert_strict_equal(form['s'], u'\N{SNOWMAN}') - self.assert_strict_equal(files['f'].name, 'f') - self.assert_strict_equal(files['f'].filename, u'snowman.txt') - self.assert_strict_equal(files['f'].read(), - u'\N{SNOWMAN}'.encode('utf-8')) - stream.close() - - def test_create_environ(self): - env = create_environ('/foo?bar=baz', 'http://example.org/') - expected = { - 'wsgi.multiprocess': False, - 'wsgi.version': (1, 0), - 'wsgi.run_once': False, - 'wsgi.errors': sys.stderr, - 'wsgi.multithread': False, - 'wsgi.url_scheme': 'http', - 'SCRIPT_NAME': '', - 'CONTENT_TYPE': '', - 'CONTENT_LENGTH': '0', - 'SERVER_NAME': 'example.org', - 'REQUEST_METHOD': 'GET', - 'HTTP_HOST': 'example.org', - 'PATH_INFO': '/foo', - 'SERVER_PORT': '80', - 'SERVER_PROTOCOL': 'HTTP/1.1', - 'QUERY_STRING': 'bar=baz' - } - for key, value in iteritems(expected): - self.assert_equal(env[key], value) - self.assert_strict_equal(env['wsgi.input'].read(0), b'') - self.assert_strict_equal(create_environ('/foo', 'http://example.com/')['SCRIPT_NAME'], '') - - def test_file_closing(self): - closed = [] - class SpecialInput(object): - def read(self): - return '' - def close(self): - closed.append(self) - - env = create_environ(data={'foo': SpecialInput()}) - self.assert_strict_equal(len(closed), 1) - builder = EnvironBuilder() - builder.files.add_file('blah', SpecialInput()) - builder.close() - self.assert_strict_equal(len(closed), 2) - - def test_follow_redirect(self): - env = create_environ('/', base_url='http://localhost') - c = Client(redirect_with_get_app) - appiter, code, headers = c.open(environ_overrides=env, follow_redirects=True) - self.assert_strict_equal(code, '200 OK') - self.assert_strict_equal(b''.join(appiter), b'current url: http://localhost/some/redirect/') - - # Test that the :cls:`Client` is aware of user defined response wrappers - c = Client(redirect_with_get_app, response_wrapper=BaseResponse) - resp = c.get('/', follow_redirects=True) - 
self.assert_strict_equal(resp.status_code, 200) - self.assert_strict_equal(resp.data, b'current url: http://localhost/some/redirect/') - - # test with URL other than '/' to make sure redirected URL's are correct - c = Client(redirect_with_get_app, response_wrapper=BaseResponse) - resp = c.get('/first/request', follow_redirects=True) - self.assert_strict_equal(resp.status_code, 200) - self.assert_strict_equal(resp.data, b'current url: http://localhost/some/redirect/') - - def test_follow_external_redirect(self): - env = create_environ('/', base_url='http://localhost') - c = Client(external_redirect_demo_app) - self.assert_raises(RuntimeError, lambda: - c.get(environ_overrides=env, follow_redirects=True)) - - def test_follow_external_redirect_on_same_subdomain(self): - env = create_environ('/', base_url='http://example.com') - c = Client(external_subdomain_redirect_demo_app, allow_subdomain_redirects=True) - c.get(environ_overrides=env, follow_redirects=True) - - # check that this does not work for real external domains - env = create_environ('/', base_url='http://localhost') - self.assert_raises(RuntimeError, lambda: - c.get(environ_overrides=env, follow_redirects=True)) - - # check that subdomain redirects fail if no `allow_subdomain_redirects` is applied - c = Client(external_subdomain_redirect_demo_app) - self.assert_raises(RuntimeError, lambda: - c.get(environ_overrides=env, follow_redirects=True)) - - def test_follow_redirect_loop(self): - c = Client(redirect_loop_app, response_wrapper=BaseResponse) - with self.assert_raises(ClientRedirectError): - resp = c.get('/', follow_redirects=True) - - def test_follow_redirect_with_post(self): - c = Client(redirect_with_post_app, response_wrapper=BaseResponse) - resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42') - self.assert_strict_equal(resp.status_code, 200) - self.assert_strict_equal(resp.data, b'current url: http://localhost/some/redirect/') - - def test_path_info_script_name_unquoting(self): - 
def test_app(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/plain')]) - return [environ['PATH_INFO'] + '\n' + environ['SCRIPT_NAME']] - c = Client(test_app, response_wrapper=BaseResponse) - resp = c.get('/foo%40bar') - self.assert_strict_equal(resp.data, b'/foo@bar\n') - c = Client(test_app, response_wrapper=BaseResponse) - resp = c.get('/foo%40bar', 'http://localhost/bar%40baz') - self.assert_strict_equal(resp.data, b'/foo@bar\n/bar@baz') - - def test_multi_value_submit(self): - c = Client(multi_value_post_app, response_wrapper=BaseResponse) - data = { - 'field': ['val1','val2'] - } - resp = c.post('/', data=data) - self.assert_strict_equal(resp.status_code, 200) - c = Client(multi_value_post_app, response_wrapper=BaseResponse) - data = MultiDict({ - 'field': ['val1', 'val2'] - }) - resp = c.post('/', data=data) - self.assert_strict_equal(resp.status_code, 200) - - def test_iri_support(self): - b = EnvironBuilder(u'/föö-bar', base_url=u'http://☃.net/') - self.assert_strict_equal(b.path, '/f%C3%B6%C3%B6-bar') - self.assert_strict_equal(b.base_url, 'http://xn--n3h.net/') - - def test_run_wsgi_apps(self): - def simple_app(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/html')]) - return ['Hello World!'] - app_iter, status, headers = run_wsgi_app(simple_app, {}) - self.assert_strict_equal(status, '200 OK') - self.assert_strict_equal(list(headers), [('Content-Type', 'text/html')]) - self.assert_strict_equal(app_iter, ['Hello World!']) - - def yielding_app(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/html')]) - yield 'Hello ' - yield 'World!' 
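The WSGI apps defined above are driven through `run_wsgi_app`; the mechanics can be sketched without werkzeug, since a WSGI call is just the app, an environ dict, and a `start_response` callable (names here are illustrative, loosely mirroring `werkzeug.test.run_wsgi_app`):

```python
import io
import sys

def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello World!']

def run_wsgi(app, environ):
    # Capture what the app passes to start_response, then invoke it
    captured = {}
    def start_response(status, headers, exc_info=None):
        captured['status'] = status
        captured['headers'] = headers
    app_iter = app(environ, start_response)
    return app_iter, captured['status'], captured['headers']

environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/', 'SERVER_NAME': 'localhost',
           'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1',
           'wsgi.input': io.BytesIO(), 'wsgi.errors': sys.stderr,
           'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0)}
app_iter, status, headers = run_wsgi(simple_app, environ)
assert status == '200 OK'
assert b''.join(app_iter) == b'Hello World!'
```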
- app_iter, status, headers = run_wsgi_app(yielding_app, {}) - self.assert_strict_equal(status, '200 OK') - self.assert_strict_equal(list(headers), [('Content-Type', 'text/html')]) - self.assert_strict_equal(list(app_iter), ['Hello ', 'World!']) - - def test_multiple_cookies(self): - @Request.application - def test_app(request): - response = Response(repr(sorted(request.cookies.items()))) - response.set_cookie(u'test1', b'foo') - response.set_cookie(u'test2', b'bar') - return response - client = Client(test_app, Response) - resp = client.get('/') - self.assert_strict_equal(resp.data, b'[]') - resp = client.get('/') - self.assert_strict_equal(resp.data, - to_bytes(repr([('test1', u'foo'), ('test2', u'bar')]), 'ascii')) - - def test_correct_open_invocation_on_redirect(self): - class MyClient(Client): - counter = 0 - def open(self, *args, **kwargs): - self.counter += 1 - env = kwargs.setdefault('environ_overrides', {}) - env['werkzeug._foo'] = self.counter - return Client.open(self, *args, **kwargs) - - @Request.application - def test_app(request): - return Response(str(request.environ['werkzeug._foo'])) - - c = MyClient(test_app, response_wrapper=Response) - self.assert_strict_equal(c.get('/').data, b'1') - self.assert_strict_equal(c.get('/').data, b'2') - self.assert_strict_equal(c.get('/').data, b'3') - - def test_correct_encoding(self): - req = Request.from_values(u'/\N{SNOWMAN}', u'http://example.com/foo') - self.assert_strict_equal(req.script_root, u'/foo') - self.assert_strict_equal(req.path, u'/\N{SNOWMAN}') - - def test_full_url_requests_with_args(self): - base = 'http://example.com/' - - @Request.application - def test_app(request): - return Response(request.args['x']) - client = Client(test_app, Response) - resp = client.get('/?x=42', base) - self.assert_strict_equal(resp.data, b'42') - resp = client.get('http://www.example.com/?x=23', base) - self.assert_strict_equal(resp.data, b'23') - - -def suite(): - suite = unittest.TestSuite() - 
suite.addTest(unittest.makeSuite(TestTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/urls.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/urls.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/urls.py 2014-06-07 10:29:03.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/urls.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,322 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.urls - ~~~~~~~~~~~~~~~~~~~~~~~ - - URL helper tests. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" -import unittest - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug.datastructures import OrderedMultiDict -from werkzeug import urls -from werkzeug._compat import text_type, NativeStringIO, BytesIO - - -class URLsTestCase(WerkzeugTestCase): - - def test_replace(self): - url = urls.url_parse('http://de.wikipedia.org/wiki/Troll') - self.assert_strict_equal(url.replace(query='foo=bar'), - urls.url_parse('http://de.wikipedia.org/wiki/Troll?foo=bar')) - self.assert_strict_equal(url.replace(scheme='https'), - urls.url_parse('https://de.wikipedia.org/wiki/Troll')) - - def test_quoting(self): - self.assert_strict_equal(urls.url_quote(u'\xf6\xe4\xfc'), '%C3%B6%C3%A4%C3%BC') - self.assert_strict_equal(urls.url_unquote(urls.url_quote(u'#%="\xf6')), u'#%="\xf6') - self.assert_strict_equal(urls.url_quote_plus('foo bar'), 'foo+bar') - self.assert_strict_equal(urls.url_unquote_plus('foo+bar'), u'foo bar') - self.assert_strict_equal(urls.url_quote_plus('foo+bar'), 'foo%2Bbar') - self.assert_strict_equal(urls.url_unquote_plus('foo%2Bbar'), u'foo+bar') - self.assert_strict_equal(urls.url_encode({b'a': None, b'b': b'foo bar'}), 'b=foo+bar') - self.assert_strict_equal(urls.url_encode({u'a': None, u'b': u'foo bar'}), 'b=foo+bar') - self.assert_strict_equal(urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)'), - 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)') - 
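The quoting behaviour checked above maps closely onto the stdlib; a rough equivalent of a few of the assertions using `urllib.parse` (not werkzeug's own `url_quote`/`url_unquote` implementation, which differs at the edges):

```python
from urllib.parse import quote, quote_plus, unquote_plus

assert quote_plus('foo bar') == 'foo+bar'      # space becomes '+' in query strings
assert unquote_plus('foo+bar') == 'foo bar'
assert quote_plus('foo+bar') == 'foo%2Bbar'    # literal '+' must be escaped
assert unquote_plus('foo%2Bbar') == 'foo+bar'
assert quote('öäü') == '%C3%B6%C3%A4%C3%BC'    # UTF-8 percent-encoding by default
```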
self.assert_strict_equal(urls.url_quote_plus(42), '42') - self.assert_strict_equal(urls.url_quote(b'\xff'), '%FF') - - def test_bytes_unquoting(self): - self.assert_strict_equal(urls.url_unquote(urls.url_quote( - u'#%="\xf6', charset='latin1'), charset=None), b'#%="\xf6') - - def test_url_decoding(self): - x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel') - self.assert_strict_equal(x['foo'], u'42') - self.assert_strict_equal(x['bar'], u'23') - self.assert_strict_equal(x['uni'], u'Hänsel') - - x = urls.url_decode(b'foo=42;bar=23;uni=H%C3%A4nsel', separator=b';') - self.assert_strict_equal(x['foo'], u'42') - self.assert_strict_equal(x['bar'], u'23') - self.assert_strict_equal(x['uni'], u'Hänsel') - - x = urls.url_decode(b'%C3%9Ch=H%C3%A4nsel', decode_keys=True) - self.assert_strict_equal(x[u'Üh'], u'Hänsel') - - def test_url_bytes_decoding(self): - x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel', charset=None) - self.assert_strict_equal(x[b'foo'], b'42') - self.assert_strict_equal(x[b'bar'], b'23') - self.assert_strict_equal(x[b'uni'], u'Hänsel'.encode('utf-8')) - - def test_streamed_url_decoding(self): - item1 = u'a' * 100000 - item2 = u'b' * 400 - string = ('a=%s&b=%s&c=%s' % (item1, item2, item2)).encode('ascii') - gen = urls.url_decode_stream(BytesIO(string), limit=len(string), - return_iterator=True) - self.assert_strict_equal(next(gen), ('a', item1)) - self.assert_strict_equal(next(gen), ('b', item2)) - self.assert_strict_equal(next(gen), ('c', item2)) - self.assert_raises(StopIteration, lambda: next(gen)) - - def test_stream_decoding_string_fails(self): - self.assert_raises(TypeError, urls.url_decode_stream, 'testing') - - def test_url_encoding(self): - self.assert_strict_equal(urls.url_encode({'foo': 'bar 45'}), 'foo=bar+45') - d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} - self.assert_strict_equal(urls.url_encode(d, sort=True), 'bar=23&blah=H%C3%A4nsel&foo=1') - self.assert_strict_equal(urls.url_encode(d, sort=True, separator=u';'), 
'bar=23;blah=H%C3%A4nsel;foo=1') - - def test_sorted_url_encode(self): - self.assert_strict_equal(urls.url_encode({u"a": 42, u"b": 23, 1: 1, 2: 2}, - sort=True, key=lambda i: text_type(i[0])), '1=1&2=2&a=42&b=23') - self.assert_strict_equal(urls.url_encode({u'A': 1, u'a': 2, u'B': 3, 'b': 4}, sort=True, - key=lambda x: x[0].lower() + x[0]), 'A=1&a=2&B=3&b=4') - - def test_streamed_url_encoding(self): - out = NativeStringIO() - urls.url_encode_stream({'foo': 'bar 45'}, out) - self.assert_strict_equal(out.getvalue(), 'foo=bar+45') - - d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} - out = NativeStringIO() - urls.url_encode_stream(d, out, sort=True) - self.assert_strict_equal(out.getvalue(), 'bar=23&blah=H%C3%A4nsel&foo=1') - out = NativeStringIO() - urls.url_encode_stream(d, out, sort=True, separator=u';') - self.assert_strict_equal(out.getvalue(), 'bar=23;blah=H%C3%A4nsel;foo=1') - - gen = urls.url_encode_stream(d, sort=True) - self.assert_strict_equal(next(gen), 'bar=23') - self.assert_strict_equal(next(gen), 'blah=H%C3%A4nsel') - self.assert_strict_equal(next(gen), 'foo=1') - self.assert_raises(StopIteration, lambda: next(gen)) - - def test_url_fixing(self): - x = urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)') - self.assert_line_equal(x, 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)') - - x = urls.url_fix("http://just.a.test/$-_.+!*'(),") - self.assert_equal(x, "http://just.a.test/$-_.+!*'(),") - - def test_url_fixing_qs(self): - x = urls.url_fix(b'http://example.com/?foo=%2f%2f') - self.assert_line_equal(x, 'http://example.com/?foo=%2f%2f') - - x = urls.url_fix('http://acronyms.thefreedictionary.com/Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation') - self.assert_equal(x, 'http://acronyms.thefreedictionary.com/Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation') - - def test_iri_support(self): - self.assert_strict_equal(urls.uri_to_iri('http://xn--n3h.net/'), - u'http://\u2603.net/') - self.assert_strict_equal( 
- urls.uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'), - u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th') - self.assert_strict_equal(urls.iri_to_uri(u'http://☃.net/'), 'http://xn--n3h.net/') - self.assert_strict_equal( - urls.iri_to_uri(u'http://üser:pässword@☃.net/påth'), - 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th') - - self.assert_strict_equal(urls.uri_to_iri('http://test.com/%3Fmeh?foo=%26%2F'), - u'http://test.com/%3Fmeh?foo=%26%2F') - - # this should work as well, might break on 2.4 because of a broken - # idna codec - self.assert_strict_equal(urls.uri_to_iri(b'/foo'), u'/foo') - self.assert_strict_equal(urls.iri_to_uri(u'/foo'), '/foo') - - self.assert_strict_equal(urls.iri_to_uri(u'http://föö.com:8080/bam/baz'), - 'http://xn--f-1gaa.com:8080/bam/baz') - - def test_iri_safe_conversion(self): - self.assert_strict_equal(urls.iri_to_uri(u'magnet:?foo=bar'), - 'magnet:?foo=bar') - self.assert_strict_equal(urls.iri_to_uri(u'itms-service://?foo=bar'), - 'itms-service:?foo=bar') - self.assert_strict_equal(urls.iri_to_uri(u'itms-service://?foo=bar', - safe_conversion=True), - 'itms-service://?foo=bar') - - def test_iri_safe_quoting(self): - uri = 'http://xn--f-1gaa.com/%2F%25?q=%C3%B6&x=%3D%25#%25' - iri = u'http://föö.com/%2F%25?q=ö&x=%3D%25#%25' - self.assert_strict_equal(urls.uri_to_iri(uri), iri) - self.assert_strict_equal(urls.iri_to_uri(urls.uri_to_iri(uri)), uri) - - def test_ordered_multidict_encoding(self): - d = OrderedMultiDict() - d.add('foo', 1) - d.add('foo', 2) - d.add('foo', 3) - d.add('bar', 0) - d.add('foo', 4) - self.assert_equal(urls.url_encode(d), 'foo=1&foo=2&foo=3&bar=0&foo=4') - - def test_multidict_encoding(self): - d = OrderedMultiDict() - d.add('2013-10-10T23:26:05.657975+0000', '2013-10-10T23:26:05.657975+0000') - self.assert_equal(urls.url_encode(d), '2013-10-10T23%3A26%3A05.657975%2B0000=2013-10-10T23%3A26%3A05.657975%2B0000') - - def test_href(self): - x = urls.Href('http://www.example.com/') - 
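The IRI/URI tests above depend on IDNA encoding of hostnames. The same hostname conversion can be demonstrated with the stdlib `idna` codec (IDNA 2003), independent of `werkzeug.urls`:

```python
# IDNA round-trip of the hostname used in the IRI tests above
assert 'föö.com'.encode('idna') == b'xn--f-1gaa.com'
assert b'xn--f-1gaa.com'.decode('idna') == 'föö.com'
```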
self.assert_strict_equal(x(u'foo'), 'http://www.example.com/foo') - self.assert_strict_equal(x.foo(u'bar'), 'http://www.example.com/foo/bar') - self.assert_strict_equal(x.foo(u'bar', x=42), 'http://www.example.com/foo/bar?x=42') - self.assert_strict_equal(x.foo(u'bar', class_=42), 'http://www.example.com/foo/bar?class=42') - self.assert_strict_equal(x.foo(u'bar', {u'class': 42}), 'http://www.example.com/foo/bar?class=42') - self.assert_raises(AttributeError, lambda: x.__blah__) - - x = urls.Href('blah') - self.assert_strict_equal(x.foo(u'bar'), 'blah/foo/bar') - - self.assert_raises(TypeError, x.foo, {u"foo": 23}, x=42) - - x = urls.Href('') - self.assert_strict_equal(x('foo'), 'foo') - - def test_href_url_join(self): - x = urls.Href(u'test') - self.assert_line_equal(x(u'foo:bar'), u'test/foo:bar') - self.assert_line_equal(x(u'http://example.com/'), u'test/http://example.com/') - self.assert_line_equal(x.a(), u'test/a') - - def test_href_past_root(self): - base_href = urls.Href('http://www.blagga.com/1/2/3') - self.assert_strict_equal(base_href('../foo'), 'http://www.blagga.com/1/2/foo') - self.assert_strict_equal(base_href('../../foo'), 'http://www.blagga.com/1/foo') - self.assert_strict_equal(base_href('../../../foo'), 'http://www.blagga.com/foo') - self.assert_strict_equal(base_href('../../../../foo'), 'http://www.blagga.com/foo') - self.assert_strict_equal(base_href('../../../../../foo'), 'http://www.blagga.com/foo') - self.assert_strict_equal(base_href('../../../../../../foo'), 'http://www.blagga.com/foo') - - def test_url_unquote_plus_unicode(self): - # was broken in 0.6 - self.assert_strict_equal(urls.url_unquote_plus(u'\x6d'), u'\x6d') - self.assert_is(type(urls.url_unquote_plus(u'\x6d')), text_type) - - def test_quoting_of_local_urls(self): - rv = urls.iri_to_uri(u'/foo\x8f') - self.assert_strict_equal(rv, '/foo%C2%8F') - self.assert_is(type(rv), str) - - def test_url_attributes(self): - rv = urls.url_parse('http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') - 
self.assert_strict_equal(rv.scheme, 'http') - self.assert_strict_equal(rv.auth, 'foo%3a:bar%3a') - self.assert_strict_equal(rv.username, u'foo:') - self.assert_strict_equal(rv.password, u'bar:') - self.assert_strict_equal(rv.raw_username, 'foo%3a') - self.assert_strict_equal(rv.raw_password, 'bar%3a') - self.assert_strict_equal(rv.host, '::1') - self.assert_equal(rv.port, 80) - self.assert_strict_equal(rv.path, '/123') - self.assert_strict_equal(rv.query, 'x=y') - self.assert_strict_equal(rv.fragment, 'frag') - - rv = urls.url_parse(u'http://\N{SNOWMAN}.com/') - self.assert_strict_equal(rv.host, u'\N{SNOWMAN}.com') - self.assert_strict_equal(rv.ascii_host, 'xn--n3h.com') - - def test_url_attributes_bytes(self): - rv = urls.url_parse(b'http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') - self.assert_strict_equal(rv.scheme, b'http') - self.assert_strict_equal(rv.auth, b'foo%3a:bar%3a') - self.assert_strict_equal(rv.username, u'foo:') - self.assert_strict_equal(rv.password, u'bar:') - self.assert_strict_equal(rv.raw_username, b'foo%3a') - self.assert_strict_equal(rv.raw_password, b'bar%3a') - self.assert_strict_equal(rv.host, b'::1') - self.assert_equal(rv.port, 80) - self.assert_strict_equal(rv.path, b'/123') - self.assert_strict_equal(rv.query, b'x=y') - self.assert_strict_equal(rv.fragment, b'frag') - - def test_url_joining(self): - self.assert_strict_equal(urls.url_join('/foo', '/bar'), '/bar') - self.assert_strict_equal(urls.url_join('http://example.com/foo', '/bar'), - 'http://example.com/bar') - self.assert_strict_equal(urls.url_join('file:///tmp/', 'test.html'), - 'file:///tmp/test.html') - self.assert_strict_equal(urls.url_join('file:///tmp/x', 'test.html'), - 'file:///tmp/test.html') - self.assert_strict_equal(urls.url_join('file:///tmp/x', '../../../x.html'), - 'file:///x.html') - - def test_partial_unencoded_decode(self): - ref = u'foo=정상처리'.encode('euc-kr') - x = urls.url_decode(ref, charset='euc-kr') - self.assert_strict_equal(x['foo'], u'정상처리') - - def 
test_iri_to_uri_idempotence_ascii_only(self): - uri = u'http://www.idempoten.ce' - uri = urls.iri_to_uri(uri) - self.assert_equal(urls.iri_to_uri(uri), uri) - - def test_iri_to_uri_idempotence_non_ascii(self): - uri = u'http://\N{SNOWMAN}/\N{SNOWMAN}' - uri = urls.iri_to_uri(uri) - self.assert_equal(urls.iri_to_uri(uri), uri) - - def test_uri_to_iri_idempotence_ascii_only(self): - uri = 'http://www.idempoten.ce' - uri = urls.uri_to_iri(uri) - self.assert_equal(urls.uri_to_iri(uri), uri) - - def test_uri_to_iri_idempotence_non_ascii(self): - uri = 'http://xn--n3h/%E2%98%83' - uri = urls.uri_to_iri(uri) - self.assert_equal(urls.uri_to_iri(uri), uri) - - def test_iri_to_uri_to_iri(self): - iri = u'http://föö.com/' - uri = urls.iri_to_uri(iri) - self.assert_equal(urls.uri_to_iri(uri), iri) - - def test_uri_to_iri_to_uri(self): - uri = 'http://xn--f-rgao.com/%C3%9E' - iri = urls.uri_to_iri(uri) - self.assert_equal(urls.iri_to_uri(iri), uri) - - def test_uri_iri_normalization(self): - uri = 'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93' - iri = u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713' - - tests = [ - u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713', - u'http://xn--f-rgao.com/\u2610/fred?utf8=\N{CHECK MARK}', - b'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', - u'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', - u'http://föñ.com/\u2610/fred?utf8=%E2%9C%93', - b'http://xn--f-rgao.com/\xe2\x98\x90/fred?utf8=\xe2\x9c\x93', - ] - - for test in tests: - self.assert_equal(urls.uri_to_iri(test), iri) - self.assert_equal(urls.iri_to_uri(test), uri) - self.assert_equal(urls.uri_to_iri(urls.iri_to_uri(test)), iri) - self.assert_equal(urls.iri_to_uri(urls.uri_to_iri(test)), uri) - self.assert_equal(urls.uri_to_iri(urls.uri_to_iri(test)), iri) - self.assert_equal(urls.iri_to_uri(urls.iri_to_uri(test)), uri) - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(URLsTestCase)) - return suite diff -Nru 
python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/utils.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/utils.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/utils.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/utils.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,284 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.utils - ~~~~~~~~~~~~~~~~~~~~~~~~ - - General utilities. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. -""" - -from __future__ import with_statement - -import unittest -from datetime import datetime -from functools import partial - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import utils -from werkzeug.datastructures import Headers -from werkzeug.http import parse_date, http_date -from werkzeug.wrappers import BaseResponse -from werkzeug.test import Client, run_wsgi_app -from werkzeug._compat import text_type, implements_iterator - - -class GeneralUtilityTestCase(WerkzeugTestCase): - - def test_redirect(self): - resp = utils.redirect(u'/füübär') - self.assert_in(b'/f%C3%BC%C3%BCb%C3%A4r', resp.get_data()) - self.assert_equal(resp.headers['Location'], '/f%C3%BC%C3%BCb%C3%A4r') - self.assert_equal(resp.status_code, 302) - - resp = utils.redirect(u'http://☃.net/', 307) - self.assert_in(b'http://xn--n3h.net/', resp.get_data()) - self.assert_equal(resp.headers['Location'], 'http://xn--n3h.net/') - self.assert_equal(resp.status_code, 307) - - resp = utils.redirect('http://example.com/', 305) - self.assert_equal(resp.headers['Location'], 'http://example.com/') - self.assert_equal(resp.status_code, 305) - - def test_redirect_no_unicode_header_keys(self): - # Make sure all headers are native keys. This was a bug at one point - # due to an incorrect conversion. 
- resp = utils.redirect('http://example.com/', 305) - for key, value in resp.headers.items(): - self.assert_equal(type(key), str) - self.assert_equal(type(value), text_type) - self.assert_equal(resp.headers['Location'], 'http://example.com/') - self.assert_equal(resp.status_code, 305) - - def test_redirect_xss(self): - location = 'http://example.com/?xss="><script>alert(1)</script>' - resp = utils.redirect(location) - self.assert_not_in(b'<script>alert(1)</script>', resp.get_data()) - - location = 'http://example.com/?xss="onmouseover="alert(1)' - resp = utils.redirect(location) - self.assert_not_in(b'href="http://example.com/?xss="onmouseover="alert(1)"', resp.get_data()) - - def test_cached_property(self): - foo = [] - class A(object): - def prop(self): - foo.append(42) - return 42 - prop = utils.cached_property(prop) - - a = A() - p = a.prop - q = a.prop - self.assert_true(p == q == 42) - self.assert_equal(foo, [42]) - - foo = [] - class A(object): - def _prop(self): - foo.append(42) - return 42 - prop = utils.cached_property(_prop, name='prop') - del _prop - - a = A() - p = a.prop - q = a.prop - self.assert_true(p == q == 42) - self.assert_equal(foo, [42]) - - def test_environ_property(self): - class A(object): - environ = {'string': 'abc', 'number': '42'} - - string = utils.environ_property('string') - missing = utils.environ_property('missing', 'spam') - read_only = utils.environ_property('number') - number = utils.environ_property('number', load_func=int) - broken_number = utils.environ_property('broken_number', load_func=int) - date = utils.environ_property('date', None, parse_date, http_date, - read_only=False) - foo = utils.environ_property('foo') - - a = A() - self.assert_equal(a.string, 'abc') - self.assert_equal(a.missing, 'spam') - def test_assign(): - a.read_only = 'something' - self.assert_raises(AttributeError, test_assign) - self.assert_equal(a.number, 42) - self.assert_equal(a.broken_number, None) - self.assert_is_none(a.date) - a.date = datetime(2008, 1, 22, 10, 0, 0, 0) - 
self.assert_equal(a.environ['date'], 'Tue, 22 Jan 2008 10:00:00 GMT') - - def test_escape(self): - class Foo(str): - def __html__(self): - return text_type(self) - self.assert_equal(utils.escape(None), '') - self.assert_equal(utils.escape(42), '42') - self.assert_equal(utils.escape('<>'), '&lt;&gt;') - self.assert_equal(utils.escape('"foo"'), '&quot;foo&quot;') - self.assert_equal(utils.escape(Foo('<foo>')), '<foo>') - - def test_unescape(self): - self.assert_equal(utils.unescape('&lt;&auml;&gt;'), u'<ä>') - - def test_run_wsgi_app(self): - def foo(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/plain')]) - yield '1' - yield '2' - yield '3' - - app_iter, status, headers = run_wsgi_app(foo, {}) - self.assert_equal(status, '200 OK') - self.assert_equal(list(headers), [('Content-Type', 'text/plain')]) - self.assert_equal(next(app_iter), '1') - self.assert_equal(next(app_iter), '2') - self.assert_equal(next(app_iter), '3') - self.assert_raises(StopIteration, partial(next, app_iter)) - - got_close = [] - @implements_iterator - class CloseIter(object): - def __init__(self): - self.iterated = False - def __iter__(self): - return self - def close(self): - got_close.append(None) - def __next__(self): - if self.iterated: - raise StopIteration() - self.iterated = True - return 'bar' - - def bar(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/plain')]) - return CloseIter() - - app_iter, status, headers = run_wsgi_app(bar, {}) - self.assert_equal(status, '200 OK') - self.assert_equal(list(headers), [('Content-Type', 'text/plain')]) - self.assert_equal(next(app_iter), 'bar') - self.assert_raises(StopIteration, partial(next, app_iter)) - app_iter.close() - - self.assert_equal(run_wsgi_app(bar, {}, True)[0], ['bar']) - - self.assert_equal(len(got_close), 2) - - def test_import_string(self): - import cgi - from werkzeug.debug import DebuggedApplication - self.assert_is(utils.import_string('cgi.escape'), cgi.escape) - 
self.assert_is(utils.import_string(u'cgi.escape'), cgi.escape) - self.assert_is(utils.import_string('cgi:escape'), cgi.escape) - self.assert_is_none(utils.import_string('XXXXXXXXXXXX', True)) - self.assert_is_none(utils.import_string('cgi.XXXXXXXXXXXX', True)) - self.assert_is(utils.import_string(u'cgi.escape'), cgi.escape) - self.assert_is(utils.import_string(u'werkzeug.debug.DebuggedApplication'), DebuggedApplication) - self.assert_raises(ImportError, utils.import_string, 'XXXXXXXXXXXXXXXX') - self.assert_raises(ImportError, utils.import_string, 'cgi.XXXXXXXXXX') - - def test_find_modules(self): - self.assert_equal(list(utils.find_modules('werkzeug.debug')), \ - ['werkzeug.debug.console', 'werkzeug.debug.repr', - 'werkzeug.debug.tbtools']) - - def test_html_builder(self): - html = utils.html - xhtml = utils.xhtml - self.assert_equal(html.p('Hello World'), '<p>Hello World</p>') - self.assert_equal(html.a('Test', href='#'), '<a href="#">Test</a>') - self.assert_equal(html.br(), '<br>') - self.assert_equal(xhtml.br(), '<br />') - self.assert_equal(html.img(src='foo'), '<img src="foo">') - self.assert_equal(xhtml.img(src='foo'), '<img src="foo" />') - self.assert_equal(html.html( - html.head( - html.title('foo'), - html.script(type='text/javascript') - ) - ), '<html><head><title>foo</title><script type="text/javascript"></script></head></html>') - self.assert_equal(html('<foo>'), '&lt;foo&gt;') - self.assert_equal(html.input(disabled=True), '<input disabled>') - self.assert_equal(xhtml.input(disabled=True), '<input disabled="disabled" />') - self.assert_equal(html.input(disabled=''), '<input>') - self.assert_equal(xhtml.input(disabled=''), '<input />') - self.assert_equal(html.input(disabled=None), '<input>') - self.assert_equal(xhtml.input(disabled=None), '<input />') - self.assert_equal(html.script('alert("Hello World");'), '<script>alert("Hello World");</script>') - self.assert_equal(xhtml.script('alert("Hello World");'), '<script>/*<![CDATA[*/alert("Hello World");/*]]>*/</script>') - - def test_validate_arguments(self): - take_none = lambda: None - take_two = lambda a, b: None - take_two_one_default = lambda a, b=0: None - - self.assert_equal(utils.validate_arguments(take_two, (1, 2,), {}), ((1, 2), {})) - self.assert_equal(utils.validate_arguments(take_two, (1,), {'b': 2}), ((1, 2), {})) - self.assert_equal(utils.validate_arguments(take_two_one_default, (1,), {}), ((1, 0), {})) - self.assert_equal(utils.validate_arguments(take_two_one_default, (1, 2), {}), ((1, 2), {})) - - self.assert_raises(utils.ArgumentValidationError, - utils.validate_arguments, take_two, (), {}) - - self.assert_equal(utils.validate_arguments(take_none, (1, 2,), {'c': 3}), ((), {})) - self.assert_raises(utils.ArgumentValidationError, - utils.validate_arguments, take_none, (1,), {}, drop_extra=False) - self.assert_raises(utils.ArgumentValidationError, - utils.validate_arguments, take_none, (), {'a': 1}, drop_extra=False) - - def test_header_set_duplication_bug(self): - headers = Headers([ - ('Content-Type', 'text/html'), - ('Foo', 'bar'), - ('Blub', 'blah') - ]) - headers['blub'] = 'hehe' - headers['blafasel'] = 'humm' - self.assert_equal(headers, Headers([ - ('Content-Type', 'text/html'), - ('Foo', 'bar'), - ('blub', 'hehe'), - ('blafasel', 'humm') - ])) - - def test_append_slash_redirect(self): - def
app(env, sr): - return utils.append_slash_redirect(env)(env, sr) - client = Client(app, BaseResponse) - response = client.get('foo', base_url='http://example.org/app') - self.assert_equal(response.status_code, 301) - self.assert_equal(response.headers['Location'], 'http://example.org/app/foo/') - - def test_cached_property_doc(self): - @utils.cached_property - def foo(): - """testing""" - return 42 - self.assert_equal(foo.__doc__, 'testing') - self.assert_equal(foo.__name__, 'foo') - self.assert_equal(foo.__module__, __name__) - - def test_secure_filename(self): - self.assert_equal(utils.secure_filename('My cool movie.mov'), - 'My_cool_movie.mov') - self.assert_equal(utils.secure_filename('../../../etc/passwd'), - 'etc_passwd') - self.assert_equal(utils.secure_filename(u'i contain cool \xfcml\xe4uts.txt'), - 'i_contain_cool_umlauts.txt') - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(GeneralUtilityTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/wrappers.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/wrappers.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/wrappers.py 2014-06-07 09:21:49.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/wrappers.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,840 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.wrappers - ~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Tests for the response and request objects. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest -import pickle -from io import BytesIO -from datetime import datetime -from werkzeug._compat import iteritems - -from werkzeug.testsuite import WerkzeugTestCase - -from werkzeug import wrappers -from werkzeug.exceptions import SecurityError -from werkzeug.wsgi import LimitedStream -from werkzeug.datastructures import MultiDict, ImmutableOrderedMultiDict, \ - ImmutableList, ImmutableTypeConversionDict, CharsetAccept, \ - MIMEAccept, LanguageAccept, Accept, CombinedMultiDict -from werkzeug.test import Client, create_environ, run_wsgi_app -from werkzeug._compat import implements_iterator, text_type - - -class RequestTestResponse(wrappers.BaseResponse): - """Subclass of the normal response class we use to test response - and base classes. Has some methods to test if things in the - response match. - """ - - def __init__(self, response, status, headers): - wrappers.BaseResponse.__init__(self, response, status, headers) - self.body_data = pickle.loads(self.get_data()) - - def __getitem__(self, key): - return self.body_data[key] - - -def request_demo_app(environ, start_response): - request = wrappers.BaseRequest(environ) - assert 'werkzeug.request' in environ - start_response('200 OK', [('Content-Type', 'text/plain')]) - return [pickle.dumps({ - 'args': request.args, - 'args_as_list': list(request.args.lists()), - 'form': request.form, - 'form_as_list': list(request.form.lists()), - 'environ': prepare_environ_pickle(request.environ), - 'data': request.get_data() - })] - - -def prepare_environ_pickle(environ): - result = {} - for key, value in iteritems(environ): - try: - pickle.dumps((key, value)) - except Exception: - continue - result[key] = value - return result - - -class WrappersTestCase(WerkzeugTestCase): - - def assert_environ(self, environ, method): - self.assert_strict_equal(environ['REQUEST_METHOD'], method) - self.assert_strict_equal(environ['PATH_INFO'], '/') - self.assert_strict_equal(environ['SCRIPT_NAME'], '') - 
self.assert_strict_equal(environ['SERVER_NAME'], 'localhost') - self.assert_strict_equal(environ['wsgi.version'], (1, 0)) - self.assert_strict_equal(environ['wsgi.url_scheme'], 'http') - - def test_base_request(self): - client = Client(request_demo_app, RequestTestResponse) - - # get requests - response = client.get('/?foo=bar&foo=hehe') - self.assert_strict_equal(response['args'], MultiDict([('foo', u'bar'), ('foo', u'hehe')])) - self.assert_strict_equal(response['args_as_list'], [('foo', [u'bar', u'hehe'])]) - self.assert_strict_equal(response['form'], MultiDict()) - self.assert_strict_equal(response['form_as_list'], []) - self.assert_strict_equal(response['data'], b'') - self.assert_environ(response['environ'], 'GET') - - # post requests with form data - response = client.post('/?blub=blah', data='foo=blub+hehe&blah=42', - content_type='application/x-www-form-urlencoded') - self.assert_strict_equal(response['args'], MultiDict([('blub', u'blah')])) - self.assert_strict_equal(response['args_as_list'], [('blub', [u'blah'])]) - self.assert_strict_equal(response['form'], MultiDict([('foo', u'blub hehe'), ('blah', u'42')])) - self.assert_strict_equal(response['data'], b'') - # currently we do not guarantee that the values are ordered correctly - # for post data. 
- ## self.assert_strict_equal(response['form_as_list'], [('foo', ['blub hehe']), ('blah', ['42'])]) - self.assert_environ(response['environ'], 'POST') - - # patch requests with form data - response = client.patch('/?blub=blah', data='foo=blub+hehe&blah=42', - content_type='application/x-www-form-urlencoded') - self.assert_strict_equal(response['args'], MultiDict([('blub', u'blah')])) - self.assert_strict_equal(response['args_as_list'], [('blub', [u'blah'])]) - self.assert_strict_equal(response['form'], - MultiDict([('foo', u'blub hehe'), ('blah', u'42')])) - self.assert_strict_equal(response['data'], b'') - self.assert_environ(response['environ'], 'PATCH') - - # post requests with json data - json = b'{"foo": "bar", "blub": "blah"}' - response = client.post('/?a=b', data=json, content_type='application/json') - self.assert_strict_equal(response['data'], json) - self.assert_strict_equal(response['args'], MultiDict([('a', u'b')])) - self.assert_strict_equal(response['form'], MultiDict()) - - def test_query_string_is_bytes(self): - req = wrappers.Request.from_values(u'/?foo=%2f') - self.assert_strict_equal(req.query_string, b'foo=%2f') - - def test_access_route(self): - req = wrappers.Request.from_values(headers={ - 'X-Forwarded-For': '192.168.1.2, 192.168.1.1' - }) - req.environ['REMOTE_ADDR'] = '192.168.1.3' - self.assert_equal(req.access_route, ['192.168.1.2', '192.168.1.1']) - self.assert_strict_equal(req.remote_addr, '192.168.1.3') - - req = wrappers.Request.from_values() - req.environ['REMOTE_ADDR'] = '192.168.1.3' - self.assert_strict_equal(list(req.access_route), ['192.168.1.3']) - - def test_url_request_descriptors(self): - req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') - self.assert_strict_equal(req.path, u'/bar') - self.assert_strict_equal(req.full_path, u'/bar?foo=baz') - self.assert_strict_equal(req.script_root, u'/test') - self.assert_strict_equal(req.url, u'http://example.com/test/bar?foo=baz') - 
self.assert_strict_equal(req.base_url, u'http://example.com/test/bar') - self.assert_strict_equal(req.url_root, u'http://example.com/test/') - self.assert_strict_equal(req.host_url, u'http://example.com/') - self.assert_strict_equal(req.host, 'example.com') - self.assert_strict_equal(req.scheme, 'http') - - req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') - self.assert_strict_equal(req.scheme, 'https') - - def test_url_request_descriptors_query_quoting(self): - next = 'http%3A%2F%2Fwww.example.com%2F%3Fnext%3D%2F' - req = wrappers.Request.from_values('/bar?next=' + next, 'http://example.com/') - self.assert_equal(req.path, u'/bar') - self.assert_strict_equal(req.full_path, u'/bar?next=' + next) - self.assert_strict_equal(req.url, u'http://example.com/bar?next=' + next) - - def test_url_request_descriptors_hosts(self): - req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') - req.trusted_hosts = ['example.com'] - self.assert_strict_equal(req.path, u'/bar') - self.assert_strict_equal(req.full_path, u'/bar?foo=baz') - self.assert_strict_equal(req.script_root, u'/test') - self.assert_strict_equal(req.url, u'http://example.com/test/bar?foo=baz') - self.assert_strict_equal(req.base_url, u'http://example.com/test/bar') - self.assert_strict_equal(req.url_root, u'http://example.com/test/') - self.assert_strict_equal(req.host_url, u'http://example.com/') - self.assert_strict_equal(req.host, 'example.com') - self.assert_strict_equal(req.scheme, 'http') - - req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') - self.assert_strict_equal(req.scheme, 'https') - - req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') - req.trusted_hosts = ['example.org'] - self.assert_raises(SecurityError, lambda: req.url) - self.assert_raises(SecurityError, lambda: req.base_url) - self.assert_raises(SecurityError, lambda: req.url_root) - self.assert_raises(SecurityError, lambda: req.host_url) - 
self.assert_raises(SecurityError, lambda: req.host) - - def test_authorization_mixin(self): - request = wrappers.Request.from_values(headers={ - 'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==' - }) - a = request.authorization - self.assert_strict_equal(a.type, 'basic') - self.assert_strict_equal(a.username, 'Aladdin') - self.assert_strict_equal(a.password, 'open sesame') - - def test_stream_only_mixing(self): - request = wrappers.PlainRequest.from_values( - data=b'foo=blub+hehe', - content_type='application/x-www-form-urlencoded' - ) - self.assert_equal(list(request.files.items()), []) - self.assert_equal(list(request.form.items()), []) - self.assert_raises(AttributeError, lambda: request.data) - self.assert_strict_equal(request.stream.read(), b'foo=blub+hehe') - - def test_base_response(self): - # unicode - response = wrappers.BaseResponse(u'öäü') - self.assert_strict_equal(response.get_data(), u'öäü'.encode('utf-8')) - - # writing - response = wrappers.Response('foo') - response.stream.write('bar') - self.assert_strict_equal(response.get_data(), b'foobar') - - # set cookie - response = wrappers.BaseResponse() - response.set_cookie('foo', 'bar', 60, 0, '/blub', 'example.org') - self.assert_strict_equal(response.headers.to_wsgi_list(), [ - ('Content-Type', 'text/plain; charset=utf-8'), - ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' - '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub') - ]) - - # delete cookie - response = wrappers.BaseResponse() - response.delete_cookie('foo') - self.assert_strict_equal(response.headers.to_wsgi_list(), [ - ('Content-Type', 'text/plain; charset=utf-8'), - ('Set-Cookie', 'foo=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/') - ]) - - # close call forwarding - closed = [] - @implements_iterator - class Iterable(object): - def __next__(self): - raise StopIteration() - def __iter__(self): - return self - def close(self): - closed.append(True) - response = wrappers.BaseResponse(Iterable()) - 
response.call_on_close(lambda: closed.append(True)) - app_iter, status, headers = run_wsgi_app(response, - create_environ(), - buffered=True) - self.assert_strict_equal(status, '200 OK') - self.assert_strict_equal(''.join(app_iter), '') - self.assert_strict_equal(len(closed), 2) - - # with statement - del closed[:] - response = wrappers.BaseResponse(Iterable()) - with response: - pass - self.assert_equal(len(closed), 1) - - def test_response_status_codes(self): - response = wrappers.BaseResponse() - response.status_code = 404 - self.assert_strict_equal(response.status, '404 NOT FOUND') - response.status = '200 OK' - self.assert_strict_equal(response.status_code, 200) - response.status = '999 WTF' - self.assert_strict_equal(response.status_code, 999) - response.status_code = 588 - self.assert_strict_equal(response.status_code, 588) - self.assert_strict_equal(response.status, '588 UNKNOWN') - response.status = 'wtf' - self.assert_strict_equal(response.status_code, 0) - self.assert_strict_equal(response.status, '0 wtf') - - def test_type_forcing(self): - def wsgi_application(environ, start_response): - start_response('200 OK', [('Content-Type', 'text/html')]) - return ['Hello World!'] - base_response = wrappers.BaseResponse('Hello World!', content_type='text/html') - - class SpecialResponse(wrappers.Response): - def foo(self): - return 42 - - # good enough for this simple application, but don't ever use that in - # real world examples! 
- fake_env = {} - - for orig_resp in wsgi_application, base_response: - response = SpecialResponse.force_type(orig_resp, fake_env) - assert response.__class__ is SpecialResponse - self.assert_strict_equal(response.foo(), 42) - self.assert_strict_equal(response.get_data(), b'Hello World!') - self.assert_equal(response.content_type, 'text/html') - - # without env, no arbitrary conversion - self.assert_raises(TypeError, SpecialResponse.force_type, wsgi_application) - - def test_accept_mixin(self): - request = wrappers.Request({ - 'HTTP_ACCEPT': 'text/xml,application/xml,application/xhtml+xml,' - 'text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', - 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', - 'HTTP_ACCEPT_ENCODING': 'gzip,deflate', - 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5' - }) - self.assert_equal(request.accept_mimetypes, MIMEAccept([ - ('text/xml', 1), ('image/png', 1), ('application/xml', 1), - ('application/xhtml+xml', 1), ('text/html', 0.9), - ('text/plain', 0.8), ('*/*', 0.5) - ])) - self.assert_strict_equal(request.accept_charsets, CharsetAccept([ - ('ISO-8859-1', 1), ('utf-8', 0.7), ('*', 0.7) - ])) - self.assert_strict_equal(request.accept_encodings, Accept([ - ('gzip', 1), ('deflate', 1)])) - self.assert_strict_equal(request.accept_languages, LanguageAccept([ - ('en-us', 1), ('en', 0.5)])) - - request = wrappers.Request({'HTTP_ACCEPT': ''}) - self.assert_strict_equal(request.accept_mimetypes, MIMEAccept()) - - def test_etag_request_mixin(self): - request = wrappers.Request({ - 'HTTP_CACHE_CONTROL': 'no-store, no-cache', - 'HTTP_IF_MATCH': 'w/"foo", bar, "baz"', - 'HTTP_IF_NONE_MATCH': 'w/"foo", bar, "baz"', - 'HTTP_IF_MODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT', - 'HTTP_IF_UNMODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT' - }) - assert request.cache_control.no_store - assert request.cache_control.no_cache - - for etags in request.if_match, request.if_none_match: - assert etags('bar') - assert etags.contains_raw('w/"foo"') - assert 
etags.contains_weak('foo') - assert not etags.contains('foo') - - self.assert_equal(request.if_modified_since, datetime(2008, 1, 22, 11, 18, 44)) - self.assert_equal(request.if_unmodified_since, datetime(2008, 1, 22, 11, 18, 44)) - - def test_user_agent_mixin(self): - user_agents = [ - ('Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) ' - 'Gecko/20071127 Firefox/2.0.0.11', 'firefox', 'macos', '2.0.0.11', - 'en-US'), - ('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; de-DE) Opera 8.54', - 'opera', 'windows', '8.54', 'de-DE'), - ('Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420 ' - '(KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3', - 'safari', 'iphone', '419.3', 'en'), - ('Bot Googlebot/2.1 ( http://www.googlebot.com/bot.html)', - 'google', None, '2.1', None) - ] - for ua, browser, platform, version, lang in user_agents: - request = wrappers.Request({'HTTP_USER_AGENT': ua}) - self.assert_strict_equal(request.user_agent.browser, browser) - self.assert_strict_equal(request.user_agent.platform, platform) - self.assert_strict_equal(request.user_agent.version, version) - self.assert_strict_equal(request.user_agent.language, lang) - assert bool(request.user_agent) - self.assert_strict_equal(request.user_agent.to_header(), ua) - self.assert_strict_equal(str(request.user_agent), ua) - - request = wrappers.Request({'HTTP_USER_AGENT': 'foo'}) - assert not request.user_agent - - def test_stream_wrapping(self): - class LowercasingStream(object): - def __init__(self, stream): - self._stream = stream - def read(self, size=-1): - return self._stream.read(size).lower() - def readline(self, size=-1): - return self._stream.readline(size).lower() - - data = b'foo=Hello+World' - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - req.stream = LowercasingStream(req.stream) - self.assert_equal(req.form['foo'], 'hello world') - - def test_data_descriptor_triggers_parsing(self): 
- data = b'foo=Hello+World' - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - - self.assert_equal(req.data, b'') - self.assert_equal(req.form['foo'], u'Hello World') - - def test_get_data_method_parsing_caching_behavior(self): - data = b'foo=Hello+World' - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - - # get_data() caches, so form stays available - self.assert_equal(req.get_data(), data) - self.assert_equal(req.form['foo'], u'Hello World') - self.assert_equal(req.get_data(), data) - - # here we access the form data first, caching is bypassed - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - self.assert_equal(req.form['foo'], u'Hello World') - self.assert_equal(req.get_data(), b'') - - # Another case is uncached get data which trashes everything - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - self.assert_equal(req.get_data(cache=False), data) - self.assert_equal(req.get_data(cache=False), b'') - self.assert_equal(req.form, {}) - - # Or we can implicitly start the form parser which is similar to - # the old .data behavior - req = wrappers.Request.from_values('/', method='POST', data=data, - content_type='application/x-www-form-urlencoded') - self.assert_equal(req.get_data(parse_form_data=True), b'') - self.assert_equal(req.form['foo'], u'Hello World') - - def test_etag_response_mixin(self): - response = wrappers.Response('Hello World') - self.assert_equal(response.get_etag(), (None, None)) - response.add_etag() - self.assert_equal(response.get_etag(), ('b10a8db164e0754105b7a99be72e3fe5', False)) - assert not response.cache_control - response.cache_control.must_revalidate = True - response.cache_control.max_age = 60 - response.headers['Content-Length'] = len(response.get_data()) - 
assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60', - 'max-age=60, must-revalidate') - - assert 'date' not in response.headers - env = create_environ() - env.update({ - 'REQUEST_METHOD': 'GET', - 'HTTP_IF_NONE_MATCH': response.get_etag()[0] - }) - response.make_conditional(env) - assert 'date' in response.headers - - # after the thing is invoked by the server as wsgi application - # (we're emulating this here), there must not be any entity - # headers left and the status code would have to be 304 - resp = wrappers.Response.from_app(response, env) - self.assert_equal(resp.status_code, 304) - assert not 'content-length' in resp.headers - - # make sure date is not overridden - response = wrappers.Response('Hello World') - response.date = 1337 - d = response.date - response.make_conditional(env) - self.assert_equal(response.date, d) - - # make sure content length is only set if missing - response = wrappers.Response('Hello World') - response.content_length = 999 - response.make_conditional(env) - self.assert_equal(response.content_length, 999) - - def test_etag_response_mixin_freezing(self): - class WithFreeze(wrappers.ETagResponseMixin, wrappers.BaseResponse): - pass - class WithoutFreeze(wrappers.BaseResponse, wrappers.ETagResponseMixin): - pass - - response = WithFreeze('Hello World') - response.freeze() - self.assert_strict_equal(response.get_etag(), - (text_type(wrappers.generate_etag(b'Hello World')), False)) - response = WithoutFreeze('Hello World') - response.freeze() - self.assert_equal(response.get_etag(), (None, None)) - response = wrappers.Response('Hello World') - response.freeze() - self.assert_equal(response.get_etag(), (None, None)) - - def test_authenticate_mixin(self): - resp = wrappers.Response() - resp.www_authenticate.type = 'basic' - resp.www_authenticate.realm = 'Testing' - self.assert_strict_equal(resp.headers['WWW-Authenticate'], u'Basic realm="Testing"') - resp.www_authenticate.realm = None - resp.www_authenticate.type = 
None - assert 'WWW-Authenticate' not in resp.headers - - def test_response_stream_mixin(self): - response = wrappers.Response() - response.stream.write('Hello ') - response.stream.write('World!') - self.assert_equal(response.response, ['Hello ', 'World!']) - self.assert_equal(response.get_data(), b'Hello World!') - - def test_common_response_descriptors_mixin(self): - response = wrappers.Response() - response.mimetype = 'text/html' - self.assert_equal(response.mimetype, 'text/html') - self.assert_equal(response.content_type, 'text/html; charset=utf-8') - self.assert_equal(response.mimetype_params, {'charset': 'utf-8'}) - response.mimetype_params['x-foo'] = 'yep' - del response.mimetype_params['charset'] - self.assert_equal(response.content_type, 'text/html; x-foo=yep') - - now = datetime.utcnow().replace(microsecond=0) - - assert response.content_length is None - response.content_length = '42' - self.assert_equal(response.content_length, 42) - - for attr in 'date', 'age', 'expires': - assert getattr(response, attr) is None - setattr(response, attr, now) - self.assert_equal(getattr(response, attr), now) - - assert response.retry_after is None - response.retry_after = now - self.assert_equal(response.retry_after, now) - - assert not response.vary - response.vary.add('Cookie') - response.vary.add('Content-Language') - assert 'cookie' in response.vary - self.assert_equal(response.vary.to_header(), 'Cookie, Content-Language') - response.headers['Vary'] = 'Content-Encoding' - self.assert_equal(response.vary.as_set(), set(['content-encoding'])) - - response.allow.update(['GET', 'POST']) - self.assert_equal(response.headers['Allow'], 'GET, POST') - - response.content_language.add('en-US') - response.content_language.add('fr') - self.assert_equal(response.headers['Content-Language'], 'en-US, fr') - - def test_common_request_descriptors_mixin(self): - request = wrappers.Request.from_values(content_type='text/html; charset=utf-8', - content_length='23', - headers={ - 
'Referer': 'http://www.example.com/', - 'Date': 'Sat, 28 Feb 2009 19:04:35 GMT', - 'Max-Forwards': '10', - 'Pragma': 'no-cache', - 'Content-Encoding': 'gzip', - 'Content-MD5': '9a3bc6dbc47a70db25b84c6e5867a072' - }) - - self.assert_equal(request.content_type, 'text/html; charset=utf-8') - self.assert_equal(request.mimetype, 'text/html') - self.assert_equal(request.mimetype_params, {'charset': 'utf-8'}) - self.assert_equal(request.content_length, 23) - self.assert_equal(request.referrer, 'http://www.example.com/') - self.assert_equal(request.date, datetime(2009, 2, 28, 19, 4, 35)) - self.assert_equal(request.max_forwards, 10) - self.assert_true('no-cache' in request.pragma) - self.assert_equal(request.content_encoding, 'gzip') - self.assert_equal(request.content_md5, '9a3bc6dbc47a70db25b84c6e5867a072') - - def test_shallow_mode(self): - request = wrappers.Request({'QUERY_STRING': 'foo=bar'}, shallow=True) - self.assert_equal(request.args['foo'], 'bar') - self.assert_raises(RuntimeError, lambda: request.form['foo']) - - def test_form_parsing_failed(self): - data = ( - b'--blah\r\n' - ) - data = wrappers.Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST') - assert not data.files - assert not data.form - - def test_file_closing(self): - data = (b'--foo\r\n' - b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' - b'Content-Type: text/plain; charset=utf-8\r\n\r\n' - b'file contents, just the contents\r\n' - b'--foo--') - req = wrappers.Request.from_values( - input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST' - ) - foo = req.files['foo'] - self.assert_equal(foo.mimetype, 'text/plain') - self.assert_equal(foo.filename, 'foo.txt') - - self.assert_equal(foo.closed, False) - req.close() - self.assert_equal(foo.closed, True) - - def test_file_closing_with(self): - data = (b'--foo\r\n' - 
b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' - b'Content-Type: text/plain; charset=utf-8\r\n\r\n' - b'file contents, just the contents\r\n' - b'--foo--') - req = wrappers.Request.from_values( - input_stream=BytesIO(data), - content_length=len(data), - content_type='multipart/form-data; boundary=foo', - method='POST' - ) - with req: - foo = req.files['foo'] - self.assert_equal(foo.mimetype, 'text/plain') - self.assert_equal(foo.filename, 'foo.txt') - - self.assert_equal(foo.closed, True) - - def test_url_charset_reflection(self): - req = wrappers.Request.from_values() - req.charset = 'utf-7' - self.assert_equal(req.url_charset, 'utf-7') - - def test_response_streamed(self): - r = wrappers.Response() - assert not r.is_streamed - r = wrappers.Response("Hello World") - assert not r.is_streamed - r = wrappers.Response(["foo", "bar"]) - assert not r.is_streamed - def gen(): - if 0: - yield None - r = wrappers.Response(gen()) - assert r.is_streamed - - def test_response_iter_wrapping(self): - def uppercasing(iterator): - for item in iterator: - yield item.upper() - def generator(): - yield 'foo' - yield 'bar' - req = wrappers.Request.from_values() - resp = wrappers.Response(generator()) - del resp.headers['Content-Length'] - resp.response = uppercasing(resp.iter_encoded()) - actual_resp = wrappers.Response.from_app(resp, req.environ, buffered=True) - self.assertEqual(actual_resp.get_data(), b'FOOBAR') - - def test_response_freeze(self): - def generate(): - yield "foo" - yield "bar" - resp = wrappers.Response(generate()) - resp.freeze() - self.assert_equal(resp.response, [b'foo', b'bar']) - self.assert_equal(resp.headers['content-length'], '6') - - def test_other_method_payload(self): - data = b'Hello World' - req = wrappers.Request.from_values(input_stream=BytesIO(data), - content_length=len(data), - content_type='text/plain', - method='WHAT_THE_FUCK') - self.assert_equal(req.get_data(), data) - self.assert_is_instance(req.stream, LimitedStream) - 
- def test_urlfication(self): - resp = wrappers.Response() - resp.headers['Location'] = u'http://üser:pässword@☃.net/påth' - resp.headers['Content-Location'] = u'http://☃.net/' - headers = resp.get_wsgi_headers(create_environ()) - self.assert_equal(headers['location'], \ - 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th') - self.assert_equal(headers['content-location'], 'http://xn--n3h.net/') - - def test_new_response_iterator_behavior(self): - req = wrappers.Request.from_values() - resp = wrappers.Response(u'Hello Wörld!') - - def get_content_length(resp): - headers = resp.get_wsgi_headers(req.environ) - return headers.get('content-length', type=int) - - def generate_items(): - yield "Hello " - yield u"Wörld!" - - # werkzeug encodes when set to `data` now, which happens - # if a string is passed to the response object. - self.assert_equal(resp.response, [u'Hello Wörld!'.encode('utf-8')]) - self.assert_equal(resp.get_data(), u'Hello Wörld!'.encode('utf-8')) - self.assert_equal(get_content_length(resp), 13) - assert not resp.is_streamed - assert resp.is_sequence - - # try the same for manual assignment - resp.set_data(u'Wörd') - self.assert_equal(resp.response, [u'Wörd'.encode('utf-8')]) - self.assert_equal(resp.get_data(), u'Wörd'.encode('utf-8')) - self.assert_equal(get_content_length(resp), 5) - assert not resp.is_streamed - assert resp.is_sequence - - # automatic generator sequence conversion - resp.response = generate_items() - assert resp.is_streamed - assert not resp.is_sequence - self.assert_equal(resp.get_data(), u'Hello Wörld!'.encode('utf-8')) - self.assert_equal(resp.response, [b'Hello ', u'Wörld!'.encode('utf-8')]) - assert not resp.is_streamed - assert resp.is_sequence - - # automatic generator sequence conversion - resp.response = generate_items() - resp.implicit_sequence_conversion = False - assert resp.is_streamed - assert not resp.is_sequence - self.assert_raises(RuntimeError, lambda: resp.get_data()) - resp.make_sequence() - 
self.assert_equal(resp.get_data(), u'Hello Wörld!'.encode('utf-8')) - self.assert_equal(resp.response, [b'Hello ', u'Wörld!'.encode('utf-8')]) - assert not resp.is_streamed - assert resp.is_sequence - - # stream makes it a list no matter how the conversion is set - for val in True, False: - resp.implicit_sequence_conversion = val - resp.response = ("foo", "bar") - assert resp.is_sequence - resp.stream.write('baz') - self.assert_equal(resp.response, ['foo', 'bar', 'baz']) - - def test_form_data_ordering(self): - class MyRequest(wrappers.Request): - parameter_storage_class = ImmutableOrderedMultiDict - - req = MyRequest.from_values('/?foo=1&bar=0&foo=3') - self.assert_equal(list(req.args), ['foo', 'bar']) - self.assert_equal(list(req.args.items(multi=True)), [ - ('foo', '1'), - ('bar', '0'), - ('foo', '3') - ]) - self.assert_is_instance(req.args, ImmutableOrderedMultiDict) - self.assert_is_instance(req.values, CombinedMultiDict) - self.assert_equal(req.values['foo'], '1') - self.assert_equal(req.values.getlist('foo'), ['1', '3']) - - def test_storage_classes(self): - class MyRequest(wrappers.Request): - dict_storage_class = dict - list_storage_class = list - parameter_storage_class = dict - req = MyRequest.from_values('/?foo=baz', headers={ - 'Cookie': 'foo=bar' - }) - assert type(req.cookies) is dict - self.assert_equal(req.cookies, {'foo': 'bar'}) - assert type(req.access_route) is list - - assert type(req.args) is dict - assert type(req.values) is CombinedMultiDict - self.assert_equal(req.values['foo'], u'baz') - - req = wrappers.Request.from_values(headers={ - 'Cookie': 'foo=bar' - }) - assert type(req.cookies) is ImmutableTypeConversionDict - self.assert_equal(req.cookies, {'foo': 'bar'}) - assert type(req.access_route) is ImmutableList - - MyRequest.list_storage_class = tuple - req = MyRequest.from_values() - assert type(req.access_route) is tuple - - def test_response_headers_passthrough(self): - headers = wrappers.Headers() - resp = 
wrappers.Response(headers=headers) - assert resp.headers is headers - - def test_response_304_no_content_length(self): - resp = wrappers.Response('Test', status=304) - env = create_environ() - assert 'content-length' not in resp.get_wsgi_headers(env) - - def test_ranges(self): - # basic range stuff - req = wrappers.Request.from_values() - assert req.range is None - req = wrappers.Request.from_values(headers={'Range': 'bytes=0-499'}) - self.assert_equal(req.range.ranges, [(0, 500)]) - - resp = wrappers.Response() - resp.content_range = req.range.make_content_range(1000) - self.assert_equal(resp.content_range.units, 'bytes') - self.assert_equal(resp.content_range.start, 0) - self.assert_equal(resp.content_range.stop, 500) - self.assert_equal(resp.content_range.length, 1000) - self.assert_equal(resp.headers['Content-Range'], 'bytes 0-499/1000') - - resp.content_range.unset() - assert 'Content-Range' not in resp.headers - - resp.headers['Content-Range'] = 'bytes 0-499/1000' - self.assert_equal(resp.content_range.units, 'bytes') - self.assert_equal(resp.content_range.start, 0) - self.assert_equal(resp.content_range.stop, 500) - self.assert_equal(resp.content_range.length, 1000) - - def test_auto_content_length(self): - resp = wrappers.Response('Hello World!') - self.assert_equal(resp.content_length, 12) - - resp = wrappers.Response(['Hello World!']) - assert resp.content_length is None - self.assert_equal(resp.get_wsgi_headers({})['Content-Length'], '12') - - def test_disabled_auto_content_length(self): - class MyResponse(wrappers.Response): - automatically_set_content_length = False - resp = MyResponse('Hello World!') - self.assert_is_none(resp.content_length) - - resp = MyResponse(['Hello World!']) - self.assert_is_none(resp.content_length) - self.assert_not_in('Content-Length', resp.get_wsgi_headers({})) - - def test_location_header_autocorrect(self): - env = create_environ() - class MyResponse(wrappers.Response): - autocorrect_location_header = False - resp = 
MyResponse('Hello World!') - resp.headers['Location'] = '/test' - self.assert_equal(resp.get_wsgi_headers(env)['Location'], '/test') - - resp = wrappers.Response('Hello World!') - resp.headers['Location'] = '/test' - self.assert_equal(resp.get_wsgi_headers(env)['Location'], 'http://localhost/test') - - def test_modified_url_encoding(self): - class ModifiedRequest(wrappers.Request): - url_charset = 'euc-kr' - - req = ModifiedRequest.from_values(u'/?foo=정상처리'.encode('euc-kr')) - self.assert_strict_equal(req.args['foo'], u'정상처리') - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(WrappersTestCase)) - return suite diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/wsgi.py python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/wsgi.py --- python-werkzeug-0.9.6+dfsg/werkzeug/testsuite/wsgi.py 2014-02-08 16:30:04.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/testsuite/wsgi.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,352 +0,0 @@ -# -*- coding: utf-8 -*- -""" - werkzeug.testsuite.wsgi - ~~~~~~~~~~~~~~~~~~~~~~~ - - Tests the WSGI utilities. - - :copyright: (c) 2014 by Armin Ronacher. - :license: BSD, see LICENSE for more details. 
-""" -import unittest -from os import path -from contextlib import closing - -from werkzeug.testsuite import WerkzeugTestCase, get_temporary_directory - -from werkzeug.wrappers import BaseResponse -from werkzeug.exceptions import BadRequest, ClientDisconnected -from werkzeug.test import Client, create_environ, run_wsgi_app -from werkzeug import wsgi -from werkzeug._compat import StringIO, BytesIO, NativeStringIO, to_native - - -class WSGIUtilsTestCase(WerkzeugTestCase): - - def test_shareddatamiddleware_get_file_loader(self): - app = wsgi.SharedDataMiddleware(None, {}) - assert callable(app.get_file_loader('foo')) - - def test_shared_data_middleware(self): - def null_application(environ, start_response): - start_response('404 NOT FOUND', [('Content-Type', 'text/plain')]) - yield b'NOT FOUND' - - test_dir = get_temporary_directory() - with open(path.join(test_dir, to_native(u'äöü', 'utf-8')), 'w') as test_file: - test_file.write(u'FOUND') - - app = wsgi.SharedDataMiddleware(null_application, { - '/': path.join(path.dirname(__file__), 'res'), - '/sources': path.join(path.dirname(__file__), 'res'), - '/pkg': ('werkzeug.debug', 'shared'), - '/foo': test_dir - }) - - for p in '/test.txt', '/sources/test.txt', '/foo/äöü': - app_iter, status, headers = run_wsgi_app(app, create_environ(p)) - self.assert_equal(status, '200 OK') - with closing(app_iter) as app_iter: - data = b''.join(app_iter).strip() - self.assert_equal(data, b'FOUND') - - app_iter, status, headers = run_wsgi_app( - app, create_environ('/pkg/debugger.js')) - with closing(app_iter) as app_iter: - contents = b''.join(app_iter) - self.assert_in(b'$(function() {', contents) - - app_iter, status, headers = run_wsgi_app( - app, create_environ('/missing')) - self.assert_equal(status, '404 NOT FOUND') - self.assert_equal(b''.join(app_iter).strip(), b'NOT FOUND') - - - def test_get_host(self): - env = {'HTTP_X_FORWARDED_HOST': 'example.org', - 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} - 
self.assert_equal(wsgi.get_host(env), 'example.org') - self.assert_equal( - wsgi.get_host(create_environ('/', 'http://example.org')), - 'example.org') - - def test_get_host_multiple_forwarded(self): - env = {'HTTP_X_FORWARDED_HOST': 'example.com, example.org', - 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} - self.assert_equal(wsgi.get_host(env), 'example.com') - self.assert_equal( - wsgi.get_host(create_environ('/', 'http://example.com')), - 'example.com') - - def test_get_host_validation(self): - env = {'HTTP_X_FORWARDED_HOST': 'example.org', - 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} - self.assert_equal(wsgi.get_host(env, trusted_hosts=['.example.org']), - 'example.org') - self.assert_raises(BadRequest, wsgi.get_host, env, - trusted_hosts=['example.com']) - - def test_responder(self): - def foo(environ, start_response): - return BaseResponse(b'Test') - client = Client(wsgi.responder(foo), BaseResponse) - response = client.get('/') - self.assert_equal(response.status_code, 200) - self.assert_equal(response.data, b'Test') - - def test_pop_path_info(self): - original_env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b///c'} - - # regular path info popping - def assert_tuple(script_name, path_info): - self.assert_equal(env.get('SCRIPT_NAME'), script_name) - self.assert_equal(env.get('PATH_INFO'), path_info) - env = original_env.copy() - pop = lambda: wsgi.pop_path_info(env) - - assert_tuple('/foo', '/a/b///c') - self.assert_equal(pop(), 'a') - assert_tuple('/foo/a', '/b///c') - self.assert_equal(pop(), 'b') - assert_tuple('/foo/a/b', '///c') - self.assert_equal(pop(), 'c') - assert_tuple('/foo/a/b///c', '') - self.assert_is_none(pop()) - - def test_peek_path_info(self): - env = { - 'SCRIPT_NAME': '/foo', - 'PATH_INFO': '/aaa/b///c' - } - - self.assert_equal(wsgi.peek_path_info(env), 'aaa') - self.assert_equal(wsgi.peek_path_info(env), 'aaa') - self.assert_equal(wsgi.peek_path_info(env, charset=None), b'aaa') - 
self.assert_equal(wsgi.peek_path_info(env, charset=None), b'aaa') - - def test_path_info_and_script_name_fetching(self): - env = create_environ(u'/\N{SNOWMAN}', u'http://example.com/\N{COMET}/') - self.assert_equal(wsgi.get_path_info(env), u'/\N{SNOWMAN}') - self.assert_equal(wsgi.get_path_info(env, charset=None), u'/\N{SNOWMAN}'.encode('utf-8')) - self.assert_equal(wsgi.get_script_name(env), u'/\N{COMET}') - self.assert_equal(wsgi.get_script_name(env, charset=None), u'/\N{COMET}'.encode('utf-8')) - - def test_query_string_fetching(self): - env = create_environ(u'/?\N{SNOWMAN}=\N{COMET}') - qs = wsgi.get_query_string(env) - self.assert_strict_equal(qs, '%E2%98%83=%E2%98%84') - - def test_limited_stream(self): - class RaisingLimitedStream(wsgi.LimitedStream): - def on_exhausted(self): - raise BadRequest('input stream exhausted') - - io = BytesIO(b'123456') - stream = RaisingLimitedStream(io, 3) - self.assert_strict_equal(stream.read(), b'123') - self.assert_raises(BadRequest, stream.read) - - io = BytesIO(b'123456') - stream = RaisingLimitedStream(io, 3) - self.assert_strict_equal(stream.tell(), 0) - self.assert_strict_equal(stream.read(1), b'1') - self.assert_strict_equal(stream.tell(), 1) - self.assert_strict_equal(stream.read(1), b'2') - self.assert_strict_equal(stream.tell(), 2) - self.assert_strict_equal(stream.read(1), b'3') - self.assert_strict_equal(stream.tell(), 3) - self.assert_raises(BadRequest, stream.read) - - io = BytesIO(b'123456\nabcdefg') - stream = wsgi.LimitedStream(io, 9) - self.assert_strict_equal(stream.readline(), b'123456\n') - self.assert_strict_equal(stream.readline(), b'ab') - - io = BytesIO(b'123456\nabcdefg') - stream = wsgi.LimitedStream(io, 9) - self.assert_strict_equal(stream.readlines(), [b'123456\n', b'ab']) - - io = BytesIO(b'123456\nabcdefg') - stream = wsgi.LimitedStream(io, 9) - self.assert_strict_equal(stream.readlines(2), [b'12']) - self.assert_strict_equal(stream.readlines(2), [b'34']) - 
self.assert_strict_equal(stream.readlines(), [b'56\n', b'ab']) - - io = BytesIO(b'123456\nabcdefg') - stream = wsgi.LimitedStream(io, 9) - self.assert_strict_equal(stream.readline(100), b'123456\n') - - io = BytesIO(b'123456\nabcdefg') - stream = wsgi.LimitedStream(io, 9) - self.assert_strict_equal(stream.readlines(100), [b'123456\n', b'ab']) - - io = BytesIO(b'123456') - stream = wsgi.LimitedStream(io, 3) - self.assert_strict_equal(stream.read(1), b'1') - self.assert_strict_equal(stream.read(1), b'2') - self.assert_strict_equal(stream.read(), b'3') - self.assert_strict_equal(stream.read(), b'') - - io = BytesIO(b'123456') - stream = wsgi.LimitedStream(io, 3) - self.assert_strict_equal(stream.read(-1), b'123') - - io = BytesIO(b'123456') - stream = wsgi.LimitedStream(io, 0) - self.assert_strict_equal(stream.read(-1), b'') - - io = StringIO(u'123456') - stream = wsgi.LimitedStream(io, 0) - self.assert_strict_equal(stream.read(-1), u'') - - io = StringIO(u'123\n456\n') - stream = wsgi.LimitedStream(io, 8) - self.assert_strict_equal(list(stream), [u'123\n', u'456\n']) - - def test_limited_stream_disconnection(self): - io = BytesIO(b'A bit of content') - - # disconnect detection on out of bytes - stream = wsgi.LimitedStream(io, 255) - with self.assert_raises(ClientDisconnected): - stream.read() - - # disconnect detection because file close - io = BytesIO(b'x' * 255) - io.close() - stream = wsgi.LimitedStream(io, 255) - with self.assert_raises(ClientDisconnected): - stream.read() - - def test_path_info_extraction(self): - x = wsgi.extract_path_info('http://example.com/app', '/app/hello') - self.assert_equal(x, u'/hello') - x = wsgi.extract_path_info('http://example.com/app', - 'https://example.com/app/hello') - self.assert_equal(x, u'/hello') - x = wsgi.extract_path_info('http://example.com/app/', - 'https://example.com/app/hello') - self.assert_equal(x, u'/hello') - x = wsgi.extract_path_info('http://example.com/app/', - 'https://example.com/app') - 
self.assert_equal(x, u'/') - x = wsgi.extract_path_info(u'http://☃.net/', u'/fööbär') - self.assert_equal(x, u'/fööbär') - x = wsgi.extract_path_info(u'http://☃.net/x', u'http://☃.net/x/fööbär') - self.assert_equal(x, u'/fööbär') - - env = create_environ(u'/fööbär', u'http://☃.net/x/') - x = wsgi.extract_path_info(env, u'http://☃.net/x/fööbär') - self.assert_equal(x, u'/fööbär') - - x = wsgi.extract_path_info('http://example.com/app/', - 'https://example.com/a/hello') - self.assert_is_none(x) - x = wsgi.extract_path_info('http://example.com/app/', - 'https://example.com/app/hello', - collapse_http_schemes=False) - self.assert_is_none(x) - - def test_get_host_fallback(self): - self.assert_equal(wsgi.get_host({ - 'SERVER_NAME': 'foobar.example.com', - 'wsgi.url_scheme': 'http', - 'SERVER_PORT': '80' - }), 'foobar.example.com') - self.assert_equal(wsgi.get_host({ - 'SERVER_NAME': 'foobar.example.com', - 'wsgi.url_scheme': 'http', - 'SERVER_PORT': '81' - }), 'foobar.example.com:81') - - def test_get_current_url_unicode(self): - env = create_environ() - env['QUERY_STRING'] = 'foo=bar&baz=blah&meh=\xcf' - rv = wsgi.get_current_url(env) - self.assert_strict_equal(rv, - u'http://localhost/?foo=bar&baz=blah&meh=\ufffd') - - def test_multi_part_line_breaks(self): - data = 'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK' - test_stream = NativeStringIO(data) - lines = list(wsgi.make_line_iter(test_stream, limit=len(data), - buffer_size=16)) - self.assert_equal(lines, ['abcdef\r\n', 'ghijkl\r\n', - 'mnopqrstuvwxyz\r\n', 'ABCDEFGHIJK']) - - data = 'abc\r\nThis line is broken by the buffer length.' 
\ - '\r\nFoo bar baz' - test_stream = NativeStringIO(data) - lines = list(wsgi.make_line_iter(test_stream, limit=len(data), - buffer_size=24)) - self.assert_equal(lines, ['abc\r\n', 'This line is broken by the ' - 'buffer length.\r\n', 'Foo bar baz']) - - def test_multi_part_line_breaks_bytes(self): - data = b'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK' - test_stream = BytesIO(data) - lines = list(wsgi.make_line_iter(test_stream, limit=len(data), - buffer_size=16)) - self.assert_equal(lines, [b'abcdef\r\n', b'ghijkl\r\n', - b'mnopqrstuvwxyz\r\n', b'ABCDEFGHIJK']) - - data = b'abc\r\nThis line is broken by the buffer length.' \ - b'\r\nFoo bar baz' - test_stream = BytesIO(data) - lines = list(wsgi.make_line_iter(test_stream, limit=len(data), - buffer_size=24)) - self.assert_equal(lines, [b'abc\r\n', b'This line is broken by the ' - b'buffer length.\r\n', b'Foo bar baz']) - - def test_multi_part_line_breaks_problematic(self): - data = 'abc\rdef\r\nghi' - for x in range(1, 10): - test_stream = NativeStringIO(data) - lines = list(wsgi.make_line_iter(test_stream, limit=len(data), - buffer_size=4)) - self.assert_equal(lines, ['abc\r', 'def\r\n', 'ghi']) - - def test_iter_functions_support_iterators(self): - data = ['abcdef\r\nghi', 'jkl\r\nmnopqrstuvwxyz\r', '\nABCDEFGHIJK'] - lines = list(wsgi.make_line_iter(data)) - self.assert_equal(lines, ['abcdef\r\n', 'ghijkl\r\n', - 'mnopqrstuvwxyz\r\n', 'ABCDEFGHIJK']) - - def test_make_chunk_iter(self): - data = [u'abcdefXghi', u'jklXmnopqrstuvwxyzX', u'ABCDEFGHIJK'] - rv = list(wsgi.make_chunk_iter(data, 'X')) - self.assert_equal(rv, [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', - u'ABCDEFGHIJK']) - - data = u'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK' - test_stream = StringIO(data) - rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data), - buffer_size=4)) - self.assert_equal(rv, [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', - u'ABCDEFGHIJK']) - - def test_make_chunk_iter_bytes(self): - data = [b'abcdefXghi', 
                               b'jklXmnopqrstuvwxyzX', b'ABCDEFGHIJK']
-        rv = list(wsgi.make_chunk_iter(data, 'X'))
-        self.assert_equal(rv, [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz',
-                               b'ABCDEFGHIJK'])
-
-        data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
-        test_stream = BytesIO(data)
-        rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
-                                       buffer_size=4))
-        self.assert_equal(rv, [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz',
-                               b'ABCDEFGHIJK'])
-
-    def test_lines_longer_buffer_size(self):
-        data = '1234567890\n1234567890\n'
-        for bufsize in range(1, 15):
-            lines = list(wsgi.make_line_iter(NativeStringIO(data), limit=len(data),
-                                             buffer_size=4))
-            self.assert_equal(lines, ['1234567890\n', '1234567890\n'])
-
-
-def suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(WSGIUtilsTestCase))
-    return suite
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/urls.py python-werkzeug-0.10.4+dfsg1/werkzeug/urls.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/urls.py	2014-06-07 10:23:06.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/urls.py	2015-03-26 14:11:47.000000000 +0000
@@ -3,11 +3,19 @@
     werkzeug.urls
     ~~~~~~~~~~~~~
 
-    This module implements various URL related functions.
+    ``werkzeug.urls`` used to provide several wrapper functions for Python 2
+    urlparse, whose main purpose was to work around the behavior of the Py2
+    stdlib and its lack of unicode support.  While this was already a somewhat
+    inconvenient situation, it got even more complicated because Python 3's
+    ``urllib.parse`` actually does handle unicode properly.  In other words,
+    this module would wrap two libraries with completely different behavior.  So
+    now this module contains a 2-and-3-compatible backport of Python 3's
+    ``urllib.parse``, which is mostly API-compatible.
 
     :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
     :license: BSD, see LICENSE for more details.
 """
+import os
 import re
 from werkzeug._compat import text_type, PY2, to_unicode, \
     to_native, implements_to_string, try_coerce_native, \
@@ -36,7 +44,8 @@
     ['scheme', 'netloc', 'path', 'query', 'fragment']))
 
 
-class _URLMixin(object):
+class BaseURL(_URLTuple):
+    '''Superclass of :py:class:`URL` and :py:class:`BytesURL`.'''
     __slots__ = ()
 
     def replace(self, **kwargs):
@@ -174,6 +183,64 @@
         """
         return url_parse(uri_to_iri(self))
 
+    def get_file_location(self, pathformat=None):
+        """Returns a tuple with the location of the file in the form
+        ``(server, location)``.  If the netloc is empty in the URL or
+        points to localhost, it's represented as ``None``.
+
+        The `pathformat` by default is autodetection but needs to be set
+        when working with URLs of a specific system.  The supported values
+        are ``'windows'`` when working with Windows or DOS paths and
+        ``'posix'`` when working with posix paths.
+
+        If the URL does not point to a local file, the server and location
+        are both represented as ``None``.
+
+        :param pathformat: The expected format of the path component.
+                           Currently ``'windows'`` and ``'posix'`` are
+                           supported.  Defaults to ``None`` which is
+                           autodetect.
+        """
+        if self.scheme != 'file':
+            return None, None
+
+        path = url_unquote(self.path)
+        host = self.netloc or None
+
+        if pathformat is None:
+            if os.name == 'nt':
+                pathformat = 'windows'
+            else:
+                pathformat = 'posix'
+
+        if pathformat == 'windows':
+            if path[:1] == '/' and path[1:2].isalpha() and path[2:3] in '|:':
+                path = path[1:2] + ':' + path[3:]
+            windows_share = path[:3] in ('\\' * 3, '/' * 3)
+            import ntpath
+            path = ntpath.normpath(path)
+            # Windows shared drives are represented as ``\\host\\directory``.
+            # That results in a URL like ``file://///host/directory``, and a
+            # path like ``///host/directory``.  We need to special-case this
+            # because the path contains the hostname.
+            if windows_share and host is None:
+                parts = path.lstrip('\\').split('\\', 1)
+                if len(parts) == 2:
+                    host, path = parts
+                else:
+                    host = parts[0]
+                    path = ''
+        elif pathformat == 'posix':
+            import posixpath
+            path = posixpath.normpath(path)
+        else:
+            raise TypeError('Invalid path format %s' % repr(pathformat))
+
+        if host in ('127.0.0.1', '::1', 'localhost'):
+            host = None
+
+        return host, path
+
     def _split_netloc(self):
         if self._at in self.netloc:
             return self.netloc.split(self._at, 1)
@@ -209,7 +276,7 @@
 
 
 @implements_to_string
-class URL(_URLTuple, _URLMixin):
+class URL(BaseURL):
     """Represents a parsed URL.  This behaves like a regular tuple but
     also has some extra attributes that give further insight into the URL.
 
@@ -237,7 +304,7 @@
         ]))
         if auth:
             rv = '%s@%s' % (auth, rv)
-        return rv.encode('ascii')
+        return to_native(rv)
 
     def encode(self, charset='utf-8', errors='replace'):
         """Encodes the URL to a tuple made out of bytes.  The charset is
@@ -252,7 +319,7 @@
     )
 
 
-class BytesURL(_URLTuple, _URLMixin):
+class BytesURL(BaseURL):
     """Represents a parsed URL in bytes."""
     __slots__ = ()
     _at = b'@'
@@ -491,10 +558,22 @@
     :param charset: The target charset for the URL if the url was
                     given as unicode string.
     """
-    scheme, netloc, path, qs, anchor = url_parse(to_unicode(s, charset, 'replace'))
-    path = url_quote(path, charset, safe='/%+$!*\'(),')
-    qs = url_quote_plus(qs, charset, safe=':&%=+$!*\'(),')
-    return to_native(url_unparse((scheme, netloc, path, qs, anchor)))
+    # First step is to switch to unicode processing and to convert
+    # backslashes (which are invalid in URLs anyways) to slashes.  This is
+    # consistent with what Chrome does.
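The `get_file_location` method introduced above maps a `file://` URL to a `(server, path)` pair. As a rough, stdlib-only sketch of its posix branch (the standalone function below only mirrors the hunk's logic; it is not werkzeug's actual API, which lives on `BaseURL`):

```python
import posixpath
from urllib.parse import urlsplit, unquote

def get_file_location(url):
    """Approximate the posix branch of BaseURL.get_file_location:
    return (server, path) for file:// URLs, (None, None) otherwise."""
    parts = urlsplit(url)
    if parts.scheme != 'file':
        return None, None
    path = posixpath.normpath(unquote(parts.path))
    host = parts.netloc or None
    # An empty netloc, localhost, or a loopback address all mean
    # "no server": the file lives on the local machine.
    if host in ('127.0.0.1', '::1', 'localhost'):
        host = None
    return host, path
```

So `get_file_location('file:///var/log/syslog')` yields `(None, '/var/log/syslog')`; the real method additionally handles Windows drive letters and `\\host\share` UNC paths, as the hunk shows.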
+    s = to_unicode(s, charset, 'replace').replace('\\', '/')
+
+    # For the specific case that we look like a malformed windows URL
+    # we want to fix this up manually:
+    if s.startswith('file://') and s[7:8].isalpha() and s[8:10] in (':/', '|/'):
+        s = 'file:///' + s[7:]
+
+    url = url_parse(s)
+    path = url_quote(url.path, charset, safe='/%+$!*\'(),')
+    qs = url_quote_plus(url.query, charset, safe=':&%=+$!*\'(),')
+    anchor = url_quote_plus(url.fragment, charset, safe=':&%=+$!*\'(),')
+    return to_native(url_unparse((url.scheme, url.encode_netloc(),
+                                  path, qs, anchor)))
 
 
 def uri_to_iri(uri, charset='utf-8', errors='replace'):
@@ -585,7 +664,7 @@
 
     iri = url_parse(to_unicode(iri, charset, errors))
 
-    netloc = iri.encode_netloc().decode('ascii')
+    netloc = iri.encode_netloc()
     path = url_quote(iri.path, charset, errors, '/:~+%')
     query = url_quote(iri.query, charset, errors, '%&[]:;$*()+,!?*/=')
     fragment = url_quote(iri.fragment, charset, errors, '=%&[]:;$()+,!?*/')
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/useragents.py python-werkzeug-0.10.4+dfsg1/werkzeug/useragents.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/useragents.py	2014-06-07 09:21:49.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/useragents.py	2015-03-26 15:33:58.000000000 +0000
@@ -18,6 +18,7 @@
     """A simple user agent parser.  Used by the `UserAgent`."""
 
     platforms = (
+        ('cros', 'chromeos'),
         ('iphone|ios', 'iphone'),
         ('ipad', 'ipad'),
         (r'darwin|mac|os\s*x', 'macos'),
@@ -32,7 +33,8 @@
         ('sco|unix_sv', 'sco'),
         ('bsd', 'bsd'),
         ('amiga', 'amiga'),
-        ('blackberry|playbook', 'blackberry')
+        ('blackberry|playbook', 'blackberry'),
+        ('symbian', 'symbian')
     )
     browsers = (
         ('googlebot', 'google'),
@@ -50,7 +52,7 @@
         ('konqueror', 'konqueror'),
         ('k-meleon', 'kmeleon'),
         ('netscape', 'netscape'),
-        (r'msie|microsoft\s+internet\s+explorer', 'msie'),
+        (r'msie|microsoft\s+internet\s+explorer|trident/.+? 
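The reworked `url_fix` above first normalizes backslashes, then repairs `file://C:/...`-style URLs that lost a slash, and only then percent-encodes each component separately. A hedged sketch of the same flow on top of `urllib.parse` (the `safe` character sets here are approximations of werkzeug's, not copied from it, and `encode_netloc`'s idna handling is omitted):

```python
from urllib.parse import urlsplit, urlunsplit, quote, quote_plus

def url_fix(s):
    """Sketch of the new url_fix flow: normalize, repair, then quote."""
    # Backslashes are invalid in URLs anyway; treat them as slashes,
    # which is consistent with what Chrome does.
    s = s.replace('\\', '/')
    # A drive letter directly after file:// means the third slash was lost.
    if s.startswith('file://') and s[7:8].isalpha() and s[8:10] in (':/', '|/'):
        s = 'file:///' + s[7:]
    url = urlsplit(s)
    # Quote each component on its own so reserved separators survive.
    path = quote(url.path, safe="/:%+$!*'(),")
    qs = quote_plus(url.query, safe=":&%=+$!*'(),")
    anchor = quote_plus(url.fragment, safe=":&%=+$!*'(),")
    return urlunsplit((url.scheme, url.netloc, path, qs, anchor))
```

Quoting per component is what lets the function fix spaces and non-ASCII characters without mangling the `?` and `#` separators themselves.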
rv:', 'msie'),
         ('lynx', 'lynx'),
         ('links', 'links'),
         ('seamonkey|mozilla', 'seamonkey')
@@ -107,6 +109,7 @@
     -   `amiga`
     -   `android`
     -   `bsd`
+    -   `chromeos`
     -   `hpux`
     -   `iphone`
     -   `ipad`
diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/utils.py python-werkzeug-0.10.4+dfsg1/werkzeug/utils.py
--- python-werkzeug-0.9.6+dfsg/werkzeug/utils.py	2014-06-07 10:25:02.000000000 +0000
+++ python-werkzeug-0.10.4+dfsg1/werkzeug/utils.py	2015-03-26 15:33:58.000000000 +0000
@@ -207,7 +207,7 @@
 
 
 def get_content_type(mimetype, charset):
-    """Return the full content type string with charset for a mimetype.
+    """Returns the full content type string with charset for a mimetype.
 
     If the mimetype represents text the charset will be appended as charset
     parameter, otherwise the mimetype is returned unchanged.
@@ -250,7 +250,7 @@
     to :func:`os.path.join`.  The filename returned is an ASCII only string
     for maximum portability.
 
-    On windows system the function also makes sure that the file is not
+    On windows systems the function also makes sure that the file is not
     named after one of the special device files.
 
     >>> secure_filename("My cool movie.mov")
@@ -335,8 +335,8 @@
     return _entity_re.sub(handle_match, s)
 
 
-def redirect(location, code=302):
-    """Return a response object (a WSGI application) that, if called,
+def redirect(location, code=302, Response=None):
+    """Returns a response object (a WSGI application) that, if called,
     redirects the client to the target location.  Supported codes are 301,
     302, 303, 305, and 307.  300 is not supported because it's not a real
     redirect and 304 because it's the answer for a request with a request
@@ -346,10 +346,18 @@
     The location can now be a unicode string that is encoded using
     the :func:`iri_to_uri` function.
 
+    .. versionadded:: 0.10
+        The class used for the Response object can now be passed in.
+
     :param location: the location the response should redirect to.
     :param code: the redirect status code.  defaults to 302.
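The user agent parser works by scanning ordered `(regex, name)` tables and returning the first match; the useragents.py hunks above add ChromeOS to the platform table and teach the IE rule to recognize the Trident token that IE 11 ships instead of `MSIE`. A self-contained sketch of that first-match lookup, with trimmed, illustrative tables (the real tables in `werkzeug.useragents` are longer and order-sensitive):

```python
import re

# Trimmed subsets of the lookup tables shown in the diff; first match wins.
PLATFORMS = (
    ('cros', 'chromeos'),            # new in 0.10: ChromeOS detection
    (r'darwin|mac|os\s*x', 'macos'),
    ('win', 'windows'),
    ('linux', 'linux'),
)
BROWSERS = (
    # IE 11 dropped the 'MSIE' token, so Trident plus 'rv:' is matched too.
    (r'msie|microsoft\s+internet\s+explorer|trident/.+? rv:', 'msie'),
    ('firefox', 'firefox'),
    ('safari', 'safari'),
)

def lookup(table, user_agent):
    """Return the name of the first pattern that matches the UA string."""
    ua = user_agent.lower()
    for pattern, name in table:
        if re.search(pattern, ua):
            return name
    return None
```

Table order matters: the specific `cros` entry must come before broader patterns such as `linux`, which most ChromeOS UA strings also contain.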
+    :param class Response: a Response class to use when instantiating a
+        response.  The default is :class:`werkzeug.wrappers.Response` if
+        unspecified.
     """
-    from werkzeug.wrappers import Response
+    if Response is None:
+        from werkzeug.wrappers import Response
+
     display_location = escape(location)
     if isinstance(location, text_type):
         # Safe conversion is necessary here as we might redirect
@@ -368,7 +376,7 @@
 
 
 def append_slash_redirect(environ, code=301):
-    """Redirect to the same URL but with a slash appended.  The behavior
+    """Redirects to the same URL but with a slash appended.  The behavior
     of this function is undefined if the path ends with a slash already.
 
     :param environ: the WSGI environment for the request that triggers
@@ -395,29 +403,32 @@
     `None` is returned instead.
 
     :return: imported object
     """
-    #XXX: py3 review needed
-    assert isinstance(import_name, string_types)
     # force the import name to automatically convert to strings
-    import_name = str(import_name)
+    # __import__ is not able to handle unicode strings in the fromlist
+    # if the module is a package
+    import_name = str(import_name).replace(':', '.')
     try:
-        if ':' in import_name:
-            module, obj = import_name.split(':', 1)
-        elif '.' in import_name:
-            module, obj = import_name.rsplit('.', 1)
+        try:
+            __import__(import_name)
+        except ImportError:
+            if '.' not in import_name:
+                raise
         else:
-            return __import__(import_name)
-        # __import__ is not able to handle unicode strings in the fromlist
-        # if the module is a package
-        if PY2 and isinstance(obj, unicode):
-            obj = obj.encode('utf-8')
+            return sys.modules[import_name]
+
+        module_name, obj_name = import_name.rsplit('.', 1)
         try:
-            return getattr(__import__(module, None, None, [obj]), obj)
-        except (ImportError, AttributeError):
+            module = __import__(module_name, None, None, [obj_name])
+        except ImportError:
             # support importing modules not yet set up by the parent module
             # (or package for that matter)
-            modname = module + '.' 
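The change to `redirect` above makes the response class injectable while keeping the old lazy default. The pattern can be sketched without werkzeug at all; `DefaultResponse` and `make_redirect` below are hypothetical stand-ins for `werkzeug.wrappers.Response` and `redirect`, not the real API:

```python
class DefaultResponse:
    """Minimal stand-in for werkzeug.wrappers.Response."""
    def __init__(self, body, status):
        self.body = body
        self.status = status
        self.headers = {}

def make_redirect(location, code=302, response_class=None):
    # Mirrors the hunk: only fall back to the default class when the
    # caller did not pass one, so subclasses can be injected.
    if response_class is None:
        response_class = DefaultResponse
    resp = response_class('<a href="%s">redirecting...</a>' % location, code)
    resp.headers['Location'] = location
    return resp
```

Passing a subclass lets a framework get its own response type back from redirects without monkey-patching the helper, which is exactly what the new `Response=None` parameter enables.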
+ obj - __import__(modname) - return sys.modules[modname] + module = import_string(module_name) + + try: + return getattr(module, obj_name) + except AttributeError as e: + raise ImportError(e) + except ImportError as e: if not silent: reraise( @@ -427,7 +438,7 @@ def find_modules(import_path, include_packages=False, recursive=False): - """Find all the modules below a package. This can be useful to + """Finds all the modules below a package. This can be useful to automatically import all views / controllers so that their metaclasses / function decorators have a chance to register themselves on the application. @@ -459,7 +470,7 @@ def validate_arguments(func, args, kwargs, drop_extra=True): - """Check if the function accepts the arguments and keyword arguments. + """Checks if the function accepts the arguments and keyword arguments. Returns a new ``(args, kwargs)`` tuple that can safely be passed to the function without causing a `TypeError` because the function signature is incompatible. If `drop_extra` is set to `True` (which is the default) diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/wrappers.py python-werkzeug-0.10.4+dfsg1/werkzeug/wrappers.py --- python-werkzeug-0.9.6+dfsg/werkzeug/wrappers.py 2014-06-07 10:33:37.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/wrappers.py 2015-03-26 15:33:58.000000000 +0000 @@ -190,13 +190,17 @@ #: Optionally a list of hosts that is trusted by this request. By default #: all hosts are trusted which means that whatever the client sends the - #: host is will be accepted. This is the recommended setup as a webserver - #: should manually be set up to not route invalid hosts to the application. + #: host is will be accepted. + #: + #: This is the recommended setup as a webserver should manually be set up + #: to only route correct hosts to the application, and remove the + #: `X-Forwarded-Host` header if it is not being used (see + #: :func:`werkzeug.wsgi.get_host`). #: #: .. 
versionadded:: 0.9 trusted_hosts = None - #: Indicates weather the data descriptor should be allowed to read and + #: Indicates whether the data descriptor should be allowed to read and #: buffer up the input stream. By default it's enabled. #: #: .. versionadded:: 0.9 @@ -214,7 +218,7 @@ # in a debug session we don't want the repr to blow up. args = [] try: - args.append("'%s'" % self.url) + args.append("'%s'" % to_native(self.url, self.url_charset)) args.append('[%s]' % self.method) except Exception: args.append('(invalid WSGI environ)') @@ -550,13 +554,17 @@ @cached_property def url(self): - """The reconstructed current URL as IRI.""" + """The reconstructed current URL as IRI. + See also: :attr:`trusted_hosts`. + """ return get_current_url(self.environ, trusted_hosts=self.trusted_hosts) @cached_property def base_url(self): - """Like :attr:`url` but without the querystring""" + """Like :attr:`url` but without the querystring + See also: :attr:`trusted_hosts`. + """ return get_current_url(self.environ, strip_querystring=True, trusted_hosts=self.trusted_hosts) @@ -564,25 +572,31 @@ def url_root(self): """The full URL root (with hostname), this is the application root as IRI. + See also: :attr:`trusted_hosts`. """ return get_current_url(self.environ, True, trusted_hosts=self.trusted_hosts) @cached_property def host_url(self): - """Just the host with scheme as IRI.""" + """Just the host with scheme as IRI. + See also: :attr:`trusted_hosts`. + """ return get_current_url(self.environ, host_only=True, trusted_hosts=self.trusted_hosts) @cached_property def host(self): - """Just the host including the port if available.""" + """Just the host including the port if available. + See also: :attr:`trusted_hosts`. 
+ """ return get_host(self.environ, trusted_hosts=self.trusted_hosts) query_string = environ_property('QUERY_STRING', '', read_only=True, load_func=wsgi_get_bytes, doc= '''The URL parameters as raw bytestring.''') - method = environ_property('REQUEST_METHOD', 'GET', read_only=True, doc= + method = environ_property('REQUEST_METHOD', 'GET', read_only=True, + load_func=lambda x: x.upper(), doc= '''The transmission method. (For example ``'GET'`` or ``'POST'``).''') @cached_property @@ -1405,7 +1419,7 @@ # wsgiref. if 'date' not in self.headers: self.headers['Date'] = http_date() - if 'content-length' not in self.headers: + if self.automatically_set_content_length and 'content-length' not in self.headers: length = self.calculate_content_length() if length is not None: self.headers['Content-Length'] = length @@ -1494,6 +1508,7 @@ raise ValueError('I/O operation on closed file') self.response._ensure_sequence(mutable=True) self.response.response.append(value) + self.response.headers.pop('Content-Length', None) def writelines(self, seq): for item in seq: diff -Nru python-werkzeug-0.9.6+dfsg/werkzeug/wsgi.py python-werkzeug-0.10.4+dfsg1/werkzeug/wsgi.py --- python-werkzeug-0.9.6+dfsg/werkzeug/wsgi.py 2014-06-07 10:32:57.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/werkzeug/wsgi.py 2015-03-26 14:11:47.000000000 +0000 @@ -129,17 +129,20 @@ def get_host(environ, trusted_hosts=None): - """Return the real host for the given WSGI environment. This takes care - of the `X-Forwarded-Host` header. Optionally it verifies that the host - is in a list of trusted hosts. If the host is not in there it will raise - a :exc:`~werkzeug.exceptions.SecurityError`. + """Return the real host for the given WSGI environment. This first checks + the `X-Forwarded-Host` header, then the normal `Host` header, and finally + the `SERVER_NAME` environment variable (using the first one it finds). + + Optionally it verifies that the host is in a list of trusted hosts. 
+ If the host is not in there it will raise a + :exc:`~werkzeug.exceptions.SecurityError`. :param environ: the WSGI environment to get the host of. :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted` for more information. """ if 'HTTP_X_FORWARDED_HOST' in environ: - rv = environ['HTTP_X_FORWARDED_HOST'].split(',')[0].strip() + rv = environ['HTTP_X_FORWARDED_HOST'].split(',', 1)[0].strip() elif 'HTTP_HOST' in environ: rv = environ['HTTP_HOST'] else: @@ -475,7 +478,7 @@ :param disallow: a list of :func:`~fnmatch.fnmatch` rules. :param fallback_mimetype: the fallback mimetype for unknown files. :param cache: enable or disable caching headers. - :Param cache_timeout: the cache timeout in seconds for the headers. + :param cache_timeout: the cache timeout in seconds for the headers. """ def __init__(self, app, exports, disallow=None, cache=True, @@ -570,8 +573,8 @@ for sep in os.sep, os.altsep: if sep and sep != '/': cleaned_path = cleaned_path.replace(sep, '/') - path = '/'.join([''] + [x for x in cleaned_path.split('/') - if x and x != '..']) + path = '/' + '/'.join(x for x in cleaned_path.split('/') + if x and x != '..') file_loader = None for search_path, loader in iteritems(self.exports): if search_path == path: @@ -637,9 +640,8 @@ if script in self.mounts: app = self.mounts[script] break - items = script.split('/') - script = '/'.join(items[:-1]) - path_info = '/%s%s' % (items[-1], path_info) + script, last_item = script.rsplit('/', 1) + path_info = '/%s%s' % (last_item, path_info) else: app = self.mounts.get(script, self.app) original_script_name = environ.get('SCRIPT_NAME', '') diff -Nru python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/dependency_links.txt python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/dependency_links.txt --- python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/dependency_links.txt 2014-06-07 10:34:33.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/dependency_links.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 
@@ - diff -Nru python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/not-zip-safe python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/not-zip-safe --- python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/not-zip-safe 2013-06-14 08:51:16.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/not-zip-safe 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ - diff -Nru python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/PKG-INFO python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/PKG-INFO --- python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/PKG-INFO 2014-06-07 10:34:33.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/PKG-INFO 1970-01-01 00:00:00.000000000 +0000 @@ -1,72 +0,0 @@ -Metadata-Version: 1.0 -Name: Werkzeug -Version: 0.9.6 -Summary: The Swiss Army knife of Python web development -Home-page: http://werkzeug.pocoo.org/ -Author: Armin Ronacher -Author-email: armin.ronacher@active-4.com -License: BSD -Description: - Werkzeug - ======== - - Werkzeug started as simple collection of various utilities for WSGI - applications and has become one of the most advanced WSGI utility - modules. It includes a powerful debugger, full featured request and - response objects, HTTP utilities to handle entity tags, cache control - headers, HTTP dates, cookie handling, file uploads, a powerful URL - routing system and a bunch of community contributed addon modules. - - Werkzeug is unicode aware and doesn't enforce a specific template - engine, database adapter or anything else. It doesn't even enforce - a specific way of handling requests and leaves all that up to the - developer. It's most useful for end user applications which should work - on as many server environments as possible (such as blogs, wikis, - bulletin boards, etc.). - - Details and example applications are available on the - `Werkzeug website `_. 
- - - Features - -------- - - - unicode awareness - - - request and response objects - - - various utility functions for dealing with HTTP headers such as - `Accept` and `Cache-Control` headers. - - - thread local objects with proper cleanup at request end - - - an interactive debugger - - - A simple WSGI server with support for threading and forking - with an automatic reloader. - - - a flexible URL routing system with REST support. - - - fully WSGI compatible - - - Development Version - ------------------- - - The Werkzeug development version can be installed by cloning the git - repository from `github`_:: - - git clone git@github.com:mitsuhiko/werkzeug.git - - .. _github: http://github.com/mitsuhiko/werkzeug - -Platform: any -Classifier: Development Status :: 5 - Production/Stable -Classifier: Environment :: Web Environment -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: BSD License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 3 -Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content -Classifier: Topic :: Software Development :: Libraries :: Python Modules diff -Nru python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/SOURCES.txt python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/SOURCES.txt --- python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/SOURCES.txt 2014-06-07 10:34:34.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/SOURCES.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1,290 +0,0 @@ -AUTHORS -CHANGES -LICENSE -MANIFEST.in -Makefile -README.rst -setup.cfg -setup.py -Werkzeug.egg-info/PKG-INFO -Werkzeug.egg-info/SOURCES.txt -Werkzeug.egg-info/dependency_links.txt -Werkzeug.egg-info/not-zip-safe -Werkzeug.egg-info/top_level.txt -artwork/logo.png -artwork/logo.svg -docs/Makefile -docs/changes.rst -docs/conf.py -docs/contents.rst.inc -docs/datastructures.rst -docs/debug.rst -docs/exceptions.rst -docs/http.rst -docs/index.rst 
-docs/installation.rst -docs/latexindex.rst -docs/levels.rst -docs/local.rst -docs/logo.pdf -docs/make.bat -docs/makearchive.py -docs/middlewares.rst -docs/python3.rst -docs/quickstart.rst -docs/request_data.rst -docs/routing.rst -docs/serving.rst -docs/terms.rst -docs/test.rst -docs/transition.rst -docs/tutorial.rst -docs/unicode.rst -docs/utils.rst -docs/werkzeugext.py -docs/werkzeugstyle.sty -docs/wrappers.rst -docs/wsgi.rst -docs/_static/background.png -docs/_static/codebackground.png -docs/_static/contents.png -docs/_static/debug-screenshot.png -docs/_static/favicon.ico -docs/_static/header.png -docs/_static/navigation.png -docs/_static/navigation_active.png -docs/_static/shortly.png -docs/_static/shorty-screenshot.png -docs/_static/style.css -docs/_static/werkzeug.js -docs/_static/werkzeug.png -docs/_templates/sidebarintro.html -docs/_templates/sidebarlogo.html -docs/_themes/LICENSE -docs/_themes/README -docs/_themes/werkzeug_theme_support.py -docs/_themes/werkzeug/layout.html -docs/_themes/werkzeug/relations.html -docs/_themes/werkzeug/theme.conf -docs/_themes/werkzeug/static/werkzeug.css_t -docs/contrib/atom.rst -docs/contrib/cache.rst -docs/contrib/fixers.rst -docs/contrib/index.rst -docs/contrib/iterio.rst -docs/contrib/lint.rst -docs/contrib/profiler.rst -docs/contrib/securecookie.rst -docs/contrib/sessions.rst -docs/contrib/wrappers.rst -docs/deployment/cgi.rst -docs/deployment/fastcgi.rst -docs/deployment/index.rst -docs/deployment/mod_wsgi.rst -docs/deployment/proxying.rst -examples/README -examples/cookieauth.py -examples/httpbasicauth.py -examples/manage-coolmagic.py -examples/manage-couchy.py -examples/manage-cupoftee.py -examples/manage-i18nurls.py -examples/manage-plnt.py -examples/manage-shorty.py -examples/manage-simplewiki.py -examples/manage-webpylike.py -examples/upload.py -examples/contrib/README -examples/contrib/securecookie.py -examples/contrib/sessions.py -examples/coolmagic/__init__.py -examples/coolmagic/application.py 
-examples/coolmagic/helpers.py -examples/coolmagic/utils.py -examples/coolmagic/public/style.css -examples/coolmagic/templates/layout.html -examples/coolmagic/templates/static/about.html -examples/coolmagic/templates/static/index.html -examples/coolmagic/templates/static/not_found.html -examples/coolmagic/views/__init__.py -examples/coolmagic/views/static.py -examples/couchy/README -examples/couchy/__init__.py -examples/couchy/application.py -examples/couchy/models.py -examples/couchy/utils.py -examples/couchy/views.py -examples/couchy/static/style.css -examples/couchy/templates/display.html -examples/couchy/templates/layout.html -examples/couchy/templates/list.html -examples/couchy/templates/new.html -examples/couchy/templates/not_found.html -examples/cupoftee/__init__.py -examples/cupoftee/application.py -examples/cupoftee/db.py -examples/cupoftee/network.py -examples/cupoftee/pages.py -examples/cupoftee/utils.py -examples/cupoftee/shared/content.png -examples/cupoftee/shared/down.png -examples/cupoftee/shared/favicon.ico -examples/cupoftee/shared/header.png -examples/cupoftee/shared/logo.png -examples/cupoftee/shared/style.css -examples/cupoftee/shared/up.png -examples/cupoftee/templates/layout.html -examples/cupoftee/templates/missingpage.html -examples/cupoftee/templates/search.html -examples/cupoftee/templates/server.html -examples/cupoftee/templates/serverlist.html -examples/i18nurls/__init__.py -examples/i18nurls/application.py -examples/i18nurls/urls.py -examples/i18nurls/views.py -examples/i18nurls/templates/about.html -examples/i18nurls/templates/blog.html -examples/i18nurls/templates/index.html -examples/i18nurls/templates/layout.html -examples/partial/README -examples/partial/complex_routing.py -examples/plnt/__init__.py -examples/plnt/database.py -examples/plnt/sync.py -examples/plnt/utils.py -examples/plnt/views.py -examples/plnt/webapp.py -examples/plnt/shared/style.css -examples/plnt/templates/about.html -examples/plnt/templates/index.html 
-examples/plnt/templates/layout.html -examples/shortly/shortly.py -examples/shortly/static/style.css -examples/shortly/templates/404.html -examples/shortly/templates/layout.html -examples/shortly/templates/new_url.html -examples/shortly/templates/short_link_details.html -examples/shorty/__init__.py -examples/shorty/application.py -examples/shorty/models.py -examples/shorty/utils.py -examples/shorty/views.py -examples/shorty/static/style.css -examples/shorty/templates/display.html -examples/shorty/templates/layout.html -examples/shorty/templates/list.html -examples/shorty/templates/new.html -examples/shorty/templates/not_found.html -examples/simplewiki/__init__.py -examples/simplewiki/actions.py -examples/simplewiki/application.py -examples/simplewiki/database.py -examples/simplewiki/specialpages.py -examples/simplewiki/utils.py -examples/simplewiki/shared/style.css -examples/simplewiki/templates/action_diff.html -examples/simplewiki/templates/action_edit.html -examples/simplewiki/templates/action_log.html -examples/simplewiki/templates/action_revert.html -examples/simplewiki/templates/action_show.html -examples/simplewiki/templates/layout.html -examples/simplewiki/templates/macros.xml -examples/simplewiki/templates/missing_action.html -examples/simplewiki/templates/page_index.html -examples/simplewiki/templates/page_missing.html -examples/simplewiki/templates/recent_changes.html -examples/webpylike/example.py -examples/webpylike/webpylike.py -werkzeug/__init__.py -werkzeug/_compat.py -werkzeug/_internal.py -werkzeug/datastructures.py -werkzeug/exceptions.py -werkzeug/formparser.py -werkzeug/http.py -werkzeug/local.py -werkzeug/posixemulation.py -werkzeug/routing.py -werkzeug/script.py -werkzeug/security.py -werkzeug/serving.py -werkzeug/test.py -werkzeug/testapp.py -werkzeug/urls.py -werkzeug/useragents.py -werkzeug/utils.py -werkzeug/wrappers.py -werkzeug/wsgi.py -werkzeug/contrib/__init__.py -werkzeug/contrib/atom.py -werkzeug/contrib/cache.py 
-werkzeug/contrib/fixers.py -werkzeug/contrib/iterio.py -werkzeug/contrib/jsrouting.py -werkzeug/contrib/limiter.py -werkzeug/contrib/lint.py -werkzeug/contrib/profiler.py -werkzeug/contrib/securecookie.py -werkzeug/contrib/sessions.py -werkzeug/contrib/testtools.py -werkzeug/contrib/wrappers.py -werkzeug/debug/__init__.py -werkzeug/debug/console.py -werkzeug/debug/repr.py -werkzeug/debug/tbtools.py -werkzeug/debug/shared/FONT_LICENSE -werkzeug/debug/shared/console.png -werkzeug/debug/shared/debugger.js -werkzeug/debug/shared/jquery.js -werkzeug/debug/shared/less.png -werkzeug/debug/shared/more.png -werkzeug/debug/shared/source.png -werkzeug/debug/shared/style.css -werkzeug/debug/shared/ubuntu.ttf -werkzeug/testsuite/__init__.py -werkzeug/testsuite/compat.py -werkzeug/testsuite/datastructures.py -werkzeug/testsuite/debug.py -werkzeug/testsuite/exceptions.py -werkzeug/testsuite/formparser.py -werkzeug/testsuite/http.py -werkzeug/testsuite/internal.py -werkzeug/testsuite/local.py -werkzeug/testsuite/routing.py -werkzeug/testsuite/security.py -werkzeug/testsuite/serving.py -werkzeug/testsuite/test.py -werkzeug/testsuite/urls.py -werkzeug/testsuite/utils.py -werkzeug/testsuite/wrappers.py -werkzeug/testsuite/wsgi.py -werkzeug/testsuite/contrib/__init__.py -werkzeug/testsuite/contrib/cache.py -werkzeug/testsuite/contrib/fixers.py -werkzeug/testsuite/contrib/iterio.py -werkzeug/testsuite/contrib/securecookie.py -werkzeug/testsuite/contrib/sessions.py -werkzeug/testsuite/contrib/wrappers.py -werkzeug/testsuite/multipart/collect.py -werkzeug/testsuite/multipart/ie7_full_path_request.txt -werkzeug/testsuite/multipart/firefox3-2png1txt/file1.png -werkzeug/testsuite/multipart/firefox3-2png1txt/file2.png -werkzeug/testsuite/multipart/firefox3-2png1txt/request.txt -werkzeug/testsuite/multipart/firefox3-2png1txt/text.txt -werkzeug/testsuite/multipart/firefox3-2pnglongtext/file1.png -werkzeug/testsuite/multipart/firefox3-2pnglongtext/file2.png 
-werkzeug/testsuite/multipart/firefox3-2pnglongtext/request.txt -werkzeug/testsuite/multipart/firefox3-2pnglongtext/text.txt -werkzeug/testsuite/multipart/ie6-2png1txt/file1.png -werkzeug/testsuite/multipart/ie6-2png1txt/file2.png -werkzeug/testsuite/multipart/ie6-2png1txt/request.txt -werkzeug/testsuite/multipart/ie6-2png1txt/text.txt -werkzeug/testsuite/multipart/opera8-2png1txt/file1.png -werkzeug/testsuite/multipart/opera8-2png1txt/file2.png -werkzeug/testsuite/multipart/opera8-2png1txt/request.txt -werkzeug/testsuite/multipart/opera8-2png1txt/text.txt -werkzeug/testsuite/multipart/webkit3-2png1txt/file1.png -werkzeug/testsuite/multipart/webkit3-2png1txt/file2.png -werkzeug/testsuite/multipart/webkit3-2png1txt/request.txt -werkzeug/testsuite/multipart/webkit3-2png1txt/text.txt -werkzeug/testsuite/res/test.txt \ No newline at end of file diff -Nru python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/top_level.txt python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/top_level.txt --- python-werkzeug-0.9.6+dfsg/Werkzeug.egg-info/top_level.txt 2014-06-07 10:34:33.000000000 +0000 +++ python-werkzeug-0.10.4+dfsg1/Werkzeug.egg-info/top_level.txt 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -werkzeug
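The most behavior-relevant hunk above is the `werkzeug/wsgi.py` change to `get_host`: the new docstring spells out the lookup order (`X-Forwarded-Host` first, then `Host`, then `SERVER_NAME`), and `split(',')` becomes `split(',', 1)` so only the first comma-separated forwarded host is ever considered. The following standalone sketch reproduces that logic for illustration; the port-append fallback for `SERVER_NAME` mirrors Werkzeug's behavior for non-default ports but is an approximation, and trusted-host validation (`host_is_trusted` / `SecurityError`) is deliberately omitted:

```python
def get_host_sketch(environ):
    """Simplified sketch of werkzeug.wsgi.get_host after this change.

    Lookup order: X-Forwarded-Host (first comma-separated entry only),
    then Host, then SERVER_NAME.  Trusted-host checking is omitted.
    """
    if 'HTTP_X_FORWARDED_HOST' in environ:
        # split(',', 1) stops after the first comma -- only the first
        # forwarded host is used, so splitting the rest is wasted work.
        return environ['HTTP_X_FORWARDED_HOST'].split(',', 1)[0].strip()
    if 'HTTP_HOST' in environ:
        return environ['HTTP_HOST']
    # Fall back to SERVER_NAME, appending SERVER_PORT unless it is the
    # default port for the scheme (80 for http, 443 for https).
    host = environ['SERVER_NAME']
    if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not in \
            (('https', '443'), ('http', '80')):
        host += ':' + environ['SERVER_PORT']
    return host
```

Note how a header value such as `a.example, b.example` resolves to just `a.example`, which is why the patch also recommends (in the `trusted_hosts` docstring hunk) stripping `X-Forwarded-Host` at the web server when it is not in use.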